World War History Essay Sample
World War I led to the rise of authoritarian governments across Europe and Asia. In Italy, Benito Mussolini convinced his country that it needed a strong leader and government, and in 1919 he founded Italy's Fascist Party. The communist Vladimir Lenin gathered Russia's weaker territories to form the Soviet Union, and Nazi Party leader Adolf Hitler became dictator of Germany. In Asia, Japanese military leaders took control of a political system that had been weakened by economic stress. The new governments set out to expand their empires, and Germany, Italy, and Japan joined together to create the Axis Powers. Americans wanted isolationism, and Congress passed laws to keep the United States out of war. President Roosevelt favored internationalism, but the public forced him to abandon the idea of taking action against aggressor nations. 11.2 Hitler set out to unite all the German-speaking parts of Europe. European leaders hoped to avoid war by acceding to Hitler's demands, but when those demands turned to lands in Poland, they knew that appeasement had failed. A Nazi-Soviet pact strengthened Hitler's resolve, and on September 1, 1939, Germany invaded Poland. On September 3, Britain and France declared war on Germany, and World War II was underway. Unable to withstand the German blitzkrieg, Poland fell to Hitler. As German forces swept through Europe, nation after nation fell under German control.
When German troops bypassed the Maginot Line, an evacuation at Dunkirk saved Allied troops. The Allies could not save France from the German onslaught, and France soon surrendered. Winston Churchill was unwilling to give up the fight, even when German bombers blasted London. 11.3 The Nazi persecution of Jews turned into the Holocaust. While the Nazis persecuted anyone who opposed them, their hatred of Jews led them to enforce hideous anti-Jewish policies such as the Nuremberg Laws. When the Nazis first took power, they stripped German Jews of their rights. Nazi persecution of Jews escalated when the murder of a German diplomat prompted Hitler to order a violent anti-Jewish riot throughout Germany and Austria. The Nazi secret police arrested wealthy Jews and demanded they leave the country and give up their possessions. Many Jews fled, but immigration restrictions in other countries kept millions of Jews trapped in Nazi-dominated Europe. Pressed with the question of what to do with their Jewish population, the Nazis began to carry out their "final solution": Jews were rounded up and taken to concentration camps for slave labor and to extermination camps where they were killed in gas chambers. 11.4 The United States tried to maintain neutrality after World War II began.
Although President Roosevelt declared the U.S. neutral, he took actions to support the Allied fight against Germany. He worked around the neutrality laws and arranged a destroyers-for-bases deal with the British. Roosevelt expanded the nation's role in the war by introducing a law that allowed the United States to provide military supplies to Britain. His idea of a hemispheric defense zone authorized the U.S. Navy to protect British cargo ships and shoot Axis vessels. The British Prime Minister and President Roosevelt coordinated Allied strategy and pledged their commitment to democracy in the Atlantic Charter. As Japan moved against European colonies in Southeast Asia, Roosevelt hoped to avert war by applying economic pressure on Japan and sending military supplies to China. These tactics failed, however, and the Japanese bombed Pearl Harbor, Hawaii, on December 7, 1941. The next day, Congress declared war on Japan. A few days later, Germany and Italy both declared war on the United States.
SEATTLE — One of colonialism’s most dangerous modalities is the perpetuity of exclusion it affects through policies that severely restrict political and economic liberty. One modern example of this legacy is the set of laws created by the British empire, known as the Frontier Crimes Regulations (FCR), that governed and placed severe restrictions upon the northwestern tribal regions of Pakistan, then part of British India. These are known today as the Federally Administered Tribal Areas (FATA).
Today, the FATA are some of the least developed and poorest areas in Pakistan, with 66 percent of the tribal population living below the poverty line. However, May 24 was a victory towards ameliorating this structural oppression in Pakistan, as the National Assembly passed a constitutional amendment that was signed by Pakistani President Mamnoon Hussain on May 31 to end the enforcement of the FCR and merge these tribal areas with the Khyber-Pakhtunkhwa (KP) Province. This finally begins the process of bringing these regions the same political, social and legal rights as the rest of the country.
Historical Overview of the FCR
The colonial historical context that shaped the nature of the FCR and created structural oppression in Pakistan is crucial in understanding its present impact. Lorenzo Veracini, an associate professor in history at Swinburne University of Technology, wrote about the colonial mindset: “When the settlers occupy the land, indigenous peoples are transformed into ‘neighbours’ and, as a result, into ‘intruders.’ ”
The Pashtun tribes that historically lived along the newly created 1893 border between British India and the Kingdom of Afghanistan intruded on British colonial control of the boundary through localized raids and revolts. The solution to curbing these tribes was to legally separate them from the rest of India's polity under British governance by ruling them indirectly under the legal framework of the FCR.
The provisions of the FCR created the illusion of increasing the autonomy of the local tribes. Indirect rule allowed for the preservation of local structure, and customs and provisions under the FCR created a council of local elders (Jirgas) that settled legal disputes. However, defendants were denied essential legal privileges such as the right to appeal, present evidence and the right to representation, and punishments could be levied collectively against family members.
Local autonomy under the FCR was further muted through the appointment of a political agent (PA) by colonial officials who had discretion in appointing individuals to the Jirga and could overturn decisions made by the council. To further ensure the cooperation of tribes, the PA could grant political and economic rewards to tribal elites and levy fines, detain individuals and confiscate property as punishments. Collectively, these measures turned these regions into a state of exception that completely disrupted the social and economic development of the tribes that lived there.
The Contemporary Impact of FCR on Structural Oppression in Pakistan
The British colonial view of the tribal regions as intruding on the greater interests of the state persisted even after independence and the creation of Pakistan in 1947. The new Governor-General of Pakistan, Mohammad Ali Jinnah, convinced tribal leaders to sign treaties continuing the enforcement of the FCR, under the assurance that the state would not interfere with their autonomy and internal interests.
Nearly mirroring the colonial arrangement, this assurance of autonomy functioned as a guise for control and continued developmental neglect of those regions, persisting through every change of government. Even in the most recent 1973 constitution of Pakistan, the president controlled these regions through his appointed governor and PA, who carried the same autocratic powers as the colonial PAs and abused them in the same fashion.
Without provincial status, the FATA continued to suffer from a lack of national investment, which impacted access to clean drinking water, healthcare, education and communication capabilities. The arbitrary enforcement of law and order coupled with the lack of development ultimately continued the same policies of structural oppression in Pakistan and made these regions more dangerous, having become a haven for militants, gun runners and drug smugglers.
A New Path Forward or a Renaming of the Same Issues?
There is still a long way to go in terms of merging the FATA with KP Province and rectifying the tremendous damage caused by the FCR. The merger between the tribal areas and KP Province is a gradual process happening over the next two years. Meanwhile, the tribal areas are governed under a set of interim rules that the president signed into law on May 28 called the FATA Interim Governance Regulation, 2018. However, the regulations are troubling in that they keep much of the FCR administrative structure in place under new names — for example, the PA is kept intact with much of the same discretionary power under the new title of deputy commissioner. A senior leader of Pakistan’s secular leftist Pashtun Awami National Party, Afrasiyab Khattak, has gone as far as to call the interim regulations “FCR reincarnated,” according to the Daily Times.
Although this invokes the same oppressive colonial and post-independence specters, the passing of the amendment finally provides a long-term framework for inclusion and dispels the notion of the FATA residents as intruders within their own land. While the success of the merger remains to be seen, this measure opens the possibility of the country coming together in new ways that finally deconstruct the long-term structural oppression in Pakistan and bring long overdue lasting peace, stability and socioeconomic development to the areas that need it most.
– Emily Bender
On this Virtual Learning Environment (VLE) portal, there are interactive learning materials that will enhance your acquisition of Yoruba listening, speaking, reading and writing skills. Your frequent interactions with the lessons, video and audio clips, website links, e-books and questions on this page will certainly make you a better learner of the language.
You are assured of a fun-filled learning experience here.
Dear Year 7 Students,
The portal you are working on is known as the Virtual Learning Environment (VLE). It is interactive and highly responsive. As you follow all the routes, you will find a whole lot of lessons and topics to refresh your mind on.
You are therefore encouraged to always come on this site for your learning.
The ear is one of the sense organs in the body. It is divided into three parts: the outer, middle and inner ear. The outer ear is the part we see; it looks like a funnel, picks up sounds and sends them to the eardrum. The sound passes to the brain for interpretation through the auditory nerves. There are tiny hairs at the entrance of the outer ear which trap dust and insects and prevent them from entering the inner ear.
ANSWER THESE QUESTIONS
1. The outer part of the ear looks like ...............
2. The ear is an organ for .......................
3. State the functions of the ear.
4. Give four ways of caring for the ear.
5. Explain the problems of improper care of the ear.
6. Draw and label the structure of the ear
- Teacher: OHANEJE Gertrude
An infectious disease caused by bacteria of the genus Brucella, producing rising and falling (undulant) fevers, sweats, malaise, weakness, anorexia, headache, myalgia (muscle pain) and back pain.
Brucellosis is named after its bacterial cause. It is also called undulant fever because the fever is typically undulant, rising and falling like a wave.
Brucellosis is transmitted through contaminated and untreated milk and milk products and by direct contact with infected animals (cattle, sheep, goats, pigs, camels, buffaloes, wild ruminants and, very recently, seals), and animal carcasses. Transmission can be through abrasions of the skin from handling infected animals. In the US, infection occurs more frequently by ingesting contaminated milk and dairy products. Groups at elevated risk include abattoir (slaughterhouse) workers, meat inspectors, animal handlers, veterinarians, and laboratory workers.
The incubation period of brucellosis is usually one to three weeks, but sometimes may be several months after exposure.
The symptoms are like those of many other febrile diseases, but with a marked effect on the musculoskeletal system, evidenced by generalized aches and pains and associated with fatigue, prostration and mental depression. Urogenital symptoms may dominate the clinical presentation in some patients. The duration of the disease can vary from a few weeks to many months.
We have a whale of a math word problem for kids this week! Be ready for a challenge, but don't be scared; you don't have to be a brain sturgeon to figure this out. Practice elementary school math skills such as multiplication, division, and weight conversion with this week's fun word problem challenge. Practicing math every day is dolphinitely one of the best ways to improve math grades.
We know you can do it! Don’t trout yourself, any fin is possible! Give the problem below a try, and be sure to check back tomorrow for the solution.
Question: An adult blue whale weighs 200 tons. An adult Hector’s dolphin weighs 125 pounds. How many times the weight of the Hector’s dolphin is the blue whale? (Hint: One ton is 2,000 pounds.)
Solution: A ton is 2,000 pounds, so 200 tons is 200 × 2,000 = 400,000 pounds. If we divide 400,000 by 125, we get 3,200. So, the blue whale is 3,200 times the weight of the Hector’s dolphin.
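If you want to check the arithmetic with a few lines of Python, the same conversion and division look like this:

TON_IN_POUNDS = 2_000                  # hint: one ton is 2,000 pounds

whale_pounds = 200 * TON_IN_POUNDS     # 200 tons -> 400,000 pounds
dolphin_pounds = 125

print(whale_pounds // dolphin_pounds)  # 3200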
(Dolphin image by John on Flickr.)
Proto-Anatolian is the proto-language from which the ancient Anatolian languages emerged (i.e. Hittite and its closest relatives). As with almost all other proto-languages, no attested writings have been found; the language has been reconstructed by applying the comparative method to all the attested Anatolian languages as well as other Indo-European languages.
For the most part, Proto-Anatolian has been reconstructed on the basis of Hittite, the best-attested Anatolian language. However, the Hittite cuneiform writing system limits the effort to understand and reconstruct Anatolian phonology, partly because of the deficiency of the adopted Akkadian cuneiform syllabary in representing Hittite phonemes and partly because of Hittite scribal practices.
This is especially pertinent to an apparent confusion of voiceless and voiced dental stops, in which the signs -dV- and -tV- are employed interchangeably in different attestations of the same word. Furthermore, in syllables of the structure VC, only the signs with voiceless stops are usually used. The distribution of spellings with single and geminated consonants in the oldest extant monuments indicates that the reflexes of Proto-Indo-European voiceless stops were spelled as double consonants and the reflexes of PIE voiced stops as single consonants. This regularity is most consistent for dental stops in older texts; later monuments often deviate irregularly from the rule.
Common Anatolian preserves the PIE vowel system basically intact. Some cite the merger of PIE */o/ and */a/ (including from *h₂e) as a Common Anatolian innovation, but according to Melchert that merger was a secondary shared innovation in Hittite, Palaic and Luvian, but not in Lycian. Concordantly, Common Anatolian had the following short vowel segments: */i/, */u/, */e/, */o/ and */a/.
Among the long vowels, */eː/ < PIE *ē is distinguished from */æː/ < PIE *eh₁, with the latter yielding ā in Luwian, Lydian and Lycian. Melchert (1994) had also earlier assumed a contrast between a closer mid front vowel */eː/ < PIE *ey (yielding Late Hittite ī) and a more open */ɛː/ < PIE *ē (remaining Late Hittite ē), but the examples are few and can be accounted for otherwise.
The status of the opposition between long and short vowels is not entirely clear, but it is known for certain that it does not keep the PIE contrast intact, as Hittite spelling varies in a way that makes it very hard to establish whether vowels were inherently long or short. Even with older texts being apparently more conservative and consistent in notation, there are significant variations in vowel length in different forms of the same lexeme. It has thus been suggested by Carruba (1981) that the so-called scriptio plena represents not long vowels but rather stressed vowels, reflecting the position of the free PIE accent. Carruba's interpretation is not universally accepted; according to Melchert, the only function of scriptio plena is to indicate vowel quantity. In his view, the Hittite a/ā contrast inherits a diphonemic Proto-Anatolian contrast, with Proto-Anatolian */ā/ reflecting PIE */o/, */a/ and */ā/, and Proto-Anatolian */a/ reflecting PIE */a/. According to Melchert, the lengthening of accented short vowels in open syllables cannot be Proto-Anatolian, and the same goes for lengthening in accented closed syllables.
One of the more characteristic phonological features common to all Anatolian languages is the lenition of the Proto-Indo-European voiceless consonants (including the sibilant *s and the laryngeal *ḫ) between unstressed syllables and following long vowels. The two can be considered together as a lenition rule between unstressed moras, if long vowels are analyzed as a sequence of two vowels. All initial voiced stops in Anatolian eventually merge with the plain voiceless stops; Luwian, however, shows different treatment of voiced velar stops *G- and unvoiced velar stops *K- (initial *G being palatalized to */j/ and then lost before /i/, unlike *K), showing that this was a late areal development, not a Proto-Anatolian one.
Proto-Anatolian is the only daughter language of Proto-Indo-European to directly retain the laryngeal consonants. The letter ‹ḫ› represents the laryngeal *h₂ and probably but less certainly also *h₃. The sequences *h₂w and *h₃w yield a labialized laryngeal *ḫʷ.
In addition to the laryngeals, Common Anatolian was long also thought to be the only daughter to preserve the three-part velar consonant distinction from Proto-Indo-European. The best evidence for this was thought to come from its daughter language, Luwian. However, this has been refuted by Melchert: Anatolian is a centum branch.
The voiced aspirated stops lost their aspiration over time and merged with the plain voiced stops. The liquids and nasals are inherited intact from Proto-Indo-European, and so is the glide *w. No native Proto-Anatolian words begin with *r-. One possible explanation is that it was true in Proto-Indo-European as well. Another is that it is a feature of languages from the area in which the daughter languages of Proto-Anatolian were spoken.
According to Fortson, Proto-Anatolian had two verb conjugations. The first, the mi-conjugation was clearly derived from the familiar Proto-Indo-European present tense endings. The second, the ḫi-conjugation appears to be derived from the Proto-Indo-European perfect. One explanation is that Anatolian turned the perfect into a present tense for a certain group of verbs while another, newer idea is that the ḫi verbs continue a special class of presents which had a complicated relationship with the Proto-Indo-European perfect.
- Melchert forthc., 7f.
- Luraghi 1998:174
- Luraghi 1998:174
- Melchert 1993:244
- Melchert 2015:10
- Melchert 2015:9
- Luraghi 1998:192
- Melchert 1994:76
- Melchert 2015:7
- Fortson 2009:172
- Melchert 2015:8
- Fortson 2009: 173
- Fortson, Benjamin W. (2009). Indo-European Language and Culture: An Introduction (2nd ed.). Oxford: Wiley-Blackwell. pp. 170–199. ISBN 978-1-4051-8896-8.
- Silvia Luraghi (1998). "The Anatolian languages". In Anna Giacalone Ramat; Paul Ramat (eds.). The Indo-European Languages. London and New York: Routledge. ISBN 978-0-415-06449-1.
- Craig Melchert (1987). "PIE velars in Luvian" (PDF). Studies in Memory of Warren Cowgill. pp. 182–204. Retrieved 2008-10-27.
- Craig Melchert (1993). "Historical Phonology of Anatolian" (PDF). Journal of Indo-European Studies, 21. pp. 237–257. Retrieved 2008-10-27.
- Craig Melchert (1994). Anatolian Historical Phonology. Rodopi. ISBN 978-90-5183-697-4.
- Melchert, H. Craig (2015). "Hittite Historical Phonology after 100 Years (and after 20 years)". Hrozny and Hittite: The First Hundred Years (PDF). Retrieved 2016-07-27.
- Melchert, H. Craig. "The Position of Anatolian" (PDF). UCLA – Department of Linguistics – Craig Melchert's homepage. Los Angeles, CA: UCLA College of Letters and Science, University of California. pp. 1–78. Retrieved 10 June 2019. |
Bitter Leaf is effective against human and livestock parasites in their different stages of development.
Parasites are organisms that live off other organisms, or hosts, to survive. Some parasites don’t noticeably affect their hosts. Others grow, reproduce, or invade organ systems that make their hosts sick, resulting in a parasitic infection.
Parasitic infections can be caused by three types of organisms:
Protozoa are single-celled organisms that can live and multiply inside your body. Some infections caused by protozoa include giardiasis. This is a serious infection that you can contract from drinking water infected with Giardia protozoa.
Helminths are multi-celled organisms that can live in or outside of your body. They're more commonly known as worms. They include flatworms, tapeworms, thorny-headed worms, and roundworms.
Ectoparasites are multi-celled organisms that live on or feed off your skin. They include some insects and arachnids, such as mosquitoes, fleas, ticks, and mites.
The symptoms of parasitic infections vary depending on the organism. For example:
- Trichomoniasis is a sexually transmitted infection caused by a parasite that often produces no symptoms. In some cases, it may cause itching, redness, irritation, and an unusual discharge in your genital area.
- Giardiasis may cause diarrhea, gas, upset stomach, greasy stools, and dehydration.
- Cryptosporidiosis may cause stomach cramps, stomach pain, nausea, vomiting, dehydration, weight loss, and fever.
- Toxoplasmosis may cause flu-like symptoms, including swollen lymph nodes and muscle aches or pains that can last for over a month.
How Bitter Leaf Helps
The antiparasitic effect of liquid extract of Bitter Leaf has been studied extensively. It is effective against human and livestock parasitic worms in their different stages of development. Bitter Leaf extract kills parasites by paralyzing them and making them unable to feed; consequently, they die off.
The gaseous object G2 has survived its swing around the Milky Way’s central supermassive black hole, but the questions of what it is and where it comes from remain unanswered.
Two new studies suggest that ultraluminous X-ray sources are not all created by beefy black holes.
Astronomers have detected a supermassive black hole in the center of a tiny galaxy — where it has no right to be.
A new diagram might link the diverse visible-light characteristics of quasars to two physical properties — essentially, their accretion rate and orientation. If the analysis holds up, it could point the way toward a long-sought unification.
A new measurement could be the farthest back in time astronomers have ever reached when measuring a black hole’s spin.
New data shed light on last month’s exciting discovery of a black hole triplet — but they suggest instead that the threesome is really just a twosome.
Astronomers have detected a high-speed, long-lasting gas streamer spewing from the active galactic nucleus of NGC 5548. This discovery might provide new insights into how supermassive black holes influence their host galaxies.
Astronomers have discovered that one member of a pair of supermassive black holes is actually a pair itself, turning the system into the most distant black hole triplet yet detected and raising hopes for future discoveries.
Newly published observations provide the first real evidence supporting a theory that tells us how black hole jets form.
Galaxies’ central black holes are surprisingly simple creatures at heart, but they have a complicated past. New studies are starting to remove history’s obfuscating veil.
A bizarre X-ray flare first spotted in 2010 could be a signal from two black holes that will ultimately unite into a single beast.
Astronomers have developed a new method to measure distances to bright but faraway galaxies, a tool which will help better constrain the expansion rate of the universe.
Infrared observations of the Circinus Galaxy may help reveal the shape of the dusty region fueling its active galactic nucleus and shed light on what governs dust structures in other galaxies.
X-ray observations and cosmic coincidence unveil the details of a distant supermassive black hole. The result could be a first step in expanding our understanding of how black holes have beefed themselves up over the last several billion years.
A stellar-mass black hole in the iconic galaxy M83 seems to have kept eating long after it should have stopped. If true, the discovery could have implications for how much black holes can affect their environments.
Astronomers have found supermassive black holes in 151 dwarf galaxies, surprising expectations and providing a time machine into black hole formation.
Strange emission from a distant galaxy paints an enigmatic picture of what’s happening inside its core. One solution: instead of one supermassive black hole, the galaxy hosts two trapped in a tight dance around each other.
Astronomers have revealed a supposedly monster black hole to be rather ordinary in size.
Observations of one of the most powerful supernovas ever recorded suggest that the standard model for gamma-ray bursts might be missing a piece of the puzzle.
Observations reveal ionized metals in the jet shot out by a black hole, long-sought information that will help astronomers understand how these objects create their powerful beams.
The Milky Way's central supermassive black hole eats only a fraction of the gas available to it. New X-ray observations suggest how the beast manages to stay so trim when faced with a feast.
A pulsar discovered last April is helping astronomers measure the magnetic field surrounding our galaxy’s central black hole.
Astronomers around the world are watching as the gaseous object called G2 heads for a close pass around the Milky Way's central supermassive black hole. Now it looks like the distended cloud is starting to swing back toward us.
Twinkle, twinkle, quasi-star: cosmic lenses could tell us what you are.
Astronomers have been waiting for our galaxy’s slumbering supermassive black hole to stir for a snack. Instead, the universe handed them a different treat.
An embolus (plural emboli; from the Greek ἔμβολος "clot, lit. ram") is any detached, traveling intravascular mass (solid, liquid, or gaseous) carried by the circulation, which is capable of clogging arterial capillary beds (creating an arterial occlusion) at a site distant from its point of origin.
By contrast, non-traveling blockages, such as atheromas and thrombi, develop locally from vascular trauma, endothelial pathology, or vascular inflammation. However, if a thrombus breaks loose from its genesis site it becomes a thrombo-embolus and, if not broken down during transit, may cause embolism(s).
Classification by substance
- Thromboembolism – embolism of thrombus (blood clot).
- Cholesterol embolism – embolism of cholesterol, often from atherosclerotic plaque inside a vessel.
- Fat embolism – embolism of fat droplets, often following a bone fracture.
- Air embolism (also known as a gas embolism) – embolism of air bubbles.
- Septic embolism – embolism of bacteria-containing pus.
- Tissue embolism – embolism of small fragments of tissue.
- Foreign body embolism – embolism of foreign materials such as talc and other small objects.
- Amniotic fluid embolism – embolism of amniotic fluid, foetal cells, hair, or other debris that enters the mother's bloodstream via the placental bed of the uterus and triggers an allergic reaction.
In thromboembolism, the thrombus (blood clot) from a blood vessel is completely or partially detached from the site of thrombosis (clot). The blood flow will then carry the embolus (via blood vessels) to various parts of the body, where it can block the lumen (vessel cavity) and cause vessel obstruction or occlusion. Note that the free-moving thrombus is called an embolus. A thrombus is always attached to the vessel wall and is never freely moving in the blood circulation. This is also the key difference for pathologists in determining the cause of a blood clot, either by thrombosis or by post-mortem blood clot. Vessel obstruction then leads to different pathological issues such as blood stasis and ischemia. However, thromboembolism is not the only cause of such obstruction; any kind of embolism is capable of causing the same problem.
Fat embolism usually occurs when endogenous (from sources within the organism) fat tissue escapes into the blood circulation. The usual cause of fat embolism is therefore the fracture of tubular bones (such as the femur), which will lead to the leakage of fat tissue within the bone marrow into ruptured vessels. There are also exogenous (from sources of external origin) causes such as intravenous injection of emulsions.
An air embolism, on the other hand, is almost always caused by exogenic factors. One cause is the rupture of alveoli, through which inhaled air can leak into the blood vessels. Other more common causes include the puncture of the subclavian vein by accident or during an operation where there is negative pressure. Air is then sucked into the veins by the negative pressure caused by thoracic expansion during the inhalation phase of respiration. Air embolism can also happen during intravenous therapy, when air is leaked into the system (however, this iatrogenic error in modern medicine is extremely rare).
Gas embolism is a common concern for deep-sea divers because the gases in human blood (usually nitrogen and helium) dissolve in larger amounts during the descent into the deep sea. When the diver ascends to normal atmospheric pressure, the gases become less soluble, causing the formation of small bubbles in the blood. This is also known as decompression sickness or the bends. This phenomenon is explained by Henry's law in physical chemistry.
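Henry's law makes the mechanism explicit: the concentration c of a gas dissolved in a liquid is proportional to that gas's partial pressure p above the liquid, c = k_H · p, where k_H is a constant specific to the gas and the temperature. The high pressure at depth therefore pushes extra gas into the blood, and the pressure drop during ascent forces the excess back out of solution as bubbles.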
Embolism by other materials is rather rare. Septic embolism happens when purulent (pus-containing) tissue is dislodged from its original focus. Tissue embolism is a near-equivalent to cancer metastasis, which happens when cancer tissue infiltrates blood vessels, and small fragments of it are released into the blood stream. Foreign-body embolism happens when exogenous (and only exogenous) materials such as talc enter the blood stream and cause occlusion or obstruction of blood circulation. Bullet embolism occurs in approximately 0.3% of cases of gunshot wounds. Amniotic-fluid embolism is a rare complication of childbirth.
In the brain stem are located groups of nerve cells to which are given the name of centers.
These are three in number, named the cerebrum, the cerebellum, and the brain stem.
The brain stem is really the upper extension of the spinal cord within the head.
These are a few of the centers which are present in the brain stem.
The kind of joint to be used having been hit upon, the next point was to secure a safe passage for the brain stem.
We have a center in the brain stem from which the nervous discharges come.
These are only some of the devices which Nature had to contrive in order to secure a safe passageway for the brain stem.
The brain stem has a special function of its own in connection with the control of what are often called the vital processes.
Both these sets of nerves arise from centers in the brain stem, and both these centers appear to be discharging continuously.
When this warm blood enters the brain stem it arouses the sweat center and an increased secretion of sweat results.
brain stem or brainstem
The portion of the brain, consisting of the medulla oblongata, pons Varolii, and mesencephalon, that connects the spinal cord to the forebrain and cerebrum.
The part of the vertebrate brain located at the base of the brain and made up of the medulla oblongata, pons, and midbrain. The brainstem controls and regulates vital body functions, including respiration, heart rate, and blood pressure. See also reticular formation.
Ballistics is the science of the motion of projectiles (flying objects), mainly bullets fired from guns. It describes the path and behavior of projectiles. Gun ballistics may be divided into the following four categories.
- Internal ballistics is about the motion of the projectile inside the barrel (the tube part of the gun). This motion depends on the pressure of the gas created by the burning gunpowder. The pressure depends on the amount of gunpowder in the cartridge, the type of gunpowder, the size of the gunpowder grains, the free space behind the bullet, and the outside temperature. The motion also depends on the size and weight of the bullet. The most important number from internal ballistics for performance is the muzzle velocity (the projectile's speed at the front edge of the barrel).
- Transition ballistics is the science of the bullet's behavior in the short distance just after it leaves the barrel. It studies the effect of the gases that follow the bullet out of the barrel, which move faster than the bullet itself. These gases can negatively (badly) affect the ballistic path.
- External ballistics studies the trajectory (path) of a bullet outside the barrel (see the sketch after this list). This trajectory depends on three ballistic conditions: the beginning speed, the angle of fire, and the ballistic coefficient (a number which characterizes the influence of the air). The trajectory of a projectile is also influenced by air resistance, the rotation of the bullet, the density and pressure of the air, and the strength and direction of the wind. For long-range missiles, the rotation of the Earth is also important.
- Terminal ballistics is about what happens when the bullet hits something.
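To make the external-ballistics idea concrete, here is a minimal numerical sketch in Python (not a real ballistics solver); the muzzle velocity, angle of fire, and drag constant are illustrative values only, with the lumped drag constant standing in for the ballistic coefficient:

import math

g = 9.81                   # gravity, m/s^2
k = 0.0005                 # lumped air-drag constant, 1/m (illustrative)
v0 = 300.0                 # muzzle velocity, m/s (illustrative)
angle = math.radians(30)   # angle of fire
dt = 0.001                 # time step, s

x, y = 0.0, 0.0
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)

while y >= 0.0:
    speed = math.hypot(vx, vy)
    vx -= k * speed * vx * dt          # drag slows horizontal motion
    vy -= (g + k * speed * vy) * dt    # gravity plus drag on vertical motion
    x += vx * dt
    y += vy * dt

print(f"range with drag: about {x:.0f} m")
# With k = 0 (no air), the classic formula v0**2 * sin(2*angle) / g gives about 7,944 m.

Changing the speed, angle, or drag constant shows how each ballistic condition bends the trajectory.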
1. Inherit the Wind was first produced on stage in 1955. It was produced as a television movie in 1960 and in 1988, and it was on Broadway again in 1996. Each time the play has been produced, it has been successful. Research each era and describe the historical events that may have contributed to the relevance of each production.
2. Read one of the following works and compare it to Inherit the Wind:
- Stephen Vincent Benet's The Devil and Daniel Webster
- Arthur Miller's The Crucible
3. Lawrence and Lee convey the theme through the characterization of their two main characters: Drummond and Brady. Restage parts of the play, changing only these characters' personality traits: make Drummond arrogant and overly confident; make Brady decent and kind. Discuss how such a change affects the audience's reaction to the characters and the central conflict.
4. Imagine that Drummond is asked to give the eulogy at Brady's funeral. Write the eulogy. Or, if you prefer, assume that you, like Hornbeck, have covered the trial and now must write an article.
5. Create a Web site to introduce Inherit the Wind to other readers. Design pages to intrigue and inform your audience, and invite other readers to post their thoughts and responses to their reading of the play.
Got a version of Excel that uses the menu interface (Excel 97, Excel 2000, Excel 2002, or Excel 2003)? This site is for you! If you use a later version of Excel, visit our ExcelTips site focusing on the ribbon interface.
With more than 50 non-fiction books and numerous magazine articles to his credit, Allen Wyatt is an internationally recognized author. He is president of Sharon Parq Associates, a computer and publishing services company.
Learn more about Allen...
Macros in Excel are written in a language called Visual Basic for Applications (VBA). Like any other programming language, VBA includes certain programming structures which are used to control how the program executes. One of these structures is the For ... Next structure. The most common use of this structure has the following syntax:
For X = 1 To 99
    program statements
Next X
You are not limited to using the X variable; you can use any numeric variable you desire. You are also not limited to the numbers 1 and 99 in the first line; you can use any numbers you desire, or you can use numeric variables. When a macro is executing, and this structure is encountered, Excel repeats every program statement between the For and Next keywords a certain number of times. In the syntax example, the statements would be executed 99 times (1 through 99). The first time through the structure, X would be equal to 1, the second time through it would be equal to 2, then 3, 4, 5, and so on, until it equaled 99 on the last iteration.
Normally, as Excel is working through the For ... Next structure, it increments the counter by one on each iteration. You can also add a Step modifier to the For ... Next structure, thereby changing the value by which the counter is incremented. For instance, consider the following example:
For X = 1 To 99 Step 5
    program statements
Next X
The first time through this example, X will be equal to 1, and the second time through, X is equal to 6 because it has been incremented by 5. Similarly, the third time through X is equal to 11. You can also use negative numbers for Step values, which allows you to count downwards. For instance, take a look at the following:
For X = 24 To 0 Step -3
    program statements
Next X
In this example, the first time through the structure X is equal to 24, the second time it is equal to 21, and the third time it is equal to 18.
ExcelTips is your source for cost-effective Microsoft Excel training. This tip (2024) applies to Microsoft Excel 97, 2000, 2002, and 2003.
Save Time and Supercharge Excel! Automate virtually any routine task and save yourself hours, days, maybe even weeks. Then, learn how to make Excel do things you thought were simply impossible! Mastering advanced Excel macros has never been easier. Check out Excel 2010 VBA and Macros today!
More than just moccasins: American Indian words in English
A menagerie of words
Most English speakers could easily identify words like tomahawk, moccasin, or tepee as having Amerindian origins (from Virginia Algonquian, Powhatan, and Sioux, respectively), but indigenous American languages have given English many other words which have now become so fully naturalized that their roots often go unrecognized. In fact, fully half of the names of the US states (including Arizona, Connecticut, Kentucky, and Missouri, to name a few) are derived ultimately from Amerindian words. Even some words which appear to be thoroughly English have hidden Amerindian roots: woodchuck, which looks like a typical English compound incorporating the word wood, is actually a folk-etymological simplification of an Algonquian word (such as Narragansett ockqutchaun). Similarly, sockeye, referring to the distinctive salmon of the Pacific Northwest, reinterprets a Salish word suk-kegh.
Today, the English language blankets the United States from coast to coast, but when permanent English settlement of North America began at Jamestown, Virginia in 1607 and Plymouth, Massachusetts in 1620, the region of the present-day United States was a diverse patchwork of hundreds of American Indian languages. When the first English-speaking settlers arrived, they encountered flora, fauna, and cultural artifacts for which they had no names in their native tongue, and the Algonquian languages spoken by the communities around the early colonies in Virginia and Massachusetts were thus the origin of many early borrowings. To take animals as an example, we have moose and skunk from Eastern Algonquian, opossum and raccoon from Virginia Algonquian, and quahog from Narragansett.
What do chocolate and coyotes have in common?
The English were relative latecomers to the Americas, so they encountered not only indigenous peoples, but also other European colonists who had already absorbed local words into their own vocabularies. Many words from North American languages made their way into English through the intermediary of another colonial language. Mesquite and coyote, which entered English via Mexican Spanish, originated in Nahuatl (also the ultimate origin of chocolate). Caribou and toboggan came to English through Canadian French, which had borrowed them from Mi'kmaq. And English took that most American of words, bayou, from Louisiana French, which had adapted it from Choctaw bayuk.
We are thankful for…
November is a natural time to think about the impact of American Indian languages on English: not only is it Native American Heritage Month, but also the month of the US Thanksgiving holiday, which is bound by national mythology with a semi-legendary three-day feast shared by the English inhabitants of the Plymouth colony and their Wampanoag neighbors in 1621. Although the most emblematic foods of the contemporary Thanksgiving, turkey and cranberries, have names of European origin (despite referring to New World foodstuffs), many of the other foods which traditionally grace the Thanksgiving table show their American origins in their etymologies. For instance, the word for squash, a vital food for the early colonists, comes from Narragansett, as does succotash. The cornmeal pone is from Virginia Algonquian, and the pecan, star of many a Thanksgiving pie, comes to English from Illinois, via French.
Any discussion of the impact of Amerindian languages on English must also acknowledge its obverse: the displacement and devastation of American Indian peoples and the languages they spoke which was a direct consequence of the continent’s settlement by speakers of English and other European languages. In the centuries since European colonization, many of the indigenous languages of North America have been lost, or reduced to a tiny, aging population of native speakers. Under present circumstances, the majority of indigenous languages still spoken in the United States today are not likely to survive the century, but attitudes towards American Indian languages, which once emphasized assimilation, have changed, and many communities are now seeking to revitalize their linguistic heritage. It is to be hoped that English will continue to exist alongside, and share mutual influence with, these languages for generations to come.
An amplifier is a device which increases the amplitude of an electrical signal. Amplitude represents the "strength" of a signal, so amplification effectively means increasing signal strength.
In audio and video work there are several common types of amplifier. Some examples are described below.
- Power amplifier: Large amplifier used in sound reinforcement systems. Usually has few (if any) controls other than on/off and level.
- Line amplifier: Small amplifier used to boost a mic-level signal to line-level.
- Powered mixer: A unit that combines a sound mixer and amplifier.
- Distribution amp (DA or VDA): Amplifier designed to take a video signal and amplify it enough to be sent to multiple destinations. Also used to adjust the video and chroma levels.
- Proc amp: Processing amplifier, used to selectively adjust different parts of a video signal, e.g. balancing colours.
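Amplifier "strength" (gain) is usually quoted on a logarithmic decibel scale rather than as a raw ratio. A minimal Python sketch of the standard voltage-gain formula (the example voltages are arbitrary):

import math

def voltage_gain_db(v_in, v_out):
    # standard definition of voltage gain in decibels
    return 20 * math.log10(v_out / v_in)

# e.g. boosting a 10 mV mic-level signal to a 1 V line-level signal:
print(voltage_gain_db(0.010, 1.0))  # 40.0 dB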
There has been an ongoing debate as to whether viruses are living organisms or simply inert particles of DNA or RNA. A recent study has now established that viruses may have their own immune systems. This of course tends to put them in the category of living organisms. Per the article:
Viruses can acquire fully functional immune systems, according to new research that bolsters the controversial theory that viruses are living creatures...The study, published in the journal Nature, is the first to show that a virus can indeed possess an immune system, not to mention other qualities commonly associated with complex life forms...The use of a complex immune system “doesn’t prove” that viruses are living beings, “but it does add to the argument,” he said.
Reading about this research at Science Daily, I didn't understand what they were saying. When I first learned about viruses almost 5 decades ago, I learned that they had no metabolism. Tobacco mosaic viruses had been stored in crystalline form in a sealed container on a shelf for a decade, with no evident energy flow. When subsequently placed on a living plant, they were infective. I took this to mean that the molecules in a virus don't move around when they're outside of host cells.
Immune systems of animals, on the other hand, are composed of organs, which are composed of tissues, which are composed of cells. True, bacteria can have immunity to phages. Is this intra-cellular function called an immune system? I don't know.
So what does "immune system of a virus" even mean? I can imagine a virus incorporating genetic code, for example the genes for bacterial resistance to phages. But a functioning immune system? How does this work if the molecules inside of the virus aren't even moving around? This does not compute for me.
GEOG 101/102: Survey of Physical Geography
Have you ever wondered:
- How human activities have impacted our natural environment?
- Why rainforests have such high biodiversity?
- Why the land surface in Illinois is so flat?
- How beaches form?
- Why it floods so quickly after a rainfall in urban environments?
- Why the soils in Illinois are black and the soils in Georgia red?
- What factors influence the distribution of vegetation globally?
- How much freshwater is there in the world?
- How are mountains built?
- What factors influence the distribution of soils on Earth?
What General Education Objectives are met in Geography 101/102?
The physical geography perspective integrates information from other fields such as geology, biology, physics, and chemistry. Explore aspects of geography such as water resources, soil and vegetation distributions within a global context. Examples of current research in the field of physical geography expose students to the thought process associated with the scientific method. Learn to think critically about the geographic environment by examining the impact of humans on the physical landscape. Develop written, quantitative, technical, and oral skills through a variety of laboratory exercises in GEOG 102.
Facts about Geography 101/102:
Course Offered: Both spring and fall semesters: Geog 101, 3 credit hours; or Geog 101 and Geog 102, 4 credit hours
General Education: Fulfills a science/math distributive area requirement and matches the following general education goals: develop communication and technical skills, apply various modes of inquiry, and develop an understanding of integrated knowledge through a combination of lecture material, readings, laboratory assignments, and exams.
Course Goal: To introduce students to processes and interactions within the physical environment including those associated with hydrology, landforms, soils, and vegetation.
GEOG 302: Soil Science (4)
Lecture, field and laboratory study of physical, chemical, and biological properties of soils with emphasis on soil development, classification, geography, management, and conservation. Lecture, laboratory, and field experience.
GEOG 402/502: Pedology (4)
Soil genesis, distribution, and classification. Environment, geomorphology, and soil formation relationships. Soil description, mapping, and interpretation for land use. Lecture, laboratory, and field experience.
GEOG 403/503: Soil Geography and Land Use Planning (3)
Regional and local problems of soil utilization and management. Strategies for using soil data in land use plans and legislation.
GEOG 404: Soil Profile Description and Interpretation
Lecture, lab, and field experience involving description, interpretation, and classification of soil profiles and soil-landscape geographic relationships for agricultural, urban, and wildland use. Participate in soil judging contests.
GEOG 465/665: Field Methods in Physical Geography (with Lesley Rigg) (3)
Field problems of urban, economic, cultural, and physical geography. Lecture, laboratory, and field experience.
GEOG 477: Environmental Field Camp (co-instructor)
Field camp designed to train students in field methods and integrative problem solving related to environmental geosciences covering topics such as field methods in hydrogeology, surface-water and vadose-zone hydrology, water quality analysis, ecosystem health, environmental surface geophysics, site evaluation and techniques, and regional landscape history and environmental change. Offered during summer session only.
GEOG 505: Concepts in Physical Geography (with Lesley Rigg) (3)
Chapter 6 Earthquakes
Transcript of Chapter 6 Earthquakes
Evan, Steven, Morgan, and Rachel
Earthquake Hazard and safety
Earthquakes and Seismic Waves
Elastic Strain Energy
Energy stored as a change in shape.
This energy is eventually released as earthquakes.
Faults and Earthquakes
As rocks slowly move past each other, elastic potential energy builds up along a strike-slip fault.
The focus is the location on a fault where rupture and movement begin.
Most fault zones are about 40 to 200 kilometers wide.
The San Andreas Fault is a good example of a Fault Zone.
What is an earthquake?
An earthquake is the rupture and sudden movement of rocks along a fault.
Plate Boundaries and Earthquakes
Lithospheric plates interact at different boundaries and produce earthquakes.
Earthquakes and Plate Boundaries
How are earthquakes measured?
Scientists determine the earthquake's size by measuring how much the rock slipped along the fault, and analyzing the heights of the seismic waves.
Recording Seismic waves
Seismograph: Instrument used to record and measure movements of the ground caused by seismic waves.
Reading a seismogram
A pen is attached to a pendulum.
The drum moves as the ground shakes.
The pen records the motion on the paper wrapped around the drum.
Record of the seismic waves: seismogram
Seismograms are used to calculate the size of earthquakes and to determine their locations.
The heights of the waves on the seismogram indicate the sizes of ground motion for each type of wave.
Scientists can make a graph using average S-wave and P-wave speeds.
Seismic waves are the waves of energy that are released at the epicenter of an earthquake, or the point on Earth's surface directly above the earthquakes focus.
There are three types of seismic waves, Primary waves, Secondary waves, and Surface waves.
Primary waves, also called P-waves, are compressional waves. P-waves cause rock particles to vibrate as the wave moves through them. They are also the fastest seismic waves.
Secondary waves, or S-waves, are also known as shear waves. These waves cause particles to vibrate perpendicular to the wave's direction of travel. S-waves travel at about 60% of the speed of primary waves.
When S-waves and P-waves reach the last few kilometers of Earth's crust, some of the energy gets trapped. This energy forms surface waves. These waves travel very slowly and move along the surface. They make rock particles move with a side-to-side motion and a rolling motion.
Locating an Epicenter
1. Find the arrival time difference of the waves.
2. Find the distance from the epicenter.
3. Plot the distance on a map.
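The step from arrival-time difference to distance works because P- and S-waves travel at different speeds. A minimal Python sketch, assuming typical average crustal speeds of about 6 km/s for P-waves and 3.5 km/s for S-waves (real work uses travel-time tables):

V_P = 6.0   # assumed average P-wave speed, km/s
V_S = 3.5   # assumed average S-wave speed, km/s

def epicenter_distance_km(sp_delay_s):
    # delay = d/V_S - d/V_P, so d = delay / (1/V_S - 1/V_P)
    return sp_delay_s / (1.0 / V_S - 1.0 / V_P)

print(round(epicenter_distance_km(30.0)))  # a 30 s S-P delay -> about 252 km

Distances from three stations, drawn as circles on a map, intersect at the epicenter.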
Measuring Earthquake Size
Divergent Plate Boundary:
Rocks break apart under tension stress, forming normal faults.
The magnitude scale is based on a seismogram's record of the amplitude of ground motion.
Most common hazard:
the earthquake ruptures gas pipes, causing fires.
The magnitude of an earthquake is determined by the buildup of elastic strain energy in the crust.
Some water pipes break and make it hard for firefighters to fight the fire.
Most measured magnitude values range between 0 and 9.
Convergent Plate Boundary:
Rocks break under compression stress, forming reverse faults.
Richter Magnitude Scale
During earthquakes, steep hills can produce landslides.
The first magnitude scale was published in 1935 by Charles Richter.
Using Seismic Wave Data:
Comparing seismic waves is like watching people running in a race. When they start, the fast and slow waves are close together, but at the end they are very far apart.
Landslide: sudden movement of loose soil and rock.
These magnitudes are only accurate for earthquakes between 3.0 and 7.0 in magnitude.
Transform Plate Boundary:
Rocks slide horizontally past one another, forming strike-slip faults.
Landslides can destroy homes, block roads and destroy all electrical wires.
Moment Magnitude Scale
This scale is based on the amount of energy released during an earthquake.
It gives a consistent measure of earthquake size.
The seismic moment is related to this scale.
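As a concrete illustration (the slides don't spell the formula out), the standard Hanks-Kanamori relation converts a seismic moment M0, measured in newton-meters, into a moment magnitude:

import math

def moment_magnitude(m0_newton_meters):
    # Hanks-Kanamori relation: Mw = (2/3) * (log10(M0) - 9.1)
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# The 2011 Tohoku earthquake released a moment of roughly 4e22 N*m:
print(round(moment_magnitude(4e22), 1))  # about 9.0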
All seismic waves start at the same place but arrive at different times. This is because they travel at different speeds and along different paths. P-waves and S-waves travel under the surface in the mantle, and surface waves travel in the Earth's crust.
Liquefaction: solids that act like liquids
The Big Idea: Earthquakes cause seismic waves that can be devastating to humans and other organisms.
Scientists use P- and S-waves to investigate the layering of Earth's interior. They do this by observing how the waves change speed and direction when they hit different layers of the Earth.
A Shadow Zone is a large area of Earth that doesn't receive any seismic waves.
The reason there are shadow zones is that secondary waves can only travel through solids, so they stop when they reach the liquid outer core. Primary waves can travel through the outer core, but their paths are still bent.
The intensity depends on the distance from the epicenter and geology.
The maximum is usually found near the epicenter.
Sediments and rocks also affect the intensity.
Plotting Intensity Values
Effects of Shaking
Earthquakes away from Plate Boundaries
Features of Earthquakes
The sediment is mostly strong, but when an earthquake shakes it, the sediment can act like a liquid and become very dangerous.
Scientists measure seismic waves to determine the facts about earthquakes. Because the waves travel at different speeds, they help to locate the epicenter.
Scientists plot the values on the map.
The data is then contoured.
This is similar to mapping topography with contour lines.
Most earthquakes form along plate boundaries, however, some earthquakes form away from plate boundaries.
Main Idea Lesson 1: Most earthquakes occur at plate boundaries when rocks break and move along faults.
Main Idea Lesson 2: Earthquakes cause seismic waves that provide valuable data.
Main Idea Lesson 3: Data from seismic waves are recorded and interpreted to determine the location and size of an earthquake.
Main Idea Lesson 4: Effects of an earthquake depends on its size and the types of structures and geology in a region.
A tsunami is one large ocean wave caused by an earthquake.
They cause many homes and buildings to flood.
The highest wave can reach 30 meters.
Signs of tsunamis: the shoreline rapidly moves back toward the sea and exposes a large area of land.
Earthquake safety: Before an earthquake
Create an earthquake plan with family
1. Have a meeting spot that is safe
2. Have a backpack full of supplies including:
water for everyone
battery powered radio
first aid kit
Earthquake safety: During an Earthquake
Move away from windows
Seek sturdy shelter
Stay in the open
a. away from power lines
b. away from anything that could hurt you
Earthquake safety: After an earthquake
Stay calm and stay away from danger
To stay safe
Have an adult shut off any valves for water and gas pipes
If you smell gas, leave the building
Be careful around shards of glass
Stay away from beaches in case of tsunamis
Schmidt had been working on the excavations at Göbekli Tepe, sometimes called Turkey’s Stonehenge, with the German Archaeology Institute since 1995.
On Göbekli Tepe (“Potbelly Hill”), the German excavations uncovered several massive stone enclosures dating between 10,000 and 8000 B.C.E., the dawn of civilization and the Neolithic age. Many of the stones are carved with highly elaborate depictions of animals and anthropomorphic figures. With no evidence of a contemporaneous village within the vicinity of the Göbekli Tepe ruins, it is believed that the site served exclusively as a ceremonial center. The earliest sanctuary for communal ritual activity known to date, the Göbekli Tepe ruins have led scholars to reconsider the origins of religion and human civilization.
Read “The Göbekli Tepe Ruins and the Origins of Neolithic Religion” in Bible History Daily. |
Fossils are the foundation for scientists’ understanding of the history of Earth and all life on it. Everything humans know about dinosaurs, earlier species of hominids, and all other extinct species began with the discovery of fossils. Much of what anthropologists now understand about early human migration comes from fossils. Scientists’ knowledge of mass extinctions and their ability to make predictions about the planet’s future are largely based on fossils. While the prevailing image of fossils is a paleontologist painstakingly digging up a massive dinosaur skeleton in a remote desert, there are several different types of fossils, and together they form a clear picture about life on Earth before modern humans came to be.
Petrification, which is also known as permineralization, is the process by which the cells of highly porous organic materials such as bones, nuts and wood are gradually replaced over time with minerals. This can happen after events such as volcanic eruptions: when a tree or animal is buried so suddenly that it has no chance to rot or be eaten by a predator, minerals gradually replace its tissues, transforming the organism to stone and preserving it for millennia. Petrified fossils are the ones most people tend to think of as fossils because they are large and hard, and they make up most of the bones found in paleontological digs. Petrified fossils are the most common fossils and have given paleontologists a great deal of information about prehistoric species, including dinosaurs.
Unlike petrified fossils, carbon fossils are delicate and preserve life in fine detail, including the soft tissue of plants and animals. Insects and fish that have fallen to the bottom of bodies of water are trapped there by layers of sediment, such as ash from a volcanic eruption, that protect them from being eaten or decomposing. Over millions of years, more layers of sediment fall on top of them, and the elapsing time and the weight of the accumulating layers compress the ash or other material into a rock called shale. The insects and fish disintegrate during this time. All living things contain the element carbon, and the carbon remains in the shale, leaving a thin but detailed layer on the rock. In some carbon fossils, the segments of an insect's body, the patterns on a butterfly's wings, or the veins in a leaf are visible.
Cast and Mold Fossils
Mold fossils lack a lot of the detail of carbon fossils. They tend to occur in animals with hard body parts, like exoskeletons, teeth, or shells. The organism is trapped in a porous, sedimentary rock, where water flows through it and dissolves the soft tissue of the body. Over time, a mold forms. An interior mold might happen with a fossil that has an empty cavity, like a shell. Sediment fills and hardens inside the shell, while the shell dissolves over time. The interior contours of the shell are left on the sediment that filled in the interior. An exterior mold happens similarly, but the sediment hardens around the hard body parts, which dissolve and leave a hollow cavity where the organism once was.
Scientists who come across mold fossils are left with negative space that represents the animal that was once there. Casting comes into the picture either naturally or synthetically. In some cases, nature creates a cast of the animal or body part by depositing minerals in the hollow spaces left by the mold fossil. If that does not happen, paleontologists can create a synthetic cast using latex or plaster of Paris. They use this to gain a sense of the contours, size and other details of the animal that created the fossil.
True-form fossils are organisms that are preserved entirely in their natural form. This can happen a few ways, but it typically involves the organism becoming entrapped and preserved. Amber is hardened resin from coniferous trees of the early Tertiary period. Insects fall into the tree resin and remain stuck because of its stickiness, and over time more resin falls on top of them. Over millions of years, the resin hardens and changes its molecular structure in a process called polymerization until it becomes amber. Entrapment in the hardening resin protects the fossilized insect from scavengers and decomposition.
Desiccation is another type of true-form fossil. It is also called mummification. Some animals crawled into caves in the southwest deserts of North America during the ice age and died. Their bodies were dried by the desert air and were preserved perfectly for thousands of years. Mummified remains are so well preserved that hair color and clothing are still visible, but these fossils often fall apart at the slightest touch.
Freezing is the fossilization process that yields some of the best preservation: the organism's soft tissues remain entirely intact. The circumstance that leads to a frozen fossil is often the sudden entrapment of an animal in a freezing location. This was not uncommon for large mammals in Siberia and Alaska during the late ice age, particularly woolly mammoths.
List of Figures Introduction: The Dimensions of Politics and an Approach to its Study What Politics Is... A Comparative Approach Chapter 1: Co-operation, Coercion, and Consent-Opening Ideas Our Social Nature Authority and Leadership Civil Society Beyond the State Conclusion References Further Reading Chapter 2: The Many Ways of Studying Politics Politics as Philosophy Politics as Social Science Units of Analysis: Individual, Group, or Class? Politics as Anthropology Back to Bismarck: Politics and the Study of Politics References Further Reading Chapter 3: From The Republic to the Liberal Republic: History and Ideas Classical Antiquity Feudal Society (the Medieval Era) The Reformation The Enlightenment The Market Economy Synergy Political Revolution Liberal Government Philosophical Works References Further Reading Chapter 4: The Fall and Reluctant Rise of Democracy Democracy Defined Distrust of Democracy From Liberal Government to Liberal Democracy Assessing Representative Democracy Democracy's Prospects Consolidating Democracy References Further Reading Chapter 5: Roadmap to the Rest (A Comparative Framework) Who's In and Who's Out Functions, Institutions, Systems The Judiciary as a Branch of Government Bicameralism-Do Two Houses Make a Home? Degrees of Federalism Electoral and Party Systems Type of Government Recapping Appendix: Comparative Data Set Further Reading Chapter 6: Systems of Government: Parliamentary Options Components of Parliamentary Government Majority, Minority, or Coalition The Government-Formation Process The Conclusion of a Government and Its Implications A Closer Look at Coalition Government Cabinets: Size and Structure Executive Dominance References Further Reading Chapter 7: Systems of Government (2): Degrees of Presidentialism Madisonian Presidentialism Coalitional Presidentialism in Latin America Semi-Presidentialism An Exceptional Case: Switzerland Conclusion References Further Reading Chapter 8: Dividing the State: Federalism and Other Options Unitary, Federal, Confederal Components of Federalism Asymmetrical Federalism Bicameralism Amending Formula Non-federal Options References Further Reading Chapter 9: Who Wants What? The Political Process Social Cleavages Ideology: The Role of Ideas References Further Reading Chapter 10: Who Gets In? The Machinery of Democratic Elections What Is an Electoral System? Criteria for Evaluating Electoral Systems Electoral Systems Considered Electoral Administration References Further Readings Appendix: Electoral System Data Chapter 11: Who is Heard? Varieties of Representation Political Parties Party Systems Election Campaigns Organized Interests Corporatism Social Movements Conclusion References Further Readings Chapter 12: The Official Response: Public Policy and Administration Two Theories about Public Policy Who Does What: Examining the Policy Cycle Types of Policy The Bureaucracy References Further Reading Chapter 13: The Rule of Law in Practice: The Justice System The Nature of Law The Legislative Process Direct Democracy Private Law Administrative Law Court Systems Rights Judicial Review Automatic Justice? References Further Reading Chapter 14: Governing in an Age of Decline?
Social and Economic Policy The Nature of Capitalist Market Society Classic Liberalism and Laissez-faire Models versus Reality The Reform of Market Capitalism The Welfare State The Age of Deficit and Debt Going Forward References Further Reading Appendix: Economic Statistics Glossary Index
With timely examples and data, and exemplary conceptual illustrations, the fourth edition offers a solid foundation in the discipline, making it a 'keeper' for beginning scholars. -- Carol Dauda, University of Guelph I strongly recommend [this book] to anyone who wants an engaging theoretical introduction to the study of politics. -- Jonathan Rose, Queen's University The fourth edition of Politics maintains the unique theoretical scope and depth of previous editions, while presenting the material in a cleaner and more accessible manner. An excellent introduction to the discipline of political science. -- James Farney, University of Regina
Larry Johnston is the author of Ideologies: An Analytic and Conceptual Approach (1996) and Between Transcendence and Nihilism (1995). A legislative researcher in Toronto since 1998, he was an academic consultant to the Ontario Citizens' Assembly on Electoral Reform in 2006. He has taught a variety of politics courses at the University of Toronto, McMaster University, and Ryerson University. |
Satellite communications systems transmit signals in the gigahertz range – billions of cycles per second. The satellite must be placed in a geosynchronous orbit, about 22,300 miles above the earth's surface, so that it revolves once a day along with the earth; to an observer, it appears to be fixed over one region at all times. A satellite is a solar-powered electronic device that has up to 100 transponders (a transponder is a small, specialized radio) that receive, amplify, and retransmit signals; the satellite acts as a relay station between satellite transmission stations on the ground (called earth stations).
Although establishing satellite systems is costly (owing to the cost of a satellite and the problems associated with getting it into orbit above the earth's surface and compensating for failures), satellite communications systems have become the most popular and cost-effective method for moving large quantities of data over long distances. The primary advantage of satellite communications is the vast area that can be covered by a single satellite. Companies must lease satellite communications time from suppliers such as Intelsat, Comsat, Inmarsat, Eutelsat, and Telstar (AT&T). Large companies that have offices around the world benefit the most from satellite communications.
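As a quick sanity check on the altitude quoted above, Kepler's third law gives the radius of the orbit whose period matches one Earth rotation; this sketch only illustrates the arithmetic, using standard physical constants:

```python
import math

GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1     # seconds for one full Earth rotation
EARTH_RADIUS_KM = 6378.0

# Kepler's third law: orbital radius r = (GM * T^2 / (4 * pi^2))^(1/3)
r_m = (GM_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
altitude_km = r_m / 1000.0 - EARTH_RADIUS_KM
print(f"{altitude_km:.0f} km (~{altitude_km / 1.609:.0f} miles)")  # ~35786 km, ~22240 miles
```

The result agrees with the roughly 22,300-mile figure commonly quoted for geosynchronous orbit.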
- Pirates! Fact and Legend
- How Stuff Works - People - How Pirates Work
- Geographia - The Islands of the Bahamas
- The Pirates of Whydah Information on the first pirate ship ever discovered in North America. Features brief sketches of crew, an article outlining the discovery and findings, and other interesting trivia.
Britannica Web Sites
Articles from Britannica encyclopedias for elementary and high school students.
- pirate - Children's Encyclopedia (Ages 8-11)
Pirates are criminals who attack ships at sea. The most famous pirates sailed the seas from the late 1500s to the early 1800s. A common symbol of piracy was the Jolly Roger: a black flag with a white skull and crossbones.
- pirates and piracy - Student Encyclopedia (Ages 11 and up)
Sea robbers, or men who attack and rob ships at sea, are called pirates. Many of the romantic stories that have been written about them are imaginative pieces of fiction. Nevertheless, their actual adventures often changed the course of history. |
The frontal lobe is one of four paired lobes in the brain's cerebral cortex, and it plays vital roles in memory, attention, motivation, and numerous other daily tasks.
The frontal lobe, like the other lobes of the cerebral cortex, is actually made up of two paired lobes. Together, these comprise two-thirds of the human brain.
What is the frontal lobe?
The frontal lobe is located near the forehead and plays a vital role in motivation and memory.
The frontal lobe is part of the brain's cerebral cortex. Individually, the paired lobes are known as the left and right frontal cortex.
As the name implies, the frontal lobe is located near the front of the head, under the frontal skull bones and near the forehead. It was the last region of the brain to evolve, making it a relatively new addition to the structure.
All mammals have a frontal lobe, though the size and complexity vary between species. Most research suggests that primates have larger frontal lobes than many other mammals.
The two sides of the brain largely control operations on the opposite sides of the body. The frontal lobe is no exception.
So, the left frontal lobe affects muscles on the right side of the body. Similarly, the right frontal lobe controls muscles on the left side of the body. This can determine how the body is affected by a brain injury.
The brain is a complex organ, with billions of cells called neurons working together. Much of what these neurons do and how they work is not fully understood.
The frontal lobe works alongside other brain regions to control how the brain functions overall. Memory formation, for example, depends on sensory input, which depends on numerous areas of the brain. As such, it is a mistake to attribute any one role of the brain to a single region.
What is more, the brain may "rewire" itself to compensate for an injury. This does not mean the frontal lobe can recover from all injuries, but that other brain regions may change in response to an injury to the frontal lobe.
Functions of the frontal lobe
The frontal lobe plays a key role in future planning, including self-management and decision-making.
People with frontal lobe damage often struggle with gathering information, remembering previous experiences, and making decisions based on this input.
Some of the many other functions the frontal lobe plays in daily functions include:
- Speech and language production: Broca's area, a region in the frontal lobe, helps put thoughts into words. Damage to this area can undermine the ability to speak, to understand language, or to produce speech that makes sense.
- Some motor skills: The frontal lobe houses the primary motor cortex, which helps coordinate voluntary movements, including walking and running.
- Comparing objects: The frontal lobe helps categorize and classify objects, in addition to distinguishing one item from another.
- Forming memories: Virtually every brain region plays a role in memory, so the frontal lobe is not unique. However, research suggests it plays a key role in forming long-term memories.
- Understanding and reacting to the feelings of others: The frontal lobe is vital for empathy.
- Forming personality: The complex interplay of impulse control, memory, and other tasks helps form a person's key characteristics. Damage to the frontal lobe can radically alter personality.
- Reward-seeking behavior and motivation: Most of the brain's dopamine-sensitive neurons are in the frontal lobe. Dopamine is a brain chemical that helps support feelings of reward and motivation.
- Managing attention, including selective attention: When the frontal lobe cannot properly manage attention, conditions such as attention deficit hyperactivity disorder (ADHD) may develop.
Effects of damage to the frontal lobe
Damage to the frontal lobe may result in symptoms such as poor coordination.
One of the most infamous frontal lobe injuries happened to railroad worker Phineas Gage.
An iron tamping rod impaled a portion of Gage's frontal lobe. Though Gage survived, he lost his eye and much of his personality.
Gage's personality dramatically changed, and the once mild-mannered worker struggled to stick to even simple plans. He became aggressive in speech and demeanor and had little impulse control.
Much of what we know about the frontal lobe comes from case reports on Gage, though these have since been called into question. Little is known for sure about Gage's personality before his accident, and many stories about him may be exaggerated or false.
The case demonstrates a larger point about the brain, which is that our understanding of it is constantly evolving. Hence, it is not possible to accurately predict the outcome of any given frontal lobe injury, and similar injuries may develop quite differently in each person.
In general, however, damage to the frontal lobe due to a blow to the head, a stroke, growths, and diseases, can cause the following symptoms:
- speech problems
- changes in personality
- poor coordination
- difficulties with impulse control
- trouble planning or sticking to a schedule
Treatment for damage to the frontal lobe
Treatment for frontal lobe injuries focuses on addressing the cause of the injury first. A doctor might prescribe medication to treat an infection, surgery to remove a growth, or medication to reduce the risk of a stroke.
Depending on the cause of the injury, lifestyle remedies may help as well. For example, frontal lobe damage after a stroke may call for a more healthful diet and more exercise to reduce the risk of a future stroke.
After the initial cause of the injury is addressed, treatment focuses on helping a person regain as much functioning as possible.
The brain can sometimes learn to work around an injury as other regions compensate for damage to the frontal lobe. Occupational, speech, and physical therapy can move this process along. These treatments can prove especially helpful in the early stages of recovery, as the brain begins to heal.
Frontal lobe damage can affect personality, emotion, and behavior. Individual, couple, and family counseling may help with the management of these changes.
Medications that address impulse control issues can also be useful, particularly for people who struggle with attention and motivation.
Treatment for frontal lobe damage is often varied, requiring ongoing care and continual re-evaluation of the treatment strategy. It may include speech and occupational therapists, doctors, psychotherapists, neurologists, imaging specialists, and other professionals.
Recovering from a frontal lobe injury is often a long process. Progress can come suddenly or infrequently and is impossible to fully predict. Recovery is closely tied to supportive care, regular cognitive challenges, and a lifestyle that supports good health. |
The Vela 6 nuclear test detection satellites were part of a program run jointly by the Advanced Research Projects Agency of the U.S. Department of Defense and the U.S. Atomic Energy Commission, managed by the U.S. Air Force. The twin spacecraft, Vela 6A and 6B, were launched on 8 April
1970 and placed ~180 degrees apart in nearly circular orbits at a geocentric
distance of ~118,000 km. The orbital period was ~112 hours. Each satellite
rotated about its spin axis with a ~64-sec period. The satellites carried both
X-ray and gamma-ray detectors which could be used for cosmic observations. The
X-ray detector was located ~90 degrees from the spin axis, and so covered the
entire celestial sphere twice per satellite orbit.
The scintillation X-ray detector aboard Vela 6A & 6B consisted of two 1-mm-
thick NaI(Tl) crystals mounted on photomultiplier tubes and covered by a 5-mil-
thick beryllium window. Electronic thresholds provided two energy channels,
3-12 keV and 6-12 keV. In front of each crystal was a slat collimator
providing a FWHM aperture of ~6.1x6.1 degrees. The effective detector area was
~26 cm2. Data were telemetered in 1-sec count accumulations.
Sensitivity to celestial sources was limited by a high intrinsic detector
background. The X-ray detectors failed on Vela 6A on 12 March 1972 and on Vela
6B on 27 January 1972.
The Vela 6 satellites also each carried 6 gamma-ray detectors with a total
volume of 60 cm3, covering the energy range 300-1500 keV. The
gamma-ray detectors continued to provide data until mid-1979. In fact, they
were still working well when tracking conflicts essentially shut the satellites down.
Data from the Vela 6 satellites was used to look for correlations between
gamma-ray bursts and X-ray events. At least 2 good candidates were found,
GB720514 and GB740723 (Terrell et al. 1982). The four Vela satellites (5A & B,
6A & B) recorded 73 gamma-ray bursts in the ten-year interval July 1969 - July 1979.
Double fertilization is a complex fertilization mechanism of flowering plants (angiosperms). This process involves the joining of a female gametophyte (megagametophyte, also called the embryo sac) with two male gametes (sperm). It begins when a pollen grain adheres to the stigma of the carpel, the female reproductive structure of a flower. The pollen grain then takes in moisture and begins to germinate, forming a pollen tube that extends down toward the ovary through the style. The tip of the pollen tube then enters the ovary and penetrates through the micropyle opening in the ovule. The pollen tube proceeds to release the two sperm in the megagametophyte.
One sperm fertilizes the egg cell and the other sperm combines with the two polar nuclei of the large central cell of the megagametophyte. The haploid sperm and haploid egg combine to form a diploid zygote, while the other sperm and the two haploid polar nuclei of the large central cell of the megagametophyte form a triploid nucleus (triple fusion). Some plants may form polyploid nuclei. The large cell of the gametophyte will then develop into the endosperm, a nutrient-rich tissue which provides nourishment to the developing embryo. The ovary, surrounding the ovules, develops into the fruit, which protects the seeds and may function to disperse them.
The two central cell maternal nuclei (polar nuclei) that contribute to the endosperm arise by mitosis from the same single meiotic product that gave rise to the egg. The maternal contribution to the genetic constitution of the triploid endosperm is double that of the embryo.
In a recent study of the plant Arabidopsis thaliana, the migration of male nuclei inside the female gamete, and their fusion with the female nuclei, has been documented for the first time using in vivo imaging. Some of the genes involved in the migration and fusion process have also been determined.
Double fertilization was discovered more than a century ago by Sergei Nawaschin in Kiev, Russian Empire, and Léon Guignard in France. Each made the discovery independently of the other. Lilium martagon and Fritillaria tenella were used in the first observations of double fertilization, which were made using the classical light microscope. Due to the limitations of the light microscope, there were many unanswered questions regarding the process of double fertilization. However, with the development of the electron microscope, many of the questions were answered. Most notably, the observations made by the group of W. Jensen showed that the male gametes did not have any cell walls and that the plasma membrane of the gametes is close to the plasma membrane of the cell that surrounds them inside the pollen grain.
In vitro double fertilization
In vitro double fertilization is often used to study the molecular interactions as well as other aspects of gamete fusion in flowering plants. One of the major obstacles in developing an in vitro double fertilization between male and female gametes is the confinement of the sperm in the pollen tube and the egg in the embryo sac. A controlled fusion of the egg and sperm has already been achieved with poppy plants. Pollen germination, pollen tube entry, and double fertilization processes have all been observed to proceed normally. In fact, this technique has already been used to obtain seeds in various flowering plants and was named “test-tube fertilization”.
The female gametophyte, the megagametophyte, that participates in double fertilization in angiosperms is sometimes called the embryo sac. This develops within an ovule, enclosed by the ovary at the base of a carpel. Surrounding the megagametophyte are (one or) two integuments, which form an opening called the micropyle. The megagametophyte, which is usually haploid, originates from the (usually diploid) megaspore mother cell, also called the megasporocyte. The next sequence of events varies, depending on the particular species, but in most species, the following events occur. The megasporocyte undergoes a meiotic cell division, producing four haploid megaspores. Only one of the four resulting megaspores survives. This megaspore undergoes three rounds of mitotic division, resulting in seven cells with eight haploid nuclei (the central cell has two nuclei, called the polar nuclei). The lower end of the embryo sac consists of the haploid egg cell positioned in the middle of two other haploid cells, called synergids. The synergids function in the attraction and guidance of the pollen tube to the megagametophyte through the micropyle. At the upper end of the megagametophyte are three antipodal cells.
The male gametophytes, or microgametophytes, that participate in double fertilization are contained within pollen grains. They develop within the microsporangia, or pollen sacs, of the anthers on the stamens. Each microsporangium contains diploid microspore mother cells, or microsporocytes. Each microsporocyte undergoes meiosis, forming four haploid microspores, each of which can eventually develop into a pollen grain. A microspore undergoes mitosis and cytokinesis in order to produce two separate cells, the generative cell and the tube cell. These two cells in addition to the spore wall make up an immature pollen grain. As the male gametophyte matures, the generative cell passes into the tube cell, and the generative cell undergoes mitosis, producing two sperm cells. Once the pollen grain has matured, the anthers break open, releasing the pollen. The pollen is carried to the pistil of another flower, by wind or animal pollinators, and deposited on the stigma. As the pollen grain germinates, the tube cell produces the pollen tube, which elongates and extends down the long style of the carpel and into the ovary, where its sperm cells are released in the megagametophyte. Double fertilization proceeds from here.
- Berger, F. (January 2008). "Double-fertilization, from myths to reality". Sexual Plant Reproduction 21 (1): 3–5. doi:10.1007/s00497-007-0066-4.
- Berger, F. & Hamamura, Y. & Ingouff, M. & Higashiyama, T. (August 2008). "Double fertilization – Caught In The Act". Trends in Plant Science 13 (8): 437–443. doi:10.1016/j.tplants.2008.05.011. PMID 18650119.
- V. Raghavan (September 2003). "Some reflections on double fertilization, from its discovery to the present". New Phytologist 159 (3): 565–583. doi:10.1046/j.1469-8137.2003.00846.x. Retrieved 2013-08-27.
- Kordium EL (2008). "[Double fertilization in flowering plants: 1898-2008]". Tsitol. Genet. (in Russian) 42 (3): 12–26. PMID 18822860.
- Jensen, W. A. (February 1998). "Double Fertilization: A Personal View". Sexual Plant Reproduction 11 (1): 1–5. doi:10.1007/s004970050113.
- Dumas, C., & Rogowsky, P. (August 2008). "Fertilization and Early Seed Formation". Comptes Rendus Biologies 331 (10): 715–725. doi:10.1016/j.crvi.2008.07.013. PMID 18926485.
- Zenkteler, M. (1990). "In vitro fertilization and wide hybridization in higher plants". Crit Rev Plant Sci 9 (3): 267–279. doi:10.1080/07352689009382290.
- Raghavan, V. (2005). Double fertilization: embryo and endosperm development in flowering plants (illustrated ed.). Birkhäuser. pp. 17–19. ISBN 3-540-27791-9.
- Campbell N.A & Reece J.B (2005). Biology (7 ed.). San Francisco, CA: Pearson Education, Inc. pp. 774–777. ISBN 0-8053-7171-0. |
Scientific Notation: Multiplying Decimal Numbers by Powers of Ten
In this scientific notation worksheet, students solve 10 problems in which mixed decimal numbers to the hundredths place are multiplied by powers of ten. There are no examples provided.
5th - 6th Math
K-5 Mathematics Module: Number and Number Sense
Reinforce number sense with a collection of math lessons for kindergarteners through fifth graders. Young mathematicians take part in hands-on activities, learning games, and complete skills-based worksheets to enhance proficiency in...
K - 5th Math CCSS: Adaptable
Mayan Mathematics and Architecture
Take young scholars on a trip through history with this unit on the mathematics and architecture of the Mayan civilization. Starting with an introduction to their base-twenty number system and the symbols they used, this eight-lesson unit...
4th - 8th Math CCSS: Adaptable
Collecting and Working with Data
Add to your collection of math resources with this extensive series of data analysis worksheets. Whether you're teaching how to use frequency tables and tally charts to collect and organize data, or introducing young mathematicians to pie...
3rd - 6th Math CCSS: Adaptable |
In cell biology, mitochondria are small organelles found inside most cells; there can be anywhere from several hundred up to 3000 inside each cell. Mitochondria are sometimes described as "cellular power plants" because one of their prominent roles is to generate most of the cell's supply of adenosine triphosphate (ATP), the chemical energy our bodies rely upon to survive. The central set of complex reactions involved in ATP production is collectively known as the citric acid cycle, or Krebs cycle. In this cycle, glucose, amino acids and fatty acids obtained through our diet pass through a chain of redox reactions that produce ATP. The process depends on the presence of oxygen. When oxygen is limited, the glycolytic products are metabolized by anaerobic respiration, a process that is independent of the mitochondria and far less productive, yielding only a small fraction of the ATP produced by aerobic metabolism. The entire process is controlled by many factors, such as electron transport, hormones, trophic factors, cytokines, neurotransmitters, cofactors and coenzymes.
What Causes Mitochondrial Dysfunction
Mitochondria can be poisoned by numerous substances, including environmental toxins, heavy metals, excess iron (haemochromatosis), pesticides, chronic bacterial, viral and fungal infections, and neurotoxins. These agents can induce excess production of reactive oxygen species such as superoxide, hydroxyl radicals and peroxynitrite, which oxidize and thus damage the mitochondria, reducing their ability to produce energy. This oxidation is a common factor in a variety of diseases such as CFS, fibromyalgia, schizophrenia, bipolar disorder, dementia, Alzheimer's disease, Parkinson's disease, epilepsy, stroke, cardiovascular disease, retinitis pigmentosa, and diabetes mellitus.
If there is a problem with the controlling mechanisms that regulate mitochondrial function, the whole process will suffer as a consequence. Certain hormones play a major role in regulating mitochondrial function. For example, the thyroid hormone T3 has a profound effect, both indirectly through the activity of the nuclear receptor family on gene transcription, and directly through its impact on mitochondrial enzyme function. Growth hormone deficiencies will also reduce mitochondrial function, as will adrenal hormone deficiencies or imbalances. It is therefore very important that these hormone levels are balanced and adequate in order to regulate and maintain healthy mitochondrial function.
Nutritional deficiencies can also be responsible for a breakdown in mitochondrial function, as the substrates and/or cofactors required for the citric acid cycle may not be present in sufficient quantities for the process to occur normally. High-GI diets may also cause insulin resistance, which in turn causes metabolic dysfunction. The intake of antioxidants is essential in order to prevent oxidative damage to the mitochondria caused by reactive oxygen species.
Finally, inflammation is another culprit. It may be the result of food allergies/intolerances, leaky gut, chronic disease, etc.
Mitochondrial dysfunction can be tested for by doing a urinary organic acids test which can be arranged by us during a consultation.
The first step in the treatment involves removing the offending agent, such as an infection or toxin. This could involve the use of medications or nutritional agents to kill off any infection, whether bacterial, viral, parasitic or fungal. The elimination of various toxins, including heavy metals, through a detoxification process may also be required.
The second step is to rectify any hormone deficiencies or imbalances such as thyroid and/or adrenal hormones which help regulate the citric acid cycle. Detailed information on these hormones is available throughout this website in the relevant sections.
The third step involves dietary and lifestyle intervention in order to reduce oxidative stress. This means eliminating dietary antigens that can trigger oxidative damage. It also means changing the standard western diet, which is poor in nutrients, calorie-rich, high in glycemic load, antioxidant-deficient and refined, and therefore responsible for increasing oxidative damage; dietary factors that increase inflammation, such as a deficiency in essential fatty acids, a high glycemic load, a lack of fiber and excess saturated fat, also need to be addressed. In addition to diet, exposure to environmental toxins, pollution, smoke, stress, etc. also needs to be eliminated.
The fourth step involves supplementing the mitochondria with all the nutrients, substrates and cofactors required for the citric acid cycle to occur. Antioxidants are also required to prevent further oxidation of the mitochondria.
Our laboratory produces a comprehensive mitochondrial support powder supplement. This product supplies a daily dose of:
L-carnitine 250mg, lipoic acid 200mg, malic acid 1.2g, selenium 300mcg (as selenomethionine), magnesium 200mg (as citrate), 1g citrate (as magnesium), alpha keto glucarate 500mg, n-acetyl-cysteine 250mg, manganese 10mg (as gluconate), iron 10mg (as fumerate), fumerate 20mg, 50mg each of Vit B1, B2, B3,B5 and B6. This mitochondrial support powder is available through the members section of this website.
Other supplements we recommend are CoEnzyme Q10 150mg daily, glutathione 200mg daily and ribose 3g daily; these are all also available in the members section of this website. Nicotinamide riboside, a precursor to NAD+ production, which is essential for ATP production, is available as well. Finally, pyrroloquinoline quinone (PQQ), a natural coenzyme found in many plants, fruits and vegetables, has been shown to stimulate mitochondria production in addition to protecting them from damage.
All treatments mentioned in this article are available through our laboratory. Refer to our on-line pharmacy to order now or alternatively refer the ordering information page to view the various ordering methods available. |
GHz is the measure of the clock speed (think of a metronome ticking). The GHz figure just tells you how many times per second the clock on the CPU will tick. (Not exactly; there is an equation to figure it out, but just know that 3.0 and above is considered fast.) On each tick, the CPU runs tasks to process data. So, if you have a CPU with more GHz, the clock will tick faster and more tasks can be completed in a shorter amount of time, making your computer faster.
Obviously it's quite a bit more complicated than that, but this is just an analogy to help you understand how it all works.
Keeping all this in mind, the second CPU you mentioned will be faster than the first. |
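To make the tick arithmetic concrete, here is a small sketch; the instructions-per-cycle figure is purely an assumed, illustrative number, since it varies widely between CPU designs and workloads:

```python
clock_hz = 3.0e9  # a 3.0 GHz CPU "ticks" three billion times per second

# Each tick lasts the reciprocal of the clock rate
tick_seconds = 1.0 / clock_hz
print(f"one tick lasts {tick_seconds:.2e} s")  # ~3.33e-10 s

# Real throughput also depends on how much work finishes per tick (IPC);
# the value of 4 below is only an illustrative assumption.
instructions_per_cycle = 4
print(f"~{clock_hz * instructions_per_cycle:.1e} instructions per second")
```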
A ‘mysterious network’ of mud springs on the edge of the ‘market town’ of Wootton Bassett, near Swindon, Wiltshire, England, has yielded a remarkable surprise.1 A scientific investigation has concluded that ‘the phenomenon is unique to Britain and possibly the world’.
The mud springs
Hot, bubbling mud springs or volcanoes are found in New Zealand, Java and elsewhere, but these Wootton Bassett mud springs usually ooze slowly and are cold. However, in 1974 River Authority workmen were clearing the channel of a small stream in the area, known as Templar’s Firs, because it was obstructed by a mass of grey clay.2 When they began to dig away the clay, grey liquid mud gushed into the channel from beneath tree roots and for a short while spouted a third of a metre (one foot) into the air at a rate of about eight litres per second.
No one knows how long these mud springs have been there. According to the locals they have always been there, and cattle have fallen in and been lost! Consisting of three mounds each about 10 metres (almost 33 feet) long by five metres (16 feet) wide by one metre (about three feet) high, they normally look like huge ‘mud blisters’, with more or less liquid mud cores contained within living ‘skins’ created by the roots of rushes, sedges and other swampy vegetation, including shrubs and small trees.3 The workmen in 1974 had obviously cut into the end of one of these mounds, partly deflating it. Since then the two most active ‘blisters’ have largely been deflated and flattened by visitors probing them with sticks.4
In 1990 an ‘unofficial’ attempt was made to render the site ‘safe’.5 A contractor tipped many truckloads of quarry stone and rubble totalling at least 100 tonnes into the mud springs, only to see the heap sink out of sight within half an hour! Liquid mud spurted out of the ground and flowed for some 600 metres (about 2,000 feet) down the stream channel clogging it. Worried, the contractor brought in a tracked digger and found he could push the bucket down 6.7 metres (22 feet) into the spring without finding a bottom.
’Pristine fossils’ and evolutionary bias
So why all the ‘excitement’ over some mud springs? Not only is there no explanation of the way the springs ooze pale, cold, grey mud onto and over the ground surface, but the springs are also ‘pumping up’ fossils that are supposed to be 165 million years old, including newly discovered species.6 In the words of Dr Neville Hollingworth, paleontologist with the Natural Environment Research Council in Swindon, who has investigated the springs, ‘They are like a fossil conveyor belt bringing up finds from clay layers below and then washing them out in a nearby stream.’7
Over the years numerous fossils have been found in the adjacent stream, including the Jurassic ammonite Rhactorhynchia inconstans, characteristic of the so-called inconstans bed near the base of the Kimmeridge Clay, estimated as being only about 13 metres (almost 43 feet) below the surface at Templar’s Firs.8 Fossils retrieved from the mud springs and being cataloged at the British Geological Survey office in Keyworth, Nottinghamshire, include the remains of sea urchins, the teeth and bones of marine reptiles, and oysters ‘that once lived in the subtropical Jurassic seas that covered southern England.’9
Some of these supposedly 165 million year old ammonites are previously unrecorded species, says Dr Hollingworth, and the real surprise is that ‘many still had shimmering mother-of-pearl shells’.10 According to Dr Hollingworth these ‘pristine fossils’ are ‘the best preserved he has seen … . You just stand there [beside the mud springs] and up pops an ammonite. What makes the fossils so special is that they retain their original shells of aragonite [a mineral form of calcium carbonate] … The outsides also retain their iridescence …’11 And what is equally amazing is that, in the words of Dr Hollingworth, ‘There are the shells of bivalves which still have their original organic ligaments and yet they are millions of years old’!12
Perhaps what is more amazing is the evolutionary, millions-of-years mindset that blinds hard-nosed, rational scientists from seeing what should otherwise be obvious: such pristine ammonite fossils, still with shimmering mother-of-pearl iridescence on their shells, and bivalves still with their original organic ligaments, can't possibly be 165 million years old. Upon burial, organic materials are relentlessly attacked by bacteria, and even in seemingly sterile environments will automatically, of themselves, decompose to simpler substances in a very short time.13,14 Without the millions-of-years bias, these fossils would readily be recognized as victims of a comparatively recent event, for example, the global devastation of Noah's Flood only about 4,500 years ago.
Even with Dr Hollingworth’s identification of fossils from the Oxford Clay,15 which underlies the Kimmeridge Clay and Corallian Beds, scientists such as Roger Bristow of the British Geological Survey office in Exeter still don’t know what caused the mud springs.16 English Nature, the Government’s wildlife advisory body which also has responsibility for geological sites, has requested research be done.
The difficulties the scientists involved face include coming up with a driving mechanism, and unravelling why the mud particles do not settle out but remain in suspension.17 They suspect some kind of naturally-occurring chemical is being discharged from deep within the Kimmeridge and Oxford Clays, where some think the springs arise from a depth of between 30 and 40 metres (100 and 130 feet). So Ian Gale, a hydrogeologist at the Institute of Hydrology in Wallingford, Oxfordshire, is investigating the water chemistry.18 Clearly an artesian water source is involved.19 Alternatively, perhaps a feeder conduit cuts through the Oxford Clay, Corallian Beds and Kimmeridge Clay strata, rising from a depth of at least 100 metres (330 feet).20 The mud's temperature shows no sign of a thermal origin, but there are signs of bacteria in the mud, and also chlorine gas.21 But why mud instead of water? Does something agitate the underground water/clay interface so as to cause such fine mixing?22
Research may yet unravel these mysteries. But it will not remove the evolutionary bias that prevents scientists from seeing the obvious. The pristine fossils disgorged by these mud springs, still with either their original external iridescence or their original organic ligaments, can’t be 165 million years old! Both the fossils and the strata that entombed them must only be recent. They are best explained as testimony to the global watery cataclysm in Noah’s day about 4,500 years ago. |
We went to the science lab and used our sense of sound to figure out the objects in six different bags. The students passed the bags around, shook each one, and then drew their predictions in their science journals. At the end, the students shared their predictions and we revealed what was in each bag. One bag had cotton balls, so we discussed how some objects make very soft sounds. The objects we used were cotton balls, coins, pet mulch, Legos, sand, and buttons.
Saturday, October 30, 2010
Sunday, October 24, 2010
We learned about nocturnal and diurnal animals this week. We learned nocturnal animals are awake and feed at night and diurnal animals are awake and feed during the day. We sorted and predicted which animals we thought belonged in each category. Then we read Where Are the Night Animals? and checked our predictions. We illustrated examples of diurnal and nocturnal animals on paper plates. The students chose bats as their favorite nocturnal animal, so we made bats using our hand prints. We did a shared/interactive writing piece where we talked to our partners and shared out what we learned this week. Then I wrote down what the students told me. When we came to a word wall word, the students wrote those.
Thursday, October 21, 2010
Wednesday was National Bullying Awareness Day. We talked about bullying and charted what bullying looks and sounds like.
We talked about how there are two kinds of bullying. Bullying on the outside causes physical harm to someone. Bullying on the inside hurts your feelings. We talked about how sometimes inside bullying hurts more than outside bullying. We talked about what students should do if they are being bullied. First, tell the person, "Stop! I don't like it!" Second, if it keeps happening, report it to an adult. Bullying is someone hurting you over and over. We talked about the difference between tattling and reporting. Then students talked with their partners about who they could report a bully to. Then we decorated footprints.
We hung our footprints on the school mural in the front hallway.
We also wore blue to stomp out bullying! |
Multicultural Children's Literature in the Elementary Classroom
By Mei-Yu Lu
Reprinted by permission
"When I was a child the teacher read, 'Once upon a time, there were five Chinese brothers and they all looked exactly alike'...Cautiously the pairs of eyes stole a quick glance back. I, the child, looked down to the floor...The teacher turned the book our away: bilious yellow skin, slanted slit eyes. Not only were the brothers look-alikes, but so were all the other characters!... Quickly again all eyes flashed back at me...I sank into my seat." (Aoki, 1981, p. 382)
The vignette above reveals how a minority child felt growing up in a time when cultural and linguistic diversity was neither valued in American society nor adequately portrayed in children's literature, an important channel for transmitting societal values and beliefs. The situation, however, has undergone changes in the past twenty years. With the increasing number of linguistic and cultural minorities in the United States, the American society today looks very different than that of Aoki's childhood. These changes in demographic trends impact the education system. Not only do schools need to prepare all children to become competent citizens, but also to create an environment that fosters mutual understanding.
IMPORTANCE OF MULTICULTURAL CHILDREN'S LITERATURE
Jenkins and Austin (1987) suggest that cultural understanding can be reached in many ways, such as by making friends with people from different cultures and by traveling to other countries. They also emphasize the value of good literature, for it can reflect many aspects of a culture: its values, beliefs, ways of life, and patterns of thinking. A good book for children can transcend time, space, and language, and help readers to "learn about an individual or a group of people whose stories take place in a specific historical and physical setting" (p. 6). In addition, exposure to quality multicultural literature also helps children appreciate the idiosyncrasies of other ethnic groups, eliminate cultural ethnocentrism, and develop multiple perspectives. Dowd (1992) also argues that "...from reading, hearing, and using culturally diverse materials, young people learn that beneath surface differences of color, culture or ethnicity, all people experience universal feelings of love, sadness, self-worth, justice and kindness." (p. 220)
Finally, quality literature about a particular ethnic group benefits cultural and linguistic minority children as well. From reading multicultural books about their own culture, children have opportunities to see how others go through experiences similar to theirs, develop strategies to cope with issues in their lives, and identify with their inherited culture. It is, therefore, important that educators incorporate multicultural literature into the curriculum and make it part of children's everyday life. The following sections will provide guidelines and resources for selecting multicultural literature in the elementary classroom.
GUIDELINES FOR SELECTING MULTICULTURAL CHILDREN'S LITERATURE
The following guidelines for material selection were developed by adopting recommendations from various language arts and multicultural educators: Beilke (1986), Harada (1995), Harris (1991), and Pang, Colvin, Tran, & Barba (1992). They recommend that multicultural literature contain:
1. Positive portrayals of characters with authentic and realistic behaviors, to avoid stereotypes of a particular cultural group.
2. Authentic illustrations to enhance the quality of the text, since illustrations can have a strong impact on children.
3. Pluralistic themes to foster belief in cultural diversity as a national asset as well as reflect the changing nature of this country's population.
4. Contemporary as well as historical fiction that captures changing trends in the roles played by minority groups in America.
5. High literary quality, including strong plots and well-developed characterization.
6. Historical accuracy when appropriate.
7. Reflections of the cultural values of the characters.
8. Settings in the United States that help readers build an accurate conception of the culturally diverse nature of this country and the legacy of various minority groups.
The guidelines above are by no means an exhaustive list. They are meant to provide a starting point from which teachers can explore the many aspects of multicultural children's literature. In addition, teachers may wish to consult with colleagues, parents, and the local ethnic community, drawing upon their specialized knowledge and unique perspectives.
RESOURCES FOR MATERIAL SELECTION
In addition to the guidelines for material selection, it is also imperative that teachers have access to resources for selecting a collection of materials. A useful resource often contains critical reviews, bibliographic information, and an abstract of each work. It may also provide guidelines for using a particular book, and suggest materials for further reading on issues and trends in multicultural literature. Some of these resources are general, covering a variety of cultural groups, while others may focus on a specific category, such as African-Americans. Used appropriately, they can help teachers locate materials in a timely and cost-effective manner. In the following section are just a few resources which can aid the collection-building process.
Specialized Selection Sources
1. Barrera, R.B., Thompson, V.D., & Dressman, M. (Eds.). (1997). "Kaleidoscope: A multicultural book list for grade K-8" (2nd Ed.). Urbana, IL: National Council of Teachers of English.
2. Helbig, A. & Perkins, A. (1994). "The land is our land: A guide to multicultural literature for children and young adults." Westport, CT: Greenwood Press.
3. Miller-Lachmann, L. (1992). "Our Family, our Friends, our World: An annotated guide to significant multicultural books for children and teenagers." New Providence, NJ: R. R. Bowker.
4. Muse, D. (1997). "The new press guide to multicultural resources for young readers." New York, NY: New Press.
Journals
1. The ALAN Review
2. Book Links
3. Bulletin of the Center for Children's Books
4. Children's Literature in Education
5. Horn Book Guide to Children's and Young Adults' Books
6. Horn Book Magazine
7. Interracial Books for Children Bulletin
8. Kirkus Review
9. MultiCultural Review
10. School Library Journal
These sources can help teachers to develop their multicultural literature collection. In addition, human resources, such as librarians in local or school libraries and professors in the fields of education and library science, can be valuable in the collection-building process. Finally, materials from minority children's households, such as photo albums and books written in their inherited language, are also rich resources.
Aoki, E. M. (1981). "Are you Chinese? Are you Japanese? Or are you a mixed-up kid? Using Asian American children's literature." Reading Teacher, 34 (4), 382-385. [EJ 238 474]
Beilke, P. (1986) Selecting materials for and about Hispanic and East Asian children and young people. Hamden, CT: Library Professional Publications.
Dowd, F. S. (1992). "Evaluating children's books portraying Native American and Asian cultures." Childhood Education, 68 (4), 219-224. [EJ 450 537]
Harada, V. H. (1995). "Issues of ethnicity, authenticity, and quality in Asian-American picture books, 1983-93." Journal of Youth Services in Libraries, 8 (2), 135-149. [EJ 496 560]
Harris, V. J. (1991). "Multicultural curriculum: African American children's literature." Young Children, 46 (2), 37-44. [EJ 426 223]
Jenkins, E. C. & Austin, M. C. (1987). Literature for Children about Asian and Asian Americans. New York: Greenwood Press.
Pang, V. O., Colvin, C., Tran, M., & Barba, R.H. (1992). "Beyond chopsticks and dragons: Selecting Asian-American literature for children." The Reading Teacher, 46 (3), 216-224.
Mei-Yu Lu is a doctoral candidate in the Language Education Department at Indiana University-Bloomington. Her research interests are trends and issues in multicultural/international children's literature, critical literacy, and social semiotics. She was a reference librarian for the ERIC Clearinghouse on Reading, English, and Communication from 1995 until 2003. Reprinted with permission from the author.
Our discussion of pianos will comprise three parts: a brief history, construction of modern pianos, and piano sound production.
A brief history of the piano:
(borrowed liberally from your textbook and other sources)
Proto-pianos (or could they be proto-guitars?) were built and played by the Greeks. An instrument called a monochord was made by Pythagoras in about 582 B.C., consisting of a string stretched across a resonator box. It had a moveable bridge dividing the string into two variable-length sections. Using this, Pythagoras (could he be the one of Pythagorean equation fame?) discovered that sections of strings whose lengths were related by integer multiples (one length was 2, 3, 4, etc. times the other) made sounds that were pleasant when played simultaneously.
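A small sketch of why those integer ratios sound pleasant (the 100 Hz base frequency is an arbitrary illustrative value): the frequency of an ideal string is inversely proportional to its length, so string sections whose lengths are simple fractions of the whole vibrate at integer multiples of the whole string's pitch.

```python
base_freq_hz = 100.0  # arbitrary fundamental for the full string length

# Halving, thirding, or quartering the vibrating length multiplies the
# frequency by 2, 3, or 4, giving strongly consonant intervals.
for divisor, interval in [(2, "one octave up"),
                          (3, "an octave plus a fifth"),
                          (4, "two octaves up")]:
    print(f"1/{divisor} of the string -> {base_freq_hz * divisor:.0f} Hz ({interval})")
```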
Over time the number of strings on this type of instrument generally increased and the idea was to pluck multiple strings at the same time.
Keyboards, which eventually allowed one to push a key and pluck a string, were first developed to control pipe organs in around the second century B.C.
Ow, I think I broke my clavichord! About the 11th century the hurdy gurdy and then the clavichord were developed. These comprised multiple strings (more on the clavichord) and a keyboard. Pressing a key placed a tangent (a narrow bar, oriented perpendicular to the string, that acted both as a hammer and as one end of the vibrating string) against a string at a prescribed position along its length, causing the string to resonate with a certain length. A cloth listing was placed under the opposite end of the string to damp any vibrations on the unused end. This mechanism also quickly damped the vibrating end of the string when the key was released and the tangent dropped from the string. Johann Sebastian Bach was one famous composer who wrote music for the clavichord.
Clavichord or Harpsichord? The harpsichord, a keyboard instrument whose strings were plucked instead of struck, was developed at about the same time as the clavichord. When a harpsichord key is pressed, a plectrum plucks a string stretched between two bridges. A fairly simple mechanism (relative to the piano, anyway) provides for moving the plectrum sideways once a string is plucked and allowing it to drop to its original position for the next note. As long as the key remains pressed the string will continue to vibrate (and decay). When the key is released the plectrum slips by and a damper deadens the string.
It's all in the mechanism: In 1709 Florentine harpsichord maker Bartolommeo Cristofori invented a mechanism that replaced the harpsichord's plucker with a hammer that strikes a string to sound a note. This instrument, the pianoforte (that's gravicembalo col piano e forte to you!), was able to produce both loud and soft notes, unlike the harpsichord. Cristofori's mechanism was complex and involved many of the same features as today's piano mechanisms.
Cristofori's invention of a new mechanism opened up new tonal and dynamic possibilities for keyboard instruments. The stronger excitation force provided by striking strings with hammers meant that strings could be tensioned tighter and strung longer. Multiple strings could be used for each note: to increase the maximum achievable loudness, to provide for striking a reduced set of these strings for longer sustain of soft notes, and to set up tonal possibilities for driven resonance of passive strings in a set.
The modern piano comprises:
keyboard-- the 88 keys are made of wood and covered with ivory or plastic covers. Black keys corresponding to sharp and flat notes are higher, narrower and offset back from the white keys (natural notes).
action-- the mechanism based on Cristofori's that provides for hammers striking strings when keys are pushed. This is a critical bit so we'll come back to the action mechanism.
strings-- 230 of them! Notes corresponding to individual keys are produced by sets of two (lower registers) or three strings. Lower-octave strings are wirewound, meaning that they comprise a solid metal core with one (or even two) longer wires wrapped in spirals around the core. Wirewinding is a way to increase a string's mass per length (and thus decrease its frequencies of oscillation according to Mersenne's laws; see the sketch just after this list) while minimizing its stiffness. Stiffness can (and unfortunately still does) lead to inharmonicity, where the frequencies of upper harmonics are shifted from integer relationships to the fundamental (textbook example: the sixteenth overtone (17th harmonic) is a half-tone higher than predicted and will noticeably produce beats when played against other notes; c.f. page 341).
pin block-- mechanism for tensioning and tuning of individual strings. The pin block is attached to a....
cast-iron frame-- whose purpose is structural, the frame supports the enormous forces of 230 strings under tension (the book notes that this can approach 60,000 lbs).
soundboard-- constructed typically of clear spruce wood, its purpose is to vibrate sympathetically with the strings and help send this sound to the listener. Vibrations of the strings are coupled to the soundboard via the...
bridge-- a curved construction of wood and metal that serves as a displacement node for vibrating strings. All of this is placed in or on a...
case-- usually made of wood, it houses and supports the various parts of the piano.
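As promised under strings above, here is a minimal sketch of Mersenne's laws for an ideal string, f = (1/2L)·sqrt(T/μ). The length, tension, and mass values are hypothetical, chosen only to show the trend, not real piano specifications:

```python
import math

def fundamental_hz(length_m, tension_n, mu_kg_per_m):
    """Mersenne's laws for an ideal string: f = (1/2L) * sqrt(T/mu)."""
    return math.sqrt(tension_n / mu_kg_per_m) / (2 * length_m)

# Hypothetical numbers, for illustration only:
plain = fundamental_hz(length_m=1.2, tension_n=800.0, mu_kg_per_m=0.006)
wound = fundamental_hz(length_m=1.2, tension_n=800.0, mu_kg_per_m=0.024)  # 4x the mass per length

print(f"plain core: {plain:.1f} Hz")
print(f"wirewound:  {wound:.1f} Hz  (4x mass/length -> half the frequency)")
```

Quadrupling the mass per length halves the frequency at the same length and tension, which is exactly why bass strings are wound rather than made absurdly long.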
Cristofori's mechanism has evolved into the action of the piano, a piece of complicated-looking machinery involving levers, dampers and hammers for striking a string set. Although it only addresses the action of an upright piano, which is more complicated than that of a grand piano (the difference is the orientation of the strings, etc., which are vertical for an upright piano), this page does a nice job of explaining how piano actions work.
The important tasks to be accomplished by a piano action for a given note are:
- lift the damper felts from the strings so they can resonate when struck.
- strike the string with the hammer (!) with a velocity that depends on how quickly the key is pushed down.
- return the hammer immediately after the strike so as to not further interfere with string vibrations.
- provide for playing short, repeated notes without requiring the player to remove their finger from the key.
- return the damper felts to the string set upon key release.
Piano sound production:
Imagine that you were challenged with designing the different elements of a modern piano. We have already discussed one problem, that thinner, shorter strings used for high notes produce less sound, so we design our modern piano to have 3 strings for every higher note (top 68 keys) and 2 for the lower notes.
But we also want to design for a few other aspects:
- piano sound-- overtone content upon initial hammer strike
- inharmonicity-- piano string stiffness acting to provide an undesired return force in addition to that desired from string tension.
- piano aging-- what happens to piano sounds as the instrument ages, and what do we want to happen?
- piano sound-- overtone content upon sustain of a chosen note, or how does the overtone content change over time?
Overtone content: the first design problem can be addressed empirically. We can devise an arrangement with a hammer that can be moved along the string and experiment until our ear hears what we want. We could also do this for high note triplets of strings and low note doublets.
We would find that an optimal hammer strike point is between 1/7 and 1/8 the distance of the string from the peg end.
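One way to see why the strike point matters: in the standard idealized model of a struck string, the excitation of the nth mode scales like sin(nπx/L) for a strike at position x, so any harmonic with a node at the strike point is suppressed. A rough sketch under that assumption:

```python
import math

L = 1.0          # string length, arbitrary units
strike = L / 8   # hammer strike point, 1/8 of the way along the string

for n in range(1, 10):
    # Idealized struck string: mode n is excited in proportion to |sin(n*pi*x/L)|
    amplitude = abs(math.sin(n * math.pi * strike / L))
    print(f"harmonic {n}: relative excitation {amplitude:.2f}")
# Harmonic 8 comes out at 0.00 -- it has a node exactly at the strike point.
```

In this ideal model a strike at exactly 1/8 silences the 8th harmonic entirely; real hammers have width and compliance, so the suppression is partial.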
Inharmonicity: The second problem is substantial for pianos. The textbook states that the 16th overtone for a piano string can be "off" by as much as one half-tone. If one plays two notes an octave apart (e.g., C3 and C4), the inharmonicity of the second harmonic of the lower note will cause beating against the fundamental of the higher note. To "solve" this problem, octaves are stretched, meaning that a note an octave up is tuned slightly more than double the frequency of the octave below.
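A back-of-envelope version of the beating, using the common stiff-string formula f_n ≈ n·f₁·sqrt(1 + B·n²); the inharmonicity coefficient B below is a made-up illustrative value:

```python
import math

def partial_hz(n, f1, B):
    """nth partial of a stiff string: f_n = n * f1 * sqrt(1 + B * n^2)."""
    return n * f1 * math.sqrt(1.0 + B * n * n)

f_c3 = 130.81   # Hz, fundamental of C3
B = 0.0004      # hypothetical inharmonicity coefficient, for illustration

sharp_2nd = partial_hz(2, f_c3, B)   # actual 2nd partial of C3 (a bit sharp)
pure_c4 = 2 * f_c3                   # C4 tuned to an exact 2:1 octave

print(f"2nd partial of C3: {sharp_2nd:.2f} Hz")
print(f"pure-octave C4:    {pure_c4:.2f} Hz")
print(f"beat rate: {abs(sharp_2nd - pure_c4):.2f} beats/s")
# Tuning C4 slightly sharp (a "stretched" octave) lines it up with the
# 2nd partial of C3 and makes the beating disappear.
```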
Our ears are trained by hearing (well-constructed and tuned) pianos with their inharmonicities for upper harmonics. We expect this sound. Digital electric pianos that try to mimic the sound of standard pianos need to provide overtone content that anticipates some inharmonicity.
Aging: Piano hammers are covered with felt. The purpose of the felt is to spread out the impact of the hammer strike-- remember that our clavichord employed sharp tangents as "hammers," which served as displacement nodes after striking a string. We don't want this to happen for our piano.
One common problem with pianos is that the felts dry out and harden. This serves to emphasize higher harmonics relative to low harmonics, giving a piano a tinny 'honky tonk' sound.
Sustain: Well, sometimes we want it, right? Piano notes are characterized by a sharp attack with rapid early decay, followed by a long, slow decay over time. Term project mystery instrument number 2 was a piano. (Figure: the mystery piano's waveform and spectra.)
So how do we design our piano for attack and sustain? There are two fundamental ways of doing this: polarization and coupled oscillation (or sympathetic resonance).
Polarization happens because of slight imperfections in the piano mechanism in contact with played strings. Although the hammer is meant to impart only vertical motion in the string of a grand piano, eventually horizontal motion also occurs. Horizontal motion is less damped over time because its motion is less coupled to the piano bridge and, ergo, soundboard.
Coupled oscillations are another, more complicated story. In response to this phenomenon, piano tuners will actually tune different strings in a doublet or triplet (per note) to slightly different frequencies!
Strings that are part of a doublet or triplet are usually excited differently upon hammer strike. Although they will initially be in phase, their amplitudes may differ at first because of imperfections in the hammer head, etc. At some point in time one string may be almost stopped while another is still oscillating. These strings interact via the piano bridge to cause coupled oscillation, where the vibrating string drives the nearly stopped string back into oscillation. When this happens the two strings are often out of phase (one is going up while the other goes down) so that the bridge moves minimally. This means that less string vibration is coupled as sound to the soundboard, and the note can sustain for a long time.
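A toy model of that two-stage decay: treat the in-phase (strongly bridge-coupled) motion and the out-of-phase (weakly coupled) motion as two exponential decays and add them. All the numbers are invented for illustration:

```python
import math

# Two-stage decay toy model: the in-phase (strongly bridge-coupled) part of
# the sound dies quickly; the out-of-phase (weakly coupled) part lingers.
FAST, SLOW = 3.0, 0.3        # hypothetical decay rates, 1/s
PROMPT, AFTER = 0.9, 0.1     # hypothetical initial shares of the two parts

for t in (0, 1, 2, 4, 8):
    level = PROMPT * math.exp(-FAST * t) + AFTER * math.exp(-SLOW * t)
    print(f"t = {t} s: relative level {level:.3f}")
# Early on the prompt sound dominates; after a few seconds only the slowly
# decaying "aftersound" is left, which is what we hear as long sustain.
```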
That's the simple explanation, but the reality is a bit more complicated (of course!). Depending upon how it couples to the bridge, one can view the end of a string as "springy," "massive" or "resistive." A springy end is like tacking the end of the string to a mass on a spring (see textbook, Figure 13-5). The motion of the string and support are in phase and, more importantly, the wavelength of the fundamental sound on the string is just a bit longer than twice its length. The net effect makes for a slightly lower-pitched sound than what we might normally expect.
For a massive support, the motion of the string and support are out of phase, and the fundamental wavelength is a bit shorter than twice the string's length.
Resistive loading of the string provides for a fundamental wavelength that is precisely 2 times the length of the string. The frequency is then what we expect, and neither lower nor higher.
Here's the key bit. The bridge of a piano acts either springy, or massive, or resistive, depending on the phase of the string motion relative to the bridge motion (are they going up and down together, or something different?). Thus piano tuners must accommodate this by tuning each member of a doublet or triplet slightly differently.
Let's monitor our old piano to see if we can measure this difference.
What is a dental filling?
A tooth consists of enamel, dentine, pulp and cementum. Enamel is the outermost layer, which forms the crown of a tooth. The tooth surface is covered by a bacterial biofilm, known as plaque. The bacteria metabolize sugar and release acid, which destroys the enamel. This process is called decay or dental caries. When oral hygiene is inadequate, dental decay progresses into the dentine layer and, at a later stage, into the pulp. A decayed tooth is weak and brittle; it becomes soft, forming what is called a cavity. A tooth damaged by decay needs to be filled to restore its normal function, shape and strength.
When do you need a filling?
Only a dentist can decide whether you need a filling or not. Initial stages of dental caries can be reversed by practicing good oral hygiene. In the early stages, when the caries affects only the enamel, the tooth is still intact and the affected area appears opaque on an x-ray, as judged by the dentist. The dentist will advise toothbrushing with fluoridated toothpaste at least twice a day to reverse the progression of caries. Reversed caries will appear harder and darker in colour.
If caries progresses into the deeper layers of the tooth, you will feel pain and sensitivity, which indicates that the caries has reached the dentine. When caries progresses into the deeper layers of the tooth and a cavity forms, you will need to have your tooth filled.
How is a dental filling done?
If the involved tooth is painful, the dentist will first administer local anaesthesia to numb it. Then the decayed tooth material is removed and the tooth is cleaned. The dentist then fills the cavity with an appropriate filling material. The materials used for fillings include amalgam (an alloy of mercury, silver, copper, tin and zinc), composite (tooth-coloured fillings), gold and porcelain. Fillings help prevent further decay by sealing the tooth against the ingress of bacteria.
Which is the most suitable type of dental filling?
There is no one type of filling that is best for everyone. Different individuals present with different types of complaints and damage. The suitable type of filling is determined by the extent of repair, whether you have allergies to certain materials, the location of the damaged tooth and the cost of the material used.
There are various types of dental filling materials available:
Many experts believe that gold is the best filling material due to its biocompatibility and strength. Gold fillings can last up to 20 years and are well tolerated by the gum tissues. One of the disadvantages of using this material is that the procedure requires multiple visits, because the filling is not placed directly on the tooth. First the dentist takes an impression of the tooth. The inlay is then prepared in the laboratory and finally cemented into place. The material is also very expensive, so many patients prefer a cheaper material like amalgam.
Amalgam is an alloy of mercury, silver, tin, copper and zinc. It is relatively cheap and stronger compared to other filling materials. Amalgam filling can last up to 10 years. However, due to its dark colour amalgam is used in the back teeth and not in visible areas such as the front teeth.
Composite resin filling material is not as strong as amalgam, but it is much preferred because it is matched to the colour of the natural teeth. It comes in different shades, which suit different individuals. The aesthetic value of composite makes it the material of choice for visible areas such as the front teeth. It is placed directly into the cavity. Composite may not be the ideal material for large fillings, as it may chip or wear over time. It is also susceptible to stains from substances such as coffee or tobacco. Composite lasts from 3 to 8 years.
Porcelain fillings are called inlays or onlays and are produced to order in a lab and then bonded to the tooth. They can be matched to the color of the tooth and resist staining. A porcelain restoration generally covers most of the tooth. Their cost is similar to gold.
If decay or a fracture has damaged a large portion of the tooth, a crown, or cap, may be recommended. Decay that has reached the nerve may be treated in two ways: through root canal therapy (in which the damaged nerve is removed) or through a procedure called pulp capping (which attempts to keep the nerve alive).
Folklife & Folk Art Education Resource Guide
Folklife Education: Teacher Background Information
Folklore (folklife) consists of the expressive traditions of everyday people in everyday life. These traditions are passed along through time and space either orally or by example within folk groups (groups of people who share similar values, goals, experiences, and interests). Everyone belongs to at least one folk group and often many more. Major folk groups include: age-based, ethnic, family, gender, occupational, regional, and religious. As people within a folk group interact with one another, the group's values are shared with new members, thus passing on the traditions and customs within the folk group.
Within folk groups, the people who create the objects and actions that represent group values are called tradition bearers or folk artists. A tradition bearer does literally what the name implies: "bears" (carries) the tradition through his or her actions, words, customs, and beliefs. Therefore, all people are tradition bearers of some of the folklore in their group. However, it is those who best understand and represent the group's worldview who are often called upon to represent or pass on the group's traditions. Think of your grandmother who always creates the pumpkin pies for Thanksgiving. Why? Because she is the most seasoned piemaker in the family. However, young people are also expert tradition bearers. A visit to the playground will evidence this as older students pass on the folklore of the playground: rules, games, chants, etc., to new or younger students.
Because folks are experts of their groups' folk expressions, a classroom teacher has the opportunity to include the traditions of his/her students, self, and community, thus highlighting Utah's diversity in class activities. As well, class instruction is elevated when skilled tradition bearers from the community are invited into the classroom to discuss and augment classroom instruction. The stimulation of having everyday folks (whom the students will most likely encounter again in the neighborhood) visit the classroom is very beneficial to both learner and visitor. Including community traditions and tradition bearers in the classroom strengthens bonds of community, responsibility, and respect. The opportunity for community members to share their expertise with children can also foster mentor relationships. Mentor relationships benefit not only the student and the mentor, but also the classroom teacher, who will often see a more engaged student in his/her classroom because of the added attention and example shown to the student. However, to make folklife education, including community education, work, the tradition bearer should be included or invited to visit and share his or her traditions/expertise with the class when an appropriate unit is being taught. For instance, a unit on the Great Depression era would be enhanced by having a student's grandparent visit the classroom to discuss his/her life during that time: games, chores, schoolwork, living conditions. This exposure would enhance the textbook learning by bringing the era into a "real life" experience for the students. Similarly, during a unit on fractions, a discussion with the students regarding the principles and skills of quilting could enhance the learning process for many students, modeling for the students how useful an understanding of fractions is and offering another way to comprehend the subject. In fact, folklife education can be integrated into most classroom disciplines. However, both the discipline and the folklore must be accurately discussed and understood in order for real learning to take place.
Folklore is expressed in a variety of exciting and meaningful ways. These expressions are grouped by folklorists (people who study folklore and folk groups) into four categories:
customary expressions-- things people do (like gestures, birthday parties, planting practices, and dances);
verbal expressions-- things people say, sing, or write (like jokes, legends, hopscotch rhymes, and nicknames);
handmade objects or folk objects-- things people make (like quilts, grave markers, paper airplanes, and special foods);
folk beliefs-- things people believe (like good luck charms and hiccup cures).
Folklorists: Cultural Investigators
Folklorists try to learn about people from their folklore: traditions. Just like folklorists, students can also learn about themselves, their classmates, and their community as they study folklore. Being a student folklorist is a lot like being a cultural investigator (detective). Student folklorists investigate folk groups, including their own and their classmates', in a sensitive, educative manner, in order to better understand their friends and themselves. Learning about yourself and others in the classroom setting is inclusive, for no one is left out. Each student is the docent or "expert" on his or her own traditions, and therefore, each student has the opportunity to excel at school. One boy in a folklife residency I conducted in Tremonton, Utah, became very engaged the day a cowboy poet and rawhide braider visited his class. His teacher mentioned to me that he rarely spoke because he has an acute learning disability. However, on the day of the rancher's visit, this young man, who was from a ranching background, felt "at home" with the materials being discussed and because of this he became actively involved in the learning experience, answering questions and sharing examples of his ranching experiences.
Here are a few reasons why you should use folklife education in your classroom.
Studying folklore is fun. Children like sharing their folklore & folk art with others. Students are experts of their own folklore; therefore, studying folklore provides all students, not just the highly motivated or academically successful, with an opportunity of being an expert.
Students acquire new perspectives about themselves, their culture, and others' culture when they study folklore. When students share their folkloric expressions with others they learn to interpret them and gain insights into their own values and the values of others.
Folklore and folk art circulate within all ethnic groups; therefore the use of folklore from diverse ethnic groups will introduce ethnic heritage to students in a positive and supportive way. With folklore instruction everyone is involved, no one is left out.
Through the study of folklore students often bridge the gap between their lives and the lives of those presented in history. By seeing the continuity of their folk group's expressions through time students begin to feel connected to the past.
Because everyone is a bearer of folklore, the instruction of folklore and folk art in the classroom connects teachers and students as they explore together examples of their own group's expressive culture.
(I modified the reasons to study folklore from folklorist and educator Elizabeth Radin Simons' Student Worlds, Student Words: Teaching Writing Through Folklore. Portsmouth, NH: Boynton/Cook Heinemann, 1990, pp. 20-5. ~Randy Williams) |
Demographers are social scientists who study the growth, density and trends of human populations. Their work goes beyond just determining the number of people in a certain region. They report detailed statistical information on various aspects of a population, such as death and birth rates, and identify the causes and consequences of population patterns. Demographers' services are needed in a range of settings, including federal and state government departments, social service agencies, market research organizations and advertising agencies.
Using the Skills
To thrive in the job, demographers need strong technical research skills. When studying the structure of a certain population, they must use statistical research programs, spatial analysis software and other advanced technologies. Excellent data collection, analysis and management skills are essential, because the work largely involves dealing with lots of demographic data. These professionals also have a duty to craft reports on their research findings and present them to clients or employers. So they must be competent communicators with good presentation skills.
Studying Human Populations
Demographers study several aspects of a population. When the U.S. Department of Health and Human Services wants to determine whether a certain community needs more hospitals, for instance, the demographers at the Office of Population Affairs may conduct studies to learn the community’s average death rate. To develop an effective campaign strategy, political parties rely on demographers to map the voting-age population in various states. When evaluating where to best market a new luxury car, automobile manufacturers call on demographers to conduct income surveys and determine the consumption potential of various market segments.
Forecasting Population Trends
Policymakers, lawmakers and planners need to make sound plans for the future. Demographers contribute to this process by making reliable population projections. For example, they can use findings from their studies to forecast the number of unauthorized immigrants who will be living in the U.S. in the next 30 years. The federal government can use such a projection to make appropriate amendments to existing immigration laws or develop effective policies. Demographers also explain population patterns, such as the causes of increasing or decreasing immigration levels.
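As a toy illustration of projection, here is the simplest possible model, constant exponential growth, in Python. Real demographic projections use cohort-component methods with separate birth, death, and migration assumptions; the inputs below are hypothetical:

```python
def project_population(current, annual_growth_rate, years):
    """Naive constant-rate projection: P(t) = P0 * (1 + r)^t."""
    return current * (1.0 + annual_growth_rate) ** years

# Hypothetical inputs, for illustration only:
projected = project_population(current=11_000_000, annual_growth_rate=0.02, years=30)
print(f"Projected population in 30 years: {projected:,.0f}")
```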
Becoming a Demographer
Although aspiring demographers can enter the profession through a bachelor’s degree in sociology, political science, economics or statistics, a master’s degree in demography is the preferred credential among many employers. Several universities offer certificate programs to individuals with at least a master’s degree in demography. Combining this certificate with a doctoral degree and securing membership in the Population Association of America can improve competence and open career advancement doors. For example, demographers with such credentials can be hired as policy analysis managers in government agencies. Colleges and universities also routinely hire demography professors to instruct the next generation of demographers.
Aortic stenosis is a narrowing of the aortic valve opening that blocks (obstructs) blood flow from the left ventricle to the aorta.
The most common cause in people younger than 70 is a birth defect that affects the valve.
In people over 70, the most common cause is thickening of the valve cusps (aortic sclerosis).
People may have chest tightness, feel short of breath, or faint.
Doctors usually base the diagnosis on a characteristic heart murmur heard through a stethoscope and on results of echocardiography.
People see their doctors regularly so their condition can be monitored, and people with symptoms may undergo replacement of the valve.
The aortic valve is in the opening between the left ventricle and the aorta. The aortic valve opens as the left ventricle contracts to pump blood into the aorta (see Overview of Heart Valve Disorders). If a disorder causes the valve flaps to become thick and stiff, the valve opening is narrowed (stenosis). Sometimes the stiffened valve also fails to close completely and aortic regurgitation develops.
In aortic stenosis, the muscular wall of the left ventricle usually becomes thicker as the ventricle works harder to pump blood through the narrowed valve opening into the aorta. The thickened heart muscle requires an increasing supply of blood from the coronary arteries, and sometimes, especially during exercise, the blood supply does not meet the needs of the heart muscle. The insufficient blood supply can cause chest tightness, fainting, and sometimes sudden death. The heart muscle may also begin to weaken, leading to heart failure. The abnormal aortic valve can rarely become infected by bacteria (infective endocarditis).
In North America and Western Europe, aortic stenosis is mainly a disease of older people—the result of scarring and calcium accumulation (calcification) in the valve cusps. In such cases, aortic stenosis becomes evident after age 60 but does not usually cause symptoms until age 70 or 80.
Aortic stenosis may also result from rheumatic fever contracted in childhood. Rheumatic fever is the most common cause in the developing world.
In people under 70, the most common cause is a birth defect, such as a valve with only two cusps instead of the usual three or a valve with an abnormal funnel shape. The narrowed aortic valve opening may not be a problem during infancy, but problems occur as a person grows. The valve opening remains the same size, but the heart grows and enlarges further as it tries to pump increasing amounts of blood through the small valve opening. Over the years, the opening of a defective valve often becomes stiff and narrow because calcium accumulates.
People who develop aortic stenosis as a result of a birth defect may not develop symptoms until adulthood.
Chest tightness (angina) may occur during exertion. The symptoms go away with several minutes of rest. People with heart failure develop fatigue and shortness of breath during exertion.
People who have severe aortic stenosis may faint during exertion because blood pressure may fall suddenly. Fainting usually occurs without any warning symptoms (such as dizziness or light-headedness).
Doctors usually base the diagnosis on a characteristic heart murmur heard through a stethoscope and on results of echocardiography. Echocardiography is the best procedure for assessing the severity of aortic stenosis (by measuring how small the valve opening is) and the function of the left ventricle.
For people who have aortic stenosis but do not have symptoms, doctors often do a stress test. People who experience angina, shortness of breath, or faintness during the stress test are at risk of complications and may need treatment.
If the stress test is abnormal or if the person develops symptoms, cardiac catheterization is usually necessary to determine whether the person also has coronary artery disease.
Adults who have aortic stenosis but no symptoms should see their doctor regularly and should avoid overly stressful exercise. Echocardiography is done periodically, at intervals determined by the severity of the stenosis, to monitor heart and valve function.
Before surgery, heart failure is treated with diuretics (see Table: Some Drugs Used to Treat Heart Failure). Treating angina is often difficult because nitroglycerin, which is used to treat angina in people who have coronary artery disease, can rarely cause dangerously low blood pressure and worsen the angina in people with aortic stenosis.
In people who have aortic stenosis that causes any symptoms (particularly shortness of breath on exertion, angina, or fainting), or if the left ventricle begins to fail, then the aortic valve is replaced. Surgical replacement of the abnormal valve is the best treatment for nearly everyone, and the prognosis after valve replacement is excellent.
Sometimes, in children and young adults who were born with a defective valve, the valve can be stretched open using a procedure called balloon valvotomy. In this procedure, a catheter with a balloon on the tip is threaded through a vein or artery into the heart (see Cardiac catheterization). Once across the valve, the balloon is inflated, separating the valve cusps.
Increasingly, frail older people who are at high risk for complications during surgery can have their valve replaced through a catheter threaded up the femoral artery in a procedure called transcatheter aortic valve replacement (TAVR). In some cases, where there is peripheral artery disease in the legs, this catheter-mounted valve can be inserted through a small incision in the left side of the chest (transapical approach) or even under the shoulder (axillary approach). TAVR results in better survival and quality of life than medical therapy or surgery for these people.
People with an artificial valve must take antibiotics before a surgical, dental, or medical procedure (see Table: Which Procedures Require Preventive Antibiotics*?) to reduce the risk of an infection on the valve (infective endocarditis).
So, what do cicada larvae look like? Technically they’re called nymphs, not larvae. When cicadas progress from one stage of development to another, they molt, rather than pupate. Each stage of development is called an instar. Most, if not all, cicadas go through five instars. The adult phase is the fifth instar.
First, here’s what their eggs look like:
When the eggs hatch, the cicadas don’t look like a grub or maggot as you might expect; instead they look like tiny termites or ants, with 6 legs and antennae. At this point they’re called first instar nymphs.
Here’s some first instar cicadas:
Here is a first and second instar cicada in the soil:
Here is a first, second, third and fourth instar:
If you are interested in participating in cicada nymph research, visit The Simon Lab Nymph Tracking Project page for more information. You must have had periodical cicadas on your property in the past 13 or 17 years to find the nymphs, not including the Brood II area, since those nymphs came out of the ground this year.
biology question #1551
jacqui, a 42-year-old female from Stockleigh Pomeroy, asks on August 21, 2003:
Explain recessive gene disease and what happens to the DNA to cause parents to be carriers.
Because every person's genetic makeup is a 50/50 combination of the genes that come from two parents, each of us has two copies of every gene. This goes for the genes that code for eye colour as well as the genes that result in a disease. With many kinds of gene pairs, there is often one kind that is dominant, and one that is recessive. That means that, of the two copies, it will always be the dominant one that is expressed, not the recessive one. The recessive copy is there, inside you, but it is not used by the body to make any proteins. In a genetic disease, this means that it lies dormant and can come out in the next generation. For eye colour it could mean that a mother with brown eyes can produce a blue-eyed child.
This happens because during reproduction, only one set of the two copies is passed onto the child from each parent. For any given gene, a mother or father can have two "recessive type" copies, or two "dominant type" copies, or one of each. They pass only one of these on to their offspring. You can take a few minutes to draw a little chart to see how the probabilities work out. If one parent is pure dominant, and the other is pure recessive, the child will get one dominant gene copy and one recessive gene copy, and the result will be that the recessive copy will not be expressed, but will be "hidden" in his or her genetic code and may come out in the next generation. Remember, when one of the two copies is recessive, only the dominant one gets expressed, or "used" to create either the eye colour proteins or the proteins that result in the genetic disease. Blue eyes are caused by a pair of recessive genes so they are more rare than brown eyes which are dominant.
If both parents have one copy each of dominant and recessive types, then there's a 25% chance the child will get two dominant, or two recessive copies of the gene. If the child gets two recessives, then the blue eyes, or genetic disease will be expressed.
If both parents have double recessive copies, the child will have a 100% chance of getting the blue eyes or the genetic disease. As you can see from these probabilities, most of the time the dominant gene is expressed, and for this reason traits such as genetic disease can lie dormant within the gene pool, only to reappear by sheer chance after several generations.
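If you'd rather not draw the chart by hand, this little Python sketch enumerates the same Punnett-square combinations for two carrier parents (each with one dominant and one recessive copy):

```python
from itertools import product

# Two carrier parents: B = dominant (brown eyes), b = recessive (blue eyes).
parent1, parent2 = "Bb", "Bb"

outcomes = ["".join(sorted(pair)) for pair in product(parent1, parent2)]
for genotype in sorted(set(outcomes)):
    share = outcomes.count(genotype) / len(outcomes)
    trait = "blue (recessive expressed)" if genotype == "bb" else "brown (dominant expressed)"
    print(f"{genotype}: {share:.0%} -> {trait}")
# Prints BB 25%, Bb 50%, bb 25% -- the recessive trait (or recessive
# genetic disease) shows up only in the 25% with two recessive copies.
```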
One might ask: why are these traits for genetic disease preserved at all? The answer is that sometimes they are linked to other very beneficial genes. For instance, the gene for cystic fibrosis, a genetic disease that causes the lungs to fill with mucus, is thought to be linked to a gene that confers immunity to certain diarrhoea-causing or other common widespread diseases. It is not uncommon in nature to find the "good" linked together with the "bad".
Prominent Schools of Thought
Topics of Instruction
Children's stories of the eighteenth century were primarily didactic in their purpose. On this page, we attempt to illustrate what schools of philosophy influenced the content of such literature, as well as the types of things children's literature tried to teach.
Prior to the seventeenth and eighteenth century, Britons did not think of childhood as a separate stage of development. Instead, they looked at children simply as small adults. Check out the painting on the left that demonstrates such an attitude.
During the seventeenth and eighteenth centuries, however, adults began to look at their children differently. New philosophies such as John Locke's theorized childhood as distinctly separate from adulthood, and such ideas proliferated. Take a look at another painting that illustrates such a perception. During the eighteenth century in particular, the English began to perceive children as imprintable individuals who could be taught morals and conduct. To support this new thinking, authors began to write literature for children with the intent of teaching them. The goal of children's literature was didactic.
The eighteenth century was greatly influenced by varying philosophies of the day. Three in particular stand out as influencing adult's perceptions of children.
In this model, children's personalities were controlled by forces beyond human control.
This model held that infant's minds were "blank slates" and that adults could imprint upon them whatever they wished.
Children as Inherently Good
This model stressed the inherent goodness of each child. According to such thinking, the child is best educated if allowed personal freedom of growth and thought, independent from the corrupting influence of evil adult institutions.
With more popular philosophies expounding the notion that children could be molded, children's literature became instructional. Three types of instruction were particularly prominent.
Especially in the earlier parts of the eighteenth century, children's literature was devoted to religious pursuit. This emphasis on religious instruction, however, gave credibility to authors later in the century who sought to also stimulate the imaginations of the young with stories of various genres.
Throughout the eighteenth century, the divisions between the classes became more and more blurred as the middle class began to encroach upon the aristocracy. The upper classes retaliated by enforcing class divisions wherever they could. Children's literature of the period assisted such attempts to enforce class differences.
During the eighteenth century, children's literature reflected and instilled many of the cultural norms concerning gender roles. In particular, the female character in literature evolved toward a more influential agent of social change.
In summary, we offer our own original children's story, the Adventures of Little Tom. We borrow from eighteenth century traditions, trying to incorporate elements we've discussed (as well as others). In addition, we've attempted to follow the basic form of a children's story written in the eighteenth century.
Flight in Birds
When a bird is gliding, it doesn't have to do any work. But it can't stay in the air forever! The wings are held out to the side of the body and do not flap. As the wings move through the air, they are held at a slight angle, which deflects the air gently downward. Pushing the air downward causes a reaction force in the opposite direction. You will notice a reaction force any time you push against anything! The reaction force is called lift. Lift is a force that acts roughly perpendicular to the wing surface and keeps the bird from falling.
(Figure: In gliding flight, a bird's wings deflect air downward, causing a lift force that holds the bird up in the air.)
There is also air resistance or drag on the body and wings of the bird. This force would eventually cause the bird to slow down, and then it wouldn't have enough speed to fly. To make up for this, the bird can lean forward a little and go into a shallow dive. That way, the lift force produced by the wings is angled forward slightly and helps the bird speed up. Really what the bird is doing here is giving up some height in exchange for increased speed. (To put it another way, it is converting its gravitational potential energy into kinetic energy.) The bird must always lose altitude, relative to the surrounding air, if it is to maintain the forward speed that it needs to keep flying.
(Figure: By leaning forward and going into a slight dive, the bird can maintain forward speed.)
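To put rough numbers on that height-for-speed trade, here is a small sketch using v = sqrt(2gh) for the speed gained from a drop, plus the standard result that glide ratio equals the wing's lift-to-drag ratio; the bird's lift/drag value below is hypothetical:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

# Trading height for speed (potential energy -> kinetic energy):
height_lost = 2.0  # metres, an arbitrary example
speed_gain = math.sqrt(2 * g * height_lost)
print(f"dropping {height_lost} m (starting from rest) yields {speed_gain:.1f} m/s")

# Glide ratio: metres travelled forward per metre of height lost equals
# the lift-to-drag ratio. The value below is a made-up illustration.
lift_over_drag = 10.0
print(f"lift/drag of {lift_over_drag:.0f} -> {lift_over_drag:.0f} m forward per metre of sink")
```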
Many groups of animals have evolved the capacity for gliding flight. There are lizards, fish, snakes, squirrels, and opossums that can glide. There is even a gliding primate-like mammal called the flying lemur. Some additional gliding creatures are known from the fossil record. But relatively few groups have actually crossed over into the realm of powered, flapping-wing flight. Some of the gliding groups have obvious anatomical limitations that would prevent further progress toward powered flight.
In our last blog, Should Teachers Give Rewards to Students for Good Behavior? A Psycho-Educational Perspective, we discussed the importance of linking rewards with behavioral goals to maximize the efficiency of our behavior management plan. Now, I want to elaborate on the technique of goal setting to regulate students’ motivation, but first, a brief description of the concept of goals:
The concept of goals is at the heart of most theories of motivation. Goals are internal (within the individual), as opposed to rewards that are externally regulated, and represent something that we want to accomplish; simply put, the goal is the result or outcome that we are trying to reach. We call this mental representation or goal our aim, purpose, or objective. The concept of goal is a motivational concept that influences behavior in several ways:
- Goals narrow our attention to goal-relevant activities and away from what we perceive is irrelevant to the goal.
- Goals guide our behavior and give us direction.
- Goals lead to effort and strengthen our persistence; that is, we are more inclined to work harder and to work through setbacks to reach our goal. In other words, goals direct and motivate our effort.
- A well-developed goal identifies strategies to deal with problems.
We may talk or dream about things we want in our lives, but we do not have a plan to reach them. The difference between just a dream and a goal lies in our plan. Dreams are visions and belong in our imagination; goals are plans that we outline, so that we have a map that we see and follow. Goal setting is more than just scribbling vague ideas on a piece of paper. An effective behavioral goal is like a road map, focused and detailed. In the classroom setting, a behavioral goal specifies what the habitually disruptive student is going to do, clearly indicating what acceptable performance is. In goal setting, we must write a goal that is clear (not vague), so that the child knows what to do, challenging, so that the child feels energized and motivated, and achievable, so that we give the student a genuine chance to succeed. We can set either a directional goal where we motivate the child to reach a particular conclusion (to think or believe in a particular way), or an accuracy goal, where we motivate the student to be more accurate or to develop proficiency. With habitually disruptive students, the psycho-educational teacher will be more effective if he or she intervenes first at the directional level, influencing the student’s belief system to reinforce a particular conclusion, followed by interventions at the accuracy level, so that, with the student, we continue to search for behavioral improvement. Some guidelines for setting goals follow:
- Set a main goal or long-term goal, but do not expect the habitually disruptive child to achieve the goal all at once, that will be too overwhelming to the child. Sub-divide the main goal into smaller and more easily reached goals or mini-goals (also known as short-term goals or proximal goals); succeeding at each mini-goal motivates the child to achieve the main goal. Celebrate and reward each time the child reaches a mini-goal.
- Combine easier goals with at least one hard goal. The easier goals build the habit of following through and you can reward the student quickly. The harder goal forces the student to grow.
- In order for the child to perseverate in reaching a behavioral goal (goal commitment), she must believe that the goal is important to her. Spend time connecting emotionally with the child (i.e., establishing rapport and creating an alliance with the child) and help the child see the meaning of the goal from her own perspective, not from the teacher's perspective. Help the child understand and articulate why she wants the goal. The stronger the child's motivation, the greater her effort and persistence will be.
- Self-set goals are more effective in influencing behavior than goals selected by someone else. Work in cooperation and collaboration with the child, and help the child identify self-set goals such as becoming more competent, feelings of pride and accomplishment, satisfying her curiosity, or increasing her feelings of self-control and autonomy.
- When you help the child list self-set goals, you are strengthening self-esteem. You are sending the child the message that she is worthy of these goals, that she is capable of developing the personality traits that will allow her to reach the goal, and that you trust her and have confidence in her ability to follow through and succeed.
- In order for the child to perseverate and commit to a goal, she must believe that, with time and an effective plan, she will reach the goal. Help the student understand that her habitually disruptive behaviors are the result of the lack of a plan, or, if the child tried before, tell her that the behavioral strategies attempted were inadequate. In other words, the strategy failed; the child did not fail. By definition, goal setting is the process of developing and testing strategies. Be flexible, adjusting and modifying the plan or strategies when needed.
- If you need to change strategies, explain it to the child as a victory, not a defeat, because both the child and you have the insight to realize that something needs to change. In other words, the child is growing up and maturing.
- A clearly stated goal that follows a specific plan has a greater chance to succeed than a general goal. To elicit specific behaviors, it is important that the child clearly understands what he is going to do, that is, the goal canalizes the student’s behavior. To develop a specific plan, you can follow an outline that answers, who (people involved), what (what do you want to accomplish), where (setting), when (time line or period), how (steps) and why (purpose and benefits). Using this format, you create a set of instructions for the student to carry out.
- State behavioral goals positively, that is, what the child is going to start doing, instead of in a negative way, or what the child is going to stop doing. Keep the child focused forward (what she wants), not in the past or what she is leaving behind.
- Determine how you are going to measure the child's progress towards each mini-goal and the main goal. How will both the student and you know when the child reaches the goal? You can use qualitative measures (strategies or procedures that the student knows and applies) or quantitative measures (competency, or the child's ability to follow the procedure well, e.g., 80% proficiency or three out of five times).
- Give periodic feedback, that is, give the child information about how well he is doing. The student needs to know where his performance is in relation to each mini-goal and the main goal, so that you both determine if the child needs to try harder, if you need to adjust the plan (e.g. developing an easier goal), or if you need to change the strategy or method.
- For students with chronic and/or recurrent behavior problems, be sensitive and reward partial success and effort. Alternatively, you can develop performance improvement goals based on the child’s past performance (e.g. 10% more, then 20% more). Reinforce progress in meeting the goal.
If you listen to some advocates of the paleo diet and similar fad diets, they'll tell you that tooth decay is a modern development, that people didn't used to get tooth decay until the agricultural period or even the industrial era when people began eating a lot more carbs. There's some truth to this, but our first records of gum disease in the human population come from as far back as 1.8 million years ago, and are related to meat getting stuck in our teeth, not carbs.
And we have evidence that whenever people found a high-quality food source, they were likely to experience tooth decay, independent of modern agriculture and processing techniques.
Hunter-Gatherers Had a Sweet Tooth for Acorns
Researchers looked at a settlement that had been in use for more than a thousand years, from about 15,000 years ago to about 13,700 years ago. Located in the Grotte des Pigeons in Morocco, this settlement had evidence that locals systematically collected sweet acorns and pine nuts. Researchers speculate that the acorns became a very sugary, sticky food when cooked.
As a result of this rich food source, the more than 50 skeletons examined at this site had extensive oral decay, comparable in some ways to modern populations, but without the benefit of modern treatments to stop its progress. About half of all teeth had cavities in them. Numerous teeth had progressed to having internal infections, including abscesses in the jaw, which we would normally treat with a root canal.
Decay before Dentistry?
Part of what made this population so unlucky is that their oral health problems came before anyone was able to treat them.
There was no evidence of oral hygiene at the site, which likely contributed to accelerated decay. These people lived before the Egyptians invented the first profession of what might be called dentistry (perhaps 5000 years ago), and even before primitive experiments with the earliest filling (perhaps 6500 years ago), and earlier than the earliest drilled teeth (perhaps 9000 years ago).
Without the root canal treatment, many of the abscesses burst, leaving holes in the jaws of these people. It was likely that these people experienced significant pain as a result of their dental decay.
We don’t know why, but it seems like these people practiced ritual tooth extraction. They removed the upper central incisors in about 90% of people. This likely isn’t a health treatment, as in medieval dentistry, because they left in so many very damaged teeth. Perhaps, though it was a form of sacrifice to primitive deities they might have imagined were responsible for their suffering.
Although there are definitely some modern foods that are bad for your teeth (we’re looking at you, coke), it’s clear that whenever people had access to high-calorie food, they were likely to suffer tooth decay. |
We love the blinky awesomeness that are LEDs. But have you ever asked yourself, how do they work anyway?
L.E.D. stands for Light Emitting Diode, or a diode that gives off light. A diode is a type of electronic component that will only allow electricity to flow in one direction. What makes LEDs different from other diodes is that they are very good at emitting light when electricity is run through them. But how do they do that?
Inside your LED are Gallium alloy crystals. I won’t bore you with the specifics, you can read Wikipedia if you want more scientific jargon. However, based on their structure the Gallium alloy crystals can either be called P-type or N-type. N-type has an excess of electrons, P-type is missing electrons. Put these two types next to each other and you have the beginning of an LED.
But to make it emit light you need to add electricity to the LED. When you add electricity to the two sides of the LED, electrons will jump the gap between the P and N types of Gallium alloy. When they jump the gap (actually called a band gap), they emit light.
So how do you get different colors of LED light? By alloying different elements with gallium. The different elements determine how much energy it takes for the electron to jump the gap. The larger the gap, the higher-energy the photon the LED emits. So a smaller gap would give a red photon and a larger gap would give a blue photon. Neato! Wikipedia has a great list of all the different types of gallium alloys and their corresponding colors.
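You can turn a band-gap energy into a photon wavelength with λ = hc/E. A quick sketch; the gap energies below are rough illustrative values, not data for specific gallium alloys:

```python
PLANCK = 6.626e-34     # Planck's constant, J*s
LIGHT_SPEED = 2.998e8  # speed of light, m/s
EV_TO_J = 1.602e-19    # joules per electron-volt

# Rough, illustrative gap energies:
for label, gap_ev in [("red-ish", 1.9), ("green-ish", 2.3), ("blue-ish", 2.8)]:
    wavelength_nm = PLANCK * LIGHT_SPEED / (gap_ev * EV_TO_J) * 1e9
    print(f"{gap_ev} eV band gap -> {wavelength_nm:.0f} nm ({label})")
# Bigger gap -> higher-energy photon -> shorter wavelength, i.e. toward blue.
```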
LED’s are very energy efficient and last longer than traditional lighting sources. LED’s use 7.5 times LESS power than an incandescent bulb and last 40 times longer! (Source Design recycle Inc) So over the life of the bulb you will save a lot of money. |
Another area in which the brains of people with an autistic spectrum disorder and the brains of neurotypical people differ is the cerebellum, a part of the brain most commonly associated with motor activity. Yet what is not commonly known is that this area of the brain is also associated with attention. When adolescents with autism were compared with their neurotypical peers, the children with autism showed less activation in the cerebellum in relation to attention and more in relation to motor activity. This suggests that there's a difference in the way the cerebellum works in people with an autism spectrum disorder, when compared to neurotypical people. This is particularly interesting, because autism is also heavily linked with a disruption in motor coordination.
Slight differences have also been shown in the amygdala and hippocampus and in areas of the prefrontal cortex associated with social cognition. Yet great caution is needed when clear roles are assigned to those very complex brain structures. This is especially the case when dealing with autism, because the localization of brain function (localization is defined as the clear correspondence of a specific part of the brain to a specific brain function) is far less consistent in people on the autism spectrum, whose brains tend to be organized in a different way, and in a way that differs from person to person.
Although the differences between the brain of a neurotypical person and the brain of someone with an autism spectrum disorder are not clear-cut, modern neuropsychology clearly states that the differences are present and profound. Yet it is important to remember that this does not mean that the brain differences cause autism; it might actually be the other way around: autism might cause the differences in brain structure. What it does prove, in association with other evidence, is that autism is primarily a biological disorder.
The Body and the Hormones
Most of us are equipped to have kids. Of course there are exceptions, for example when individuals suffer from certain diseases. In addition to organs like the liver, heart, and lungs, we have reproductive organs and we produce hormones inside our bodies. Hormones are courier substances that travel in the blood to carry messages from one organ to another. There are many different types of hormones. One group, sex hormones, controls the ability of women and men to reproduce.
The most important sex hormones in the female body are estrogen and progesterone. The male hormones are called androgens. The most important androgen is testosterone. It is not true that androgens are found only in males and estrogens are found only in females. Men carry female hormones and women carry male hormones as well.
Did You Know?
25% of young women who have intercourse without using a method of birth control at any time during the cycle will become pregnant within one month.
85% will become pregnant within one year.
Let’s look at the difference between the male and female reproductive organs. When choosing a method of birth control(contraception) these “little” differences actually make a big difference.
From the reproductive point of view the major differences between males and females are:
- Starting at puberty, men can make babies basically anytime provided they ejaculate.
- Sperm can stay alive in a woman’s reproductive organs for up to three days.
- Men are able to conceive children almost until the end of their lives.
- Men do not have a cycle to regulate fertility like women do.
- Men need to reach orgasm and ejaculate in order to reproduce.
The sperm production - How men produce babies
The male body has internal and external reproductive organs. The internal organs are the epididymis, vas deferens, prostate, and urethra; the external reproductive organs are the penis and the scrotum holding the testicles (testes).
Sperm production begins at the onset of puberty, at an average age of 13 years, and lasts throughout the life of a man. The sure sign for a young man that he is able to reproduce is that his erection is followed by an ejaculation. This is of course only “physically speaking”. Emotionally, you might be very far from being ready to take on the responsibility of becoming a father. Sperm, more precisely spermatozoa, are produced by the testicles, which are glands within the scrotum. The scrotum functions like a thermostat, regulating the temperature of the testicles. If you’re a male then you know that the scrotum becomes smaller and more wrinkled when you enter a cold pool. The scrotum contracts to bring the testicles closer to the body to keep them warm. The testicles produce hormones and sperm. Sperm production is an ongoing process. It takes about 70 days for one sperm to mature.
Let’s have a look at how sperm actually grow. At the beginning, sperm forms in the testicles, then travels through the epididymis. After that, the sperm reaches the vas deferens. It is stored there until ejaculation occurs. The prostate gland produces a liquid that helps sperm to survive after leaving the male body. During ejaculation, spermatozoa and liquid from the prostate and other glands make a mix while travelling through the urethra. This mix is called semen. The urethra is a tube that also connects to the bladder for passing urine. During sexual excitement, for example during lovemaking, this connection is interrupted so that the semen does not come into contact with urine. A sperm has the ability to swim and travel on its own. It has an oval-shaped head and a tail that serves as a propeller. Sperm carry the genetic information from the male and can unite with the female egg to produce an embryo. After two months, the embryo becomes a fetus, and later becomes a baby.
Survival of the fittest
Spermatozoa are very fragile and their chances of survival are very low. This is why the testicles of each individual produce millions of spermatozoa each day. The milky or creamy looking ejaculate consists of hundreds of millions of sperm, but only a few of them will survive the journey through the female vagina to the fallopian tube where the female egg is waiting to meet a sperm. Out of those few, only one will actually penetrate the egg and fertilize it.
Sperm, although it is very fragile, can also be very persistent. Occasionally pregnancy can occur without intercourse and even if the hymen is intact. The hymen is the membrane that partially covers the virgin vagina. This is called “splash pregnancy”. Sperm have been known to move very quickly from outside the vagina into the uterus. After intercourse sperm can survive up to three days in the reproductive organs of a female.
Remember what we said earlier about the differences between the sexes? Here are the little differences that make up a female:
- The woman is able to have children from the time she begins to produce eggs (around 12 years) to the onset of menopause (around 52 years).
- The woman can conceive only during the three days (approximately) surrounding ovulation each month (2 days before and on the day of ovulation).
- The woman has a menstrual cycle that determines her fertility.
- The female egg can only be fertilized by sperm within a time period of 6-12 hours.
- The woman can become pregnant without being sexually aroused and reaching orgasm.
- The woman could be a virgin and still get pregnant (splash pregnancy).
Puberty: When hormones start working overtime
The female body has internal and external reproductive organs. The exterior ones are: the mons pubis, the clitoris, the urethra, the opening of the vagina (6-10 cm), the inner and outer lips, and the hymen. The interior organs are: the cervix, the uterus (womb), the fallopian tubes (8-10 cm), and the ovaries. The cervix is the entrance to the uterus.
Already at birth, the female body is equipped with a bank account of 300,000-400,000 egg cells, which are located in the ovaries. Of this large amount only 300-500 will be released during the reproductive years of a woman’s life. Starting between the ages of 8-10, hormone production rises and makes the body change from a girl to a young woman. The first menstruation, between ages 11-14, is the sure sign that the body is preparing to have children. This is of course only “physically speaking”. Emotionally, you might be very far from being ready to have children of your own.
From puberty on:
- The female produces one egg (ovulation) every month in the left or the right ovary.
- This egg is released to start its journey to the uterus through one of the fallopian tubes.
- The body prepares for a possible pregnancy.
Keep in mind that we’re talking about the usual stuff here. Of course there are exceptions such as the production of more than one egg, which might lead to two or more babies. This all happens due to the amazing teamwork between the hormones and organs. These things go on over and over again each month and this is what we call the female cycle.
Did You Know?
In 2000, the fertility rate for adolescents (number of pregnancies per 1,000 women of reproductive age) was 17.3, compared with 33.9 for women in the 35-39 age group and 5.9 for women in the 40-44 age group. The highest abortion rates (number of abortions per 1,000 women) occur in women 18-19 years and 20-24 years of age.
The Amazing Female Cycle
The cycle covers a time frame of 23-35 days. The average cycle lasts 28 days. The first day of the cycle is the first day of menstruation. The last day of the cycle is the last day before the following menstruation. Cycle lengths vary individually and they are not always regular. Stress, weight gain or weight loss, for example, can disturb it. After the first menstruation it may take 1-3 years until a woman gets a regular cycle.
During the first 14 days of the cycle (usually, but depending on cycle length) an egg is ripening. A hormone in the brain, which is called follicle stimulating hormone (FSH), stimulates the ripening process. The coat around the egg produces estrogen. This most important female hormone makes the lining of the uterus grow to form a nutritious and secure bedding for the egg to settle into after fertilization.
Approximately at day 14 of a 28-day cycle, an egg is ready to be released. Another hormone in the brain, which is called luteinizing hormone (LH), gives the impulse for the egg to emerge from the ovary and be taken up by the fallopian tube. This important event is called ovulation. This is also the most fertile time of the month for the woman to get pregnant. The egg then travels through the fallopian tube to the uterus. The journey takes about seven days. In the meantime, another important hormone produced in the ovary, progesterone, is preparing the uterus for a pregnancy by securing a sufficient blood supply and by preventing the uterus from contracting and losing a fertilized egg.
Sperm can fertilize the ready egg in the fallopian tube during a 6 to 12 hour period. Fertilization happens when a sperm enters the egg and the embryo starts to form. Two cells divide and become four, the four cells divide and become eight, and so on. By the time the cluster of cells reaches the uterus and settles down into the lining of the uterus, it has become an embryo. This settling down is called implantation. It takes about seven days from fertilization to implantation. The rise of estrogen and progesterone in the blood stream of the woman, along with the pregnancy hormone HCG from cells surrounding the embryo, signals pregnancy. From now on, the female body concentrates on the growth of the embryo and stops the cycle until a few weeks after the baby is born. This is why women cannot conceive again while they are pregnant. A woman can only have one pregnancy at a time, but this does not exclude the possibility of having more than one embryo or fetus at a time, e.g. twins.
The rise in estrogen and progesterone signals to the ovaries: Do not produce any more eggs for now. We have to take care of this embryo first! A pregnancy test can be positive 8-10 days after ovulation. If no fertilization of the egg occurs, the production of progesterone stops. So does the production of estrogens. The message is basically: We do not have a fertilized egg to produce an embryo this month, so stop all the preparations and start all over again! The end of the story is that the thickened lining of the uterus, which was supposed to be the bed for the fertilized egg, is no longer necessary. The same applies to the egg, which did not get fertilized. The body rids itself of this bedding and the egg by bleeding. This is known as the period or menstruation.
The link to contraception
This was a brief description of what’s happening with our bodies when it comes to reproduction. What does this have to do with contraception then? Remember we were talking about the principles of contraception:
- Hormonal methods: make the body believe that the ovaries produce hormones while they are, in fact, resting and not producing eggs. Most hormonal methods stop ovulation.
- Barrier methods: prevent sperm and egg from meeting each other.
- Chemical methods (spermicides): destroy sperm upon contact.
- Surgical methods: interrupt the transportation route of eggs or sperm.
- Emergency contraception: delays egg release.
Did You Know?
Did you know that a woman can become pregnant even…
...when she has intercourse for the first time?
...when she has her period?
...if she has not had a period yet?
...if her partner ejaculates not inside her vagina but close by?
Detail from Berthe Morisot, 'Le Corsage Noir' (1878)
LOOK & RESPOND
Take a moment to look at the painting.
- Is this work finished? Use visual evidence from the work to explain your answer.
- Identify three distinct types of brushstroke and describe the way in which you imagine the artist achieved the effect, considering the type of brush used, the speed, pressure and direction in which the paint was applied.
COMPARE & CONTRAST
Morisot’s daughter Julie had a very close relationship with Monet, Renoir and Degas. They supported her and often brought her on holidays after her parents passed away, leaving her an orphan at only sixteen years of age. Compare each of the three artists’ pictures included in this resource, according to the following themes:
- The importance of light and colour in the work of the Impressionists
- The influence of Japanese Prints on the work of the Impressionists
- The experience and depiction of modern Parisian life in the 19th century
Recreate the effect... Form
Morisot avoided line as much as possible. Create a still-life or portrait using colour and tone to build up the form of your subject, rather than line. It’s harder than you might think!
Create your own... Impressionist Portrait!
Ask a friend or family member to pose for you under a strong light source (by an open window, or beneath an artificial light). Paint their portrait, recording the way the light falls on them, by experimenting with your brushstroke and colour palette. As a follow-up activity, replicate the painting using just two contrasting colours of your choice. Create darker tones by mixing complementary colours, an approach taken by the Impressionists in order to avoid using black paint, which they argued was an ‘unnatural’ colour.
For those who are not already aware, military reserve forces are composed of people who blend together a military and a civilian life. In the United States, Reservists perform around 39 days of military duty per year. The United States Military Reserves currently contain seven different components. Most were founded during the 20th Century, but the basis for one branch dates back to the 1600s. Explore some interesting facts about each of the seven United States Military Reserve components:
Army National Guard
This branch of the Reserve dates back to 1636 in Massachusetts. Up until the 20th Century, though, the Army National Guard was a state-funded asset that took on many different names. The Constitution requires each state to maintain its own guard, making the Army National Guard the only component of the United States Military to be Constitutionally required.
Army Reserve
While the Army Reserve wasn’t officially founded until 1908 (as the U.S. Medical Reserve), its roots can be traced back to the 1700s. A Citizen-Soldier Force served various purposes during the French and Indian War, the Civil War, and all the way up to the Spanish-American War and the Philippine Insurrection, which ended in 1902. Officially recognized as the Organized Reserve Force in 1920, the Reserves proved crucial to the American victory during World War II. There was also significant need for the Army Reserve (which took on that name) during the Korean War and, to a lesser degree, the Cold War. Currently, members of the Army Reserve typically perform duties one weekend out of each month when not on Active Duty.
Navy Reserve
From 8,000 to 250,000 in less than a year: the Navy Reserve was founded in 1915, reorganized in 1916, and commanded 8,000 service members when the United States entered World War I in 1917. By the end of the war in 1918, more than 250,000 were part of the Navy Reserve. While that number is significantly smaller today (60,000), the Navy Reserve currently makes up 25% of the Navy’s force, making it a crucial and formidable component of the Navy should naval conflict arise.
Marine Corps Reserve
Alongside the Navy Reserve, the United States Marine Corps Reserve was founded in 1916, in anticipation of United States involvement in World War I. Following the war, the Reserve was demobilized to an inactive status with fewer than 1,000 Reservists. This led to a Congressional Act that found a need to maintain a Marine Corps Reserve. During World War II, more than half of the Marines who served were Reservists. The Marine Corps Reserves were also mobilized significantly in the Korean War and the Persian Gulf War. Today, they’re an integral part of ongoing military operations.
Air National Guard
Though not officially defined until 1947, aviators of the National Guard had been present since World War I. A 1915 unit became the 1st Aero Company, New York National Guard, and went on to be recognized as the first unit of the Air National Guard. Some of its biggest roles in conflict came in the Korean and Vietnam Wars: in the former, 80% of the force was mobilized, and in the latter, 95% reported.
Air Force Reserve Command
Not to be confused with the Air National Guard, the Air Force Reserve is, as its name defines, a component of the United States Air Force. The newest of the seven reserve components, the Air Force Reserve Command was founded in 1948. Like many other components though, their origins date back to the 1916 National Defense Act which created what was called ‘Reserve Air Power’. One of their most significant roles in recent years came following the terrorist attacks that took place on September 11, 2001. Reservists took to the skies, patrolling and protecting America’s cities.
Coast Guard Reserve
The Coast Guard Reserve dates back to 1939, with a formal military reserve commencing in 1941. Staggeringly, 92% of the Coast Guard personnel who served during World War II were Reservists; without the Reserve, it’s unknown how things would have turned out. There was a further surge in Reserve strength during the Vietnam War.
Posted on August 27, 2018 in Blog
Researchers urge for protection of waterways that connect different bodies of water and sustain life.
The Amazon rainforest and basin are crucial for the balance of the Earth's environmental systems that enable life as we know it. The world’s largest rainforest covers 6.7 million square kilometers and encompasses the largest network of forests and rivers in the world, housing around 10% of the world’s biodiversity and 20% of the planet’s freshwater.
However, there are few studies on monitoring freshwater corridors and their importance for biodiversity and related ecosystems services. A new study “Identifying the current and future status of freshwater connectivity corridors in the Amazon Basin”, published in Conservation Science and Practice, assesses the critical areas that need to be protected to maintain this delicate balance. The study was co-authored by Bernardo Caldas, Alliance of Bioversity and CIAT researcher and MEL Director for CALPE, and Michele Thieme.
“The data and information generated by this research group are crucial for the conscious and integrated management of freshwater ecosystems in the Amazon. Besides biodiversity, the health of these freshwater systems is crucial for food production and climate change adaptation strategies”, said Caldas.
Protecting multiple ecosystem services
Rivers and related freshwater systems (floodplains and temporary lakes) in the Amazon serve multiple functions: they provide habitats for freshwater fish populations that provide food security both for local communities and cities in the region, they deliver sediment downstream, mitigate the impacts of extreme weather events such as droughts or floods, and provide habitats for biodiversity. Safeguarding healthy, free-flowing rivers is crucial to maintaining these critical ecosystem services over time.
This new research provides an understanding of where these freshwater corridors or “swim ways” currently exist, and where they might disappear due to future hydropower development that block the movement of key migratory species in the Amazon Basin including fish, dolphins and turtles. The intention of this research is to provide a case for the protection of these key corridors as part of the larger Amazonian Regional Protected Area system in order to ensure the vitality and health of the local ecosystems, freshwater flows, water quality and quantity, forested and stable banks, and species for people and nature.
In conducting the research, scientists of several organizations and academia, led by WWF, analyzed more than 340,000 km of Amazonian rivers, beginning with an assessment of the connectivity status of all rivers and then combining that with occurrence of migratory fish, migratory turtles and dolphins. The resulting map shows where Freshwater Connectivity Corridors (FCCs) exist and where they would be disrupted under a hydropower development scenario considering currently proposed or planned dams.
Top photo credit: Neil Palmer
Recent research by a team at the Oregon Health & Science University, Portland shows that a key gene known as Atoh1 (also known as Math1) can not only cause cells to develop into hair cells but that these cells function like normal hair cells.
"Our work shows that it is possible to produce functional auditory hair cells in the mammalian cochlea," says John Brigande, assistant professor of otolaryngology at the Oregon Hearing Research Center in the OHSU School of Medicine.
Hair cells can be damaged and lost through aging, noise, genetic defects, and certain drugs and, because the cells don’t regenerate, the result is progressive—and irreversible—hearing loss. Damage to these cells can also lead to tinnitus.
Brigande and colleagues were able to produce hair cells by transferring Atoh1 into progenitor cells in the inner ear of developing mice. These cells become specialized to perform different functions during development, according to the instructions they receive from genes. The gene Atoh1 is known to turn progenitor cells into hair cells, but it was not previously known whether the hair cells would work normally if Atoh1 was introduced artificially.
To find out, the team inserted Atoh1 into progenitor cells along with a fluorescent protein molecule that is often used in research as a marker, to make cells easily visible. They were then able to see that the gene transfer technique resulted in mice being born with more hair cells in the cochlea than are normally found.
Anthony Ricci, PhD, associate professor of otolaryngology at the Stanford University School of Medicine, demonstrated that the gene-treated hair cells function like ordinary hair cells.
[Source: Medical News Today]
Word analogies are one of the most effective ways to build vocabulary while simultaneously developing critical thinking skills. Unlike vocabulary activities that require students to memorize word definitions, word analogies develop a deep understanding of a word meaning by exploring the relationships between words. When students form logical connections between words, they create a mental network of ideas, which deepens understanding and increases retention. While solving word analogies, students are contextualizing the words as they search for word relationships.
When students work with analogies, they...
•expand and deepen their vocabulary.
•understand the relationships between ideas and words.
•recognize and understand multiple-meaning words.
•think critically and apply logical reasoning.
•learn to decipher word meanings based on context.
•build a network of understanding that improves retention and aids future learning.
When complete, this bundle will include:
This bundle includes everything outlined below for grades 7-10. It is PACKED with materials that will allow you to effectively use word analogies with your students. You will be provided with tools to break down word analogies in a scaffolded manner so that your students are successful from the very beginning. There are several versions of the materials included which will allow you to perfectly differentiate to meet the needs of your students. Download the preview for a detailed overview of what's included.
•Instructional PowerPoints: To be used for introducing word analogies and strategies to solve them
•150 word analogies for each level: That is a total of 600 word analogies. These have been expertly crafted using Tier Two words for the respective grade levels.
Note: There are no grade level labels (you will only see “Level X”) on these materials, which means you can use them with any grade. Each level is organized in 15 sets of 10 (10 analogies per set).
•Leveled Printables: The included printables are provided in four different formats to allow you to deliver the exact level of support your students need.
•Task Cards: The analogies are also provided in task card format.
•Google Classroom Version: Each of the 15 sets for all four levels is available as a Google Form quiz for those teachers using Google Classroom/Google Drive.
•Teaching Notes: Tips, tricks, and instructional rationale are provided for every component.
Download the preview for a detailed overview of the seventh-grade unit. To see the previews for grades 8, 9, and 10, click on the links below.
Need another grade level?
Get all the latest Teacher Thrive news!
Please read: This is a nonrefundable digital download. Please read the description carefully and examine the preview file before purchasing.
© Copyright 2018 M. Tallman. All rights reserved. Permission is granted to copy pages specifically designed for student or teacher use by the original purchaser or licensee. This is intended to be used by one teacher unless additional licenses have been purchased. The reproduction of any other part of this product is strictly prohibited. Copying any part of this product and placing it on the Internet in any form (even a personal/classroom website) is strictly forbidden. Doing so makes it possible for an Internet search to make the document available on the Internet, free of charge, and is a violation of the Digital Millennium Copyright Act (DMCA).
The greatest risk lies in the massive emission of smoke and heat in enclosed spaces. Since the people present are often just passing through, they may be unaware of the particularities of the tunnel’s design. Put simply, when they come across a fire, they are already inside. Visibility is key to avoiding disorientation at critical moments, i.e. when speed is the prevailing factor in avoiding asphyxiation.
Tunnel fires involve very high fire loads (several MW), extreme temperatures, and an abundance of toxic smoke. Protection with a water-mist system cools the area while also facilitating evacuation.
Protection must also take into consideration the cooling of the tube structure, that is, it must avoid structural damage that would compromise the tunnel’s integrity.
The identification of risks must include a detailed study of the infrastructure and its design characteristics, as well as those of the environment (location, accessibility, response time, etc.), and of traffic conditions in the tunnel.
Thus, their characteristics vary based on the length and number of tubes, the direction of traffic, the natural/emergency ventilation system, the structure, the lining used, uneven levels, different pressure levels, etc.
Tag Scavenger Hunt
This tag scavenger hunt activity gives students practice researching and finding information via tags, while familiarizing them with LGBTQ religious history. Intended for 11-12th grade or college-age students, its lesson plan includes objectives and a Common Core cross-reference.
Quiz on LGBTQ+ Religious History
This 12-question quiz covers significant events and persons in LGBTQ+ religious history and can be used to introduce students to the subject and to stimulate further research and discussion.
Upstairs Lounge Fire Exhibit
This discussion guide on the Upstairs Lounge Fire exhibit is intended for use in an undergraduate class session on LGBTQ+ history but may be adapted for other types of study.
LGBTQ-RAN provides these short video clips from presentations made by LGBTQ Christian leaders in the U.S. at the Rolling the Stone Away: Generations of Justice and Love Conference in October 2017. These video clips are intended for use in classrooms and instructional settings to illustrate the development and history of LGBTQ religious movements. Permission is given for one-time use in a classroom or other educational settings. For all other uses, contact [email protected].
Automatic target recognition (ATR) is the ability for an algorithm or device to recognise targets or other objects based on data obtained from sensors.
Target recognition was initially done by using an audible representation of the received signal: a trained operator would decipher that sound to classify the target illuminated by the radar. While these trained operators had success, automated methods have been developed, and continue to be developed, that allow for more accuracy and speed in classification. ATR can be used to identify man-made objects such as ground and air vehicles as well as biological targets such as animals, humans, and vegetative clutter. This can be useful for everything from recognising an object on a battlefield to filtering out the interference caused by large flocks of birds on Doppler weather radar.
Possible military applications include a simple identification system such as an IFF (identification, friend or foe) transponder; ATR is also used in applications such as unmanned aerial vehicles and cruise missiles. There has been growing interest in using ATR for domestic applications as well. Research has been done into using ATR for border security, safety systems that identify objects or people on a subway track, automated vehicles, and many others.
Target recognition has existed almost as long as radar itself. Radar operators would identify enemy bombers and fighters through the audio representation of the reflected signal (see Radar in World War II).
Target recognition was done for years by playing the baseband signal to the operator. Listening to this signal, trained radar operators can identify various pieces of information about the illuminated target, such as the type of vehicle it is and the size of the target, and can potentially even distinguish biological targets. However, there are many limitations to this approach. The operator must be trained on what each target will sound like; if the target is traveling at a high speed it may no longer be audible; and the human decision component makes the probability of error high. However, this idea of audibly representing the signal did provide a basis for automated classification of targets. Several classification schemes have been developed that use features of the baseband signal also used in other audio applications, such as speech recognition.
Radar determines the distance an object is away by timing how long it takes the transmitted signal to return from the target that is illuminated by this signal. When this object is not stationary, it causes a shift in frequency known as the Doppler effect. In addition to the translational motion of the entire object, an additional shift in frequency can be caused by the object vibrating or spinning. When this happens the Doppler shifted signal will become modulated. This additional Doppler effect causing the modulation of the signal is known as the micro-Doppler effect. This modulation can have a certain pattern, or signature, that will allow for algorithms to be developed for ATR. The micro-Doppler effect will change over time depending on the motion of the target, causing a time and frequency varying signal.
Fourier transform analysis of this signal is not sufficient, since the Fourier transform cannot account for the time-varying component. The simplest method to obtain a function of frequency and time is to use the short-time Fourier transform (STFT). However, more robust methods such as the Gabor transform or the Wigner distribution function (WVD) can be used to provide a simultaneous representation of the frequency and time domain. In all these methods, however, there will be a trade-off between frequency resolution and time resolution.
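To make this trade-off concrete, here is a minimal sketch of an STFT applied to a simulated baseband signal carrying a micro-Doppler-like modulation. The sample rate, the bulk and micro-Doppler frequencies, and the window length are illustrative assumptions, not values from any particular radar system.

```python
# A minimal sketch: STFT of a simulated baseband signal with a
# micro-Doppler-like modulation. All parameters are illustrative.
import numpy as np
from scipy.signal import stft

fs = 8000                        # sample rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)    # two seconds of samples

# A 500 Hz bulk Doppler shift plus a 5 Hz sinusoidal micro-Doppler
# component, e.g. from a vibrating or rotating part of the target.
inst_freq = 500.0 + 40.0 * np.sin(2 * np.pi * 5.0 * t)
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
signal = np.cos(phase) + 0.1 * np.random.randn(len(t))  # additive noise

# The window length (nperseg) sets the trade-off: longer windows give
# finer frequency resolution but coarser time resolution.
freqs, times, Zxx = stft(signal, fs=fs, nperseg=256, noverlap=192)
print(Zxx.shape)                 # (frequency bins, time frames)
```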
Once this spectral information is extracted, it can be compared to an existing database containing information about the targets that the system will identify, and a decision can be made as to what the illuminated target is. This is done by modelling the received signal and then using a statistical estimation method such as maximum likelihood (ML), majority voting (MV) or maximum a posteriori (MAP) to make a decision about which target in the library best fits the model built using the received signal.
Extraction of Features
Studies have been done that take audio features used in speech recognition to build automated target recognition systems that identify targets based on these audio-inspired coefficients. These coefficients include:
- Linear predictive coding (LPC) coefficients.
- Cepstral linear predictive coding (LPCC) coefficients.
- Mel-frequency cepstral coefficients (MFCC).
The baseband signal is processed to obtain these coefficients, then a statistical process is used to decide which target in the database is most similar to the coefficients obtained. The choice of which features and which decision scheme to use depends on the system and application.
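As a sketch of the feature-extraction step, the snippet below computes MFCC vectors for one block of baseband samples by treating the signal as audio. It assumes the librosa audio library; the sample rate and the choice of 13 coefficients are placeholders, not values from any particular system.

```python
# A sketch of MFCC extraction from one block of baseband samples,
# treating the signal like audio. Assumes the librosa package.
import numpy as np
import librosa

def extract_mfcc(block: np.ndarray, fs: int, n_mfcc: int = 13) -> np.ndarray:
    """Return per-frame MFCC feature vectors, shape (frames, n_mfcc)."""
    mfcc = librosa.feature.mfcc(y=block.astype(np.float32), sr=fs, n_mfcc=n_mfcc)
    return mfcc.T
```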
The features used to classify a target are not limited to speech inspired coefficients. A wide range of features and detection algorithms can be used to accomplish ATR.
In order for detection of targets to be automated, a training database needs to be created. This is usually done using experimental data collected when the target is known, and is then stored for use by the ATR algorithm.
An example of a detection algorithm is shown in the flowchart. This method uses M blocks of data, extracts the desired features from each (e.g., LPC coefficients, MFCCs), then models them using a Gaussian mixture model (GMM). After a model is obtained using the data collected, a conditional probability is formed for each target contained in the training database. In this example, there are M blocks of data, which results in a collection of M probabilities for each target in the database. These probabilities are used to determine what the target is, using a maximum likelihood decision. This method has been shown to be able to distinguish between vehicle types (wheeled vs. tracked vehicles, for example), and even to determine how many people are present (up to three) with a high probability of success.
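A minimal sketch of this scheme is shown below: one Gaussian mixture model is fitted per known target offline, and at detection time the log-likelihoods of the M feature blocks are summed per target before a maximum-likelihood decision. It uses scikit-learn; the number of mixture components and the diagonal covariance are illustrative choices, not taken from the study.

```python
# A sketch of the GMM + maximum-likelihood decision scheme described above.
# Feature arrays and target names are placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_models(training_features: dict, n_components: int = 8) -> dict:
    """Fit one GMM per known target on its (samples, features) array."""
    models = {}
    for target, feats in training_features.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
        models[target] = gmm.fit(feats)
    return models

def classify(models: dict, blocks: list) -> str:
    """Sum log-likelihoods over all M blocks; pick the most likely target."""
    scores = {target: sum(gmm.score_samples(block).sum() for block in blocks)
              for target, gmm in models.items()}
    return max(scores, key=scores.get)
```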
CNN-Based Target Recognition
Convolutional neural network (CNN)-based target recognition is able to outperform conventional methods. It has proved useful in recognising targets (e.g., battle tanks) in infrared images of real scenes after training with synthetic images, since real images of those targets are scarce. Because of this limitation of the training set, how realistic the synthetic images are matters a great deal for recognition performance on the real-scene test set.
The overall CNN structure contains 7 convolution layers, 3 max pooling layers and a softmax layer as the output. The max pooling layers are located after the second, the fourth and the fifth convolution layers. Global average pooling is also applied before the output. All convolution layers use the Leaky ReLU nonlinearity as the activation function.
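A sketch of this layout in PyTorch follows. Only the layer counts, the pooling positions, the Leaky ReLU activations, the global average pooling and the softmax output come from the description above; the channel widths, kernel sizes, input resolution and number of classes are assumptions made for illustration.

```python
# A PyTorch sketch of the described CNN: 7 convolution layers with Leaky
# ReLU, max pooling after the 2nd, 4th and 5th convolutions, global
# average pooling, then softmax. Channel widths etc. are assumptions.
import torch
import torch.nn as nn

class ATRNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        widths = [1, 32, 32, 64, 64, 128, 128, 128]   # assumed channel widths
        layers = []
        for i in range(7):
            layers += [nn.Conv2d(widths[i], widths[i + 1], 3, padding=1),
                       nn.LeakyReLU(0.1)]
            if i in (1, 3, 4):            # pool after convs 2, 4 and 5
                layers.append(nn.MaxPool2d(2))
        self.features = nn.Sequential(*layers)
        self.head = nn.Conv2d(widths[-1], n_classes, 1)  # 1x1 conv classifier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.head(self.features(x))
        x = x.mean(dim=(2, 3))            # global average pooling
        return torch.softmax(x, dim=1)    # class probabilities

probs = ATRNet()(torch.randn(1, 1, 64, 64))  # one assumed 64x64 IR image
print(probs.shape)                            # torch.Size([1, 10])
```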
Accessibility of the Helen Keller Archive
What Is Digital Accessibility?
By “accessibility,” we mean the design and development of a website that allows everyone, including people with disabilities, to independently use and interact with it.
The World Wide Web Consortium (W3C)’s Web Accessibility Initiative (WAI) specifies that people should be able to:
- Perceive,
- Understand,
- Navigate,
- and Interact with the web, as well as
- Contribute to it
Why Is Inclusive Digital Design Important?
Accessibility broadens the potential audience that can explore a digital archive:
- According to a 2018 CDC report, one in four Americans has a disability of some kind — which includes mobility, hearing, vision, and cognitive impairments
- Over 26 million adult Americans are blind or visually impaired
But there are additional compelling reasons to make a digital historical archive meet the highest accessibility standards:
- Cross-browser and cross-device compatibility: An accessible site works well on mobile phones, desktops, and refreshable braille devices alike. The design flexibility and rigorous code standards required to support the technology that some people with disabilities use to access a site also make a site work well on small screens and less common browsers that might not otherwise be tested.
- Search engine optimization: An accessible site is a discoverable site. Accessibility techniques are also search engine optimization techniques. Correct markup, description of meaningful content, and explicit labeling all improve what a search engine can identify within a site and display to users. Images are minimally searchable, while rich, detailed descriptions of an image make the full context of the picture discoverable via search engine. The text transcript of a video makes the full content of the video discoverable to searchers, and makes the video usable to archive visitors who cannot hear or see. Furthermore, visitors who cannot see images, or cannot decipher handwritten cursive text, will rely on text descriptions and transcripts for that information.
- Disability-specific content: The historical role that Helen Keller played in the disability rights movement, and at the American Foundation for the Blind (AFB), is of particular interest to people with disabilities, especially sensory impairments. It was crucial that AFB make the recent history of disability rights and social and technological developments for people with disabilities fully available to researchers and audiences directly affected by this history.
- The law: Website accessibility is mandated by the Americans with Disabilities Act of 1990 (as amended), Sections 504 and 508 of the Rehabilitation Act of 1973 (as amended), and other international laws and policies.
Most importantly, inclusive digital design is simply the right thing to do. (Plus, there’s no visually discernible difference between websites or mobile apps with accessible designs!) When a web page is properly coded, following W3C standards such as the Web Content Accessibility Guidelines (WCAG), users who are blind or deafblind can use their own technology to access the site independently, via magnification, synthetic speech, or refreshable braille devices. As such, the careful transcription of all materials in the Helen Keller Archive was essential to making these primary sources accessible to all.
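Some of these checks can be partially automated. The sketch below is a hypothetical example, not AFB's actual tooling: it uses the requests and BeautifulSoup libraries to flag images whose alt attribute is missing or empty. Because an empty alt is valid for purely decorative images, the output is a list of candidates for human review; the URL is a placeholder.

```python
# A hypothetical sketch of one automatable accessibility check: list the
# images on a page whose alt attribute is missing or empty.
import requests
from bs4 import BeautifulSoup

def images_missing_alt(url: str) -> list:
    """Return the src of every <img> with a missing or empty alt attribute."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src", "?") for img in soup.find_all("img")
            if not (img.get("alt") or "").strip()]

for src in images_missing_alt("https://example.org/archive-page"):  # placeholder
    print("candidate for review (no alt text):", src)
```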
Why Was the Digital Helen Keller Archive So Groundbreaking?
The Helen Keller Archive’s commitment to accessibility was a groundbreaking project in three ways:
- The content is accessible.
- The interface is accessible.
- We tested extensively with users and then made additional changes to ensure that the interface is also user-friendly.
Accessible Content Means:
- We used optical character recognition (OCR) to create an initial transcript of thousands of Helen Keller’s papers — letters, draft speeches, receipts, and more.
- A massive volunteer effort helped correct the errors that inevitably occur during OCR — and is still an ongoing effort!
- We are also working hard, with both volunteer and professional help, to transcribe all the handwritten and braille documents OCR technology can’t handle.
- Trained staff created detailed descriptions of every photo to make the visual contents of the archive searchable, and accessible to people with visual impairments.
- We are very lucky to have films of Helen Keller in the archive, and to make those accessible for all users we created digital videos that are captioned, transcribed, and audio-described.
- Captioning makes the audio content accessible to users who are deaf or hard-of-hearing, or simply in a setting where it’s hard to hear. Everything that is said aloud in a video is also displayed as text onscreen.
- Video description is similar to captioning, but for blind people — it is an additional audio track that provides an audible description of what is happening visually. Try listening to a video without watching it. Does it still make sense? What information is missing, that was only conveyed visually? That’s what needs to be audio-described.
Here’s how it works: someone watches the video and writes short verbal descriptions of the action or other key visual information such as the setting, costumes, and facial expressions. A narrator then records those descriptions, and an editor inserts the audio descriptions into pauses within a program’s dialogue. (Sometimes it is easier to insert the descriptions at the very beginning or very end of a shorter video.) You can learn more at afb.org/videodescription.
- Transcripts help everyone — they are important not only for searchability, but also accessibility. A person who is deafblind could navigate the Helen Keller Archive and access all of the information in photos and videos by using a refreshable braille display to read the transcripts in dynamic braille output.
Accessible Interface Means:
- The website uses valid, properly labeled HTML on every page.
- The multimedia controls are all keyboard-accessible, so people who don’t use a mouse or can’t easily see the controls can still use keyboard controls to zoom in to a photo, zoom out, enter full-screen mode, start and stop the video player, etc.
- We made the keyboard shortcuts easily discoverable by making them visible on the controls. You’ll notice them on the image viewer, for example: try using “control + alt + z” to zoom in on a photo.
- For people navigating the site with screen readers, we made sure to provide a way to skip over repetitive elements such as the main navigation, and even the “refine search” options.
Testing for Usability:
- To test the design and interface adaptations of our beta site, we recruited 18 users from a wide variety of backgrounds — archivists, the general public, students, writers, and historians.
- Testing took place one-on-one, remotely (over the phone) so that users could access the site on their preferred technology, in a comfortable setting.
- Our participants included a mix of people who were blind, low vision, sighted, hard of hearing, deafblind, and quadriplegic.
- We recruited across a wide age range, as well — from a 10-year-old student, to college students, professionals, and retirees.
- User testing revealed inconsistencies in our coding approach – different pages used patterns that were each technically accessible, but the variations forced people to do too much mental work to figure out each page. Making the coding more consistent improved the usability.
- We also made some design changes to improve the discoverability of rich features, like the refine search options.
This project is a constant work in progress. We still have many handwritten documents to transcribe, and we continue to be in communication with the academic and disability community on best practices for meta-tagging and annotation. Any mistakes are ours alone, and we commit to remedying them. Please let us know if you find any issues! You can write to [email protected].
Python has one peculiarity that makes concurrent programming harder. It’s called the Python GIL, short for Global Interpreter Lock. The GIL makes sure there is, at any time, only one thread running. Because only one thread can run at a time, it’s impossible to use multiple processors with threads. But don’t worry, there’s a way around this, using the multiprocessing library.
As mentioned already, Python threads share the same memory. With multiple threads running simultaneously, we don’t know the order in which the threads access shared data. Therefore, the result of accessing shared data is dependent on the scheduling algorithm. This algorithm decides which thread runs when. Threads are “racing” to access/change the data.
- Thread safety
- Thread-safe code only manipulates shared data in such a way that it does not interfere with other threads.
The GIL was invented because CPython’s memory management is not thread-safe. With only one thread running at a time, CPython can rest assured there will never be race conditions.
A demonstration of a race condition
As an example, let’s create a shared variable a, with a value of 2:

a = 2

Now suppose we have two threads, thread_one and thread_two. They perform the following operations:

a = a + 2
a = a * 3

If thread_one is able to access a first and thread_two second, the result will be:

- a = 2 + 2, a is now 4.
- a = 4 * 3, a is now 12.

However, if it so happens that thread_two runs first, and then thread_one, we get a different output:

- a = 2 * 3, a is now 6.
- a = 6 + 2, a is now 8.

So the order of execution obviously matters for the output. There’s an even worse possible outcome, though! What if both threads read variable a at the same time, do their thing, and then assign the new value? They will both see that a = 2. Depending on who writes its result first, a will eventually be 4 or 6. Not what we expected! This is what we call a race condition.
- Race condition
- The condition of a system where the system’s behavior is dependent on the sequence or timing of other, uncontrollable events.
Race conditions are difficult to spot, especially for software engineers that are unfamiliar with these issues. Also, they tend to occur randomly, causing erratic and unpredictable behavior. These bugs are notoriously difficult to find and debug. It’s exactly why Python has a GIL — to make life easier for the majority of Python users.
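Here is a runnable sketch of such a bug. Ten threads each perform 100,000 read-modify-write updates on a shared counter; without a lock, the interleaving often loses updates, while the locked version always reaches 1,000,000. How often the unsafe version misbehaves depends on your interpreter version and machine.

```python
# A runnable demonstration of a race condition and its fix with a lock.
import threading

counter = 0
lock = threading.Lock()

def unsafe_work(n: int) -> None:
    global counter
    for _ in range(n):
        tmp = counter          # read shared state...
        counter = tmp + 1      # ...write it back; another thread may interleave

def safe_work(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:             # the read-modify-write is now atomic
            counter += 1

for work in (unsafe_work, safe_work):
    counter = 0
    threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(work.__name__, counter)   # unsafe_work often prints < 1,000,000
```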
Can we get rid of the Python GIL?
If the GIL holds us back in terms of concurrency, shouldn’t we get rid of it, or at least be able to turn it off? It’s not that easy. Other features, libraries, and packages have come to rely on the GIL, so something must replace it, or else the entire ecosystem will break. This turns out to be a difficult problem to solve. If it interests you, you can read more about this on the Python wiki.
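Meanwhile, the workaround mentioned at the start of this article remains available. The sketch below uses multiprocessing.Pool to run a CPU-bound function in four separate processes, each with its own interpreter and its own GIL, so the work can occupy multiple cores; the workload itself is just an illustrative placeholder.

```python
# A minimal sketch of sidestepping the GIL with the multiprocessing library.
from multiprocessing import Pool

def cpu_bound(n: int) -> int:
    return sum(i * i for i in range(n))   # pure computation, no I/O

if __name__ == "__main__":                # guard required on some platforms
    with Pool(processes=4) as pool:       # four worker processes, four GILs
        results = pool.map(cpu_bound, [10_000_000] * 4)
    print(results)
```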
Looking at the trend in the classification of living organisms, one easily notices that as we move from one phylum to the next, the organisms become larger and more complex. Cells on their own are not specialised for the performance of any function of life except repeated cell division.
Levels of organisation of life
It is from the embryonic cell that tissues, organs and later systems may be derived to form the basis for the levels of organisation of life.
Usually, the cell is bounded by a membrane and contains a nucleus and cytoplasm or protoplasm. Some cells are capable of independent existence, carrying out (but without specialisation) all the characteristic (life) processes of living things. Indeed, cells usually have an inherent ability to grow and reproduce, to metabolise, to receive and respond to stimuli and to show movement, but are not specialised for any of those functions. Cells may have pseudopodia, cilia, flagella, etc., and other inclusions and organelles.
Among some of the best known organisms of cellular organisation are microscopic and unicellular forms such as amoeba, paramecium, euglena, chlamydomonas, etc. These exist either as one-celled or colonial forms, having a single unit of protoplasm including cytoplasm, one or more nuclei and a variety of organelles.
In multicellular organisms, there is more than one group of cells, and each group of cells is similar in structure and function. A tissue is a collection (group) of cells which are similar in structure and perform similar functions. Tissues usually have the same origin and occupy the same position in the body of the organism. While some tissues cannot exist on their own, others can live independently, e.g. in Hydra.
Sometimes, the cells of a tissue are held together by a material called matrix, usually secreted by cells. The jelly fish, sea anemones and coral are at the tissue level of organisation. Common examples of tissues in animal and plant bodies include the following:
a. Epithelia, which are made of one or several layers of cells. The cells are of different types: squamous, cuboidal or columnar. They are seen covering or lining the inner surfaces of the skin, body cavities, blood vessels, the trachea and the digestive tract. Their function is either protective or secretory, as in the goblet cells of the digestive tract.
b. Connective tissues which bind and support other body structures. Others include muscle, nervous, skeletal and blood tissues, each performing a different and specialised function. In plants, conducting tissues include xylem and phloem while supporting tissues include parenchyma, collenchyma and sclerenchyma.
As organisms grow in size and complexity, cells or units of cells become unable to service the needs of such organisms.
An organ is a collection of different tissues that perform a common function or functions. Some organs carry out a single function; for example, the function of the heart is to pump blood. Others carry out more than one function; for example, the kidney carries out excretion, osmoregulation and maintenance of the internal environment.
The onion bulb (Allium cepa) presents a good example of an organ. At a glance, we observe the roots for anchorage and absorption of water and other nutrients, the stem, which links the roots with the leaves and serves in sexual reproduction, and the leaves, which serve the function of food production and storage.
The most advanced and complex organisms cannot be adequately serviced by tissues and organs alone, but are organised into systems of organs. A system is made up of different organs that perform a particular function. The system level of organisation is common in higher invertebrates and the vertebrates.
The following systems are common in most advanced forms: muscular, integumentary, digestive, circulatory, skeletal, respiratory, excretory, nervous, hormonal and reproductive systems.
Complexity of organisation in higher organisms
We have observed from the above treatment that organisms stand to benefit a lot as they advance from their simple, microscopic and unicellular forms to higher multicellular and complex forms.
(i) Complexity leads to specialisation of the tissues, organs or systems.
(ii) It enables an increase in the size of the organisms, and wider adaptation to and survival in various types of environments.
(iii) Specialisation in various functions in an organism leads to division of labour.
(iv) This in turn brings about efficiency of the tissues, organs or systems.
(v) One body function does not adversely affect other body functions, as various systems operate side by side without adversely affecting the other.
(vi) Reproduction in complex organisms does not lead to the break down of the parent’s body, since that is a specialised system. But in simple unicellular organisms, parents disintegrate after reproduction or conjugation.
The main disadvantage of complexity is that cells become so specialised that each cell, tissue or organ may not survive in isolation from the others. This is as opposed to lower animals, in which, if an individual is cut into parts, each part can develop into another complete individual.
Advantages of simple (cellular) organisation over complexity
(i) Usually, there is individuality of life in simpler forms, for example, in amoeba as opposed to lizard cells. Each cell of an amoeba can exist as an individual, integral whole and perform life processes, but individual cells of a lizard cannot survive as a unit.
(ii) Diffusion alone can meet all the physiological needs of amoeba. These advantages are because:
(a) The surface area to volume ratio of the simple forms is large.
(b) The distance which materials travel within the cell is small or short as compared with that in the complex forms.
(c) Also, the quantities of materials moving from place to place are smaller and simpler than in the complex forms.
(d) Moreover, the simple forms are in more direct contact with the environment than the complex forms. |
What Is Canada Day and How Is It Celebrated?
Here’s what our northern neighbors are up to on the first day of July every year. (Hint: Get ready to celebrate!)
While Americans celebrate their independence on the Fourth of July, Canadians celebrate their national day a few days earlier. Canada Day, on July 1, is the national holiday when Canucks from coast to coast to coast don red and white, celebrate the maple leaf, and toast their country. Sounds like a pretty good excuse for a party, eh? Here’s how Canada’s big day got its start.
When was Canada Day declared a holiday?
Canada Day became the official name for July 1 on October 27, 1982, though it had been unofficially called that for decades. Prior to 1982, July 1 was called Dominion Day, which became a public holiday back in 1879. We bet you’ll be surprised by some of these things you didn’t know about Independence Day in the United States. Similarly, Canada Day doesn’t have a simple history.
Dominion Day? What’s that?
The first official name of the country was the Dominion of Canada, a name it received on July 1, 1867, with Confederation. That name didn’t cover the whole area known as Canada today. As the Canadian Encyclopedia explains, on July 1, 1867, the country of Canada was created out of the provinces of New Brunswick, Nova Scotia, and the province of Canada (which was one of the names for what is now the provinces of Ontario and Quebec).
The anniversary of Confederation was called Dominion Day. In the early years, it wasn’t really celebrated and often became an opportunity for provinces, opposition parties, and others to air grievances. Discussions to change the name from Dominion Day took place at least as far back as the 1950s, but it took until 1982 for the law to be passed to change the name to Canada Day.
Is this Canada’s independence day?
Sort of. July 1 marks the anniversary of Confederation, the day the British North America (BNA) Act came into effect in 1867 and the Dominion of Canada was created. However, only some parts of the country were included, and Indigenous Peoples had no say in the decision. The BNA Act established the Dominion of Canada as a self-governing entity, and the country’s independence grew from there. The BNA Act meant that the parts of the country called Canada were no longer a British colony. Instead, the new dominion had the authority to establish a parliament and make laws, as well as the responsibility to fund and defend itself.
It was only in 1931, with the Statute of Westminster, that Canada was awarded full legal freedom, notes History.com. Still, Canada didn’t achieve full independence—meaning that Britain no longer had the authority to change Canada’s constitution—until 1982. Full Canadian independence came on April 17, 1982, the day that Canada’s Constitution was repatriated.
So, technically, April 17, 1982 could be considered Canada’s independence day. Did you know that there’s controversy over the date of the United States’ independence day, too? Here’s why July 2nd is America’s real independence day.
Is Canada’s birthday on July 1? How old is it?
While the country called the Dominion of Canada was legally created on July 1, 1867, it wasn’t created out of thin air. Indigenous Peoples have lived on the continent since the Ice Age. So, saying that Canada will be 153 years old on July 1, 2020, is taking a colonial view of Canadian history.
Today, in addition to Inuit and Métis people, there are more than 634 First Nations in Canada, with more than 50 unique languages. No one knows how many separate nations once inhabited the land between the Arctic, Pacific, and Atlantic Oceans, nor exactly how long they’ve been there. But, when Canada celebrated its sesquicentennial—its 150th anniversary—in 2017, the Heiltsuk Nation celebrated its 14,000th birthday. A community near Tofino, British Columbia, has been continuously inhabited for at least 5,000 years: Carbon dating shows that Opitsaht, a Tla-o-qui-aht village on Meares Island, has been inhabited since before the time the pyramids were built. So Canada is much older than 153. Don’t miss these other facts you never knew about Canada.
How do Canadians celebrate Canada Day?
With school normally finishing at the end of June, July 1 marks the unofficial start of summer in Canada. Some Canadians celebrate at home, while others celebrate while on vacation elsewhere. In Quebec, there’s less of an emphasis on Canada Day, since Saint-Jean-Baptiste Day, on June 24, usually gets more attention. For 2020, though, Montreal is having a large Canada Day party. Because of COVID-19, it will be a virtual Canada Day hosted from Olympic Stadium.
Not everyone sees Canada Day as a celebration, though. Some Indigenous Peoples, for example, object that the equality and prosperity enjoyed by many Canadians are not enjoyed by all, as explained in this article by New Journeys, a resource for sharing Indigenous resources and stories in Canada.
For those that do celebrate, the prime Canada Day location is in the capital, Ottawa. The celebrations are usually on the vast lawns of Parliament Hill, which can hold about 500,000 people. Families watch the noon show in the typically hot sun, take a break during the afternoon, and usually come back for the evening show. Until the Parliament Buildings’ decade-long restoration is complete, however, the Hill’s Canada Day parties will be small.
Royalty celebrates Canada Day, too
A typical Ottawa Canada Day has dignitaries like the Governor General (the Queen’s representative in Canada, currently former astronaut Julie Payette) and the Prime Minister making speeches and walking through the crowds. Sometimes royalty even shows up. The Queen, Queen Mother, Prince Charles, and Princess Diana (her birthday also happened to be on July 1) have all celebrated July 1 in Canada. The Duke and Duchess of Cambridge attended the Parliament Hill celebrations in 2011 shortly after they were married, leaving Canadians particularly impressed with the Duchess’ maple leaf hat.
Typical Canada Days
Since the 1967 Centennial, celebrations on July 1 have been an opportunity for the Government of Canada to promote national unity, the country’s heritage, and Canadian values like multiculturalism and bilingualism. Canadian musical and dance acts typically make an appearance on Parliament Hill’s main stage, though it’s rarely Canada’s top stars like these 25 famous people you didn’t know were Canadian. More important is representing Canada’s different regions and multiculturalism, showcasing Indigenous Peoples, and ensuring that both of Canada’s official languages get equal time. A highlight is when Canada’s fleet of red and white aerobatic planes, the Snowbirds, do a flyover. The evening ends with fireworks above the Peace Tower and the Parliament Buildings.
In other parts of the country, celebrations are smaller than Ottawa’s but are still a party. An example is at Winnipeg, Manitoba’s The Forks, which people have used as a meeting point for more than 6,000 years. On a typical Canada Day, between 30,000 and 40,000 people show up dressed in red and white and carrying Canadian flags. Across the country, Canada Day adornments often include maple-leaf stickers and temporary tattoos, which you’ll see on the cheeks and foreheads of everyone from babies to great-grandparents.
Canadians across the country celebrate Canada Day with parades, outdoor concerts, and fireworks shows, as well as by just hanging out with friends and family. Barbecues are almost mandatory. Another favorite way to celebrate is watching the 32 riders and horses of the Royal Canadian Mounted Police’s Musical Ride, which often makes a Canada Day appearance somewhere in the country.
Celebrate in Canada next year!
Canadians welcome visitors to explore Canada and celebrate Canada Day with them. Of course, with the country’s borders closed to most foreign nationals due to the COVID-19 pandemic, Canada Day 2020 is only for Canucks. But Canadians look forward to sharing Canada Day with the rest of the world again next year. Here are the most popular travel destinations in Canada to help you start planning.
As covid-19 has spread around the world, people have become grimly familiar with the death tolls that their governments publish each day. Unfortunately, the total number of fatalities caused by the pandemic may be even higher than those official counts suggest, for several reasons. First, the official statistics in many countries exclude victims who did not test positive for coronavirus before dying—which can be a substantial majority in places with little capacity for testing. Second, hospitals and civil registries may not process death certificates for several days, or even weeks, which creates lags in the data. And third, the pandemic has made it harder for doctors to treat other conditions and discouraged people from going to hospital, which may have indirectly caused an increase in fatalities from diseases other than covid-19.
One way to account for these methodological problems is to use a simpler measure, known as “excess deaths”: take the number of people who die from any cause in a given region and period, and then compare it with a historical baseline from recent years. We have used statistical models to create our baselines, by predicting the number of deaths each region would normally have recorded in 2020 and 2021.
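As a rough illustration of this approach, here is a minimal Python sketch, assuming a simple least-squares model with a linear trend across years plus a fixed effect for each of 52 weeks; the function name and the week-level simplification are ours, not the tracker's actual code.

```python
import numpy as np

def excess_deaths(years, weeks, deaths, first_pandemic_year=2020):
    """Fit a baseline of expected deaths on pre-pandemic data, then
    subtract it from observed deaths to get weekly excess mortality."""
    years = np.asarray(years, dtype=float)
    weeks = np.asarray(weeks)
    deaths = np.asarray(deaths, dtype=float)
    trend = years - years.min()                         # linear trend in years
    week_fx = np.column_stack([(weeks == w).astype(float)
                               for w in range(1, 53)])  # week fixed effects
    X = np.column_stack([trend, week_fx])
    past = years < first_pandemic_year                  # fit on history only
    coef, *_ = np.linalg.lstsq(X[past], deaths[past], rcond=None)
    baseline = X @ coef                                 # expected deaths
    return deaths - baseline                            # positive = excess
```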
Many Western countries, and some nations and regions elsewhere, regularly publish data on mortality from all causes. The table below shows that, in most places, the number of excess deaths (compared with our baseline) is greater than the number of covid-19 fatalities officially recorded by the government. The full data for each country, as well as our underlying code, can be downloaded from our GitHub repository. Our sources also include the Human Mortality Database, a collaboration between UC Berkeley and the Max Planck Institute in Germany, and the World Mortality Dataset, created by Ariel Karlinsky and Dmitry Kobak.
The chart below uses data from EuroMOMO, a network of epidemiologists who collect weekly reports on deaths from all causes in 23 European countries. These figures show that, compared with a historical baseline of the previous five years, Europe has suffered some deadly flu seasons since 2016—but that the death toll from covid-19 has been far greater. Though most of those victims have been older than 65, the number of deaths among Europeans aged 45-64 was 40% higher than usual in early April 2020.
Below are a set of charts that compare the number of excess deaths and official covid-19 deaths over time in each country. The lines on each chart represent excess deaths, and the shaded area represents the number of fatalities officially attributed to coronavirus by the government.
In March 2020 America’s east coast was hit hard by the pandemic. States elsewhere locked down quickly enough to prevent major outbreaks at that point, but a second wave in November and December surged through most of the country. Excess mortality was low from March 2021 onwards, as a rapid vaccination campaign allowed the country to open up again.
While covid-19 was devastating New York in March 2020, cities in western Europe were also suffering severe outbreaks. Britain, Spain, Italy, Belgium and Portugal have some of the highest national excess-mortality rates in the world, after adjusting for the size of their populations. These countries also suffered a second wave of deaths in the autumn and winter of 2020. Some western European countries were slow to vaccinate their citizens in early 2021, as shown by our covid-19 data tracker. But by June mortality rates had returned to normal across the region.
Countries in northern Europe have generally experienced much lower mortality rates throughout the pandemic. Some Nordic nations have experienced almost no excess deaths at all. The exception is Sweden, which imposed some of the continent’s least restrictive social-distancing measures during the first wave.
In central Europe only the Netherlands and Switzerland suffered large numbers of excess deaths in early 2020. After international travel resumed, the entire region was ravaged in the autumn. Poland, Hungary and the Czech Republic all endured additional spikes of mortality in March and April 2021.
South-eastern Europe has followed a similar pattern. November and December 2020 were particularly lethal, with Bulgaria recording the highest weekly excess-mortality rates of any country in our tracker. Several countries have since experienced further deadly outbreaks.
Among former republics of the Soviet Union, only Belarus suffered substantial excess mortality in early 2020, after introducing almost no constraints on daily life. A second wave in late 2020 affected the entire region. Russia now has one of the world’s largest excess-mortality gaps. It recorded about 580,000 more deaths than expected between April 2020 and June 2021, compared with an official covid-19 toll of only 130,000.
Much of Latin America experienced a devastating first wave from April to July 2020, with Bolivia and Ecuador hit particularly hard. A second wave surged through the region in late 2020, as Mexico, Peru and Brazil all recorded higher peaks of excess mortality than at any previous point during the pandemic. The virus has continued to circulate throughout the continent since then, with Colombia and Paraguay suffering their worst death tolls in April and May 2021.
Outside Europe and the Americas, few places release data about excess deaths. No such information exists for large swathes of Africa and Asia, where some countries only issue death certificates for a small fraction of people. For these places without national mortality data, The Economist has produced estimates of excess deaths using statistical models trained on the data in this tracker (as explained in our methodology post). In India, for example, our estimates suggest that perhaps 2.3m people had died from covid-19 by the start of May 2021, compared with about 200,000 official deaths.
Among developing countries that do produce regular mortality statistics, South Africa shows the grimmest picture, after recording three large spikes of fatalities. In contrast, Malaysia and the Philippines had “negative” excess mortality—fewer deaths than they would normally have recorded, perhaps because of social distancing.
A handful of rich countries elsewhere publish regular mortality data. They tend to have negative excess mortality. Australia and New Zealand managed to eradicate local transmission after severe lockdowns. Taiwan and South Korea achieved the same outcome through highly effective contact-tracing systems. Israel has experienced some excess deaths, but has also outpaced the rest of the world in vaccinating its population, with promising results.
Update (October 14th 2020): A previous version of this page used a five-year average of deaths in a given region to calculate a baseline for excess deaths. The page now uses a statistical model for each region, which predicts the number of deaths we might normally have expected in 2020. The model fits a linear trend to years, to adjust for long-term increases or decreases in deaths, and a fixed effect for each week or month.
Correction: The data for deaths officially attributed to covid-19 in Chile were corrected on September 9th 2020. Apologies for this error.
Sources: The Economist; Our World In Data; Johns Hopkins University; Human Mortality Database; World Mortality Dataset; Registro Civil (Bolivia); Vital Strategies; Office for National Statistics; Northern Ireland Statistics and Research Agency; National Records of Scotland; Registro Civil (Chile); Registro Civil (Ecuador); Institut National de la Statistique et des Études Économiques; Santé Publique France; Provinsi DKI Jakarta; Istituto Nazionale di Statistica; Dipartimento della Protezione Civile; Secretaría de Salud (Mexico); Ministerio de Salud (Peru); Data Science Research Peru; Departamento Administrativo Nacional de Estadística (Colombia); South African Medical Research Council; Instituto de Salud Carlos III; Ministerio de Sanidad (Spain); Datadista; Istanbul Buyuksehir Belediyesi; Centers for Disease Control and Prevention; USA Facts; New York City Health. Get the data on GitHub
What are 5 fine motor skills?
This is a list of fine motor skills children should demonstrate between the ages of 2 and 5 years.
- 2 years old. Has hand control to build block towers. …
- 3 years old. Able to make a Cheerio or macaroni necklace. …
- 4 years. Scissor skills show improvement – Able to cut simple shapes. …
- 5 years.
How are fine motor skills measured?
The measurement and analysis of fine motor skills often requires the simultaneous measurement of a movement profile and the associated muscle activity. … Companies responded by designing reliable and accurate systems for each task: motion trackers, electromyography (EMG) systems, force plates, etc.
Is clapping a fine or gross motor skill?
Clapping songs and games can help kids develop their fine motor skills and cognitive development. They are also a fun way to spend time with your child.
What are fine motor skills examples?
Examples of Fine Motor Skills
- Dialing the phone.
- Turning doorknobs, keys, and locks.
- Putting a plug into a socket.
- Buttoning and unbuttoning clothes.
- Opening and closing zippers.
- Fastening snaps and buckles.
- Tying shoelaces.
- Brushing teeth and flossing.
What causes issues with fine motor skills?
The following neuroanatomical areas play crucial roles in fine motor control, and therefore any lesion can cause fine motor disability. Causes of lesions/damage include a space-occupying lesion, infection, stroke, toxins, autoimmune inflammation, metabolic, trauma, and congenital absence or abnormality.
What are the 3 motor skills?
Gross motor skills can be further divided into two subgroups: locomotor skills, such as running, jumping, sliding, and swimming; and object-control skills, such as throwing, catching, and kicking.
What sports use fine motor skills?
A snooker shot or the hand movements when throwing a dart are examples of fine skills. The majority of movements require an element of both skills, and each movement would sit on a continuum. Gross skills tend to get athletes into position and are large movements involving an element of running, jumping or throwing.
Lemons may be too sour to be kids' favorite fruit. But they can be used as a teaching topic to launch lessons on a wide range of subjects. Lemons are grown in many parts of the world, which can lead to discussions of geography, and a science lesson might involve discussing lemons' health benefits or the effects of their acidity. Cooking lessons can explore the many ways in which lemons can be used to flavor tasty treats.
Where Lemons Grow
Lemons were first cultivated over 4,000 years ago in southeast Asia, and later spread throughout the Middle East and then on into the Mediterranean region of Europe. Christopher Columbus introduced lemon seeds to the New World, and Spanish missionaries planted the first lemon groves in California in the 1700s. California supplies most of the United States' lemons today, with Arizona coming in second and Texas and Florida being the only other states to support lemon farming as an industry.
Lemon Nutrition
Lemons are a low-calorie food, averaging only 15 calories per medium-size fruit. They have zero fat, cholesterol or sodium, and only 5 grams of carbohydrate. One lemon does, however, provide almost 10 percent of the recommended daily value of dietary fiber, along with 40 percent of the recommended vitamin C. Lemons are also high in compounds called limonoids, which have cancer-fighting properties. Researchers affiliated with the Texas A&M University System Health Science Center and the U.S. Department of Agriculture's Agricultural Research Service Western Regional Research Center found that citrus limonoids were able to reduce cancer tumors by up to 50 percent in one study published in a 2004 issue of the "Journal of Agricultural and Food Chemistry."
What Lemons Are Used For
About one-third of all lemons grown in the U.S. are processed to be used in juices and concentrates. Lemon peel is used to flavor cakes, cookies and other desserts, while the oil extracted from the peel is an ingredient in many brands of detergent, furniture polish, soap, shampoo and even perfume. The high acid content of lemon juice allows it to be used to bleach fabric and clean metal. This acid can even be used to create a low-powered battery, as attaching electrodes to a lemon will produce a small electric current.
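As a back-of-the-envelope sketch (the article only says the current is small), a copper-and-zinc lemon cell is often quoted at roughly 0.9 volts, so stacking cells in series raises the voltage. Both the 0.9 V figure and the 2 V LED target below are assumptions, and in practice the current is usually too tiny to light a standard LED brightly.

```python
import math

LEMON_CELL_VOLTS = 0.9   # assumed typical copper-zinc lemon cell voltage

def lemons_needed(target_volts):
    """How many lemon cells in series to reach a target voltage?"""
    return math.ceil(target_volts / LEMON_CELL_VOLTS)

print(lemons_needed(2.0))   # about 3 lemons for a ~2 V red LED
```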
Lemon World Records
The biggest lemon on record weighed in at almost 12 pounds and measured 29 inches around and 13.7 inches long. It was grown in 2003 by a farmer in Israel, although it's unknown what he used it for after picking it. One thing he didn't do was use it to set another record for world's fastest lemon consumption -- that record belongs to a man who peeled and ate a lemon weighing just over 5 ounces in 8.25 seconds.
- EconEdLink: The Lemon Story
- California Department of Parks and Recreation: California Citrus State Historic Park
- Sunkist Health and Nutrition: Nutrition Labels: Lemon
- Journal of Agricultural and Food Chemistry: Further Studies on the Anticancer Activity of Citrus Limonoids
- Science Kids: Lemon Facts for Kids
- Purdue University: Fruits of Warm Climates: Lemon
- Guinness World Records: Heaviest Lemon
- Guinness World Records: Fastest Time to Peel and Eat a Lemon
Some people think that children should not crawl, because it hurts their knees, because they get dirty doing it outside, and because walking is better. This is a very common mistake that science has debunked: research shows that crawling helps babies' physical and psychological development, so it is very important to leave them free to crawl and even to encourage them. Here is everything about the benefits of crawling and the reasons to let your baby do it.
Benefits of Crawling
A few centuries ago, it was believed that babies' legs had to be bound with a kind of bandage to prevent them from crawling on all fours like animals. That idea has long been abandoned, yet many parents and grandparents still think it is better to skip crawling and move straight to walking. Here are the reasons why you should encourage your baby to crawl.
Crawling allows children to grow and develop in a natural way. Putting babies in a walker when they begin to move on their own is a very common mistake. The ideal is to allow the baby to move freely on the ground, so that he begins to crawl and explore his environment. Using walkers at this early age can be detrimental, since a baby's legs are not yet ready to support the full body weight. What's more, experts advise encouraging children by putting toys at a short distance, so they can move to reach them.
Crawling helps the two cerebral hemispheres connect, achieving greater cognitive and sensory development. Recent research has shown that babies who crawl have a greater predisposition to math and science.
Crawling develops the cross-lateral pattern, which allows the sense of balance to be stimulated correctly. It also benefits the position of the spine, helping it grow straight and strong.
The focus of the eyes
Some studies report that as many as 98% of children with a vision problem either did not crawl or did not crawl enough.
A crawling child focuses the eyes on a specific point, which improves eye-muscle development, and at the same time learns to focus both eyes at a distance of about 30-40 cm.
Meet the world
Leaving the child free to crawl means letting him get to know and explore the world around him. Perceiving the environment, discovering what he likes and what he does not, and relating to objects are basic to his growth.
Lateralization of the brain
Crawling helps to establish lateralization of the brain, which occurs at around the age of 5-6, when one of the hemispheres becomes dominant. This is very important for development.
Development of vestibular system
Through crawling, the baby develops the vestibular system and the proprioceptive system. Both let the baby know where the parts of the body are.
As you can see, crawling is something natural in babies that have numerous advantages for their cognitive, physical and sensory development. |
Blood cancers, such as leukemia and lymphoma, are projected to be responsible for 10% of all new cancer diagnoses this year. These types of cancers are often treated by killing the patient’s bone marrow (the site of blood cell manufacturing), with a treatment called irradiation. While effective for ridding the body of cancerous cells, this treatment also kills healthy blood cells. Therefore, for a time after the treatment, patients are particularly vulnerable to infections, because the cellular components of the immune system are down for the count.
Now scientists at MIT have devised a method to make blood cells regenerate faster and minimize the window for opportunistic infections.
Using multipotent stem cells (stem cells that are able to become multiple cell types) grown on a new and specialized surface that mimics bone marrow, the investigators changed the stem cells into different types of blood cells. When transplanted into mice that had undergone irradiation, they found that the mice recovered much more quickly compared to mice given stem cells grown on a more traditional plastic surface that does not resemble bone marrow as well.
This finding, published in the journal Stem Cell Research and Therapy, is particularly revolutionary, because it is the first time researchers have observed that mechanical properties can affect how the cells differentiate and behave.
The lead author of the study attributes the decreased recovery time to the type of stem cell that was given to the mice compared to what humans are normally given after irradiation. Humans are given a stem cell that is only able to become different types of blood cells. The mice in this study, however, were given a stem cell that can become many different types of cells, such as muscle, bone and cartilage, suggesting that these cells somehow changed the bone marrow environment to promote a more efficient recovery. The researchers attributed a large part of this phenomenon to a secreted protein called osteopontin, which has previously been described as activating the cells of the immune system.
In a press release, Dr. Viola Vogel, a scientist not involved in the study, puts the significance of these findings in a larger context:
“Illustrating how mechanopriming of mesenchymal stem cells can be exploited to improve on hematopoietic recovery is of huge medical significance. It also sheds light onto how to utilize their approach to perhaps take advantage of other cell subpopulations for therapeutic applications in the future.”
Dr. Krystyn Van Vliet explains the potential to expand these findings beyond the scope of blood cancer treatment:
“You could imagine that by changing their culture environment, including their mechanical environment, MSCs could be used for administration to target several other diseases such as Parkinson’s disease, rheumatoid arthritis, and others.” |
An aortic aneurysm is a sac-like bulge that forms when a region of the aorta, the artery that arises from the left heart, enlarges. Sudden tears may occur at these enlarged points, which are most often seen in hypertensive patients over 60 years of age. This is known as aortic rupture, or aortic dissection.
What is the aorta?
The aorta is the largest artery in our body and emerges from the left heart. It is the vessel through which oxygen-rich blood, freshly cleaned in the lungs, is pumped from the heart to the body's tissues. Through this vessel, which lies at the center of the circulation, an average of 5 liters of blood per minute is pumped in adults.
The aorta consists of four parts: the ascending aorta, the aortic arch, the descending aorta, and the abdominal aorta. Abdominal aorta is the name given to the abdominal part of the vessel. The severity of the symptoms of aortic vessel rupture varies according to the vessel section where the rupture occurred. Tears that occur in areas closer to the heart progress with more severe symptoms that can be fatal.
Aortic aneurysm causes and risk factors
The risk of developing an aortic aneurysm increases with age, because the structure of the vessel wall changes over the years. The vascular wall loses its elasticity as age progresses, and its resistance to the pressure on the vessel wall decreases.
In more than 50% of cases, atherosclerosis, i.e., hardening of the arteries, is the cause of the aneurysm. Aneurysms are also common in patients with high blood pressure. High blood pressure causes tension in the vessel wall and prepares the ground for an aneurysm. Hypertension is also a risk factor for atherosclerosis.
Bacterial infections may play a role as another causative factor in aneurysm development. The infection causes inflammation in the vessel wall and paves the way for an aneurysm. This infection-related sac formation is called a mycotic aneurysm.
Among the less common causes of aortic aneurysm is vascular wall inflammation seen in diseases such as tuberculosis and syphilis. There is an increased risk of aortic aneurysm in some congenital genetic diseases such as Marfan syndrome and Ehler-Danlos syndrome.
What are the symptoms of aortic aneurysm?
An aneurysm that develops in the abdominal region of the aorta usually does not cause any symptoms at the beginning and therefore cannot be detected at an early stage. Over time, however, the aneurysm grows, compresses the surrounding tissues and organs, and causes complaints such as pain radiating to the legs and back, as well as digestive symptoms such as indigestion.
If the aneurysm has occurred in the chest portion of the aorta, symptoms such as chest pain, cough, shortness of breath, hoarseness and swallowing problems are seen.
What are the symptoms of aortic rupture?
The larger the aortic aneurysm, the higher the risk of rupture. Especially dangerous are abdominal aortic aneurysms more than 6 centimeters in diameter and chest area aneurysms more than 5.5 centimeters in diameter. After the rupture of the aneurysm, a very severe pain occurs in the chest or abdomen that radiates to the back. These complaints are accompanied by nausea. Strong internal bleeding quickly causes circulatory shock. Therefore, fast and effective treatment is essential.
How is aortic aneurysm diagnosed?
Doctors often discover an aortic aneurysm during an examination performed for another purpose or during a routine check-up. For example, an abdominal aortic aneurysm is often detected during an abdominal ultrasound ordered for another reason. When listening with a stethoscope, flow sounds created by blood against the vessel wall can be heard.
An aortic aneurysm in the chest area is usually discovered incidentally on a chest radiograph taken for other purposes. A more accurate result is obtained with an echocardiogram. Other parts of the aorta can also be seen clearly with this examination.
Detailed data on the size and severity of the aortic aneurysm can be obtained by computed tomography (CT), magnetic resonance imaging (MRI) or angiography.
How is aortic aneurysm treated?
Treatment depends on the size of the aneurysm. Small-diameter aneurysms that do not cause any symptoms are followed up at regular intervals. In hypertensive patients, blood pressure values are kept within normal limits with treatment. Aneurysms larger than 6 centimeters in the abdomen and 5.5 centimeters in the chest area are treated surgically: the enlarged section of the vessel is removed and a stent graft is placed in its position.
Multiplication Worksheets With Arrays for Grade 2
These grade 2 multiplication worksheets emphasize early multiplication skills.
These worksheets can be downloaded in the format that suits your kids' needs, including a set with 2-digit by 2-digit multiplication problems. A good multiplication sheet builds kids' interest and knowledge, and the worksheets are aligned with IB, Singapore Math, Australian, New Zealand, Canadian, CBSE, ICSE, K12, and other curricula.
Use these worksheets to practice describing an array with repeated addition. The multiplication sentence is the number of rows times the number of columns, and the answer is found by skip counting.
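To see this off the page, a few lines of Python can draw an array and print the matching skip counting; this is a homemade sketch for illustration, not part of any worksheet set.

```python
def show_array(rows, columns, symbol="*"):
    """Print a rows-by-columns array and its multiplication sentence."""
    for _ in range(rows):
        print(symbol * columns)                    # one group per row
    skip_counts = [columns * n for n in range(1, rows + 1)]
    print("Skip counting:", skip_counts)           # e.g. 4, 8, 12
    print(f"{rows} x {columns} = {rows * columns}")

show_array(3, 4)   # three rows of four objects -> 3 x 4 = 12
```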
Worksheets for this concept include repeated addition with arrays, Georgia Standards of Excellence curriculum frameworks, Math Mammoth Grade 2-A, array word problems, and math fact fluency practice, along with work on the multiplication tables of 2, 5, and 10.
Children can visualize multiplication equations by representing them as an array of objects or boxes. This page contains all our printable worksheets in the Multiplication and Division section of second grade math. As you scroll down, you will see worksheets for multiplication as repeated addition, multiplication and addition sentences, skip counting, equal groups, modeling with arrays, multiplying in any order, multiplying with 1 and 0, times tables to 12, multiplication sentences, and modeling division.
The leap from adding and subtracting to multiplying can be a daunting task. With interactive and visual activities, your second graders will learn strategies to make multiplication easier, including skip counting, adding groups, and creating arrays. Kids will practice writing number sentences for arrays before applying their knowledge to array word problems.
In an array, the number of rows represents the number of groups, while the number of columns represents the number of objects per group. Grade 2 is the year children start learning arithmetic operations beyond addition and subtraction, and the collection spans grades 2 to 6.
The worksheets are available in various formats, such as PDF and printable versions. Click the checkbox for options to print and to add worksheets to Assignments and Collections; you can also download a file as a PDF, embed it in your website or blog, or add it to Google Classroom.
The collection also includes math riddles, a Scoot game, task cards, and more. It covers multiplication facts, multiplication tables, and multiplication problems, in particular recall of the 2, 5, and 10 times tables, multiplying by whole tens, and solving missing-factor problems.
Eli Whitney was the inventor of the cotton gin and a pioneer of mass production, particularly of muskets.
Eli Whitney invented the cotton gin to reduce the number of slaves necessary to process cotton. The cotton gin made it possible to profitably grow short-staple cotton. Previously, only long-staple cotton could be grown profitably, because of the difficulty of separating the cotton fibers from the seeds. With the cotton gin it became profitable to separate the fibers of short-staple cotton, which could be grown throughout the South, while long-staple cotton could only be grown in coastal regions of six southern states.
Instead of reducing the number of slaves, the invention of the cotton gin greatly magnified it. Plantations that needed slaves sprang up all over the South.
Eli Whitney made very little money on his invention of the cotton gin, while it made the South fabulously wealthy.
Eli also developed interchangeable parts and the mass production of muskets. The manufacturing power of the North, based on Eli Whitney's system of mass production, greatly aided in the defeat of the South during the Civil War.
The California condor can soar 15,000 feet above the earth. It has a wingspan of up to 9.5 feet—nearly twice your height! Because of this broad wingspan, power lines are a serious hazard to condors.
Small birds can sit safely on one power line. They don’t touch the ground, or any other grounded object, so electricity stays in the power lines and doesn’t harm the birds. Condors, with their broad wingspan, are likely to touch a power line and pole at the same time and become a path for electricity to travel down the pole to the ground. Or, their large wings can bridge two power lines at the same time, creating a short circuit. In either situation, the birds are electrocuted.
Condors have a slow rate of reproduction, which is one reason they nearly went extinct. Power line contacts haven’t helped matters.
In 1979, only 25 California condors were left and efforts began to save them. Scientists at the Los Angeles Zoo developed a program to train captive condors to avoid power poles before they are released into the wild. A perch that looks like a power pole delivers a mild shock to any bird that touches it. The birds learn to land somewhere else.
Most condors that have graduated from the shock training program successfully stay away from power lines. Today, about 208 condors are living—76 of them in the wild.
Do the Safe Thing
Watch Those Wires
When you work outdoors with long or tall equipment (such as ladders and paint rollers), be sure to keep yourself and your equipment at least 10 feet away from all overhead power lines. That includes the service drop wires that go from power poles to buildings.
If you plan to dig or move earth in any way (even just planting a tree), make sure to call your one-call utility locator service first so they can mark any underground power lines or other utilities.
If you see a fallen power line, stay far away, and call 911 and your local electric utility immediately. Even if they are not sparking or humming, fallen lines can shock you if you touch them or the ground nearby.
Transformers and substations contain electrical equipment that is dangerous to contact. If you see an unlocked transformer, or if you see someone trying to enter a substation, call 911 and your local electric utility immediately.
How Electricity Travels
Electricity travels in a loop called a circuit. A circuit has an energy source and wires; it may also have a load and a switch. There must be no breaks in the loop in order for current to flow. A loop with no breaks is called a closed circuit. An open circuit has a break.
Your home is part of a large circuit. The generating plant is the energy source, transmission and distribution lines are the wires that connect the plant to your home, the lights and appliances in your home are the load, and on/off switches on the walls and appliances open and close the circuit.
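To make the open-versus-closed idea concrete, here is a small Python sketch. It assumes Ohm's law (current = voltage / resistance), which the passage above does not introduce, and the 120 V and 60-ohm figures are arbitrary illustration values.

```python
def circuit_current(volts, resistance_ohms, switch_closed):
    """Current flows only when the loop is closed (Ohm's law: I = V / R)."""
    if not switch_closed:
        return 0.0          # open circuit: the loop is broken, no current
    return volts / resistance_ohms

print(circuit_current(120, 60, switch_closed=True))   # 2.0 amps flow
print(circuit_current(120, 60, switch_closed=False))  # 0.0, open circuit
```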
Is electricity created at power plants? No. Technically speaking, electricity can’t ever be “created.” The Law of Conservation of Energy states that energy cannot be created or destroyed, but can only change its form. The total quantity of matter and energy available in the universe is a fixed amount. So, at a power plant, mechanical energy (the energy contained in the movement of giant magnets past coils of wire) changes to electrical energy (the flow of electrons).
You can take this back even further...Where does the mechanical energy come from that moves the magnets? If the power plant runs on fossil fuels, then it comes from a form of chemical energy. Where does the chemical energy in fossil fuels come from? Fossil fuels are made from prehistoric plants, and plants get their energy from the sun. So you could say that electricity generated in a fossil fuel-burning plant ultimately comes from the sun.
The mechanical energy used to move the magnets in a generating plant could also come from falling water, the ebb and flow of the tides, the wind, heat from the sun, and nuclear fission. But in all cases, the energy gets changed from one form to another. It doesn't just appear and disappear.
Who discovered electromagnetic induction? In 1831 Michael Faraday discovered that passing a magnet through a loop of wire created a current. Soon after, Joseph Henry discovered that the current produced around any closed loop of wire is proportional to the rate at which the magnet moves through the loop. The faster the magnet moves, the stronger the current. The wire loop actually transfers kinetic energy (the movement of the magnet) into electrical energy.
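Henry's "proportional to the rate" observation can be put into numbers with Faraday's law of induction (EMF = N × ΔΦ/Δt). The turn count and flux values below are made up purely for illustration.

```python
def induced_emf(turns, flux_change_webers, seconds):
    """Faraday's law: EMF = turns * (flux change / time taken).
    Moving the magnet faster (less time) gives a larger voltage."""
    return turns * flux_change_webers / seconds

print(induced_emf(100, 0.01, 1.0))   # slow pass of the magnet -> 1.0 V
print(induced_emf(100, 0.01, 0.1))   # ten times faster -> 10.0 V
```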
What are semiconductors and superconductors? A semiconductor is a normally insulating material that has been doped with a small number of conductive atoms, which allow the material to control an electric current passed through it. A superconductor is an element, inter-metallic alloy, or compound that will conduct electricity without resistance below a certain temperature.
Instructors can benefit from creating ESL gerund games to help students understand the concept of the gerund in an entertaining way. When making grammar exercises into games, students don't have time to dwell on the fact that they're actually learning grammar strategies.
YourDictionary has taken the work out of finding other online ESL gerund game resources. The following links are excellent online resources for students to practice identifying gerunds. All of the links are associated with college and university language programs and contain credible and accurate information.
If you ask your students whether they'd like to take a quiz or play a game, they would probably choose to play a game. When you think about it, however, a game is just a quiz without the negative stigma attached to it.
Turning some of the above quizzes into games will serve two functions:
Don't forget to reward those who excel in the games. Offer prizes that work within the natural structure of your classroom. For example, students could compete for extra credit points, raffle tickets for a future drawing, snacks, bonus free time, or longer recesses.
A gerund is a noun derived from a verb, with "ing" added at the end of the word to indicate continuing action. Examples of gerunds include:
When used in a sentence, the gerund looks like the following examples:
In these sentences, the italicized words are the gerunds.
Gerunds can also function as nouns in a sentence. They can either hold the subject position or the direct object position. Consider the following examples:
Dancing is enjoyable.
In this example, the word dancing is a gerund because it is the subject of the sentence.
Guillermo enjoys dancing.
In the sentence, the gerund dancing is functioning as the direct object of the sentence.
Gerunds can also perform the function of a verb within a clause, yet the clause as a whole can function as a noun phrase in the overall sentence. The following sentences are examples of this use of the gerund:
Dancing a jig is fun and healthy.
Dancing in this example is a verb in the clause dancing a jig, but the overall clause is a noun phrase that functions as the subject of the sentence as a whole.
I love leaving work early.
In this sentence, leaving is a verb in the clause leaving work early, but in the overall sentence leaving work early functions as the direct object of the verb love.
Physical Literacy in Children and Youth (0-17)
Physical Literacy – Long Term Athlete Development
Physical literacy is all about getting kids moving in an appropriate environment that fosters inclusion, offers opportunities for successes and failures, and lets kids get to know and play with their peers! The Long Term Athlete Development Plan is a document that was created through the lens of physical literacy.
- Becoming physically literate is influenced by the individual’s age, maturation, and capacity.
- Ideally, supporting the development of physical literacy should be a major focus prior to the adolescent growth spurt.
- The skills that make up physical literacy vary by location and culture and depend on how much importance a society places on certain activities.
The Long Term Athlete Development Plan is a document that outlines what a child should be doing at a specific age and stage. Science, research, and decades of experience all point to the same thing: kids and adults will get active, stay active, and even reach the greatest heights of sports achievement if they do the right things at the right times! The different stages include:
- Awareness and First Involvement: Awareness promotes an understanding of opportunities to get involved in sport and physical activity. It highlights opportunities for persons of all abilities to participate in sport, become an athlete, and go as far as their ability and motivation will take them!
- Active Start (0-6): From 0-6 years, boys and girls need to be engaged in daily active play. Through play and movement, they develop the fundamental movement skills and learn how to link them together. At this stage developmentally appropriate activities will help participants feel competent and comfortable participating in a variety of fun and challenging activities and games.
- FUNdamentals: In the FUNdamentals stage, participants develop fundamental movement skills in structured and unstructured environments for play. The focus is on providing fun, inclusive, multisport, and developmentally appropriate sport and physical activity. These experiences will result in the participant developing a wide range of movement skill along with the confidence and desire to participate.
- Learn to Train: Once a wide range of fundamental movement skills has been acquired, participants progress into the Learn to Train stage, which leads to understanding basic rules, tactics, and strategy in games and to refinement of sport-specific skills. There are opportunities to participate in multiple sports, with competitions focused on skill development and retention. Games and activities are inclusive, fun, and skill based. At the end of the Learn to Train stage, participants either progress towards sporting excellence in the Train to Train stage or become Active for Life, by being Competitive for Life or Fit for Life.
- Train to Train: Athletes enter the Train to Train stage when they have developed proficiency in the athlete development performance components (physical, technical-tactical, mental, and emotional). Rapid physical growth, the development of sporting capability, and commitment occurs in this stage. Athletes will generally specialize in one sport towards the end of the stage. A progression from local to provincial competition occurs over the course of the stage.
- Train to Compete: Athletes enter the Train to Compete stage when they are proficient in sport-specific Train to Train athlete development components (physical, technical-tactical, mental, and emotional). Athletes are training nearly full-time and competing at the national level while being introduced to international competition.
- Train to Win: Athletes in the Train to Win stage are world class competitors who are competing at the highest level of competition in the world (e.g. Olympics, Paralympics, World Championships, World Cups or top professional leagues). These athletes have highly personalized training and competition plans and have an Integrated Support Team of physical therapists, athletic therapists, and sport psychologists providing ongoing support.
- Active for Life: Individuals who have a desire to be physically active are in the Active for Life stage. A participant may choose to be Competitive for Life or Fit for Life and, if inclined, give back as a sport or physical activity leader. Competitive for Life includes those who compete in any organized sport recreation leagues to Master Games. Fit for Life includes active people who participate in non-competitive physical activity.
Resources to further your learning on physical literacy and Long Term Athlete Development Plan:
FUNdamental Movement Skills
Learn to move with the FUNdamental Movement Skills. Children and youth need to be taught HOW to move with individually specific progressions built into all activities. Learning the FUNdamental Movement Skills is essential for enabling children to be active for life!
It is important to note that FUNdamental Movement Skills can be learned at ANY age! Physical Literacy tries to instill the curiosity and motivation to learn new movements at any age or stage of life!
Important FUNdamental Movement Skills: Run • Hop • Skip • Jump • Throw • Kick • Balance • Catch • Strike • Coordination • Agility
Resources to further your learning on FUNdamental Movement Skills
- What are FUNdamental Movement Skills? - website
- Ball Skills- poster
- Locomotor Skills- poster
- Run Jump Throw Wheel- poster
- My Skills- poster
- Hop-Skip-and-Jump – resource
- Run, Jump Throw – resource
- Maximum Engagement in Games and Activities (MEGA) -document
- Developing Multi-Sport Skills- Youtube video
- The connection between Physical Literacy and FUNdamental Movement Skills- Youtube Video
24-Hours Movement Guideline
Children and youth should practice healthy sleep hygiene (habits and practices that are conducive to sleeping well), limit sedentary behaviours (especially screen time), and participate in a range of physical activities in a variety of environments (e.g., home/school/community; indoors/outdoors; land/water; summer/winter) and contexts (e.g., play, recreation, sport, active transportation, hobbies, and chores).
Infants (0-11 months)
Move: 30+ minutes of tummy time spread throughout the day. More is better!
Sleep: 14-17 hours (0-3 months) or 12-16 hours (4-11 months) of good-quality sleep (including naps).
Sit: Try not to restrain your baby for more than 1 hour at a time (in a stroller or high chair); screen time is not recommended. When your baby is sedentary, use the time to read stories and play interactively!
Toddlers (1-2 years)
Move: 180+ minutes of as many different activities as possible, spread throughout the day. More is better!
Sleep: 11-14 hours of good-quality sleep, including naps, with consistent bedtimes and wake-up times.
Sit: Try not to restrain your toddler for more than 1 hour at a time (in a stroller or high chair). Screen time is not recommended for 1-year-olds; limit 2-year-olds to 1 hour (or less). When your toddler is sedentary, use the time to read stories and play interactively!
Preschoolers (3-4 years)
Move: 180+ minutes of as many different activities as possible, spread throughout the day, of which at least 60 minutes is energetic play (sweating and breathing heavily). More is better!
Sleep: 10-13 hours of good-quality sleep, which may include naps, with consistent bedtimes and wake-up times.
Sit: Try not to restrain your preschooler for more than 1 hour at a time (in a stroller or car seat). Limit screen time to 1 hour (or less).
Children and Youth (5-17 years)
Sweat: 1 hour of moderate to vigorous (sweating and breathing heavily) physical activity EVERY DAY!
Step: 2-3 hours (or more) of light to moderate physical activity (walking, playing in a sandbox, gardening) EVERY DAY!
Sleep: 8-11 hours of sleep EVERY NIGHT! Electronics should be shut down at least 1 hour before going to bed.
Sit: No more than 2 hours of sedentary recreational screen time EVERY DAY! Limit sitting for extended periods of time.
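For anyone who wants to track these targets programmatically, here is a simplified Python lookup of the numbers above; the age groups and ranges are compressed (for instance, infant sleep spans both the 0-3 and 4-11 month ranges), so treat the guideline text itself as the authority.

```python
GUIDELINES = {   # minutes of daily movement, hours of sleep and screen time
    "infant":      {"move_min": 30,  "sleep_h": (12, 17), "screen_h": 0},
    "toddler":     {"move_min": 180, "sleep_h": (11, 14), "screen_h": 1},
    "preschooler": {"move_min": 180, "sleep_h": (10, 13), "screen_h": 1},
    "youth":       {"move_min": 60,  "sleep_h": (8, 11),  "screen_h": 2},
}

def meets_guideline(group, move_min, sleep_h, screen_h):
    g = GUIDELINES[group]
    return (move_min >= g["move_min"]
            and g["sleep_h"][0] <= sleep_h <= g["sleep_h"][1]
            and screen_h <= g["screen_h"])

print(meets_guideline("youth", move_min=75, sleep_h=9, screen_h=1.5))  # True
```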
Resources to further your learning about the 24-Hour Movement Guidelines:
- Build your best day is an interactive computer game to help teach your children about the 24-hour Movement Guidelines!
- ParticipACTION -website
- Canadian Society for Exercise Physiology -website
- Canadian 24-Hour Movement Guidelines – printable poster
Physical Literacy and Brain Health/ Mental Wellness
Canadian kids need active bodies to build their best brains. All kids deserve to thrive in mind and body. But in order for them to reach their full mental, emotional and intellectual potential, their bodies have to move to get the wheels in their brains turning.
A growing body of evidence indicates that physical activity in childhood is essential for a healthy brain and leads to improved:
- Thinking and learning
- Emotional regulation and self-control
- Problem-solving ability
- Brain plasticity – the growth of new brain tissue
- Stress management
- Ability to cope with anxiety and depressive symptoms
- Self-esteem and self-worth
- Attention and focus
Canadian kids are sitting too much and moving too little to reach their full potential.
Resources on the relationship between brain health and physical activity:
- 2018 ParticipACTION Physical Activity Report Card – website
- Parenting Science -website
- Kelty Mental Health Resource Centre – website
- Exercise can foster brain health for kids with autism and ADHD– article
The Gender Divide
If a girl hasn’t participated in sports by the age of 10, there is only a 10% chance that she will be physically active as an adult. Only 16% of adult women report sport participation.
The difference in physical activity behaviours between boys and girls starts as young as 6 years old. This difference only increases as children grow older.
Physical literacy is a theory we can put into practice to ensure our girls and women stay active and healthy for life.
Alloy Steel
Alloy steel is steel that is alloyed with a variety of elements in total amounts between 1.0% and 50% by weight to improve its mechanical properties. Alloy steels are broken down into two groups: low-alloy steels and high-alloy steels.
Every steel is truly an alloy, but not all steels are called "alloy steels". Even the simplest steels are iron (Fe) (about 99%) alloyed with carbon (C) (about 0.1% to 1%, depending on type). However, the term "alloy steel" is the standard term referring to steels with other alloying elements in addition to the carbon. Common alloyants include manganese (the most common one), nickel, chromium, molybdenum, vanadium, silicon, and boron. Less common alloyants include aluminum, cobalt, copper, cerium, niobium, titanium, tungsten, tin, zinc, lead, and zirconium.
The following is a range of improved properties in alloy steels (as compared to carbon steels): strength, hardness, toughness, wear resistance, corrosion resistance, hardenability, and hot hardness. To achieve some of these improved properties the metal may require heat treating.
Some of these alloys find uses in exotic and highly demanding applications, such as in the turbine blades of jet engines, in spacecraft, and in nuclear reactors. Because of the ferromagnetic properties of iron, some steel alloys find important applications where their response to magnetism matters, including in electric motors and in transformers.
In the theory of plate tectonics, the earth’s surface is broken into several distinct plates which move about, carrying the continents with them. As a result, a fixed location on the planet is not really stationary. It is actually moving along the earth! We don’t notice the motion, of course, because it is happening very slowly. However, according to the theory, it is always happening. If scientists make certain assumptions about how this motion occurred in the past, they can conclude that at one time, all the continents on earth were grouped together in a supercontinent called Pangaea. Over time, the motion of the plates then separated the continents into the positions we see today.
If you assume that the plate motions we think are happening today are representative of how fast the plates have always moved, you find that it would take hundreds of millions of years for the continents to have moved from Pangaea to where they are today. However, many young-earth creationists think that plate motions were much faster during the worldwide Flood, and some have produced detailed computer models that attempt to explain how the Flood happened in the context of this catastrophic plate tectonics. Other young-earth creationists are skeptical about plate tectonics, claiming that there isn’t a lot of evidence to support it.
I tend to disagree with the young-earth creationists who are skeptical about plate tectonics. While I am definitely not a geologist or geophysicist, I do think there is a lot of indirect evidence to indicate that the plates are real and that they are really moving. Interestingly enough, I recently ran across an article by Dr. John Baumgardner that, in my mind, really clinches the case for the reality of plate tectonics.1 Not only that, the data used in the article are just plain cool!
It turns out that the Global Positioning System (GPS) has been monitoring over 2,000 stationary receivers that have been placed all over the earth. The GPS confirms that these receivers are moving at surprisingly constant velocities, despite the fact that they are fixed to the ground. The map shown at the top of the post, for example, displays several of the receivers and the directions in which the GPS confirms that they are moving. If you go to the Jet Propulsion Lab (JPL) website that archives these data, you can look at any part of the world and see the receivers that are there and how they are moving. If you do that, you will find that they are moving the way you would expect them to move in the context of plate tectonics.
What’s really fascinating to me about these data is how detailed they are. For example, some of the fastest motion detected by the GPS is around Australia and New Zealand. Let’s look at the GPS data for a receiver that is sitting on the Cook Islands, which are in the Western Pacific, east of Australia:
The black dots are the data, and the red lines are simply meant to guide your eyes along the data. Note that the graphs span a bit more than ten years. In that time, the receiver has moved about 36 centimeters in latitude and about 60 centimeters in longitude. Its elevation, however, hasn’t changed significantly. Now look at the data points themselves. Notice that for both longitude and latitude, most of the points fall along an almost perfectly straight line. There are a few deviations here and there, but the straight lines are unmistakable. Physics tells us that the slope of a position versus time curve tells you the speed. Since a straight line has a constant slope, the straight lines in these graphs tell us that the speed at which the Cook Islands are moving has stayed remarkably constant over the past ten years.
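To show how a velocity falls out of such data, here is a Python sketch that fits a straight line to synthetic position-versus-time points (loosely matching the roughly 36 cm and 60 cm decade totals quoted above, not the actual JPL data) and reads the speed off the slope.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(0, 10.5, 0.5)                         # ten years of fixes
lat_cm = 3.6 * years + rng.normal(0, 0.3, years.size)   # ~36 cm per decade
lon_cm = 6.0 * years + rng.normal(0, 0.3, years.size)   # ~60 cm per decade

# The slope of a straight-line fit to position vs. time is the speed.
lat_rate = np.polyfit(years, lat_cm, 1)[0]
lon_rate = np.polyfit(years, lon_cm, 1)[0]
print(f"latitude: {lat_rate:.1f} cm/yr, longitude: {lon_rate:.1f} cm/yr")
```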
I just think it is nothing short of awesome that we have these data!
1. John Baumgardner, “Is Plate Tectonics Occurring Today?”, Journal of Creation 26(1):101-105, 2012
One of the favored models for the chemical conditions that drove the development of life is called the RNA world. Findings over the last couple of decades have shown that Ribonucleic Acid, or RNA, can have potent catalytic activity, including the ability to modify other nucleic acid sequences. RNA, however, is not that chemically stable, which is why most organisms store their genetic information in DNA (deoxyribonucleic acid) form. As the "deoxy" in the name implies, however, DNA has one less oxygen atom than RNA, and is less reactive as a result. Transferring the sequence of RNA to DNA and back is easy, but this won't typically transfer the catalytic activity. This creates a chicken-egg sort of problem: information—in terms of ability to catalyze reaction—is best kept in DNA, where it's less likely to be able to function in catalyzing the reaction than RNA.
Researchers have been finding that DNA can catalyze some reactions, though, so a team of researchers asked a critical question: how hard is it to take an RNA catalytic activity and get it to work in DNA form? They started with an RNA-enzyme that will link two RNA molecules together and made a DNA molecule with the same sequence. The DNA form did not work as a catalyst. They then subjected it to what they called "accelerated evolution": several rounds of random mutation followed by selection for activity. By the end, they had DNA molecules that were as active as the original RNA. On average, these final products contained about 15 mutations compared to the starting sequence, but less than half of these mutations were shared among the majority of the final products. They made a test DNA molecule with the seven mutations that most of the final products had in common and found that it did have catalytic activity. Their conclusion is that, although transfer of information from catalytic RNAs to stable catalytic DNAs cannot be done directly, there is not a huge barrier between the two. This makes the move from an RNA world to a DNA world, one that more closely resembles life, a bit more probable than had been expected. |
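As a rough picture of what rounds of "accelerated evolution" look like algorithmically, here is a toy Python sketch of repeated random mutation followed by selection. The sequences, fitness function, and parameters are invented for illustration; the actual experiment selected for catalytic ligation activity, not similarity to a target string.

```python
import random

BASES = "ACGT"

def mutate(seq, n_mutations=3):
    """Randomly change a few positions in the sequence."""
    seq = list(seq)
    for i in random.sample(range(len(seq)), n_mutations):
        seq[i] = random.choice(BASES.replace(seq[i], ""))
    return "".join(seq)

def evolve(start, fitness, rounds=10, pool_size=100, keep=10):
    """Repeated rounds of random mutation followed by selection."""
    survivors = [start]
    for _ in range(rounds):
        pool = [mutate(s) for s in survivors
                for _ in range(pool_size // keep)]
        survivors = sorted(pool, key=fitness, reverse=True)[:keep]
    return survivors[0]

# Toy fitness: similarity to an arbitrary "active" target sequence.
target = "ACGTACGTACGTACGT"
best = evolve("TTTTTTTTTTTTTTTT",
              fitness=lambda s: sum(a == b for a, b in zip(s, target)))
print(best)
```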
Through ocean exploration, we can establish the baseline information needed to better understand environmental change, filling gaps in the unknown to deliver reliable and authoritative science that is foundational to providing foresight about future conditions and informing the decisions we confront every day on this dynamic planet. This same knowledge is often the only source for basic information needed to respond appropriately in the face of deep-sea disasters.
Information from ocean exploration is important to everyone. Unlocking the mysteries of deep-sea ecosystems can reveal new sources for medical drugs, food, energy resources, and other products. Information from deep-ocean exploration can help predict earthquakes and tsunamis and help us understand how we are affecting and being affected by changes in Earth’s climate and atmosphere.
Ocean exploration can improve ocean literacy and inspire young people to seek careers in science, technology, engineering, and mathematics. The challenges of exploring the deep ocean can provide the basis for technology and engineering innovations that can be applied in other situations. |
An average household microwave oven operating on 110/220 V AC can produce up to 2,800 V inside it, which is dangerously lethal. Besides that, it also has a lower-level AC voltage of around 3.5 V to light up the filament, and a regulated DC voltage such as 5 V or 3.3 V for the digital electronics, like the display and timers, to operate. Have you ever wondered what prevents these high voltages from reaching your fingers through the buttons or casing when you touch the oven? The answer to your question is "isolation". When designing electronics products involving more than one type of signal or more than one operating voltage, isolation is used to prevent one signal from messing up the other. It also plays a vital role in safety by preventing fault conditions in industrial-grade products. This isolation is generally referred to as galvanic isolation. Why the term "galvanic"? It is because galvanic refers to current produced by some sort of chemical action, and since we are isolating this current by breaking conductor contact, it is called galvanic isolation.
There are several galvanic isolation techniques, and choosing the right one depends on the type of isolation required, the withstand voltage, the application requirements and, of course, cost. In this article we will learn about the different types of isolation, how they work and where to use them in our designs.
Types of Galvanic Isolation
- Signal Isolation
- Power Level Isolation
- Capacitors as an Isolator
Signal-level isolation is required where two circuits of a different nature communicate with each other using some type of signal, for example, two circuits powered from independent sources and operating at different voltage levels. In such cases, signal-level isolation keeps the individual grounds of the two independent power sources separate while still allowing the two circuits to communicate.
Signal isolation is done using different types of isolators. Optical and electromagnetic isolators are the ones most commonly used for signal isolation. Both keep the separate ground references from being joined together. Each isolator has its own operating principle and applications, discussed below.
1. Optical Isolators
An optical isolator uses light to communicate between two independent circuits. Typically, an optical isolator, a.k.a. an optocoupler, has two components inside a single package: a light-emitting diode (LED) and a phototransistor. The LED is controlled by one circuit, and the transistor side is connected to the other circuit. The LED and the transistor are therefore not electrically connected; communication happens purely through light.
Consider the above image. A popular optoisolator, the PC817, is isolating two independent circuits. Circuit 1 is a power source with a switch; circuit 2 is a logic-level output connected to a separate 5 V supply. The logic state is controlled by the left-hand circuit: when the switch is closed, the LED inside the optocoupler lights up, turns on the transistor, and the logic output changes from high to low.
Circuit 1 and circuit 2 remain isolated throughout. This galvanic isolation is very useful here: there are many situations where noise on a high-potential ground is induced into a low-potential ground, creating a ground loop that in turn causes inaccurate measurements. Similar to the PC817, there are many types of optocouplers for different application requirements.
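A quick design note on the input side: the optocoupler's LED needs a series resistor sized for the driving supply. A minimal sketch in Python, where the forward voltage and LED current are assumed typical values rather than figures from any particular datasheet:

V_IN = 5.0     # supply voltage on the LED side, volts
V_F = 1.2      # assumed LED forward voltage drop, volts
I_F = 0.010    # assumed LED forward current, amps (10 mA)

r_series = (V_IN - V_F) / I_F   # Ohm's law across the resistor
print(f"Series resistor: {r_series:.0f} ohms")  # 380 ohms; a standard 390 ohm part works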
2. Electromagnetic Isolators
Optoisolators are useful for DC signal isolation, but electromagnetic isolators such as small signal transformers are useful for AC signal isolation. Audio transformers, for example, have electrically isolated primary and secondary windings and can isolate audio signals. Another common use is in network hardware, such as the Ethernet section, where pulse transformers isolate the external wiring from the internal hardware. Even telephone lines use transformer-based signal isolators. But because transformers couple electromagnetically, they only work with AC.
The above image is the internal schematic of an RJ45 jack with an integrated pulse transformer, which isolates the MCU portion from the output.
Power Level Isolation
Power-level isolation is required to isolate low-power, sensitive devices from high-power, noisy lines, or vice versa. Power-level isolation also provides protection from hazardous line voltages by keeping the high-voltage lines away from the operator and from other parts of the system.
1. Transformers
The most popular power-level isolator is, again, the transformer. Transformers have an enormous range of applications; the most common is providing a low voltage from a high-voltage source. A transformer has no electrical connection between primary and secondary, yet it can step a high AC voltage down to a low AC voltage without losing galvanic isolation.
The above image shows a step-down transformer in action: the primary side is connected to the wall socket and the secondary is connected across a resistive load. A proper isolation transformer has a 1:1 turns ratio and does not alter the voltage or current level on either side; its sole purpose is to provide isolation.
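A small sketch of the ideal-transformer arithmetic behind this example. The 230 V mains input and the 10:1 turns ratio are assumptions for illustration only; as noted above, a dedicated isolation transformer would use a 1:1 ratio:

v_primary = 230.0     # assumed wall-socket RMS voltage, volts
turns_ratio = 10.0    # assumed Np/Ns for a step-down transformer

v_secondary = v_primary / turns_ratio   # voltage scales down by the turns ratio
print(f"Secondary voltage: {v_secondary:.1f} V")  # 23.0 V, still galvanically isolated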
2. Relays
The relay is another popular isolator, with enormous application across electronics and electrical engineering. Many different types of relays are available on the market depending on the application; the popular types are electromagnetic relays and solid-state relays.
An electromagnetic relay works with an electromagnet and mechanically movable contacts, often referred to as poles. The electromagnet moves a pole and completes the circuit. A relay creates isolation when a high-voltage circuit needs to be controlled from a low-voltage circuit, or vice versa: the two circuits stay isolated, but one circuit can energize the relay coil to control the other.
In the above image, the two circuits are electrically independent of each other, but by using the switch in circuit 1, the user can control the state of the load in circuit 2. Learn more about how a relay can be used in a circuit.
There is little difference between a solid-state relay and an electromechanical relay in terms of operation. A solid-state relay works in much the same way, but the electromechanical parts are replaced by an optically controlled semiconductor switch. The galvanic isolation comes from the absence of any direct connection between the input and the output of the solid-state relay.
3. Hall Effect Sensors
Current measurement is an everyday part of electrical and electronics engineering, and different current-sensing methods are available. Measurements are often required on high-voltage, high-current paths, while the value read has to be sent to the low-voltage circuitry that forms the rest of the measurement system. From the user's perspective, invasive measurement on such paths is dangerous and often impractical. Hall effect sensors provide accurate, contactless current measurement, sensing the current flowing through a conductor in a non-invasive way. They provide proper isolation and ensure safety from hazardous voltages. A Hall effect sensor uses the magnetic field generated around the conductor to estimate the current flowing through it.
The core ring is hooked over the conductor in a non-invasive way and remains electrically isolated from it, as shown in the picture above.
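To show how such a sensor's output is typically used, here is a minimal conversion from output voltage to measured current. The zero-current output and the sensitivity are assumed values in the style of common ratiometric Hall sensors, not figures for any specific part:

V_ZERO = 2.5         # assumed output at zero current, volts
SENSITIVITY = 0.100  # assumed sensitivity, volts per amp

def current_from_output(v_out):
    # The output swings linearly around V_ZERO in proportion to the current.
    return (v_out - V_ZERO) / SENSITIVITY

print(f"{current_from_output(2.85):.2f} A")  # 0.35 V above the zero point -> 3.50 A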
Capacitors as an Isolator
The least popular method of isolating circuits is using capacitors. Because of its inefficiency and its dangerous failure modes it is rarely preferred, but knowing about it can still come in handy if you ever need to build a crude isolator. Capacitors block DC while passing high-frequency AC signals. Thanks to this property, capacitors can act as isolators in designs where the DC paths of two circuits must be blocked while still allowing data transmission.
The above image shows capacitors being used for isolation: the transmitter and the receiver are isolated from each other, yet data communication can still take place.
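A rough way to reason about a capacitive link is as a high-pass filter: DC is blocked, and only signals well above the corner frequency pass. A minimal sketch, with assumed example component values:

import math

C = 100e-12  # assumed coupling capacitance, farads (100 pF)
R = 10e3     # assumed receiver termination, ohms (10 kilohms)

f_corner = 1.0 / (2.0 * math.pi * R * C)   # first-order high-pass corner
print(f"Corner frequency: {f_corner / 1e3:.0f} kHz")  # ~159 kHz; data must be faster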
Galvanic isolation – Applications
Galvanic isolation is essential, and its range of applications is huge. It is an important consideration in consumer goods as well as in the industrial, medical, and communication sectors. In the industrial electronics market, galvanic isolation is required for power distribution systems, power generators, measurement systems, motor controllers, input-output logic devices, and more.
In the medical sector, isolation is one of the top priorities, because medical equipment can be directly connected to a patient's body; such devices include ECG machines, endoscopes, defibrillators, and various kinds of imaging devices. Consumer-level communication systems also use galvanic isolation: common examples are Ethernet hardware, routers, switches, and telephone exchanges. Ordinary consumer goods, such as chargers, SMPS units, and computer logic boards, are among the most common products that use galvanic isolation.
Practical Example of Galvanic isolation
The circuit below is a typical application of the galvanically isolated full-duplex RS-485 transceiver MAX14852 (for communication speeds up to 500 kbps) or MAX14854 (up to 25 Mbps) on an RS-485 communication line connected to a microcontroller unit. Both ICs are manufactured by the well-known semiconductor company Maxim Integrated.
This is one of the best examples of galvanic isolation in industrial equipment. RS-485 is a traditional communication standard widely used in industrial gear; one popular use of RS-485 is to carry the MODBUS protocol.
Suppose sensors installed inside a high-voltage AC transformer report their data over RS-485, and a PLC with an RS-485 port needs to be connected to harvest that data. The problem lies in the direct communication line: the PLC operates at very low voltage levels and is highly sensitive to ESD and surges. With a direct connection, the PLC would be at high risk, so the link needs to be galvanically isolated.
ICs such as these are very useful for protecting the PLC from ESD and surges.
As per the datasheet, both ICs withstand ±35 kV ESD and provide 2.75 kVrms of isolation withstand voltage for up to 60 seconds. They are also rated for a 445 Vrms working isolation voltage, making them suitable isolators for industrial automation equipment.
Collision theory qualitatively explains how chemical reactions occur and why reaction rates differ between reactions. The theory is based on the idea that reactant particles must collide for a reaction to occur, but only a fraction of the total collisions are effective: only some of the molecules have enough energy and the right orientation (or "angle") at the moment of impact to break the existing bonds and form new ones. The minimum amount of energy needed for this to occur is known as the activation energy. Collision theory is closely related to chemical kinetics.
Drawbacks: If the predicted rate constants are compared with known rate constants, it is noticed that collision theory fails to estimate the constants correctly, and the failure grows worse the more complex the molecules are. The reason is that particles have been assumed to be spherical and able to react in all directions, which is not true: the orientation of a collision is not always the right one. For example, in the hydrogenation of ethylene, the H2 molecule must approach the bonding zone between the carbon atoms, and only a few of all the possible collisions fulfill this requirement.
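The fraction of collisions energetic enough to react can be estimated from the Boltzmann factor exp(-Ea/RT), the same exponential that appears in the Arrhenius equation. A short sketch, using an assumed activation energy of 50 kJ/mol to show how sharply the fraction grows with temperature:

import math

R = 8.314        # gas constant, J/(mol K)
E_A = 50_000.0   # assumed activation energy, J/mol

for T in (298.0, 350.0, 400.0):
    fraction = math.exp(-E_A / (R * T))   # share of collisions with E >= Ea
    print(f"T = {T:.0f} K: fraction ~ {fraction:.2e}")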
Candy floss is amazing stuff. It is soft and easily moulded, but quickly dissolves. As I was watching a candy floss seller skilfully creating intricate sugar flowers for my children recently, it struck me that in many ways, candy floss and reading comprehension are similar. Candy floss starts as a swirling, wispy cloud that can just as easily float away as become something tangible.
Often, as I nudge children’s thinking towards tangible understanding, I am aware of the intricate balancing game I am playing – each part of the comprehension process is important, leaning on the others and working together to create an orchestrated mental model.
In Understanding and Teaching Reading Comprehension (2015), Jane Oakhill and Kate Cain suggest that there are four essential ingredients to this process: an understanding of text structure and organisation; depth of vocabulary (what you know about the words and how they are used); inference skills; and the ability to monitor your understanding, to make sure it makes sense and to correct it if it has gone astray. All these ingredients work together to create the big picture: the mental representation. If we maintain our focus on these four main ingredients when we work with children, we can't go far wrong. But it is essential that we focus on helping children understand how all the aspects work together.
Ingredients of reading
Of course, each of the main ingredients is complex in itself, with multiple strands within it, so it is tempting to concentrate on a single aspect until we feel it is under control. For example, to make inferences, children must understand how text works at both a local cohesion level (within and across sentences) and at a global cohesion level (across a whole chapter, book or page). Children need to comprehend the links between sentences and then pick up the nuances of dialogue, tone, emotion and setting to bring the words alive and ensure a complete understanding of the text as a whole. A child's understanding can break down at any point, and it can be a challenge to recognise which part is tricky for them.
However, there are a number of questions you can ask children to help you explore what they are struggling with. These questions can be asked about even a very short text, without much preparation.
- Are there any words or phrases you don’t understand? What are they? What do you think they mean?
- Tell me what happened in this text. Retell the story.
- What would be a good title for this text?
- Was there anything that made you stop and think? Anything you were unsure of? Anything that didn’t make sense?
- What do you think about this text? (a question requiring a personal reflection and response)
When working in this way, I write down the responses of the child as completely as I can, especially when they are retelling the story. By looking closely at what they have said, we can begin to understand the types of inferences they are making, or not making. Their responses can help us to gauge the challenges for the child and to focus our teaching accordingly.
If we establish the fundamental ingredients of comprehension, we can begin to help children understand the process of spinning them all together.
Megan Dixon is director of literacy at the Aspire Educational Trust |
An economic bubble or asset bubble (sometimes also referred to as a speculative bubble, a market bubble, a price bubble, a financial bubble, a speculative mania, or a balloon) is a situation in which asset prices appear to be based on implausible or inconsistent views about the future. It could also be described as trade in an asset at a price or price range that strongly exceeds the asset's intrinsic value.
Many explanations have been suggested, and research has recently shown that bubbles may appear even without uncertainty, speculation, or bounded rationality, in which case they can be called non-speculative bubbles or sunspot equilibria. In such cases, the bubbles may be argued to be rational, where investors at every point are fully compensated for the possibility that the bubble might collapse by higher returns. These approaches require that the timing of the bubble collapse can only be forecast probabilistically and the bubble process is often modelled using a Markov switching model. Similar explanations suggest that bubbles might ultimately be caused by processes of price coordination.
More recent theories of asset bubble formation suggest that these events are sociologically driven. For instance, explanations have focused on emerging social norms and the role that culturally-situated stories or narratives play in these events.
Because it is often difficult to observe intrinsic values in real-life markets, bubbles are often conclusively identified only in retrospect, once a sudden drop in prices has occurred. Such a drop is known as a crash or a bubble burst. In an economic bubble, prices can fluctuate erratically and become impossible to predict from supply and demand alone.
Asset bubbles are now widely regarded as a recurrent feature of modern economic history dating back as far as the 1600s. The Dutch Golden Age's tulip mania (in the mid-1630s) is often considered the first recorded economic bubble.
Both the boom and the burst phases of the bubble are examples of a positive feedback mechanism (in contrast to the negative feedback mechanism that determines the equilibrium price under normal market circumstances).
History and origin of term
The term "bubble", in reference to financial crisis, originated in the 1711–1720 British South Sea Bubble, and originally referred to the companies themselves, and their inflated stock, rather than to the crisis itself. This was one of the earliest modern financial crises; other episodes were referred to as "manias", as in the Dutch tulip mania. The metaphor indicated that the prices of the stock were inflated and fragile – expanded based on nothing but air, and vulnerable to a sudden burst, as in fact occurred.
Some later commentators have extended the metaphor to emphasize the suddenness, suggesting that economic bubbles end "All at once, and nothing first, / Just as bubbles do when they burst," though theories of financial crises such as debt deflation and the Financial Instability Hypothesis suggest instead that bubbles burst progressively, with the most vulnerable (most highly-leveraged) assets failing first, and then the collapse spreading throughout the economy.
Types of bubbles
There are different types of bubbles, with economists primarily interested in two major types of bubbles:
- 1. Equity bubble
An equity bubble is characterised by tangible investments and the unsustainable desire to satisfy a legitimate market in high demand. These kinds of bubbles are characterised by easy liquidity, tangible and real assets, and an actual innovation that boosts confidence. Two instances of an equity bubble are the Tulip Mania and the dot-com bubble.
- 2. Debt bubble
A debt bubble is characterised by intangible or credit-based investments with little ability to satisfy growing demand in a non-existent market. These bubbles are not backed by real assets and are characterised by frivolous lending in the hope of returning a profit or security. They usually end in debt deflation, causing bank runs or a currency crisis when the government can no longer maintain the fiat currency. Examples include the Great Depression and the Great Recession.
The impact of economic bubbles is debated within and between schools of economic thought; they are not generally considered beneficial, but it is debated how harmful their formation and bursting is.
Within mainstream economics, many believe that bubbles cannot be identified in advance, cannot be prevented from forming, that attempts to "prick" the bubble may cause financial crisis, and that instead authorities should wait for bubbles to burst of their own accord, dealing with the aftermath via monetary policy and fiscal policy.
In addition, the crash which usually follows an economic bubble can destroy a large amount of wealth and cause continuing economic malaise; this view is particularly associated with the debt-deflation theory of Irving Fisher, and elaborated within Post-Keynesian economics.
A protracted period of low risk premiums can simply prolong the downturn in asset price deflation as was the case of the Great Depression in the 1930s for much of the world and the 1990s for Japan. Not only can the aftermath of a crash devastate the economy of a nation, but its effects can also reverberate beyond its borders.
Effect upon spending
Another important aspect of economic bubbles is their impact on spending habits. Market participants with overvalued assets tend to spend more because they "feel" richer (the wealth effect). Many observers quote the housing market in the United Kingdom, Australia, New Zealand, Spain and parts of the United States in recent times, as an example of this effect. When the bubble inevitably bursts, those who hold on to these overvalued assets usually experience a feeling of reduced wealth and tend to cut discretionary spending at the same time, hindering economic growth or, worse, exacerbating the economic slowdown.
In an economy with a central bank, the bank may therefore attempt to keep an eye on asset price appreciation and take measures to curb high levels of speculative activity in financial assets. This is usually done by increasing the interest rate (that is, the cost of borrowing money). (Historically, this is not the only approach taken by central banks. It has been argued that they should stay out of it and let the bubble, if it is one, take its course.)
In the 1970s, excess monetary expansion after the U.S. came off the gold standard (August 1971) created massive commodities bubbles. These bubbles only ended when the U.S. central bank (the Federal Reserve) finally reined in the excess money, raising federal funds interest rates to over 14%. The commodities bubble popped, and prices of oil and gold, for instance, came down to their proper levels. Similarly, low interest rate policies by the U.S. Federal Reserve in 2001–2004 are believed to have exacerbated housing and commodities bubbles. The housing bubble popped as subprime mortgages began to default at much higher rates than expected, which also coincided with the rising of the fed funds rate.
It has also been variously suggested that bubbles may be rational, intrinsic, and contagious. To date, there is no widely accepted theory to explain their occurrence. Recent computer-generated agency models suggest excessive leverage could be a key factor in causing financial bubbles.
Puzzlingly for some, bubbles occur even in highly predictable experimental markets, where uncertainty is eliminated and market participants should be able to calculate the intrinsic value of the assets simply by examining the expected stream of dividends. Nevertheless, bubbles have been observed repeatedly in experimental markets, even with participants such as business students, managers, and professional traders. Experimental bubbles have proven robust to a variety of conditions, including short-selling, margin buying, and insider trading.
While there is no clear agreement on what causes bubbles, there is evidence to suggest that they are not caused by bounded rationality or assumptions about the irrationality of others, as assumed by greater fool theory. It has also been shown that bubbles appear even when market participants are well-capable of pricing assets correctly. Further, it has been shown that bubbles appear even when speculation is not possible or when over-confidence is absent.
More recent theories of asset bubble formation suggest that they are likely sociologically-driven events, thus explanations that merely involve fundamental factors or snippets of human behavior are incomplete at best. For instance, qualitative researchers Preston Teeter and Jorgen Sandberg argue that market speculation is driven by culturally-situated narratives that are deeply embedded in and supported by the prevailing institutions of the time. They cite factors such as bubbles forming during periods of innovation, easy credit, loose regulations, and internationalized investment as reasons why narratives play such an influential role in the growth of asset bubbles.
One possible cause of bubbles is excessive monetary liquidity in the financial system, inducing lax or inappropriate lending standards by the banks, which makes markets vulnerable to volatile asset price inflation caused by short-term, leveraged speculation. For example, Axel A. Weber, the former president of the Deutsche Bundesbank, has argued that "The past has shown that an overly generous provision of liquidity in global financial markets in connection with a very low level of interest rates promotes the formation of asset-price bubbles."
According to the explanation, excessive monetary liquidity (easy credit, large disposable incomes) potentially occurs while fractional reserve banks are implementing expansionary monetary policy (i.e. lowering of interest rates and flushing the financial system with money supply); this explanation may differ in certain details according to economic philosophy. Those who believe the money supply is controlled exogenously by a central bank may attribute an 'expansionary monetary policy' to said bank and (should one exist) a governing body or institution; others who believe that the money supply is created endogenously by the banking sector may attribute such a 'policy' with the behavior of the financial sector itself, and view the state as a passive or reactive factor. This may determine how central or relatively minor/inconsequential policies like fractional reserve banking and the central bank's efforts to raise or lower short-term interest rates are to one's view on the creation, inflation and ultimate implosion of an economic bubble. Explanations focusing on interest rates tend to take on a common form, however: When interest rates are set excessively low, (regardless of the mechanism by which it is accomplished) investors tend to avoid putting their capital into savings accounts. Instead, investors tend to leverage their capital by borrowing from banks and invest the leveraged capital in financial assets such as stocks and real estate. Risky leveraged behavior like speculation and Ponzi schemes can lead to an increasingly fragile economy, and may also be part of what pushes asset prices artificially upward until the bubble pops.
Simply put, economic bubbles often occur when too much money is chasing too few assets, causing both good assets and bad assets to appreciate excessively beyond their fundamentals to an unsustainable level. Once the bubble bursts, the fall in prices causes the collapse of unsustainable investment schemes (especially speculative and/or Ponzi investments, but not exclusively so), which leads to a crisis of consumer (and investor) confidence that may result in a financial panic and/or financial crisis; if there is monetary authority like a central bank, it may be forced to take a number of measures in order to soak up the liquidity in the financial system or risk a collapse of its currency. This may involve actions like bailouts of the financial system, but also others that reverse the trend of monetary accommodation, commonly termed forms of 'contractionary monetary policy'.
Some of these measures may include raising interest rates, which tends to make investors become more risk averse and thus avoid leveraged capital because the costs of borrowing may become too expensive; others may include certain countermeasures that may be taken pre-emptively during periods of strong economic growth include having the central monetary authority increase capital reserve requirements and attempting to implement regulation that checks and/or prevents processes leading to over-expansion and excessive leveraging of debt. Ideally, such countermeasures lessen the impact of a downturn by strengthening financial institutions while the economy is strong.
Advocates of perspectives stressing the role of credit money in an economy often refer to (such) bubbles as "credit bubbles", and look at such measures of financial leverage as debt-to-GDP ratios to identify bubbles. Typically the collapse of any economic bubble results in an economic contraction termed (if less severe) a recession or (if more severe) a depression; what economic policies to follow in reaction to such a contraction is a hotly debated perennial topic of political economy.
Social psychology factors
Greater fool theory
Greater fool theory states that bubbles are driven by the behavior of perennially optimistic market participants (the fools) who buy overvalued assets in anticipation of selling it to other speculators (the greater fools) at a much higher price. According to this explanation, the bubbles continue as long as the fools can find greater fools to pay up for the overvalued asset. The bubbles will end only when the greater fool becomes the greatest fool who pays the top price for the overvalued asset and can no longer find another buyer to pay for it at a higher price. This theory is popular among laity but has not yet been fully confirmed by empirical research.
Extrapolation is projecting historical data into the future on the same basis; if prices have risen at a certain rate in the past, they will continue to rise at that rate forever. The argument is that investors tend to extrapolate past extraordinary returns on investment of certain assets into the future, causing them to overbid those risky assets in order to attempt to continue to capture those same rates of return.
Overbidding on certain assets will at some point result in uneconomic rates of return for investors; only then the asset price deflation will begin. When investors feel that they are no longer well compensated for holding those risky assets, they will start to demand higher rates of return on their investments.
Another related explanation used in behavioral finance lies in herd behavior, the fact that investors tend to buy or sell in the direction of the market trend. This is sometimes helped by technical analysis that tries precisely to detect those trends and follow them, which creates a self-fulfilling prophecy.
Investment managers, such as stock mutual fund managers, are compensated and retained in part due to their performance relative to peers. Taking a conservative or contrarian position as a bubble builds results in performance unfavorable to peers. This may cause customers to go elsewhere and can affect the investment manager's own employment or compensation. The typical short-term focus of U.S. equity markets exacerbates the risk for investment managers that do not participate during the building phase of a bubble, particularly one that builds over a longer period of time. In attempting to maximize returns for clients and maintain their employment, they may rationally participate in a bubble they believe to be forming, as the risks of not doing so outweigh the benefits.
Moral hazard is the prospect that a party insulated from risk may behave differently from the way it would behave if it were fully exposed to the risk. A person's belief that they are responsible for the consequences of their own actions is an essential aspect of rational behavior. An investor must balance the possibility of making a return on their investment with the risk of making a loss – the risk-return relationship. A moral hazard can occur when this relationship is interfered with, often via government policy.
A recent example is the Troubled Asset Relief Program (TARP), signed into law by U.S. President George W. Bush on 3 October 2008 to provide a government bailout for many financial and non-financial institutions that had speculated in high-risk financial instruments during the housing boom, a boom that a 2005 story in The Economist had condemned under the headline "The worldwide rise in house prices is the biggest bubble in history". A historical example was intervention by the Dutch Parliament during the great Tulip Mania of 1637.
Other causes of perceived insulation from risk may derive from a given entity's predominance in a market relative to other players, and not from state intervention or market regulation. A firm – or several large firms acting in concert (see cartel, oligopoly and collusion) – with very large holdings and capital reserves could instigate a market bubble by investing heavily in a given asset, creating a relative scarcity which drives up that asset's price. Because of the signaling power of the large firm or group of colluding firms, the firm's smaller competitors will follow suit, similarly investing in the asset due to its price gains.
However, in relation to the party instigating the bubble, these smaller competitors are insufficiently leveraged to withstand a similarly rapid decline in the asset’s price. When the large firm, cartel or de facto collusive body perceives a maximal peak has been reached in the traded asset's price, it can then proceed to rapidly sell or "dump" its holdings of this asset on the market, precipitating a price decline that forces its competitors into insolvency, bankruptcy or foreclosure.
The large firm or cartel – which has intentionally leveraged itself to withstand the price decline it engineered – can then acquire the capital of its failing or devalued competitors at a low price as well as capture a greater market share (e.g., via a merger or acquisition which expands the dominant firm's distribution chain). If the bubble-instigating party is itself a lending institution, it can combine its knowledge of its borrowers’ leveraging positions with publicly available information on their stock holdings, and strategically shield or expose them to default.
Other possible causes
Some regard bubbles as related to inflation and thus believe that the causes of inflation are also the causes of bubbles. Others take the view that there is a "fundamental value" to an asset, and that bubbles represent a rise over that fundamental value, which must eventually return to that fundamental value. There are chaotic theories of bubbles which assert that bubbles come from particular "critical" states in the market based on the communication of economic factors. Finally, others regard bubbles as necessary consequences of irrationally valuing assets solely based upon their returns in the recent past without resorting to a rigorous analysis based on their underlying "fundamentals".
Experimental and mathematical economics
Bubbles in financial markets have been studied not only through historical evidence, but also through experiments and mathematical and statistical work. Smith, Suchanek and Williams designed a set of experiments in which an asset that gave a dividend with expected value 24 cents at the end of each of 15 periods (and was subsequently worthless) was traded through a computer network. Classical economics would predict that the asset would start trading near $3.60 (15 times $0.24) and decline by 24 cents each period. They found instead that prices started well below this fundamental value and rose far above the expected return in dividends. The bubble subsequently crashed before the end of the experiment. This laboratory bubble has been repeated hundreds of times in many economics laboratories in the world, with similar results.
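The declining fundamental value in that design is easy to reproduce. A minimal sketch using the stated 24-cent expected dividend over 15 periods:

EXPECTED_DIVIDEND = 0.24
PERIODS = 15

for t in range(1, PERIODS + 1):
    remaining = PERIODS - t + 1               # dividends still to be paid
    value = EXPECTED_DIVIDEND * remaining     # expected value of what remains
    print(f"Start of period {t:2d}: fundamental value ${value:.2f}")
# Period 1 prints $3.60, and the value falls by $0.24 each period down to $0.24.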
The existence of bubbles and crashes in such a simple context was unsettling for the economics community, which tried to resolve the paradox by pointing at various features of the experiments. To address these issues, Porter and Smith and others performed a series of experiments introducing short selling, margin trading, and professional traders; bubbles appeared all the same.
Much of the puzzle has been resolved through mathematical modeling and additional experiments. In particular, starting in 1989, Gunduz Caginalp and collaborators modeled the trading with two concepts that are generally missing in classical economics and finance. First, they assumed that supply and demand of an asset depended not only on valuation, but on factors such as the price trend. Second, they assumed that the available cash and asset are finite (as they are in the laboratory). This is contrary to the “infinite arbitrage” that is generally assumed to exist, and to eliminate deviations from fundamental value. Utilizing these assumptions together with differential equations, they predicted the following: (a) The bubble would be larger if there was initial undervaluation. Initially, “value-based” traders would buy the undervalued asset creating an uptrend, which would then attract the “momentum” traders and a bubble would be created. (b) When the initial ratio of cash to asset value in a given experiment was increased, they predicted that the bubble would be larger.
An epistemological difference between most microeconomic modeling and these works is that the latter offer an opportunity to test implications of their theory in a quantitative manner. This opens up the possibility of comparison between experiments and world markets.
These predictions were confirmed in experiments that showed the importance of "excess cash" (also called liquidity, though this term has other meanings) and trend-based investing in creating bubbles. When price collars were used to keep prices low in the initial time periods, the bubble became larger. In experiments in which L = (total cash)/(total initial value of asset) was doubled, the price at the peak of the bubble nearly doubled. This provided valuable evidence for the argument that "cheap money fuels markets."
Caginalp's asset flow differential equations provide a link between the laboratory experiments and world market data. Since the parameters can be calibrated with either market, one can compare the lab data with the world market data.
The asset flow equations stipulate that price trend is a factor in the supply and demand for an asset that is a key ingredient in the formation of a bubble. While many studies of market data have shown a rather minimal trend effect, the work of Caginalp and DeSantis on large scale data adjusts for changes in valuation, thereby illuminating a strong role for trend, and providing the empirical justification for the modeling.
The asset flow equations have been used to study the formation of bubbles from a different standpoint, where it was shown that a stable equilibrium could become unstable with the influx of additional cash or a change to a shorter time scale on the part of the momentum investors. Thus a stable equilibrium could be pushed into an unstable one, leading to a price trajectory that exhibits a large "excursion" from either the initial stable point or the final stable point. This phenomenon on a short time scale may be the explanation for flash crashes.
Stages of an economic bubble
- Substitution: increase in the value of an asset
- Takeoff: speculative purchases (buy now to sell in the future at a higher price and obtain a profit)
- Exuberance: a state of unsustainable euphoria.
- Critical stage: buyers begin to grow scarce, and some participants begin to sell.
- Pop (crash): prices plummet
Identifying asset bubbles
Economic or asset price bubbles are often characterized by one or more of the following:
- Unusual changes in single measures, or relationships among measures (e.g., ratios), relative to their historical levels. For example, in the housing bubble of the 2000s, housing prices were unusually high relative to income. For stocks, the price-to-earnings ratio provides a measure of stock prices relative to corporate earnings; higher readings indicate investors are paying more for each dollar of earnings. (A minimal sketch of this kind of check follows this list.)
- Elevated usage of debt (leverage) to purchase assets, such as purchasing stocks on margin or homes with a lower down payment.
- Higher risk lending and borrowing behavior, such as originating loans to borrowers with lower credit quality scores (e.g., subprime borrowers), combined with adjustable rate mortgages and "interest only" loans.
- Rationalizing borrowing, lending and purchase decisions based on expected future price increases rather than the ability of the borrower to repay.
- Rationalizing asset prices by increasingly weaker arguments, such as "this time it's different" or "housing prices only go up."
- A high presence of marketing or media coverage related to the asset.
- Incentives that place the consequences of bad behavior by one economic actor upon another, such as the origination of mortgages to those with limited ability to repay because the mortgage could be sold or securitized, moving the consequences from the originator to the investor.
- International trade (current account) imbalances, resulting in an excess of savings over investments, increasing the volatility of capital flow among countries. For example, the flow of savings from Asia to the U.S. was one of the drivers of the 2000s housing bubble.
- A lower interest rate environment, which encourages lending and borrowing.
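As flagged in the first item above, one crude way to quantify "unusual relative to history" is to measure how many standard deviations the current reading of a ratio sits above its own historical mean. The numbers below are invented for illustration, and a large deviation is a warning sign rather than proof of a bubble:

historical_ratios = [3.1, 3.3, 3.0, 3.2, 3.4, 3.1, 3.3, 3.2]  # assumed history
current_ratio = 4.8                                            # assumed reading

mean = sum(historical_ratios) / len(historical_ratios)
var = sum((x - mean) ** 2 for x in historical_ratios) / len(historical_ratios)
z_score = (current_ratio - mean) / var ** 0.5

print(f"z-score: {z_score:.1f}")  # roughly 13 standard deviations above its history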
Examples of asset bubbles
- Tipper and See-Saw Time (1621)
- Tulip mania (Holland) (1634–1637)
- South Sea Company (British) (1720)
- Mississippi Company (France) (1720)
- Canal Mania (UK) (1790s–1810s)
- Panic of 1819 (US) (1815–1818) Prices of US land in the south and west, cotton (the main US export at the time), wheat, corn and tobacco grew into a bubble following the end of the Napoleonic Wars in 1815, as the European economy was recovering from the wars and demand for American agricultural goods was high. The Second Bank of the US began calling in loans for specie in August 1818, which popped the speculative land bubble. Prices of agricultural commodities declined by almost 50% during 1819–1821, after the bubble. A credit contraction caused by a financial crisis in England drained specie out of the U.S., and the Bank of the United States also contracted its lending.
- Panic of 1837 (1834–1837) (US) Prices of US land, cotton, and slaves grew into bubbles on easy bank credit by the mid-1830s. The episode ended in a financial crisis starting with the Specie Circular of 1836. A five-year recession followed, with the currency in the United States contracting by about 34% and prices falling by 33%. The magnitude of this contraction is matched only by the Great Depression.
- Railway Mania (UK) (1840s)
- Panic of 1857 Land and railroad boom in the US following the discovery of gold in California in 1849, which resulted in a large expansion of the US money supply. Railroads boomed as people moved west, and railroad stocks peaked in July 1857. The failure of Ohio Life in August 1857 brought attention to the financial state of the railroad industry and the land markets, turning the financial panic into a public issue. Since banks had financed the railroads and land purchases, they began to feel the pressure of the falling value of railroad securities, and many went bankrupt. The episode ended in a worldwide crisis.
- Melbourne Australia land and real estate bubble (1883–1889, crash in 1890–91)
- Encilhamento ("Mounting") (Brazil) (1886–1892)
- US farm bubble and crisis (1914–1918, crash 1919–1920) Prices rapidly escalated during WWI and crashed after the war's end.
- Roaring Twenties stock-market bubble (US) (1921–1929)
- Florida speculative building bubble (US)(1922–1926)
- Poseidon bubble (Australia) (1969–1970)
- Gold and Silver bubble (1976–1980)
- The dot-com bubble (US) (1995–2000)
- Japanese asset price bubble (1986–1991) Japan real estate and stock market boom
- 1997 Asian financial crisis (1997)
- United States housing bubble (US) (2002–2006)
- China stock and property bubble (China) (2003–2007)
- The 2000s commodity bubbles (2002–2008)
- Cryptocurrency bubble (2011–2018)
- 2000s Property bubbles
See also
- Boom and bust
- Business cycle
- Carbon bubble
- Economic collapse
- Extraordinary Popular Delusions and the Madness of Crowds
- Fictitious capital
- Financial crisis
- Hyman Minsky, especially his Financial Instability Hypothesis
- Irrational Exuberance by Robert Shiller
- Jesse Lauriston Livermore The Boy Plunger
- List of commodity booms
- Overheating (economics)
- Real estate bubble
- Reflexivity (social theory)
- Stock market bubble
- Unicorn bubble
- Krugman, Paul (9 May 2013). "Bernanke, Blower of Bubbles?". The New York Times. Retrieved 10 May 2013.
- King, Ronald R.; Smith, Vernon L.; Williams, Arlington W.; van Boening, Mark V. (1993). "The Robustness of Bubbles and Crashes in Experimental Stock Markets". In Day, R. H.; Chen, P. (eds.). Nonlinear Dynamics and Evolutionary Economics. New York: Oxford University Press. ISBN 978-0-19-507859-6.
- Lahart, Justin (16 May 2008). "Bernanke's Bubble Laboratory, Princeton Protégés of Fed Chief Study the Economics of Manias". The Wall Street Journal. p. A1.
- Shiller, Robert (23 July 2012). "Bubbles without Markets". Project Syndicate. Retrieved 17 August 2012.
A speculative bubble is a social epidemic whose contagion is mediated by price movements. News of price increase enriches the early investors, creating word-of-mouth stories about their successes, which stir envy and interest. The excitement then lures more and more people into the market, which causes prices to increase further, attracting yet more people and fueling 'new era' stories, and so on, in successive feedback loops as the bubble grows.
- Garber, Peter (2001). Famous First Bubbles: The Fundamentals of Early Manias. Cambridge, MA: MIT Press. ISBN 978-0-262-57153-1.
- Smith, Vernon L.; Suchanek, Gerry L.; Williams, Arlington W. (1988). "Bubbles, Crashes, and Endogenous Expectations in Experimental Spot Asset Markets". Econometrica. 56 (5): 1119–1151. CiteSeerX 10.1.1.360.174. doi:10.2307/1911361. JSTOR 1911361.
- Lei, Vivian; Noussair, Charles N.; Plott, Charles R. (2001). "Nonspeculative Bubbles in Experimental Asset Markets: Lack of Common Knowledge of Rationality Vs. Actual Irrationality" (PDF). Econometrica. 69 (4): 831. doi:10.1111/1468-0262.00222.
- Levine, Sheen S.; Zajac, Edward J. (27 June 2007). "The Institutional Nature of Price Bubbles". SSRN 960178.
- Brooks, Chris; Katsaris, Apostolos (2005). "A three-regime model of speculative behaviour: modelling the evolution of the S&P 500 composite index" (PDF). The Economic Journal. 115 (505): 767–797. doi:10.1111/j.1468-0297.2005.01019.x. ISSN 1468-0297.
- Brooks, Chris; Katsaris, Apostolos (2005). "Trading rules from forecasting the collapse of speculative bubbles for the S&P 500 composite index". Journal of Business. 78 (5): 2003–2036. doi:10.1086/431450. ISSN 0740-9168.
- Hommes, Cars; Sonnemans, Joep; Tuinstra, Jan; Velden, Henk van de (2005). "Coordination of Expectations in Asset Pricing Experiments". Review of Financial Studies. 18 (3): 955–980. CiteSeerX 10.1.1.504.5800. doi:10.1093/rfs/hhi003.
- Teeter, Preston; Sandberg, Jorgen (2017). "Cracking the enigma of asset bubbles with narratives". Strategic Organization. 15 (1): 91–99. doi:10.1177/1476127016629880.
- Quote from The Deacon's Masterpiece or The One-Hoss Shay, by Oliver Wendell Holmes, Sr.
- Robert E. Wright, Fubarnomics: A Lighthearted, Serious Look at America's Economic Ills (Buffalo, N.Y.: Prometheus, 2010), 51–52.
- "The Role of a Central Bank in a Bubble Economy - Section I - Gold Eagle". www.gold-eagle.com. Retrieved 31 August 2017.
- Garber, Peter M. (1990). "Famous First Bubbles". The Journal of Economic Perspectives. 4 (2): 35–54. doi:10.1257/jep.4.2.35.
- Froot, Kenneth A.; Obstfeld, Maurice (1991). "Intrinsic Bubbles: The Case of Stock Prices". American Economic Review. 81: 1189–1214. doi:10.3386/w3091.
- Topol, Richard (1991). "Bubbles and Volatility of Stock Prices: Effect of Mimetic Contagion". The Economic Journal. 101 (407): 786–800. doi:10.2307/2233855. JSTOR 2233855.
- Buchanan, Mark (19 July 2008). "Why economic theory is out of whack". New Scientist. Archived from the original on 19 December 2008. Retrieved 15 December 2008.
- Porras, E. (2016). Bubbles and Contagion in Financial Markets, Volume 1: An Integrative View. Springer. ISBN 978-1137358769.
- Krugman, Paul (24 August 2015). "A Movable Glut". The New York Times. Retrieved 24 August 2015.
- Caginalp, G.; Balenovich, D. (1999). "Asset flow and momentum: deterministic and stochastic equations". Philosophical Transactions of the Royal Society A. 357 (1758): 2119–2133. doi:10.1098/rsta.1999.0421.
- Caginalp, G.; Porter, D.; Smith, V.L. (1998). "Initial cash/asset ratio and asset prices: an experimental study". Proceedings of the National Academy of Sciences. 95 (2): 756–761. doi:10.1073/pnas.95.2.756. PMC 18494. PMID 11038619.
- Caginalp, G.; Porter, D.; Smith, V.L. (2001). "Financial Bubbles: Excess Cash, Momentum and Incomplete Information". J. Psychology and Financial Markets. 2 (2): 80–99. CiteSeerX 10.1.1.164.3725. doi:10.1207/S15327760JPFM0202_03.
- Righoltz, Barry (6 December 2013). "How do you define a bubble?". Bloomberg. Retrieved 11 November 2016.
- Harmon, D.; Lagi, M.; de Aguiar, M. A. M.; Chinellato, D. D.; Braha, D.; Epstein, I. R.; et al. (2015). "Anticipating Economic Market Crises Using Measures of Collective Panic". PLoS ONE. 10 (7): e0131871. doi:10.1371/journal.pone.0131871.
- Keim, Brandon (18 March 2011). "Possible Early Warning Sign for Market Crashes". Wired. http://www.wired.com/2011/03/market-panic-signs/
- Blodget, Henry (December 2008). "Why Wall Street Always Blows It". Retrieved 31 August 2017.
- "In come the waves: The worldwide rise in house prices is the biggest bubble in history. Prepare for the economic pain when it pops". The Economist. 16 June 2005.
The worldwide rise in house prices is the biggest bubble in history. Prepare for the economic pain when it pops.
- Porter, D.; Smith, V. L. (1994). "Stock market bubbles in the laboratory". Applied Mathematical Finance. 1 (2): 111–128. doi:10.1080/13504869400000008.
- Caginalp, G.; Ermentrout, G. B. (1990). "A kinetic thermodynamics approach to the psychology of fluctuations in financial markets". Applied Mathematics Letters. 3 (4): 17–19. doi:10.1016/0893-9659(90)90038-D.
- Caginalp, G.; DeSantis, M. (2011). "Stock Price Dynamics: Nonlinear Trend, Volume, Volatility, Resistance and Money Supply". Quantitative Finance. 11 (6): 849–861. doi:10.1080/14697680903220356.
- Caginalp, G.; DeSantis, M.; Swigon, D. (July 2011). "Are flash crashes caused by instabilities arising from rapid trading?". Wilmott Magazine. 11: 46–47.
- Odlyzko, Andrew. "The British Railway Mania of the 1840s" (PDF). University of Minnesota. Retrieved 29 November 2018.
- Tuckett, David; Taffler, Richard. "A Psychoanalytic Interpretation of Dot.Com Stock Valuations". SSRN. Retrieved 29 November 2018.
- "Bloomberg-Barry Ritholz-How do you define a bubble and are we in one now? December 2013". Retrieved 31 August 2017.
- Leonhardt, David (25 August 2015). "Part of the Problem: Stocks Are Expensive". The New York Times. Retrieved 31 August 2017.
- "Levy Institute-Hyman Minsky-the Financial Instability Hypothesis-May 1992" (PDF). Retrieved 31 August 2017.
- "Get the Report: Conclusions : Financial Crisis Inquiry Commission". fcic.law.stanford.edu. Retrieved 31 August 2017.
- "Land Boom in 1880s Melbourne".
- "Historical Rhodium Charts". Kitco. Retrieved 19 February 2010.
- Reinhart, Carmen M.; Rogoff, Kenneth S. (2009). This Time is Different: Eight Centuries of Financial Folly. Princeton, NJ: Princeton University Press. ISBN 978-0-691-14216-6.
- When Bubbles Burst (PDF), World Economic Outlook, International Monetary Fund, April 2003. |
You might not be aware of this, but magic happens in the background of the Linux operating system. Without your help or intervention, programs start and daemons run. These things happen because Linux has an outstanding scheduling system known as cron. Want to make some magic? Let's get to know cron.
The cron utility allows the user to manage scheduled tasks from the command line. Once a user understands how cron works, it's not difficult to use. But for some, the understanding can be a challenge: users must understand how Linux interprets and reads time on a system, and they need to know how to edit their crontab files. Once a user has a full understanding of these concepts, they will be masters of cron. Let's examine cron and how to create proper entries in a user's crontab file.
By default, a version of cron (there's more than one implementation) will already be installed on the Linux system, so there is no need to worry about installing the tool. As for its use, there are two commands associated with cron:
- cron: The daemon that is used to execute scheduled commands.
- crontab: The command used to invoke the editor for managing a users cron jobs.
A user's crontab file is the file that holds the jobs read by cron. Each user on a system can have a crontab file (this includes the root user) where jobs and tasks can be controlled. The system itself also has a crontab file, located at /etc/crontab, but it should not be edited by the user; this file is generated upon the installation of the operating system. If the /etc/crontab file is examined, it turns out that it actually controls the cron jobs located in directories such as /etc/cron.monthly. But that file isn't going to be the focus here. Instead, the user crontab file will be the primary focus, as that is the file used for scheduling ordinary user tasks.
The one aspect of cron that trips most users up is the way in which time is used. For each crontab entry a specific time is declared for when the entry will run. The time entry is in the form:
0 23 * * *
Each time entry consists of five sections:
- Minute (0-59)
- Hour (0-23 With 0 being 12:00 AM)
- Day of the month (1-31)
- Month (1-12)
- Day of the week (0-6 With 0 being Sunday)
So a typical entry would look like:
Minute Hour Day Month DayOfWeek
Some examples for time:
0 23 * * * Daily at 11 PM
30 22 * * * Daily at 10:30 PM
0 23 1 * * Every first day of the month at 11 PM
0 23 * * 0 Every Sunday at 11PM
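Most modern cron implementations also accept lists, ranges, and step values in any of the five fields. A few more illustrative entries (the command is omitted, as in the examples above):

*/15 * * * * Every 15 minutes
0 8-17 * * 1-5 On the hour, from 8 AM to 5 PM, Monday through Friday
30 4 1,15 * * At 4:30 AM on the 1st and 15th of each month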
Now that time is understood, it's time to begin adding entries. To view a user's crontab file, the crontab command is invoked. There are three main options to use with the crontab command:
- -e: Edit the crontab file.
- -l: List the contents of the crontab file.
- -r: Remove the contents of the crontab file.
When the command crontab -l is invoked, the entries in the user's crontab file are displayed (if any exist). To add an entry to a user's crontab file, the command crontab -e is invoked, which opens the crontab file in the default editor (such as ed, vim.tiny, or nano). When the crontab -e command is run for the first time, the default editor is set: select the number which corresponds to the desired editor.
Figure 1 shows a crontab entry created by the Luckybackup backup application.
To illustrate how to add a new entry into crontab, a simple backup script will be used. The contents of that script might look like:
#!/bin/bash
# Log the start time, archive /data into a dated directory, then log completion.
echo Backup Started `date` >> ~/backuplog
mkdir /media/EXT_DRIVE/backups/`date +%Y%m%d`
tar -czf /media/EXT_DRIVE/backups/`date +%Y%m%d`/data.tar.gz /data
echo Backup Completed `date` >> ~/backuplog
Where EXT_DRIVE is the location of an externally attached drive where the backup data will reside.
The above script will be saved in the user's home directory as .my_backup.sh and given executable permission with the command chmod u+x ~/.my_backup.sh. Now, with crontab in edit mode, create an entry that will execute the script every night at 11 PM by adding the following line:
0 23 * * * ~/.my_backup.sh
(Note the 0 in the minute field: a * there would run the script every minute of the 11 PM hour.)
With that entry in place, save and close the editor (how this is done will depend upon the default editor you have chosen). When this is done, as long as there are no errors, crontab will report "crontab: installing new crontab" to indicate the entry was successful. If there are errors, open the crontab file back up to make the necessary changes.
Editing the crontab of a Different User
Say a different user's crontab must be edited. It is not necessary to su to that user, as crontab has an option built in for that specific purpose. If crontab is issued as crontab -e -u USERNAME (where USERNAME is the user in question), the crontab file of the specified user will be opened for editing. This command, however, can only be issued by a user with administrative privileges (or the command can be issued using sudo). Of course, editing other users' crontab files should be limited to administrators.
The cron system helps to make Linux one of the most flexible operating systems around. Cron not only helps the system keep its logs rotated and clean, it allows users to schedule their own tasks, scripts, and jobs. Although the time aspect of cron can be a bit tricky to grasp, once it is understood, the rest falls into place.
If the whole idea of editing cron entries from the command line seems a bit much, you will be glad to know there are GUI tools for this task. Take a look at a tool like GNOME Schedule (found in your Add/Remove Software tool) for an application that can manage your cron tasks with the help of a user-friendly GUI. But for those who really want to understand Linux, getting to know cron and crontab is essential. |
A disk is mounted on an axis, as shown in the above figure. The radius of the disk and the rotational inertia of the disk about the axis are given, and the axis is frictionless. A block of given mass hangs from a massless cord that is wrapped around the rim of the disk, and the system is released from rest. If the kinetic energy of the block reaches a given value, what is the approximate distance the block has fallen, for the stated value of the gravitational acceleration?
A thin rod with mass is standing vertically with one end on the floor. Then, if it is allowed to fall, what is the approximate speed of the other end just before it touches the floor, assuming that the gravitational acceleration is
Note: As shown in the figure below, the rotational inertia of a thin rod that rotates about the axis through its center perpendicular to length is where and are the mass and length of the thin rod, respectively.
A thin hoop of mass and radius is rotating at If it must stop in approximately how much work must be done to it?
A uniform thin rod with length and mass of and respectively, is attached to a rotational axle at its one end by a single bolt. Then how much work is needed to increase the speed of the rod to from rest?
As shown in the figure below, the rotational inertia of a thin rod that rotates about the axis through its center perpendicular to length is where and are the mass and length of the thin rod, respectively.
A Frisbee throwing disk flies straight only if you spin it. When we spin a Frisbee we generate angular momentum, which is conserved. Since it is conserved the Frisbee resists tilting, which would change the angular momentum, and so it tends to fly straight. However, giving a Frisbee angular momentum also takes energy. If I spin a Frisbee faster and so double its angular momentum, by what factor have I increased the rotational kinetic energy?
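(One way to see the answer: since L = Iw and rotational kinetic energy is KE = (1/2)Iw^2, the kinetic energy can be rewritten as KE = L^2/(2I). With the rotational inertia I unchanged, doubling the angular momentum L therefore quadruples the rotational kinetic energy.)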
Milestones of Communication
A newborn’s ability to functionally use hearing develops with experience. Most babies are born with normal hearing. Binaural hearing (hearing in both left and right ears) allows your child to pinpoint sound with great accuracy and understand speech in a noisy background.
Newborns can localize sound accurately to their right and left sides. Eye movement or a slow head turn in the direction of the sound source can be observed if your newborn is awake, alert and quiet. Between 1 month and 4 months of age, your baby may not exhibit the same type of head-turning or orienting behavior. However, at five months, babies begin to seek the sound source. At about this age, the head turn changes from a reflexive activity (in the newborn) to a purposeful response.
Try this with your five- to six-month-old: make soft sounds from behind and to one side as your baby looks straight ahead. A soft rattle shaken at ear level or whispering the baby’s name should elicit a head turn towards the source. While we expect infants to startle when they hear very loud sounds, your baby should respond to soft sounds as well. During the first year, a baby’s ability to accurately locate sounds is refined. Your baby should look for the sources of common sounds such as the doorbell, the telephone ringing, a door opening, children playing, or a musical toy.
Babies learn to associate what they hear with people, places, objects, or events. It is important to watch for critical milestones, which serve as guideposts for normal hearing development:
- By 6 months, babies recognize speech sounds of their own language more than those of a foreign language. They recognize familiar voices, play with their own voices, engage in vocal play with parents, and experiment with multiple speech and non-speech sounds.
- By 9 months, babies demonstrate an understanding of simple words (“mommy,” “daddy,” “no,” “bye-bye”).
- By 10 months, a baby’s babbling should sound “speech-like,” with strings of single syllables (“da-da-da-da”).
- By 12 months, one or more real, recognizable spoken words emerge.
- By 18 months, babies should understand simple phrases, retrieve familiar objects on command (without gestures) and point to body parts (“where’s your…” ears, nose, mouth, eyes, etc.). At the same time, 18-month-olds should have a spoken vocabulary of between 20-50 words and short phrases (“no more,” “go out,” “mommy up”).
- By 24 months, a toddler’s spoken vocabulary should be 200-300 words, coupled with the emergence of simple sentences. Most of it should be understandable to adults who are not with the toddler on a daily basis. A toddler should be able to sit and listen to read-aloud storybooks.
- Between ages 3 and 5, spoken language should be used constantly to express wants, reflect emotions, convey information, and ask questions. A preschooler should understand nearly all that is said. Vocabulary grows from 1,000 to 2,000 words during this period, with words linked together in complex and meaningful sentences. All speech sounds should be clear and understandable by the end of the preschool period.
People who fall asleep for more than a few minutes are often aware of those lapses. Drivers may not be aware of shorter lapses and “microsleeps”, which can also have serious consequences when a quick reaction is needed to avoid a crash.
Most people are not aware of how drowsiness affects their driving performance, even without falling asleep. Studies suggest that people cannot reliably detect how sleepy they are.
Research has revealed a few indicators of drowsiness and drowsy driving which include:
- Frequent blinking, longer-duration blinks and head nodding
- Having trouble keeping one’s eyes open and focused
- Memory lapses or daydreaming
- Drifting out of one’s driving lane or off the road
- Easy books, books a child has read before, and old favorites are most appropriate.
- Reading aloud is great practice.
- Paired Reading. An adult and a child read together at the same time. The adult reads loud enough for the child to hear both the adult voice and his or her own voice, and remembers to read slowly at the child’s reading pace. Some parents complete all of the reading together. Other parents start reading together, the child signals when ready to read solo, and then the reading is completed by the child.
- Echo Reading. The adult reads a paragraph, sentence, or phrase and the child reads the same section afterwards, like an echo. Some readers can manage echo reading a full paragraph. Others can manage only a phrase.
- Shared Reading. The adult and the child share the reading by taking turns reading aloud. This can be done with two different kinds of texts. With a regular book, the adult and child just take turns reading. The adult starts the reading, and looks ahead for shorter sentences, passages the child can read, or words the child may know. The child reads those parts. In some books, the texts for the parent and the child are different. The parent reads the more sophisticated text and the child reads selections written at the child’s level. Check out We Both Read books.
- Books on tape.
- Older children read to younger children.
- Read aloud while riding in the car.
- Family members bring a favorite passage or favorite poem to read out loud at the dinner table.
- Read from a comic book and mimic how the character may speak the part.
During the mid-nineteenth century, states began passing compulsory education laws, and although all states had these laws in place by the time the United States entered World War I, there was still quite a disparity between the levels of basic education received by the soldiers. Mobilization efforts during WWI highlighted the need for greater emphasis on education in the United States, but they also highlighted the need to emphasize a common nationality among its citizenry. The war had created a stigma on citizens and immigrants who were too closely related to or associated with the enemy. It was felt that the ‘old country’ culture, still held by many, needed to be replaced by a commitment to a less definable, but more patriotic, American culture. The desire to eliminate overt connections with European culture, a culture that seemed to instigate war rather than peace, led to strong measures designed to force change in the U.S. population. One measure was the effort to eliminate parochial schools, which were viewed as being too closely tied to European culture. When Oregon amended its compulsory education laws in 1922 with the intent of eliminating parochial schools, it faced opposition culminating in a Supreme Court case (Pierce v. Society of Sisters, 1925) that ruled against the state. It was hoped that public education would transform the population into a more cohesive culture, and while states couldn’t force public school attendance over private school attendance, over time many states were able to dictate curriculum requirements and achieve the underlying goals sought by legislators during the post-war period.
Many in the United States believed that the nation had a vital responsibility to encourage and spread notions of republican democracy. A growing belief in ‘American exceptionalism’ developed in the post-war years, due in part to wartime propaganda. If the United States was to be exceptional then it needed to guarantee that its public understood what made it exceptional. Accomplishing this task meant that its citizenry needed to understand history, and not just the history of the United States beginning with colonization or independence, but a citizen needed to understand the connection between the United States and ancient history where the foundations of democracy resided. Compulsory education, classes in American History and Western Civilization, and an emphasis on U.S. exceptionalism became the foundation for unifying a nation during the twentieth century. |
About 250 miles overhead, a satellite the size of a loaf of bread flies in orbit. It’s one of hundreds of so-called CubeSats—spacecraft that come in relatively inexpensive and compact packages—that have launched over the years. So far, most CubeSats have been commercial satellites, student projects, or technology demonstrations. But this one, dubbed MinXSS (“minks”) is NASA’s first CubeSat with a bona fide science mission.
Boasting intricate patterns and translucent colors, planetary nebulae are among the most beautiful sights in the universe. How they got their shapes is complicated, but astronomers think they’ve solved part of the mystery—with giant blobs of plasma shooting through space at half a million miles per hour.
Just 25 years ago, scientists didn’t know if any stars—other than our own sun, of course—had planets orbiting around them. Yet they knew with certainty that gravity from massive planets caused the sun to move around our solar system’s center of mass. Therefore, they reasoned that other stars would have periodic changes to their motions if they, too, had planets.
There is this great idea that if you look hard enough and long enough at any region of space, your line of sight will eventually run into a luminous object: a star, a galaxy or a cluster of galaxies. In reality, the universe is finite in age, so this isn’t quite the case. There are objects that emit light from the past 13.7 billion years—99 percent of the age of the universe—but none before that. Even in theory, there are no stars or galaxies to see beyond that time, as light is limited by the amount of time it has to travel.
When the advent of large telescopes brought us the discoveries of Uranus and then Neptune, they also brought the great hope of a Solar System even richer in terms of large, massive worlds. While the asteroid belt and the Kuiper belt were each found to possess a large number of substantial icy-and-rocky worlds, none of them approached even Earth in size or mass, much less the true giant worlds. Meanwhile, all-sky infrared surveys, sensitive to red dwarfs, brown dwarfs and Jupiter-mass gas giants, were unable to detect anything new that was closer than Proxima Centauri. At the same time, Kepler taught us that super-Earths, planets between Earth and Neptune in size, were the galaxy’s most common, despite our Solar System having none.
As Earth speeds along in its annual journey around the Sun, it consistently overtakes the slower-orbiting outer planets, while the inner worlds catch up to and pass Earth periodically. Sometime after an outer world—particularly a slow-moving gas giant—gets passed by Earth, it appears to migrate closer and closer to the Sun, eventually appearing to slip behind it from our perspective. If you’ve been watching Jupiter this year, it’s been doing exactly that, moving consistently from east to west and closer to the Sun ever since May 9th.
By Justin Belknap
NAR# 97349 JR
In the following article, I will tell you about my experience with rocketry in my sixth grade class: how I watched the other kids learn about rockets and learned some things as well; how I built and painted a rocket with someone who was not related to me; and, last but definitely not least, the look on the other kids’ faces when the rockets launched on launch day.
When isolated stars like our Sun reach the end of their lives, they’re expected to blow off their outer layers in a roughly spherical configuration: a planetary nebula. But the most spectacular bubbles don’t come from gas-and-plasma getting expelled into otherwise empty space, but from young, hot stars whose radiation pushes against the gaseous nebulae in which they were born. While most of our Sun’s energy is found in the visible part of the spectrum, more massive stars burn at hotter temperatures, producing more ionizing, ultraviolet light, and also at higher luminosities. A star some 40-45 times the mass of the Sun, for example, might emit energy at a rate hundreds of thousands of times as great as our own star.
If you want to collect data with a variety of instruments over an entire planet as quickly as possible, there are two trade-offs you have to consider: how far away you are from the world in question, and what orientation and direction you choose to orbit it. For a single satellite, the best of all worlds comes from a low-Earth polar orbit, which does all of the following:
The farther away you look in the distant universe, the harder it is to see what’s out there. This isn’t simply because more distant objects appear fainter, although that’s true. It isn’t because the universe is expanding, and so the light has farther to go before it reaches you, although that’s true, too. The reality is that if you built the largest optical telescope you could imagine — even one that was the size of an entire planet — you still wouldn’t see the new cosmic record-holder that Hubble just discovered: galaxy GN-z11, whose light traveled for 13.4 billion years, or 97% the age of the universe, before finally reaching our eyes.
Most people reading this would be well aware of what a mixer is used for, but I'll reiterate here. The job of an audio mixer is to combine various audio signals into a single audio signal. It is better known in electronic terms as a summing circuit; that is to say, the output is the sum of all of the inputs. A summing node is often represented as a circle with a PLUS (+) symbol in it.
Audio is of course an AC (alternating current) signal, but if we look at the incoming signals as a frozen moment in time we can represent them as 2 or more DC voltages. This is only useful to illustrate the point.
If we had two signals to be mixed, the first at 2 volts and the second at 3 volts, the output should be the sum of these two voltages: 2+3=5. If on the other hand the two voltages were 2 volts and -3 volts, then the output would be -1 volt. We are now subtracting 3 volts from the +2 volts, leaving -1. It is important to recognise that we are dealing with what is known as a bipolar signal, that is, one that can be positive or negative around a zero base-line.
When you get to the stage of adding many signals together, the complexity grows. In1 + In2 + In3 + In4 + .... and so on.
Because each incoming signal has its own load impedance it is impractical just to wire all of them together and hope for the best, especially when the following device you are trying to mix into also presents its own load impedance. Sometimes you may be able to get away with it because the combined impedance is quite high. However, most of the time it drags the whole network down and causes one or more devices to fail or distort or whatever. Usually no damage is done, but it just won't work.
What is required is a little load isolation. (See Circuit 1: Passive mixer) The trade-off is that you can't use terribly high value resistors because of the losses that they may cause, especially if the load impedance of the following device is a little low. This will give the effect of severely attenuating some or all of the signals. A practical trade-off has to be reached, and this is as much trial and error as anything because the conditions change with each new device added or changed.
The device used at the summing node, IE: an amplifier or tape deck, should be able to provide enough gain to compensate for the combined losses through the resistors and the combined loading of the system. But the loading will change depending on the combination of devices you have hooked into it.
This approach also creates another side effect: a signal flowing into the summing node via one source can pollute the audio signals of other devices. Say you had two cassette decks that you wished to mix, but you also wanted to send the audio from cassette deck #1 to an effects processor. The audio from cassette deck #2, although attenuated slightly, will find its way back to the audio from cassette deck #1 and also go to the effects processor.
Active mixer stages that use op-amps are generally known as virtual earth pre-amps. These are inverting in nature, 180 degrees out of phase; IE: the signal coming out of the mixer is upside down as compared to that which is entering it. You then need to use another inverting pre-amp to recover the phase.
This would seem silly at first until you realize that virtual earth means that the inverting node of the op-amp is held virtually at ground (zero volts) potential. Any signal entering the stage via one resistor cannot find its way back out of any other resistor. This prevents the audio from, say, one synth polluting the audio from another. Particularly useful in a mixer with many busses and sends.
Generally speaking the pre-amp stage does not provide any gain, IE: is 1:1 unity gain. A signal passing through a resistor with no load also presents no loss, even with values beyond 1 meg, although you may drop the effective current at the other end of the resistor. In this case the current loss is largely irrelevant, especially at line level, and is compensated by the op-amp's drive current in an active system.
It is better to have a mixer stage with no gain (or unity gain) because this will not amplify the noise. If good quality op-amps are used, they will not add significantly to the overall noise performance. So the RMS voltage coming out of the mixer should be the same as the sum of all its inputs. If gain is necessary for a microphone or phono etc, the gain should be a special stage at the top of the chain, IE: the first preamp in the mixer channel. This is then mixed with everything else once the microphone is amplified to line level. This gain stage only adds noise to the microphone and not to the sum of the signals passing through the mixer.
It is interesting to note that resistors themselves add noise to a circuit. This is known as thermal noise. Generally speaking the rule of thumb is: the larger the value of the resistor, the greater the thermal noise. This may not be significant in mixer stages at line level, but where large gains are required it is desirable to use smaller value resistors (as small as possible within reason). Of course sometimes this cannot be achieved, but it is worth remembering as a rule of thumb. Metal film resistors have less thermal noise than carbon film resistors and are more temperature stable overall. So now there's two reasons to use metal films in audio circuits.
Circuit 2 shows a basic active mixer. It uses 2 virtual earth preamps: one for the summing node and one to re-invert the phase of the signal.
The summing node (the point at which all the resistors meet) enters the inverting input of the op-amp. A feedback resistor is connected between the output of the op-amp and the inverting input. The function of this feedback loop is essentially to limit the open-loop gain of the op-amp.
Any signal entering the inverting input of the op-amp will appear at the output but it will be upside down, that is to say 180 degrees out of phase. In other words if you put 2 volts in you'd expect -2 volts out. To achieve unity gain (that is, no gain or amplification at all) the feedback resistor must be the same as the summing resistor, in this case 10K. All the summing resistors are 10K and the feedback resistor is 10K. Because the feedback resistor feeds the output signal back to the inverting input of the op-amp at 180 degrees out of phase, it cancels out any gain. It also means that the inverting input of the op-amp is held pretty close (if not exactly) to zero volts, or earth potential. Thus the term "virtual earth".
Any signal coming in through the summing resistor is like dumping it to ground via 10K; it theoretically has the same loss. However the feedback resistor of 10K gives the exact opposite in gain. So if you feed 2 volts in you will get 2 volts out, only it will be upside down.
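The behaviour described above is just the standard inverting-amplifier gain relation, Vout = -(Rfeedback/Rinput) x Vin. A minimal numeric sketch (Python, using the 10K values from Circuit 2):
r_in = 10e3          # summing resistor (ohms)
r_fb = 10e3          # feedback resistor (ohms)
v_in = 2.0           # input voltage (volts)
v_out = -(r_fb / r_in) * v_in
print(v_out)         # -2.0: unity gain, phase inverted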
Because the summing node (The inverting input of the op-amp) is at virtually earth potential, there is little chance that this signal will bleed it's way out to any of the other inputs. Essentially speaking all the audio sources are isolated from each other.
However we're still left with the problem of the phase being wrong. If the output of the first op-amp were recombined with one of the other signals at a later stage it would cancel out rather than mix. So we have to re-invert the phase with yet another op-amp. This is a unity gain amplifier just like the first, except that there is no summing node as such (except for the feedback resistor of course). The output of these two stages will now be the sum of all the inputs with the correct phase. Because of the inherent compensation of the feedback/op-amp/summing node, there is virtually no limit to the number of inputs you can put on this. Most modern op-amps have enough drive capability that 128 inputs would be just peanuts.
However it must be remembered that you are summing the inputs, so if you had a power supply of say +/- 15 volts, and 4 inputs of +5 volts each, the result would be 20 volts mathematically speaking. But the op-amp can only produce +15 volts, so you would be clipping by 25%. Distortion occurs. Most op-amps can't swing exactly to the supply rails so clipping and distortion would be even worse. In practice however most audio signals wouldn't exceed a few hundred millivolts; a 2 volt peak-to-peak signal is considered to be a very high level.
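A small sketch of that headroom arithmetic (Python, hypothetical values): four 5-volt inputs sum to 20 volts, but an ideal op-amp on +/-15 volt rails can swing no further than the rails:
inputs = [5.0, 5.0, 5.0, 5.0]             # instantaneous input voltages (V)
v_rail = 15.0                             # supply rails: +/- 15 V
v_sum = -sum(inputs)                      # virtual-earth summer inverts the sum
v_out = max(-v_rail, min(v_rail, v_sum))  # output cannot exceed the rails
print(v_sum, v_out)                       # -20.0 -15.0: 5 V (25%) is clipped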
Two more variations:
The third circuit shows a mixer with input attenuation. This is a fairly simple concept: a potentiometer is placed in the signal's path between the source and the summing resistor. When the wiper of the pot is at the top it simply represents a 10K load to the source. 10K is a pretty high value and most line-level devices can easily drive this load. With the wiper at the other end of the pot it still represents a 10K load to the source, but the input is effectively at ground (shorted out) so no signal gets through. With the wiper in the midway position the input loading is still 10K, however the signal has to flow through a 5K resistance and is also dumped to ground by 5K, halving the potential reaching the input resistor/summing node.
The fourth and final circuit shows a full-on stereo mixer. Two new types of input networks are shown. The first is a stereo-in with balance, similar to your stereo amplifier etc. A dual-gang pot is used for volume whilst balance is single. Note that following the volume pot is a 10K resistor connected to one end of the balance pot. With the wiper in the centre position and connected to ground as it is, the incoming audio is virtually running through a 22.5K resistor to ground. That is 10K + (1/2 of 25K) = 22.5K. Because of this attenuation the feedback resistor around the virtual earth op-amp is increased to 33K to compensate. This is not exactly unity gain but it comes awfully close: a very slight and probably unnoticeable gain.
The other input is MONO in but is pannable between left and right. The same deal as above applies here, except that the first two 10K resistors are joined together so that the signal is split across two paths. Strictly speaking the first two 10K resistors in the stereo input are not necessary, but are needed for the mono circuit so that the pan pot does not short out the signal at either extreme of travel. They are included in the stereo input simply to compensate for unity gain overall. This input scheme is the basis for 99% of all large mixing consoles.
Driving the busses:
Note here that mixers are more repetitious than complex. The circuits are relatively simple; it's just that there's a lot of them, especially in large recording consoles.
Usually these desks are seen in two halves. The input half and the output half. No matter how complex the input half may become, the output half is essentially just a virtual earth pre-amp as described in the circuits above. Often it is required to have many such busses for things like effects sends, subgroups, monitor bus and so on.
One of the beauties of the virtual earth mixer is that there is also virtually no limit to the number of additional busses as well as the main bus. One could arrange an effects send bus that derives its signal from the same channel as the main bus, except that each has its own volume, pan and assignment independent of the other.
A FINAL WORD on the CAPACITORS:
There are capacitors in two main circuit functions on the schematics above. The first is an electrolytic blocking capacitor. The idea is that a DC voltage can't get through a capacitor in series. What this means is that any DC offset voltage emanating from a preceding stage or source will be knocked on the head; only the AC voltage (the audio signal) will get through. The reason for this is simple: suppose you had 10 sources each with a +1 volt DC offset. This would add up in the mixer stage to be +10 volts. Not exactly desirable. It is therefore usual to use a blocking capacitor to stop this happening. This may not be so in all cases but is a rule of thumb for most audio circuits. The blocking capacitor is placed on the input near to where an unknown source is to enter the circuit. It is also usual to have one on the output stage, which blocks any DC from leaving your circuit and propagating into any following equipment. The reason you need them on both input and output is simply that you never know what you might connect your circuit up to, and there is no convention. If you are unsure of the polarity required for the blocking capacitor you can use a bi-polar electrolytic, which is essentially two normal electrolytic capacitors back to back in the one package. The value of these capacitors is not important as long as it has no effect on the audio signal (IE: accidentally creating a high-pass filter that rolls off the bass) and the voltage rating is sufficient enough that it won't burn out. Usually a 16 volt rating is sufficient; 25 volts to be on the safe side. 50 volts is called "over-engineering". The value of the capacitor can be anywhere between 0.1uF to 47uF but usually between 1.0uF and 10uF.
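To check that a chosen blocking capacitor stays out of the audio band, the corner frequency of the high-pass filter it forms with the following resistance can be estimated as f = 1/(2*pi*R*C). A quick sketch (Python), assuming the capacitor feeds a 10K summing resistor into a virtual-earth input:
import math
r = 10e3                              # following summing resistor (ohms)
for c in (0.1e-6, 1e-6, 10e-6):       # candidate blocking caps (farads)
    fc = 1 / (2 * math.pi * r * c)
    print("%4.1f uF -> corner at about %6.1f Hz" % (c * 1e6, fc))
# 0.1 uF rolls off below ~159 Hz (audibly thin bass);
# 1 uF is down around 16 Hz; 10 uF around 1.6 Hz.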
The other two capacitors, 27pF and 47pF, are optional and for stability of the op-amps. Truth be known these were left in the schematic by accident because I simply modified the circuit from one I was working on at the time of writing. The original circuit was designed to closely approximate another commercial mixer as I was extending its capabilities.
Out of interest, these two capacitors cause the op-amps to behave as slight integrator-filters, limiting the top-end response slightly above the audio bandwidth. This is sometimes necessary where the op-amps used have such a high gain-bandwidth product that they tend to saturate with RF or at least HF signals, thus becoming unstable in certain situations. Generally speaking these are largely irrelevant to the design.
Well hopefully I've provided enough information so you could go out and roll your own designs. And hopefully I've been able to word it in such a way that it's relatively understandable. If there are any mistakes, errors or omissions, please feel free to point them out. But please, no nit-picking. I'm only doing this because of the number of questions asked on this subject and the relative interest for people to design their own.
No responsibility is taken for any damages or any other shortcomings if you actually use this information. If you start out building one of my designs and end up wiring yourself to the national grid, it's your problem.
And as always. Be absolutely ICebox
A. What is a key signature?
- In musical notation, a key signature is a series of sharp symbols or flat symbols placed on the staff, designating notes that are to be played one semitone higher or lower unless otherwise noted with an accidental.
- Key signatures are generally written immediately after the clef at the beginning of a line of musical notation, although they can appear in other parts of a score, usually after a double bar.
- In music, sharp means higher in pitch. More specifically, in musical notation, sharp means "higher in pitch by a semitone," and has an associated symbol which looks somewhat like a "#" (number sign).
- In music, flat means "lower in pitch." More specifically, in music notation, flat means "lower in pitch by a semitone," and has an associated symbol, which looks like a lowercase "b".
- For example, here is what a B Major scale looks like written with accidentals:
- Here is what the same scale looks like using the key signature:
B. Let’s play Old MacDonald Had A Farm
• This song is in G Major. It has one sharp: F#.
• This song is in 4/4 time.
• Make sure that you hold the dotted half notes and whole notes out for their entire value!
What are they?
Lobsters are ten-legged (decapod) crustaceans.
The American lobster is the only species of clawed lobster in the Northwestern Atlantic region.
Where are they?
The American lobster is distributed throughout the Northwest Atlantic from the Strait of Belle Isle, Newfoundland to Cape Hatteras, North Carolina.
They are most abundant in coastal zones at depths of less than 150 ft. (~50 m). The greatest abundance of lobster occurs within the Gulf of Maine — from mid-coast Maine to southwest Nova Scotia.
American lobster is a long-lived species known to reach more than 40 lb. (18 kg).
Age is unknown because all hard parts are shed and replaced at molting (shedding), leaving no accreting material for age determination. In Massachusetts, shedding typically occurs between June and October.
Lobsters at minimum legal size are generally considered to be between 5 and 7 years of age based on hatchery observations. Maximum age is generally considered to be between 30 and 40 years.
Fertilized eggs are carried on the female abdomen for a 9 to 12 month period of development prior to hatching.
Female lobsters carry between 1,000 and >100,000 eggs depending on the size of the female.
Hatching typically occurs over a 4 month period from May through September. In Massachusetts we typically see peak hatching from late-June through early-July.
It is unlawful for any fisherman to take or possess any egg-bearing female lobster or female lobster with the egg mass removed, at any time.
When eggs are extruded they are dark green and the female is called a “green egger.”
As eggs develop and approach hatching time they turn brown to reddish brown and the female is called a “brown egger.”
The circulation of water in the Atlantic Ocean reversed its direction less than 20,000 years ago, a study has found.
Nowadays, warm currents such as the Gulf Stream travel from the tropics to the subpolar North Atlantic, where they cool. Crucially for the world's climate, they bring warmth with them.
But according to researchers led by the Universitat Autònoma de Barcelona, things were very different in the past.
The team investigated the distribution of isotopes in the Atlantic Ocean. These are generated from the natural decay of uranium in seawater, and are distributed with the flow of deep waters across the Atlantic basin, indicating where the waters carrying them originated.
They found that there was a period during the ice age 20,000 years ago when the flow of deep waters in the Atlantic was reversed; the climate of the North Atlantic region was then substantially colder and deep convection was weakened.
At that time, the balance of seawater density between the North and South Atlantic was shifted in such a way that deep water convection was stronger in the South Polar Ocean, with warm waters flowing southward rather than northward.
The authors say the study shows that the Atlantic circulation pattern in the past was very sensitive to changes in the salt balance of Atlantic Ocean currents. The Southern Ocean was then much saltier than it is now.
Similar changes in seawater salt concentration are expected to take place in the North Atlantic in the course of climate warming over the next 100 years, leading to the possibility that the ocean circulation could change again.
Who taught you how to fill in a job application? It's not surprising if you don't remember, or if you learned it by trial and error. Completing job applications is not routinely taught in junior high or high school, but it can be very helpful to those groups who do get the benefit of such training. Whether you're teaching teenagers, first-time adult job seekers or newly arrived immigrants, walk your students through the process of filling out a job application to give them the tools for this important part of the job search.
Collect a variety of completed job applications. You want to create a set of good examples and another set of "how not to do it" examples.
Show the students examples of completed job applications from both categories. Point out the differences between good and bad examples, such as complete versus incomplete, neatly printed versus scribbled, and crisp and clean versus stained and wrinkled.
Use one of the good examples of a completed application as an instructional tool. Walk the students through the application and point out key steps. Explain that, because employers are looking for applications that are complete, students should not leave blanks. If something is not applicable to them, tell them to write "N/A" rather than leave the space blank. Remind students to print neatly in black or blue ink. Stress the importance of correct spelling -- job applications are not the place for the abbreviated texting or Twitter language they may be used to.
Stress the responsibilities associated with completing job applications. Underscore the importance of being completely truthful, for example. Tell students they must ask people in advance for permission to use their names and contact information when listing references. Remind them they are signing their names to these applications, which makes them binding documents.
Give each student one or more blank job applications. Start with applications they are likely to encounter when applying for traditional student jobs, such as fast food worker or mall store clerk. Instruct them to fill out the application as if they were applying for an actual job.
Review with the class the applications the students filled out. Without singling out a particular individual, point out common mistakes such as forgetting to fill in all the blanks or using cursive writing instead of printing. Ask students if they have any questions.
On August 26, 1976, a time bomb exploded in Yambuku, a remote village in Zaire (now the Democratic Republic of the Congo). A threadlike virus known as Ebola had emerged, soon earning a grim distinction as one of the most lethal, naturally occurring pathogens on earth, killing up to 90 percent of its victims and producing a terrifying constellation of symptoms known as hemorrhagic fever.
Now, Charles Arntzen, a researcher at the Biodesign Institute at Arizona State University, along with colleagues from ASU, the University of Arizona College of Medicine-Phoenix, and the United States Army Medical Research Institute of Infectious Diseases, Fort Detrick, MD, have made progress toward a vaccine against the deadly virus.
The group's research results appear in today's issue of the Proceedings of the National Academy of Sciences, along with a companion paper by their collaborators at Mapp Pharmaceuticals in San Diego, CA, led by Larry Zeitlin. Arntzen's group demonstrated that a plant-derived vaccine for Ebola provided strong immunological protection in a mouse model.
If early efforts bear fruit, an Ebola vaccine could be stockpiled for use in the United States, should the country fall victim to a natural outbreak or a bioterrorism event in which a weaponized strain of the virus were unleashed on soldiers or the public.
To date, Ebola outbreaks have been mercifully rare. For researchers like Arntzen however, this presents a challenge: "With other lethal viruses like HIV there is a common pattern of occurrence, allowing for vaccine testing. For example, an AIDS vaccine study is now underway at two locations in Thailand, which were chosen because of a current high incidence of the disease."
By contrast, Ebola events are fleeting, episodic and largely unpredictable. For this reason, Arntzen stresses that an Ebola vaccine would most likely not be used prophylactically, that is, as a means of protecting large populations, as in the case of common vaccines against diseases like influenza or polio. Instead, the idea is to have a sizeable store of the vaccine on hand in the event of a sudden outbreak, either natural or nefarious.
A killer up close
Ebola belongs to a family of viruses known as filoviridae, which take their name from their serpentine, filamentous structure (see Figure 1). Filoviridae fall into two broad categories known as Ebola-like and Marburg-like viruses. In the original Ebola outbreak in Yambuku, situated along the Ebola River, 280 of the 318 identified cases died. Soon thereafter, an additional 284 cases and 151 deaths occurred in nearby Sudan. In Yambuku, the small local hospital was shut down, after 11 of its 17 staff members died.
The likely reservoir for the disease is bats. Primates including monkeys can become infected from eating bats or from fruit the bats may have dropped. Infected animals can then spread the disease to humans through bites, or when the primates are consumed for food, a practice prevalent in some regions of Africa.
The course of the disease is pitiless, sometimes producing hemorrhagic fever, which causes severe bleeding from mucous membranes, including the gastrointestinal tract, eyes, nose, vagina and gingiva. The very high mortality and gruesome symptoms of the disease have riveted public attention and have been the focus of numerous films and books, notably Richard Preston's The Hot Zone.
Arntzen notes that while no human vaccine against Ebola currently exists, a number of strong candidates have emerged. While some have yielded good results in animal models, in terms of protection against the virus, they have practical shortcomings. "All of these existing vaccine candidates are genetically modified live viruses," he says. Vaccines of this sort require very careful conditions of storage and have a tendency to lose potency over a period of months. "If you've got something that you're going to have to keep at liquid nitrogen temperatures for years at a time, in hopes that there will never be an outbreak, it makes it impractical. "
Fighting pathogens with plants
Of the vaccines available to doctors today, some (like influenza) are produced in eggs, some in cultured animal cells, and others in yeast. Arntzen's team has taken a different approach to vaccine production by converting tobacco plants into living pharmaceutical factories. They created a DNA blueprint for their Ebola vaccine, and used a specialized bacterium to infuse it into the leaves of tobacco. "The blueprint converts each leaf cell into a miniature manufacturing unit," Arntzen says.
In the current study, the vaccine blueprint was designed by fusing a key surface protein (known as GP1) from the Ebola virus with a monoclonal antibody customized to bind to GP1. The resulting molecules' opposite ends attract each other, like a group of rod-shaped magnets. When the vaccine molecules bind to each other, they form an aggregate called an Ebola Immune Complex (EIC). "In immunology, that means you've got something that is much easier for our immune system to recognize," Arntzen says. "Because it has many copies of an identical molecule, it's called a repeating array." (See Figure 2)
Within two weeks after the vaccine "blueprint" is delivered to tobacco leaves, enough of the EIC accumulates to allow its purification from other leaf cell components. The researchers then vaccinated mice with the purified sample, and showed that their immune system gave a strong response.
For the ultimate validation of the vaccine however, it was necessary to show that the vaccinated mice could withstand an Ebola virus infection. Because of the dangers in handling the virus, these experiments were conducted by skilled researchers at a high containment facility operated by the US Army Medical Research Institute in Maryland. It was found that the level of protection of the vaccinated mice was equivalent to that seen in prior experiments with the best, previously available experimental vaccine.
The advantages of using tobacco to manufacture a vaccine are significant. The initial costs for plant growth are much cheaper than design of traditional pharmaceutical facilities. In addition, the material extracted from tobacco leaves can be easily purified, and then might be spray dried or freeze-dried, yielding a highly stable compound, storable at ambient temperatures for extended periods. This will be essential for an Ebola vaccine, since it will primarily be stockpiled to use only if there is a disease outbreak.
Vaccines typically contain adjuvants, immune-modulating factors that improve a vaccine's protective qualities. Most vaccines contain alum (or aluminum hydroxide), which is an FDA-approved adjuvant. In the case of the plant-derived Ebola vaccine, alum did not improve the survival rates in mice when it was co-administered with EIC. Instead, the group found that a Toll-like receptor (TLR) agonist known as PIC, when delivered in tandem with EIC, dramatically improved survival.
Toll-like receptors are part of the body's innate immune system, involved in processes of inflammation, where defensive cells like macrophages and dendritic cells are attracted to the site of infection. Arntzen explains that the TLR agonist PIC acts to mimic a site of inflammation, amplifying the immune response without causing tissue damage. In experiments using a combination of PIC and EIC, mice achieved an 80 percent survival rate against a lethal challenge of Ebola, commensurate with the best existing vaccine candidates.
The road ahead
In their companion PNAS paper, Arntzen's collaborators at Mapp Biopharmaceuticals outline the process for creating the monoclonal antibodies used for this research. Treatment for an Ebola infection, Arntzen says, would likely involve the injection of fast-acting antibodies to attack the virus directly, a process known as passive immunization, combined with a vaccine to stimulate the protective immune response (active immunization). This approach is commonly used in the case of other viral infections, particularly rabies. "Our two papers offer a nice back-to-back picture," Arntzen says. "We can manufacture both of these post-Ebola exposure reagents for a defensive stockpile, using tobacco."
The next steps for a plant-derived filovirus vaccine will involve using the EIC platform to design protection against the full range of these threadlike viruses. The method, with its straightforward purification protocol might also be used in the case of other pathogens including hepatitis C or dengue fever, where the extraction of glycoproteins has thus far been difficult.
Should efforts succeed in producing a post-exposure therapeutic that could be stockpiled by the U.S. military, the vaccine could also be made available to the Centers for Disease Control for immediate use in the event of a remote outbreak.
Managing ocular allergy in resource-poor settings
Ocular allergy is a common inflammatory condition seen almost daily at the outpatient clinic. It occurs because the ocular surface is exposed to a variety of allergens, making it susceptible to allergic reactions. The hallmark of the disease is itching, and the clinical symptoms and signs are bilateral and vary according to individual cases.
The common predisposing factors of ocular allergy include environmental allergens, genetic predisposition to atopic reactions and hot, dry environments.
The patient may have associated systemic features like eczema, asthma and rhinitis.
Types of ocular allergy
Ocular allergies can be divided into:
- Vernal keratoconjunctivitis
- Atopic keratoconjunctivitis
- Acute allergic conjunctivitis (includes seasonal and perennial allergic conjunctivitis)
- Giant papillary conjunctivitis
The first two forms of ocular allergies are sight-threatening. Both can lead to damage of the cornea by causing ulcers and scarring (secondary to inflammation of the ocular surface), ultimately leading to vision loss.
Onset of vernal keratoconjunctivitis is usually in childhood (mean age 7 years) and it tends to become less severe by the late teens. It is more common in boys than in girls. If left untreated, it can result in corneal conjunctivalisation and scarring (Figure 1). The symptoms are severe itching, watering, foreign body sensation and thick mucus discharge.
The hallmark sign of vernal keratoconjunctivitis is papillae formation in the tarsal conjunctiva; these can be large and irregular (known as cobblestone papillae) (Figure 2). There is conjunctival injection and/or hyperpigmentation and there may be peri-limbal small white dots (Horner-Trantas dots) (Figure 3). The limbus can become pigmented and the cornea can be affected with plaques and ulceration of the upper cornea.
Atopic keratoconjunctivitis classically presents in adulthood and has a chronic and unremitting course.
History: history of atopy (asthma, eczema); severe itching, watering, foreign body sensation, mucus discharge; symptoms occur year-round.
Signs: skin changes on the eyelids, e.g. erythema, dryness, scaliness and thickening; papillae on the tarsal conjunctiva; in severe cases, conjunctival scarring and forniceal shortening may be present.
Other ocular allergies
These include acute allergic conjunctivitis (seasonal and perennial allergic conjunctivitis) and giant papillary conjunctivitis. Predisposing factors for giant papillary conjunctivitis include contact lens wear and irritation from exposed sutures or a prosthesis.
NOTE: All ocular allergies can have sight-threatening complications if not managed well, e.g. keratoconus (due to excessive rubbing) and glaucoma (due to the prolonged use or misuse of steroids).
How do ocular allergies develop?
The basic mechanism of these conditions is type-1 hypersensitivity. The inflammatory response in vernal and atopic keratoconjunctivitis is due to inflammatory mediators, mainly from mast cells (Figure 5).
Grading of clinical severity
There is no globally accepted system or guidelines for the grading and management of ocular allergy, although several authors have proposed such systems.1-5
All patients with ocular allergy should be graded according to the level of severity.6 This is because the grade of severity has an impact on clinical decision making and helps ascertain the patients’ ocular clinical status and risk of vision loss. It also helps to determine the choice of treatment and the timing/frequency of follow-up.
Table 1 is based on a simplified clinical grading system which the authors have developed for use in Kenya and which applies to all ocular allergies. It takes into consideration the clinical signs present during the objective assessment but not the patient’s symptoms.
The management of ocular allergies in low- and middle-income countries is complicated by the high cost of drugs and the limited options available. Table 2 details the treatment guidelines developed for use in Kenya, based on the severity grading.
Table 2. Treatment and follow-up guidelines, based on severity grading (developed for Kenya)
Note: Patients diagnosed with vernal or atopic keratoconjunctivitis should always be treated as ‘severe’ cases, whatever their presenting clinical signs. There are many tools that can be used in the management of ocular allergy.
Non-pharmacological treatment, including allergen avoidance and cold compresses, are important for providing short-term relief from symptoms. The patient should also be advised to avoid eye rubbing.
Topical lubricants, preferably preservative free, are recommended for use in all grades of severity to dilute allergens and reverse tear film instability secondary to chronic inflammation.
Topical antihistamines and mast cell stabilisers are considered first-line treatment. Mast cell stabilisers require a loading period of up to two weeks in order to achieve maximal efficacy. They should be combined with an antihistamine (short duration of action) or a mild topical steroid such as fluorometholone to provide faster relief. Mast cell therapy should be continued when the steroids are stopped.
Dual-action drugs have both antihistamine and mast cell stabiliser action. They are effective in treating ocular allergy and outperform other groups of drugs. Another benefit is improved compliance because of a reduction in the number of medications to be used.
Topical ocular steroids are effective (probably the most effective of all options), but carry a significant risk of side effects (glaucoma, cataracts, corneal ulcers). Mild topical steroids should be used in acute crises for short periods of time, preferably less than 2 weeks. In cases of severe ocular allergy, a pulsed topical steroid regimen (start frequently, then taper) is advised. The duration of use is based on the grade of severity. Steroid ointments can be used at night for a short duration. The use of supra-tarsal steroids is recommended only for severe cases where topical medication does not control symptoms or when there is disease progression (refractory cases). Their use is also recommended in patients with severe papillary reaction leading to corneal epithelial erosions/shield ulcers.6
Topical immunomodulators, such as cyclosporin A, have been shown to be of great benefit as steroid-sparing agents in chronic disease, although they are not readily available.7
All patients and their carers should be counselled. A well-informed patient and parent/guardian will be in a better position to take part in the management of the condition. Counselling leads to improved compliance with medication and follow-up visits. It also leads to a reduction in self-medication, which in turn reduces possible misuse of steroids. It is important to make patients with sight-threatening disease aware that it can be blinding, so that they can understand the importance of proper follow-up and keeping their appointments.
Counselling can also help patients to avoid the complications associated with chronic eye rubbing (keratoconus) and the overuse or misuse of steroids (glaucoma, cataract, etc). Talk to patients about what they can do to support themselves, e.g. avoiding allergens, using cool compresses and preservative-free artificial tears, and wearing spectacles or sunglasses when outside. Basic printed information can be issued to patients during clinic visits.
Frequency of follow-up is linked to:
- Clinical severity grading
- Sight-threatening or non sight-threatening condition?
- Clinical response to treatment
A follow-up visit should include recent history, measurement of visual acuity, and slit lamp biomicroscopy. If corticosteroids are prescribed, measurement of intraocular pressure and pupillary dilation should be performed to evaluate for glaucoma and cataract.
If there is inadequate correction of refractive error and a history of frequent changes in spectacle prescriptions, suspect keratoconus. Look out for infections such as viral keratitis and refer all patients with severe disease (i.e. those developing complications) or those not responding to treatment.
1 Takamura E, Uchio E, Ebihara N, Ohno S, Ohashi Y, Okamoto S, et al. Japanese Society of Allergology. Japanese guideline for allergic conjunctival diseases. Allergol Int. 2011;60(2): 191-203.
2 Bonini S, Sacchetti M, Mantelli F, Lambiase A. Clinical grading of vernal keratoconjunctivitis. Curr Opin Allergy Clin Immunol. 2007;7(5): 436-41.
3 Calonge M, Herreras JM. Clinical grading of atopic keratoconjunctivitis. Curr Opin Allergy Clin Immunol. 2007;7(5): 442-5.
4 Sacchetti M, Lambiase A, Mantelli F, Deligianni V, Leonardi A, Bonini S. Tailored approach to the treatment of vernal keratoconjunctivitis. Ophthalmol. 2010;117(7): 1294-9.
5 Bore M, Ilako DR, Kariuki MM, Nzinga JM. Clinical evaluation criteria of ocular allergy by ophthalmologists in Kenya and suggested grading systems. JOECSA.2014;18(1): 35-43.
6 Bore M, Ilako DR, Kariuki MM, Nzinga JM. Current management of ocular allergy by ophthalmologists in Kenya. JOECSA.2014;18(2): 59-67.
7 Ozcan AA, Ersoz TR, Dulger E. Management of severe allergic conjunctivitis with topical cyclosporin A 0.05% eyedrops. Cornea. 2007;26(9): 1035-8.
1 Revise the following words before starting the activity: star, angel, candle, present, tree, stocking, bauble, holly.
2 Give out photocopies of the Christmas paper craft worksheet and, if you have time, ask the pupils to colour in the pictures.
3 Ask the class to cut out the square. Make sure the children do not cut along the inner fold lines.
4 Show the class how to fold the activity, taking them through one fold at a time so that no child is left behind. Use instructions, such as Take a corner and fold it into the middle, Turn it over, etc.
5 Ask the children to put the paper craft activity on their tables without touching them. Ask two volunteers to come to the front of the class with their crafts and demonstrate how to play the game. Write their names on the board for scoring.
6 Pupil 1 chooses a picture and asks Pupil 2 to spell it, eg holly. Pupil 1 then repeats the spelling opening and closing the craft on each letter H – O – L – L – Y, leaving the second layer visible, for example: h, s, t, a. Pupil 2 chooses one of the letters and says a Christmas word beginning with that letter, for example, ‘s’ – star. If Pupil 2 says a word correctly, Pupil 1 unfolds the craft and looks at the points under the letter ‘s': 4 points. Write 4 under Pupil 2’s name on the board. Now repeat the procedure with Pupil 2 using the craft until Pupil 1 also scores points. By this stage, most children in the class will understand how to play the game.
7 Pupils play in pairs, noting down their scores. You can either play by saying the first pupil to score 20 wins or putting a time limit on the game.
You can also revise general vocabulary on the inner layer by asking the pupils to say any word in English starting with the letter.
This is a 3D live wallpaper with a rich collection of virus images. They are beautiful, but most of them are harmful to our health; some even endanger our lives if we are infected. AIDS virus, Avian Flu virus, Alpha virus, Flu virus, Herpes virus, HIV virus, Influenza virus, Oncolytic virus, Rabies virus, Rhino virus, Swine Flu virus and Simian virus, etc. can be found in this app. Please have fun!!!
A virus is a small infectious agent that can replicate only inside the living cells of an organism. Viruses can infect all types of organism, from animals and plants to bacteria and archaea.
Virus particles (known as virions) consist of two or three parts: the genetic material made from either DNA or RNA, long molecules that carry genetic information; a protein coat that protects these genes; and in some cases an envelope of lipids that surrounds the protein coat when they are outside a cell. The shapes of viruses range from simple helical and icosahedral forms to more complex structures. The average virus is about one one-hundredth the size of the average bacterium. Most viruses are too small to be seen directly with an optical microscope.
The origins of viruses in the evolutionary history of life are unclear: some may have evolved from plasmids – pieces of DNA that can move between cells – while others may have evolved from bacteria. In evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity.
Viruses spread in many ways; viruses in plants are often transmitted from plant to plant by insects that feed on plant sap, such as aphids; viruses in animals can be carried by blood-sucking insects. These disease-bearing organisms are known as vectors. Influenza viruses are spread by coughing and sneezing. Norovirus and rotavirus, common causes of viral gastroenteritis, are transmitted by the faecal-oral route and are passed from person to person by contact, entering the body in food or water. HIV is one of several viruses transmitted through sexual contact and by exposure to infected blood. The range of host cells that a virus can infect is called its "host range". This can be narrow or, as when a virus is capable of infecting many species, broad.
Viral infections in animals provoke an immune response that usually eliminates the infecting virus. Immune responses can also be produced by vaccines, which confer an artificially acquired immunity to the specific viral infection. However, some viruses, including those that cause AIDS and viral hepatitis, evade these immune responses and result in chronic infections. Antibiotics have no effect on viruses, but several antiviral drugs have been developed.
To fully understand Quaternions, you should first understand what Complex Numbers are. I'll explain what a Quaternion is in a general manner first, and then explain its basics.
What is it?
A Quaternion represents a rotation of a point in 3D space around an axis. That's it.
A Quaternion gets its name because it is formed by 4 values (a quadruple). Don't worry about why it uses 4 values for now. Just know that each of the values needs to be built using sin/cos to actually mean anything.
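To make that sin/cos structure concrete (this is the standard convention, not anything library-specific): a rotation by an angle θ around a unit axis (x, y, z) is encoded as the four values

    q = ( cos(θ/2), x·sin(θ/2), y·sin(θ/2), z·sin(θ/2) )

so the half-angle sin/cos terms are what tie the four raw numbers to an actual axis and angle.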
If you have a point in space and you want to rotate it around any axis, you can use a Quaternion to do so. You can use other techniques if you want to. Quaternions are just another option. Depending on your problem, it might better suit your solution.
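Here is a minimal sketch of this in plain Python, using only the math module; the function names are my own for illustration, and a real project would typically use an existing maths library:

    import math

    def quaternion_from_axis_angle(axis, angle):
        # Build a unit quaternion (w, x, y, z) for a rotation of
        # `angle` radians around the unit vector `axis`.
        ax, ay, az = axis
        half = angle / 2.0
        s = math.sin(half)
        return (math.cos(half), ax * s, ay * s, az * s)

    def quaternion_multiply(a, b):
        # Hamilton product of two quaternions in (w, x, y, z) order.
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (
            aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
        )

    def rotate_point(q, point):
        # Rotate a 3D point with the sandwich product p' = q * p * q_conjugate.
        p = (0.0, point[0], point[1], point[2])   # point as a "pure" quaternion
        q_conj = (q[0], -q[1], -q[2], -q[3])      # inverse of a *unit* quaternion
        w, x, y, z = quaternion_multiply(quaternion_multiply(q, p), q_conj)
        return (x, y, z)

    # Rotate (1, 0, 0) by 90 degrees around the z-axis.
    q = quaternion_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
    print(rotate_point(q, (1.0, 0.0, 0.0)))

Rotating (1, 0, 0) a quarter turn around the z-axis lands on roughly (0, 1, 0), which matches the geometric expectation.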
Section IV: Resource Section
By focusing on local and state history, a chapter will help advance the Texas Essential Knowledge and Skills (TEKS) mandated for fourth and seventh grade. However, since the historical subject matter is not limited to local and state history, the curricular connections are unlimited. It is often quite easy to use local or relatively nearby resources to study larger national or international events. The use of primary, secondary and motivating resources is encouraged through the course descriptions of most social studies courses in the current framework. Once again, the activities encouraged through Junior Historians assist in meeting those curricular needs.
The lasting and long-term benefit of the activities encouraged through Junior Historians is the development of basic social studies skills that transcend all that we do. Social studies skills are the common thread that binds all grade levels. The skills of thinking critically, communicating effectively, solving problems, and making decisions are repeatedly emphasized through the activities recommended for Junior Historians. The social studies skills TEKS, usually the last three for each grade level, are essential for success at each grade level and in life. The matrix below shows which skills based on the TEKS can be taught or enhanced by which activity found in the Activities Section.
As students engage in the activities recommended through Junior Historians, they will build and strengthen their reading and social studies skills necessary to perform well on the Texas Assessment of Knowledge and Skills (TAKS) in a manner that is relevant to their lives. Many of these activities also make use of technology, which assists in student mastery of a variety of the required Technology Applications TEKS.
BOOKS, BULLETINS, AND LEAFLETS ON LOCAL HISTORY STUDIES
The American Association for State and Local History publishes a wealth of inexpensive material that is invaluable to the Junior Historian sponsor. These materials should be ordered directly from the AASLH at the following address:
American Association for State and Local History
172 Second Avenue North
Nashville, TN 37201
Online @ www.aaslhnet.org
Using, Managing, and Preserving the Records of Your Historical Organization, TR #9
Producing Professional Quality Slide Shows, TR #2
Manuscript Collections: Initial Procedures, TL #131
Collecting and Preserving Architectural Records, TL #132
Historic Site Interpretation: The Student Field Trip, TL #19
History for Young People: Projects and Activities, TL #38
History for Young People: Organizing a Junior Society, TL #44
Planning Museum Tours for School Groups, TL #93
Historic Houses as Learning Laboratories: Seven Teaching Strategies, TL #105
Designing Your Exhibits: Seven Ways to Look at an Artifact, TL #91
Preparing Your Exhibits: Methods, Materials, and Bibliography, TL #4
Methods of Research for the Amateur Historian, TL #21
Books and Pamphlets
Oral History for the Local Historical Society. Third edition, revised, by Willa K. Baum.
Nearby History: Exploring the Past Around You. By David E. Kyvig and Myron A. Marty.
Local Schools: Exploring Their History. By Ronald E. Butchart.
Houses and Homes: Exploring Their History. By Barbara J. Howe, Dolores A. Fleming, Emory L. Kemp, and Ruth Ann Overbeck.
Public Places: Exploring Their History. By Gerald A. Danzer.
On Doing Local History: Reflections on What Local Historians Do, Why, and What It Means. By Carol Kammen.
Researching, Writing, and Publishing Local History. By Thomas E. Felt.
Oral History: An Interdisciplinary Anthology. Edited by David K. Dunaway and Willa K. Baum.
Transcribing and Editing Oral History. By Willa K. Baum.
Many additional titles are available from the American Association for State and Local History. A complete publications list is free on request.
The following two publications and the architectural slide show are highly recommended for all Junior Historian sponsors as well as for anyone interested in local history.
Teaching History with Community Resources. By Clifford L. Lord.
Localized History Series
Teachers College Press, Columbia University
New York, New York 10027
Oral History for Texans. By Thomas L. Charlton.
Texas Historical Commission
P. O. Box 12276
Austin, Texas 78711 |
Mixed News for the Hawaiian Monk Seal
by T. David Schofield
Question: What is the most critically endangered marine mammal whose entire range lies within the United States?
Answer: The Hawaiian monk seal (Monachus schauinslandi).
This "living fossil" is documented in the scientific literature as the most ancient of all pinniped (fin-footed animal) species. Originating 14 to 15 million years ago, this species is older than some of the Hawaiian Islands it now inhabits. The bad news is that only about 1,100 Hawaiian monk seals remain. Ninety percent of them live in the Refuge and Monument systems of the Northwestern Hawaiian Islands (NWHI) in the Midway Atoll and Hawaiian Islands National Wildlife Refuges, which are managed by the U.S. Fish and Wildlife Service, and at the very tip of the state’s Kure Atoll Seabird Sanctuary. The NWHI population is declining by 4 percent per year. The somewhat good news is that the remaining 10 percent—estimated at 100 individuals found throughout the eight main Hawaiian Islands—are increasing.
Problems such as poor female juvenile survival, entanglement in fishing gear and other marine debris, shark predation, diminishing beach habitat due to sea level rise, and reduced prey availability (due to human fishing and competition from other top predators) are recognized as causes for the recent decline in the NWHI. Historical causes included pressure from military use of the islands and an intensive sealing and exploration expedition in the mid-1850s. However, the seal population in the main Hawaiian Islands appears to be on the rebound. Although the carrying capacity for monk seals in the main islands is unknown, 88 individuals are routinely sighted, a number based on such mark-recapture methods as identifying markings, flipper tags, and tracking by telemetry equipment.
The Recovery Plan for the Hawaiian Monk Seal established a range-wide population goal of maintaining 2,900 individuals for 20 years before the seal can be removed from Endangered Species Act protection. Projections suggest that the main Hawaiian Islands would have to sustain 500 of the seals. Currently, seals there appear to be thriving; the adults appear to be larger than those in the NWHI, mothers appear to be very healthy prior to giving birth, births are increasing, and pups are larger and healthier in comparison to the NWHI pups.
While the monk seal population growth in the main islands is encouraging, these animals face complex and unique impacts that were not previously observed in the larger NWHI population. Non-fatal hookings during recreational fishing, fatal entanglements, dog attacks, and conditioning to humans are among the threats that may be disastrous for the population in the main islands. For example, "R042," a female monk seal born on a popular beach on the Big Island of Hawai‘i, quickly became desensitized to humans. As a result, she began to exhibit friendly behaviors and interactions with people that led to swimming together, tactile petting, and feeding. The local community at first embraced this seal, but she soon became too playful and occasionally aggressive. After she jumped onto kayaks and surfboards, it was clear that these behaviors could become a nuisance to people, and the seal had to be relocated to another island. This is the third monk seal that required removal from its island of origin due to negative interactive behavior. In 2003, another monk seal from the Big Island was moved to the island of Kaho‘olawe and eventually moved farther away to Johnston Atoll. A second seal was removed from Kaua‘i and sent to Ni‘ihau. Moving seals away from their island of origin is not a preferred management practice. It demonstrates the need for increased public education about the problems caused by conditioning monk seals to interact with people.
With the decline of the NWHI population, the future of the Hawaiian monk seal may depend on the survival of the increasing population in the main islands. The National Marine Fisheries Service (NMFS) Pacific Islands Regional Office has developed a network of dedicated community members to foster public involvement in monk seal conservation. During the past three years, several island coordinators and a large cadre of volunteers throughout the state have enlisted in the effort. The volunteer network includes approximately 300 members from diverse backgrounds. On a daily basis, they respond to reports of seal haul-outs (literally, seals hauling themselves onto the beach), educate local citizens and tourists, record information, and provide the NMFS Pacific Islands Fisheries Science Center with images that are used to identify and monitor the seals. Outreach programs include a statewide Hawaiian Monk Seal Count every April and October, and school programs have been developed by volunteer teachers. These tools have proven useful in fostering the concept that recovering the monk seal will require efforts by all of us.
Learning from the growing public awareness of the humpback whale following its designation as the official Hawaii State Marine Mammal, monk seal response volunteers lobbied to appoint the seal as the official Hawaii State Mammal. This process helped to inform the Hawaii State Legislature on the critical status of the monk seal while elevating public awareness.
It is imperative that the public understand the plight of the Hawaiian monk seal and support efforts to prevent its continued decline. This year, one of the world’s three species of monk seal was declared extinct (see the following article on the Caribbean monk seal). We all need to work to avoid such a fate for the Hawaiian monk seal—a unique natural treasure.
T. David Schofield, Interim Hawaiian Monk Seal Recovery Coordinator and Marine Mammal Response Network Coordinator in the NMFS Honolulu, Hawaii, office, can be reached at [email protected] or 808-944-2269.
Faṣlī era, chronological system devised by the Mughal emperor Akbar for land revenue purposes in northern India, for which the Muslim lunar calendar was inconvenient. Faṣlī (“harvest”) is derived from the Arabic term for “division,” which in India was applied to the groupings of the seasons. The era dated from Akbar’s accession year, the Muslim year AH 963 (1555–56 CE). This was also the Hindu Samvat era year 1612. Akbar arbitrarily subtracted 649 years from the Samvat year in order to make the Faṣlī year 963. Thereafter, the Faṣlī era proceeded according to the Samvat calendar. (To transpose Faṣlī into Gregorian, or New Style, calendar dates, add 592/593 years.) The system was introduced into the Deccan (southern India) by Shah Jahān in the 1630s.
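A small sketch of the additive rule above in Python (the 592/593 split reflects a Faṣlī year straddling two Gregorian years; the function name is my own):

    def fasli_to_gregorian(fasli_year):
        # Returns the pair of Gregorian years a Fasli year spans,
        # using the additive offsets of 592 and 593 described above.
        return (fasli_year + 592, fasli_year + 593)

    # Akbar's accession year: Fasli 963 -> 1555-56 CE
    start, end = fasli_to_gregorian(963)
    print(f"Fasli 963 = {start}-{end % 100:02d} CE")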
The nose piece of a microscope is responsible for holding multiple lenses that can be adjusted. This adjustment allows the viewer to look at the object under different levels of magnification.
To discover the appropriate level of magnification needed for accurate viewing of the sample, the viewer starts with the lowest setting on the microscope. The viewer then turns the nose piece to the next level of magnification. This process continues until the viewer achieves the appropriate level of magnification.
Alternative names for the nose piece on a microscope include revolving nose piece or turret. The lenses used to view samples are called objective lenses.
Should Art Be For Art's Sake
The teacher presents five traditional European-American theories of art (formalism, instrumentalism, imitationalism, expressionism, and institutionalism). Students review Chicana/o and earlier protest art from an instrumental point of view. They then select one Chicana/o artwork to consider using each of the five theories. As students argue that one theory better accounts for the artwork than another, they reflect on, reassess, modify, or reaffirm their own beliefs about art.
Explain to students that in some cultures there is not a distinct word or definition for art. What European-Americans might call art others might see as an integral, not separate, part of the culture, for example traditional Hopi katsina dolls, Navajo sand paintings, or Australian Aborigine ground paintings. Explain further that philosophers in European-American cultures have been trying to define art for over two thousand years and have developed several theories about the nature and value of art, including the following:
Formalists believe that the best art affects its viewers because of the relationship among the visual elements in the artwork (lines, shapes, colors, values [lights and darks], textures, volume, and space). They believe that art is valuable in itself, that is, art for art's sake.
With younger students or students for whom English is a second language, you might want to make an overhead transparency or make and post large placards with simple phrases to help identify the five theories:
FORMALISM: Art is for its own sake. It's interesting to look at.
Review key artworks that exemplify the theme of Protest and Persuasion: Diego Rivera, José Guadalupe Posada, Alfredo Zalce, Luis Guerra, Judith Baca, Yolanda López, and Carlos Cortez. Ask students which of the five traditional European-American theories about art they think best explains these artworks. Share the following quotations by Chicano/o and Mexican artists.
Luis Guerra spoke to a writer for the Austin American-Statesman (September 6, 1995):
Judith Baca wrote:
Diego Rivera revealed some of his beliefs about the value of art when he wrote:
Carlos Cortez literally quoted Ricardo Flores Magón's beliefs about art in his print:
David Alfaro Siqueiros, one of the great Mexican muralists, issued a manifesto in which he wrote:
Note that even though these beliefs might all, in some way, seem to be Instrumental, that they are all still quite different from each other.
Next explain that there can be value in considering alternative beliefs to one's own. Doing so offers opportunities to: fully understand the beliefs of others; discover weakness in the arguments of others; reflect critically on their own, perhaps unquestioned, beliefs; reaffirm and strengthen their own beliefs; and modify, clarify, or extend their own beliefs. Review all 20 artworks on Chicana and Chicano Space. Ask students to vote on one they would like to consider from several points of view. Divide the class into five groups assigning one of the five European-American beliefs about art to each group. Each group should:
As each group makes its argument before the class, explain that the task of the listeners is to:
Conclude the lesson by asking students to set aside their assigned beliefs and discuss whether their own beliefs have been altered by the discussion, and if so how. Note whether or not the class has reached a general consensus. If not, remind students that reasonable people have talked about and disagreed about the nature and value of art for centuries.
You may choose to ask students to each write a short essay in which they present their own beliefs about the nature and value of art. You might ask them to illustrate their points, when possible, with examples of artworks from Chicana and Chicano Space.
During group presentations, note whether students are able to focus on the assigned theory of art, and to support their arguments with both visual and contextual evidence. During discussions (and in optional essays), note whether students seriously consider the views of others and perhaps modify or clarify their own beliefs as a result.
Items for a Protest and Persuasion Portfolio might include:
Optional handout or overhead transparencies which presents the five traditional European-American theories art and the quotations from Chicana/o and Mexican artists or large placards with definitions.
E. L. Katz, E. L. Lankford, & J. D. Plank (1995). "Appendix A--Theories of Art," in Themes and Foundations of Art. Minneapolis: West Publications, pp. A1-A4.
© 2001 Hispanic Research Center, Arizona State University. All Rights Reserved. |
Data and computer simulations are reviewed to help better define the timing and magnitude of human influence on sediment flux: the Anthropocene epoch. Impacts on Earth surface processes are not spatially or temporally homogeneous. Human influences on this sediment flux have a secondary effect on floodplain and delta-plain functions and sediment dispersal into the coastal ocean. Human impact on sediment production began 3000 years ago but accelerated more widely 1000 years ago. By the sixteenth century, societies were already engineering their environment. Early twentieth-century mechanization has led to global signals of increased sediment flux in most large rivers. By the 1950s, this sediment disturbance signal reversed for many rivers owing to the proliferation of dams, and sediment load reduction below pristine conditions is the dominant signal today. A delta subsidence signal began in the 1930s and is now a dominant signal in terms of sea level for many coastal environments, overwhelming even the global warming imprint on sea level. Humans have engineered how most water and sediment are discharged into the coastal ocean. Hyperpycnal flow events have become more common for some rivers, and less common for others. Bottom trawling is now widespread, suggesting that even continental shelves have received a significant but as yet unquantified Anthropocene impact. The Anthropocene attains the level of a geological climate event, such as that seen in the transition between the Pleistocene and the Holocene.