This term we are going to be finding out about our local area.
We started this term by learning about the different types of houses and homes that we see when we are out and about. We looked at how houses have changed over time and then created sketches of the different types of homes we had been learning about. We also created a pastel picture of a city nightscape.
We moved on to look at how shops have changed over time, comparing shops from the past with shops today. Lots of the children were surprised that in the past you did not choose what you wanted off the shelf but had to wait at the counter to be given your items by a shop assistant. The children were also amazed to see how much Boots the Chemist had changed. What buildings and objects can you spot when you are out for a walk?
In literacy we have been exploring the story of Nell the Detective Dog who uses her keen sense of smell to hunt through the town for the book thief. We have learned about writing in the first person using I, my and me and have written a recount of Nell's day from her point of view.
In maths we have used our knowledge of partitioning two-digit numbers into 10s and 1s to help us add. We are learning to regroup to help us bridge over 10, e.g. for 56 + 37: 50 + 30 = 80 and 6 + 7 = 13 (10 + 3), so 56 + 37 = 80 + 10 + 3 = 93.
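For anyone who would like to see the regrouping method written out step by step, here is a small illustrative Python sketch (the function name is our own invention, not part of any curriculum material):

```python
def add_by_partitioning(a, b):
    """Add two two-digit numbers by partitioning:
    split each number into tens and ones, add the parts,
    then recombine (any ones over ten 'bridge' into the tens)."""
    tens = (a // 10) * 10 + (b // 10) * 10  # e.g. 50 + 30 = 80
    ones = (a % 10) + (b % 10)              # e.g. 6 + 7 = 13
    return tens + ones                      # 80 + 13 = 93

print(add_by_partitioning(56, 37))  # prints 93
```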
We have also been learning about living things and their habitats and will be setting up an experiment to see how plants grow and change over time.
One key to a successful life is strong critical thinking skills, and the foundation for these skills starts when a child is young. This article discusses several ways parents can help their child develop these crucial lifelong skills.
Throughout our lives, we experience many problems that we must solve on our own. Whether these difficulties occur at school, in our careers or in our own households, we regularly put our critical thinking skills to the test. We are constantly working on and improving our critical thinking skills, but the basis for these skills begins while we are young. As a parent, it is important to help your children develop these skills so that they can be successful and reach their highest potential. With a little effort, you can teach your children to be smart and precise when thinking through problems and scenarios. Here are a few suggestions for helping your children develop critical thinking skills and pushing them toward finding answers to many problems on their own.

Read to your child

Reading to your child has numerous benefits. Stories open your child's imagination and are a wonderful tool for teaching about the chronological order of events, characters, morals and much more. As you read to your children, ask them questions about the story and the characters. The more you can engage your children in reading and answering your questions, the more they can develop their own critical thinking skills.

Encourage questions

It can be exhausting to answer a million daily questions from your inquisitive child. However, these questions show your child is thinking about the world around him. Taking the time to answer these questions rather than pushing them aside can help your child keep his mind active. Additionally, at work and school, he may find he is more successful than those who don't bother asking questions and finding answers on their own. Answering a million questions may be tiring, but in the long run it is worth it.

Help your child research

If your child is thinking and asking those tough questions, teach him how to find answers on his own. Learning how to research thoroughly is a valuable skill that can develop critical thinking skills. It can create new questions, new thoughts and new ideas. There are also numerous resources you and your children can use to research: the Internet, books, and even networking with other people are all valuable ways to find information.

Let your child be independent

If you want your child to think on his own, you can't do everything for him. Children must learn to be independent; if not, they will take shortcuts and lean on others to solve problems for them. You can help your children be independent by assigning chores, pushing them to sit down and do their homework on their own, and establishing rules and consequences for when those rules are broken. By being an overbearing parent, you can damage your child's success without even realizing it.

Critical thinking skills are crucial for individuals of all ages. If these skills are developed well, they can be the tools that help individuals achieve success in many areas of their lives.
Substances known as transcription factors often determine how a cell develops as well as which proteins it produces and in what quantities. Transcription factors bind to a section of DNA and control how strongly a gene in that section is activated. Scientists had previously assumed that gene activity is controlled by the binding strength and the proximity of the binding site to the gene. Researchers at the Max Planck Institute for Molecular Genetics in Berlin have now discovered that the DNA segment to which a transcription factor binds can assume various spatial arrangements. As a result, it alters the structure of the transcription factor itself and controls its activity. Neighbouring DNA segments have a significant impact on transcription factor shape, thus modulating the activity of the gene.
For a car to move, it is not enough for a person to sit in the driver’s seat: the driver has to start the engine, press on the accelerator and engage the transmission. Things work similarly in the cells of our body. Until recently, scientists had suspected that certain proteins only bind to specific sites on the DNA strand, directing the cell’s fate in the process. The closer and more tightly they bind to a gene on the DNA, the more active the gene was thought to be. These proteins, known as transcription factors, control the activity of genes.
A team of scientists headed by Sebastiaan Meijsing at the Max Planck Institute for Molecular Genetics has now come to a different conclusion: the researchers discovered that transcription factors can assume various shapes depending on which DNA segment they bind to. “The shape of the bond, in turn, influences whether and how strongly a gene is activated,” Meijsing explains.
Consequently, transcription factors can bind to DNA segments without affecting a nearby gene. As in our car analogy, the mere presence of a “driver” is evidently not sufficient to set the mechanism in train. Other factors must also be involved in determining how strongly a transcription factor activates a gene.
The glucocorticoid receptor is also a transcription factor
One example is glucose production in the liver. If the blood contains too little glucose, the adrenal glands release glucocorticoids, which act as chemical messengers. These hormones circulate through the body and bind to glucocorticoid receptors on liver cells. The receptors simultaneously act as transcription factors and regulate gene activity in the cells. In this way, the liver is able to produce more glucose, and the blood sugar level rises again.
“Sometimes glucocorticoid receptor binding results in strong activation of neighbouring genes, whereas at other times little if anything changes,” Meijsing reports. The scientists found that the composition of DNA segments to which the receptors bind help determine how strongly a gene is activated. However, these segments are not in direct contact with the receptors acting as transcription factors; they only flank the binding sites. Yet, that is evidently enough to have a significant influence on the interaction.
“The structure of the interface between the transcription factor and genome segments must therefore play a key role in determining gene activity. In addition, adjacent DNA segments influence the activity of the bound transcription factors. These mechanisms ultimately ensure that liver cells produce the right substances in the right amounts,” Meijsing says.
The findings could also have medical applications. Many DNA variants associated with diseases belong to sequences that evidently control the activity of transcription factors. “Scientists had previously assumed that these segments exert an effect by inhibiting the binding of transcription factors, thus impeding the activity of neighbouring genes,” Meijsing says. “Our findings have now shown that some of these segments may not influence the contact directly but nevertheless reduce the activation state of the associated transcription factor.”
Original publication: Stefanie Schöne, Marcel Jurk, Mahdi Bagherpoor Helabad, Iris Dror, Isabelle Lebars, Bruno Kieffer, Petra Imhof, Remo Rohs, Martin Vingron, Morgane Thomas-Chollier, Sebastiaan H. Meijsing
Sequences flanking the core binding site modulate glucocorticoid receptor structure and activity
Nature Communications; 1 September, 2016
Dr. Sebastiaan Meijsing
If you’ve ever wondered what a 13-billion-year-old star sounds like, we finally have an answer. Spoiler warning: It’s a bit strange.
A team of scientists led by Andrea Miglio of the University of Birmingham has put together these recordings of some of the oldest stars in the galaxy using data from NASA's Kepler mission. As Gizmodo notes, they did this after measuring the acoustic oscillations of some of the furthest known stars, in the Milky Way's M4 star cluster. Along with sounding cool, this research is helping scientists develop more precise measurements of star masses and ages.
“The stars we have studied really are living fossils from the time of the formation of our Galaxy, and we now hope to be able to unlock the secrets of how spiral galaxies, like our own, formed and evolved,” Miglio said in a statement.
Check out the recordings below and let us know what you think:
Gravitational Forces Between Objects Activity for Kids
- 1 video camera or smartphone
- 1 friend to film you

1. Hold the pencil in line with the top of your head.
2. Have your friend film you dropping the pencil from that height.
3. Count how many video frames passed from the release of the pencil to the moment it hit the floor.
4. Divide that number by the number of frames your camera captures each second (most cameras capture 30 frames per second). This gives you the fall time in seconds.
5. Multiply 1/2 by 9.8, then multiply that by the fall time twice (time × time) to find your height in meters.
6. You can then convert that number to feet and inches.
How It Works
Due to gravity, all falling objects accelerate towards the center of the Earth at the same rate. Without air resistance, they speed up by 9.8 meters per second, each second. We can use that knowledge to calculate the height. To get more accurate results you could repeat the drop a few times and average the data.
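The steps above can be sketched as a short calculation. This is a minimal Python sketch of the height formula h = 1/2 × g × t², assuming a 30 frames-per-second camera; the frame count used below is only an example, not a measured value:

```python
def drop_height_m(frames, fps=30):
    """Estimate drop height from a filmed pencil drop.
    frames: number of video frames between release and impact.
    fps: camera frame rate (most cameras capture 30 frames per second)."""
    g = 9.8                  # acceleration due to gravity, in m/s^2
    t = frames / fps         # fall time in seconds
    return 0.5 * g * t * t   # height in meters: 1/2 * g * t^2

METERS_TO_FEET = 3.28084

height = drop_height_m(17)                 # e.g. 17 frames counted at 30 fps
print(round(height, 2))                    # -> 1.57 (meters)
print(round(height * METERS_TO_FEET, 2))   # -> 5.16 (feet)
```

A drop of about 17 frames therefore corresponds to a height of roughly 1.6 meters, which is a plausible head height.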
There were various causes of the Second World War. This article explains in detail the reasons that led to it. However, before we examine those causes directly, let's take a quick look at the impact of the war.
The Second World War lasted from 1939 to 1945. It was the most devastating war in the history of the world. More than 50 million people lost their lives in the war; of these, 28 million were civilians, and 12 million died in the concentration and labor camps.
In Poland, five million of the six million dead were civilians. Soviet Russia was the worst sufferer: she lost 20 million lives, about 10% of her total population. Germany lost about 6 million lives. The material loss in terms of money was no less staggering; the war cost the warring nations around $1,384,900,000.
- Causes of the Second World War
- The Treaty of Versailles contained the germs of the War
- Militarist Japan Attacks China
- Aggressive Intentions of Fascist Italy
- Germany prepares for War under the Nazi regime
- The Anglo-French Weakness Responsible for War
- The failure of the League brings War nearer
- Hitler annexes Austria and Czechoslovakia
- England and France awake from their lethargy
- German Attack on Poland: One of the Main Causes of the Second World War
- Summary of the Causes of the Second World War
- Conclusion on the Causes of the Second World War
Causes of the Second World War
The Treaty of Versailles contained the germs of the War
Germany had gone into the First World War with great ambitions of world domination. The German people had made extreme sacrifices in the hope of receiving the reward of world domination at the end of the war. The German Emperor and German statesmen expected an easy victory. However, the results of the War upset all the calculations of the Germans. Germany had nothing but disappointment and bitterness in store for her. The most “superior” nation in the world had to swallow the bitter pill of the Treaty of Versailles, which brought nothing but utter humiliation to the Germans.
The War-guilt was writ large on the forehead of Germany. Efforts were made to mutilate Germany so that she might not be able to rise again and disturb the peace of the world. Also, she was deprived of all her overseas colonies.
Most of the German colonies were divided between England and France. Germany was cut into two parts by the creation of the Polish Corridor. Several strips of territory, like Danzig, were snatched away from her. For a long time to come, she was deprived of her rich coal and iron resources. Besides, she had to pay a huge bill of reparations and to bear, for a long time to come, the burden of an army of occupation. Militarily, Germany was left completely weak: she could keep an army of only 100,000 men.
It was only natural that a great power like Germany should smart under such humiliating and degrading treatment. She revealed her future attitude to the world when her officers sank the naval fleet rather than hand it over to the Allies. Similarly, German officers and soldiers burnt packs of French flags and works of art that were ready for despatch to France under the terms of the Treaty of Versailles. This happened on the eve of the signing of the Treaty; thus, Germany violated its terms even before signing it. Germany later took to aggressive measures to achieve what it could not through negotiation and peaceful means.
Militarist Japan Attacks China
Militarist Japan's attack on China was another cause of the Second World War. Japan had come out of the First World War with her hunger and ambitions unsatiated. She regarded the reward of a few islands and some possessions on the mainland of China as poor compensation for the part she had played in the War on the side of the Allies, and she complained that she did not have an equal share in the distribution of the spoils. Her military resources had greatly increased and her population was constantly growing. Moreover, she wanted markets for the consumption of her goods. She had long coveted Manchuria, and she attacked and occupied it in 1931.
The League of Nations failed to prevent Japan from setting up a puppet regime in Manchuria. England and France were busy with their domestic problems and did not want to bring about a war over the question of the distant Manchuria. Thus, Japan had its way. Flushed with an easy victory over Manchuria, the Japanese military leaders made up their minds to take over actual control of the administration into their hands. Thus, in 1932, amidst a wave of political murders and assassinations including that of the Prime Minister, the militarists succeeded in taking things into their hands.
The Constitutional and civil authorities ceased to function and Japan became a Semi-Fascist State. She now looked at the mainland of China like a hungry vulture. She had already left the League of Nations and was not afraid of a censure there. To ward off the possibility of Russian intervention, Japan entered into an Anti-Comintern Pact with Germany. She then fell upon her helpless victim in 1937 and started a systematic conquest of the mainland. Japan entered World War II in 1941 with an attack on Pearl Harbour.
Aggressive Intentions of Fascist Italy
Italy, too, was not satisfied with the Peace Settlement. She regarded her gains as inadequate. Her ambition was to carve out a large African empire. Mussolini established a Fascist regime in Italy and thus gave the lead to the rest of Europe to follow his example.
In 1926, Italy established a protectorate over Albania. Italian aggression became naked when, unprovoked, she attacked Abyssinia in 1935. She succeeded in conquering this African state in spite of opposition from League circles.
Italy, too, quit the League, because her policies and aims were incompatible with the Covenant of the League. With these moves, the world was slowly drifting towards a general war.
In 1937, Italy joined the Berlin-Tokyo Anti-Comintern Pact, thus forming the Berlin-Rome-Tokyo Axis. Once again, an aggressive bloc with ulterior motives of world domination had come into existence. In the spring of 1939, Italy forcibly occupied Albania. On May 22, 1939, Italy entered into a ten-year military pact with Germany. The stage was then set for action.
Germany prepares for War under the Nazi regime
Germany had a brief experiment with parliamentary government, but the Weimar Republic failed to deliver the goods to the people. Millions of unemployed and half-starved Germans fell easy prey to the utopian propaganda of the National Socialists, who promised to make Germany great once again. The National Socialists adopted terrorist methods to capture power. Ultimately, Hitler, the leader of the National Socialists, became Chancellor of Germany in January 1933.
That very year, on the failure of the Disarmament Conference, Germany secured the right to re-arm herself, and she quit the League of Nations. On the death of Hindenburg in August 1934, Hitler became President. He became the Fuehrer (leader) and as such emerged as the supreme ruler of Germany. Once again, German policy, as under William II, became aggressive and dominating. While appeasing England with a naval agreement in 1935, Hitler moved German troops into the Rhineland in 1936.
England and France did nothing against this violation of the Treaty of Versailles except send some mild paper protests, which were thrown into the waste-paper basket in Berlin. While professing peaceful aspirations, Hitler at the same time started preparations for a second war. Germany had begun building a powerful fighting force armed with tanks and war-planes. In 1936, Hitler denounced the Locarno Pact. An Anti-Comintern Pact was also signed with Japan in 1936, and Italy joined it the next year. Thus Germany was on the warpath.
The Anglo-French Weakness Responsible for War
A part of the blame for the war of 1939-45 may be laid on England and France. They failed to stop and check things when they could easily have done so, and for this reason they were responsible for undermining the authority of the League of Nations too. Japan was allowed to swallow Manchuria; England and France forgot that, in a changing world, their defensive line could lie in far-off Manchuria. Similarly, when Italy attacked Abyssinia, France looked on in the hope that Italy would ultimately be her friend.
However, Mussolini had already decided to change course: he went instead towards Berlin. He left the League and even denounced the Franco-Italian Treaty. Again, it was sheer vacillation on the part of England and France when they failed to take any effective step in 1936 against the re-militarization of the Rhineland by Germany. It seems England and France indulged in some wishful thinking about the prospects of a Russo-German war. They failed to realize and foresee the shape of things to come. Similarly, the Western democracies did not respond to the call of the "Loyalists" in the Spanish Civil War.
On the other hand, the victory of General Franco was largely due to the help rendered by Germany and Italy. In this way, Hitler and Mussolini added another member to their family of dictators. Thus, England and France, through their vacillating policy, failed miserably to check the rising tide of the Fascist regimes in Europe.
The failure of the League brings War nearer
The failure of the League was another cause of the Second World War. The aim of the League of Nations was to preserve the peace of the world, but with two Great Powers, the United States and Soviet Russia, outside it, it was bound to fail. The Soviet Union joined it too late, when it had already become defunct. Moreover, the League had no armed forces, and its decisions had to be unanimous.
The League was effective when it dealt with small powers; when the Great Powers were involved, it could do nothing. One after the other, the Great Powers openly flouted its authority. Japan gave the first shock in 1931 when she attacked Manchuria. Italy then challenged the League and secured her objective, Abyssinia, in 1936. Thus, the authority and prestige of the League had come to an end, and all three great dictatorial powers, Germany, Japan and Italy, had left it by 1936.
The small nations had lost all confidence in the effectiveness of the collective security under the Covenant of the League. Thus, the last hopes of mankind for peace had vanished with the failure of the League.
Hitler annexes Austria and Czechoslovakia
Hitler was very clear about his objectives even before coming to power. He had visions of a “Greater Germany” consisting of all the Germans living on the Continent; and contiguous to Germany. He had promised his people that he would unite the Germans under one flag. On assuming power, Hitler stirred up trouble in Austria. The National Socialists of Austria took orders from him. Notwithstanding the fact that in 1936 he had assured Austria “complete sovereignty”, he took steps for its amalgamation with Germany.
Two years later, in March 1938, Hitler occupied Austria forcibly. The provisions of the Treaty of Versailles could not preserve Austria's separate existence for long. There was no opposition to Hitler's decree that "Austria is a land of the German Reich".
Hitler's next objective was Czechoslovakia, and for that purpose the Sudeten German Party stood ready to help him. Hitler demanded the German-majority areas of Czechoslovakia and threatened war on that country if his demands were not met by the end of September 1938. Once again, England and France wavered over making war on the question of Czechoslovakia; possibly, they were not ready for it.
While Russia favored a stiff attitude towards Germany, the British Prime Minister, Chamberlain, flew to Germany and appeased Hitler at Munich (September 1938). Chamberlain brought peace back to England, but it was peace without honor. Hitler occupied the Sudetenland and, a little later, declared a protectorate over the whole of Czechoslovakia. The Western democracies looked on impotently, and the small nations lost all confidence in the guarantees and promises of these powers. Thus, the world faced a grim situation at the end of 1938; war could come at any time. From the above points, we can see that Hitler's aggression was one of the main causes of the Second World War.
England and France awake from their lethargy
England realized, on Hitler's annexation of Czechoslovakia, that there could be no end to the ambitions of the unscrupulous dictator and that her policy of appeasement would not pay any dividends. There was thus a momentous change in the foreign policy of Britain: she was now ready to check German aggression in any part of Europe.
The historic declaration of the new British foreign policy came in the words that Britain had been "united from end to end in a conviction that we must now make our position clear and unmistakable, whatever may be the result. We welcome the co-operation of any country, whatever may be its internal system of government, not in aggression but in resistance to aggression." This was the first clear indication that Britain meant business. She entered into alliances with, and gave guarantees to, the small powers of Europe (Greece, Romania, Poland, and Turkey) and declared that "in the event of any action being taken which clearly threatens their independence, and which their Governments consider it vital to resist with their national forces, His Majesty's Government will feel bound to lend at once all the support in their power".
France stood by Britain in this hour of trial and entered into a close military alliance with her, thus forming the so-called "Grand Alliance" for the preservation of peace. Once again, the powers of Europe were divided into two hostile camps, and war could break out at any moment.
German Attack on Poland: One of the Main Causes of the Second World War
In May 1939, Germany signed a military alliance with Italy. Germany was now fully ready for war; however, she did not want to fight on two fronts. She startled the world by signing a Non-Aggression Pact with the Soviet Union in August 1939. This was the signal for war, and it was a masterstroke of German diplomacy: Germany succeeded in driving a wedge between her enemies. Hitler's intention was to finish off England and France first and then to fall upon Russia.
Soviet Russia, for her part, had been disgusted by the Anglo-French attitude over Austria and Czechoslovakia and did not put any faith in their words. Hitler now made his demands on Poland: the restoration of Danzig, and a strip of territory across the Polish Corridor in order to build a rail and motor road to East Prussia. He attacked Poland on September 1, 1939, at five o'clock, without giving Poland a chance to negotiate. Poland, as it happened, was the unhappy victim of the critical move in this ghastly game. As a result of Hitler's attack on Poland, England and France declared war on Germany on September 3, 1939. The Second World War had begun.
Summary of the Causes of the Second World War
The causes of the Second World War were born of the First World War, just as the latter's causes lay in the policy of Bismarck from 1870. The period between the two World Wars has been termed the "twenty years' truce". The dictated peace of Versailles did not satisfy the Germans. Along with this treaty, the appeasement policy, ideology, the failure of disarmament and collective security, the isolation of the United States, and a series of crises led to the war in 1939.
The Treaty of Versailles contained the seeds of the Second World War. The Diktat, or dictated peace, angered the Germans. The war-guilt clause, the territorial re-adjustments, and the economic provisions of the treaty were insulting to Germany, and it alone had to undertake disarmament. The victors of the First World War did not observe the principle of reciprocity. Nazi Germany under Hitler denounced the treaty and violated its provisions. Hitler understood the psychology of the Germans and used the treaty as a weapon for his aggressive designs. France and Britain were silent when the provisions were violated; it was too late when they finally reacted.
The failure of collective security was also responsible for the Second World War. The League of Nations was responsible for maintaining peace, but the Covenant of the League was violated and Article 10 became a dead letter. The League's system of collective security failed. In spite of the Locarno Pact and the Kellogg-Briand Pact, the world was moving towards disaster in the 1930s. The League of Nations remained a silent spectator when Japan invaded Manchuria and Italy invaded Ethiopia. The failure of the League led to another war.
With the failure of collective security, nations began to rearm themselves. The World Disarmament Conference of February 1932 failed, and Germany violated the disarmament clauses of the Treaty of Versailles. It withdrew from the Disarmament Conference in 1933. The viewpoints of France and Germany were opposed: the former wanted to base security upon military strength, while Germany wanted a reduction in the French army. After the failure of disarmament, the nations increased their expenditure on arms, and the world marched towards the inevitable Second World War.
The United States did not sign the Treaty of Versailles, nor was it a member of the League. This isolation of the United States increased French fears, for the French had hoped that the American presence would restrain Germany. In the inter-war period, the absence of the United States made the peace-keeping body (the League) ineffective. Perhaps Hitler would not have followed an aggressive foreign policy if the United States had been there to check it.
The fear of communist Russia made the West treat Stalin as a greater danger than Hitler. Politicians in Britain and France hated the revolutionary path of the Soviet Union and were afraid of communist agitation among their own people. The West failed to understand that the policies of Germany and Italy were directed against both communist Russia and the democracies. Even the Russian desire for an alliance with Britain came to nothing, as the latter did not want it. The result was the Nazi-Soviet non-aggression pact, which secured for Hitler a promise of non-aggression from Stalin. He then attacked the Western democracies.
The road to war in 1939 went through several steps. The Japanese seizure of Manchuria in 1931 was opposed by verbal protest only; neither the United States nor the League acted effectively, and Japan withdrew from the League in 1933. France and Britain wanted to appease Mussolini. In 1935, he began the invasion of Ethiopia. The Italian troop movement through the Suez Canal was not checked, and Article 16 of the Covenant, providing for economic sanctions, was not applied rigorously. The Laval-Mussolini Pact of 1935 had also improved relations between France and Italy. Mussolini occupied Ethiopia. Afterward came the Rome-Berlin Axis of October 1936. Italy left the League in 1937. Italy and Germany also supported the anti-democratic forces of Franco in the Spanish Civil War.
The final step to the Second World War was Hitler's attack on Poland, but before it he had taken steps that brought the world nearer to war. His "Third Reich" had denounced the clauses of the Treaty of Versailles limiting German armaments. Hitler sent troops into the demilitarized Rhineland in March 1936. He signed a treaty with Mussolini, the Rome-Berlin Axis of October 1936, and concluded a treaty with Japan, known as the Anti-Comintern Pact, directed against the Soviet Union; Italy joined it in 1937, and the Rome-Berlin-Tokyo Axis came into being. Hitler was also encouraging Nazi agitation in Austria for annexation to Germany, and in March 1938 he sent troops into Austria and made the annexation, or Anschluss, a fact. Germany became more powerful: its economic power and armed forces increased, and it controlled the communications of South-Eastern Europe by rail, river, and road.
Hitler's next target was Czechoslovakia, the pretext being the self-determination of the Sudeten Germans. The British and French wanted to appease Hitler. Neville Chamberlain, Daladier, Hitler, and Mussolini met in Munich on 29th September 1938, and Czechoslovakia was dismembered: the Sudeten area came under Germany, and Slovakia became autonomous. Munich was a bitter tragedy for the Czechs. In March 1939, Hitler showed his real intentions and occupied the rest of Czechoslovakia.
The policy of appeasement was thus a failure that led to another world war. Its motive had been to satisfy the dictators with Ethiopia, Austria, and Czechoslovakia, but Hitler was not at all satisfied with Czechoslovakia. He next turned towards Poland.
Poland became the final step towards the Second World War. The Polish Corridor separating East Prussia from the rest of Germany was seen as an insult by the Germans, and Danzig too was separated from the Reich. Britain had signed a pact of mutual assistance with Poland in April 1939. To remove the danger from the east, Hitler signed a non-aggression pact with the Soviet Union in August 1939. He attacked Poland on 1st September 1939, and on 3rd September Britain and France declared war on Germany. Thus, the Second World War began.
Conclusion on the Causes of the Second World War
After the First World War, the League of Nations was formed with the object of preventing wars internationally. But it became only a nominal body, for Germany, Italy, and Japan set aside its terms, and the League could not restrain their aggressive policies. As soon as Hitler came to power, Germany withdrew from the League. It remilitarized the Rhineland in 1936. The greatest event of the period was the formation of the Axis Powers, completed when Italy joined the Anti-Comintern Pact in 1937.
During the Civil War in Spain, the German army marched into Austria and occupied it. By the end of 1938, Germany had violated practically all the terms of the Treaty of Versailles. In August 1939, Germany and the Soviet Union signed a non-aggression pact. At the same time, Great Britain and France made declarations guaranteeing the integrity and independence of Poland, Greece, Romania, and Turkey. In 1939, Germany annexed the rest of Czechoslovakia and Italy occupied Albania.
Germany demanded, in March 1939, the return of Danzig and the right to maintain a rail and motor road across the Polish Corridor to East Prussia. On August 29, 1939, Germany demanded that a Polish delegate with full powers be sent to Berlin for the settlement of all German-Polish differences, and further stipulated that the delegate should reach Berlin by August 30. It was an unreasonable demand, intended to give Poland no time. Before Poland could send any reply, Germany had cut all communication lines. At 4:45 am on September 1, Germany, without declaring war, invaded Poland by air and land. On September 3, Great Britain declared war on Germany, and the French Government followed suit. Thus, Europe entered the Second World War exactly twenty-five years and one month after the outbreak of the First World War.
Source: Mohammed Rafi Komol & O. Jnanendra Singh, A Guide to History of Modern Europe 1789-1945 (Imphal: Khumanthem Babudhon, 2018).
Deep into the nineteenth century, Native Americans still dominated the vastness of the American West. Linked culturally and geographically by trade, travel, and warfare, various indigenous groups controlled most of the continent west of the Mississippi River. Spanish, French, British, and later American traders had integrated themselves into many regional economies, and American emigrants pushed ever westward, but no imperial power had yet achieved anything approximating political or military control over the great bulk of the continent. Then the Civil War came and went, decoupling the West from the question of slavery just as the United States industrialized, laid down rails, and pushed its expanding population ever farther west.
Indigenous Americans had claimed North America for over ten millennia and, into the late-nineteenth century, perhaps as many as 250,000 natives still claimed the American West. But then unending waves of American settlers, the American military, and the unstoppable onrush of American capital conquered all. The United States removed native groups to ever-shrinking reservations, incorporated the West first as territories and then as states, and, for the first time in its history, controlled the enormity of land between the two oceans.
The history of the late-nineteenth-century West is many-sided. Tragedy for some, triumph for others, the many intertwined histories of the American West marked a pivotal transformation in the history of the United States.
II. Post-Civil War Westward Migration
In the decades after the Civil War, Americans poured across the Mississippi River in record numbers. No longer simply crossing over the continent for new imagined Edens in California or Oregon, they settled now in the vast heart of the continent.
Many of the first American migrants had come to the West in search of quick profits during the mid-century gold and silver rushes. As in the California rush of 1848–49, droves of prospectors poured in after precious-metal strikes in Colorado in 1858, Nevada in 1859, Idaho in 1860, Montana in 1863, and the Black Hills in 1874. While women often performed housework that allowed mining families to subsist in often difficult conditions, a significant portion of the mining workforce were single men without families who depended on service industries in nearby towns and cities. There, working-class women worked in shops, saloons, boarding houses, and brothels. It was often these ancillary operations that profited from the mining boom: as failed prospectors often found, the rush itself generated more wealth than the mines. The gold that left Colorado in the first seven years after the Pike’s Peak gold strike—estimated at $25.5 million—was, for instance, less than half of what outside parties had invested in the fever, and the 100,000-plus migrants who settled in the Rocky Mountains were ultimately more valuable to the region’s development than the gold they came to find.
Others came to the Plains to extract the hides of the great bison herds. Millions of animals had roamed the Plains, but their tough leather supplied industrial belting in eastern factories and raw material for the booming clothing industry. Specialized teams took down and skinned the herds. The infamous American bison slaughter peaked in the early 1870s. The number of American bison plummeted from over 10 million at mid-century to only a few hundred by the early 1880s. The expansion of the railroads would allow ranching to replace the bison with cattle on the American grasslands.
It was land, ultimately, that drew the most migrants to the West. Family farms were the backbone of the agricultural economy that expanded in the West after the Civil War. In 1862, northerners in Congress passed the Homestead Act, which allowed male citizens (or those who declared their intent to become citizens) to claim federally-owned lands in the West. Settlers could head west, choose a 160-acre surveyed section of land, file a claim, and begin “improving” the land by plowing fields, building houses and barns, or digging wells; after five years of living on the land, they could apply for the official title deed. Hundreds of thousands of Americans used the Homestead Act to acquire land. The treeless plains that had been considered unfit for settlement became the new agricultural mecca for land-hungry Americans.
The Homestead Act excluded married women from filing claims because they were considered the legal dependents of their husbands. Some unmarried women filed claims on their own, but single farmers (male or female) were hard-pressed to run a farm and they were a small minority. Most farm households adopted traditional divisions of labor: men worked in the fields and women managed the home and kept the family fed. Both were essential.
Migrants sometimes found in homesteads a self-sufficiency denied at home. Second or third sons who did not inherit land in Scandinavia, for instance, founded farm communities in Minnesota, Dakota, and other Midwestern territories in the 1860s. Boosters encouraged emigration by advertising the semiarid Plains as, for instance, “a flowery meadow of great fertility clothed in nutritious grasses, and watered by numerous streams.” Western populations exploded. The Plains were transformed. In 1860, for example, Kansas had about 10,000 farms; in 1880 it had 239,000. Texas saw especially enormous population growth: the federal government counted 200,000 persons in Texas in 1850, 1,600,000 in 1880, and 3,000,000 in 1900, making it the sixth most populous state in the nation.
III. The Indian Wars and Federal Peace Policies
The “Indian wars,” so mythologized in western folklore, were a series of sporadic, localized, and often brief engagements between U.S. military forces and various Native American groups. The more sustained and more impactful conflict, meanwhile, was economic and cultural. The vast and cyclical movement across the Great Plains to hunt buffalo, raid enemies, and trade goods was incompatible with new patterns of American settlement and railroad construction. Thomas Jefferson’s old dream that Indian groups might live isolated in the West was, in the face of American expansion, no longer a viable reality. Political, economic, and even humanitarian concerns intensified American efforts to isolate Indians on reservations. Although Indian removal had long been a part of federal Indian policy, following the Civil War the U.S. government redoubled its efforts. If treaties and other forms of persistent coercion would not work, more drastic measures were deemed necessary. Against the threat of confinement and the extinction of traditional ways of life, Native Americans battled the American army and the encroaching lines of American settlement.
In one of the earliest western engagements, in 1862, while the Civil War still consumed the nation, tensions erupted between Dakota Sioux and white settlers in Minnesota and the Dakota Territory. The 1850 U.S. census recorded a white population of about 6,000 in Minnesota; eight years later, when it became a state, it was more than 150,000. The influx of American farmers pushed the Sioux to the breaking point. Hunting became unsustainable and those Sioux who had taken up farming found only poverty. Starvation wracked many. Then, on August 17, 1862, four young men of the Santee band of Sioux killed five white settlers near the Redwood Agency, an American administrative office. In the face of an inevitable American retaliation, and over the protests of many members, the tribe chose war. On the following day, Sioux warriors attacked settlements near the Agency. They killed 31 men, women, and children. They then ambushed a U.S. military detachment at Redwood Ferry, killing 23. The governor of Minnesota called up militia and several thousand Americans waged war against the Sioux insurgents. Fighting broke out at New Ulm, Fort Ridgely, and Birch Coulee, but the Americans broke the Indian resistance at the Battle of Wood Lake on September 23, ending the so-called Dakota War, also known as the Sioux Uprising.
More than two thousand Sioux had been taken prisoner during the fighting. Many were tried at federal forts for murder, rape, and other atrocities; 303 were found guilty and sentenced to hang, but at the last moment President Lincoln commuted all but 38 of the sentences. Terrified Minnesota settlers and government officials insisted not only that the Sioux lose much of their reservation lands and be removed further west, but that those who had fled be hunted down and placed on reservations as well. On September 3, 1863, after a year of attrition, American military units surrounded a large encampment of Dakota Sioux. American troops killed an estimated 300 men, women, and children. Dozens more were taken prisoner. Troops spent two days burning winter food and supply stores, all to pacify the Sioux resistance. Conflict still smoldered for decades.

Further south, tensions flared in Colorado. In 1851, the Treaty of Fort Laramie had secured right-of-way access for Americans passing through on their way to California and Oregon. But a gold rush in 1858 drew approximately 100,000 white goldseekers, who demanded new treaties be made with local Indian groups to secure land rights in the newly created Colorado Territory. Cheyenne bands splintered over the possibility of signing a new treaty that would confine them to a reservation. Settlers, already wary of raids by powerful groups of Cheyennes, Arapahos, and Comanches, meanwhile read in their local newspapers sensationalist accounts of the Sioux uprising in Minnesota. Militia leader John M. Chivington warned settlers in the summer of 1864 that the Cheyenne were dangerous savages, urged war, and promised a swift military victory. Sporadic fighting broke out. Although Chivington warned of Cheyenne savagery, the aged Cheyenne chief Black Kettle, believing that a peace treaty would be best for his people, traveled to Denver to arrange for peace talks.
He and his followers traveled toward Fort Lyon in accordance with government instructions, but on November 29, 1864, Chivington ordered his seven hundred militiamen to move on the Cheyenne camp near Fort Lyon at Sand Creek. The Cheyenne tried to declare their peaceful intentions but Chivington’s militia cut them down. It was a slaughter. Black Kettle and about two hundred other men, women, and children were killed. The Sand Creek Massacre was a national scandal, alternately condemned and applauded. News of the massacre reached other native groups and the American frontier erupted into conflict. Americans pushed for a new “peace policy.” Congress, confronted with these tragedies and further violence, authorized in 1868 the creation of an Indian Peace Commission. The commission’s study of American Indian affairs decried prior American policy and galvanized support for reformers. After the inauguration of Ulysses S. Grant the following spring, Congress allied with prominent philanthropists to create the Board of Indian Commissioners, a permanent advisory body to oversee Indian affairs and prevent the further outbreak of violence.

The Board effectively Christianized American Indian policy. Much of the reservation system was handed over to Protestant churches, which were tasked with finding agents and missionaries to manage reservation life. Congress hoped that religiously-minded men might fare better at creating just assimilation policies and persuading Indians to accept them. Historian Francis Paul Prucha believed that this attempt at a new “peace policy… might just have properly been labelled the religious policy.” Many female Christian missionaries played a central role in cultural re-education programs that attempted not only to instill Protestant religion but also to impose traditional American gender roles and family structures. They endeavored to replace Indians’ tribal social units with small, patriarchal households.
Women’s labor became a contentious issue, for very few tribes divided labor according to white middle-class gender norms. Fieldwork, the traditional domain of white males, was primarily performed by native women, who also usually controlled the products of their labor, if not the land that was worked, giving them status in society as laborers and food providers. For missionaries, the goal was to get native women to leave the fields and engage in more proper “women’s” work: housework.
Christian missionaries performed much as secular federal agents had. Few American agents could meet Native Americans on their own terms. Most viewed reservation Indians as lazy and thought of Native cultures as inferior to their own. The views of J. L. Broaddus, appointed to oversee several small Indian tribes on the Hoopa Valley reservation in California, are illustrative: in his annual report to the Commissioner of Indian Affairs for 1875, he wrote, “the great majority of them are idle, listless, careless, and improvident. They seem to take no thought about provision for the future, and many of them would not work at all if they were not compelled to do so. They would rather live upon the roots and acorns gathered by their women than to work for flour and beef.”
If the Indians could not be persuaded through kindness to change their ways, most agreed that it was acceptable to use force, which native groups resisted. In Texas and the Southern Plains, the fierce Comanche, Kiowa, and their allies had wielded enormous influence. The Comanche in particular controlled huge swaths of territory and raided vast areas, inspiring terror from the Rocky Mountains to the interior of northern Mexico to the Texas Gulf Coast. But after the Civil War, the U.S. military refocused its attention on the Southern Plains.
The American military first sent messengers to the Plains to find the elusive Comanche bands and ask them to come to peace negotiations at Medicine Lodge Creek in the fall of 1867. But terms were muddled: American officials believed that Comanche bands had accepted reservation life, while Comanche leaders believed they were guaranteed vast lands for buffalo hunting. Comanche bands used designated reservation lands as a base from which to collect supplies and federal annuity goods while continuing to hunt, trade, and raid American settlements in Texas.
Confronted with renewed Comanche raiding, particularly by the famed war leader Quanah Parker, the U.S. military finally proclaimed that all Indians who were not settled on the reservation by the fall of 1874 would be considered “hostile.” The Red River War began when many Comanche bands refused to resettle and the American military launched expeditions into the Plains to subdue them, culminating in the defeat of the remaining roaming bands in the canyonlands of the Texas Panhandle. Cold and hungry, with their way of life already decimated by soldiers, settlers, cattlemen, and railroads, the last free Comanche bands were moved to the reservation at Fort Sill, in what is now southwestern Oklahoma.
On the northern Plains, the Sioux people had yet to fully surrender. Following the troubles of 1862, many bands had signed treaties with the United States and drifted into the Red Cloud and Spotted Tail agencies to collect rations and annuities, but a large number refused to sign and remained fiercely independent, continuing to resist American encroachment. These “non-treaty” Indians, such as those led by the famous chiefs Sitting Bull and Crazy Horse, saw no reason to sign treaties that they believed would not be fully honored.
Then, in 1874, an American expedition to the Black Hills of South Dakota discovered gold. White prospectors flooded the territory. Caring very little about Indian rights, and very much about getting rich, they brought the Sioux situation again to its breaking point. Aware that U.S. citizens were violating treaty provisions, but unwilling to prevent them from searching for gold, federal officials pressured the western Sioux to sign a new treaty that would transfer control of the Black Hills to the United States while General Philip Sheridan quietly moved U.S. troops into the region. Initial clashes between U.S. troops and Sioux warriors resulted in several Sioux victories that, combined with the visions of Sitting Bull, who had dreamed of an even more triumphant victory, attracted Sioux bands who had already signed treaties but now joined to fight.
In late June 1876, a division of the 7th Cavalry Regiment led by Lieutenant Colonel George Armstrong Custer was sent up a trail into the Black Hills as an advance guard for a larger force. Custer’s men approached the village known to the Sioux as Greasy Grass, but marked on Custer’s map as Little Bighorn, and found that the influx of “treaty” Sioux, as well as aggrieved Cheyenne and other allies, had swelled the population of the village far beyond Custer’s estimation. Custer’s 7th Cavalry was vastly outnumbered, and he and 268 of his men were killed.
Custer’s fall shocked the nation. Cries for swift reprisals filled the public sphere and military expeditions were sent out to crush native resistance. The Sioux splintered off into the wilderness and began a campaign of intermittent resistance but, outnumbered and suffering after a long, hungry winter, Crazy Horse led a band of Oglala Sioux to surrender in May of 1877. Other bands gradually followed until finally, in July 1881, Sitting Bull and his followers at last laid down their weapons and came to the reservation. Indigenous powers had been defeated. The Plains, it seemed, had been pacified.
IV. Western Economic Expansion: Railroads and Cattle
As native peoples were pushed out, American settlers poured in. Aside from agriculture and the extraction of natural resources—such as timber and precious metals—two major industries fueled the new western economy: ranching and railroads. Both developed in connection with each other and both shaped the collective American memory of the post-Civil War “Wild West.”
As one booster put it, “the West is purely a railroad enterprise.” No economic enterprise rivaled the railroads in scale, scope, or sheer impact. No other businesses attracted such enormous sums of capital, and no other ventures ever received such lavish government subsidies (business historian Alfred Chandler called the railroads the “first modern business enterprise”). By “annihilating time and space” and connecting the vastness of the continent, the railroads transformed the United States and made the American West.
No railroad enterprise so captured the American imagination—or federal support—as the transcontinental railroad. The transcontinental railroad crossed western plains and mountains and linked the West Coast with the rail networks of the eastern United States. Constructed from the west by the Central Pacific and from the east by the Union Pacific, the two roads were linked in Utah in 1869 to great national fanfare. But such a herculean task was not easy, and national legislators threw enormous subsidies at railroad companies, a part of the Republican Party platform since 1856. The 1862 Pacific Railroad Act gave bonds of between $16,000 and $48,000 for each mile of construction and provided vast land grants to railroad companies. Between 1850 and 1871 alone, railroad companies received more than 175,000,000 acres of public land, an area larger than the state of Texas. Investors reaped enormous profits. As one congressional opponent put it in the 1870s, “If there be profit, the corporations may take it; if there be loss, the Government must bear it.”
If railroads attracted unparalleled subsidies and investments, they also created enormous labor demands. By 1880, approximately 400,000 men—or nearly 2.5% of the nation’s entire workforce—labored in the railroad industry. Much of the work was dangerous and low-paying and companies relied heavily on immigrant labor to build tracks. Companies employed Irish workers in the early-nineteenth century and Chinese workers in the late-nineteenth. By 1880, over 200,000 Chinese migrants lived in the United States. Once the rails were laid, companies still needed a large workforce to keep the trains running. Much railroad work was dangerous, but perhaps the most hazardous work was done by brakemen. Before the advent of automatic braking, an engineer would blow the “down brake” whistle and brakemen would scramble to the top of the moving train, regardless of the weather conditions, and run from car to car manually turning brakes. Speed was necessary, and any slip could be fatal. Brakemen were also responsible for “coupling” the cars, attaching them together with a large pin. It was easy to lose a hand or finger, and even a slight mistake could cause cars to collide.
The railroads boomed. In 1850, there were 9,000 miles of railroads in the United States. In 1900 there were 190,000, including several transcontinental lines. To manage these vast networks of freight and passenger lines, companies converged rails at hub cities. Of all the Midwestern and Western cities that blossomed from the bridging of western resources and eastern capital in the late nineteenth century, Chicago was the most spectacular. It grew from 200 inhabitants in 1833 to over a million by 1890. By 1893 it and the region from which it drew were completely transformed. The World’s Columbian Exposition that year trumpeted the city’s progress, and broader technological progress, with typical Gilded Age ostentation. A huge, gleaming (but temporary) “White City” was built in neoclassical style to house all the features of the fair and cater to the needs of the visitors who arrived from all over the world. Highlighted in the title of this world’s fair were the changes that had overtaken North America since Columbus made landfall four centuries earlier. Chicago became the most important western hub, and served as the gateway between the farm and ranch country of the Great Plains and eastern markets. Railroads brought cattle from Texas to Chicago for slaughter, where they were then processed into packaged meats and shipped by refrigerated rail to New York City and other eastern cities. Such hubs became the central nodes in a rapid-transit economy that increasingly spread across the entire continent linking goods and people together in a new national network.
It was this national network that created the fabled cattle drives of the 1860s and 1870s. The first cattle drives across the central Plains began soon after the Civil War. Railroads created the market for ranching: in the first years after the war, railroads connected eastern markets with important hubs such as Chicago but had yet to reach Texas ranchlands, so ranchers began driving cattle north, out of the Lone Star State, to major railroad terminuses in Kansas, Missouri, and Nebraska. Ranchers used well-worn trails, such as the Chisholm Trail, for drives, but conflicts arose with Native Americans in the Indian Territory and farmers in Kansas who disliked the intrusion of large and environmentally destructive herds onto their own hunting, ranching, and farming lands. Other trails, such as the Western Trail, the Goodnight-Loving Trail, and the Shawnee Trail, were therefore blazed.
Cattle drives were difficult tasks for the motley crews of men who managed the herds. Historians struggle to estimate the number of men who worked as cowboys in the late nineteenth century, but counts range from 12,000 to as many as 40,000. Most were young. Perhaps a fourth were African American, and more were likely Mexican or Mexican American. (The American cowboy was an evolution of the Spanish, and later Mexican, vaquero: cowboys adopted Mexican practices, gear, and terms, such as “rodeo,” “bronco,” and “lasso.”) There are at least sixteen verifiable accounts of women participating in the drives. Some, like Molly Dyer Goodnight, were known to have accompanied their husbands. Others, like Lizzie Johnson Williams, helped drive their own herds. Williams made at least three known trips with her herds up the Chisholm Trail. Most, though, were young men, many hoping one day to become ranch owners themselves. But it was tough work. Cowboys endured low wages, long hours, and uneven work; they faced extremes of heat, cold, and sometimes bouts of intense blowing dust; and they subsisted on limited diets with irregular supplies. Fluctuations in the cattle market made employment insecure and wages were almost always abysmally low. Beginners could expect to earn around $20-25 per month, and those with years of experience might earn $40-45. Trail bosses could sometimes earn over $50 per month.
But if cattle workers received low wages, owners and investors could reap riches. At the end of the Civil War, a $4 steer in Texas could fetch $40 in Kansas. Prices began equalizing, but large profits could still be made. And yet, by the 1880s, the great cattle drives were largely done. The railroads had created them, and the railroads ended them: railroad lines pushed into Texas and made the great drives obsolete. But ranching still brought profits; the Plains were better suited for grazing than for agriculture, and western ranchers continued supplying beef for national markets.
Ranching was just one of many western industries that depended upon the railroads. By linking the Plains with national markets and moving millions, the railroads made the modern American West.
V. The Allotment Era and Resistance in the Native West
As the rails moved into the West, and more and more Americans followed, the situation for native groups deteriorated even further. Treaties negotiated between the United States and Native groups had typically promised that if tribes agreed to move to specific reservation lands, they would hold those lands collectively. But as American westward migration mounted, and open lands closed, white settlers began to argue that Indians had more than their fair share of land, that the reservations were too big and that Indians were using the land “inefficiently,” that they still preferred nomadic hunting instead of intensive farming and ranching.
By the 1880s, Americans increasingly championed legislation to allow the transfer of Indian lands to farmers and ranchers while many argued that allotting Indian lands to individual Native Americans, rather than to tribes, would encourage American-style agriculture and finally put Indians who had previously resisted the efforts of missionaries and federal officials on the path to “civilization.”
Passed by Congress on February 8, 1887, the Dawes General Allotment Act splintered Native American reservations into individual family homesteads. Each head of a Native family was to be allotted 160 acres, the typical size of a claim that any settler could establish on federal lands under the provisions of the Homestead Act. Single individuals over the age of 18 would receive an 80-acre allotment, and orphaned children received 40 acres. A four-year timeline was established for Indian peoples to make their allotment selections. If at the end of that time no selection had been made, the Act authorized the Secretary of the Interior to appoint an agent to make selections for the remaining tribal members. To protect Indians from being swindled by unscrupulous land speculators, all allotments were to be held in trust—they could not be sold by allottees—for 25 years. Lands that remained unclaimed by tribal members after allotment would revert to federal control and be sold to American settlers.
Americans touted the Dawes Act as an uplifting humanitarian reform, but it upended Indian lifestyles and left Indian groups without sovereignty over their lands. The act claimed that to protect Indian property rights, it was necessary to extend “the protection of the laws of the United States… over the Indians.” Tribal governments and legal principles could be superseded, or dissolved and replaced, by U.S. laws. Under the terms of the Dawes Act, native groups struggled to hold on to some measure of tribal sovereignty.
The stresses of conquest unsettled generations of Native Americans. Many took comfort from the words of prophets and holy men. In Nevada, on January 1, 1889, Northern Paiute prophet Wovoka experienced a great revelation. He had traveled, he said, from his earthly home in western Nevada to heaven and returned during a solar eclipse to prophesy to his people. “You must not hurt anybody or do harm to anyone. You must not fight. Do right always,” he exhorted. And they must, he said, participate in a religious ceremony that came to be known as the Ghost Dance. If the people lived justly and danced the Ghost Dance, Wovoka said, their ancestors would rise from the dead, droughts would dissipate, the whites in the West would vanish, and the buffalo would once again roam the Plains.
Native American prophets had often confronted American imperial power. Some prophets, including Wovoka, incorporated Christian elements like heaven and a Messiah figure into indigenous spiritual traditions. Though it was far from unique, Wovoka’s prophecy nevertheless caught on quickly and spread beyond the Paiutes. From across the West, members of the Arapaho, Bannock, Cheyenne, and Shoshone nations, among others, adopted the Ghost Dance religion. Perhaps the most avid Ghost Dancers—and certainly the most famous—were the Lakota Sioux.
The Lakota Sioux were in dire straits. South Dakota, formed out of land that had once belonged by treaty to the Lakotas, became a state in 1889. White homesteaders had poured in, reservations were carved up and diminished, starvation set in, corrupt federal agents cut food rations, and drought hit the Plains. Many Lakotas feared a future as the landless subjects of a growing American empire when a delegation of eleven men, led by Kicking Bear, joined Ghost Dance pilgrims on the rails westward to Nevada and returned to spread the revival in the Dakotas.
The energy and message of the revivals frightened Indian agents, who began arresting Indian leaders. Chief Sitting Bull, along with several other Indians and whites, was killed in December 1890 during a botched arrest, convincing many bands to flee the reservations and join the fugitive bands further west, where Lakota adherents of the Ghost Dance were preaching that the Ghost Dancers would be immune to bullets.
Two weeks later, an American cavalry unit intercepted a band of 350 Lakotas, including over 100 women and children, under the chief Spotted Elk (later known as Bigfoot). They were escorted to Wounded Knee Creek, where they encamped for the night. The following morning, December 29, the American cavalrymen entered the camp to disarm Spotted Elk’s band. Tensions flared, a shot was fired, and a skirmish became a massacre. The Americans fired their heavy weaponry indiscriminately into the camp. Two dozen cavalrymen had been killed by the Lakotas’ concealed weapons or by friendly fire, but, when the guns went silent, between 150 and 300 native men, women, and children were dead.
Wounded Knee marked the end of sustained Native American resistance in the West. Individuals would continue to resist the pressures of assimilation and preserve traditional cultural practices, but sustained military defeats, the loss of sovereignty over land and resources, and the onset of crippling poverty on the reservations marked the final decades of the nineteenth century as a particularly dark era for America’s western tribes. But, for Americans, it became mythical.
VI. Rodeos, Wild West Shows, and the Mythic American West
“The American West” conjures visions of tipis, cabins, cowboys, Indians, farm wives in sunbonnets, and outlaws with six-shooters. Such images pervade American culture, but they are as old as the West itself: novels, rodeos, and Wild West shows mythologized the American West throughout the post-Civil War era.
In the 1860s, Americans devoured dime novels that embellished the lives of real-life individuals such as Calamity Jane and Billy the Kid. Owen Wister’s novels, especially The Virginian, established the character of the cowboy as a gritty stoic with a rough exterior but the courage and heroism needed to rescue people from train robbers, Indians, or cattle rustlers. The emergence of the rodeo further reinforced such images, particularly in the West. Rodeos began as small roping and riding contests among cowboys in towns near ranches or at camps at the end of the cattle trails. In Pecos, Texas, on July 4, 1883, cowboys from two ranches, the Hash Knife and the W Ranch, competed in roping and riding contests as a way to settle an argument; historians of the West recognize this event as the first real rodeo. Casual contests evolved into planned celebrations. Many were scheduled around national holidays, such as Independence Day, or during traditional roundup times in the spring and fall. Early rodeos took place in open grassy areas—not arenas—and included calf and steer roping and roughstock events such as bronc riding. They gained popularity, and soon dedicated rodeo circuits developed. Although about 90% of rodeo contestants were men, women helped to popularize the rodeo, and several popular women bronc riders, such as Bertha Kaepernick, entered men’s events until around 1916, when women’s competitive participation was curtailed.

Americans also experienced the “Wild West” imagined in so many dime novels by attending traveling Wild West shows, arguably the unofficial national entertainment of the United States from the 1880s to the 1910s.
Wildly popular across the country, the shows traveled throughout the eastern United States and even across Europe and showcased what was already a mythic frontier life. William Frederick “Buffalo Bill” Cody was the first to recognize the broad national appeal of the stock “characters” of the American West—cowboys, Indians, sharpshooters, cavalry, and rangers—but Cody shunned the word “show” when describing his traveling extravaganza, fearing that it implied exaggeration or misrepresentation of the West. Cody instead dubbed his production “Buffalo Bill’s Wild West” and tried to import actual cowboys and Indians into his productions. But it was still, of course, a show. It was entertainment, little different in its broad outlines from contemporary theater. Operating out of Omaha, Nebraska, Buffalo Bill created his first show in 1883. Storylines, punctuated by “cowboy” moments of bucking broncos, roped cattle, and sharpshooting contests, depicted westward migration, life on the Plains, and Indian attacks.
Buffalo Bill was not alone. Gordon William “Pawnee Bill” Lillie, another popular Wild West showman, got his start in the business in 1886 when Cody employed him as an interpreter for Pawnee members of the show. Lillie went on to create his own production in 1888, “Pawnee Bill’s Historic Wild West.” He was Cody’s only real competitor in the business until 1908, when the two men combined their shows to create a new extravaganza, “Buffalo Bill’s Wild West and Pawnee Bill’s Great Far East” (most just called it the “Two Bills Show”). It was an unparalleled spectacle. The cast included Mexican cowboys, Indian riders and dancers, Russian Cossacks, Japanese acrobats, and aboriginal Australian performers.
Cody and Lillie knew that Native Americans fascinated audiences in the United States and Europe, and both featured them prominently in their Wild West shows. Most Americans believed that Native cultures were disappearing or had already disappeared, and felt a sense of urgency to see their dances, hear their songs, and be captivated by their bareback riding skills and their elaborate buckskin and feather attire. The shows certainly veiled the true cultural and historic value of so many Native demonstrations, and the Indian performers were curiosities to white Americans, but the shows were one of the few ways for many Native Americans to make a living in the late nineteenth century.
In an attempt to appeal to women, Cody recruited Annie Oakley, a female sharpshooter who thrilled onlookers with her many stunts. Her stage name was “Little Sure Shot.” She shot apples off her poodle’s head and the ash off her husband’s cigar, clenched trustingly between his teeth. Gordon Lillie’s wife, May Manning Lillie, also became a skilled shot and performed under the tagline, “World’s Greatest Lady Horseback Shot.” Both women challenged expected Victorian gender roles, but were careful to maintain their feminine identity and dress.
The western “cowboys and Indians” mystique, perpetuated in novels, rodeos, and Wild West shows, was rooted in romantic nostalgia and, perhaps, in the anxieties that many felt in the new “soft” industrial world of factory and office work. The cowboy, who possessed a supposedly ideal blend “of aggressive masculinity and civility,” was the perfect hero for middle class Americans who feared that they “had become over-civilized” and looked longingly to the West.
VII. The West as History: The Turner Thesis
In 1893, the American Historical Association met during that year’s World’s Columbian Exposition in Chicago. The young Wisconsin historian Frederick Jackson Turner presented his “frontier thesis,” one of the most influential theories of American history, in his essay, “The Significance of the Frontier in American History.”
Turner looked back at the historical changes in the West and saw, instead of a tsunami of war and plunder and industry, waves of “civilization” that washed across the continent. A frontier line “between savagery and civilization” had moved west from the earliest English settlements in Massachusetts and Virginia across the Appalachians to the Mississippi and finally across the Plains to California and Oregon. Turner invited his audience to “stand at Cumberland Gap [the famous pass through the Appalachian Mountains], and watch the procession of civilization, marching single file—the buffalo following the trail to the salt springs, the Indian, the fur trader and hunter, the cattle-raiser, the pioneer farmer—and the frontier has passed by.”
Americans, Turner said, had been forced by necessity to build a rough-hewn civilization out of the frontier, giving the nation its exceptional hustle and its democratic spirit and distinguishing North America from the stale monarchies of Europe. Moreover, the style of history Turner called for was democratic as well, arguing that the work of ordinary people (in this case, pioneers) deserved the same study as that of great statesmen. Such was a novel approach in 1893.
But Turner looked ominously to the future. The Census Bureau in 1890 had declared the frontier closed. There was, Turner said, no longer a discernible line running north to south that divided civilization from savagery. Turner worried for the United States’ future: what would become of the nation without the safety valve of the frontier? It was a common sentiment. Theodore Roosevelt wrote to Turner that his essay “put into shape a good deal of thought that has been floating around rather loosely.”
The history of the West was many-sided and it was made by many persons and peoples. Turner’s thesis was rife with faults, not only its bald Anglo Saxon chauvinism—in which non-whites fell before the march of “civilization” and Chinese and Mexican immigrants were invisible—but in its utter inability to appreciate the impact of technology and government subsidies and large-scale economic enterprises alongside the work of hardy pioneers. Still, Turner’s thesis held an almost canonical position among historians for much of the twentieth century and, more importantly, captured Americans’ enduring romanticization of the West and the simplification of a long and complicated story into a march of progress.
Rutgers University physicist Michele Kotiuga and her colleagues studied perovskites with missing oxygen ions in their structure. Such defects are common in perovskites, crystal materials that are widely used in applications from catalysts to electronics and solar cells. The perovskites the team looked at are rare earth nickelates.
“Typically, oxygen vacancies will make a material more conductive,” Kotiuga said. “In this work, we present an example where oxygen vacancies make the material more insulating.”
Oxygen vacancies typically result in travelling electrons that facilitate the flow of current and make the material conductive. The researchers found that with some nickelates, the lack of oxygen ions could help slow or stop the flow of current.
“Based on the number of oxygen vacancies and their distribution, the resistivity changes,” Kotiuga said. “We can manipulate the resistance by one, removing some oxygen and two, applying an electric field to redistribute them. If the vacancies have accumulated at one side of the device, they can be more effective in blocking the flow of current.”
These properties open up interesting possibilities for these rare earth perovskites. For example, oxygen defects in many devices lead to conductivity where it shouldn’t be, shorting out the device. Consequently, manufacturers must go to considerable time and expense to fabricate and optimize their processes to prevent this.
“Using materials where these defects do not lead to conducting states that render the device inoperable would simplify the whole production process,” she said.
Another possibility offered by the “tunable resistance” of materials like these nickelates is in the field of artificial intelligence. Zhen Zhang from Purdue University’s School of Materials Engineering explained how these materials might be used to mimic nerve cells.
“The key feature of synaptic activity in the brain is that the weight of a synapse that connects neurons can be tuned during the learning process,” he said. “The resistance modulation thus can be used in the circuit to mimic the functionality of a synapse.”
In other words, researchers may be able to use these crystals to not only improve electronics, but to mimic the way our brains learn.
Additionally, the same materials that these researchers analyzed could potentially hide objects from infrared cameras.
The team examined the materials using tools at the CLS and the Advanced Photon Source in Illinois and included collaborators from the Massachusetts Institute of Technology and Purdue University.
The team’s research is published as “Carrier localization in perovskite nickelates from oxygen vacancies” in the Proceedings of the National Academy of Sciences (PNAS).
Kotiuga, Michele, Zhen Zhang, Jiarui Li, Fanny Rodolakis, Hua Zhou, Ronny Sutarto, Feizhou He et al. "Carrier localization in perovskite nickelates from oxygen vacancies." Proceedings of the National Academy of Sciences 116, no. 44 (2019): 21992-21997. DOI:10.1073/pnas.1910490116.
Law of the excluded middle
The law in classical logic stating that one of the two statements "A" or "not A" is true. The law of the excluded middle is expressed in mathematical logic by the formula $A \vee \neg A$, where $\vee$ denotes disjunction and $\neg$ denotes negation. From the intuitionistic (constructive) point of view, establishing the truth of a statement $A \vee \neg A$ means establishing the truth of either $A$ or $\neg A$. Since there is no general method for establishing in a finite number of steps the truth of an arbitrary statement, or of that of its negation, the law of the excluded middle was subjected to criticism by representatives of the intuitionistic and constructive directions in the foundations of mathematics (cf. Intuitionism; Constructive mathematics).
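As an illustrative aside (not part of the original entry), the classical validity of the law can be checked exhaustively: in a two-valued semantics, $A \vee \neg A$ evaluates to true under every possible assignment of a truth value to $A$.

```python
# Truth-table check of the law of the excluded middle:
# in classical two-valued logic, A ∨ ¬A is true for every value of A.
def excluded_middle(a: bool) -> bool:
    return a or (not a)

for a in (True, False):
    assert excluded_middle(a) is True

print("A ∨ ¬A holds for every classical truth value of A")
```

The intuitionistic objection, of course, is not touched by this check: the truth table presupposes that every statement already has one of two truth values, which is exactly what the constructive viewpoint declines to assume.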
[a1] D. van Dalen (ed.), Brouwer's Cambridge lectures on intuitionism, Cambridge Univ. Press (1981)
Law of the excluded middle. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Law_of_the_excluded_middle&oldid=17830 |
In July 2021, DfE published The reading framework: Teaching the foundations of literacy which, it says, focuses on the early stages of teaching reading and the contribution of talk, stories and systematic synthetic phonics (SSP); supports primary school leaders in evaluating their teaching of early reading and best practice for improving early reading (especially in Reception and Year 1); and supports older pupils who have not yet mastered the foundations of reading. As I read the document for myself, a number of issues emerged from the guidance:
- First and foremost is the use of inconsistent terminology, and a lack of clarity as to whether the focus of the document is on reading, writing, speaking or listening, and thus how literacy is defined (read the Introduction on page 6).
- It is poorly pitched for schools, as teachers will already be familiar with much of the content, the majority of which is covered in English ITT courses. Further, the document seems to assume an automatic deficit model of language and literacy practice within all settings; not helped by the inclusion of constant audit pro formas to help "review" provision and practice.
- Similarly, it seems to assume a deficit in relation to the home learning environment, with a repeated theme of the "quality of talk" that goes on at home being fundamental to children’s success.
- There is no reference at all to pre-school input or language interventions that are required to take place from the age of two years following the EYFS Progress Check, nor how primary schools will be building on records of language and literacy development from a child’s previous setting.
Below I discuss in more detail some concerns with the guidance, and make suggestions of additional issues to consider relating to the teaching of reading in Reception and Year 1.
Definitions and conceptualisations
Approaches to literacy education are underpinned by particular ways of conceptualising reading and writing in terms of what reading and writing actually are, and by particular ways of conceptualising how reading and writing can be learned. This document is a prime example of this where reading is conceptualised as the accurate decoding of text to read conventional words; and writing as the use of accurately reproduced conventional text to convey written meaning, ie conventional reading and writing. If reading and writing are conceptualised in this way, it therefore follows that literacy policy, curricula and pedagogy will involve adult conceptualisations of how best to teach children to do this. This is what this document is attempting to do through its "how-to" strategies for teachers.
The teaching of writing within the document is rooted in a fundamental assumption that it can only be developed as the result of age-appropriate, systematic school instruction; thus negating any recognition of children writing independently before this point, and incorporating a notion of writing as only being writing when it is formed of conventional text. It is important to additionally make a distinction between writing and handwriting. In England, children are taught handwriting skills from the age of six years in Year 1. Handwriting is however different from writing in that handwriting practice involves children learning how to form a fluent writing style through being taught effective ways to reproduce letters; in other words, the graphic symbols that represent the English alphabet. This is distinct from using writing as a means of producing meaningful communication.
Becoming a reader and a writer
It is a major omission that the guidance makes no mention of beginner reader behaviour or practice, or building on skills established in the EYFS.
The beginner reader
From about 30 months onwards, most children are at the stage of role play reading. In this phase they are readers in so far as they show an interest in books and the print they see around them. They may recognise environmental print; the sign for a fast food outlet for example, or the logo on a supermarket shopping bag. They imitate the things they see adult readers doing such as holding a book, a magazine, or a newspaper carefully (although they may not hold it the right way around!), turning the pages, and sometimes providing a narrative as they do so. They often retell stories they have heard as they ‘pretend’ to read. They may read to their cuddly toys, dolls, friends, or younger siblings. It is important to develop children’s confidence in themselves as readers and there are several ways that they can be supported and encouraged to develop their early reading skills. The main goal is to instil a love of books and reading. In order to develop and consolidate their reading skills, beginner readers need to be treated as readers from the outset; to see reading as part of everyday life; to have access to a wide variety of resources and activities to encourage, develop and support their interest in print and their reading skills; to be encouraged to join in with texts and read too; and to see that reading is enjoyable and purposeful. Children who are more interested in reading will make use of the opportunities and experiences that are offered to them, such as play-based reading opportunities that are grounded in meaningful contexts, constancy and consistency in terms of opportunities and situations in which to develop their reading skill, and participation in an environment rich in reading opportunities (Bradford, in Palaiologou, 2020: 256-7). Decodable texts are therefore insufficient on their own for learning to read.
The beginner writer
Children learn to write conventionally over a period of time, usually years. Two main elements underpin the development of writing; first, writing skills development (the development of fine motor skills including hand-eye coordination and the physical ability to successfully manipulate a chosen writing tool); and second, compositional skills development (the cognitive processes involved in understanding and applying organisational elements such as genre, grammar and spelling to effectively communicate meaning). Children begin to explore the features of writing from a very early age. They do so with the intention of creating meaning before understanding of the alphabetic principle has developed, and despite the fact that the writing produced is not conventional in that it cannot be read by an adult (Bradford and Wyse, 2010; 2013, Bradford, in Palaiologou, 2020: 260).
A critical approach to the guidance
Bearing these points in mind, the following areas of the guidance may need to be reconsidered:
- "The guidance also considers the role of poetry, rhymes and songs in attuning children to the sounds of language" (p7). This is correct, but is additionally fundamental to successful language development in children which then leads on to the development of sound awareness and phonic understanding to prepare children for learning to read (and write) conventional text.
- "Pupils who fail to learn to read early on start to dislike reading" (p8). The document emphasises reading for pleasure at the outset – but paradoxically states that this can only happen after children have worked their way through ‘appropriate’ levels of decodable texts. There are many ways to read together with children, including shared Storytime and book corners where children must be allowed to access favourite and other texts independently so that they are active participants in their choice of text rather than being ‘held back’ because they are not allowed to move on from prescribed levels of decodable texts. This approach will not instil a love of reading.
- “Phonics sessions might be only ten minutes long in the first few days. However, by the end of Reception children will need about an hour a day to consolidate previous learning, learn new content and practise and apply what they have learnt, maybe split into different sessions for different activities” (p47). The phrasing here is misleading and needs to be qualified. Learning to read (and write) should of course move beyond literacy sessions per se and extend to the wider curriculum, for example recording in maths or science, or writing non-fiction historical accounts. The current phrasing implies children sitting down for phonics sessions for an hour from the age of five years. Why not have children practise and apply what they have learnt in different ‘subject’ area sessions beyond literacy, moving away from a new, prescribed Literacy Hour, which never worked as a successful literacy strategy and whose failure emphasised the need for a more holistic approach in the classroom?
- “Dictation is a vital part of a phonics session. Writing simple dictated sentences that include words taught so far gives children opportunities to practise and apply their spelling, without their having to think about what it is they want to say” (p49). I would argue that even if children are writing simple dictated sentences they are still thinking about what they want to say because this is an intrinsic part of the writing process – they still have to think of the individual letters, how they are formed, graphemes, phonemes. I cannot see how this exercise will support their writing development in the way it is suggested. Writers write best when they are writing about something that interests them. At the age of four, my great nephew decided he wanted to write his own Christmas cards in December 2020. He sat down for two hours to do so, with the support of his mother. Here is the card he wrote for me and my husband:
Note Ethan’s attention to detail! How he conveys his (very meaningful) message. What he already knows about writing a Christmas card. Think about how much more Ethan learnt about writing that afternoon! Intrinsic motivation to write is critical in the overall trajectory of a child’s writing development. How will dictation, where he is required to write what he is told to, help him in his future life?
- “Children’s writing generally develops at a slower pace than their reading” (p50). This statement is, quite simply, incorrect. Children’s writing always develops at a slower pace than their reading. There is no mention in the document of the complexities of learning to write and the effort that is involved for young writers, for example the ongoing development of working memory, a comfortable pencil grip, or the critical significance of competent hand-eye coordination. In short, children’s ability to write fluently develops over a period of years, and they work through a series of well documented and defined phases and stages. This is fact, established within a consistent, evidence-based research base.
- Activities that can hinder learning (p52) – I disagree with the comments on whiteboards. They can be used successfully in the classroom, with the teacher scanning children’s responses, or TAs or other assistants focusing on individual children’s responses. Whiteboards can be photographed as evidence, for example during guided writing sessions. Whiteboards are particularly helpful for children who are not keen on writing or who are still developing their fine motor skills; the whiteboard provides a no-risk space in which to practise, and the non-permanence of written text will support some children, for example repeated attempts at getting a letter or word ‘right’ without fear of crossings out on paper. Writing with a felt-tipped pen rather than a pencil will also help some children as they develop their fine motor skills and dexterity.
While there are also helpful aspects to this document - such as the continued importance of phonics after KS1 in relation to spelling – it does need to be used with caution. Teachers need to retain confidence in what they are already getting right in the teaching of reading, in the great strides that have been made in the teaching of phonics since 2006 and the Rose Review. Most importantly, they must not feel professionally undermined by the content.
Helen Bradford is a freelance early years consultant and an Early Education Associate
Palaiologou, I. (2021). The early years foundation stage theory and practice. 4th Ed. London: Sage.
Related webinars, autumn 2021
Interested in finding out more on this topic? Sign up for one of our upcoming webinars:
- Approaching and revising your literacy policy, Trainer: Helen Bradford
- The power of picturebooks, Trainer: Anne Harding
- Making phonics learning memorable in pedagogically sound ways, Trainer: Kym Scott
- Promoting communication and language skills in the EYFS, Trainer: Helen Bradford
- Building up communication and language post pandemic, Trainer: Carla Cornelius |
PhD. in Mathematics
Norm was 4th at the 2004 USA Weightlifting Nationals! He still trains and competes occasionally, despite his busy schedule.
I want to complete a table of values for the tangent function, tangent theta. I have some angles here, they're all angles in the first quadrant so I want to focus on those today. Remember the unit circle definition for tangent. Tangent theta equals y over x where theta is the angle, say this angle, and y over x, y and x come from the coordinates of this point.
Now there are these five angles in the first quadrant that you have to be really familiar with and the nice thing about them is they all have coordinates that come from these set of numbers. For example this point has coordinates 1,0 and this point has coordinates 0,1 and for the rest of them you just have to think about the order of these numbers. They're actually written in order from smallest to largest.
So as we work our way up, the x values go down, so this will have an x value of root 3 over 2, root 2 over 2 and a one-half. Also as we go up, the y values increase, so starting at zero then going to a half, root 2 over 2 and root 3 over 2 and that gives us all five of our special points in the first quadrant. With these points you can find the sine, cosine, tangent of any angle in this quadrant.
Back to our definition; tangent theta equals y over x, so let's look at this point. This point represents the tangent of 0. The tangent of 0 would be 0 over 1 or 0. Pi over 6 is represented by this point. Tangent of pi over 6 would be one-half divided by root 3 over 2. One-half divided by root 3 over 2 that's the same as one-half times 2 over root 3 which is 1 over root 3 and if we rationalize the denominator that's the same as root 3 over 3.
So the tangent of pi over 6 is root 3 over 3. This one is pretty easy, even though the numbers aren't so nice: the tangent of this angle is going to be 1 because the x and y coordinates are exactly the same, and this angle is pi over 4 or 45 degrees. So the tangent of pi over 4 is 1, y over x. This angle is pi over 3; the tangent of this angle is going to be root 3 over 2 divided by one-half. Root 3 over 2 divided by one-half, that's the same as root 3 over 2 times 2, which is root 3, so the tangent of pi over 3 is root 3.
Finally pi over 2, it's 90 degrees, the tangent would be 1 over 0 that's undefined, and so it turns out that the tangent is undefined at pi over 2.
These are the basic values of tangent and we'll use these whenever we're trying to find the tangent of any special angle using reference angles in a later episode, but tangent of 0 was 0, tangent of pi over 6 is root 3 over 3, tangent of pi over 4 is 1, tangent of pi over 3 is root 3 and very important tangent of pi over 2 is undefined. |
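As a quick numerical check (not part of the original lesson), these special values can be verified with Python's math module:

```python
import math

# Exact tangent values for the special first-quadrant angles
special_values = {
    0:           0.0,               # tan(0) = 0
    math.pi / 6: math.sqrt(3) / 3,  # tan(pi/6) = root 3 over 3
    math.pi / 4: 1.0,               # tan(pi/4) = 1
    math.pi / 3: math.sqrt(3),      # tan(pi/3) = root 3
}

for theta, exact in special_values.items():
    # tangent is y/x on the unit circle, i.e. sin(theta)/cos(theta)
    assert math.isclose(math.tan(theta), exact)

# At pi/2 the x-coordinate (cos) is 0, so y/x is undefined.
# Floating-point tan(pi/2) returns a huge number rather than an error,
# because math.pi/2 is only an approximation of pi/2.
print(math.tan(math.pi / 2))
```

The assertions pass for the four defined values, and the last line illustrates why the table marks tan(pi/2) as undefined rather than assigning it a number.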
The disease usually continues for years after onset.
Although it may be detected at any age, the first symptoms usually appear during childhood or young adulthood. It is more likely to occur in individuals with a family history of allergic disease and/or an allergic predisposition.
CAUSES OF ALLERGIC RHINITIS
The main causes of allergic rhinitis include:
- Genetic Factors
- Animal hair
- House dust mite.
THE MOST COMMON SYMPTOMS OF ALLERGIC RHINITIS INCLUDE THE FOLLOWING
Allergic rhinitis may be mild, moderate or severe. The physician decides the severity of the condition depending on the accompanying symptoms.
The most common symptoms are;
- Nasal congestion
- Watery, bloodshot eyes
- Itching and burning in the nose, throat, mouth and eyes
- Nasal swelling
- Blocked ears
- Reduced sense of smell and taste
- Swelling and dark circles under the eyes
Quality of life is severely affected in moderate and severe allergic rhinitis cases; daily activities and sleep quality are disrupted.
HOW IS ALLERGIC RHINITIS DIAGNOSED?
A detailed patient history is obtained, a physical exam is performed, and allergy tests are done.
TREATMENT OF ALLERGIC RHINITIS
First, contact with the allergens should be eliminated. Then an allergic rhinitis treatment based on relieving the symptoms is implemented.
Antihistamines and steroid drugs are frequently used for medical treatment. Immunotherapy (allergy vaccination) is another option; it immunizes the body by administering the allergen in gradually increasing doses. If nasal congestion cannot be relieved by other methods, surgical procedures may be preferred. Rhinolight treatment is used to reduce or eliminate the symptoms of patients with both seasonal and persistent allergic rhinitis. In Rhinolight therapy, a high-intensity light of a special wavelength is applied into the nostrils of the patient. Success rates are considerably high.
Communities of species previously unknown to science have been discovered on the seafloor near Antarctica, clustered in the hot, dark environment surrounding hydrothermal vents.
The discoveries, made by teams led by the University of Oxford, University of Southampton and British Antarctic Survey (BAS), include new species of yeti crab, starfish, barnacles, sea anemones and an octopus.
For the first time, researchers used a Remotely Operated Vehicle (ROV) to explore the East Scotia Ridge deep beneath the Southern Ocean, where hydrothermal vents (including ‘black smokers’ reaching temperatures of up to 382 degrees Celsius) create a unique environment that lacks sunlight, but is rich in certain chemicals. The team reports its findings in this week’s issue of the online journal PLoS Biology.
“Hydrothermal vents are home to animals found nowhere else on the planet that get their energy not from the Sun but from breaking down chemicals, such as hydrogen sulphide,” said Professor Alex Rogers of Oxford University’s Department of Zoology, who led the research. “The first survey of these particular vents, in the Southern Ocean near Antarctica, has revealed a hot, dark, ‘lost world’ in which whole communities of previously unknown marine organisms thrive.”
Highlights from the ROV dives include images showing huge colonies of the new species of yeti crab, thought to dominate the Antarctic vent ecosystem, clustered around vent chimneys. Elsewhere the ROV spotted numbers of an undescribed predatory sea-star with seven arms crawling across fields of stalked barnacles. It also found an unidentified pale octopus, nearly 2,400 metres down, on the seafloor.
“What we didn’t find is almost as surprising as what we did,” said Professor Rogers. “Many animals such as tubeworms, vent mussels, vent crabs, and vent shrimps, found in hydrothermal vents in the Pacific, Atlantic, and Indian Oceans, simply weren’t there.”
The team believe that the differences between the groups of animals found around the Antarctic vents and those found around vents elsewhere suggest that the Southern Ocean may act as a barrier to some vent animals. The unique species of the East Scotia Ridge also suggest that, globally, vent ecosystems may be much more diverse, and their interactions more complex, than previously thought.
“These findings are yet more evidence of the precious diversity to be found throughout the world’s oceans,” said Professor Rogers. “Everywhere we look, whether it is in the sunlit coral reefs of tropical waters or these Antarctic vents shrouded in eternal darkness, we find unique ecosystems that we need to understand and protect.”
BAS author Dr Rob Larter explains how complex it is to operate the ROV at depth. “Beneath 2.5 km of water it is totally dark and the lights on the remotely operated vehicle only give a few metres visibility. Therefore it was essential to have detailed maps of the sea bed to navigate the vehicle around the vent sites. At the start of the project we had low resolution maps of the sea bed topography from work done by BAS in the 1990s. We improved these using the modern sonar equipment on the British Antarctic Survey vessel RRS James Clark Ross, and then made even more detailed maps using sonar equipment on the remotely operated vehicle itself. These maps are of such detail that we can even pick out some individual vent chimneys”.
ROV dives were conducted with the help of the crews of the Natural Environment Research Council’s RRS James Cook and RRS James Clark Ross. The discoveries were made as part of a consortium project with partners from the University of Oxford, University of Southampton, University of Bristol, University of Newcastle, British Antarctic Survey, National Oceanography Centre, and Woods Hole Oceanographic Institution supported by NERC in the UK and the US National Science Foundation. |
Flipped learning has risen to prominence over the past few decades and its popularity has soared in more recent years. It’s not entirely clear when the term ‘flipped learning’ was coined to explain this pedagogical practice, but what is certain is the impact many educators claim it has on students’ learning.
What is Flipped Learning?
Flipped learning sees the traditional roles of lessons and homework switched, whereby direct teaching is conducted outside of class hours (home-learning) and knowledge is applied during class time.
Before implementing flipped learning in your classroom, there are some aspects that need to be considered. Firstly, in order to do this, you need to understand that flipped learning and a flipped classroom are not interchangeable - flipping a classroom can, but doesn’t always, lead to flipped learning. There are four core pillars that educators must follow in order to successfully execute flipped learning pedagogy which are outlined below:
The way in which you absorb flipped learning isn't as structured as usual classroom learning, and your classroom needs to accommodate this. Lessons won't always follow the same structure, and classrooms should support the ability for students to work independently or in groups. These flexible spaces mean that students are able to choose where and how they learn.
Traditional learning in class is teacher-centred, whereby the teacher is the primary source of information. Students are fed information in class and exploration and expansion of the topic is usually done so independently outside of school hours. Flipped learning sees a shift from this to student-centred learning.
Students research subjects independently ahead of lesson time and instead use the lesson to broaden their understanding and delve deeper into a topic, with the guidance of a teacher if needed. This style of learning promotes student ownership of work and means students are more engaged with the information they're consuming.
Intentional content within flipped learning focuses on choosing the best content to be delivered both in and out of the classroom. In this model, new content is taken outside of the classroom and therefore the highest quality content is needed to ensure students are able to learn this to the best of their ability.
By taking the learning outside of class, students aren’t restricted to how much time they can dedicate to learning and can use class time applying the knowledge they’ve learnt with the guidance of their teacher. By carefully selecting the best content for students to use outside of class, you will not only free up lesson time so that students really get to apply what they have learnt, you will also help shape students in becoming autonomous learners.
In a flipped classroom, educators’ roles shift from what is traditionally expected of a teacher in the classroom. In the traditional sense, a teacher’s sole purpose would be to impart knowledge and teach students new content, which they would then apply outside of the classroom.
This classroom is more controlled and structured than a flipped classroom, whereby students are applying knowledge they’ve researched outside of class. In this classroom, educators’ new role turns to guiding students, providing real-time feedback, assessing and evaluating their work in the moment. They take control of a more chaotic classroom and reflect on their processes. Despite being a less visible role in the classroom, they are key to the success of the flipped classroom.
Flipped Learning Pros and Cons
The effectiveness of the flipped learning model has been evaluated since its rise to popularity with many educators and theorists finding multiple benefits associated with the practice.
However, as with most things, there are certain aspects that leave a few people sitting on the fence. We’ve gone through some of the main benefits of flipped learning as well as areas for concern and disputed them where possible:
Flipped learning allows students to be in control of their own learning. By giving them the content they need to learn outside of class time, they are free to learn at their own pace without time restrictions. They can revisit the subject matter as many times as they need to and make appropriate notes. Additionally, they can come to class prepared with directed questions to ask their teachers that will help improve their understanding of a topic.
A student-centred classroom helps to create autonomous learners. The ability for students to work under their own direction is something they can easily transfer to university and jobs in later life. A core part of in-class flipped learning are collaborative projects - these tasks, alongside in-class discussions, allow students to learn from one another, but also - to apply the knowledge which they have learnt independently.
Students receive personal and directed feedback. When students have the opportunity to apply new knowledge they’ve learnt in front of their teachers and peers, more quickly can their teachers step in and provide praise and feedback to further encourage and guide students. By providing live feedback, students are able to instantly apply this and improve their current understanding which will only help further their development.
Parents are provided insight into what their children are studying and the curriculum they are taught. In traditional teaching, parents only get to view students attempting to apply knowledge through homework set, not the knowledge they've been taught. By looking at the intentional content teachers assign, parents are fully aware of the information their children are being taught. This level of insight means they can actively involve themselves in their child's studies and provide better support at home if needed.
Many have argued that flipped learning is more efficient than traditional learning for both students and teachers. Studying information outside of lessons limits the amount of time spent learning this information and students' workload. For example, if a video takes 20 minutes to watch, the learning is condensed into this time, plus any additional rewinds and note taking, as opposed to an hour lesson in which they have to work to the pace of the average student. This means students have more free time outside of lessons, and the time they spend on new topics is better absorbed.
For most educators who follow the flipped learning model, the content they give to their students to study outside of lesson time needs to be accessed online. This can cause a digital divide within schools, whereby students who don't have access to the internet at home are left at a disadvantage as they're unable to access the learning materials.
This argument does surface regularly and isn’t just applicable to the flipped learning model. A lot of valuable resources and whole-school software systems require students to have an internet connection in order to access them. This can be solved by schools setting up homework clubs, in which students are able to use school computers and internet access to complete any additional work which requires it.
In order for flipped learning to be successful, teachers are reliant on students completing the work at home. We could argue that this is the same case as with homework; however, the key difference is that if students do not partake in their outside-of-class learning, they won't be able to engage with their in-class learning.
However, as with all types of teaching and learning, student participation is crucial to it ever being effective, and a benefit of flipped learning would be that because class time is all about applying knowledge learnt, it will become starkly apparent which students are not doing the work outside of class and teachers will be able to intervene much more quickly.
Despite flipped learning helping to make home-learning more efficient for students, it has been disputed that it can in fact add to teachers' workloads. Recording all lessons for year groups and planning new lessons centred around the application of knowledge, as opposed to retention, does and will take time. However, this doesn't have to be a time drain: filming lessons as they happen, sharing the workload between colleagues, and filtering the flipped learning approach into your lessons will help to ease the strain of this task.
The majority of the at-home work carried out by students involves a lot of screen time, which raises concerns for some, as it takes away from the face-to-face interaction students need in order to become well-rounded individuals. However, the average secondary school student today is already engaged with technology, and as a society we are more open to online applications.
By delivering content to students via a medium they’re familiar with and enjoy, the content they’re absorbing is only going to resonate with them more. Additionally, with more collaborative learning during lesson time, students will get more opportunities to engage with peers via face-to-face interaction.
Flipped Learning and Show My Homework
Despite flipped learning not providing homework in the traditional sense, learning at home is still a central part of the practice, and therefore communicating this to students is key. Show My Homework, although used predominantly for conventional homework tasks, can be used to fully support the flipped learning model.
Show My Homework acts as a communication tool for most teachers, relaying information regarding home-learning that both parents and students can access on their own personal devices. With this at the core of its functionality, the software lends itself to supporting the flipped learning model. Teachers can upload the intentional content students should be focusing on at home and any additional directions along with due date, suggested time to complete the task and its marking scheme if applicable.
By communicating home-learning in this manner, teachers can rest assured that students have received the work set and all content and resources are fully accessible - this avoids students losing worksheets or not being able to find the correct webpage or video.
Finally, the benefits of having the ability to see if students have viewed the work that’s been set to them, and also parents being able to view when work has been set, opens up more opportunities for parents to support their students at home. |
The summer holidays, bank holidays, and some (hopefully) sunnier weather are great excuses to take your Speech and Language therapy practice out and about. The park is a great place to start!
There are a lot of great opportunities to practice communication skills, and no, it’s not just your child perfecting the phrase ‘I want an ice-cream!’
Working on Speech at the Park – 4 activities
- If your child is practicing a certain sound, take your pictures with you. Set challenges, such as ‘say a word for every bar of the monkey bars, then try to swing across’. Engage the competitive instincts of your child and maybe join in!
- Stepping stones. You could use hoops, a set of steps in the park, or even the paving slabs on a quiet path as your stepping stones. Take turns to say one of the child’s practice words – if you say a word clearly or with the target sound, you can jump onto the next stepping stone. Your child decides whether you said the word clearly/with the target sound, and you decide for your child! Have some sort of celebration or reward planned for the end of the stepping stones.
- Word catch. Every time you catch the ball, you have to say one of the target words. Change the word every few turns. If you say the wrong word, or drop the ball, you’re out!
- Play syllable I-spy. For children practising syllable clapping, or using this strategy to help make their speech clearer, you can play a different version of the classic game, spying things with one, two or three syllables in their name. To make it easier, you can give the first sound as another clue once your child has made a few guesses.
- One syllable – bench, tree, bird, swings, ball
- Two syllables – ice-cream, pushchair, pigeon
- Three syllables – butterfly, roundabout
Understanding of language – 3 ideas for the park
- Scavenger hunt! Tell your child 3 or 4 items to find (and possibly bring back to you). This could be sights around the park, e.g. a statue, a green bench, or natural items e.g. a long stick, a small green leaf, a daisy. This works on your child’s memory for spoken language, and if you add in adjectives such as long or small, you are also helping them understand basic concepts.
- Listen and do – ‘run to the statue, then hop to the green bench’, ‘walk to the slide, then jump to the swings’- your child is practising following instructions with 4 key words and burning off some energy! See the language e-mail course if you need a refresher on key words.
- Under or over? In or on? Challenge your child to find as many things they can go ‘under’ as possible, thinking creatively… they can go under the climbing frame, but can they also go under the slide, the picnic blanket, and is there a bridge they can go under too? Walk around together, saying ‘let’s go under it’ as you go under. Think of the rhyme ‘we’re going on a bear hunt’ and chant in the same way – you could read this story together before you go to the park. Once you’ve found everything you can go under, you can try finding things you can go over, on or in!
Use of spoken language – 4 things to do
- Drama kings and queens. Act out a story together, take photos, then tell the story to someone else when you get home. You could act out a favourite story, or make up your own. You could give your child a starter scenario to get the creative juices going – how about pretending they’ve landed on an alien planet, or just discovered some buried treasure…
- Take a toy. Explain to a younger child that this is the toy’s first time at the park, so they’ll need to describe the park to their toy, and maybe even explain what to do on the equipment. Include some pretend play of the toy joining in on the equipment or finding delicious things to eat, and you’re also working on play skills.
- Give your child a turn to set you a ‘listen and do’ challenge (see above). The challenge for them is to explain clearly what they want you to do, using a range of different action words – and then correctly remember what they asked you to do!
- Cloud spotting. Lie back on a picnic blanket and practice describing what you can see in the clouds. Instead of just pointing out the dragon you can see, show your child the ferocious dragon with a pointy tail. If your child can see a banana, can they describe the banana as long, curvy, or spotty?
Written by Alys Mathers, Speech and Language Therapist |
Mathematics education researchers recognize the importance of mathematical preparation for teaching grades K-12. Mathematics is often taught in primary and secondary schools as a set of algorithms bypassing the conceptual understanding needed to advance to higher levels. The need for teachers to have a good procedural understanding of the arithmetic of integers, fractions and decimals, as well as a sound conceptual understanding of the fundamentals is essential, as they must provide their students with this needed understanding for reaching higher levels of mathematical thinking. Please watch the video and hear what the teachers who have attended the Teacher’s Institute have to say about their experiences.
Our Teacher’s Institute aims to show participants that deep understanding of elementary ideas is attainable in K-12 classrooms, and one way to cultivate this understanding is through engaging problems. The Institute’s philosophy is that learning mathematics can be motivated by interesting problems. The idea is to come up with problems whose solutions require, or strongly motivate, the development of the area of mathematics to be learned.
Participants of the Teacher’s Institute are expected to attend two 3-hour classes each day and spend an hour each night in a study hall to practice the skills learned during the classes. |
Here, in this program, you will learn how to find the largest of three numbers using if...else and display it. The three numbers are entered as num1, num2 and num3 respectively. The if, elif and else ladder is used to find the largest of the three numbers.
Input

# Python program to find the largest number among the three input numbers

# change the values of num1, num2 and num3
# for a different result
num1 = 11
num2 = 18
num3 = 14

# uncomment following lines to take three numbers from user
#num1 = float(input("Enter first number: "))
#num2 = float(input("Enter second number: "))
#num3 = float(input("Enter third number: "))

if (num1 >= num2) and (num1 >= num3):
    largest = num1
elif (num2 >= num1) and (num2 >= num3):
    largest = num2
else:
    largest = num3

print("The largest number between",num1,",",num2,"and",num3,"is",largest)
Output

The largest number between 11 , 18 and 14 is 18
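Once the if...elif...else ladder is understood, it is worth knowing that Python's built-in max() function gives the same result in one line. A minimal sketch:

```python
# Same task as above, using the built-in max(), which returns
# the largest of its arguments.
num1, num2, num3 = 11, 18, 14
largest = max(num1, num2, num3)
print("The largest number between", num1, ",", num2, "and", num3, "is", largest)
```

The if/elif version is still the one to learn first, since it shows the comparison logic that max() hides.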
Understanding Gardening Terms
submitted by Sarah Browning, Nebraska Extension Educator
Sometimes understanding gardening terms is difficult. For example, what exactly is a hybrid vegetable? How do hybrids differ from heirlooms? What are open pollinated varieties? Let's take a look at the meaning of these terms and why it's important for gardeners to understand them.
A species is a naturally occurring plant, which evolved with no human manipulation or intervention. Very few plants grown in home vegetable gardens today are naturally occurring species, but an advantage of these plants is that gardeners can save seeds from them each year and grow new plants the following season that are identical to the original parent plants.
One example of a naturally occurring plant still grown in many home gardens is red vein sorrel, Rumex sanguineus, which is used as a salad green.
Cultivar, a contraction of the terms "cultivated variety", refers to a plant created through human breeding using techniques such as plant selection and plant crossing. Cultivars are indicated on plant tags and catalogs with single quotes around the cultivar name, such as 'Celebrity' tomato or 'Kentucky Wonder' beans.
The vast majority of vegetables available to home gardeners are cultivars and have been developed through some type of human manipulation. One of the earliest forms of plant breeding used was simple - plant selection. With this technique, seed is saved from the healthiest, best tasting, most productive plants to grow out the next year. Makes sense, right? If you wanted to save seed for next year's garden, you'd choose the best plants in your garden as your seed source.
The development of cultivated plant varieties enabled farmers to increase the health, vigor and harvest from their plantings.
Heirloom is an open-pollinated plant cultivar developed through many years of plant selection. Some horticulturists define heirloom plants by the number of years they have been in cultivation, with many heirloom vegetables tracing their heritage back for hundreds of years. Other authorities use 1951, when the first hybrid vegetable varieties were introduced, as a cut-off year for heirloom vegetables. Anything introduced after 1951 is considered a "modern" vegetable cultivar.
Others define heirlooms as lines of plants, grown locally or regionally, that have been passed down through families or groups. Among these heirloom vegetables there is a great variety of colors, flavors, shapes and textures.
All heirloom plants are open pollinated – meaning that seed from these varieties can be saved each year by home gardeners and will grow 'true to type' from seed each time. In other words, plants grown from seed will look exactly like the parent plant did, having the same plant size and growth habit, as well as fruit size, color and flavor.
One drawback to heirloom vegetables is that they often lack disease resistance found in many modern varieties to common disease problems, such as the soil-borne diseases Verticillium and Fusarium wilt.
Hybrid is a plant cultivar developed through the crossing of two species or two distinct parent lines. Hybrid plants are usually stronger, more vigorous and have improved disease resistance. They often have fruits that are more uniform in size, shape and color, have better storage quality and shipping ability. For tomatoes, improved shipping ability usually means the tomato flesh is firmer than that found in most heirloom tomatoes.
However, hybrid plants do not grow 'true to type' from seed. So you'll need to buy new seed each year.
So when you're purchasing plants this spring, keep these terms in mind. Especially if you would like to save seeds from year to year.
The information on this Web site is valid for residents of southeastern Nebraska. It may or may not apply in your area. If you live outside southeastern Nebraska, visit your local Extension office |
Approximations are deliberate misrepresentations of physical or mathematical things, e.g., π is approximately 3, an atom is spherical, the drag force on a moving tank is zero. The question is not whether we need them. The most accurate mathematical description of reality is quantum electrodynamics (QED); everything else, every physics formula and every engineering empirical formula, works with around three-decimal-place accuracy. Gödel proved that it will always be possible that unknown truths exist outside of human knowledge. Nothing is absolute.
- 1 General approximations
- 2 Mental Calculation
- 3 Engineering and Science approximations
- 4 Formal Math approximations
- 5 Assumptions, predictions, and simplifications
- 6 Examples
- 7 Unanswered Questions
- 8 More
Mental Calculation

Before calculators there were slide rules, which required estimating the power of 10. This prevented a lot of mistakes. The term calculator actually referred to a person, not a machine. Hundreds of human calculators were employed during WWII. Calculations were split up into small tasks that could be checked. It was not a solitary job. A majority were women.
Today there are competitions and training software. There are books and wiki pages written about how to do this. There are tricks, such as this one for multiplying by 9: subtract one from the other number to get the tens digit, then subtract that from 9 to get the units digit. So for 9 × 7: 7 − 1 = 6, then 9 − 6 = 3, giving the answer 63. Or, instead of 9 × 7, round up to 10 × 7 and then subtract a 7: 70 − 7 = 63.
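The nines trick can be written as a short function (the function name is my own) to confirm it works for every single digit:

```python
def times_nine(n):
    """Multiply a single digit n (1-9) by 9 using the digit trick:
    the tens digit is n - 1, and the units digit is 9 - (n - 1)."""
    tens = n - 1
    units = 9 - tens
    return 10 * tens + units

# e.g. 9 * 7: tens = 6, units = 3, so the answer is 63
```

Checking all nine digits shows why the trick works: the two digits always sum to 9, which is exactly the divisibility-by-9 property.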
Engineering and Science approximations
The scientific method is carried out with a constant interaction between scientific laws (theory) and empirical measurements. Theory and measurements are constantly compared to one another.
Approximation also refers to using a simpler process or model to make predictions easier. Most common versions of the philosophy of science accept that empirical measurements are always approximations: they do not perfectly represent what is being measured.
The history of science indicates that the scientific laws commonly felt to be true at any time in history are only approximations to some deeper set of laws.
Each time a newer set of laws is proposed, the old law and the proposed one must make the same predictions at the margins or limits. This is the correspondence principle.
Between the old and the new laws there is approximation confusion, empirical doubt, and a competition between theories. The theory competition does not hold previous theories sacred.
Formal Math approximations
Assumptions, predictions, and simplifications
Approximation is a big part of engineering. All short cuts are approximations. General solutions are often not possible. Approximations are sometimes the only practical solution. New short cuts are being discovered every day.
Engineering starts with estimating things. Estimating the number of steps, stairs, lamp posts, or people that could be fired without anyone noticing. Every moment of life can be turned into an estimation hypothesis and can be rewarded with a fact. Engineers can entertain themselves very easily.
Examples

How many babies are born in the world every second?
Let's get to solving our first problem. Let's think about what we have as input data, from common knowledge:
- World population: 6,000,000,000
- Average person's lifespan: ~ 60 years
60 years have 60 × 365 days, 60 × 365 × 24 hours, 60 × 365 × 24 × 60 minutes, or 60 × 365 × 24 × 60 × 60 = 1,892,160,000 seconds. So, assuming that the whole world's population is renewed over the course of 60 years, the number of babies born per second is roughly 6 × 10⁹ / 1.89 × 10⁹ ≈ 3.2.
According to Wikipedia in 2007 about 134 million babies were born, which amounts to 4.2 babies per second. So our guesstimate was not so bad at all!
All the zeros in the numbers above can certainly cause some confusion, so using scientific format turns out to be much more convenient.
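The estimate above takes only a few lines of Python, with the inputs written in the scientific-style notation just mentioned (the variable names are my own):

```python
# Fermi estimate: babies born per second, assuming the whole world
# population (~6e9 people) turns over once every ~60 years.
population = 6e9
lifespan_seconds = 60 * 365 * 24 * 60 * 60  # = 1,892,160,000 s

births_per_second = population / lifespan_seconds
# roughly 3.2 per second, versus ~4.2 from the 2007 figure quoted above
```

Writing 6e9 instead of 6,000,000,000 is exactly the kind of bookkeeping that keeps the powers of ten straight.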
What is the bandwidth of a Boeing 747?
- Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway. —Tanenbaum, Andrew S.
Imagine we didn't have Internet and satellite communication, and the only way to transfer data between Europe and USA was to burn the data on DVDs, pack them on board of an airplane and ship them overseas. What would be the bandwidth of a 747 full of DVDs?
According to technical specs, the length of a 747 is 70 m, and the internal diameter of the cabin is about 6 m. This gives a total usable cargo volume on the order of 500 m³.
Assuming we have the DVDs in thin cases, with dimensions 12 cm × 12 cm × 0.5 cm, each case has a volume of 12 × 12 × 0.5 = 72 cm³, and
the total number of DVDs that we could fit in the plane would be roughly 500 m³ / 72 cm³ ≈ 7 × 10⁶.
In terms of data this is about 28 × 10⁶ GB, or, estimating the average flight duration at about 7 hours, a data transfer rate of 28 × 10⁶ GB / (7 × 3600 s) ≈ 1.1 TB/s.
Now we have to stop for a moment and think whether the assumptions we made in this problem are realistic. One thing we did not take into account is the weight of all the disks. Will the airplane actually be able to take off with all that weight? Let's check! Assuming that about 50% of the DVD case is empty space, and that the density of the plastic comprising the DVD is comparable to the density of water, we estimate the total weight of the cargo to be about 240 tons. The empty airplane weighs 162 tons, while the maximum take-off weight is 333 tons. This gives a maximum of 171 tons of useful cargo, or 70% of what we calculated we could pack inside. This new factor reduces the bandwidth of the plane down to 0.8 TB/s. This is about 4% of the bandwidth of a single-strand optical fibre (20 TB/s). Add to this the 15-hour round trip of the plane (a 54,000-second latency, about a million times what you expect on an ADSL connection) and the infrastructure required to read that information off the 747 and distribute it, and this is probably not an effective way to communicate.
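The weight-limited version of the estimate can be sketched in Python using the approximate figures from the text (4.7 GB per single-layer DVD, about 36 g per cased disc, 171 tons of payload, a 7-hour flight); the variable names are my own:

```python
# Weight-limited 747-full-of-DVDs estimate, using the rough
# figures from the text above.
payload_kg = 171_000        # max take-off weight minus empty weight
dvd_mass_kg = 0.036         # 72 cm^3 case, ~50% plastic at ~1 g/cm^3
gb_per_dvd = 4.7            # single-layer DVD capacity
flight_seconds = 7 * 3600   # transatlantic flight time

dvds = payload_kg / dvd_mass_kg                # ~4.8 million discs
data_tb = dvds * gb_per_dvd / 1000             # ~22,000 TB on board
bandwidth_tb_per_s = data_tb / flight_seconds  # on the order of 1 TB/s
```

This lands near the 0.8 TB/s figure above; the spread comes only from the rounding in the hand calculation, which is exactly what a back-of-the-envelope estimate allows.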
- The mass of how many Ford Mustangs is equal to the mass of the water in the Atlantic Ocean?
- How many jelly beans fill a one-liter jar?
- What is the mass in kilograms of the student body in your school?
- How many golf balls will fit in a suitcase?
- How many gallons of gasoline are used by cars each year in the United States?
- How high would the stack reach if you piled one trillion dollar bills in a single stack?
- Approximately what fraction of the area of the continental United States is covered by automobiles?
- How many hairs are on your head?
- What is the weight of solid garbage thrown away by American families every year?
- If your life earnings were doled out to you at a certain rate per hour for every hour of your life, how much is your time worth?
- How many cells are there in the human body?
- How many individual frames of film are needed for a feature-length film? How long is such a film?
- How many water balloons will it take to fill the school gymnasium?
- How many flat toothpicks would fit on the surface of a sheet of poster board?
- How many hot dogs will be eaten at major league baseball games during a one year season?
- How many revolutions will a wheel on the bus make during a trip from Baton Rouge, LA to Washington, D.C.?
- How many minutes will be spent on the phone by students in the United States this year?
- How many pizzas will be ordered in your state this year?
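As a worked example, here is one way to attack the jelly-bean question from the list above. The bean dimensions (roughly a 2 cm × 1.5 cm × 1.5 cm ellipsoid) and the ~70 % packing fraction are assumptions of mine, not measured values; your own estimate may reasonably differ by a factor of two.

```python
import math

jar_cm3 = 1000.0   # a one-liter jar is 1000 cm^3
# Model a jelly bean as an ellipsoid with semi-axes 1.0, 0.75, 0.75 cm.
bean_cm3 = (4 / 3) * math.pi * 1.0 * 0.75 * 0.75   # about 2.4 cm^3
packing = 0.7      # assumed random packing fraction

beans = jar_cm3 * packing / bean_cm3
print(f"roughly {beans:.0f} jelly beans")   # a few hundred
```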
A 2008 book on this subject is Guesstimation: Solving the World's Problems on the Back of a Cocktail Napkin, which grew out of the Old Dominion University PHYS 309 course, Physics on the Back of an Envelope.
A large number of infections are carried by animals and spread to humans. These are called zoonotic diseases: the microbe or pathogen jumps from one animal species to another. Now a new study has ranked mammals based on the risk they pose in terms of spreading infectious diseases to humans.
Bats: number one carriers of disease. Image credit: Barsan ATTILA / Shutterstock
Zoonotic diseases occur with remarkable frequency, with each new spread from animals to humans – termed a spillover – leading to widespread afflictions. Notable zoonotic diseases that have spread from animals to humans include swine flu from pigs, and Ebola and Marburg hemorrhagic fever from bats.
The researchers from New York's EcoHealth Alliance, in a study now published in the journal Nature, looked at mammals worldwide and their propensity to trigger spillovers. Spillovers are typically more likely to occur from other mammals than from other creatures such as birds or insects, since humans are more closely related to other mammals than to these other species. The three great culprits that trigger these spillovers to humans were found to be bats, primates and rodents.
The research team, led by Peter Daszak, looked at all the pertinent data regarding viral infections and mammals from around the world. They studied 754 different mammalian species, making up 14% of all mammalian species. A further 585 known viruses that infect mammals were correlated with these mammals; of these, around one third have spilled over to humans in the past. The family of each mammal as well as its geographical location was also taken into consideration. All of these data were integrated using advanced mathematical prediction models, and the probability of which animals could be responsible for the most spillovers to humans was found. The results thus come from a computer-generated model and may not be an accurate representation of the real world. With time and more information, however, these likely spillovers from mammals could be better predicted, say the researchers.
Results showed that large animals are more likely to carry more viruses than smaller ones, and those that can live in a large geographical region are more likely to carry more viruses than those confined to small local habitats and geographical areas.
Bats seemed to be one of the largest and most frequent carriers of viruses that could affect humans, with spillovers most probable in South and Central America. Bats are known to spread viral infections such as Ebola, Marburg hemorrhagic fever, histoplasmosis, Nipah virus encephalitis, severe acute respiratory syndrome (SARS), Hendra virus disease, etc. There are an estimated 17 other zoonotic diseases that bats might spread but that humans are not yet aware of, the experts believe.
Second on the list were primates. Primates of Africa, Central America and Southwest Asia are notorious for spreading infections to humans. The rodent family comes third, with the risk of spread due to these animals being most likely in North and South America, and in Central Africa. There are at least 10 more diseases carried by primates and rodents that have not yet been found, say the researchers.
According to study author Kevin Olival, associate vice president for research at EcoHealth Alliance, these viruses have been on Earth for a very long time, and nothing can actually be done about their existence. However, an understanding of their likely spread helps to keep a disease from becoming a public health menace. Those at greatest risk include hunters, slaughterhouse workers and agricultural workers; not everyone is equally at risk, say the researchers. The study was funded as part of the United States Agency for International Development (USAID) Emerging Pandemic Threats PREDICT program.
The machining and machinery industry manufactures metal parts that are used to make machines, tools, and other machine parts. In other words, the industry creates metal machines that make other machines. The machining and machinery industry also produces parts for such items as engines, tools, and other machinery. It is, in effect, the first stage of the manufacturing process. This industry covers, for example, the manufacture of boring or jig machines that produce nuts and bolts, as well as the production of the nuts and bolts themselves.
A mechanic performs minor repair work on a machine.
Although humans have used wheels and other tools since the dawn of history, the use of machine tools is relatively recent. When the Scottish inventor James Watt experimented with steam engines in the mid-1700s, he could not find anyone who could drill a perfect hole; thus his engines leaked steam. Then in 1775, Englishman John Wilkinson invented the first relatively accurate machine tool, a mill to bore cylinders for Watt’s steam engine.
Several years later, Matthew Murray, Joseph Clement, and Richard Roberts developed the planer, which could be used to smooth holes and flat surfaces to the necessary degree. Henry Maudslay introduced the concept of precision to heavy machinery; previously only watches and scientific instruments were made with this degree of precision. The early 1800s saw the development of the first screw-cutting lathe, which remains the standard today. Later, in the 1880s, electric motors began making major improvements in industrial productivity.
This period in the 19th century, which came to be known as the industrial revolution, brought about the mass production of many products. As new entrepreneurs and inventors emerged, the number of manufacturing plants on both sides of the Atlantic grew, as did the demand for machine tools and equipment.
The United States gradually became the principal producer of machine tools. The most rapid growth came, however, during World Wars I and II when there was a huge demand for tanks, planes, jeeps, ships, and guns. Machines had to be devised to turn out the required parts. After World War II, the numbers and types of consumer goods that Americans desired continued to increase, and the mass-production methods developed for war were converted and improved to accommodate those demands.
Electrical control mechanisms were refined during the 1940s, and when computers were introduced into industry, the nature of many manufacturing operations changed. Automated equipment, including robotics, now perform many operations formerly done by machine operators and other precision metalworkers. Computer-controlled equipment is being used to program machines and to design and manufacture machine tools. As technology continues to advance, machines are becoming increasingly more sophisticated and able to produce highly precise machined parts.
People often think of the machining and machinery industry as being limited to operating machine tools. A machine tool is a power-driven machine, not portable by hand, that is used to shape or form metal by cutting, impact, pressure, electrical techniques, or by a combination of these processes. Operating a machine tool, however, is just one of the many occupations in the machining and machinery industry.
To remain competitive, companies must invent, improve, and anticipate the future needs of their customers. To accomplish these goals, the industry needs a variety of competent and creative workers. In this industry, the development process begins with research engineers, and in some cases, industrial designers, who analyze market needs and decide what new products are in demand. They are usually part of a team that includes marketing specialists, production personnel, sales representatives, and manufacturing experts. Once the research team has envisioned a new product, design engineers and technicians devise the method of its construction.
A manufacturing engineer then designs the machines, or the configuration of equipment, that will construct the product. Once the product has been designed, the rest of the manufacturing process simply involves creating the item. However, depending on what the product is - a mold for an injection-molding machine, a plastic component for a cellular phone, or a component part for another machine, for example - the subsequent production steps will vary.
Many types of workers are involved in all aspects of the machining and machinery industry. For example, in addition to design engineers and industrial designers, there are workers such as general maintenance mechanics, who operate and repair machines and mechanical equipment. More complex power-generating equipment in industrial plants is operated and maintained by stationary engineers, who receive specialized training.
Some workers, like boilermakers and millwrights, install huge pieces of machinery. Precision machinists construct machines, while layout workers and job setters prepare workpieces and machines for operation. Industrial engineers devise efficient processes that use machines and workers together, and in manufacturing, mechanical engineers develop the specifications for machines and tools.
Instrument makers design the electrical equipment that measures and regulates machine operation, while numerical control tool programmers write the computer instructions to run machines. Precision metalworkers, such as tool and die makers and mold makers, design and produce dies and molds that manufacture products with machines. Fluid power technicians install and maintain component parts of machines. Finally, in the field of nondestructive testing, industrial radiographers and laser technicians utilize techniques to determine the quality of products made by machines and components that will be used in machines.
Each of the careers in this industry offers its own opportunities for advancement. Workers with the best potential, however, are those who become skilled at what they do, seek further training or education, and always remain aware that changing technology and a global economy will affect jobs and opportunities in their industry. Trade associations and unions, in an effort to improve the skill level of workers and keep them in the industry, often offer multiple levels of training and certification. Some are short-term programs, but many last several years because of the knowledge required in specific jobs.
In order to advance in their careers, some workers who enter the machining and machinery industry choose to travel the road from apprentice to journey worker. Others choose to move from programming or tool and die making to design, while still others become trainers and supervisors or move into technical sales and customer support. Those who dream of owning their own business should remember that most of the small businesses in this industry are owned by people who came up through the ranks.
The state of the machining and machinery industry is closely tied to economic conditions. However, even when the economy improves as it did in the latter part of the 1990s, there seems to be a lag time of about one year before machine tool shipments reflect that improvement. Also, industry analysts say that uncertainty has started to affect manufacturing executives who are deciding whether to invest in new machinery. They cite several factors that are dimming prospects for increased factory capital spending. These include an emerging crunch in credit, which is shrinking the money available for capital loans that would be used to purchase new machinery.
A steam turbine rotor in its casing during manufacture
Statistics on machine tool consumption indicate that the machine tool industry went into a slump in the early 1980s, and despite periods of increased orders, it has never completely come back. Analysts do see some bright spots, however. The automotive industry, which accounts for almost half of machine tool orders, needs to replace some of its aging equipment. Also, there has been growth in the nonelectric machinery industry, which includes food-processing equipment.
Although economic conditions did improve during the late 1990s, employment opportunities did not increase proportionately. Many companies laid off machining workers during the past decade and are hiring fewer workers than in the past. In addition, automation is affecting employment opportunities for some workers in the machining industry (although automation does create some machinist jobs in the area of machine repair, supervision, and maintenance). The manufacturing industry has been revolutionized by highly productive, computer-controlled machining and turning centers that change their own tools; transfer machines that completely machine, assemble, and test mass-produced products; and innovative metal removal and forming systems. Robots and robotic equipment are becoming more common and are being used in many areas where the work is tedious, repetitious, or dangerous. Automated inspection equipment, such as electronic sensors, cameras, X-rays, and lasers, is increasingly being used to test and inspect parts during production.
All of these factors have affected the machinery industry. The use of computers and automated equipment is resulting in fewer opportunities for machine operators and layout workers. According to the Occupational Outlook Handbook, employment of industrial machinery repairers and machinists is expected to grow more slowly than the average for all occupations through 2014. A decline is expected in the employment of tool and die makers and of computer numerical control (CNC) programmers due to strong foreign competition. However, despite sluggish employment growth in the machining industry, the U.S. Department of Labor predicts that job opportunities for machinists will be excellent due to the increased numbers of automated production processes that require the supervision of skilled machinists and a relative lack of candidates entering training programs. Even if actual production levels fall, machinists are still needed to repair, monitor, and control expensive automated equipment. Employers value the skills that good machinists bring to manufacturing, as they are often versatile and able to handle a large number of contingencies. For this reason, skilled machine workers will be in demand for the foreseeable future.
For More Information
For information on scholarships as well as facts about the machine tool industry, contact
Association for Manufacturing Technology
7901 Westpark Drive
McLean, VA 22102-4206
For information about the custom precision manufacturing industry, contact
National Institute for Metalworking Skills
3251 Old Lee Highway, Suite 205
Fairfax, VA 22030-1504
Email: [email protected]
National Tooling & Machining Association
9300 Livingston Road
Fort Washington, MD 20744-4988
For information on careers and educational programs in the machining and machinery industry, contact
Precision Machined Products Association
6700 West Snowville Road
Brecksville, OH 44141-3292
For information about career opportunities in tooling and machining as well as skill development programs, contact
Precision Metalforming Association and Educational Foundation
6363 Oak Tree Boulevard
Independence, OH 44131-2500
Words to Know
Computer numerical control (CNC): A self-contained numerical control (NC) system for a machine tool utilizing a dedicated computer that is directed by stored instructions to perform some or all of the basic NC functions; can become part of a direct numerical control (DNC) system.
Custom precision manufacturing industry: Composed of mostly small businesses; companies design and manufacture special tools, dies, jigs, fixtures, gauges, special machines, and precision machined parts.
Direct numerical control (DNC) system: Connects from two to more than 50 machine tools, each with its own NC or CNC unit, to a common supervisory computer.
Electrical discharge machining (EDM): A method of removing metal by a series of rapidly recurring electrical discharges between a tool (electrode) and a work-piece in the presence of a dielectric fluid; could be defined as one of several types of chipless machining processes.
Electronic control: The use of electronic techniques to control machines, machine tools, power, and data.
Fluid power industry: Composed of three large segments: mobile hydraulic, industrial hydraulic, and pneumatic; includes hydraulic and pneumatic pumps, cylinders, rotary actuators, motors, valves, and other products.
Industrial revolution: Started in Great Britain in the late 18th century and spread to Europe and the United States by the early 19th century; changed the way goods were produced from individual craftspeople to mass production; ultimately changed societal structure of those countries.
Laser beam machining: A chipless machining process for cutting, drilling, slotting, or scribing metal parts.
Machinery: A group or groups of parts that are arranged to perform a useful function, such as an automobile, appliance, or manufacturing equipment; some machines give people a mechanical advantage in completing a task while others perform functions that no person could do for long, continuous periods.
Machine tools: Tools used on various machines for cutting, drilling, and so forth; also, machines that are used in manufacturing facilities.
Machining: Any one or group of operations that changes the shape, surface finish, or mechanical properties of a material by using special tools and equipment.
Measurement and control system: Used in most industries to measure and analyze the variables involved in production operations, including measurements of pressure, temperature, composition, and flow; controls automatically make adjustments to maintain smooth operations.
Nondestructive testing (NDT): Test that examines an object or material but does not affect its future usefulness; methods include visual-optical, liquid-penetrant, magnetic-particle, eddy current, ultrasonic, and radiographic. NDT can detect internal or external imperfections; determine structure, composition, or material properties; and assess quality.
Numerical control: Numeric data stored on magnetic tapes or disks, or punched tapes or cards, that operates machine tools; the data are usually produced by computer from design data.
Precision metal worker: Generic term that refers to tool and die makers, mold makers, and precision machinists.
Tool: A device, instrument, or machine that performs an operation, such as a hammer, lathe, screwdriver, or drill press.
Manufacturing; Boilermaker and Mechanic; Fluid Power Technician; General Maintenance Mechanic; Industrial Designer; Industrial Engineering Technician; Industrial Engineer; Industrial Machinery Mechanic; Industrial Radiographer; Instrument Maker and Repairer; Instrumentation Technician; Job and Die Setter; Laser Technician; Layout Worker; Machine Tool Operator; Mechanical Engineering Technician; Mechanical Engineer; Millwright; Numerical Control Tool Programmer; Precision Machinist; Precision Metalworker; Stationary Engineer |
THE EUROPEAN UNION
The European Union — EU. In geographical terms, the European Union comprises the combined territories of its Member States. Since the Treaty of Lisbon (see 15.15), it now has legal personality in its own right and absorbs what used to be known as the European Community/ies. Although it is often abbreviated to ‘Union’ in legislation (e.g. in the Treaty on the Functioning of the European Union), this practice should be avoided in other texts. Use either the full form or the abbreviation ‘EU’.
The (European) Community/ies. Now absorbed by the European Union, so the name should no longer be used except in historical references. Use instead ‘the European Union’ or ‘EU’. For example, ‘Community policy/institutions/legislation’ should now read ‘European Union / EU /policy/institutions/legislation’. However, note that the European Atomic Energy Community (Euratom) continues to exist.
Common, meaning EU, is still used in set phrases such as common fisheries policy, common agricultural (not agriculture) policy, etc. Do not use the term in this sense outside these set phrases.
Common market. This term is normally used in EU documents only in phrases such as ‘the common market in goods and services’.
Single market. This term is generally preferable to internal market (which has other connotations in the UK), except in standard phrases such as ‘completing the internal market’, which was originally the title of the key White Paper.
The Twenty-seven (Twenty-five, Fifteen, Twelve, Ten, Nine, Six). These expressions are sometimes used to refer to different memberships of the European Union at different periods. In this context the only correct abbreviation is EU-27, 25, 15, 12, 10, 9 or 6 (not EUR-25 etc.) to avoid confusion with the euro.
Acquis. The acquis (note the italics) is the body of EU law in the broad sense, comprising:
- the Treaties and other instruments of similar status (primary legislation);
- the legislation adopted under the Treaties (secondary legislation);
- the case law of the Court of Justice;
- the declarations and resolutions adopted by the EU;
- measures relating to the common foreign and security policy;
- measures relating to justice and home affairs;
- international agreements concluded by the EU and those concluded by the Member States among themselves in connection with the EU’s activities.
Note that the term covers ‘soft’ law as well, e.g. EU guidelines, policies and recommendations.
Candidate countries have to accept the entire acquis and translate it into their national language before they can join the EU.
If qualified, acquis may also refer to a specific part of EU law, e.g. the Schengen acquis.
When you are producing documents intended for the general public, use the term acquis only with an accompanying explanation, or paraphrase it with a more readily understood expression, such as ‘the body of EU law’.
The way in which the European Union operates is regulated by a series of Treaties and various other agreements having similar status. Together they constitute what is known as primary legislation.
THE TREATIES — AN OVERVIEW
The treaties founding the European Union (originally the European Communities) were:
- the ECSC Treaty (Paris, 1951), which established the European Coal and Steel Community (expired in 2002),
- the EEC Treaty (Rome, 1957), which established the European Economic Community (later the EC Treaty, now the Treaty on the Functioning of the European Union),
- the Euratom Treaty (Rome, 1957), which established the European Atomic Energy Community.
Then in 1992 the European Union was established by:
- the EU Treaty (Maastricht, 1992).
Over the years these founding Treaties have been amended by:
- the Merger Treaty (1965)
- the Budget Treaty (1975)
- the Greenland Treaty (1984)
- the Single European Act (1986)
- the Treaty of Amsterdam (1997)
- the Treaty of Nice (2001)
- the Treaty of Lisbon (2007)
- six Accession Treaties (1972; 1979; 1985; 1994; 2003; 2005).
THE TREATIES IN DETAIL
Order of listing. When listed together the Treaties should be put in historical order: ECSC Treaty, EEC Treaty, Euratom Treaty, EU Treaty.
ECSC Treaty — Treaty establishing the European Coal and Steel Community.
Signed in Paris on 18 April 1951, it came into force on 23 July 1952 and expired on 23 July 2002. It is sometimes also called the Treaty of Paris.
Treaty on the Functioning of the European Union (TFEU).
This is the new name — introduced by the Treaty of Lisbon — for what was formerly known as the EC Treaty (Treaty establishing the European Community) and earlier still as the EEC Treaty (Treaty establishing the European Economic Community). The original EEC Treaty was signed in Rome on 25 March 1957 and came into force on 1 January 1958.
Euratom Treaty — Treaty establishing the European Atomic Energy Community.
Also signed in Rome on 25 March 1957, it came into force on 1 January 1958. The standard form is now Euratom Treaty rather than EAEC Treaty.
Treaties of Rome refers to the EEC and Euratom Treaties together.
Merger Treaty — Treaty establishing a Single Council and a Single Commission of the European Communities.
Signed in Brussels on 8 April 1965, it came into force on 1 July 1967.
Budget Treaty — Treaty amending certain Financial Provisions of the Treaties establishing the European Communities and of the Treaty establishing a Single Council and a Single Commission of the European Communities.
Signed in Brussels on 22 July 1975, it came into force on 1 June 1977.
Greenland Treaty — Treaty amending, with regard to Greenland, the Treaties establishing the European Communities.
Signed on 13 March 1984, it came into force on 1 January 1985. This made arrangements for Greenland’s withdrawal from the then European Communities and granted the island ‘Overseas Countries and Territories’ status.
Single European Act.
Signed in Luxembourg and The Hague on 17 and 28 February 1986, it came into force on 1 July 1987. This was the first major substantive amendment to the EEC Treaty. It committed the signatories to a single European market by the end of 1992 and generally expanded the scope of European policy-making. It also made minor amendments to the ECSC and Euratom Treaties.
Treaty on European Union (TEU) or EU Treaty.
Signed in Maastricht on 7 February 1992, it came into force on 1 November 1993. Often known as the Maastricht Treaty, it established a European Union based on (1) the existing Communities plus (2) a common foreign and security policy (CFSP) and (3) cooperation on justice and home affairs (JHA). Among other things it gave the European Parliament an equal say with the Council on legislation in some areas and extended the scope of qualified majority voting in the Council. It also laid down a timetable and arrangements for the adoption of a single currency and changed the name of the European Economic Community to the European Community. It has now been amended by the Treaty of Lisbon (see 15.15).
For the short form, write ‘the EU Treaty’ or, in citations, abbreviate to TEU (see 15.18).
Treaty of Amsterdam — Treaty of Amsterdam amending the Treaty on European Union, the Treaties establishing the European Communities and certain related acts.
Signed in Amsterdam on 2 October 1997, it came into force on 1 May 1999. After enlargement to 15 members in 1995 and with further expansion in prospect, it sought to streamline the system, taking the innovations of Maastricht a step further. Among other things, it broadened the scope of qualified majority voting and brought the Schengen arrangements and much of justice and home affairs into the then Community. It also incorporated the Social Protocol into the EC Treaty. Under the Common Foreign and Security Policy, the arrangements on defence aspects were strengthened. Finally it completely renumbered the articles of the EU and EC Treaties.
Treaty of Nice — Treaty of Nice amending the Treaty on European Union, the Treaties establishing the European Communities and certain related acts.
Signed in Nice on 26 February 2001, it came into force on 1 February 2003. It amended the founding Treaties yet again to pave the way for enlargement to 25 Member States, making certain changes in institutional and decision-making arrangements (qualified majority voting, codecision) and extending still further the areas covered by these arrangements. It changed the name of the Official Journal of the European Communities to ‘Official Journal of the European Union’.
Treaty of Lisbon — Treaty of Lisbon amending the Treaty on European Union and the Treaty establishing the European Community. Signed in Lisbon on 13 December 2007, it came into force on 1 December 2009. It amended the EU’s two core treaties: the Treaty on European Union and the Treaty establishing the European Community. The latter was renamed the Treaty on the Functioning of the European Union. The principal changes include the following:
- the European Union acquired legal personality and absorbed the European Community;
- qualified majority voting was extended to new areas;
- the European Council was made a European institution in its own right and acquired a President elected for 2½ years;
- the post of High Representative of the Union for Foreign Affairs and Security Policy (also a Vice-President of the Commission) was established;
- the role of the European Parliament and national parliaments was strengthened;
- a new ‘citizens’ initiative’ introduced the right for citizens to petition the Commission to put forward proposals.
These changes also had major consequences for terminology, in particular all references to ‘Community’ became ‘European Union’ or ‘EU’ and a number of institutions were renamed. This process is still ongoing, though.
Accession treaties. The original Treaties have been supplemented by six treaties of accession. These are:
- the 1972 Treaty of Accession (Denmark, Ireland and the United Kingdom),
- the 1979 Treaty of Accession (Greece),
- the 1985 Treaty of Accession (Portugal and Spain),
- the 1994 Treaty of Accession (Austria, Finland and Sweden),
- the 2003 Treaty of Accession (Cyprus, Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia and Slovenia),
- the 2005 Treaty of Accession (adding Bulgaria and Romania).
Do not confuse the dates of these Treaties with the actual dates of accession (1973, 1981, 1986, 1995, 2004, 2007).
Note that the accession of Romania and Bulgaria is considered to have completed the fifth enlargement, rather than constituting a sixth enlargement.
Treaties versus Acts of Accession. Take care to distinguish between Treaty of Accession and Act of Accession. Treaties of accession set out principles and regulate ratification, while acts of accession contain the technical details of transitional arrangements and secondary legislation (droit dérivé) requiring amendment.
Citation forms. Always use a treaty’s full title in legislation:
- … the procedure laid down in Article 269 of the Treaty establishing the European Community … (Article 2(2) of Council Decision 2000/597/EC, Euratom)
However, the Treaty of Amsterdam and the Treaty of Nice may be cited as such:
- … five years after the entry into force of the Treaty of Amsterdam …
On the other hand, it is common usage in legal writing (e.g. commentaries, grounds of judgments) to cite the Treaties using a shortened form or abbreviation:
- The wording of Article 17 Euratom reflects …
- Under the terms of Article 97 TFEU the Commission can …
- The arrangements for a rapid decision under Article 30(2) TEU allow …
This form can be used practically anywhere (except, of course, in legislation), especially if the full title is given when it first occurs.
Citing subdivisions of articles. Paragraphs and subparagraphs that are officially designated by numbers or letters should be cited in the following form (note: no spaces):
- Article 107(3)(d) of the Treaty on the Functioning of the European Union …
Subdivisions of an article that are not identified by a number or letter should be cited in the form nth (sub)paragraph of Article XX or, less formally, Article XX, nth (sub)paragraph.
- The first paragraph of Article 110 of the Treaty on the Functioning of the European Union …
- Article 191(2) TFEU, second subparagraph … |
For projects with just a few LEDs, a small display or a buzzer,
power is usually not a big issue and can easily be supplied by the Arduino pins themselves, by a battery or by a small USB power supply.
But for projects that involve larger motors, solenoids, high-power LEDs, Peltier elements, etc., power management is often non-trivial, and appropriate power supplies and switches (relays, MOSFETs, BJTs) become critical parts of the design, often the most challenging!
To check whether the project can handle the required current, it is often crucial to test it with a dummy load instead of the actual motor/lamp/relay. In particular, this allows you to stress-test the system by making sure it can run at, for example, 400mA when the actual motor only requires 300mA.
Fancy dummy loads with constant current, constant power or constant resistance can be bought, but that is a niche market aimed at professionals, and for a hobbyist a simpler system is often sufficient.
The most common small-signal resistors used in circuits have a power rating of 0.25W, which is really not a lot. However, they are very cheap: a pack of 100 costs about 50 cents, so 1 euro buys you 200, which can in theory dissipate 50W, if the power is distributed equally and the resistors are not packed too close.
Here's the description of a board with 192 equally valued resistors: they fit nicely in 24 rows of 8 on a 7x9cm prototype board, with space left over for an 8-fold dip switch and 2 female banana connectors. Each row has 8 resistors in series, and with the dip switch one can select how many of these rows to put in parallel, and thus how much current to let flow.
Step 1: Theory
Current I, voltage V and resistance R are related by Ohm's law: V=IR.
The power dissipation of a resistor is P=IV=I^2R=V^2/R. Thus, the maximum voltage across a resistor is Vmax=sqrt(P*R). For P=0.25W, this gives Vmax=1.58V for R=10Ohm, Vmax=5V for R=100Ohm, Vmax=15.8V for R=1kOhm, etc. For a string of n resistors in series, the total voltage is divided equally over the n resistors and thus can be n times larger: Vmax=n*sqrt(P*R).
Boards with 10Ohm or 100Ohm resistors come out to practical values this way: a row of eight 10Ohm resistors can be used up to 12.6V, and a row of eight 100Ohm resistors can handle up to 40V. At these maximum voltages, each string will draw 12.6V/80Ohm=158mA, or 3.8A for 24 rows in parallel. For R=100Ohm, each row draws 40V/800Ohm=50mA, corresponding to 1.2A in total.
In either case the board dissipates a total of 192x0.25W=48W. In practice, it is better to stay well below this, since the resistors are closely packed and heat each other up, so their temperature will rise above that of a single resistor at its maximum dissipation. As a rule of thumb, it is best to stay at half the nominal voltage if the load is kept on for long periods, while running at the full nominal voltage is fine for a few tens of seconds. So the 40V board will do fine at typical laptop power supply voltages of circa 20V, and the 12.6V board will be fine with 5V from USB supplies.
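The arithmetic above is easy to check in a few lines of Python; the constants simply restate the board's assumptions (0.25W resistors, 8 per row, 24 rows):

```python
# Check of the dummy-load arithmetic; constants restate the board's assumptions.
P_RATING = 0.25   # W, power rating of each resistor
N_SERIES = 8      # resistors per row (in series)
N_ROWS = 24       # rows on the board (in parallel)

for R in (10, 100):                       # ohm, the two suggested resistor values
    v_max_one = (P_RATING * R) ** 0.5     # Vmax = sqrt(P*R) for a single resistor
    v_max_row = N_SERIES * v_max_one      # a row of 8 in series takes 8x the voltage
    i_row = v_max_row / (N_SERIES * R)    # current through one row at Vmax
    i_total = N_ROWS * i_row              # all 24 rows switched in
    p_total = N_SERIES * N_ROWS * P_RATING
    print(f"R={R:>3} ohm: Vmax={v_max_row:5.1f} V, {i_row * 1000:3.0f} mA/row, "
          f"{i_total:3.1f} A total, {p_total:.0f} W max dissipation")
```

Running this reproduces the 12.6V/158mA and 40V/50mA figures and the 48W total quoted above.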
Step 2: Construction
MATERIALS AND TOOLS
- 192 0.25W 1% resistors of equal value
- A 7x9cm prototype board with 24x30 holes
- An 8-position DIP switch
- 2 female banana connectors
- Soldering iron, solder and some solid-core hookup wire
- Multimeter to check the resistors and connections
Sample a couple of resistors from each batch to make sure that their value is close to (within 1% of) the nominal value. If it turns out that half the resistors are somewhat below the average value and the other half above, it is best to make rows with 4 resistors from batch A and 4 from batch B.
Take 8 resistors and bend their legs 90 degrees. Stick one leg in the corner hole and the other 3 holes lower. On the other side, bend the legs 45 degrees to hold the resistor in place. The next resistor has one leg in the same hole as the previous resistor, and the other three holes lower. Here, too, bend the legs 45 degrees. Continue with all eight.
Solder the eight resistors in place (9 joints) and cut off the legs. Repeat this for all 24 rows; now all 192 resistors are in place. Check with a multimeter that each row has a resistance of 8R, and that the resistance between rows is infinite. Solder the dip switch array to the board. Make some holes in the PCB for the banana connectors; the board shown is soft enough that the sharp end of a pair of scissors will do. Mount the banana connectors.
Connect the top of all 24 resistor rows together and connect with an insulated wire to one of the banana connectors. Connect the bottom 8 connections of the dip switch together and to the other banana connector.
Now the 24 resistor rows need to be connected to the 8 dip switches. One practical way is to connect 5 of the switches to 4 rows each, one switch to two rows, and two switches to one row each. This way any number of rows from 1 to 24 can be activated. In addition, it is better to avoid having a single switch connect neighbouring rows: this way the heat is distributed more evenly over the board in case only part of the rows are active. The picture shows two ways I connected them. The left setup was a first attempt to distribute the rows homogeneously over the board, but the result is a rather messy spaghetti of hookup wire. On the right the distribution is less even, but in my opinion still satisfactory, and it reduces the soldering work significantly.
Label the board; in particular, note down the resistance per row and the maximum voltage, and for each row indicate the switch that it is connected to. On the dip switch, I indicated with '|' a switch that activates four rows, ':' the switch that connects two rows and '.' the switches that activate one row.
Test the board with a multimeter. Activating one group of 4 rows should give a resistance of 2R, two groups R, four groups R/2, etc.
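To double-check that the suggested switch grouping (five switches of 4 rows, one of 2, two of 1) really lets you select any number of rows from 1 to 24, here is a small Python sketch; the 10Ohm row value is just one of the two variants described above:

```python
from itertools import combinations

# Switch groups as wired above: five switches of 4 rows, one of 2, two of 1.
GROUPS = [4, 4, 4, 4, 4, 2, 1, 1]
ROW_RESISTANCE = 8 * 10          # one row = 8 resistors in series (10 ohm variant)

reachable = set()
for k in range(1, len(GROUPS) + 1):
    for combo in combinations(GROUPS, k):   # every non-empty subset of switches
        reachable.add(sum(combo))

assert reachable == set(range(1, 25))       # any row count 1..24 can be selected

for n in (1, 4, 8, 24):
    print(f"{n:2d} rows in parallel -> {ROW_RESISTANCE / n:6.2f} ohm")
```

Since `combinations` works on positions, duplicate group sizes are treated as the distinct physical switches they are, so the check covers every real switch setting.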
The board is now ready for use! |
Contractual work—teacher and student establish an agreement that the student must perform a certain amount of work by a deadline. Provide a new learning experience; systematic but flexible.
List any materials in your lesson plan. It is always desirable that new ideas or knowledge be associated to daily life situations by citing suitable examples and by drawing comparisons with the related concepts.
Samford: "A unit plan is one which involves a series of learning experiences that are linked to achieve the aims, composed by methodology and contents." The objectives should not be activities that you plan on using to teach the lesson.
Related to social and Physical environment of the learner. It is the actual step by step plan. For example, tell the student the expected outcome and why it is important for them to acquire the skill.
While it provides teachers with options, it can be confusing for some teachers when they need to decide on what to teach. Also, it facilitates teaching literature and English together.
Principle of absorption and integration: The back-of-the-envelope lesson plan is clearly written by someone who has done this lesson many times before and has simply scribbled a brief reminder of the main aspects of the lesson.
In fact, there is no need for a lesson plan to ever be seen, touched, considered or dreamed of by students, and nor does it even need to exist on paper or disk, though it usually does. This step should involve a good deal of activity on the part of the students.
Introduction - Writing a good lesson plan is a requirement for every public school teacher. There is some confusion about what a TEFL lesson plan is and is not. What is the purpose of the assignment?
The teacher will take the aid of various devices. How will you get her there? Your primary objective is that "Each beginning student will learn how to bend their knees and ease themselves to the ice to reduce the possibility of injury". This step pertains to preparing and motivating children for the lesson content by linking it to the students' previous knowledge, by arousing the children's curiosity and by making an appeal to their senses.
Both of these are examples only. Circulate around the room and provide additional prompting as needed. Independent work—students complete assignments individually.
What might they hear? It requires a good deal of mental activity to think and apply the principles learned to new situations.
Sometimes another person may be asked to substitute for you or to use your lesson plan.
What level of learning do the students need to attain before choosing assignments with varying difficulty levels? There is a difference between revision after the summer break and spending weeks on the topic as though the students have not done this before. A lesson plan is a teacher's plan for teaching a lesson.
What makes writing relevant to certain individuals? Care is taken when creating the objective for each day's lesson, as it will determine the activities the students engage in.
Here's a complete lesson plan for teaching Author's Purpose. This video goes over the introduction of the lesson, as well as the students participating in the activity.
It concludes with students demonstrating their understanding of Author's Purpose.
What is a Lesson Plan? This page is about lesson plans in the context of TEFL (teaching English as a foreign language). In other contexts, the term lesson plan may have a broader meaning, including something more akin to an entire curriculum.
Lesson Planning Mistakes: a list of mistakes made in writing lesson plans and an explanation of what to do about them in order to improve and communicate effectively. How to Develop a Lesson Plan: we have received several questions regarding how to write a good lesson plan.
Location of the Holderness Coast
The Holderness Coast is located on the east coast of England. It extends 61km from Flamborough in the north to Spurn Point in the south.
The Holderness Coastline is one of Europe’s fastest eroding, at an average rate of around 2 metres per year; around 2 million tonnes of material are lost every year. Approximately 3 miles (5 km) of land has been lost since Roman times, including 23 towns/villages. These are shown on the map below.
Underlying the Holderness Coast is bedrock made up of Cretaceous Chalk. However, in most places, this is covered by glacial till deposited over 18,000 years ago. It is this soft boulder clay that is being rapidly eroded.
There are two main reasons why this area of coast is eroding so rapidly. The first is the result of the strong prevailing winds creating longshore drift that moves material south along the coastline. The second is that the cliffs are made of soft boulder clay which erodes rapidly when saturated.
Holderness Coast Case study
The Holderness Coast is a great case study to use when examining coastal processes and the features associated with them. This is because the area contains ‘textbook’ examples of coastal erosion and deposition. The exposed chalk of Flamborough provides examples of erosion features such as caves, arches and stacks. Coastal management at Hornsea and Withernsea provides examples of hard engineering solutions to coastal erosion. Erosion at Skipsea illustrates the human impact of erosion in areas where coastlines are not being defended. Mappleton is an excellent case study of an attempt at coastal management which has had a negative impact further along the coast.
Spurn Point provides evidence of longshore drift on the Holderness Coast. It is an excellent example of a spit. Around 3% of the material eroded from the Holderness Coast is deposited here each year.
Esri story map – Holderness Coast |
Using Video in learning and teaching – 2. Educational Contexts
Considering these different affordances of video, it is possible to look at educational contexts in which these can be used.
Providing resources to aid transition to University, and more general study skills
A host of non subject-specific guides to help students in their orientation to University life can be provided, and these might include a number of the different approaches outlined above. In our first example, Discover Study Skills Online, recent graduate Kieren Bentley guides prospective and new students through some basic orientation, to teach them how learning at University may differ. This example is broken down into discrete units, and supporting worksheets are made available to the viewer.
Providing an introduction to a module.
A number of the above techniques could be combined to provide an introduction to a module. This might typically provide the viewer with a summary of the key learning outcomes as well as other important information. This is a scenario in which the module leader might want to address the viewers directly as presenter, so as to make a personal connection with them. In this example, Professor Tim Birkhead provides an overview of a module studying animal behaviour in Animal and Plant Sciences. In this module, as well as providing this introduction, there are also a series of podcasts provided to brief the students prior to each lecture, made available via MOLE.
Preparing for specific classes in a module.
Again a number of the above techniques can be used to do this. A specific type of this approach is used in flipped learning. In flipped learning, specific resources are provided for the students to engage with before the class. This then has the potential to use the time spent in class for working on specific problems and exploring topics in more depth, rather than providing basic instruction. In this example, Dr Anthony Rossiter has created over 250 screencasts for his students to use before their lectures. These are created very simply, by recording Powerpoint presentations that Anthony then annotates as he is explaining the solution to a wide range of engineering maths, using freely available software and a sympodium monitor. Anthony’s extensive collection of resources are freely available on his YouTube channel, and the University’s iTunes U site
For more information on creating flipped learning resources, see this guide, produced by colleagues from the Technology Enhanced Learning Team, Online Learning Team, and Krirsten Bartlett from Psychology.
Explaining difficult concepts using animations
There are many cases in which we teach complex concepts which our students may find difficult. This might be because the concepts are abstract, or because we are looking at underlying data that is complex. In this example, animations are used to explain some of the abstract “Zeno’s Paradoxes”, along with explanations from Angie Hobbs, our very own Professor of the Public Understanding of Philosophy.
Preparing students for laboratory or practical classes and field trips.
Laboratory classes, field trip and other forms of practical work can bring tremendous value to the learning experience, and provide real world practical skills that students may need to work in professional life after graduation. Often there may be a need for students to learn some basic practical skills to facilitate this learning. Instructional videos can be used to great effect, to prepare students by practically demonstrating how key procedures and techniques should be performed. In the case of computer based techniques, these can also be easily produced by creating screencasts. In the iDig example below, archaeology students are taught key practical techniques employed during archaeological excavations. These videos can be downloaded by the students and taken into the field with them, so the resources are available to them at the time and place when they most need them.
iDig – Mobile Field Training for Archaeologists
In this example, engineering students are provided with vital health and safety orientation prior to their project work in the workshops in the Diamond.
Using video to trigger discussion or other activities.
By bringing alternative perspectives into the classroom using video, a wide range of discussion, reflection and other activities can be prompted. Equally, different scenarios could be presented, and students invited to evaluate these. This example could be used to trigger a discussion around a range of topics relating to this most important of contemporary humanitarian issues:
Increasing the reach of public engagement events.
Engaging wider audiences with our work is a very important component of academic life, and core to our mission here at Sheffield. At Sheffield there are many opportunities to engage the public and present your subjects to them in innovative ways, supported by our Public Engagement and Impact team. In this example, The Sounds of the Cosmos, our headline event from the 2014 Festival of the Mind, was captured and made available via iTunes U, where it became an item of featured content by Apple, attracting over 60,000 visitors worldwide.
Solar System example
Excellence in learning and teaching case studies
Documenting and sharing excellence in learning and teaching is a fundamental pillar of our learning and teaching strategy. Video can provide a rich and diverse narrative via which we can convey what we believe to be excellence in teaching. These case studies, which are normally documentary in nature, focus on why we believe our learning and teaching is innovative, and feature interviews with staff and students about their experiences, along with footage of learning and teaching taking place. The sorts of examples chosen normally demonstrate where innovative practice takes place beyond a conventional classroom setting, or where there has been some other unique form of teaching and/or assessment. In this example, students of Politics study the workings of Parliament, have guest lectures from visitors such as John Bercow and David Blunkett, and get to contribute directly to parliamentary processes by writing evidence for select committees.
Student generated Media for assessment
Getting students to create their own videos as part of their assessments is probably the single largest development we have seen in the use of educational video in the last 10 years. Rather than just being passive consumers of content, students can become active creators of video, and in doing so, become active creators of their own knowledge. Students can receive training by the Creative Media Team, in all aspects of video production. In producing videos, students can typically
- Develop new digital literacy skills
- Gain new understandings of their subjects by having to articulate their knowledge in new ways, using new technologies
- Employ the power of their own creativity to pursue subjects further
- Gain important team working and time management skills
In this example, we have combined an excellence in learning and teaching case study, along with some excellent student created videos, as part of the module in Animal and Plant Sciences – APS279 Getting Science on Film. |
We’re aware that customers who live close to some of our sites may see swarming midges (known as chironomids) in or around their properties.
Are these midges a health hazard?
- Although the swarming of these insects, sometimes in considerable density, can be a nuisance, they are NOT harmful or hazardous to health.
- These midges have no mouth parts and are unable to bite or sting.
- They do not spread disease.
Why can’t Thames Water get rid of them?
- It is illegal to apply insecticide where there is a risk it could drift into other properties.
- Even if we were allowed to use insecticides, it would be impossible to completely eradicate the midges as they occur naturally near areas of open water.
- Midges are an essential part of the food chain, being vital food for birds and other wildlife on and around our sites, so eliminating them entirely could have a serious impact on wildlife.
What are midges?
- Midges are flying insects that vary in size depending on the type of species.
- There are many different species that emerge at different times of the year mainly between March and October.
- The adult stage lasts for only about one week during which time they swarm as part of the mating process.
Feel free to contact us regarding midges; this will help us identify the areas worst affected.
Europe’s next ride to the Moon: Chandrayaan-1
Excitement is rising as ESA is in the final stages of preparation for the first collaborative space mission with the Indian Space Research Organisation (ISRO). Chandrayaan-1 will study the Moon in great detail and be the first Indian scientific mission leaving Earth’s vicinity.
Europe is supplying three instruments for the mission.
The Moon retains its fascination for planetary scientists and presents many mysteries still ripe for investigation. Chandrayaan, which means ‘journey to the Moon’ in Hindi, will study the Moon at many wavelengths from X-rays, visible, and near infrared to microwaves during its mission. It will orbit the Moon in a circular path, just 100 km above the lunar surface.
“The low orbit will give us really high resolution data,” says Detlef Koschny, ESA Chandrayaan Project Scientist. The principal mission objective is to map the surface of the Moon in unprecedented detail. At present, the maps planetary scientists have show details of around 30-100 m across. Chandrayaan will produce maps with a resolution of between 5 and 10 m across the whole surface of the Moon. “We aim to have this in two years,” says Koschny.
Building on the experience gained with SMART-1, Europe’s first mission to the Moon, which was launched in September 2003 and concluded its work three years later, ESA is assisting ISRO with operations, data handling and flight dynamics. ESA is also coordinating the provision of three European instruments.
The Compact Imaging X-ray Spectrometer (CIXS) will carry out high-quality, low-energy (soft) X-ray spectroscopic mapping of the Moon. The Infrared Spectrometer, known as SIR-2, will observe the chemical composition of the Moon’s crust and mantle. Both of these instruments were flown on SMART-1 and have been upgraded and rebuilt for Chandrayaan-1. They will continue the work on surface composition started by the original instruments.
The third European contribution is the Sub-keV Atom Reflecting Analyser (SARA). Derived from the ASPERA (energetic neutral atoms analyser) instruments, flown on Mars Express and Venus Express, it will be the first lunar experiment dedicated to direct studies of the interaction between electrically charged particles and the surface of the Moon.
With no atmosphere, the Moon’s surface is constantly bombarded by the wind of particles released by the Sun. SARA will monitor these interactions and use them to image the Moon’s surface composition, study surface magnetic anomalies and study the gases released from the lunar surface by the collision of the solar particles.
All European instruments are nearing completion and will be delivered to ISRO soon.
The low orbit means that these instruments, all of which rely on collecting the energy or particles emitted by the lunar surface, will work better. “Being closer to the surface means that the signal received from the surface will be stronger. This is good for global mapping,” says Christian Erd, ESA Chandrayaan Project Manager.
Apart from these European instruments, Chandrayaan-1 will carry another eight science instruments. They include a 29 kg landing probe (MIP), which will be dropped onto the Moon’s surface at the beginning of the mission to conduct investigations.
Chandrayaan-1 is scheduled to launch in April 2008 from Sriharikota, India. It will be carried into space by a Polar Satellite Launch Vehicle (PSLV) and placed on a five and a half day cruise to the Moon. It will then take two weeks of manoeuvres to fit into its operational orbit.
In addition to the great science it will address, Chandrayaan-1 will be a stepping-stone to future missions to other bodies, as well as to the Moon. For example, ESA’s BepiColombo mission to Mercury will carry a replica of SARA’s sensor subsystem, allowing the results from the two celestial bodies to be compared directly. |
Explanation of Names
Chrysopidae Schneider 1851
85 spp. in 14 genera* in our area(1); ca. 1,200 spp. in 75 genera and 3 subfamilies worldwide(2)
*Genera not yet in the guide: Chrysopodes (1 sp., FL, TX), Nacarina (1 sp., NC, FL), Nineta (1 sp., AZ-UT), Pimachrysa (5 spp., AZ-CA), Plesiochrysa (1 sp., FL)
Key to most NA genera provided in(3)
Soft-bodied insects with copper-colored eyes, long thread-like antennae, and lacy wings.
To the naked eye the wings appear hairless, but under magnification short hairs can be seen along the edges and veins. See photo and diagram of wing venation:
Most species are green, but some are brown, especially overwintering adults of certain species:
Pinned specimens turn yellowish.
Larvae are flat and elongated (alligator-like) with large jaws. Debris-carrying larvae cover themselves with bits of litter, perhaps to deter predators.
Common in grass and weeds and on tree/shrub foliage
Some adults are predators, others take liquids such as honeydew, and some feed on pollen
Larvae are predatory on other insects, especially aphids (sometimes called 'aphid lions'); will also consume larger insects, insect eggs, and pupae.
Eggs are characteristically stalked. The eggs and egg-laying process are illustrated here:
The larvae pupate in silken cocoons that are generally attached to the underside of leaves or stems.
Adults often give off an unpleasant odor when handled.
Some species are used as biological controls, with larvae feeding on aphids.
Adults are crepuscular or nocturnal.
have different wing venation and usually more oval wings
have raptorial forelegs
Duelli et al. 2014. A New Look at Adaptive Body Coloration and Color Change in “Common Green Lacewings” of the Genus Chrysoperla (Neuroptera: Chrysopidae). Annals of the Entomological Society of America, 107(2): 382-388
Entomology and Nematology News:
- The Fascinating Behavior of Debris-Carrying by Green Lacewing Larvae
- Debris-Carrying in Larval Chrysopidae: Unraveling Its Evolutionary History
Green lacewings (of Florida), Neuroptera: Chrysopidae
Health Tip: Teaching Your Child About Food Allergies
Children with food allergies may feel less afraid if they know what they're dealing with.
The Kids With Food Allergies website suggests children with food allergies should know:
- The warning signs of an allergic reaction, and how to best communicate that they are having a reaction.
- Not to share food with others at lunch.
- How to read food labels to look for allergens.
- To always wash hands before and after eating.
- To never share medications with other children.
- To politely decline food that isn't from home.
Posted: August 2014
If you are working through the sections of the Forces chapter of the AQA KS4 Physics Specification in order, then you will already have worked through Forces and their interactions, Work done and Energy transfer, Forces and elasticity and Moments, levers and gears.
The next requiring your attention is this one, Pressure and pressure differences in fluids.
Most people know that liquids are fluids; we often hear the phrase "drink your fluids".
But most people don't know that gases are also fluids!
Why are they both "fluids"?
It's because of their particle structure: they are both made up of particles that are able to move around fluidly (meaning over and around each other, unlike the particles of a solid, which are fixed in position and free only to vibrate). Hence the term "fluid".
OK. So, now we know that both liquids and gases are fluids, what causes them both to exert pressure on objects?
Well, again, it is due to their particle nature.
Their particles are constantly moving and quite literally bashing into whatever gets in their way. So, if an object, such as a person, is "in a fluid", like water, it is bombarded by the fluid's particles causing it to "feel a pressure".
Why don't we notice the pressure of the air when we are "in it"?
It's because in a gas the particles are so free to move that they bombard us equally on all sides and in every direction, including top and bottom, so although the pressure is there we don't notice any single direction of pressure.
So, the first thing to learn is - in a fluid Pressure acts in every direction. See diagram 1.
Because the pressure acts in every direction over our body, we tend not to notice it.
The situation when we get into a liquid, however, is different!
In any volume of a liquid there are vastly more particles than in the same volume of air, so there is a greater pressure (due to more bombardment).
But, there is also a greater pressure DIFFERENCE over an object, such as a person, that is partly in and partly out of the liquid.
In diagram 2 you can see that the pressure due to the Water, even just below the surface is greater than the pressure due to the Air.
And as you look deeper in the Water you see that the pressure arrows get bigger indicating a larger pressure.
It is this Pressure DIFFERENCE that is the cause of floating.
If you look again at diagram 2, you should be able to agree that all of the left pointing pressure arrows cancel with all of the right pointing pressure arrows, leaving just a single black downward Air pressure arrow and a single blue upward Water pressure arrow.
In diagram 3 we have deleted the left and right arrows leaving just the single up and down arrows since these are the ones that are causing the overall pressure DIFFERENCE.
Now, can you see that the DIFFERENCE of these pressures will cause a single resultant force and it will be upward, won't it?
This upward force is called Upthrust and if the Upthrust is big enough to balance the person's weight then he or she will float.
OK, so far so good, but what has Area to do with this?
Well, the water particles have to bombard the object (the person) to produce the Upthrust. If they only have a small area over which to act (just the feet), then it makes sense that the final upward force, or Upthrust, will be relatively small compared to when, for example, the person is lying down!
In diagram 4 the person is lying flat in the water and so many more upward pressure arrows are able to act.
Since this pressure is acting over a larger area, it produces a larger resultant upward force or Upthrust. This is why it is easier to float lying down.
Note: It is not the Pressure or Pressure DIFFERENCE that is greater due to the increased area; it is simply that a given Pressure DIFFERENCE acting over a larger Area must inevitably produce a larger Upthrust force.
So objects with a large Area will be more likely to float.
Thankfully, we can sum all of this up very neatly with a simple bit of mathematics:

Force = Pressure × Area, or F = P × A

So, the Force is proportional to both the Pressure (or Pressure DIFFERENCE) and to the Area of the surface; the bigger the area the bigger the Upthrust.
So, to float on water easily, lie down to increase your contact Area!
If the upward force, F, the Upthrust, generated by the Pressure DIFFERENCE is greater than or equal to your Weight, then you will float!
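The floating condition above can be sketched numerically. This is a minimal illustration, not a real calculation: the weight, pressure difference, and contact areas below are invented values chosen only to show how a larger Area turns the same Pressure DIFFERENCE into a larger Upthrust.

```python
# Sketch of the floating condition: Upthrust = Pressure difference x Area,
# and a body floats when the Upthrust at least balances its weight.
# All numbers here are made-up illustrative values.

def upthrust(pressure_difference_pa, contact_area_m2):
    """Upward force (N) produced by a pressure difference acting over an area."""
    return pressure_difference_pa * contact_area_m2

def floats(weight_n, pressure_difference_pa, contact_area_m2):
    """True when the upthrust is at least equal to the weight."""
    return upthrust(pressure_difference_pa, contact_area_m2) >= weight_n

# Same swimmer (assumed weight 700 N), same assumed pressure difference
# (2000 Pa), two postures: upright (small contact area) vs lying flat.
weight = 700.0   # N
dp = 2000.0      # Pa

print(floats(weight, dp, 0.05))  # upright: 2000 x 0.05 = 100 N of upthrust -> sinks
print(floats(weight, dp, 0.40))  # lying:   2000 x 0.40 = 800 N of upthrust -> floats
```

Only the contact area changes between the two calls, yet the outcome flips from sinking to floating, which is exactly the "lie down to float" advice in the text.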
By now you should begin to realise how it is that huge ships like ocean liners or oil tankers can float!
The clue is - the above equation - and specifically, the word Area.
Some of the world's largest ships have masses exceeding half a million metric tons and they are made from steel, but they float!
If you took such a lump of steel (or even just 1 Kg of steel) and simply dropped it in the water it would sink, no doubt.
The reason that a gigantic steel ship floats, is that the steel is spread out into a hull that covers a huge surface area.
The characteristic triangular cross sectional shape of a typical hull is also important because it allows a greater area to be in contact with the water as more cargo is loaded onto the ship and the ship drops lower and lower into the water. (Of course, there is a limit to how low the ship can "sink down" and all ships have a line around their hull which marks the maximum level to which they can safely be loaded.)
So, if you ever have to make a raft out of pieces of wood and perhaps some empty barrels or crates and some rope, the main design principle that you would adhere to would be to try to produce as large a surface (or base) area as possible. This would produce the largest "force" or "upthrust" from the water and increase your chance of floating.
In a moment we will look at how we know that the water pressure is 10,000 Pa at a depth of 1m (ie it wasn't just a guess), but before that, since we have introduced an equation, we need to explore it a little further, and like most equations, we can rearrange it. So, here is another example question involving the equation, but rearranged.
So, the equation rearranged for Pressure is:

Pressure = Force ÷ Area, or P = F ÷ A
Both forms of the equation are important and you must be able to recall and use them.
OK, so how do we KNOW that the water pressure is 10,000 Pa at a depth of 1m, which is what we stated in Example 1?
To calculate the pressure at a point below any depth of any liquid (or below a "column of liquid"), not just water, we use the following equation:

Pressure = height of column × density of liquid × gravitational field strength, or p = h × ρ × g
So, Pressure, p, in a liquid increases with depth (height of the column), h.
You should be able to see from the equation that the pressure above a point in a liquid is proportional to the column height or "depth of liquid", and to the density of the liquid.
You need to make sure that you understand why these are so.
First, why is pressure proportional to depth or why does pressure increase with depth?
Well, the liquid is made from particles; there will be more particles in a higher/deeper column of water above a point and so there will be a greater weight of liquid above the point and thus a greater pressure.
Imagine 3 fish at 3 different depths of water, as shown below.
The first fish has fewer water particles above it, so less weight of water above it and so less Pressure; hence shorter pressure arrows compared to the third fish in the deepest water.
Second, why is pressure proportional to the density of the liquid?
This is also due to the particle nature of the liquid; a more dense liquid has more particles in a given space and so, once again, a greater weight of liquid above the point and so a greater pressure.
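The p = h × ρ × g relationship, and the claimed 10,000 Pa at 1 m of water, can be checked with a couple of lines of arithmetic. The sketch below uses the rounded values common at this level (water density 1000 kg/m³, g = 10 N/kg); the mercury density is a standard textbook figure included only to illustrate the density dependence.

```python
# Pressure due to a column of liquid: p = h * rho * g.
# Using rho(water) = 1000 kg/m^3 and the rounded g = 10 N/kg.

def liquid_pressure(depth_m, density_kg_m3, g=10.0):
    """Pressure (Pa) at the given depth in a liquid of the given density."""
    return depth_m * density_kg_m3 * g

print(liquid_pressure(1.0, 1000.0))   # 10000.0 Pa at 1 m in water, as stated
print(liquid_pressure(3.0, 1000.0))   # 30000.0 Pa at 3 m: deeper => more pressure
print(liquid_pressure(1.0, 13600.0))  # 136000.0 Pa in mercury: denser => more pressure
```

The second and third calls change only one factor each, mirroring the two "why" questions above: pressure grows with depth and with density.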
Don't make the mistake of thinking that the pressure due to the column of fluid above a point always acts downwards and so causes a downwards force; it doesn't!
Remember what we said at the start of this section - in a fluid, pressure acts in every direction.
So, if we put an object 1m below the water it doesn't get forced downwards by the water, instead we might feel it being buoyed up by the water and it might even want to float!
This is because the pressure lower down will always be greater than the pressure higher up, making an object float or making it feel less heavy.
A heavy bag is lowered into water on a rope.
It is "buoyed" up due to a greater pressure at the bottom compared to the top.
But take note that the pressure acts all over the bag.
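The buoyed-up bag can also be put into numbers: the pressure on its bottom face exceeds the pressure on its top face, and that difference times the face area gives the upthrust. The bag's depth and dimensions below are invented purely for illustration.

```python
# Net upward force on a submerged object from the top/bottom pressure
# difference. Assumed values: bag top at 2 m depth, bag 0.5 m tall,
# top and bottom faces each 0.1 m^2.

RHO_WATER = 1000.0  # kg/m^3
G = 10.0            # N/kg (rounded)

def pressure_at(depth_m):
    """Water pressure (Pa) at a given depth, from p = h * rho * g."""
    return depth_m * RHO_WATER * G

top_depth = 2.0   # m
height = 0.5      # m
area = 0.1        # m^2

p_top = pressure_at(top_depth)              # 20000 Pa pushing down on the top
p_bottom = pressure_at(top_depth + height)  # 25000 Pa pushing up on the bottom
net_upthrust = (p_bottom - p_top) * area    # 500 N of net upward force
print(net_upthrust)
```

Notice that the absolute depth cancels out: only the 0.5 m height of the bag matters to the pressure difference, so the same bag feels the same upthrust whether its top is at 2 m or 20 m.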
A good example of how water pressure increases with depth is a dam built across a lake to make a reservoir. Engineers know that the base of the dam has to be built stronger than the top to cope with the higher pressure.
Air is a Gas and we have already stated that a gas is a Fluid, just like a liquid.
Reminder: gases and liquids are called fluids due to the nature of their particles - they are able to move around or to "flow", fluidly.
A solid, on the other hand is not a fluid because its particles are not able to flow; (they do move, but only in the sense of a vibration).
Since the air particles are particles of matter (again, like liquid particles) they are held by the force of gravity, forming a "blanket" or "belt" of particles around the Earth which we call the "atmosphere".
Here is a definition for Atmosphere:
The atmosphere of Earth is the layer of gases, commonly known as air, that surrounds the planet and is retained by Earth's gravity.
Living at the bottom of this layer of gases, as we humans and most animals do, we feel the maximum atmospheric pressure on a daily basis. So, what is the value of this Standard Atmospheric Pressure?
It is a huge value, approximately 100,000 Pa.
Two questions immediately arise the first time you see this number;
1) Why is it so large? and
2) Why aren't we crushed by such a huge pressure? (It is equivalent to a force of about 10 N acting on every cm² of our body.)
1. Why is atmospheric pressure so large?
The answer is pretty obvious when you think about it; we have said that the atmosphere is a layer of gases and we are at the bottom of it! So, as we walk about the Earth, there is always something like a 20 km thickness of gas above our heads weighing down on us. The atmospheric pressure is due to the weight of the long column of particles above us. So, it's not surprising that the pressure is large.
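To get a feel for how large this pressure is, we can apply F = P × A to a few everyday areas. The areas below are assumed round numbers for illustration; note in particular that 1 cm² = 0.0001 m², so atmospheric pressure pushes on each square centimetre with roughly 10 N (about the weight of a 1 kg bag of sugar).

```python
# Forces produced by standard atmospheric pressure (~100,000 Pa = 100,000 N/m^2)
# acting on some illustrative areas, via F = P * A.

ATMOSPHERIC_PRESSURE = 100_000.0  # Pa

def force_on(area_m2, pressure_pa=ATMOSPHERIC_PRESSURE):
    """Force (N) from a pressure acting over an area."""
    return pressure_pa * area_m2

print(force_on(0.0001))  # ~10 N on one square centimetre
print(force_on(1.0))     # 100000.0 N on a one-square-metre table top
print(force_on(1.5))     # 150000.0 N on roughly 1.5 m^2 of body surface (assumed)
```

Those last two numbers make the second question above very pointed: without the balancing outward pressure described next, forces of this size would indeed crush us.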
2. Why aren't we crushed by such a huge pressure?
At the beginning of this section on Pressure, on the first outline diagram of a person, we showed AIR pressure arrows pointing inwards, but the diagram must be incomplete. (Let's repeat diagram 1 here.)
Now that you know the size of the indicated inward atmospheric pressure you should realise that the person would get crushed IF diagram 1 told the whole story.
So, what is missing from the diagram?
Answer- there MUST be arrows inside the body pointing outwards!
There must be air inside the body producing an outward pressure to balance the inward pressure due to the atmosphere.
This turns out to be true.
There is air inside our bodies, in particular inside our lungs, ears, throat, stomach etc. This air produces a pressure which balances the inward pressure due to the atmosphere.
Additionally, there is a lot of liquid in our bodies, and liquids, unlike gases, are incompressible, so they resist the atmospheric pressure and push back on it with the same pressure.
Now we know why we are not crushed by the atmosphere, let's move on.
At sea level a person has about 20 km of atmosphere (air) above him or her producing a maximum pressure of about 100,000 Pa. If the person goes higher and higher up, say by climbing a mountain (a very big one, like in the Himalayas) then it is obvious that there will be less atmosphere above him or her and hence less weight acting downwards and so, less pressure.
So, we can initially conclude:
Pressure is due to the action of particles bombarding or colliding with an object. The higher up one goes, the lower the atmospheric pressure, because there are fewer particles, and so less weight of air, above the person.
See diagram 2.
But there is a second reason why Air pressure gets lower with height (a reason not shared by liquids, our other "fluid").
Unlike water and other liquids, the density of air is not constant.
As height increases, the density of the air in our atmosphere decreases. We say, it "gets thinner". At the very highest points we say that the air is "too thin to breathe" and at that point people need to bring their own air with them in cylinders. We see RAF jet pilots and mountaineers on Everest doing this, wearing full face breathing apparatus.
A decrease in density means a decrease in number of particles in the column of air above the person and hence in the weight of the column and thus in the pressure.
See diagram 3
So, we can finally conclude:
Atmospheric pressure varies with height due to two reasons:
1. As one moves higher up into the atmosphere, there is a shorter column of air above a person, so the weight of the air above a person decreases and the pressure decreases.
2. As one moves higher up into the atmosphere, the density of the air decreases and so the number of particles decreases, causing a decrease in the weight of the air above a person and a decrease in the pressure. |
Educators have long known that the arts can contribute to student academic success and emotional well-being. The ancient art of storytelling is especially well-suited for student exploration. As a folk art, storytelling is accessible to all ages and abilities. No special equipment beyond the imagination and the power of listening and speaking is needed to create artistic images. As a learning tool, storytelling can encourage students to explore their unique expressiveness and can heighten a student's ability to communicate thoughts and feelings in an articulate, lucid manner. These benefits transcend the art experience to support daily life skills. In our fast-paced, media-driven world, storytelling can be a nurturing way to remind children that their spoken words are powerful, that listening is important, and that clear communication between people is an art.
Why Storytelling? More Reasons...
Gaining Verbal Skills
Becoming verbally proficient can contribute to a student's ability to resolve interpersonal conflict nonviolently. Negotiation, discussion, and tact are peacemaking skills. Being able to lucidly express one's thoughts and feelings is important for a child's safety. Clear communication is the first step to being able to ask for help when it is needed.
Both telling a story and listening to a well-told tale encourages students to use their imaginations. Developing the imagination can empower students to consider new and inventive ideas. Developing the imagination can contribute to self-confidence and personal motivation as students envision themselves competent and able to accomplish their hopes and dreams.
Passing On Wisdom
Storytelling based on traditional folktales is a gentle way to guide young people toward constructive personal values by presenting imaginative situations in which the outcome of both wise and unwise actions and decisions can be seen.
Copyright © 2000 Story Arts |
The 20th Century was a time of profound changes in our understanding of the physical world. While Einstein’s Theory of Relativity challenged our established notions of space, time and gravitation, the Quantum Revolution opened our eyes to a world of mysterious phenomena and apparent paradoxes—ones that we have no choice but to accept and make sense of! Did you know that certain tiny particles, such as electrons, can exist in different places at once and that it is possible to know either where they are, or how fast they are moving, but never both? In this course you will have a first taste of counterintuitive mysteries and paradoxes. These include quantum superposition (epitomized by the iconic “Schrödinger’s Cat” thought experiment), wave-particle duality, spooky coincidences between twin particles living far apart, and many other mind-bending phenomena. Apart from covering the basics of these phenomena, there will also be discussion about their technological applications, philosophical implications, and a glimpse of the history of the science. No prior knowledge of physics is required. Bring with you the most commonsense notions and watch them crumble under the weight of weird quantum phenomena!
By completion of this course, successful students will be able to:
- Describe the basic meaning of single-particle interference and wave-particle duality
- Answer simple questions about related thought experiments
- Explain the concept of interaction-free measurement
- Discuss how quantum non-locality is observed and its philosophical implications
- Name a few applications of quantum effects such as superposition and entanglement
Christoph Simon is a professor in the Physics and Astronomy department. He is also a member of the U of C's Institute for Quantum Science and Technology. His research area is theoretical quantum physics. He studies counter-intuitive quantum phenomena such as superposition and entanglement and their potential applications. |
1996 Tyler Laureates
Dr. Willi Dansgaard, Dr. Claude Lorius, and Dr. Hans Oeschger
Preserved within the great polar ice sheets is an exquisite record of the earth's global climate extending back thousands of years. Within them lie concentrations of oxygen isotopes, carbon dioxide and other gases present in ancient atmospheres, the acids from numerous volcanic eruptions, evidence of storms that raged around the world, and other traces of global climate change deposited during the span of human existence.
Searching for clues to the earth's climate record through the analysis of ancient polar ice was a revolutionary idea when first proposed in 1954. Today, it is a basic tenet of global climate research showing a strong relationship between climate and the chemical composition of the atmosphere. In addition to providing the scientific community with a fundamental understanding of climate on earth, the data from polar ice studies is used in virtually all reports about global warming to emphasize the potential for atmospheric pollution to adversely affect global climate.
The three scientists most responsible for the scientific imagination, long-term vision, and wisdom that led to this breakthrough in understanding the earth's system are honored for their scientific accomplishments with this year's Tyler Prize for Environmental Achievement.
Taken together, the work of these three scientists has revolutionized scientific knowledge of how the temperature and composition of the atmosphere have changed over the past 150,000 years. By drilling into the ice caps of Greenland and Antarctica, and by analyzing the chemical and isotopic composition of the ice and of air-bubbles trapped in the ice, they have shown that the succession of glacial and interglacial ages that dominates the climatic history of the earth over the past 150,000 years involves substantial changes in carbon dioxide and methane. This discovery has launched a major international research effort to understand the mechanisms by which these atmospheric changes are linked to changes in the land surface and particularly to changes in ocean circulation and chemistry.
"I believe that a few hot summers would not have been sufficient to raise the global climate changes as a central scientific issue, had it not been for the ice core evidence provided by these scientists," said Dr. Edwin Boyle, Professor of Earth, Atmospheric and Planetary Sciences at the Massachusetts Institute of Technology, in supporting their nomination for the Tyler Prize.
The importance of Drs. Dansgaard, Oeschger, and Lorius' research extends far beyond the scientific community and has had a profound impact in the environmental policy making arena.
Stephen H. Schneider, former Head of Interdisciplinary Climate Systems at the National Center for Atmospheric Research, observed, "While they have not themselves participated in environmental advocacy or policy analysis, their fundamental scientific contributions are frequently used by those interested in policy implications... to build the credibility of scientific understanding needed for environmental action in the area of global warming and global change."
Willi Dansgaard, Professor Emeritus of Geophysics at the University of Copenhagen, was the first paleoclimatologist to demonstrate that measurements of the trace isotopes oxygen-18 and deuterium (heavy hydrogen) in accumulated glacier ice could be used as an indicator of climate and atmospheric environment as derived from samples of successive layers of polar ice, often collected under extreme weather conditions.
The first polar deep ice core drilling expedition took place in 1966, with the collection of the American Camp Century Core from Greenland. In cooperation with other laboratories, Dr. Dansgaard and his group performed the first isotopic analysis of the ice and perfected the methods to date the ice sheets and measure acidity and dust records, thus demonstrating its value as an environmental indicator. Since that time, Dr. Dansgaard has organized or participated in 19 expeditions to the glaciers of Norway, Greenland, and Antarctica.
Dr. Dansgaard is a member of the Royal Danish Academy of Science and Letters, the Royal Swedish Academy of Sciences, the Icelandic Academy of Sciences, and the Danish Geophysical Society. He is the recipient of the Royal Swedish Academy of Sciences' Crafoord Prize, the International Glaciological Society's Seligman Crystal, and the Royal Swedish Society of Geography and Anthropology's Vega Medal.
Claude Lorius, chairman, French Institute of Polar Research and Technology (Grenoble) has participated in 17 polar field campaigns, with a cumulative total of 5 years spent in some of the coldest spots on the planet. He was the first to appreciate the value of the air bubbles trapped in the ice sheets and developed methods to determine the atmospheric pressure at the time of ice formation thus providing insight to the original thickness of the ice.
He played a significant role in promoting international cooperation in polar ice research. Foremost among these efforts was the successful collaboration between Soviet, American, and French scientists in the recovery and analysis of the longest ice core drilled to date. The information obtained from the Vostok Core, collected in East Antarctica, is exceptional because it provides the first continuous ice record of the drastic swings in global climate over the last 150,000 years extending from the present interglacial (or warming) period through about 100,000 years of glacial cooling, then on through the previous interglacial episode and into the tail of another glaciation. The drilling has now reached a depth of 3,100 meters which will allow scientists to extend the time scale to about 400,000 years.
Data from the analysis of the Vostok Core by Dr. Lorius and his team are stunning and include detailed records of air temperature, methane, carbon dioxide, and aerosols, to name but four climate system properties that this record has faithfully preserved. Of particular interest has been the reconstruction of atmospheric carbon dioxide and methane variations during the last climatic cycles, which shows a strong relationship between climate and the chemical composition of the atmosphere (in particular the concentration of greenhouse gases). This data provides a strong warning signal about the possible impact of human activities on climate.
Dr. Lorius was born on February 27, 1932 in Besancon, France. He received a masters and doctorate degree in Physical Sciences from the Sorbonne University in Paris. Dr. Lorius began his scientific career in 1955 as a researcher on the Antarctic Committee for the International Geophysical Year at the National Center for Scientific Research (CNRS).
Hans Oeschger, Professor Emeritus of Physics, University of Bern, Switzerland, is the pioneer of gas composition measurements on polar ice. A physicist by training, he developed numerous methods for extracting data from sequential layers of polar ice, thus demonstrating the wealth of geochemical information present in the ice archive.
Dr. Oeschger and his colleagues developed techniques for measuring radiocarbon on very small samples of carbon dioxide, oxygen isotopes, and the radiocarbon dating of ice. Their measurement of carbon dioxide concentrations from air bubbles trapped in ice revealed for the first time the important role that the world's oceans play in influencing global climate. Thus, it is now widely held that it is ocean-influenced changes in the levels of atmospheric gases that support the creation of the great glacial ice caps.
Dr. Oeschger began his work on isotopes and greenhouse gases around the same time as Dr. Dansgaard initiated his studies. Their combined work documented that abrupt climate swings are associated with changes in atmospheric greenhouse gases. The paradigm has come to be known as "Dansgaard-Oeschger events," the study of which has led to profound insights about the response of the present-day climate system to man's activities.
Dr. Oeschger was born on April 2, 1927 in Ottenbach, Switzerland. He earned a doctor of science degree from the University of Bern in 1955 and has been associated with that institution since that time as a researcher and professor. He became professor emeritus in 1992.
Dr. Oeschger is a member of a number of scientific academies and honor societies including the National Academy of Sciences, Swiss Academy of the Technical Sciences, and the Swiss Academy of Natural Sciences. Past honors include the Harold C. Urey Medal from the European Association of Geochemistry and the Seligman Crystal from the International Glaciological Society.
For More Information on the Tyler Prize, Contact:
Amber Brown, Administrator |
A one-step equation word problem is a simple word problem that translates into an equation solvable with a single operation: adding or subtracting a constant to or from an unknown value, or multiplying or dividing an unknown quantity by a number.
To solve one-step equation word problems, follow the following steps:
- Assign a variable for the unknown quantity or number.
- Set up the equation that best translates the word problem.
- Solve the defining equation.
- State the final answer.
Below are examples illustrating how to solve one-step equation word problem:
1. When 5 is added to a number, the result is 12. What is the number?
Let x be the number. The equation is x + 5 = 12. Solving for x, we get x = 12 - 5 = 7.
Therefore, the number is 7.
2. If a number is halved, the result is 24. Find the number.
Let x be the number. The equation is x/2 = 24. Solving for x, we get x = 24 × 2 = 48.
Therefore, the number is 48.
3. Mary has x pesos in her wallet. If her mother would give her an additional 200 pesos, she will have 230 pesos in all. How much money does she originally have?
Since the unknown amount is already denoted by x, the equation is x + 200 = 230. Solving for x, we get x = 230 - 200 = 30. Therefore, Mary originally has 30 pesos.
1. John bought 10 apples at 12 pesos each. How much do 10 apples cost him?
2. If an item is subject to 20% discount, what is the discount if it was originally marked at 500 pesos?
3. If 10 is subtracted from a number, the difference would be 5. What is the number? |
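The four patterns behind these problems (add, subtract, multiply, divide) can be undone mechanically by applying the inverse operation. Below is a small sketch of that idea; the function name and the string-keyed dispatch are just one possible illustration, not a standard library API.

```python
# Solve the one-step equation  x (op) k = result  for x by applying the
# inverse of op to the result.

def solve_one_step(op, k, result):
    """Return x such that x op k == result, for op in '+', '-', '*', '/'."""
    inverse = {
        '+': lambda r: r - k,  # x + k = r  ->  x = r - k
        '-': lambda r: r + k,  # x - k = r  ->  x = r + k
        '*': lambda r: r / k,  # x * k = r  ->  x = r / k
        '/': lambda r: r * k,  # x / k = r  ->  x = r * k
    }
    return inverse[op](result)

print(solve_one_step('+', 5, 12))     # Example 1: x + 5 = 12    -> 7
print(solve_one_step('/', 2, 24))     # Example 2: x / 2 = 24    -> 48
print(solve_one_step('+', 200, 230))  # Example 3: x + 200 = 230 -> 30
print(solve_one_step('-', 10, 5))     # Exercise 3: x - 10 = 5   -> 15
```

Each call mirrors one of the worked examples or exercises above, and the comments show the translation step (word problem to equation) that the lesson emphasizes.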
Kenneth R. Hirsch, MD
According to the Mayo Clinic, scientists don’t understand why moles form or whether they have a purpose. However, it’s not uncommon for adults to have from 10 to 40 moles on their bodies, says the Cleveland Clinic. Most moles develop in childhood, but some develop later on. Although it’s normal for moles to change or even disappear, moles that look different or crop up suddenly in adulthood should be checked out.
Video of the Day
What Causes Moles to Form
Melanocytes, cells that produce melanin, the naturally occurring pigment providing skin color, sometimes cluster together for unknown reasons and cause moles, according to the Mayo Clinic. However, some scientists believe moles are caused by skin damage from the sun, according to the American Osteopathic College of Dermatology. Moles tend to get darker with more sun exposure, says the Cleveland Clinic, but sometimes, they get darker during puberty or pregnancy. Moles are common, though, and are usually harmless.
Changes in Moles
Although it’s normal for moles to change slightly or disappear in adulthood, if they change shape, or brand new moles quickly develop, it’s important to seek advice from your health-care provider who can determine your next step. According to the Cleveland Clinic, if you start to see any change in mole color, size or form, or you notice mole bleeding, itching, scaling or pain, see a dermatologist.
The Cleveland Clinic suggests examining moles on a regular basis. Use a mirror or ask a loved one to help you to inspect moles in places you can’t easily see, such as the back of your thigh. Double-check moles on skin regularly exposed to sun.
Moles and Cancer
Sometimes, moles turn to cancer. According to the Mayo Clinic, "several types of moles have a higher than average risk of becoming cancerous." Congenital nevi--moles people are born with--could increase the risk of a fatal type of skin cancer called malignant melanoma. Atypical (dysplastic) nevi, hereditary moles that are irregular and bigger than a quarter inch, can also lead to malignant melanoma, says the Mayo Clinic. Finally, the more moles people have, the greater their risk of melanoma. Men are most likely to develop melanoma on their backs, whereas women are most likely to develop it on their lower legs, says the Cleveland Clinic.
If your dermatologist decides you have a dangerous-looking mole, he or she will start by taking a biopsy of the mole, explains the Cleveland Clinic. It’s a safe and easy procedure, and if it’s determined that the mole is cancerous, it’s then carefully removed by a simple surgical procedure. If caught early and removed, cancer is not likely to spread. |
In the summer of 1981, a colleague at the NASA Ames Research Center in Mountainview, California, gave me a small black stone wrapped in aluminum foil that changed the course of my life.
About the size of a marble and indistinguishable from any other rock that might be found on a beach, the stone was a piece of a meteorite that entered the Earth’s atmosphere over southern Australia in September 1969. It began with a bright fireball that broke up into several pieces, followed by claps of thunder, and minutes later a shower of black stones fell over several square miles near the small town of Murchison.
When the original boulder-sized object broke apart into smaller pieces, the surfaces of the fragments were heated by atmospheric friction to white-hot temperatures. In a few seconds the friction slowed them from their initial velocity and they finally fell to the ground at the same speed they would reach if they were dropped from an airplane. When the first stones were found they were still warm and had a smoky, aromatic smell. During the next few weeks townspeople and scientists collected over a hundred kilograms of meteorites ranging in size from marbles to bricks.
The smoky aroma hinted that there was more to this meteorite than just the mineral content typical of other stony meteorites. The Murchison belongs to a relatively rare group of meteorites called carbonaceous chondrites. The odor is produced by organic compounds older than the Earth itself, some of which were present in the vast molecular cloud of interstellar dust and gas that gave rise to our solar system 4.57 billion years ago.
Most of the organic material, nearly 2% of the total mass of the Murchison, is in the form of a coal-like polymer called kerogen, but there are also hundreds of different compounds that sound like a chemist’s laboratory: oily hydrocarbons, fluorescent polycyclic aromatic hydrocarbons (PAH), organic acids, alcohols, ketones, ureas, purines, simple sugars, phosphonates, sulfonates and the list goes on. Where did all this stuff come from? Did it have anything to do with the origin of life?
In 1953, a young graduate student named Stanley Miller published a short paper in Science that suggested an answer. Miller’s mentor was Nobelist Harold Urey at the University of Chicago, who had proposed that the early Earth’s atmosphere was likely to be a mixture of hydrogen, methane, ammonia and water. This is called a reducing atmosphere, with no free oxygen.
Miller decided to expose such an atmosphere to an electrical discharge to simulate lightning. The result was astonishing, because several amino acids were produced, along with hundreds of other compounds that resembled the mix of organic compounds in the Murchison. And then sixteen years later, Keith Kvenvolden and a group of researchers at NASA Ames analyzed a sample of the Murchison meteorite and convincingly demonstrated to everyone’s satisfaction that amino acids were present among the organic compounds. Not just the amino acids associated with life on the Earth, which might have been contamination, but more than 70 others that were clearly alien to biology as we know it. This study confirmed Stanley Miller’s conclusion that amino acids, the fundamental building blocks of proteins, can be synthesized by a non-biological process.
From this, it seemed reasonable to think that amino acids and other organic compounds would have been available on the prebiotic Earth, either delivered by meteoritic infall or synthesized by geochemical processing of atmospheric gases.
With a sample of a carbonaceous meteorite in hand, I was ready to do an experiment I had been dreaming about. I had spent much of my earlier research career studying lipids which, along with proteins, nucleic acids and carbohydrates, represent the four major kinds of molecules that compose living organisms. “Lipid” is a catch-all word for compounds like fat, cholesterol and lecithin that are soluble in organic solvents. In earlier research I had extracted triglycerides (fat) from the livers of rats, phospholipids such as lecithin from egg yolks, and chlorophyll from spinach leaves. All of these procedures used an organic solvent mixture of chloroform and methanol to dissolve the lipids, and I wanted to try the same thing with the Murchison material.
The surfaces of the meteorite stone had surface contamination from being exposed to sheep pastures in Australia and the fingers of everyone who handled it, so I broke it into smaller pieces and carefully obtained an interior sample weighing about one gram. I pulverized the sample in a clean mortar and pestle with a mixture of chloroform and methanol as the solvent, and decanted the clear solvent from the heavier black mineral powder. The chloroform solvent had a yellow tint, which meant that it had dissolved some of the organic material in the meteorite. I dried a drop of the solution on a microscope slide, added water, then examined it at 400X magnification. It was an extraordinary sight. Lipid-like molecules had been extracted from the meteorite and were assembling into cell-sized membranous vesicles.
Could it be that similar compartments were present when the first liquid water appeared on the Earth over 4 billion years ago?
Maybe if we studied the Murchison meteorite we might know what kinds of molecules made up the membranous boundaries of the first cellular life. But a huge question remained: Where did the stuff come from? For that matter, where does anything come from?
In next week’s column I will describe a remarkable tale of stellar nucleosynthesis, scientific insight, and a missed Nobel prize.
I want to thank Bente Lilja, Patrick, Gerhard, and Michael for their thoughtful comments regarding the previous two columns. I would like to respond individually, but like most academics I must reply to a hundred or so emails during the work day. I am gathering them as we go along and once a month will set aside some column space to answer as best I can. Now, back to my story. |
- Formation of igneous rocks
- Intrusive and extrusive igneous rocks
- Composition of igneous rocks
- Uses of igneous rocks
- Describe how igneous rocks form.
- Describe properties of common igneous rocks.
- Relate some common uses of igneous rocks.
extrusive rock: Igneous rock that forms on Earth’s surface from rapidly cooling lava.
intrusive rock: Igneous rock that forms beneath Earth’s surface from slowly cooling magma.
Introducing the Lesson
Tell students that one of the three major types of rocks is called “fire rock.” Ask them to guess which type of rock it is (igneous), and explain why it has that name (it forms from molten rock). Tell students they will learn more about igneous rock in this lesson.
Building Science Skills
Students can learn about igneous rocks with the hands-on activity at the URL below. In the activity, they will study samples of common igneous rocks (basalt, granite, pumice, and obsidian), model rates of cooling and relate them to crystal formation in the rocks, and identify the environments in which the rock samples formed.
In this igneous rock activity, you can make pancakes to model the formation of igneous rocks—and students get to eat the “rocks” at the end of the activity! From the activity, students will learn to differentiate between intrusive and extrusive igneous rocks by texture. They will also learn why some extrusive rocks have holes and why both intrusive and extrusive rocks may vary in color.
Give visual and kinesthetic learners an opportunity to examine several igneous rock specimens that demonstrate important lesson concepts, such as intrusive and extrusive rocks and igneous rocks with a variety of different textures. Have them relate the features they are seeing and touching to the text in the lesson.
Have a few creative students collaborate on an illustrated poster showing a diversity of uses of igneous rocks. Display the poster in the classroom, and urge other students to examine it.
Students can actively model the formation of intrusive and extrusive igneous rocks with the kinesthetic game at the URL below. Students will learn how rate of cooling affects the type of rock that forms and how to classify igneous rocks based on texture.
Show the class one or more specimens of intrusive and extrusive igneous rocks. The intrusive specimens should have markedly larger crystals than the extrusive specimens. Call on volunteers to describe the features of the two rocks. Ask students to decide which rock formed under the surface and which formed on the surface. Have them explain their reasoning.
A documented misconception held by middle school students is that all rocks are the same, and it’s hard to tell how they originated. Use actual rock specimens or photos to show students how igneous rocks differ from other types of rocks such as sedimentary rocks, and how intrusive and extrusive igneous rocks differ from each other. In both cases, relate the differences in rock features to the ways the rocks formed. For example, relate the coarser texture of intrusive igneous rock to slow cooling below Earth’s surface.
Reinforce and Review
Copy and distribute the lesson worksheets in the CK-12 Earth Science for Middle School Workbook. Ask students to complete the worksheets alone or in pairs to reinforce lesson content.
Lesson Review Questions
Have students answer the Review Questions listed at the end of the lesson in the FlexBook® student edition.
Check students’ mastery of the lesson with Lesson 4.2 Quiz in CK-12 Earth Science for Middle School Quizzes and Tests.
Points to Consider
Do you think igneous rocks could form where you live?
Would all igneous rocks with the same composition have the same name? Explain why they might not.
Could an igneous rock cool at two different rates? What would the crystals in such a rock look like? |
auroral oval - the oval, centered on a magnetic pole, where auroral activity takes place. At any time, the limits of the ring depend on conditions such as the number and energy of electrons in the aurora.
coronal mass ejection (CME) - violent eruption of a large bubble in the Sun's outer corona, which sends huge amounts of ionized gas (particles) into the solar wind. An average CME releases a mass of particles equivalent to a mountain.
geomagnetic substorm - disruption in the Earth's inner magnetosphere caused by the impact of intense solar wind, such as that produced by flares and CMEs. Storms can cause power outages, interfere with radio communications, and pose a health hazard to astronauts working in space.
magnetic poles - two spots, one in the northern hemisphere and one in the southern, situated over the magnetic poles of the Earth's core. A compass needle points to the north magnetic pole. In contrast, the geographic poles are surface points located on the Earth's axis of rotation.
solar flare - a sudden outburst of energy and matter from the Sun. Flares can release more energy than billions of tons of TNT (hydrogen bombs are measured in megatons: millions of tons!) in a matter of seconds or minutes.
sunspots - areas of concentrated magnetic field on the surface of the sun, which appear dark in visible light images. These regions are really the "footprints" of magnetic field lines that can erupt into flares or coronal mass ejections. |
As the key component in aluminum production, bauxite became one of the most important minerals of the last one hundred years. But around the world its effects on people and economies varied broadly – for some it meant jobs, progress, or a political advantage over rival nations, but for many others, it meant exploitation, pollution, or the destruction of a way of life.
Aluminum Ore explores the often overlooked history of bauxite in the twentieth century, and in doing so examines the social, political, and economic forces that shaped the time. Bauxite extraction became a strategic industry during the First World War, and then the subject of an international struggle for dominance during the Second World War. Yet in the post-war years it was globalization, not military conquest, that expanded global value chains. The extraction of bauxite – a mineral found mostly in the developing world – was made profitable by the growth of multinational corporations and the spread of globalization, leaving behind a troubled cultural and environmental legacy.
In this wide-ranging collection, scholars from around the world consider multiple perspectives on this history – from Guinea to Nazi Germany to Jamaica – all while examining the central place of one commodity in a time of change.
Aluminum Ore will appeal to specialists in the areas of resource extraction, globalization, economic and political history, as well as general readers with an interest in resource development in the twentieth century.
Introduction: Opening Pandora’s Bauxite: A Raw Materials Perspective on Globalization Processes in the Twentieth Century / Mats Ingulstad, Espen Storli, and Robin S. Gendron
1 The Global Race for Bauxite, 1900-40 / Espen Storli
2 “Of the Highest Imperial Importance”: British Strategic Priorities and the Politics of Colonial Bauxite, ca. 1916–ca. 1958 / Andrew Perchard
3 Nazi Germany’s Pursuit of Bauxite and Alumina / Hans Otto Frøland
4 National Security Business? The United States and the Creation of the Jamaican Bauxite Industry / Mats Ingulstad
5 The Soviet Union’s “Bauxite Problem” / Stephen Fortescue
6 “Greece Has Been Endowed by Nature with This Precious Material”: The Economic History of Bauxite in the European Periphery, 1920s-70s / Leda Papastefanaki
7 The Volta River Project and Decolonization, 1945-57: The Rise and Fall of an Integrated Aluminum Project / Jon Olav Hove
8 Canada and the Nationalization of Alcan’s Bauxite Operations in Guinea and Guyana / Robin S. Gendron
9 Transnational Restructuring and the Jamaican Bauxite Industry: The Swinging Pendulum of Bargaining Power / Lou Anne Barclay and Norman Girvan
10 Issues of Governance, Liberalization, Policy Space, and the Challenges of Development: Reflections from the Guinean Bauxite-Aluminum Sector / Bonnie Campbell
11 White Metal: Bauxite, Labour, and the Land under Alcan in Twentieth-Century Guyana, Jamaica, and Australia / Bradley Cross
12 Battles over Bauxite in East India: The Khondalite Mountains of Khondistan / Samarendra Das and Felix Padel
13 Success without Bauxite: Norsk Hydro’s Long Wait to Achieve Backward Integration / Pål Thonstad Sandvik
[Image: Rainforest vegetation on the Caribbean island of Dominica. NBII Digital Image Library - Randolph Femmer, photographer]
Tropical rainforests are home to thousands of species of animals, plants, fungi and microbes. Scientists suspect that many species living in rainforests have not yet been found or described.
There are areas of rainforests where plants are densely packed. Where sunlight can reach the forest floor, interesting plants flourish. In other areas, a canopy made from the branches and leaves of tall trees shades the ground below, preventing smaller plants from growing.
Rainforests get their name because they receive a lot of rain - an average of 80 inches (203 cm) a year! Rainforests are found at and near the equator, where it is always warm and muggy. The temperature doesn't change very much during the year.
For professionals whose job it is to evaluate infrastructure, it’s clear that the country’s vast system of roads and bridges is in urgent need of repair. In 2007, officials at the Federal Highway Administration rated 25 percent of US bridges “structurally deficient or functionally obsolete.” And just this year, the American Society of Civil Engineers released its annual infrastructure report card, giving the overall state of bridges a “C” and roads a “D-“.
The majority of these structures are made of concrete, many erected in the 1940s and 50s. Today, these bridges and roadways are crumbling into disrepair, partly due to age and partly because of winter de-icing. While road salt melts ice from surfaces, it can also work its way into the many micropores in concrete, thawing the water molecules within. This rapid thawing can cause the concrete to expand and crack from within, taking years off its service life.
Now engineers at the National Institute of Standards and Technology (NIST) have developed and patented a new technique, called VERDiCT (Viscosity Enhancers Reducing Diffusion in Concrete Technology), that could potentially double the lifespan of a piece of concrete. By mixing a nano-sized additive with cement, they devised a method that slows the infiltration of road salt. They reasoned that the longer it takes for deteriorating agents to penetrate, the longer concrete will last without cracking.
In conventional concrete manufacturing, dry cement–typically consisting of limestone, clay, and other minerals–is mixed with water to make a paste and combined with aggregates, such as rocks or sand. As it dries, the paste glues the aggregates together into a concrete slab. Recently there have been efforts to create stronger, high-performance concrete, mainly by increasing the material’s density. To do this, researchers either add various strengthening chemicals or grind the dry materials used to make cement so that they are even finer than those found in conventional mixes. Once combined with water, the paste and resulting slab is much denser and stronger than traditional concrete.
However, scientists have found a major downside to such high-performance alternatives. “In fast-track construction, everyone is going for early-strength concrete because they want to get traffic back up and running,” says Dale Bentz, a chemical engineer at NIST and lead investigator on the project. “To get that strength, you might grind concrete finer [to make it] more reactive, but that also generates more heat, and when it cools down and contracts, it could cause cracking. So you get high-performance concrete between the cracks, which is not what you want.”
Bentz and his colleagues took a nano-scale approach to improving concrete instead. They recognized that within concrete there are millions of tiny micropores filled with water molecules. It is known that chloride and sulfate ions from road salt penetrate concrete by diffusing into this water solution, so they hypothesized that increasing the viscosity of the solution within these micropores might slow the penetration of road salt and other deteriorating agents, and extend concrete’s lifespan.
“If these ions are floating around, if they’re moving through honey instead of water, they’ll be significantly slowed down,” says Bentz. “The trick is to find the right chemical that will change the viscosity of the solution.”
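The "honey instead of water" intuition Bentz describes can be sketched with the Stokes-Einstein relation, D = kT / (6πηr), in which the diffusion coefficient of a dissolved ion scales inversely with the viscosity of the solution. The ionic radius and viscosity values below are illustrative assumptions, not measurements from the NIST study:

```python
import math

def diffusion_coefficient(T_kelvin, viscosity_pa_s, ion_radius_m):
    """Stokes-Einstein estimate: D = kT / (6 * pi * eta * r)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T_kelvin / (6 * math.pi * viscosity_pa_s * ion_radius_m)

# Illustrative values: a chloride-sized ion (~0.12 nm) in pore water at 25 C.
r_cl = 1.2e-10                  # m (assumed ionic scale)
eta_water = 8.9e-4              # Pa*s, plain water
eta_thickened = 5 * eta_water   # assume the additive raises viscosity 5x

D_plain = diffusion_coefficient(298.15, eta_water, r_cl)
D_thick = diffusion_coefficient(298.15, eta_thickened, r_cl)

# Because D varies as 1/eta, quintupling viscosity cuts diffusion fivefold:
print(D_plain / D_thick)  # ~ 5.0
```

The model is idealized (it treats the pore solution as a simple fluid and the ion as a sphere), but it captures why a more viscous pore solution should slow salt penetration.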
The researchers took a cue from the food industry, which uses additives as thickeners in everything from salad dressings to carbonated drinks. Bentz searched for similar additives that would both increase the viscosity of the water solution found in concrete and slow ion diffusion; he even tried using food thickeners, including xanthan gum, which is used in sauces and ice cream.
After screening multiple additives in water solution in order to model the behavior of ions in concrete, the team found that those with a smaller molecular size were more successful at slowing the rate of ion diffusion. Additives that occur in small molecular chains, with branches of hydrogen and oxygen, were particularly good at increasing a solution’s viscosity. Bentz says this might be due to the fact that such hydrogen and oxygen branches can interact with water molecules to form a barrier against infiltrating ions, making it harder for them to penetrate.
The team also tested various additives within small cylinders of cement mortars–essentially, concrete without the aggregates. Bentz mixed the additives with cement, let the mortars dry, and placed each mortar into a chloride solution for up to one year. After removing the mortars from the solution, he and his team broke apart each mortar and analyzed how far chloride ions were able to penetrate. Compared with mortars without any additives, those with additives showed significant reduction in chloride diffusion.
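Penetration profiles from ponding tests like this are commonly analyzed with the error-function solution of Fick's second law, C(x, t) = C_s · erfc(x / 2√(Dt)). The diffusion coefficients and exposure time below are generic, assumed values for illustration, not the NIST team's data:

```python
import math

def chloride_profile(x_m, t_s, D_m2_per_s, surface_conc=1.0):
    """Constant-surface-concentration solution of Fick's second law:
    C(x, t) = C_s * erfc(x / (2 * sqrt(D * t)))."""
    return surface_conc * math.erfc(x_m / (2.0 * math.sqrt(D_m2_per_s * t_s)))

one_year = 365 * 24 * 3600  # seconds

# Assumed effective diffusion coefficients (m^2/s):
D_plain = 1e-11     # ordinary mortar
D_slowed = 2.5e-12  # additive-modified mortar, a 4x reduction for illustration

# Relative chloride concentration 10 mm into the mortar after one year:
c_plain = chloride_profile(0.010, one_year, D_plain)
c_slowed = chloride_profile(0.010, one_year, D_slowed)
print(c_plain > c_slowed)  # slower diffusion leaves less chloride at depth
```

Because depth of penetration grows roughly as √(Dt), cutting D by a factor of four roughly halves how far the chloride front advances in a given time.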
However, the technique may not be quite ready for industry-scale application, mainly due to potential costs. Bentz says to get such results he had to make the additive as much as 10 percent of the cement solution. “The industry is comfortable with one percent, so there’s a cost factor, in that it’ll cost 10 percent more,” says Bentz. “We’ve demonstrated proof of concept, and now we would like to find an additive that works at 3 to 5 percent concentration.”
Jason Weiss, professor of civil engineering at Purdue University, works on improving concrete mixtures and increasing the material’s long-term performance. He says that such a technique may one day make bridges and roads less susceptible to corrosion. “This has an enormous potential,” says Weiss. “This would imply that a bridge that could last 30 years would now last 40 to 45 years under the same type of chemical attack.” |
From Earliest Times ::
What we know about early societies can be inferred only from those objects that have survived and have been recovered and identified. We cannot converse with early man and only rarely can we read what he or she had to say about him- or herself, for so much 'so-called' early literature was in fact written down many centuries after the event, an echo of an oral tradition.

How we choose to infer from evidence will depend on the overarching view we hold about the way societies developed. There are two major theories of human development. The first, we might call the ecological view, says that people adapt rationally by coming up with similar solutions and responses when they find themselves facing similar environmental situations, implying maybe that the human mind is 'hard-wired' to think in particular ways. The second, a form of cultural relativism, says that the human mind is capable of a wide range of different responses, and that each society develops in a distinct, idiosyncratic way according to the accidents of a particular cultural and historical tradition.

We do know, when examining early civilizations geographically isolated from one another, that each appeared to reach a certain level of development in a particular way. Despite superficial similarities, the great civilizations of South America developed languages, forms of writing, architecture and technologies distinct from those of early China, the Indus Valley or Egypt. Only where there was some form of cross-fertilization, as between Mesopotamia (the land between two rivers) and Egypt, would similarities appear.

But their beliefs show that they shared certain traits. They all believed that supernatural forces animated and sustained the universe and that forms of sacrifice played a central role in the relationship between those forces and themselves.
Their societies became stratified as farmers, artisans, priests, soldiers and kings defined their roles and their responsibilities to the gods, all on the basis of shared beliefs about how the universe worked. Another feature common to almost every civilization was the use they made of music for solace, celebration and entertainment.
[see also: Is Music What We Are?]
Brief Timeline of History (10,000 BC to 1900 AD)
The earliest known flute, discovered in Slovenia in south-east Europe, is 12 centimeters (5 inches) long and was made by Neanderthal humans 45,000 years ago. The instrument was made from the leg bone of a bear, and its original four fingerholes are intact. Its lowest note was identified as a B flat or A, although beyond that the instrument is unplayable. The flute was found in a cave near the town of Nova Gorica, 65 kilometers (40 miles) west of Slovenia's capital, Ljubljana. There is some debate whether this is really a flute, and we offer below some links including those that cover the debate.
German archeologists have discovered a 35,000-year-old ivory flute in a cave in the hills of southern Germany, the University of Tübingen announced Friday. The instrument, among the world's oldest and made from a woolly mammoth's ivory tusk, was assembled from 31 pieces that were found in the cave in the Swabian Jura mountains, where ivory figurines, ornaments and other musical instruments have been found in recent years. According to archeologists, humans used the area for camps in the winter and spring. The university plans to put the instrument on display in a museum in Stuttgart, according to reports.
A bird-bone flute unearthed in a German cave was carved some 35,000 years ago and is the oldest handcrafted musical instrument yet discovered, archaeologists say, offering the latest evidence that early modern humans in Europe had established a complex and creative culture. A team led by University of Tuebingen archaeologist Nicholas Conard assembled the flute from 12 pieces of griffon vulture bone scattered in a small plot of the Hohle Fels cave in southern Germany. Together, the pieces comprise a 8.6-inch (22-centimeter) instrument with five holes and a notched end. Conard said the flute was 35,000 years old.
Evidence that “…many of these items [musical instruments] were discovered in the Neander Valley of Germany where the very first Neanderthal fossil was discovered in 1856. A tuba made from a mastodon tusk, what looks like a bagpipe made from an animal bladder, a triangle and a xylophone made from hollowed out bone.”, published by Discovery magazine, was actually an April Fool hoax.
On September 22, 1999, Reuters reported the discovery of the world's oldest playable flute in China. Made about 9,000 years ago and in pristine condition, the 8.6 inch instrument has seven holes and was made from a hollow bone of a bird, the red-crowned crane. It is one of six flutes and 30 fragments recovered from the Jiahu Neolithic archaeological site in Henan province. Garman Harbottle, of the Brookhaven National Laboratory in Upton, New York, said, in a telephone interview, "They are the oldest playable musical instruments". In addition to suggesting that the early Chinese were accomplished musicians and craftspeople, the Jiahu site reveals that the Chinese in Jiahu had already established a village life. They had parts of the city, or village, that were devoted to different functions. Some of the other flutes, which have between five and eight holes, could also be played.
A rectangular stone musical instrument, confirmed to be a type of percussion instrument used in ancient China, was recently unearthed at the site of Qijia Culture in Qinghai Province, northwest China. Archaeologists from the Chinese Academy of Social Sciences (CASS) and the Qinghai Provincial Archaeological Research Institute said that this is the first such instrument ever found in the history of Chinese archaeology. They said that the discovery may reverse the traditional theory that ancient percussion instruments were triangular-shaped or square. The Qijia Culture flourished in the transitional period from the Neolithic Age to the Bronze Age, some 3,500 to 4,000 years ago. Wang Renxiang, a researcher from the CASS, discovered the relic at the home of a farmer who lives in the Lajia village, where the ruins of the Qijia Culture are located. The finely-cut and well-polished instrument, 96 cm long and 66 cm wide, is dark blue and still produces a loud, clear sound. A number of jade articles used in primitive religious rituals were found at the site, as well as a city moat which, experts said, is dozens of metres wide and five metres deep.
[taken from: 4000 year-old percussion instrument unearthed]
Tunes were rung on handbells in China over 5000 years ago, although western civilisation has long associated the sound of larger bells with the Christian Church. Many religions around the world make use of bells in their worship. Bells of many shapes and sizes have been used to ring out glad tidings, toll for the departed and to call the faithful to worship. The British Isles have long been known as the "Ringing Isles" and in the eighteenth century, the composer Handel cited the bell as the English National Instrument. Tower bell ringers started the art of "ringing the changes" as long ago as the sixteenth century. This change ringing, practised in the frequently cold belfry, brought about a suggestion, according to some history books, "Why don't you create some small bells which you can hold in your hand and take to the local inn to practise in warmth and comfort?"
Oldest known bell found near Babylon.
The first recorded use of handbells in China.
'Campaniles' or 'bell towers' are built throughout Europe. Campaniles were used to ring warnings of Battles, important civic events, and of course, Church services.
Bells are made from a mixture of alloys creating the familiar bronze bell which exists today. For the first time, bells can be created to have a specific tone or pitch.
Construction begins on the campanile in the Italian city of Pisa. Unfortunately, it is built on unstable ground and is unsuitable as a bell tower.
The art of change ringing begins: the creation of music based on ever-changing sequential patterns of sound on multiple bells.
The first handbells are created to train change ringers.
Handbell ringers experiment with new forms of music.
P.T. Barnum hires 'Swiss Bell Ringers' to be part of his circus.
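The "ever-changing sequential patterns" of change ringing are permutations of the bells, generated by swapping adjacent pairs. A minimal sketch of the simplest method, plain hunt, is given below as an illustration; it is not drawn from any source quoted in this article:

```python
def plain_hunt(n_bells):
    """Generate one full cycle of 'plain hunt' changes: alternately swap
    all adjacent pairs starting from the first position, then from the
    second, until the row returns to rounds (1, 2, ..., n)."""
    row = list(range(1, n_bells + 1))
    rounds = row[:]
    rows = [row[:]]
    start = 0
    while True:
        for i in range(start, n_bells - 1, 2):
            row[i], row[i + 1] = row[i + 1], row[i]
        rows.append(row[:])
        start = 1 - start  # alternate which pairs swap
        if row == rounds:
            return rows

# On four bells, plain hunt returns to rounds after 8 changes:
for r in plain_hunt(4):
    print(r)
```

Each printed row is one "change"; on n bells, plain hunt visits 2n distinct rows before returning to rounds.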
The earliest historical records relating to bronze drums appeared in the Shi Ben, a Chinese book dating from at least the third century BC. This book is no longer extant; however a small portion of it appears in another classic, the Tongdian by Du You. The Hou Han Shu, a Chinese chronicle of the late Han period compiled in the fifth century AD, describes how the Han dynasty general, Ma Yuan (14 BC-49 AD), collected bronze drums from Jiaozhi (northern Vietnam) to melt down and then recast into bronze horses. From that point on, many official and unofficial Chinese historical records contain references to bronze drums. In Vietnam, two fourteenth-century literary works written in Chinese by Vietnamese scholars, the Viet Dien U Linh and the Linh Nam Chich Quai, record many legends about bronze drums. Later works such as the Dai Viet Su Ky Toan Thu, a historical work written in the fifteenth century, and the Dai Nam Nhat Thong Chi, a book about the historical geography of Vietnam compiled in the late nineteenth century, also mention bronze drums. Additionally, a wooden tablet found in Vietnam dating from the early nineteenth century describes the discovery of some bronze drums.
[taken from: The Present Echoes of the Ancient Bronze Drum: Nationalism and Archeology in Modern Vietnam and China by Han Xiaorong]
Chinese music is as old as Chinese civilization. Instruments excavated from sites of the Shang dynasty (circa 1766-c. 1027 BC) include stone chimes, bronze bells, panpipes, and the sheng. The sheng (also written cheng), or Chinese mouth organ, consists of a set of pipes arranged in a hollow gourd and sounded by means of free reeds, the air being fed to the pipes in the reservoir by the mouth through a pipe shaped like the spout of a tea-pot.
Music flourished during the Shang dynasty (1523-1027 BC) but the intellectual foundation of the Chinese musical system grew out of early advances in Chinese philosophy and mathematics. For the Chinese the understanding of the meaning of existence, a quest central to cultures from all ages and places, focused on (1) inter-personal communication and its contribution to society as a whole, and (2) the human position in the cosmos. Their cosmic view, based on a universal resonance and harmony, informed the Chinese system of harmonic intervals. Acoustical systems were seen to mirror the physical universe; the study of one led to a deeper understanding of the other. The Chinese were among the first to consider tuning systems and temperament, using acoustical physics and mathematics. Chinese musicians using silk strings were the first to employ scales based on equal temperament.
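Equal temperament, mentioned above, divides the octave into twelve equal frequency ratios, so each semitone multiplies the pitch by 2^(1/12). A minimal sketch of the arithmetic (the 440 Hz reference pitch is a modern Western convention, used here only for illustration):

```python
A4 = 440.0  # Hz, modern reference pitch (an assumption for illustration)

def equal_tempered(semitones_from_a4):
    """Pitch n equal-tempered semitones above (or below) A4:
    f = 440 * 2**(n/12)."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# Twelve semitones up is exactly one octave, a 2:1 frequency ratio:
print(equal_tempered(12) / A4)  # -> 2.0

# An equal-tempered fifth (7 semitones) is slightly flatter than the
# pure 3:2 ratio used in older tuning systems:
print(equal_tempered(7) / A4)   # ~1.4983, vs. 1.5 exactly
```

The appeal of the system is that every key sounds equally in tune, at the cost of making every interval except the octave slightly impure.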
In the Zhou dynasty (circa 1027-256 BC) music was one of the four subjects that the sons of noblemen and princes were required to study, and the office of music at one time comprised more than 1400 people. Although much of the repertoire has been lost, some old Chinese ritual music (yayue) is preserved in manuscripts.
- In Chinese music theory, which dates back to the fifth century BC but would later influence the theory of music in Japan, the five notes of the musical scale (called a pentatonic scale) were intimately related to all the other 'fives' based on the five material agents: the directions, the seasons, organs, animals, etc. The five material agents were a sophisticated theory of change: all change, including musical change, was governed by the relationship of the five material agents either as they engendered one another or conquered one another. These two possible relationships, the sequences of the five material agents as they either engender or conquer one another, in part governed the sequence of notes in the scale.
The five notes: chiao (3rd note), cheng (4th note), kung (1st note), shang (2nd note), yü (5th note).
In addition, the five material agents were collapsed in a larger notion of yang and yin, the male (creation) and female (completion) principles of change in the universe. Likewise, the pentatonic scale was divided into a male scale and a female scale, or ryo and ritsu in Japanese. The most important note in the pentatonic scale is the third note of the scale, called the 'cornerstone'. Corresponding with the five material agents, the "cornerstone" is related to the 'Wood agent' and therefore also to 'Spring', to the 'East' (or beginnings), and to jen, or 'benevolence, humaneness' (the most important of the virtues). While in the West we define tonal scales based on the first note of the scale (called the 'tonic'), in Chinese and Japanese music the scale is defined by the 'cornerstone', or third note. If the relationship between the first note of the scale (kung, which corresponds to the 'Earth agent' and the 'centre') and the 'cornerstone' forms a major third (if you play middle C and E on a piano, you're playing a major third), the scale is male; if these two notes form a perfect fourth (middle C and F on a piano), the scale is female.
Chinese and Japanese musical theories were based on the eight categories of sound (called, in Chinese, pa yin): metal (bells), stone (stone chimes), earth (ocarina), leather (drums), silk (stringed instruments), wood (double-reed wind instruments), gourd (sho, or mouth organ), and bamboo (flute).
[taken from: Early Japanese Music by Richard Hooker]
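The male/female (ryo/ritsu) classification described above, keyed to the interval between the first note (kung) and the 'cornerstone', can be sketched in code. The semitone arithmetic is standard Western interval counting, used here purely to illustrate the rule:

```python
def classify_scale(kung_semitone, cornerstone_semitone):
    """Classify a pentatonic scale by the interval from kung (1st note)
    to the 'cornerstone' (3rd note): a major third (4 semitones) gives a
    male (ryo) scale; a perfect fourth (5 semitones) gives a female
    (ritsu) scale."""
    interval = (cornerstone_semitone - kung_semitone) % 12
    if interval == 4:
        return "ryo (male)"
    if interval == 5:
        return "ritsu (female)"
    return "neither"

# C (0) up to E (4) is a major third -> male scale:
print(classify_scale(0, 4))  # -> ryo (male)
# C (0) up to F (5) is a perfect fourth -> female scale:
print(classify_scale(0, 5))  # -> ritsu (female)
```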
During the Qin dynasty (221-206 BC) music was denounced as a wasteful pastime; almost all musical books, instruments, and manuscripts were ordered to be destroyed. Despite this severe setback Chinese music experienced a renaissance during the Han dynasty (206 BC-AD 220), when a special bureau of music was established to take charge of ceremonial music. During the reign (AD 58-75) of Ming-Ti, the Han palace had three orchestras formed from 829 performers. One orchestra was used for religious ceremonies, another for royal archery contests and the third for entertaining the royal banquets and the harem.
The tolerance of the T'ang Imperial Court to outside influence and the free movement along the East-West trade route known as the Silk Road, saw major urban centres become thriving cosmopolitan cities with the Chinese capital, Chang'an (modern Xian) expanding to reach a population of over one million. During the T'ang dynasty (618-906) Chinese secular music (suyue) reached its peak. Emperor T'ai-Tsung had ten different orchestras, eight of which were made up of members of various foreign tribes; all the royal performers and dancers appeared in their native costumes. The imperial court also had a huge outdoor band of nearly 1400 performers. Musicians from the West were regular features in the major cities and introduced new instruments and music styles. The T'ang emperor Xuanzong (712–755) was a great lover of the new Western music that was played regularly at court along with traditional Chinese music and instruments such as bells and zithers.
By the late T'ang and the Song period (960-1279 AD) the cosmic philosophical viewpoint had disintegrated. The Song period, in particular, spelled disaster for ancient musical aesthetics as China experienced serious political retreat.
Chinese cultural achievements from these early periods entered the cultural life of those countries bordering or engaged in contact with China, i.e. Korea, Japan, and Southeast Asia. The origins of Japanese music begin at around 3000 BC during the Jomon Culture, when, according to evidence found at archeological digs, music was first used in ceremonies. The primitive instruments included stone whistles, bronze bells, barrel drums, zithers, and crotal bells. The first music that was apparently a direct result of migration from China and Korea was passed down from the ancient Ainu of Japan and became the dominant secular musical style of ancient Japan: gigaku, or Kure-gaku. This style is associated with the popular dances and pantomimes of southern China and northern Indochina. It was, as far as we know, the most popular 'official' music in late sixth-century Japan.
Both Togaku and To-sangaku were musical styles derived from T'ang China. The musical life of the T'ang court obeyed a formal set of rules, called the Ten Styles of Music, which governed the hierarchy and use of Chinese and foreign musical styles in the T'ang court. When musical performances followed these academic rules and types, the music was known as Togaku, or T'ang music. When, however, the music consisted of popular music from T'ang China, it was classified as To-sangaku, or unofficial T'ang music. Sangaku was the most popular and exciting of these early music types, in which songs were interspersed between acrobatics and energetic pantomimes.
Finally, Koma-gaku was the music of the three Korean kingdoms and Rinyu-gaku was the music of Southern Asia. The latter always involved dances and pantomimes.
The Indus Valley civilization, the largest of the four ancient civilizations, flourished around 2,500 B.C. in the western part of South Asia, in what today is Pakistan and western India. It is often referred to as Harappan civilization after Harappa, the first city discovered in the 1920's. Most of the civilization's ruins, including other major cities, remain to be excavated. Its script has not been deciphered and basic questions about the people who created this highly complex culture are unanswered. However, there is some evidence that it had a musical tradition some of which survives to this day amongst the Dravid people of southern India and Sri Lanka.
Ahmad Hassan Dani comments: "There is one particular aspect which does survive, not only in South India, but also in Sri Lanka. This came to my mind when the year before last I was in Sri Lanka at the time of their general election and they had a music performance. In the music performance they were having the dance, and with their drum or dholak, and it at once reminded me of my early life, for I was born in Central India, and I had seen this kind of dance. Not with tabla, tabla is a later comer in our country. It at once reminded me that we have got this dholak in the Indus Valley Civilization. I don't know about the dance, but at least the dholak we know. We have not stringed instruments in Indus Valley Civilization. We have got the flute, we have got cymbals, we have got the dholak. Exactly the same musical instruments are played today in Sri Lanka and South India. So I would like to correct myself: to say that nothing is surviving in South India [is wrong]; this is the only instrument which is surviving there according to me from the Indus Civilization."
The Uruk Lute: Elements of Metrology by Richard Dumbrill
"Earlier this year I examined a cylinder seal acquired by Dr Dominique Collon on behalf of the British Museum. The piece is now listed as BM WA 1996-10-2,1, and depicts, among others, the figure of a crouched female lutanist. The seal, which I shall not discuss here, has been identified by Dr Collon as an Uruk example and thus predates the previously oldest known iconographic representations by about 800 years. Little can be said about the instrument except that it would have measured about 80 centimetres long and that some protuberances at the top of its neck might be the representation of some device for the tuning of its strings. Otherwise, the angle of the neck in its playing position, as well as the position of the musician's arms and hands, is consistent with one of the aforementioned Akkadian seals, namely BM 89096. This shows that the instrument evolved very little for the best part of one millennium, for the probable reason that it had already completed its development, as early as the Uruk period.
The existence of the lute among the instrumentarium of the late fourth millennium [BC] is of paramount importance as it is consequential to the understanding and usage of ratios at that period. I am further willing to hypothesize that the lute might have been at the origins of the proportional system. This is what I shall now demonstrate.
The lute differs from the two other types of stringed instruments, namely harps and lyres, in that each of its strings produces more than one sound. This peculiarity qualifies the lute as a fretted instrument, not on the basis that it is provided with frets as we know them on the modern guitar, for instance, but in that each of the different notes generated from each of its strings is determined by accurate positions marked on the neck of the instrument. These are defined from the principle of ratios, and it is the principle of the stopping of the strings along the neck of the instrument that was at the origins of the understanding of such ratios."
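The ratios Dumbrill has in mind are simple length ratios: stopping a string so that only part of it sounds raises the pitch by a fixed interval determined by the fraction of the length left vibrating. A short sketch, using the roughly 80 cm length Dumbrill estimates for the Uruk instrument (the particular set of intervals chosen here is an illustration, not a reconstruction of any ancient tuning):

```python
# Stop positions on a lute string from simple length ratios.  Sounding
# 2/3 of the string raises the pitch a perfect fifth, 3/4 a perfect
# fourth, 8/9 a whole tone, 1/2 an octave.  The 80 cm open-string
# length follows Dumbrill's estimate; the interval list is illustrative.

from fractions import Fraction

OPEN_STRING_CM = 80

INTERVALS = {
    "whole tone":     Fraction(8, 9),
    "perfect fourth": Fraction(3, 4),
    "perfect fifth":  Fraction(2, 3),
    "octave":         Fraction(1, 2),
}

for name, ratio in INTERVALS.items():
    sounding = float(OPEN_STRING_CM * ratio)
    print(f"{name:15s}: stop the string so {sounding:.1f} cm sounds")
```

The point of the passage follows directly: marking these stopping positions on the neck requires measuring proportions of a length, which is why the lute is a plausible origin for a proportional (ratio-based) system.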
Note: Uruk or Erech was an ancient Sumerian city of Mesopotamia, on the Euphrates and NW of Ur (in present-day southern Iraq). It is the modern Tall al Warka. Uruk, dating from the 5th millennium BC, was the largest city in southern Mesopotamia and an important religious center. The sanctuaries of the goddess Inanna (who corresponds to the Babylonian Ishtar and is also called Nana or Eanna) and of Anu, the sky god, date from the early 4th millennium BC. The temple of Anu, known as the White Temple, stood on a terrace and seems to have been a primitive form of ziggurat. Uruk was the home of Gilgamesh, its legendary king, and is mentioned in the Bible (Gen. 10.10). There have been excavations at the site since 1912.
Iconographic evidence from about 3000 BC indicates that double-reed wind instruments were in use in Mesopotamia. The Gold Lyre of Ur, c. 2650 BC, was one of a number of musical instruments discovered in royal burial sites which help illustrate the prominent role music played in Sumerian life and religion. Musicians and their instruments appear frequently in the artwork and archeological artifacts of Iraq's deep antiquity.
While the exact music from ancient Mesopotamia can never be recovered, Iraq has produced intriguing written evidence supporting the existence of sophisticated music theory and practice in Sumerian, Babylonian and Akkadian cultures. A family of musical texts inscribed in cuneiform tablets reveals a wealth of musical information about specific tuning modes, string names and hymns. These written documents demonstrate that musical activity was being recorded a thousand years prior to the rise of ancient Greek civilization, a culture commonly credited with the earliest development of musical documents.
[taken from: The Sumerian Gold Lyre by Douglas Irvine]
The religion of the ancient Sumerians has left its mark on the entire Middle East. Not only are its temples and ziggurats scattered about the region, but the literature, cosmogony and rituals influenced their neighbours to such an extent that we can see echoes of Sumer in the Judeo-Christian-Islamic tradition today. From these ancient temples, and to a greater extent through cuneiform writings of hymns, myths, lamentations, and incantations, archaeologists and mythographers afford the modern reader a glimpse into the religious world of the Sumerians. Each city housed a temple that was the seat of a major god in the Sumerian pantheon, as the gods controlled the powerful forces which often dictated a human's fate. The city leaders had a duty to please the town's patron deity, not only for the good will of that god or goddess, but also for the good will of the other deities in the council of gods. The priesthood initially held this role, and even after secular kings ascended to power, the clergy still held great authority through the interpretation of omens and dreams. Many of the secular kings claimed divine right; Sargon of Agade, for example, claimed to have been chosen by Ishtar/Inanna.

The rectangular central shrine of the temple, known as a 'cella', had a brick altar or offering table in front of a statue of the temple's deity. The cella was lined on its long ends by many rooms for priests and priestesses. These mud-brick buildings were decorated with geometric cone mosaics and the occasional fresco with human and animal figures. These temple complexes eventually evolved into towering ziggurats. The temple was staffed by priests, priestesses, musicians, singers, castrates and hierodules. Various public rituals, food sacrifices, and libations took place there on a daily basis. There were monthly feasts and annual New Year celebrations.
During the latter, the king would be married to Inanna as the resurrected fertility god Dumuzi.
[taken from: Sumerian Mythology - FAQ]
The history of musical development in Iran [Persia] dates back to the prehistoric era. The great legendary king, Jamshid, is credited with the invention of music. Fragmentary documents from various periods of the country's history establish that the ancient Persians possessed an elaborate musical culture. The Sassanian period (AD 226-651), in particular, has left us ample evidence pointing to the existence of a lively musical life in Persia. The names of some important musicians such as Barbod, Nakissa and Ramtin, and the titles of some of their works, have survived. With the advent of Islam in the seventh century AD, Persian music, as well as other Persian cultural traits, became the main formative element in what has ever since been known as "Islamic civilization". Persian musicians and musicologists overwhelmingly dominated the musical life of the Eastern Moslem Empire. Farabi (d. 950), Ebne Sina (d. 1037), Razi (d. 1209), Ormavi (d. 1294), Shirazi (d. 1310), and Maraqi (d. 1432) are but a few among the array of outstanding Persian musical scholars in the early Islamic period. In the sixteenth century, a new "golden age" of Persian civilization dawned under the rule of the Safavid dynasty (1499-1746).
[taken from: An Introduction to Persian Music by Professor Hormoz Farhat]
Ugarit, not far from modern-day Beirut, flourished from the fourteenth century BC until 1200 BC, when it was destroyed. Its language is similar to Phoenician. The city was rediscovered in 1928 by a peasant whose plow uncovered an ancient tomb near Ras Shamrah in northern Syria. A group of French archaeologists led by Claude F.A. Schaeffer started excavating the city in 1929.
Of particular importance to music history was the discovery of a terracotta tablet which includes a musical staff and which has been dated to about 1400 BC and is now housed at the National Museum, Damascus. This, the oldest known musical staff, is written on the lower part of the tablet below the double line, while the words to a hymn referring to the gods appear on the upper part. This is therefore a complete text, with both words and music.
The musicologist Marcelle Duchesne-Guillemin was one of the early investigators of the reconstruction of ancient Babylonian musical scales and music theory. She was the first scholar to explore and explain the musicological significance of the sequence of number-pairs of musical strings in a cuneiform text of the first millennium BC excavated at the archaeological site of Nippur in southern Iraq. She was able to demonstrate that the tablet presented two series of intervals on a musical scale; that musical intervals of fifths, fourths, thirds, and sixths were known at that time; and that the evidence for an ancient Mesopotamian heptatonic-diatonic scale was strong. She was also one of the few scholars who attempted to interpret the musical instructions found on a cuneiform tablet (mid-2nd millennium BC) from ancient Ugarit in Syria, which contained a nearly complete hymn written in the Hurrian language but whose musical instructions were in Semitic Akkadian.
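The logic behind reading number-pairs of strings as intervals can be sketched simply: if each string sounds one step of a heptatonic-diatonic scale, the distance between two string numbers names the interval. The string numbering and the mapping to a modern major-scale semitone pattern below are illustrative assumptions for the sake of the demonstration, not the actual Mesopotamian tunings, which are known only from the cuneiform texts themselves:

```python
# Name the interval spanned by a pair of strings, assuming each string
# sounds one step of a heptatonic-diatonic scale.  The modern
# major-scale semitone pattern is an illustrative stand-in.

DIATONIC = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets within one octave

NAMES = {1: "unison", 2: "second", 3: "third", 4: "fourth",
         5: "fifth", 6: "sixth", 7: "seventh", 8: "octave"}
QUALITY = {(1, 0): "perfect", (2, 1): "minor", (2, 2): "major",
           (3, 3): "minor", (3, 4): "major", (4, 5): "perfect",
           (5, 7): "perfect", (6, 8): "minor", (6, 9): "major",
           (7, 10): "minor", (7, 11): "major", (8, 12): "perfect"}

def semitones(string):
    """Pitch of a string numbered 1..8, in semitones above string 1."""
    step = string - 1
    return DIATONIC[step % 7] + 12 * (step // 7)

def interval_of(a, b):
    """Interval between two strings, e.g. strings 1 and 5 span a fifth."""
    lo, hi = sorted((a, b))
    degree = hi - lo + 1
    semis = semitones(hi) - semitones(lo)
    return f"{QUALITY[(degree, semis)]} {NAMES[degree]}"

print(interval_of(1, 5))   # a fifth
print(interval_of(2, 5))   # a fourth
print(interval_of(1, 3))   # a third
print(interval_of(3, 8))   # a sixth
```

This is exactly the family of intervals (fifths, fourths, thirds, sixths) that Duchesne-Guillemin identified in the Nippur text.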
Trumpets in the Bible were of a great variety of forms, and were made of various materials. Some were made of silver (Num. 10:2), and were used only by the priests in announcing the approach of festivals and in giving signals of war. Some were also made of rams' horns (Josh. 6:8). They were blown at special festivals, and to herald the arrival of special seasons (Lev. 23:24; 25:9; 1 Chr. 15:24; 2 Chr. 29:27; Ps. 81:3; 98:6). This type of trumpet, the shofar is still blown today in Jewish services on Rosh Hashanah (the Jewish New Year).
Our understanding of the role of music in the life of Ancient Egypt is enriched immeasurably by the pictorial evidence that has survived in paintings and carvings. The wall paintings from the tomb-chapel of Nebamun, an obscure accountant attached to the Temple of Amun in Thebes (present-day Karnak) who died around 1350 BC, include three female musicians who are shown clapping in time to the music of a flute played by a fourth musician, with the words of their song spelled out in a hieroglyphic speech bubble above their heads. Two of the seated musicians are shown not in profile, as one might expect in Egyptian art, but full face. The artist felt able to break the rules because the women he was depicting were foreign and of low status.
The musicologist Rafael Pérez Arroyo, former director of the Sony Hispánica collection, has released the first fruit of his many years of research into the music of Ancient Egypt. The result is a spectacular and luxuriously edited book of some 500 pages entitled Music in the Age of Pyramids. As Todd McComb points out: "Arroyo composed this music himself. It is not based upon surviving notation. His study has obviously been very extensive however: metric structure of hymns which survive in writing, whatever discussion of music theory he could find, sonic descriptions by ancient authors, iconography, etc. He believes he has detected a partial chironomy (hand gestures, the same source claimed for Biblical music), and discovered three basic modes for Ancient Egyptian music. This leaves the sense that some 'shell' of Ancient Egyptian music has been unearthed, but no real music. Arroyo establishes a pentatonic basis, and sometimes uses Coptic hymns for the music. Arroyo also makes many claims regarding Ancient Egypt's musical influence on other cultures. While his correlations with known symbology in, for example, India and China are certainly worth considering, I personally find his claims to causality to be over-stretching."
The earliest evidence of musical activities in Denmark are the large, twisting bronze horns dating from the Bronze Age (1500-500 BC). Soon after the first examples (three pairs) were found in a bog in 1797, the name lur was attached to them. 61 lurs have been found (the most recent in 1988) in southern Scandinavia and the Baltic area, most of them in Denmark (38). They tend to be found in pairs, but it is uncertain what they were used for. The same applies to the two golden horns found in Gallehus, which some people have interpreted as musical instruments.
[taken from: Denmark - Culture - History]
Professor Peter Holmes of London believes that the first bronze horns could have been cast in the north east of Ireland about 1500 BC. These were quite small, relatively heavy and not highly decorated. Gradually, as the culture surrounding them spread south through the island, the casting expertise improved, until the youngest instruments were made in the south west around 800 BC. Because of this gradual evolution, a wide variety of shapes, designs and sizes of horn were made. It appears that certain particular designs and tunings were indicative of the area where an instrument was made. Because a large number of originals survive, it is likely that there were many horns played throughout Ireland in the Late Bronze Age. As a fragment was found in west Scotland and a drawing comes from Sussex in England, it is also quite possible that a sibling culture existed there, and there was most likely interaction between the two islands.
The bells or crothalls (rattles) present a different mystery, as all 48 were found in the 1820s at one particular site in the Irish Midlands. This may suggest that they were assembled from around the country for burial or, equally, they may have been a feature of that particular area. Crothalls can either be shaken hard and fast between the two hands to produce complex rhythms, or left to hang by the attached ring and made to gently chime.
Though there is still much to be learned about the Bronze Age in Ireland, intensive studies have been published by Prof. George Eoghan, Prof. Peter Holmes and others on the vast amounts of jewelry, sacred horns, tools, weapons, cauldrons and remains of habitation that survive. These point to a rich and varied culture with high levels of population and probably a common religion and economy. Curiously, following the end of the Bronze Age around 650 BC there seems to have been a form of Dark Age, as virtually no artifacts or remains have come down to us from the following three hundred years, down to around 300 BC.
We come into the Iron Age with a completely different set of influences, stemming mainly from Celtic Switzerland. This era, which lasted through to and beyond the introduction of Christianity in 432 AD, is also believed to be the source of many of the great myths and legends of Ireland that survive today, though it is quite possible that some of the older stories may come from the earlier Bronze Age. It is important to point out that the major part of Irish pre-history was not Celtic in origin. For over six thousand years the earlier, or original, Irish developed and practiced a unique culture.
They created decorative jewelry of great beauty. Their weapons were fine and deadly. An entire technology was developed around the earliest examples of bronze welding. Discoveries of artifacts from other cultures in Ireland and Irish objects abroad prove the Irish traders traveled throughout Europe, North Africa and the Middle East during the Stone and Bronze Ages.
It is therefore a misnomer to refer to the people of Ireland as Celtic. To do this is to deny more than two thirds of Irish pre-history and history. Today it is generally accepted that the closest descendants of ancient Ireland live in Connemara in the west of Ireland. Here the people could be referred to as Aboriginal or Native. Through their distinctive Irish language and long tradition of music and song they keep alive much of Ireland's long and powerful story.
[taken from: The History of Bronze Age Horns]
Livy vividly depicts the noise accompanying the Gauls' mad rush into battle. Describing the battle of the river Allia (387 or 380 BC), he says:
"... they are given to wild outbursts and they fill the air with hideous songs and varied shouts." Of the Gauls in Asia he writes: "their songs as they go into battle, their yells and leapings, and the dreadful noise of arms as they beat their shields in some ancestral custom - all this is done with one purpose, to terrify their enemies."
The historian Polybius describes the Celts who fought at the battle of Telamon in 225 BC:
"The Celts had drawn up the Gaesatae from the Alps to face their enemies on the rear ... and behind them the Insubres .... The Insubres and the Boii wore trousers and light cloaks, but the Gaesatae in their overconfidence had thrown these aside and stood in front of the whole army naked, with nothing but their arms; for they thought that thus they would be more efficient, since some of the ground was overgrown with thorns which would catch on their clothes and impede the use of their weapons. On the other hand the fine order and the noise of the Celtic host terrified the Romans; for there were countless trumpeters and horn blowers and since the whole army was shouting its war cries at the same time there was such a confused sound that the noise seemed to come not only from the trumpeters and the soldiers but also from the countryside which was joining in the echo."
Greeks & Romans ::
Little written music survives from this ancient time, and of course we have no recordings, so no one today knows exactly what the music of ancient Greece sounded like. However, because the ancient Greeks wrote about their music and music theory, we do know something about it. The ancient Greeks are remembered for creating special arrangements of tones we now call the Greek modes. These were used later in religious music in Europe, and have been the basis of much Western music for centuries. The uneven meters that are still popular in Greek music date back to ancient times, when Greek poetry was read in a special, rhythmical way. Instead of looking like modern notation, ancient Greek music notation used letters of the alphabet. When there was both music and text, the alphabet-style notation appeared above the words. An example of this early notation, carved in stone in the second century BC, is in the Archaeological Museum in Delphi, Greece: a hymn sung to the Greek god Apollo.
[taken from: Greek Music by Silver Burdett]
Music was essential to the pattern and texture of Greek life, as it was an important feature of religious festivals, marriage and funeral rites, and banquet gatherings. Our knowledge of ancient Greek music comes from actual fragments of musical scores, literary references, and the remains of musical instruments. Although extant musical scores are rare, incomplete, and of relatively late date, abundant literary references shed light on the practice of music, its social functions, and its perceived aesthetic qualities. Likewise, inscriptions provide information about the economics and institutional organization of professional musicians, recording such things as prizes awarded and fees paid for services. The archaeological record attests to monuments erected in honor of accomplished musicians and to splendid roofed concert halls. In Athens during the second half of the fifth century BC, the Odeion (roofed concert hall) of Perikles was erected on the south slope of the Athenian akropolis—physical testimony to the importance of music in Athenian culture.
In addition to the physical remains of musical instruments in a number of archaeological contexts, depictions of musicians and musical events in vase painting and sculpture provide valuable information about the kinds of instruments that were preferred and how they were actually played. Although the ancient Greeks were familiar with many kinds of instruments, three in particular were favored for composition and performance: the kithara, a plucked string instrument; the lyre, also a string instrument; and the aulos, a double-reed instrument. Most Greek men trained to play an instrument competently, and to sing and perform choral dances. Instrumental music or the singing of a hymn regularly accompanied everyday activities and formal acts of worship. Shepherds piped to their flocks, oarsmen and infantry kept time to music, and women made music at home. The art of singing to one's own stringed accompaniment was highly developed. Greek philosophers saw a relationship between music and mathematics, envisioning music as a paradigm of harmonious order reflecting the cosmos and the human soul.
[taken from: Music in Ancient Greece]
We summarise John Curtis Franklin's thesis entitled The Invention of Music in the Orientalizing Period but include also his more recent thoughts on the subject.
The legend that Terpander rejected "four voiced song" in favor of new songs on the seven-stringed lyre (fragment 4 Gostoli) suggested initially that an encounter between two musical traditions, which may have taken place during the Greek Orientalizing period (c. 750-650 BC), was catalyzed by the westward expansion of the Assyrian empire. The seven-stringed lyre answers clearly to the heptatony which was widely practiced in the ancient Near East, as known from the diatonic tuning system documented in the cuneiform musical tablets. However, while the Greek evidence preserves vestiges of the Old Babylonian (< Ur Dynastic III, Sumerian) version of diatonic music, with its practical and theoretical emphasis on a central string, I am no longer certain that the system's transmission took place in the Orientalizing period; I am now convinced that the seven-stringed lyre survived in Cyprus and those areas of the Aegean where Bronze Age Achaean culture persisted, such as Athens, Euboea, Lesbos, Arcadia, Crete and Smyrna. "Four voiced song" must be understood as describing the inherited melodic practice of the Greek epic singer. The syncretism of these two traditions may be deduced from the later Greek theorists and musicographers. Though diatonic scales were also known in Greece, even the late theorists remembered that pride of place was given to other forms of heptatony - the chromatic and enharmonic genera, tone structures which cannot be established solely through the resonant intervals of the diatonic method. Nevertheless, these tunings were consistently seen as modifications of the diatonic - which Aristoxenus believed to be the "oldest and most natural" of the genera - and were required to conform to minimum conditions of diatony. Thus the Greek tone structures represent the overlay of native musical inflections on a borrowed diatonic substrate, and the creation of a distinctly Hellenized form of heptatonic music.
More specific points of contact are found in the string nomenclatures, which in both traditions are arranged to emphasize a central string. There is extensive Greek evidence relating this "epicentric" structure to musical function, with the middle string a sort of tonal center of constant pitch, while the other strings could change from tuning to tuning. So too, in the Mesopotamian system; the central string remained constant throughout the diatonic tuning cycle. The question that remains is whether the Mesopotamian approach to diatony was known in the Minoan and Mycenaean palaces, as the Ugaritic evidence might suggest, or whether it revitalized a Bronze Age Aegean tradition during the Orientalizing period, via Phoenician or Neo-Assyrian influence, as Cypriot and Lydian evidence might suggest.
In a manner analogous to the way the Greeks took up ideas of music from the ancient Near East, so the Romans inherited many of their ideas about music and their use of musical instruments from the Greeks. This much is confirmed by writings surviving from the period and from the fragments and iconographic evidence we have of the instruments themselves. Because, so far as we know, the Romans had no organised form of musical notation, we have no idea what music they played. Modern 'reconstructions' of music to accompany 'Roman' events must be considered wholly conjectural. We do know that in the Greek and Roman civilizations double-reed instruments were highly regarded. Playing the aulos or tibia was associated with high social standing, and the musicians enjoyed great popularity and many privileges. Portrayals of aulos players in Ancient Greece traditionally depict a musician blowing two instruments; this proves that the aulos was a double instrument. Different types of aulos were played on different occasions - as was the Roman tibia - for example, in the theatre, where it accompanied the chorus. So, whether in religious ceremonies, public performance, private functions or on the battlefield, music indisputably played an important role in the lives of both the Greeks and the Romans.
Ktesibios (Ctesibios) of Alexandria who lived between 300-230 BC, invented the hydraulus, in which water pressure was used to stabilize the wind supply. The pipes were arranged in rows upon the wind chest and the air was permitted to enter any pipe at will by means of wooden sliders. The hydraulus was the prevailing organ for several centuries and reappeared at intervals throughout the Middle Ages.
Early Christian Church ::
The leaders of the early Christian Church, guided by Old Testament precedent and New Testament admonition (e.g. Colossians iii.16 and James v.13), gave their general approval to the use of music in the services of the church; but although Christianity was a Jewish sect at its inception and therefore heir to the musical materials and practices of Judaism, it possessed during its earliest period neither the financial resources nor, since it was forced by persecution to conceal its activities, the physical facilities necessary for the development of a tradition of choir singing like that of the Jews. As a result of these circumstances the singing that flourished among the early Christians was largely congregational. Specific practices varied from place to place, but the activity of singing praise was common to Christians everywhere. 'The Greeks use Greek', reported Origen (b. 185 - d. 253 or 254), 'the Romans Latin ... and everyone prays and sings praises to God as best he can in his mother tongue'. The singing of Old Testament psalms was practiced, initially at least, by Christians of both sexes and of all ages, but some of the later church Fathers, heeding the interdiction of St. Paul (1 Corinthians xiv.34), opposed the participation of women in congregational singing.
Not only were the psalms themselves borrowed by the Christians from their Jewish predecessors but Jewish methods of performance were also incorporated into Christian worship. References to antiphonal and responsorial singing occur in the works of several patristic writers. Eusebius (b. about 260 - d. before 341), Bishop of Caesarea, in whose Historia ecclesiastica Philo's account of antiphony among the Therapeutae is quoted, remarked that in his own time the manner of singing described by Philo was still practiced among the Christians. Responsorial psalmody was mentioned, probably with reference to Rome, by Tertullian (born c. 160 AD). Antiphonal and responsorial singing may have appeared first among those Christians in closest geographical proximity to the Judaic roots of Christianity, but by the end of the fourth century at the latest these methods of performance were common to Eastern and Western churches alike. Moreover, antiphonal and responsorial singing were not used exclusively in connection with psalm texts but were applied to other types of texts as well, and exercised an influence on the development of the early Christian liturgy. Patristic opinion was divided concerning the propriety of using instruments to accompany singing. Because of their association with pagan festivities, instruments were censured by many of the church Fathers, among them Clement of Alexandria (died c. 215 AD), who forbade their use in church. Even as late a writer as Didymus of Alexandria (died 396 AD), however, defined a psalm as 'a hymn which is sung to the instrument called either psaltery or cithara'.
[taken from: Influence of the Ancient Jewish Temple and Synagogue Tradition on Early Christian Music and Liturgy]
During its thousand-year history the Byzantine Empire outfought, out-maneuvered, or simply outlasted successive waves of enemies who attacked it from all four points of the compass. The remarkably varied peoples who made up the Byzantine Empire created a distinctive and vibrant civilization where art and learning flourished when most of Western Europe was still literally mired in the Dark Ages. And although the Byzantine Empire as a political entity was finally extinguished in the mid-fifteenth century, it survived long enough to transmit to the West the great literary works of classical antiquity that helped inspire the Italian Renaissance. Byzantine music is the medieval sacred chant of Christian Churches following the Orthodox rite. This tradition, encompassing the Greek-speaking world, developed in Byzantium from the establishment of its capital, Constantinople, in 330 until its fall in 1453. It is undeniably of composite origin, drawing on the artistic and technical productions of the classical age and on Jewish music, and inspired by the monophonic vocal music that evolved in the early Christian cities of Alexandria, Antioch and Ephesus.
Byzantine chant manuscripts date from the ninth century, while lectionaries of biblical readings in Ekphonetic Notation (a primitive graphic system designed to indicate the manner of reciting lessons from Scripture) begin about a century earlier and continue in use until the twelfth or thirteenth century. Our knowledge of the older period is derived from Church service books Typika, patristic writings and medieval histories. Scattered examples of hymn texts from the early centuries of Greek Christianity still exist. Some of these employ the metrical schemes of classical Greek poetry; but the change of pronunciation had rendered those meters largely meaningless, and, except when classical forms were imitated, Byzantine hymns of the following centuries are prose-poetry, unrhymed verses of irregular length and accentual patterns. The common term for a short hymn of one stanza, or one of a series of stanzas, is troparion (this may carry the further connotation of a hymn interpolated between psalm verses). A famous example, whose existence is attested as early as the fourth century, is the Vesper hymn, Phos Hilaron, "Gladsome Light"; another, O Monogenes Yios, "Only Begotten Son," ascribed to Justinian I (527-565), figures in the introductory portion of the Divine Liturgy. Perhaps the earliest set of troparia of known authorship are those of the monk Auxentios (first half of the fifth century), attested in his biography but not preserved in any later Byzantine order of service.
[partly taken from: Orthodox Byzantine Music]
Origen's observation that the practices of early Christians reflected their cultural origins found its most remarkable example in the early Coptic church. The Coptic Kyrie is related to ancient Egyptian traditions for the sun-god. Scholars have found that the Antiphonal singing system between a group of priests is related to that of groups of priestesses in ancient Egypt where both were characterized by the use of melismata (where many notes were sung over one of the seven vowels which were called 'magic vowels', used to express feelings of piety and humility on religious occasions). Both were characterized also by the use of professional blind singers and percussion instruments in the performance of religious music.
[taken from: Coptic Music]
Music of the Dark Ages (475-1000) ::
386: Hymn singing introduced by Ambrose, Bishop of Milan.
450: First use of alternating singing between the precentor and community at Roman Church services, patterned after Jewish traditions.
c. 500: Foundation of the Schola Cantorum for church song in Rome, traditionally attributed to Pope Gregory.
500: Boethius writes De Institutione Musica.
500: In Peru, flutes, tubas and drums in use.
521: Boethius introduces Greek musical letter notation to the West.
600: Pope Gregory orders the compilation of church chants, titled Antiphonar.
Chant was the true basic ancestor of western tonal music. In a process that lasted several centuries, the Roman Church absorbed and compiled liturgical melodies from diverse European regions. Those different dialects or styles included, among others, Gallican, Beneventan, Visigothic or Mozarabic, and Ambrosian Chant. The whole repertory was reorganized by Pope Gregory II (715-31), after whom the expression Gregorian Chant was coined.
for more information: Antiphonary.
seventh century: Musica rythmica and Musica organica by St. Isidore of Seville (c. 560-636)
Although secular music experienced its most dramatic expansion in the eleventh century, and significant historical documentation is lacking before that time, it would be a mistake to think that secular music did not enjoy popularity before the High Middle Ages. The music of the people, musica civilis, was sufficiently common to draw disparagement from the early Church fathers and in the seventh century, another Church Father, St. Isidore of Seville, the first Christian writer to essay the task of compiling for his co-religionists a summa of universal knowledge, made a study of musical instruments in two treatises. Musica rythmica investigated stringed and percussion instruments, while Musica organica covered the wind instruments. These studies included many instruments used only in secular music.
[taken from: The End of Europe's Middle Ages]
609: The crwth, a Celtic string instrument, appears.
The crwth is a medieval bowed lyre and ranks as one of Wales's most exotic traditional instruments. It has six strings tuned g g' c' c d' d'' and a flat bridge and fingerboard. The gut strings produce a soft purring sound, earthy but tender. The melody is played on four of the six strings, with the other two acting as plucked or bowed drones and the octave doublings producing a constant chordal accompaniment. The crwth has been played in Wales in one form or another since Roman times. It was an instrument of the highest status during the Middle Ages whose best players could earn a stable income in the courts of the Welsh aristocracy. Crwth players had to undergo years of apprenticeship and memorise twenty-four complex pieces of music.
[taken from: About the crwth]
for more information: The Crwth.
sixth century: Neumes and neuming.
The earliest systems of musical notation were developed between 1500 and 3000 years ago by the Greeks. These schemes were generally based on letters of the Greek alphabet. This had several problems: the melody of the song could be confused with its words, the system was not very accurate, and it was immensely complicated. Neumes and neuming were developed to overcome these problems. Neumes were small marks placed above the text to indicate the 'shape' of a melody. As a form of notation, they were initially even less effective than the letter-based systems they replaced, but they were unambiguous and took very little space, and so they survived when other systems failed. Our modern musical notation is descended from neumes. The psalms provide clear evidence of Biblical texts being sung. Many of the psalms indicate the tune used for them. There are places in the New Testament (e.g. Mark 14:26 and parallels, Acts 16:25) which apparently refer to the singing of psalms and biblical texts. But we have no way to know what tunes were used. This was as much a problem for the ancients as it is for us. By the ninth century they were beginning to develop ways to preserve tunes. We call the early form of this system neuming, and the symbols used neumes (both from Greek pneuma).
The earliest neumes (found in manuscripts such as Y/044) couldn't really record a tune. Neither pitch nor duration was indicated, just the general 'shape' of the tune. Theoretically only two symbols were used: "Up" (the acutus, originally symbolized by something like /), and the "Down" (gravis, \). These could then be combined into symbols such as the "Up-then-down" (^). This simple set of symbols wasn't much help if you didn't know a tune but could be invaluable if you knew the tune but didn't quite know how to fit it to the words. It could also jog your memory if you slipped a little.
Neumes were usually written in green or red ink in the space between the lines of text. They are, for obvious reasons, more common in lectionaries than in continuous-text manuscripts. As the centuries passed, neuming became more and more complex, adding metrical notations and, eventually, ledger lines. The picture below (a small portion of chapter 16 of Mark from the tenth-century manuscript 274) shows a few neumes in exaggerated red. In this image we see not only the acutus and the gravis, but such symbols as the podatus (the J symbol, also written !), which later became a rising eighth note.
By the twelfth century, these evolved neumes had become a legitimate musical notation, which in turn evolved into the church's ancient "plainsong notation" and the modern musical staff. All of these forms, however, were space-intensive (plainsong notation took four ledger lines, and more elaborate notations might take as many as fifteen), and are not normally found in Biblical manuscripts (so much so that most music history books do not even mention the use of neumes in Biblical manuscripts; they usually start the history of notation around the twelfth century and its virga, punctae, and breves). The primary use of neumes to the Biblical scholar is for dating: if a manuscript has neumes, it has to date from roughly the eighth century or later. The form of the neumes may provide additional information about the manuscript's age.
[taken from: Neumes]
Gregorian Chant Notation - Neumes
Origin of Music Notation
744: Singing school established at the Monastery of Fulda.
Established in 743/44, Fulda was a Benedictine monastery in Hesse-Nassau that grew rich from pilgrimages to the grave of St. Boniface and gained renown as an intellectual centre as its library grew. Sts. Boniface and Sturmius founded the house as a training school and base for missionaries whom Charlemagne sent to the Saxons. Soon after the death of Boniface, Fulda became an important destination for pilgrims, and about a century after its founding, the abbot Rabanus Maurus increased the intellectual riches of the monastery through its school, scriptorium, and library, which, at its peak, held approximately 2,000 manuscripts. It preserved works such as Tacitus' Annales, and the monastery is considered the cradle of Old High German literature. The abbots of Fulda became, in the tenth century, abbots general of the Benedictines in Germany and Gaul. In the twelfth century, they became imperial chancellors and in the thirteenth century, princes of the empire. Fulda was the center of monastic reform during the reign of Henry II.
750: Gregorian church music is sung in Germany, France, and England.
Charlemagne (742-814) was an enthusiastic lover of Church music, and especially of this style which he had learnt to know in Rome. In his own chapel he carefully noted the powers of all the priests and singers, and sometimes acted as choir-master himself, in which capacity he proved a very strict, often severe master. He extinguished the last remnants of the Ambrosian style at Milan, and it was with his approval that Pope Leo III (795-816) imposed a penalty of exile or imprisonment on any singer who might deviate from the orthodox Cantus firmus et choralis. He founded schools of music not only in France but throughout Germany, at Fulda, Mayence, Treves, Reichenau, and other places. Trained singers from the famous choirs in Rome were sent for to take charge of these institutions, and seem to have been not a little shocked at first by the barbarism of their pupils. One says that their notion of singing in Church was to howl like wild beasts; while another, Johannes Diaconus, in his 'Life of Gregory', affirms that "these gigantic bodies, whose voices roar like thunder, cannot imitate our sweet tones, for their barbarous and ever-thirsty throats can only produce sounds as harsh as those of a loaded wagon passing over a rough road."
757: Wind organs, originally from Byzantium, start to replace water organs in Europe.
Evidence of the first purely pneumatic organ is found on an obelisk erected at Byzantium before 393 AD. Byzantium became the centre of organ building in the Middle Ages, and in 757 Constantine V presented a Byzantine organ to Pepin the Short. This is the earliest positive evidence of the appearance of the organ in Western Europe. By the tenth century, however, organ building had made considerable progress in Germany and England. The organ built c. 950 in Winchester Cathedral is said to have had 400 pipes and 26 bellows and required two players and 70 men to operate the bellows. The keyboard, or manual, was a creation of the thirteenth century, making possible the performance of more complex music. The earliest extant music written specifically for organ, dating from the early fourteenth century, gives evidence that by then the manuals of the organ had full chromatic scales, at least in the middle registers. Organs in the Middle Ages already had several ranks of pipes, each key causing a number of pipes to sound simultaneously. All were diapasons, or principals, the pipes of timbre characteristic only of the organ, and the various pipes controlled by one key were tuned to the fundamental and several harmonics of a given tone.
781: The English monk Alcuin (c. 732–804) meets Charlemagne; Alcuin encouraged the study of the liberal arts, influencing the Carolingian Renaissance, and was largely responsible for the revision of the Church liturgy during Charlemagne's reign.
790: Schools for church music established by Charlemagne (742-814) at Paris, Cologne, Soissons and Metz, all supervised by the Schola Cantorum in Rome.
The rise of secular music was aided by the development of a corpus of Latin lyrical literature during the reign of Charlemagne that included a collection of secular and semi-secular songs. Some scholars even played at setting to music the works of classical poets, such as Horace, Virgil, Cassiodorus, and Boethius.
800: Charlemagne crowned first Holy Roman Emperor: the beginning of the Carolingian Renaissance
c. 800: Hildebrandslied
There are no written monuments before the eighth century. The earliest written record in any Germanic language, the Gothic translation of the Bible by Bishop Ulfilas, in the fourth century, does not belong to German literature. It is known from Tacitus that the ancient Germans had an unwritten poetry, which among them supplied the place of history. It consisted of hymns in honour of gods, or songs commemorative of the deeds of heroes. Such hymns were sung in chorus on solemn occasions, and were accompanied by dancing; their verse form was alliteration. There were also songs, not choric, but sung by minstrels before kings or nobles, songs of praise, besides charms and riddles. During the great period of the migrations poetic activity received a fresh impulse. New heroes, like Attila (Etzel), Theodoric (Dietrich), and Ermanric (Ermanrich), came upon the scene; their exploits were confused by tradition with those of older heroes, like Siegfried. Mythic and historic elements were strangely mingled, and so arose the great saga cycles, which later on formed the basis of the national epics. Of all these the Nibelungen saga became the most famous, and spread to all Germanic tribes. Here the most primitive legend of Siegfried's death was combined with the historical destruction of the Burgundians by the Huns in 435, and affords a typical instance of saga-formation. Of all this pagan poetry hardly anything has survived. The collection that Charlemagne caused to be made of the old heroic lays has perished. All that is known are the Merseburger Zaubersprüche, two songs of enchantment preserved in a manuscript of the tenth century, and the famous Hildebrandslied, an epic fragment narrating an episode of the Dietrich saga, the tragic combat between father and son. It was written down after 800 by two monks of Fulda, on the covers of a theological manuscript. 
The evidence afforded by these fragments, as well as such literature as the Beowulf and the Edda, seems to indicate that the oldest German poetry was of considerable extent and of no mean order of merit.
[taken from: German Literature]
850: Setting out of Church modes in Alia Musica.
c. 870: Musica enchiriadis
Although singers probably improvised polyphony long before it was first notated, an anonymous treatise from the ninth century, Musica enchiriadis (Music Handbook, 870), is the earliest that describes two types of early organum: parallel motion, in which a plainsong melody (vox principalis) is duplicated a perfect fourth or fifth below by an organal voice (vox organalis), with duplication of either voice at the octave possible; or oblique motion, in which the organal voice remains on the same pitch in order to avoid tritones against the principal voice.
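The parallel-motion procedure is mechanical enough to be shown as a minimal sketch. The chant fragment and the use of MIDI note numbers are assumptions for illustration only; what the code captures is just the rule that the organal voice tracks the chant at a fixed perfect interval below.

```python
# Parallel organum as described in Musica enchiriadis: the organal voice
# duplicates the principal voice at a fixed perfect interval below.
# In semitones, a perfect fourth is 5 and a perfect fifth is 7.

def parallel_organum(vox_principalis, interval=7):
    """Derive a vox organalis a fixed interval (in semitones) below the chant."""
    return [pitch - interval for pitch in vox_principalis]

chant = [67, 69, 67, 65, 67]          # an invented plainsong fragment (MIDI numbers)
organal = parallel_organum(chant, 7)  # a perfect fifth below
print(organal)  # -> [60, 62, 60, 58, 60]
```

The oblique-motion variant would instead hold the organal voice on one pitch; the treatise prescribes it precisely where strict parallel motion would produce a tritone.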
889: Regino, Abbot of Prüm, writes his treatise on church music: De harmonica institutione.
Reginon or Regino of Prüm, medieval chronicler, was born at Altripp near Speyer, and was educated in the monastery of Prüm. Here he became a monk, and in 892, just after the monastery had been sacked by the Danes, he was chosen abbot. In 899, however, he was deprived of this position and he went to Trier, where he was appointed abbot of St Martin's, a house which he reformed. He died in 915, and was buried in the abbey of St Maximin at Trier, his tomb being discovered there in 1581. Reginon wrote a Chronicon, dedicated to Adalberon, bishop of Augsburg (d. 909), which deals with the history of the world from the commencement of the Christian era to 906, especially the history of affairs in Lorraine and the neighbourhood. The first book (to 741) consists mainly of extracts from Bede, Paulus Diaconus and other writers; of the second book (741-906) the latter part is original and valuable, although the chronology is at fault and the author relied chiefly upon tradition and hearsay for his information. The work was continued to 967 by a monk of Trier, possibly Adalbert, archbishop of Magdeburg (d. 981). The chronicle was first printed at Mainz in 1521; another edition is in Band I of the Monumenta Germaniae historica Scriptores (1826); the best is the one edited by F Kurze (Hanover, 1890). It has been translated into German by W Wattenbach (Leipzig, 1890). Reginon also drew up, at the request of his friend and patron Radbod, archbishop of Trier (d. 915), a collection of canons, Libri duo de synodalibus causis et disciplinis ecclesiasticis, dedicated to Hatto I, archbishop of Mainz; this is published in Tome 132 of J P Migne's Patrologia Latina. To Radbod he wrote a letter on music, Epistola de harmonica institutione, with a Tonarius, the object of this being to improve the singing in the churches of the diocese.
The letter is published in Tome I of Gerbert's Scriptores ecclesiastici de musica sacra (1784), and the Tonarius in Tome II of Coussemaker's Scriptores de musica medii aevi.
890: Ratbert of St. Gall born, hymn writer and composer.
The Abbey of St. Gall (in German, St. Gallen), founded in 613, is situated in Switzerland, Canton St. Gall, 30 miles southeast of Constance. For many centuries it was one of the chief Benedictine abbeys in Europe. It was named after Gallus, an Irishman, the disciple and companion of St. Columbanus in his exile from Luxeuil. When his master went on to Italy, Gallus remained in Switzerland, where he died about 646. A chapel was erected on the spot occupied by his cell, and a priest named Othmar was placed there by Charles Martel as custodian of the saint's relics. Under his direction a monastery was built, many privileges and benefactions being bestowed upon it by Charles Martel and his son Pepin, who, with Othmar as first abbot, are reckoned its principal founders. By Pepin's persuasion Othmar substituted the Benedictine rule for that of St. Columbanus. He also founded the famous schools of St. Gall, and under him and his successors the arts, letters, and sciences were assiduously cultivated. The work of copying manuscripts was undertaken at a very early date, and the nucleus of the famous library gathered together. The abbey gave hospitality to numerous Anglo-Saxon and Irish monks who came to copy manuscripts for their own monasteries. Two distinguished guests of the abbey were Peter and Romanus, chanters from Rome, sent by Pope Adrian I at Charlemagne's request to propagate the use of the Gregorian chant. Peter went on to Metz, where he established an important chant-school, but Romanus, having fallen sick at St. Gall, stayed there with Charlemagne's consent. To the copies of the Roman chant that he brought with him, he added the "Romanian signs", the interpretation of which has since become a matter of controversy, and the school he started at St. Gall, rivalling that of Metz, became one of the most frequented in Europe. The chief manuscripts produced by it, still extant, are the Antiphonale Missarum (no.
359), and Hartker's Antiphonarium (nos. 390-391), the first and third of which have been reproduced in facsimile by the Solesmes fathers in their Paléographie Musicale.
[taken from: Abbey of St. Gall]
By the late 900s, the Abbey of St. Gall had a library of 600 books. The city of Córdoba, in the north-central part of Andalusia, with its population of possibly 500,000 people, 1,600 mosques (including the great Mosque of Córdoba, considered by some architectural historians to be the most spectacular Islamic building in the world), 900 public baths and 80,455 shops, had a library with 400,000 volumes and was so great a cultural and intellectual centre that the Saxon nun Roswitha of Gandersheim (c.935-c.1000 AD) described the city (at that time) as “the ornament of the world.”
tenth century: The Eisteddfods of the Middle Ages.
Many claim that an eisteddfod took place during the reign of King Cadwaladr (who died in 664). The Juvencus Codex (ninth century), in which a number of Welsh stanzas are found, makes it clear that Welsh lyric poetry was being written at this time at the latest. In the tenth century, we find the Welsh Laws, Leges Wallicae, codified by Hywel Dda, in which is mentioned that "the king has twenty-four officers of the court", one of them being "the Bard of the Household [Bardd Teulu]". In various writings it is said: "There are three legal harps; the king’s harp [telyn e brenhyn]; the harp of a chief of song [a thelyn penkerd]; and a harp of a gwrda [a thelyn gurda]". According to the Dimetian and Gwentian Codes the chief of song is "a bard who shall have gained a chair". He was richly rewarded and enjoyed many privileges. By the 'chief of song' (Penkerdd) they probably meant "the head of the whole bardic community within the limits of the kingdom". In 1070, Bleddyn ap Kynfyn is said to have held an eisteddfod lasting 40 days. "Degrees were conferred on chiefs of song, and gifts and presents made to them, as in the time of the Emperor Arthur".
[taken from: The Eisteddfods of the Middle Ages]
c. 950: Organ with 400 pipes finished at Winchester Monastery, England.
In about the year 950 a famous organ was built at Winchester Cathedral. A contemporary poem described it (see reference below), and it was an outstanding example of an early, large Blokwerk organ. There were 26 bellows supplying wind to an undivided chest of 400 pipes; the keyboard (or keyboards) had a 40-note compass, and required two players, possibly owing to the clumsy nature of the playing technique. Each key played ten ranks of pipes.
for more information: The Organ in Medieval Literature
980: Antiphonarium Codex Montpellier written, important musical manuscript.
Music of the High Middle Ages (1000-1350) ::
early eleventh century:
Christianity began to penetrate Finland from the West in some form probably as early as the eleventh century, imported by merchants, Christianized Vikings and German missionaries. At about the same time, Orthodox Christianity from Novgorod began to make inroads in the eastern reaches of Finland. Tradition holds that the first Crusade to Finland was undertaken around the year 1155, by which time Christianity already had a foothold in the land. Finland was finally incorporated into the Catholic Church and the Kingdom of Sweden when the Pope granted King Erik Knutsson permission to take Finland under his protection in 1216. With Christianity came liturgical chant, Gregorian (Latin) chant from the West and Orthodox (Byzantine) chant from the East. Although the Western influence is easier to trace, its progress is by no means clear. Ilkka Taitto, the leading Finnish scholar in the field, has divided the history of Latin chant in Finland into three periods: 1) the missionary period, from c. 1100 to 1330; 2) the established repertoire period, from c. 1330 to 1530; and 3) the early Lutheran period, from c. 1530 to 1640. By contrast, it is almost impossible to estimate with any precision when polyphonic singing arrived in Finnish churches — perhaps in the fourteenth century, in the form of simple types of organum. We also do not know when the first organs appeared in Finnish churches — this may have happened as late as in the sixteenth century, and in any case initially only very few churches acquired organs.
early eleventh century: The songs known as Carmina Burana are collected.
The Carmina Burana is a collection of poems, songs, and short plays found in Benediktbeuern, a Benedictine abbey about 100 km south of Munich, in 1803. This manuscript was of thirteenth-century German origin and contained approximately 250 poems and other pieces. When Johann Andreas Schmeller published the collection in 1847, he gave it the title of Carmina Burana. This name means 'songs of Beuren,' though it has since been discovered that the manuscript did not originate there, and may have come from Seckau. Although the manuscript dates from the thirteenth century, most of it was written in the twelfth. This was a period of peace and prosperity in comparison with the years of war which preceded it. The majority of the Carmina Burana is written in Latin, which was the standard language of literacy at the time. There are, however, many pieces written in Middle High German, which shows the blossoming influence of vernacular languages on literature which began during this time. This collection is the most important and comprehensive source for both early German literature and Goliardic verse, the secular poetry of the Goliards serving as a counter-weight in an age of faith. Goliardic verse developed with the beginning of European universities in the twelfth century. It flourished for more than a century, written by itinerant clerks and monks who composed a style of secular lyric poetry commending the pleasures of life - wine, women and song - in a humorous and satirical manner. The Church, whose officials were often the butt of these ribald commentaries, was not amused, and subject to ecclesiastical suppression, the movement had disappeared sometime during the fourteenth century.
for more information: The Real Goliards - Historical Facts and Links About the Real Goliards
early eleventh century: Guido d'Arezzo develops an improved form of musical notation
It is likely that Guido was born in France. He served as a Benedictine monk then traveled in 1025 to work for Bishop Theobald in Arezzo, Italy where he lived for some years. Although Guido was not a composer, he is included here because his contributions as an early music theorist made it possible for early composers to begin recording their work in manuscript. Around 1025 Guido created a system of musical notation using a 4-line staff which has evolved into the system we use today. The importance of this work is enormous. Before Guido's invention of musical notation, every singer had to memorize the entire chant repertoire. Those singers then went on to teach the next generation. Small errors in memory or differences of taste caused the chants to change over the years and no two singers would learn a chant precisely the same way. Notation made it possible to record a chant in a definitive form for posterity and easier communication. Guido's last recorded activity is in 1033. His actual death date is unknown.
for more information: Why middle C?
tenth & eleventh centuries: ars antiqua (Lat., 'the old art')
Contrary to the description of organum given in the ninth-century handbook Musica enchiriadis, the Winchester Troper, an example of a later form, is characterised as follows:
(a) the vox principalis becomes the lower voice.
(b) the vox organalis becomes the upper voice.
(c) the two voice parts often cross.
(d) perfect consonances (unison, octave, fourth, and fifth) continue to be favoured; other intervals occur incidentally and infrequently.
(e) sections of both the Mass and Divine Office, that normally would have been sung by soloists in plainchant, become troped, i.e. they receive polyphonic treatment.
The Winchester Troper is the earliest known practical source (i.e. not a treatise) but its voices are notated in unheighted neumes without staff lines, so that only pieces that also occur in later manuscripts can be reconstructed.
1054: The Great Schism divides western and eastern Christianity
1066: Battle of Hastings; William of Normandy conquers England.
1066–1077: Bayeux Tapestry.
1095–1099: Crusades; Jerusalem captured 1099.
1085: Fall of Toledo
In Toledo, the Arabs had huge libraries containing the works of the Greeks and Romans (lost to Christian Europe) along with Arab philosophy and mathematics. The intellectual plunder of Toledo brought the scholars of northern Europe like moths to a candle. They set up a giant translating programme in Toledo. Using the Jews as interpreters, they translated the Arabic books into Latin. These books included most of the major works of Greek science and philosophy along with many original Arab works of scholarship. The intellectual community which the northern scholars found in Spain was so far superior to what they had at home that it left a lasting jealousy of Arab culture, which was to colour Western opinions for centuries. The texts included works on medicine, astrology, astronomy, pharmacology, psychology, physiology, zoology, biology, botany, mineralogy, optics, chemistry, physics, mathematics, algebra, geometry, trigonometry, music, meteorology, geography, mechanics, hydrostatics, navigation and history. These works alone, however, didn't kindle the fire that would lead to the Renaissance. They added to Europe's knowledge, but much of it was unappreciated without a change in the way Europeans viewed the world. al-Andalus had developed the first universities in Europe, where scholars from other lands came to study and returned home. Spain was the broadest highway for the entry of Muslim culture into Europe, but it was far from the only one. The Muslims, who held Sardinia, took Sicily in the ninth century and established schools there as well. The Islamic world was not closed off; quite the contrary, it was mercantile, and trading contacts were major movers of culture. Both Pepin and Charlemagne, Frankish kings whose influence reached from what is now northern Italy and Germany all the way to the Pyrenees, exchanged diplomats with the Muslim courts. Rome traded with al-Andalus; Venice, Greece, and Russia had trade with Egypt.
c. twelfth century: The beginnings of troubadour and trouvère music in France.
When the Romans settled in Gaul they brought with them their amusements as well as their laws and institutions. Their scurrae, thymelici and joculatores, the
tumblers, clowns and mountebanks, who amused the common people by day and the nobles after their banquets by night and travelled from town to town in pursuit of their livelihood, were accustomed to accompany their performances by some sort of rude song and music. In the uncivilised North they remained buffoons; but in the South, where the greater refinement of life demanded more artistic performance, the musical part of their entertainment became predominant and the joculator became the joglar (Northern French, jongleur), a wandering musician and eventually a troubadour, a composer of his own poems. These latter were no longer the gross and coarse songs of the earlier mountebank age, which Alcuin characterised as turpissima and vanissima, but the grave and artificially wrought stanzas of the troubadour chanso.
[taken from: The Troubadours by H.J. Chaytor]
While secular music in Latin was probably doomed to decline, music in the vernacular continued to grow and kept pace with the rapid expansion in vernacular literature. The chansons de geste were ideally suited to be set to music. The Goliards and jongleurs were a mixed crowd of travelling entertainers who reveal musical influences from as far away as Scandinavia and Egypt. These vagrant minstrels were the forerunners of the Trouvères, or Troubadours, who, adding the Arabian lyricism encountered in the Crusades to their creations, poured the excitement of chivalry into intensely beautiful music and poetry. Arising in south-west France during the twelfth and thirteenth centuries, the Troubadours praised both warlike heroism and sentimental passion.
Trouvère is the Northern French (langue d'oïl) equivalent of the troubadour (langue d'oc), and refers to poet-composers who were roughly contemporary with the troubadours but who composed their works in the northern dialects of France, a collection of related, regional languages from Champagne, Picardy, Normandy and England. According to musicologist J B Beck, the troubadours emerged from a tradition of nomadic singers called histrions, mimes, and jongleurs. Their roots can be traced back to the sixth century, when Caesarius of Arles wrote a decree banishing secular entertainers at the urging of church bishops. His text notes that they were responsible for "infamous and diabolic songs of love." The jongleurs were always only part-time musicians; their primary function was to entertain using acrobatics, animals, and props. Menestrels were full-time musicians; their rank was subordinate to the troubadour because their repertoire consisted primarily of other composers' songs. The troubadours were the composers of music and lyric; they primarily performed their own songs. The highest ranks of troubadours were the doctores de trobar who were the most outstanding of the composers. Many of the troubadour men and women were of noble backgrounds. The upper class frequently sent their boys to Catholic monastic schools where they learned grammar, religious music and neumatic notation as a part of the basic trivium and quadrivium. Beck argues that many of these students became talented composers and musicians. After finishing their formal education, these young men returned home to apply their artistic training to more secular themes.
Troubadour music is synonymous with themes of courtly love. At the same time, the Catharist heresy emerged in southern France. The Cathars were ascetics whose beliefs encompassed a love greater than mere sexual contact. The contact of troubadours with Cathars is documented by de Rougemont:
"Moreover, quite possibly the presence of troubadours at such courts is a sign of heretical tendencies in them. The troubadours, like the Cathars, extolled (without always practising) the virtue of chastity; that, like the Pure, they received from their lady but a single kiss of initiation. They reviled the Catholic clergy and the clergy's allies, the members of the feudal caste. They liked best to lead the wandering life of the Pure, who set off along the road in pairs. And in their verse are expressions taken from Catharist liturgy."
After the fall of the Roman empire, the vulgar Latin once spoken in France evolved into two similar languages, the langue d'oïl of Northern France and the langue d'oc of the southern Occitanian regions. The troubadours wrote their verse using the langue d'oc, which is said to be the more lyric and beautiful of the two languages. The troubadour counterpart of Northern France was known as a trouvère. Most song genres developed by the troubadours have their northern counterpart in the langue d'oïl. Both northern and southern performers led similar courtly roles, but only the southern troubadour is identified with the Catharist heresy, a relationship which eventually led to the persecution of troubadours by Pope Innocent III. The northern crusaders mobilized to crush the Catharist heresy in 1209 after failed attempts to convert the southern nobility using missionaries. Led by Simon of Montfort, the first campaigns crushed the poorly organized resistance. Many troubadours joined the Occitanian defense; others fled to less dangerous surroundings. In 1216 the resistance won their first victory by successfully forcing Simon to withdraw. "A wave of excitement ran through Occitania; the troubadours mocked Simon, and the exiled and the dispossessed began to weave new plots." By 1244, the campaigns were over and the Catharist church went underground. The Pope responded by sending inquisitors into Southern France. By 1350, nearly all remaining followers of the Catharist church, including many troubadours, had been imprisoned or burned at the stake.
One trouvère, Guillaume le Breton, wrote of one of the bloody battles of the campaigns:
The men of Toulouse tried to defend themselves within their camp, but soon had to give ground. Unable to resist the furious charge, they retreated shamefully before their enemies. Like a wolf who, having broken into a sheepfold by night, does not care to slake his thirst or fill his belly with meat, but is content to tear open the throats of the sheep, adding dead to the dead, lapping up blood with his tongue, so the army consecrated to God thrust through their enemies and with avenging swords, executed the wrath of God on the people who offended Him doubly by deserting the faith and by associating with heretics. No one wasted time in taking booty, or prisoners, but they reddened their swords with heavy blows. . . . On that day the power and virtue of the French shone forth clearly; they sent seventeen thousand men to the swamps of hell.
Many of the songs of the Troubadours were written down and, while the musical notations provide a faithful representation of the melodies, there is no indication of the rhythm. This is not surprising, since the songs were still transmitted orally and the notations were only intended to assist the memories of the singers; theorists have proposed rhythmic interpretations based on the rhythmic modes, which correspond to classical Latin poetic metres. In form, the songs have been grouped into four main types. These are:
litany - all verses are sung to the same melody, i.e. the chansons de geste.
hymn - the song consists of a continuous melody, without repetition.
sequence - a different melody is used every two verses, e.g. AA BB CC, etc. from which the instrumental lai and estampie probably developed.
rondeau - a song with a refrain, i.e. the virelai and the ballade.
The principal Troubadour genres are:
canso - courtly love-song
dansa - mock-popular song based on a dance form
descort - discordant in verse form or feeling
escondig - a lover's apologia
gap - a challenge
pastorela - an amorous encounter between a knight and a shepherdess
planh - a lament
sirventes - a satirical poem devised to a borrowed melody
tenso, partimen and joc-partit - songs of debate
vers - the early troubadours' general term for a song, later largely supplanted by canso
The principal Trouvère genres are:
chanson avec refrains - in which the strophic repetition is broken into by the insertion of refrains (courtly tags with tunes)
chanson d'amour - related to the canso of the troubadour
chanson de toile - a courtly, mock-popular song like the pastorela (weaving-song, old French chanson d'histoire)
jeu-parti - related to the joc-partit of the troubadour
lai - related to the descort of the troubadour
The formes fixes (rondeau, virelai and ballade) appear toward the end of the period but were never central to either tradition. The rondeau followed the pattern ABaAabAB; A (a) and B (b) represent repeated musical phrases; capital letters indicate repetition of text in a refrain, while lowercase letters indicate new text. The ballade employed the pattern aabC. The virelai used the pattern AbbaA. The trouvère Adam de la Halle (b. c. 1250) wrote the first polyphonic settings of the formes fixes. Guillaume de Machaut wrote both text and music for many monophonic and polyphonic chansons in the formes fixes. Later composers, including Guillaume Dufay, favoured the rondeau.
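The letter-scheme convention used above can be made concrete in a short sketch (the helper function and the "refrain"/"new-text" labels are invented for illustration, not part of any historical source): capital letters mean the refrain text returns with its music, lowercase letters mean new text is sung to the repeated musical phrase.

```python
# Sketch: expand a formes-fixes letter scheme into (musical phrase, text) events.
# Convention from the text: A/a and B/b name repeated musical phrases;
# capitals indicate repetition of text in a refrain, lowercase indicates new text.

def expand_scheme(scheme):
    """Return a list of (musical_phrase, text) pairs for a letter scheme."""
    events = []
    new_text_counter = 0
    for letter in scheme:
        phrase = letter.upper()            # the musical phrase being repeated
        if letter.isupper():
            text = f"refrain-{phrase}"     # refrain: fixed text returns with its music
        else:
            new_text_counter += 1
            text = f"new-text-{new_text_counter}"  # fresh text, repeated music
        events.append((phrase, text))
    return events

# The rondeau pattern given in the text, ABaAabAB:
for phrase, text in expand_scheme("ABaAabAB"):
    print(phrase, text)
```

Running this for the ballade (aabC) or virelai (AbbaA) patterns shows at a glance how few distinct musical phrases underlie each form despite the varying text.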
ca. 1134–1220: Notre Dame Cathedral, Chartres construction.
1149: Oxford University founded.
1163–1182: Notre Dame Cathedral, Paris construction.
1170: Murder of Thomas Becket, Archbishop of Canterbury, on orders of Henry II.
1189–1193: Third Crusade, led by Frederick Barbarossa of Germany, Philip Augustus of France, and Richard I of England
c. twelfth century: Chinese opera appears.
Opera in China arises from a tradition extending back at least as far as the twelfth century, when opera was performed in the huge public theatres of Hangzhou, then capital of the Southern Song dynasty (1127-1279). The most popular theatrical form at the time was the southern play in which the dialogue, written in rhymed verse, was either sung or spoken. The three extant southern play scripts, composed by anonymous writing societies, have no internal divisions, such as acts or scenes, and, according to contemporaneous descriptions, were performed with a string and wind orchestra, and an offstage chorus which accompanied the major arias, evidently along with the audience.
[taken from: Chinese Opera: A Brief History by Thomas A. Wilson]
twelfth century: Organum Floridus - Aquitanian Polyphony
twelfth & thirteenth century: Notre Dame Polyphony
Organum was usually a neumatic and melismatic chant section by the choir at the beginning and end of a piece. The number of voices contained in a section of organum determines its nomenclature (i.e. two voices is organum duplum, three organum triplum, and so on). Organum duplum is usually linked to Leonin (second half of the twelfth century), the first known significant composer of polyphonic organum, who worked in Paris at the Notre Dame Cathedral. One of Leonin's major contributions to music was a collection of organum with two-part settings of portions of the mass known as the Magnus Liber Organi. He may also have been the first to use a rhythmic system of two main note values, long and short, and certain standard patterns, usually with groupings of threes. Three was considered the pure number, possibly for the Holy Trinity. Only after time was the number two accepted in music. After the advent of florid organum, the older style of note against note was referred to as discant organum. One variant of discant style, clausula, which originated with the Notre Dame school, was a 'closed form' in discant organum, where the slow-moving melismas of the chant melody, usually on just a few syllables, were repeated twice in the tenor. Clausulae were designed to be replaceable and could be composed later than the work into which they were being inserted. Leonin's pupil Pérotin (1180-c. 1238) made some important revisions to his teacher's Magnus Liber Organi and developed ideas of his own about polyphony. To the additional voice part that Leonin added, Pérotin added a third and a fourth vocal part. Pérotin named the three additional parts the duplum, triplum, and quadruplum. All three of these voice parts were based on and written above the original chant. There was a new emphasis on combining measured rhythm of discant type with the drone-like long-held notes of the tenor.
Discant clausula, the text of which consists of one or two words or a single syllable based on a fragment of Gregorian chant, played a prominent role in works by musicians at Notre Dame, and slowly the older florid organum fell into disuse. Eventually, the clausula would break away from the organum within which it had been embedded. It would take on a life on its own as a separate composition. The newly autonomous piece was called a motet (from the French for word, mot).
c. thirteenth century: Johannes de Garlandia
De mensurabili musica (On Measurable Music)
This enormously influential treatise from the second part of the thirteenth century was the first to provide a full treatment of the rhythmic element of music together with its notation. It also treats the polyphony of the Notre Dame school, in which rhythm played an important part. Practical issues of music are here presented in a fully methodical way. About half of the treatise is devoted to rhythmic matters; the rest deals with three species of polyphony. The French theorist Johannes de Garlandia, Magister in Paris at the end of the thirteenth century, probably used an earlier anonymous treatise as the foundation of his work. Most later treatments of mensural theory in the thirteenth century, including De mensuris et discantu by Anonymous 4 and the Anonymous of St Emmeram, are heavily influenced by Johannes' work. With regard to notation, the work treats the form of single notes and ligatures. It also defines pauses and their notation.
[taken from: Select List of Late Medieval Treatises On Music]
early thirteenth century: Motet & (Polyphonic) Conductus - Notre Dame
The fundamental difference between the motet and the conductus was that, unlike the former which was built on a pre-existing plainsong tenor, the conductus used a composed cantus firmus; it was thus an entirely original work although the actual process of its composition was not new. The procedure was still one of a successive accumulation of counterpoints above a foundation voice. This was written for the occasion and although comparable to the tenor of a motet, it had a livelier popular rhythm and was greater in length so that it did not have to be repeated, as in the motet. Being therefore less abstract than the tenor of the motet, this voice was less alien to polyphony, which was to model its proceedings after it. Whereas in the motet the time values of the notes diminished as they ascended from the duplum to the triplum and quadruplum, the conductus had the same rhythm in all the voices and also the same text. Originally the conductus was a monodic processional song intended to accompany the actions of the priest or the faithful during the service; hence its name. But by the end of the twelfth century it had been affected by the advent of polyphony. Although he wrote conductus for one voice, as in the Beata viscera, Pérotin also produced examples for two voices and for three voices, the latter in the very beautiful Salvatoris hodie intended for the feast of the Circumcision. Like the motet, the conductus failed to find a place in the liturgy. After having provided a whole collection of pious but extra-liturgic pieces, it became increasingly profane while retaining the use of the Latin language. A good example of this secularization is the anonymous conductus, Veris ad imperia which, with its refrain Eia and its spontaneous character of a popular dance, is a wonderful evocation of the awakening of nature at Spring's behest. 
All in all, the conductus was the Latin equivalent of the works of troubadours and trouvères and it may be this that explains its decline into total disuse in the first quarter of the thirteenth century. While the 'church' composers preferred the elaborations of the motet form, the troubadours and their successors evolved the polyphonic rondeau.
[taken from: The conductus]
1215: Magna Carta (England), signed at Runnymede.
1220–1258: Salisbury Cathedral construction.
1226: Death of St. Francis of Assisi (b. 1182).
1233: Pope Gregory IX establishes the Inquisition.
1252: Gold currencies (florins) introduced in Florence and Genoa (first gold coins minted in western Europe since the Roman Empire).
1257: Chinese silk becomes available in Europe.
1260: The first mastersinger school is established in Mainz.
From the ninth century onwards, a new kind of music began to appear, in which the older chants were supplemented by additional voice parts of increasingly independent character. The gradual melodic and rhythmic independence of these 'counter parts' led eventually to the rich polyphonic music of the later mediaeval period. From the beginning of the twelfth century, the composers of secular song (the knightly troubadours, trouvères and Minnesingers) and of vocal and instrumental dance music also began to make use of polyphonic settings.
ca. 1263–1269: Marco Polo (ca. 1254–1324) accompanies father, Nicolo Polo, and uncle, Maffeo Polo, to Court of Kublai Khan. The three return to China, 1271–1295.
1265-1321: Italian Florentine poet, Durante degli Alighieri, better known as Dante.
1274: Death of St. Thomas Aquinas (b. 1225).
1291–1515: Expansion of the Swiss Confederation.
1297–1309: Swiss Confederation recognized by the enemies of the Hapsburgs.
1299: Ottoman Empire founded.
thirteenth & fourteenth century: Types of Motet.
Franconian Motet - a thirteenth-century form in which each voice is given a different rhythmic mode - named for Franco of Cologne, a theorist active from ca. 1250 to 1280, who wrote Ars Cantus Mensurabilis (c. 1280), a treatise on rhythm based on the shape of the notes, which allowed for the division of the breve into two or three semibreves, a feature absent from earlier neumatic notation.
Petronian Motet - a thirteenth-century form in which the upper voice is given more freedom and often moves with the rhythm of the words - named for Petrus de Cruce (Pierre de la Croix), active ca. 1270 to 1300, who wrote motets characterised by a triplum voice containing up to six semibreves to the breve, a much faster pace than in the Franconian model described above.
Isorhythmic Motet - a fourteenth-century form which utilizes rhythmic and/or melodic patterns (repeated) in its tenor and/or all other parts.
see Ars Nova below.
After the fourteenth century, any polyphonic composition on a Latin text other than the Ordinary of the Mass was called a motet.
c. 1322-23: Ars Nova (Lat., the new art).
Theorist and composer, Philippe de Vitry [Vitriaco, Vittriaco] (1291-1361) attended the Sorbonne in Paris and was ordained a deacon; held prebends in Cambrai, Clermont, St. Quentin, and elsewhere, and was canon of Soissons and Archbishop of Brie. From 1346 to 1350 he was employed by Duke Jean of Normandy, remaining in his service when the duke became king in 1350. Pope Clement VI appointed him Bishop of Meaux in 1351. Vitry was known in his lifetime as both a poet and a composer, although little poetry, and only a handful of motets, survive; a number of his early motets appear in the Roman de Fauvel. His fame rests primarily on his treatise Ars nova (c. 1322-23), which established a new theory of mensural notation, superseding the practice of the Ars antiqua (old art, also termed Ars vetus), the name used in the fourteenth century for the earlier style typical of twelfth-century Notre Dame organum and of the thirteenth-century motet and conductus. Characteristics of that older style include the predominance of triple meter and a limited rhythmic vocabulary. In his treatise, Vitry recognizes the existence of five note values (duplex longa, longa, brevis, semibrevis and minima), codifies a system of binary as well as ternary mensuration at four levels (maximodus, modus, tempus, prolatio), and introduces four time signatures. He also discusses the use of red notes to signal both changes of mensural meaning and deviations from an original cantus firmus. The Ars Nova is transmitted in four manuscripts, which appear to represent Vitry's work as formulated by his disciples; only the last ten of its twenty-four chapters, those that address mensural rhythm and notation, are original.
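The four-level mensural hierarchy described above lends itself to a small arithmetic sketch (an illustration of the combinatorics only, not of period notation practice; the function name is invented). Each level divides the next-larger note value into two parts (binary, "imperfect") or three (ternary, "perfect"), so multiplying the four levels gives the number of minims contained in one maxima (duplex longa).

```python
# Sketch of Vitry's four mensuration levels as described in the text:
#   maximodus: maxima (duplex longa) -> longa
#   modus:     longa      -> brevis
#   tempus:    brevis     -> semibrevis
#   prolatio:  semibrevis -> minima
# Each level is binary (2, imperfect) or ternary (3, perfect).

def minims_per_maxima(maximodus, modus, tempus, prolatio):
    """Each argument must be 2 (binary) or 3 (ternary)."""
    for level in (maximodus, modus, tempus, prolatio):
        if level not in (2, 3):
            raise ValueError("each mensuration level must be 2 or 3")
    return maximodus * modus * tempus * prolatio

# All levels ternary, the 'perfect' division favoured before the Ars Nova:
print(minims_per_maxima(3, 3, 3, 3))  # 81
# All levels binary, the division the Ars Nova newly legitimised:
print(minims_per_maxima(2, 2, 2, 2))  # 16
```

Mixing the levels yields the intermediate subdivisions that Vitry's four time signatures distinguished.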
1304–1374: Francesco Petrarch, Italian poet and humanist
1305: Giotto (1266-1337), frescos, Arena Chapel, Padua.
1307: Dante (1265–1321), The Divine Comedy.
1314: Gervais du Bus, Roman de Fauvel (satirical poem of over 3000 verses, attacking Church & State); enlarged in 1316 by Chaillou de Pestain, with additional poetry and music.
1326: Pope John XXII forbids the use of counterpoint in church music.
The beginnings of later medieval music can be traced to the Church. The papacy had always been a patron of the arts, especially during the Early and High Middle Ages when the Church dominated medieval life. The composers and innovators of the new musical trends were often church officers, but as the later Middle Ages progressed, composers increasingly came from secular backgrounds and the material world became the driving force behind artistic developments. Even musical styles were subject to this secularisation. For example, the motet, which originated in the religious music of the thirteenth century, quickly moved out of the church and into the courts of the nobility, becoming the dominant form of secular music in the fourteenth century. The motet became so thoroughly secularised that Pope John XXII issued a bull in 1326 forbidding the performance of motets in churches.
[taken from: The End of Europe's Middle Ages ]
1327–1377: Edward III (England).
1334–1336: Giovanni Boccaccio (1313–1375), Caccia di Diana ("Diana's Hunt"), first Italian hunting poem in terza rima.
1337–1453: Hundred Years' War: Series of wars between France and England. In the end, England was expelled from all of France, except Calais. (Begins with war between Philip VI of Valois and Edward III.)
1341: Francesco Petrarca (1304–1374) crowned with Laurel, Rome.
1346: Battle of Crécy.
1347: English capture Calais.
1347–1361: The Black Death (resulting in possibly 24 million or more deaths—about 25%–50% of Europe's population). [See Robert S. Gottfried, The Black Death: Natural and Human Disaster in Medieval Europe (New York: The Free Press, 1983).]
1348: Emperor Charles IV founds University of Prague.
Music of the Late Middle Ages (1350-1500) ::
c. 1300-1377: Guillaume de Machaut.
In comparison with the late thirteenth-century motets, Machaut's motets are longer, their texts more secular and their musical texture filled with even more complex rhythm. The rhythm is actually pan-isorhythmic, within which the hocketus, the fast tempo style of motet performance using the technique of hocket (Fr. hoquet, 'hiccup'), is widely used. The hocket technique uses rests to interrupt the continuous flow of melody in various motet parts, creating an effect of pulsating polyphonic texture. Machaut composed one of the earliest polyphonic settings of the Mass, Messe de Notre Dame, using in it the standard five parts from the Missa ordinaria, the Ordinary of the Mass.
De Chant et de Ditté Nouvelle - Machaut and the French Ars Nova by Hope Greenberg
International Machaut Society - including links on the web
ca. 1351/3: Boccaccio, Decameron.
1356: Battle of Poitiers.
1360: Lull in Hundred Years' War after Treaty of Brétigny.
1363–1404: Philip the Bold (Burgundy).
1364–1380: Charles V (France).
c.1369-c.1453: John Dunstable.
Dunstable was the greatest English composer of the time, and his influence reigned supreme throughout the first half of the fifteenth century. Although most of Dunstable's extant works are religious, it is obvious that he was familiar with all aspects of Continental techniques and was capable of employing them all without subjection to any single style. Nothing is certain about the career of John Dunstable. Some of his earliest works date from c.1410-1420, which would approximate his birth somewhere in the late 1300s. It is widely held as true that Dunstable spent the years from 1422 to 1435 in France as a musician to the Duke of Bedford (a brother of King Henry V and Regent to France during those years). Musically, Dunstable's significant contribution to the theory and practice of composition in the early fifteenth century was the introduction of more melodic music and outlining chords as a part of the melody. This incorporated a more tonal centre in his works and in the music as a whole. He also introduced leaps of a third or even the sixth as consonant and pleasing sounds to the ear. One such piece by Dunstable is the secular song O Rosa Bella which, as Grout says in A History of Western Music, can "illustrate the expressive lyrical melodies and the clear harmonic profile of the English music of his time." The presence of chant in this time period is still rather common. Dunstable is well known for his combination of the sequence Veni sancte spiritus with the hymn Veni creator. This four-part motet is one of his most famous pieces. However, the work Quam pulchra es, which consists of three free voices, demonstrates Dunstable's creativity and ability to compose free of a chant melody. The three voices of this piece move in the same basic rhythm and usually enunciate the same syllable to help outline a general form, but still move individually and lyrically. Dunstable has also received credit for writing a number of carols.
Carols are uniquely English compositions that, although not folksongs, have the quality of simple two- or three-part harmonies and melodies that emphasize the text. They also contain a refrain between each stanza. The text is either English or Latin, or both. John Dunstable was a prominent composer who influenced both the composers of his time, like Leonel Power, and those to follow, like Dufay and Binchois.
[taken from: John Dunstable]
John Dunstable Web Presentation
ca. 1370: Sir Gawain and the Green Knight.
(and after): Geoffrey Chaucer (ca. 1343–1400), Canterbury Tales, written initially as unrelated fragments, later assembled together.
1377: Papacy reestablished in Rome, Pope Gregory XI.
1377–1399: Richard II (England).
- 1377-1445: Oswald von Wolkenstein
Austrian poet and composer, the one-eyed von Wolkenstein left home when only ten years old and, as a soldier, travelled to France, Spain, Italy and even as far as the Nordic and Slavic countries and to Asia. After his father's death, and in order to extend his lands and consolidate his position, he spent time, from 1415, in the service of the German King (from 1433 Emperor) Sigismund, whom he accompanied to the Council of Constance (in which he played an important part) and on numerous diplomatic missions. Never an easy personality, he became embroiled, between 1421 and 1427, in a series of bitter quarrels with other landowners. His wild and lawless behaviour led to a period of imprisonment, but from 1430 to 1432 he was again involved in politics, and attended the Council of Basle, after which he retired to his estates and gave up writing music and poetry. By the beginning of the fifteenth century, the German minnesong had already been, for a century, in the hands of 'masters' such as Frauenlob and Regenbogen who gave the rhyme-verse minnesong its devotional character, called the 'master-song'. In France, music developed constantly - the solo song was followed by the chanson, accompanied by instruments, and together with the older motet, a wide range of rhythmic and melodic possibilities was explored to achieve astonishing but subtle elaborateness. The techniques of counterpoint also developed, although the results were initially awkward. The first to study and use the new French notation in Germany was the 'Monk of Salzburg' (late fourteenth century), who lived a generation before Oswald. However, despite its 'French' appearance the whole construction in general and detail adhered to the German style, extending to the use of the 'flowers' (coloraturas) of the mastersingers. Oswald von Wolkenstein revolutionized the German tradition. He always used French notation, and, of decisive importance, he used duple and triple metre simultaneously.
He also used a mensural notation similar to that which we still use today. Only in one area does he follow a conservative path. Like the minnesong and master-song, the verses in each stanza do not form a continuous rhythm; each line is self-contained, followed by one of a new rhythmic construction. This is the musical form still used today in many Protestant chorales, with fermatas on the last note of each line. No less important, Oswald's achievements in polyphonic music moved the genre on from the primitive compositions of 'The Monk of Salzburg' to works that matched the full sophistication of contemporary French composition. In part, he did so by using a number of French compositions as models, merely replacing their original text with German words. But from this he and the generations after him were to produce their own unique compositions.
[taken from Oswald von Wolkenstein]
late fourteenth to early fifteenth century: Ars Subtilior (Lat., the more subtle art) - Northern Italy, Southern France, Cyprus
The term used to describe the musical style of the late fourteenth century, specifically that of French composers such as Cuvelier, Philippus de Caserta, and Jacob de Senleches, who lived after Guillaume de Machaut. These composers refined the notational features of the ars nova period to produce a more sophisticated and more rhythmically complex style. The Chantilly Codex, apparently compiled shortly before 1400, is easily the most famous manuscript of the ars subtilior. The bulk of the works apparently date from c. 1370-95, with the possible exception of Baude Cordier's famous "puzzle" rondeaus added at the beginning of the manuscript. It has been suggested that Cordier (flourished 1384-1398) was the editor for the codex. The primary locations at which this music was written were the courts of the Antipope in Avignon and of Foix, both in southern France. The Papal schism lasted from 1378 to 1417. The items in the manuscript include some songs dating back to Machaut and his contemporaries, and then later pieces for which Machaut's most elaborate songs apparently served as inspiration. The rondeaus of Cordier are notated in the shape of a circle and a heart and represent this style at its most obscure.
Codex Chantilly and l'ars subtilior
Music of the ars subtilor
The courtly culture and music that blossomed on the island of Cyprus reached its climax in the years between 1359 and 1432. Pierre I de Lusignan (died 1369) entered history as Cyprus's 'sun-king'. His fame in Europe was mainly due to an extended three-year tour he made there. During this journey, Pierre became acquainted with the most important centres of European musical activity. No less a figure than Guillaume de Machaut wrote a chronicle 8000 lines long in honour of this nobleman, La Prise d'Alexandrie [The Conquest of Alexandria]. Wherever the Cypriot court passed during this European tour, Pierre I was greeted with the highest honours. On his arrival in Avignon (March 29, 1363), Froissart relates that he "was received most sincerely, piously, and very honourably". He continues: "All the cardinals, the clergy of the city and all the holy colleges went to meet [Pierre I] with crosses and miters with holy water and a very grand profusion of relics and saints' statues, and great was the pomp before him..." The band of musicians in the retinue of Pierre I de Lusignan also caused great excitement during this tour. They so pleased Charles V in Rheims that he donated 80 francs in gold "for the musicians of the King of Cyprus". The spectacular journey was not without its effect on the music on Cyprus, for after his return Pierre I extended what was to become a lasting influence. Until far into the fifteenth century, the musical life at the court of Nicosia could not be imagined without the French ars nova, and later the ars subtilior. Many French musicians and composers were active at the Cypriot court, and Nicosia became one of the most important centres of the Ars Subtilior style.
The Medieval Music of Cyprus from which extract has been taken
1380: John Wycliffe (and others), first English translation of the Bible.
1391–1399: First Ottoman Siege of Constantinople.
1397: Turkish Invasion of Greece.
1399–1413: Henry IV (England).
c. 1400: Aztec music
Mesoamerican religion reflects the belief that all things have a life force, and that ancestors and the gods can be invoked to help the living. Rituals included blood sacrifice, the burning of copal incense (an aromatic tree resin), drinking, music and dance. Maya rulers were also shamans who communicated directly with the gods, sometimes achieving trance with the help of hallucinogens, sometimes seeming to transform themselves into the gods or their animal counterparts. Archaeological understanding of Precolumbian beliefs is enriched by the Conquest period books written by Spanish priests and their native informants, and by the Popol Vuh, the Highland Guatemala Maya creation myth recorded in the sixteenth century from what may have been a much earlier tradition.
[taken from: Mesoamerica]
[image: Aztec flute, Central Mexico, AD 1400-1519; pottery] Music was an important part of all ritual. Musicians used rattles, drums, conch shell trumpets, whistles and flutes.
ca. 1415: Tres Riches Heures, completed by the Limbourg brothers for the Duc de Berry.
1415: Battle of Agincourt.
1417–1436: Brunelleschi, Dome of Florence Cathedral.
1419: Alliance between Burgundy and England.
1419–1467: Philip the Good (Burgundy).
1420: English occupy Paris.
after 1428: Donatello, David.
1431: Jeanne d'Arc executed.
1432: Jan van Eyck (Burgundian–Flemish painter, 1386–1440), the Ghent Altar-piece.
1436: French recapture Paris (from English).
1436: Filippo Brunelleschi, Dome of Florence Cathedral completed.
ca. 1445: Johann Gutenberg (ca. 1400–1467), invents printing with movable metallic type; first Bible, ca. 1455.
1447–1455: Pope Nicholas V established Vatican Library.
- c. 1452-60: The Lochamer Liederbuch
A German manuscript song collection copied in or near Nuremberg, containing mostly monophonic Lieder but with some polyphonic examples. The manuscript also contains the Fundamentum organisandi of Conrad Paumann, a German organist who won great renown as a performer on many instruments. In 1470, he visited the court in Mantua, but when both the Duke of Milan and the King of Aragon desired his services he declined, fearing reprisals by competing Italian organists. His treatise, written earlier, in 1452, elucidates the embellishment of chant in keyboard style, and contains arrangements of chants and secular melodies.
1453: End of Hundred Years' War.
1453: Battle of Castillon—English driven from France; Turks capture Constantinople.
1455–1487: Wars of the Roses (England).
1456: Ottomans occupy Athens.
1467: Charles the Bold becomes Duke of Burgundy.
1470: Printing presses set up at the Sorbonne, Paris, and at Utrecht.
1476: Liber de natura et proprietate tonorum by Johannes Tinctoris (c. 1436 - c. Oct 12, 1511)
Tinctoris was a Franco-Flemish musical theorist and composer. While employed as a singer at Cambrai in 1460, he met Dufay. By 1475 he had moved to Italy, serving at the court of Ferdinand of Sicily and Aragon. He is known to have returned to France and his homeland in 1487, but he probably remained in his Italian post till his patron died in 1494. Later he was appointed a canon of Nivelles. He was the most important theorist of his time, writing twelve treatises of which two were printed. His surviving musical output consists of four Masses, two motets, a Lamentation setting, seven chansons and one Italian song.
In Liber de natura et proprietate tonorum, Tinctoris set out his rules of counterpoint. This book must stand, therefore, as one of the most prominent and credible authorities furthering our understanding of fifteenth-century musical composition. No account of music of this time can be contemplated without regard to his comprehensive writings. He may well provide not only an important insight into modal thought of the time, but also bring these concepts and practices into close juxtaposition with the very issues of recta and ficta. Did composed music have a modal basis? If so, then how were views on recta and ficta use affected?
Tinctoris described and named, in Chapter 1, the eight modes. He concluded the chapter by explaining how these were grouped into the four categories, Protus, Deuterus, Tritus and Tetrardus. In order to allay any fear that this book would be interpreted as yet another account of ecclesiastical chant and its perceived modal categories, his final sentence in Chapter 1 reads as follows:
Hii autem sunt octo toni, quibus non tantum in cantu gregoriano qui simplex est et planus, verum et in omni alio cantu figurato et composito utimur, circa hoc in libello nostra fert intentio.
[These however are the eight modes, which we use not merely in Gregorian Chant which is simple and plain, but also in all other figured and composed songs, about which is our purpose in this booklet.]
Tinctoris' definition of mode (tonus) at the beginning of chapter 1 is short and sweet:
Tonus itaque nihil aliud est quam modus per quem principium, medium et finis cuiuslibet cantus ordinatur.
[Mode accordingly is nothing other than the manner by which the beginning, middle and end of any song whatsoever is arranged.]
[some material taken from: Mode versus Ficta - in context by Roger Wibberley]
1477: Death in Battle of Nancy of Charles the Bold, last Duke of Burgundy (1467–1477). Maria (of Burgundy) marries Maximilian (later Maximilian I), son of Frederick III (Emperor of Austria). Burgundy becomes part of Austrian Empire.
First book printed in England, William Caxton's Dictes and Sayings of the Philosophers.
ca. 1477: Sandro Botticelli (Alessandro di Mariano dei Filipepi) (1444–1510), Primavera.
1478: Ferdinand and Isabella, with authorization of Pope Sixtus IV, establish Spanish Inquisition.
1479: Marriage of Ferdinand V of Aragon and Isabella of Castile.
- c. 1480: The Glogauer Liederbuch
The first German manuscript song collection to be written out in partbooks. The Lieder, in 3 or 4 parts, are divided equally between sacred and secular texts, but a quantity of pieces are apparently for instrumental ensemble, probably the earliest in this genre to survive.
1485: Battle of Bosworth, death of Richard III, ends Wars of the Roses, Henry Tudor crowned King of England (Henry VII).
ca. 1485: Botticelli, The Birth of Venus.
1487: Bartholomew Diaz sails around the southern tip of Africa (then called the Cape of Storms; renamed Cape of Good Hope by John II of Portugal).
c. 1490: Burgundian dance.
Our primary knowledge of Burgundian dance in the late Middle Ages is based on a manuscript housed in the Brussels Bibliothèque Royale, Les Basses danses de Marguerite d'Autriche, published c. 1490. The manuscript, printed on black paper and with gold and silver calligraphy, contains music and a shorthand form of tablature for the description of more than fifty basses danses. Popular from the fourteenth century to the second half of the sixteenth, the bassedanse (It., bassadanza) was a regal processional dance consisting of only five steps. The simplest components were single steps and double steps (notated ss and d). These were walking steps that progressed forward or backward. The single step consisted of a step and weight change; the double was composed of three steps. Each step was punctuated by a slight rising and lowering of the body. The branle (notated b) was a sideways step performed with a slight swaying motion. The reprise or démarche (notated z, or s in other sources) was a backward step; and the révérence (notated R) was the formal bow or curtsy. No floor patterns were provided in this manuscript, but the bassedanse was usually danced with one couple standing behind another, partners holding inside hands. Delicate and tranquil in style, the bassedanse was intended to be danced by an unlimited number of noble performers, and its small steps perfectly accommodated the lady's long train and the exaggerated, pointed toes of the gentleman's shoes, known as poulaines. (For a late sixteenth-century description of the bassedanse, see Thoinot Arbeau's 1588 treatise, Orchesographie.) Soft, mellow musical instruments such as the vielle (a bowed string instrument) or recorders were used for small, indoor occasions.
The most popular musical accompaniment, however, consisted of an ensemble of three loud, shrill instruments: two were double-reed woodwind instruments called shawms (the forerunner of the oboe) and one was the sackbut, a brass instrument that later was developed into the trombone. One shawm played the notes of the music (tenor melody), while the other instruments improvised on the tenor. (For further reading on medieval and early Renaissance Burgundian, Italian, and French court dance, see the bibliography in the reference below.)
[taken from: Burgundian Dance in the Late Middle Ages]
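The shorthand tablature described above lends itself to a small illustration. The letter-to-step mapping below follows the notation given in the text (s, d, b, z, R); the function name and the sample step sequence are our own invention, not drawn from the manuscript.

```python
# Decode a bassedanse sequence written in the shorthand tablature
# described above: s = single step, d = double step, b = branle,
# z = reprise (démarche), R = révérence.
STEP_NAMES = {
    "s": "single step",
    "d": "double step",
    "b": "branle",
    "z": "reprise (démarche)",
    "R": "révérence",
}

def decode_tablature(sequence):
    """Expand a tablature string such as 'Rbssd' into a list of step names."""
    steps = []
    for letter in sequence:
        if letter not in STEP_NAMES:
            raise ValueError(f"unknown tablature letter: {letter!r}")
        steps.append(STEP_NAMES[letter])
    return steps

print(decode_tablature("Rbssd"))
# ['révérence', 'branle', 'single step', 'single step', 'double step']
```

A real bassedanse would open and close with the révérence; note also that some sources use s rather than z for the reprise, so a decoder for another manuscript would need a different mapping.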
1491–1492: Siege of Granada, Moorish troops finally expelled from Spain. 200,000 Jews expelled from Granada.
1492: Christopher Columbus (1451–1506) arrives in new world (Bahamas).
Second journey to Caribbean, 1493.
1493: Maximilian I becomes Austrian Emperor.
1494: Ludovico Sforza becomes Duke of Milan.
1494: Albrecht Dürer travels to Italy; then returns to Nuremberg.
1495: Expulsion of Jews from Portugal.
1495–1498: Leonardo da Vinci (1452–1519) paints his Last Supper in the refectory of Santa Maria delle Grazie, Milan.
1497–1498: Vasco da Gama finds sea route to India.
1498–1500: Columbus' third voyage—to Trinidad and coast of South America.
1499: Swiss independence recognized by the Empire (Peace of Basle). French expel Ludovico Sforza from Milan. Amerigo Vespucci and Alonso Hojeda sail to the mouth of the Amazon River.
Music of the Proto-Renaissance ::
Trevor Dean, writing about the British historian Philip Jones (1921-2006) and his writings on the Italian city-states of the thirteenth to fifteenth centuries and on Italian economic history, tells us that, so far as the historical problem of the "renaissance" was concerned, Jones believed it was totally inadequate to start in the fifteenth, or even the fourteenth, century, or to conceive of it in narrowly cultural terms. The "renaissance" had to be set against the much longer, larger and more important history of the transformation of the classical world and of various forms of "revival", political, economic and social, from the twelfth century onwards. If a renaissance happened at all, it occurred in the twelfth and thirteenth centuries, and its content had more to do with the rebirth of an urban ruling class and of the republican city-state than with painting and philosophy. The recreation of an "economically developed, urbanised, unified, European and Mediterranean society . . . was the real Renaissance", Jones wrote.
The very beginnings of the Renaissance period can be traced back to around 1150 in northern Italy. Some texts refer to the years from 1200 to the early or mid-fifteenth century as the "Proto-Renaissance", while others lump this time frame in with the term 'Early Renaissance'. The first term seems more sensible, so we're borrowing its use here. Following Justinian's reconquest of Italy in the sixth century, Italy was left largely depopulated, and until the late eleventh century most of the population lived on the land, with relatively few living in towns or cities. A resurgence of urban living in the twelfth century, particularly in great commercial trading cities such as Venice, Florence, Genoa and Siena, the intermediaries in the trade between the Muslim and Byzantine states to the east and central Europe to the north, and the wealth that this trade and its attendant services like banking created, led to the formation of city-states: regions ruled centrally by a single city, whether republics (as was the case with Florence, Venice, Genoa and Siena) or duchies (Milan and Savoy). Through most of the late middle ages, Italy had been fought over by the Pope and the Holy Roman Emperor, each so intent on the other that neither missed an opportunity to use, politically and strategically, the growth of powerful self-governing regions to strengthen their own positions. By the beginning of the Renaissance, there were five major players in city-state politics: the Papal States (or Romagna) ruled by the Pope, the republics of Firenze (Florence) and Venezia (Venice), the kingdom of Napoli (Naples), and the duchy of Milano (Milan).
One of the more interesting developments in sixteenth-century music theory was the revival of certain aspects of ancient Greek musical practice. This revival was part of a widespread curiosity about Greek culture and learning, a curiosity absent three centuries earlier when Toledo fell to Christian forces and, with it, the great wealth of ancient knowledge preserved by the Moors. This curiosity was encouraged further by the migration of Byzantine scholars after the fall of Constantinople to the Turks in 1453. The application of Greek thought to music lagged behind other fields, such as literature and political theory, because of the technical nature of many Greek writings on music and the lack of existing translations from the Greek into either Latin or the vernacular. In fact the only widely available text that preserved Greek music treatises in Latin was the De institutione musica of Boethius. By the end of the proto-renaissance, towards the middle of the fifteenth century, when the recently invented art of printing was applied to monophonic music and, a few decades later, to polyphony also, and in particular to the polyphony of the later decades of the fifteenth century, music was appearing both in the form of prints and as manuscripts. Among the most important of these printed sources are the prints of secular music published by Ottaviano de' Petrucci. Petrucci is the man whose position as a printer of music could be said to be analogous to that of Gutenberg as a printer of books. Even though Petrucci was not the first to print music, or even the first to do so from movable type, he was the earliest to accomplish printing in an important way with respect to music other than plainsong.
The Ars Nova of the Trecento
We have good evidence that, notationally at least, trecento Italian music was already using a sophisticated notational system to rival that proposed in France by Philippe de Vitry. Both the Italian and French methods are described by Marchettus of Padua, a musical theorist who was, from 1305 to 1307, maestro di canto at the Cathedral of Padua. His principal theoretical works are the Lucidarium and the Pomerium, two treatises that provide the most complete known fourteenth-century explanation of Italian trecento theory. The Lucidarium, which covers the basics of traditional music theory and of plainchant, includes an original and highly influential section setting forth a division of the whole tone into five parts. The Pomerium deals with mensural music, emphasizing notation in the Italian manner but also pointing out with approval certain aspects of the French system. Italian notation, possibly developed from the notational system of Petrus de Cruce, which allowed changes in time more than once in the course of a piece, suited the display of strings of short notes in coloratura style, a feature of Italian music from this period. The French notation allowed for jerky dance rhythms which were frequently interrupted by rests or disturbed by syncopation. So, while many ideas appear in both traditions, to some extent the notation employed was that best suited to the needs of the composer, and further development, whether borrowed from another system or as something completely original, followed as the composer's imagination expanded. But, one might ask, what inspired composers to try out new ideas? Why were the needs of the French so different from those of the Italians? All music of this period was essentially vocal.
Whether religious or secular, the texts to some extent determined the style that best suited their expression through the medium of music, and Italian ideas about the use of words, as for example in lyric poetry, led composers to write more expressive, flowing lines in their settings of them. Pope John XXII's 1326 bull, forbidding the use of motets in church, may have come about as much from the growing association of the motet style with secular subjects as from the way in which the new contrapuntal style tended to obscure the message the religious texts conveyed. The result was that Italian composers could not compete with the northern European composers' tradition, that they became mere imitators, and that while Guillaume Dufay continued to write isorhythmic motets in Italy, Dunstable and the English school of composers established a tradition for the free liturgical motet with their melodic and harmonic novelty. It would not be until 1577 that Palestrina would be asked to rewrite the church's main plainchant books, following the Council of Trent's guidelines. His most famous mass, Missa Papae Marcelli, was probably composed to satisfy the council's requirements for musical cogency and textual intelligibility, and thereby established the classic model of Renaissance polyphony that, in the hands of Costanzo Porta (c. 1529-1601) and others, persisted through to the threshold of the seventeenth century.
The Madrigal and Caccia
The caccia, like the French chace, is canonic, but while the French form has all three voices in canon, the Italian employs canon only in the upper two parts. A concluding ritornello, of one or two lines, returns repeatedly at the end of each stanza. Coloratura features too. The subjects chosen for the caccia texts are naturalistic: the barking of dogs in the hunt, the ringing of bells, the sounding of horns and the shouting during a fire, the fanfares of trumpets in a battle. Giovanni da Firenze (who flourished in the fourteenth century and is also known as Johannes de Florentia and Giovanni da Cascia) wrote two exciting cacce, Per larghi prati and the especially fine Con bracchi assai, which describes a quail hunt.
The madrigal, very different from that of the sixteenth century, began as a two-part polyphonic form, not unlike the conductus, consisting of a number of short stanzas concluding, like the caccia, with a short ritornello. Sometimes the ritornello is marked off from the rest of the piece and includes a change of time. Coloratura passages can also be found. Later, Landini and others wrote three-part madrigals, although these were always rarer. Jacopo da Bologna (flourished c. 1340-55), an Italian composer also described as a virtuoso harpist, served at the courts of Mastino della Scala, the tyrant ruler of Verona (where he was involved in a contest with Giovanni da Firenze), probably between 1340 and 1345, and at the Visconti court in Milan, possibly from 1345 to 1355. He helped to give the Italian trecento style its impetus and wrote a treatise on notation as well as composing approximately thirty-five surviving pieces, most of which are madrigals (though he also composed motets and a caccia). He was Landini's teacher. One of his best-known works is the madrigal Fenice fù, which as well as being a lyrical setting of its text also contains much use of imitation. His Non al suo amante is the only surviving Petrarch setting from the fourteenth century. The text of another madrigal suggests that singing should be smooth and sweet, not loud and raucous. The text of Uselletto selvaggio attacked amateur composers and musical theorists who wrote and commented upon French and Italian ars nova music. He tells us that the world is full of little masters who write a few madrigals or motets and consider themselves Philippe de Vitrys or Marchettus de Paduas.
Although popular in the first half of the century, the madrigal went out of favour as the century progressed. The growing influence of French music, centred on Florence, is exemplified by an Italian treatise for the use of girls at a Florentine convent, which concerns itself entirely with French methods of notation and rhythm and quotes pieces from French sources, and by the Chantilly Codex, with its repertoire of French music, which may have been written in Italy, probably at Florence. Johannes Ciconia (c. 1335-1411), the Franco-Flemish composer of vocal music, was active mostly in Italy but for a period in Liège. He was one of the more important composers to begin the movement from the complex and rhythmically animated lines of the late Medieval period (the French style) to the smoother harmonic contours of the early Renaissance (the Italian style), a synthesis that would be explored to even greater effect by Guillaume Dufay (1397-1474). One of Paolo da Firenze's compositions appears in the Lucca codex, but, after Paolo, the madrigal and the caccia, in its canonic form, were abandoned altogether. The ballata, like the French virelai originally monodic, achieved popularity during the second half of the fourteenth century as a work for three voices, and from then on became the sole Italian secular musical form. For a time, two-part ballate were still being written, most notably 99 by Landini. But the move from two-, to three- and later to four-voice writing showed how, from virelai-like monody, through two-part writing drawn from the French conductus and organum tradition, via three-part ballate, to French-inspired four-part writing, Italian polyphony was ever open to exploring new musical forms.
- While we have no French instrumental music from this period, and barely more than a handful of arrangements and estampies from England, a relatively large quantity of Italian instrumental music has survived in a manuscript dating from about 1400, in the form of keyboard arrangements of vocal music and monodic pieces for solo wind or stringed instruments, including eight estampies, four saltarelli, a trotto and two compositions called Lamento di Tristano and La Manfredina. The keyboard arrangements are always in two parts, no matter how many parts the original vocal piece had, the left-hand part always slow-moving, while the right hand is full of rushing semiquavers (sixteenth notes). Among the composers represented are Guillaume de Machaut, Pierre des Molins, Jacopo da Bologna, Bartolino da Padua, Francesco Landini, and Antonio Zacaria da Teramo. Transcriptions and imitations of such music constitute one of three categories of renaissance instrumental music. The two others, dance music and improvised music (these categories frequently overlapping), include fifteenth-century monophonic dance music that survives in contemporary Italian dance treatises. Domenico of Piacenza, the dancing master at Ferrara, whose pupils spread his art all over Italy shortly after 1450, was clearly a central figure. Among his disciples was Guglielmo Ebreo of Pesaro (perhaps the person known also as Giovanni Ambrogio da Pesaro), who taught at Florence and whose book is one of several known Italian dance manuals of the period. It includes a number of tunes as well as choreographic directions for many dances, two of which are credited to Lorenzo de' Medici and many to Domenico. Another dance theorist of the time whose treatise likewise includes tunes is Antonio Cornazano. The 1400 manuscript aside, there is still a scarcity of instrumental music surviving from fifteenth-century Italy. What other sources might there be that can show us the role instrumental music played in fifteenth-century Italy?
Contemporary commentaries indicate that music was used in church services, festivities, receptions and social gatherings. At the celebration of the wedding of Costanzo Sforza and Camilla of Aragon at Pesaro in 1475, the guests heard not only two antiphonal choruses of sixteen singers each, but "organi, pifferi, trombetti ed infiniti tamburini" [organs, shawms, trumpets and countless drums]. When Galeazzo Maria Sforza, Duke of Milan, went to Florence in 1471, he took along forty players of 'high' instruments. The nobility retained numerous instrumentalists to accompany their singers and to render solos and ensemble music. Instrument collections in the various palaces included lutes, viols, harps, flutes, and so on. From the beginning of the fourteenth century, and maybe even earlier, bands of wind instruments were employed by cities such as Florence and Lucca. Civic records include references to trumpeters, pifferi, and bagpipe players. Wind bands seem to have had about eight or nine players.
The oldest Florentine organ for which we have evidence was built c. 1299. From the trecento, Tuscan organ builders enjoyed an unmatched reputation while Venice produced one of its greatest organ builders, Fra Urbano, who constructed a famous organ for St. Mark's in 1490 and remained active for more than forty years. As this was additional to an earlier organ there, the quattrocento saw Venice able, like Naples, to boast antiphonal organ-playing. At Brescia, four generations of the Antegnati family worked as builders. In addition, numerous Germans and Frenchmen competed with native organ builders to build instruments in Italy.
Stringed keyboard-instruments were to be found in Italian homes, especially to accompany frottole. Some late fifteenth-century makers are known by name. The Antegnati, famous for their organs, also made lutes and viols, and may have established the high standard of Brescian lute- and viol-making from about 1495. Tinctoris states that the viol was used generally "for the accompaniment and ornamentation of vocal music and in connection with the recitation of epics." But in fifteenth-century Italy it was also used independently and in viol and mixed ensembles.
Morning is the period of time between midnight and noon or, more commonly, the interval between sunrise and noon. Morning precedes afternoon, evening, and night in the sequence of a day. Originally, the term referred to sunrise.
The name (which comes from the Middle English word morwening) was formed on the analogy of evening, using the word "morn" (in Middle English morwen), and originally meant the coming of the sunrise, as evening meant the beginning of the close of the day. The Middle English morwen dropped its ending over time and became morwe, then eventually morrow, which properly means "morning" but was soon used to refer to the following day (i.e., "tomorrow"), as in other Germanic languages; English is unique in restricting the word to the newer usage. The Spanish word "mañana" has two meanings in English, "morning" and "tomorrow", as does the word "morgen" in Dutch and German. Max Weber (General Economic History, p. 23) states that the English word "morning" and the German word "Morgen" both signify the size of land strip "which an ox could plow in a day without giving out"; "Tagwerk" in German and "a day's work" in English mean the same. A "good morning" in this sense might mean a good day's plow.
Significance for humans
Some languages that use the time of day in greeting have a special greeting for morning, such as the English good morning. The appropriate time to use such greetings, such as whether it may be used between midnight and dawn, depends on the culture's or speaker's concept of morning.
Morning typically encompasses the (mostly menial) prerequisites for full productivity and life in public, such as bathing, eating a meal such as breakfast, dressing, and so on. It may also include information activities, such as planning the day's schedule or reading a morning newspaper. The boundaries of such morning periods are by necessity idiosyncratic, but they are typically considered to have ended on reaching a state of full readiness for the day's productive activity. For some, the word morning may refer to the period immediately following waking up, irrespective of the current time of day. This modern sense of morning is due largely to the worldwide spread of electricity, and the concomitant independence from natural light sources.
The morning period may be a period of enhanced or reduced energy and productivity. The ability of a person to wake up effectively in the morning may be influenced by a gene called "Period 3". This gene comes in two forms, a "long" and a "short" variant. It seems to affect the person's preference for mornings or evenings. People who carry the long variant were over-represented as morning people, while the ones carrying the short variant were evening preference people.
- Online Dictionary Definitions of "morning"
- Origin of the phrase "Good Morning"
- Etymology of the word "morning"
- Weber, Max (1961). General Economic History. New York: Collier Books. p. 23.
- "Why some of us are early risers". BBC News. London. 2003-06-17. Retrieved 2008-01-30.
- Gene determines sleep patterns
Characteristics of an Effective Teacher
as an Enquiry is developed
- They have well-articulated goals and rationales. They encourage parents and others to voice their questions and concerns.
- They share what they know about an issue but also acknowledge what they do not know. They encourage class efforts to look for answers.
- They teach complexity and try not to protect students from it. They recognise the difficulty of complexity, even for adults.
- They teach multiple perspectives and explore alternative views on issues. They use disagreements between perspectives to spur further clarification and research.
- They are aware of their own feelings and opinions about an issue. They make it clear to students what their views are and that it is okay if other people disagree (especially students).
Invasive exotic plants are plants that have been transported outside their normal home ranges and that cause damage or harm in their new locations. In their new homes, these alien species are free from the natural competition, herbivores, insects and diseases that normally keep populations in check. As a result, many exotic species can spread rampantly, displace native plant species and become nuisances. Invasive species are recognized as one of the leading threats to biodiversity and may impose enormous costs on forest managers.
Increasing global trade and widespread use of non-native plants for horticultural and landscaping purposes has contributed to the enormous challenge we now face. While many plants are accidentally introduced into new areas, many more are purposefully transported outside of their home range for a variety of reasons. Not only do these plants have the potential to become established and invasive in their new environments, but they may also carry with them exotic forest pests, such as insects and pathogens, which can have devastating consequences on natural ecosystems. Many of the worst insect and disease epidemics in the history of our nation's forests have begun with the introduction of plant materials from other countries or regions.
Many invasive plants reduce biodiversity by occupying habitat normally utilized by native species. More aggressive invasive species can actually displace natural vegetation by growing so densely as to prevent reproduction by native species, or by physically overtaking natural vegetation, as is observed with kudzu. The long-term effects of invasive plants on biodiversity are just beginning to be understood. Forests are complex systems of interacting organisms; the loss of one plant species can affect many other plants, animals, and microorganisms. The interactions are so complex that often we do not realize their extent until the devastating consequences of biodiversity reduction are observed.
Some invasive plants are a nuisance or even a danger to humans. Invasives are particularly good at invading disturbed sites not occupied by natural vegetation, making regeneration of forests after thinning or harvesting difficult. Likewise, landscape plantings and gardens can easily be overtaken. Some exotics produce chemicals or cause allergic reactions that may harm humans, while others create dangerous fire hazards because of highly flammable plant tissues and the build-up of large quantities of combustible fuels.
The North Carolina Forest Service strongly urges you to utilize native plants in your landscapes, gardens, and forests; and to remove exotic invasives when observed. Some exotic invasive plant species are very difficult to eradicate. Please contact your local NCFS office for assistance with identification and eradication of exotic invasives, and recommendations for native plant species to suit your needs.
Porcelainberry (Ampelopsis brevipedunculata)
Oriental Bittersweet (Celastrus orbiculatus)
English Ivy (Hedera helix)
Cypressvine Morningglory (Ipomoea quamoclit)
Japanese Honeysuckle (Lonicera japonica)
Kudzu (Pueraria montana var. lobata)
Chinese/Japanese Wisteria (Wisteria sinensis / Wisteria floribunda)
Invasive Herbs and Grasses
Garlic Mustard (Alliaria petiolata)
Sericea Lespedeza (Lespedeza cuneata)
Japanese Stiltgrass (Microstegium vimineum)
Chinese Silvergrass (Miscanthus sinensis)
Japanese Knotweed (Polygonum cuspidatum)
Guided Inquiry Design
Authors: Carol C. Kuhlthau, Leslie K. Maniotes, Ann K. Caspari
Download the Presentation: Guided Inquiry Design Framework
For Guided Inquiry professional development, training workshops, residential institutes, and book clubs, contact Dr. Leslie Maniotes – [email protected]
Guided Inquiry is an innovative team approach to teaching and learning where teachers and school librarians, with other experts and specialists, join together to design and implement inquiry learning. It engages children in constructing personal knowledge while using a wide range of sources of information and creatively sharing their learning with their fellow students in an inquiry community. Guided Inquiry Design is grounded in the research of the Information Search Process (ISP) that describes students' process of learning from a variety of information sources in extensive research projects. The ISP research goes inside the inquiry process to reveal ways to guide students in deep, engaging learning.
Research Foundation of Guided Inquiry Design
For many years I have been conducting research on the process of learning from a variety of sources of information (Kuhlthau 1985, 2004, 2005). The second edition of Seeking Meaning (2004) is a good summary of this work. My studies opened insights into students’ perspective of their experience in research projects. I investigated students’ feelings as well as their thoughts and actions while they were in the stages of learning from various sources of information described in my model of the Information Search Process (ISP). What is too often thought to be a simple report or a routine term paper assignment was found to be a complex inquiry process that requires guidance, instruction, and assistance for optimal learning for every child.
In these studies I investigated students’ thoughts, feelings and actions while they were involved in extensive research projects. I found that they progress through six identifiable stages, which I named for the main task to accomplish in each stage, plus a seventh assessment stage.
- Initiation: initiating a research project
- Selection: selecting a topic
- Exploration: exploring for focus
- Formulation: formulating a focus
- Collection: collecting information on focus
- Presentation: preparing to present
- Assessment: assessing the process (Kuhlthau, 1985)
Six Stages in the Information Search Process (ISP)
These studies showed that students’ thoughts are charged with emotions that influence the actions they take. Students experience a dip in confidence and an increase in uncertainty when they least expect it, during the Exploration stage. They often expect to be able to simply collect information and complete the assignment. This simple view of the research process sets up stumbling blocks, especially in the Exploration and Formulation stages. When their expectations do not match what they are experiencing, they become confused, anxious and frustrated. The early stages of the ISP reveal the struggle they experience in learning in an extensive inquiry project. Feelings are important and indicate when they are having difficulty and when they are doing well on their own (Kuhlthau, 2004).
Let’s take a closer look at each stage in the ISP and what it tells us about guiding inquiry.
Initiation: Initiating a Research Assignment
Students often feel apprehensive and uncertain about what is expected of them and overwhelmed at the amount of work ahead. Talking with classmates is a natural action to take, but some feel they should be “going it alone” and that checking with others might not be “entirely fair.” It is important to make sure that students understand that it is not only fair to talk about their ideas and questions, it is necessary to have these conversations at this stage to begin to get their thinking going.
Selection: Selecting a Topic
Many students want to select a specific topic or question quickly and dive right into collecting information and completing the assignment. This is where students can go astray right at the beginning. They need lots of groundwork before they can form meaningful questions that they want to pursue and that are worth investigating. Selection is a time for introducing and expanding on the general topic to be researched.
Exploration: Exploring for a Focus
In preparation for forming important questions, students need to build background knowledge on the general topic and to discover interesting ideas. A common problem is that many students skip over the Exploration and Formulation stages and attempt to move on to the Collection stage without having formed a focus for their research. For most students, the Exploration stage is the most difficult in the research process. As they browse information about their topics, they become confused by ideas that don’t fit together. They encounter inconsistencies and incompatibilities of different perspectives and differing points of view. They have difficulty determining importance from everything in a text. They need to understand that there are different kinds of reading for different stages in their learning process. At this stage they are exploring for interesting ideas rather than collecting detailed information. They need to learn to browse through a variety of texts, skimming and scanning to get a general picture. They need to recognize when to slow down and read to gain sufficient background knowledge and to pick up interesting ideas. Exploration is best achieved by jotting down interesting ideas from a variety of sources rather than taking extensive, detailed notes from one text. Students need support, structure, and strategies for learning from different sources of information to assimilate new ideas and form a focused question from the ideas that arise in their explorations.
Formulation: Formulating a Focus
Formulating a focus marks the turning point of the ISP when students identify a focus, an area of concentration, “something to center on,” and clarify their research question. Once they have formulated a focus for their research, their feelings of uncertainty and confusion begin to diminish and confidence increases. It is important to note that forming a focused question comes at the midpoint of the ISP not at the beginning as often expected.
Collection: Collecting Information on Focus
A good focus is one in which ideas continue to grow and evolve based on thorough reading of information and detailed note taking in the Collection stage. Students assume a “study” frame of mind of concentrated attention. A clear focus enables students to determine importance in what they are reading. It helps them to discriminate between less significant facts and more important ideas. A good focus can be adapted and altered as they continue to learn while they read, write and collect information. Interest in the project deepens as students get further along in constructing an understanding of their focused question.
Presentation: Preparing to Present
The Presentation stage marks the beginning of the writing process that introduces another set of challenges. Students who construct their ideas as they collect information are better prepared for writing and creatively presenting what they have learned. They experience fewer writing blocks because they have been constructing their learning all the way through the research process. These students often express a sense of accomplishment and satisfaction in what they have learned and created. Students that merely collect facts in a “cut and paste” fashion have difficulty preparing an original presentation and often express disappointment and boredom with their inquiry project.
Assessment: Assessing the Learning
The way students feel at the close of a research project is a good way to assess what went well and what problems they encountered in the research process. Feelings of satisfaction and accomplishment indicate that they constructed their own understanding of their topic. Feelings of disappointment and boredom indicate a “cut and paste” approach with little real learning. Self-assessment gives students a sense of how to approach future research assignments and inquiry projects. After several research projects, these students showed that they had internalized the stages in the ISP as their own “process” explaining that this is the “way I learn.”
Impact of ISP Model
The ISP has become one of the standard models of information seeking behavior and one of the most highly cited in the field. Over the years the ISP research has changed the way many librarians and teachers help students with project-based learning. It has opened a window into what students are experiencing when they are constructing new understandings and learning from multiple sources in the dynamic information environment. It has revealed ways to guide students in their learning. Students need considerable guidance and intervention in learning throughout the inquiry process in order to construct personal understanding. Without guidance, they tend to expect a simple collecting and presenting assignment that leads to copying and pasting with little real learning. With guidance, they are able to construct new understanding in the stages of the ISP and gain personal knowledge and transferable skills in learning from a wide variety of information sources.
From the ISP to Guided Inquiry Design
GUIDED INQUIRY DESIGN FRAMEWORK
| What Students are doing in the ISP | Stages of ISP | Phases of Guided Inquiry |
| --- | --- | --- |
| Initiating the research project | Initiation | Open |
| Selecting a topic | Selection | Immerse |
| Exploring for a focus | Exploration | Explore |
| Formulating a focus | Formulation | Identify |
| Collecting information on focus & seeking meaning | Collection | Gather |
| Preparing to present | Presentation | Create and Share |
| Assessing the process | Assessment | Evaluate |
Kuhlthau, Maniotes, and Caspari 2012
The ISP model describes what students experience in the phases of the inquiry process. These studies provide solid evidence on how to guide learning in the inquiry process that prepares students for learning, living and working in the information age. The Guided Inquiry Design framework is built around the ISP with specific direction for guiding students in each phase of the inquiry process.
Guided Inquiry opens the inquiry process at Initiation, immerses students in background knowledge at Selection, guides in exploring interesting ideas at Exploration, enables identifying an inquiry question at Formulation, supports gathering to address the question at Collection, intervenes for creating and sharing at Presentation, and assesses throughout the inquiry process and evaluates at the close. Let's take a closer look at the Guided Inquiry Design Framework.
Guided Inquiry Design Framework
The Guided Inquiry Design process begins with Open the inquiry to catch students’ attention, get them thinking, and help them make connections with their world outside of school. Next is Immerse, which is designed to build enough background knowledge to generate some interesting ideas to investigate. Then Explore those ideas for an important, authentic engaging inquiry question. Next, pause to Identify and clearly articulate the inquiry question before moving on to Gather information. After gathering, Create and Share what students have learned and then Evaluate to reflect on content and process and assess achievement of learning. The shape of the Guided Inquiry Design Process follows the flow of confidence and interest of students in the inquiry process that will help you guide students in reading to learn. This is a general framework for designing an inquiry approach across all curriculum subjects for students of all ages. Think of inquiry as a way of learning in the information age school, not simply as an occasional research assignment.
Now let’s look at each phase in the inquiry process and think about how to design student learning in each phase.
Invitation to Inquiry
Open is the invitation to inquiry at the beginning of the inquiry process. It is a distinct and important phase of the process that sets the tone and direction of the inquiry. Once the learning team has decided on the learning goals, they need to create a powerful opener that invites the learners in and introduces the general topic to engage all of the students. The main goal is to open students’ minds and stimulate curiosity and inspire them to want to pursue the inquiry. The opener is designed to spark conversations and stimulate students to think about the overall content of the inquiry and to connect with what they already know from their experience and personal knowledge. It sets the stage for learning.
Build background knowledge
Connect to content
Discover interesting ideas
In the Immerse phase, the students build background knowledge together through an immersion experience. The learning team designs engaging ways for students to immerse in the overall ideas of the curriculum area under study, for example reading a book, story, or article together; viewing a video; or visiting a museum, a field site, or an expert. The main task of Immerse is to guide students to connect with the overall content and to discover interesting ideas that they want to explore further. As they build background knowledge together, each student reflects on ideas that matter to him or her and are worth further reading and investigation.
Explore interesting ideas
In the Explore phase of Guided Inquiry, students browse through various sources of information exploring interesting ideas to prepare to develop their inquiry question. The learning team guides students to apply the reading strategies of browsing and scanning a variety of sources. Students dip into a few texts to read lightly in order to make sense of the information they find and to raise lots of questions. “Dipping in” is a reading strategy that enables students to go further into interesting ideas without becoming overwhelmed by a multitude of specific facts. Students can easily become overwhelmed by all the information and confused by facts that don’t fit together. The learning team guides them to keep an open mind as they explore and reflect on new information they are encountering and to begin to find questions that seem particularly important to them. Guiding students through the Explore phase leads them to form a meaningful inquiry question.
Pause and ponder
Identify inquiry question
In the Identify phase learners pause in the inquiry process to develop a meaningful inquiry question and form a focus. In Guided Inquiry they have had lots of preparation for this phase. Students are ready to identify an important question for their inquiry because of the time they have spent immersing and exploring to build enough background knowledge to ask meaningful questions. The main task of the Identify phase is to construct an inquiry question from the interesting ideas, pressing problems and emerging themes they have explored in various sources of information. The team introduces strategies that enable each student to think through information and ideas to clearly articulate a focused question that will frame the rest of their inquiry.
Gather important information
A clearly articulated question gives direction for the Gather phase. Gather sessions are designed to help students collect detailed information from a variety of sources. In this way they are learning to determine importance in what they are concentrating on in their reading, listening and observing. The learning team guides students in locating, evaluating and using information that leads to deep learning. The main task of the Gather phase is for students to choose what is personally meaningful and compelling about their inquiry question in the information sources they find and reflect upon. The learning team guides students in a structured approach for managing their search and documenting what they are learning. First students “go broad” to find a range of sources that are useful for understanding their inquiry question. Next students “go deep,” by choosing a core of the most useful sources to read closely and reflect with sustained attention as they find connections and gain personal understanding.
Reflect on learning
Go beyond facts to make meaning
Create to communicate
After students have thoughtfully gathered enough information to construct their own understandings of their inquiry question, they are ready to organize their learning into a creative presentation during the Create phase. Creating a way to communicate what they have learned about their inquiry requires students to articulate what is most important about their subject and enables them to integrate their ideas more firmly into deep understanding. The learning team guides students to go beyond simple fact finding and reporting and to summarize, interpret and extend the meaning of what they have found and create a way to share what they have learned. Create sessions are designed to guide students to reflect on all they have learned about their inquiry question and decide what type of presentation will best represent their ideas for a particular audience. The learning team guides students in creating a meaningful, interesting, clearly articulated, well-documented presentation that tells the story of what they have learned.
Learn from each other
Tell your story
Share is the culminating phase in the inquiry process when students share the product they have created to show what they have learned. Students have become experts on the question for their inquiry community. They now have the opportunity and responsibility to share their insights with their fellow students and communicate their learning to others. Their inquiry products may be shared with a wider audience, such as their parents or another group of students in their school or in another school, perhaps online. An important component of Guided Inquiry is the collaborative learning that takes place when students share what they have learned in the inquiry process.
Evaluate achievement of learning goals
Reflect on content
Reflect on process
The Evaluate phase, which occurs at the close of the inquiry process, is an integral part of Guided Inquiry. Although Guided Inquiry incorporates assessment for determining student progress throughout all of the phases of the inquiry process, evaluation occurs at the end when the learning team evaluates students’ achievement of the learning goals. In addition, the learning team guides students in reflection for self-assessment of their content learning and their progress through the inquiry process. Students’ self-reflection takes place while the entire process is fresh in their minds to reinforce content learning and establish good habits and competencies for learning and literacy.
Guided Inquiry Design for teaching and learning in the information age schools
Competency in using all kinds of information for clear, deep understanding is essential for every child in today's world. Guided Inquiry provides opportunities for children to learn strategies in locating, evaluating and using a wide range of media and a variety of texts, and puts all of their strategies and skills into action throughout the inquiry process. Starting at the youngest age, children are introduced to inquiry as a way to learn that prepares them for living and working in the information age. As students continue through elementary and middle school and on to high school, Guided Inquiry creates an environment that motivates them to want to learn. It engages them in determining importance and meaning by connecting the curriculum with their world for deep, lasting learning. The Guided Inquiry Design framework is an innovative, dynamic approach to teaching and learning for providing information age education for children across the United States and in countries around the world.
Kuhlthau, Carol, Leslie Maniotes and Ann Caspari. Guided Inquiry: Learning in the 21st Century, Libraries Unlimited, 2007.
Kuhlthau, Carol, Leslie Maniotes and Ann Caspari. Guided Inquiry Design: A Framework for Inquiry in your School, Libraries Unlimited, 2012.
Kuhlthau, Carol. Seeking Meaning: A Process Approach to Library and Information Services, 2nd ed. Libraries Unlimited, 2004.
Kuhlthau, Carol. Teaching the Library Research Process, Scarecrow Press, 1985.
Todd, R., Kuhlthau, C. C., and Heinstrom, J. 2005. Impact of School Libraries on Student Learning. Institute of Museum and Library Services (IMLS) Leadership Grant Project Report http://cissl.rutgers.edu. |
Thick billed murres are one of the most abundant sea bird species found in the Canadian Arctic. These large sea birds nest in dense colonies located on seaside cliffs and are known as one of the deepest underwater divers of all birds.
Many studies support seabirds as indicators of ecosystem changes. Given their large presence in the Canadian Arctic, this makes murres an even more interesting subject of study.
Dr. Emily S. Choy is the W. Garfield Weston Post Doc at Mcgill University, currently studying the thick billed murre colony on Coats Island in Northern Hudson Bay.
Her work focuses on the physiology of the species and how they are being affected by rising temperatures and increased shipping in the region.
Q: What are some of the changes that you have seen within the thick billed murre population in the last several years?
A: The program that I am a part of has been going on for the last 30 years…so, I’m lucky to be part of a program that has had long term data collection. During that time, sea ice has decreased drastically in the Arctic. We've found that the prey of the thick billed murres has shifted over the past 30 years from about 50 per cent Arctic cod, which is a sea ice associated fish, to over 50 per cent capelin.
Also, both the male and female [thick billed murres] have duties, and take turns incubating the nest. They will take 12 to 24-hour shifts with the eggs. But on days that are very hot, for example in 2011, the birds actually died while incubating their eggs. Also, in years that are particularly hot, the mosquitoes become bad and the time in which the mosquitoes come out increases.
So, the birds are dehydrating because of the hot weather, but they're also being parasitized by mosquitoes. The birds only have one egg per year and will not abandon their eggs, so they will go to the point of actually dying on their egg in the heat rather than to abandon. They put everything into this one egg in terms of reproductive success and it's important that they survive.
Q: Why is this shift in their diet so significant?
A: Arctic cod is a very important prey species in the Arctic. It is a very high fat, high lipid prey so it is one of the dominant prey for many marine mammals, including whales, seals and seabirds. And this prey is highly associated with sea ice, but also with cooler temperatures. There has been a decline in Arctic cod in many regions of the Arctic, but also moving into Hudson Bay has been capelin, which is a more Atlantic species and is actually smaller in terms of body size than Arctic cod.
The shift is due to the change of oceanographic conditions. The water is becoming warmer in Hudson Bay. So, the Arctic cod are moving northward and the more southern species such as capelin are moving in.
Some of the studies that my supervisor, Kyle Elliot, and Tony Gaston had done have shown that as the birds have switched to capelin the growth rates of their nestlings have actually decreased. Which is believed to have to do with the smaller body size of capelin versus Arctic cod.
Q: What specifically are you studying within the Coats Island thick billed murre colony?
A: I'm interested in the impacts or physiological effects of climate change on the murres. So, I am interested in the direct effects of the increases in temperature on the birds, but also the indirect effects of the shifts of prey on the bird’s energetics. So, I have a few projects that I am focusing closest on to answer those questions.
First, I am using something called open flow respirometry to look at the impacts of increasing temperature. So basically, I'm using respirometry and I'm measuring the metabolic rate, or breathing rate, of the birds. I’m using those measurements to see whether they become more stressed when their metabolic rate increases.
The big thing about murres is that they as a species are amazing divers. They can dive up to 100-200 meters, but they can also spend up to 4 hours a day flying. Because they are amazing divers their energetic costs of flight are huge, and they have the highest energetic costs of flight of any animal or bird.
So that is one of the reasons I am looking at the effects of heat and prey changes: these birds already have a lot of stress in terms of their energetic costs, so can they also withstand increases in temperature or changes in food?
The second part is using heart rate to look at how changes in the prey are affecting the birds and changes in behaviour. I implant little tiny heart rate monitors into the birds and then I can measure heart rate demands. To do that I am also using other technologies like accelerometers, which are basically little backpacks that go on the birds. This can tell us what the birds' behaviours are as well as give an idea of the birds' acceleration in three dimensions.
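As a rough aside on the open-flow respirometry mentioned above: the standard idea is that the oxygen a bird removes from the air flowing through a chamber can be converted into an energy expenditure. The sketch below is a toy illustration, not the researcher's actual pipeline; the flow rate and oxygen fractions are hypothetical numbers, and real respirometry also corrects for water vapour and the respiratory quotient, which this skips.

```python
def metabolic_rate_watts(flow_ml_per_min, o2_in_frac, o2_out_frac,
                         joules_per_ml_o2=20.1):
    """Estimate metabolic rate from open-flow respirometry readings.

    flow_ml_per_min : air flow through the chamber (ml/min)
    o2_in_frac      : fractional O2 in incurrent air (e.g. 0.2095)
    o2_out_frac     : fractional O2 in excurrent air
    joules_per_ml_o2: oxycalorific equivalent (~20.1 J per ml O2)
    """
    # Oxygen consumption rate: what went in minus what came out
    vo2_ml_per_min = flow_ml_per_min * (o2_in_frac - o2_out_frac)
    # Convert ml O2/min to joules/second (watts)
    return vo2_ml_per_min * joules_per_ml_o2 / 60.0

# Hypothetical chamber readings: 2400 ml/min flow, 21% O2 in, 20% O2 out
rate = metabolic_rate_watts(2400, 0.21, 0.20)
print(round(rate, 2))  # → 8.04
```

Repeating such measurements across a range of chamber temperatures is what lets researchers see whether metabolic rate climbs as the birds heat up.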
Q: In what ways have thick billed murres adapted to the cold?
A: Particularly with murres, the only uninsulated part of their body are their legs. They are also black and white in colour so in the sun, particularly on their nest, they tend to heat up very quickly.
My preliminary results showed their metabolic rates increased significantly with increasing temperature, but in terms of their ability to basically cool themselves, they are very, very poorly adapted for trying to cool their bodies down at higher temperatures and are at risk of overheating.
Q: What is a typical day in the field collecting your data?
A: So, there is a team of about 6 to 11 people. We arrive on the beach, but we have two cabins where we stay at the top of the cliffs and have to haul our gear up with pulleys.
We have long days in the field. Every day, each team member is assigned a spot on the cliff for a roll check for things like new eggs, new chicks, any losses in eggs or chicks and a general count on the birds.
Depending on the day and the time of year, we will do banding. We will place a band on the chicks, or we'll band the adults. These birds tend to be quite loyal to their nests, so the same birds tend to fly back to their same nests every year.
I was also studying heart rate and thermal tolerance, so there are days that I dedicate to that. I would also work with other team members and do heart rate surgeries, so I implanted heart rate monitors in the birds and that allows for fine-scale energetic measurements.
Q: What are the biggest challenges to completing this work?
A: Well it's tough doing work in the Arctic in general because of the climate. There were some days where we were actually stuck because the weather had gotten really bad. Our plane was at one point supposed to pick us up and they couldn't land, so we had to spend an additional week on Coats just for poor weather. So, on bad weather days when it is really rainy or really windy, we can't work because it's not safe.
Another issue that is affecting the birds is that due to the decline in sea ice, there has been an increase in polar bears. So now polar bears who usually feed on their traditional diet of seals have increased their predation on the murre colony. Over the past two years there have been bears on the colony, and obviously it is not safe to work when they are in the area.
Another issue is I'm using a lot of novel technologies like heart rate and some of this has never been done before so there is just the challenge of being the first to be finding out all of the challenges in the field. In terms of the thermal tolerance work, very little of that work has been done on cold-adapted birds.
*This interview was edited for length and clarity. |
According to a radical theory, a spacecraft could survive a trip through a wormhole exiting on the other side –which could be a totally different universe– technically intact.
However, the biggest problem is that these cosmic gateways are considered to be inherently unstable. According to scientists, when a particle enters a wormhole, it creates dangerous fluctuations which cause the structure to collapse in on it.
However, a new theory proposed by a group of physicists suggests that a person or spacecraft could make it through a wormhole in the center of black holes, theoretically at least. This could help the traveler access another universe on the other side.
According to researchers, wormholes are theoretical tunnels that act as shortcuts in space-time. If wormholes actually exist, one day these cosmic shortcuts could help us reach distant corners of the universe in a relatively short time.
Assuming that wormholes can be found at the center of a black hole, researchers from the University of Lisbon in Portugal proposed a model of how objects like a chair, a researcher and a spacecraft would be able to survive the journey… technically intact.
“What we did was to reconsider a fundamental question on the relation between the gravity and the underlying structure of space-time,” said one of the team, Diego Rubiera-Garcia from the University of Lisbon in Portugal.
“In practical terms, we dropped one assumption that holds in general relativity, but there is no a priori reason for it to hold in extensions of this theory,” added Professor Rubiera-Garcia.
The team of researchers analyzed each object individually as a physical body traveling towards a black hole, modelling it as an aggregation of points connected by the physical interactions holding them together as they travel along a 'geodesic line.'
“Each particle of the observer follows a geodesic line determined by the gravitational field,” says Rubiera-Garcia. “Each geodesic feels a slightly different gravitational force, but the interactions among the constituents of the body could nonetheless sustain the body.”
According to the theory of General Relativity proposed by Albert Einstein in 1915, a 'body' approaching a black hole will be crushed along one direction and stretched along the other. Since the wormhole radius is believed to be finite, the researchers demonstrated that the body would be crushed only as far as the size of the wormhole.
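As a rough, back-of-the-envelope illustration (a Newtonian approximation, not taken from the paper itself), the stretching and crushing come from the difference in gravitational pull across the body. For a body of extent $d$ at distance $r$ from a mass $M$, the tidal acceleration scales roughly as:

```latex
\Delta a \;\approx\; \frac{2\,G\,M\,d}{r^{3}}
```

Because of the $1/r^{3}$ dependence, this tidal effect grows violently as $r$ shrinks toward zero; a wormhole throat of finite radius caps how small $r$ can get, which is why the crushing, however severe, stays finite.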
In their study, published in the journal Classical and Quantum Gravity, the scientists demonstrated their theory by showing how the time spent by a light ray in a round trip between two parts of the body is ALWAYS finite.
Researchers explain that finite forces no matter how strong, are able to compensate for the impact of a gravitational field near and inside the wormhole on a physical body travelling through it.
“Thus, different parts of the body will still establish physical or chemical interactions and, consequently, cause and effect still apply all the way across the throat of the wormhole,” they explain.
According to Rubiera-Garcia, a physical object could survive a trip through a wormhole exiting on the other side –which could be a totally different universe– technically intact. However, the body would be crushed to the size of the finite wormhole. At least it would make it through… right?
“For a theoretical physicist, the suffering of observers is admissible (one might even consider it part of an experimentalist’s job), but their total destruction is not,” wrote Rubiera-Garcia and his team.
However, all of this remains a theory until we manage to actually see a black hole. |
Lesson 1: Networks
The ability to expand beyond the limit of a single computer in a single office has expanded the reach of the PC to global proportions. Two technologies have driven this expansion: computer networking and the portable computer. In this tutorial, we first take a look at how the networks that link up computers on a global scale are put together. Then we examine the portable computer, whose introduction has allowed users instantaneous access to the computing and networking power of all the latest computer technology anywhere they go.
A network is defined as two or more computers linked together for the purpose of communicating and sharing information and other resources. Most networks are constructed around a cable connection that links the computers. This connection permits the computers to talk (and listen) through a wire.
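The "talk and listen" idea can be sketched in a few lines of Python. This is an illustrative aside, not part of the original tutorial, and it simulates both computers on a single machine by using the loopback interface; the same pattern applies across a real cable.

```python
import socket
import threading

# A minimal sketch of two programs "talking" over a network connection.
# Both ends run on one machine via the loopback interface.

def server(listener):
    conn, _ = listener.accept()        # wait (listen) for a connection
    data = conn.recv(1024)             # receive a message from the wire
    conn.sendall(b"got: " + data)      # talk back
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
listener.listen(1)
threading.Thread(target=server, args=(listener,), daemon=True).start()

client = socket.socket()
client.connect(listener.getsockname())
client.sendall(b"hello")               # send a message
reply = client.recv(1024)              # listen for the reply
client.close()
print(reply)
```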
After this lesson, you will be able to:
- Define basic networking concepts and describe how a network functions.
- Configure and change network interface cards.
- Define Internet terms and functions.

Estimated lesson time: 40 minutes
Before You Begin
Although there are no prerequisites for this tutorial, it is highly recommended that you be familiar with all aspects of the hardware presented in earlier tutorials. |
History

Thanksgiving, or Thanksgiving Day, is a holiday celebrated in the United States on the fourth Thursday in November. It became an official federal holiday in 1863. Congress finally ruled in 1941 that the fourth Thursday in November would be the legal national Thanksgiving Day holiday. The event that Americans commonly call the "First Thanksgiving" was celebrated by the Pilgrims after their first harvest in the New World. This feast lasted three days, and it was attended by 90 Native Americans and 53 Pilgrims.
The Story

Many people in England were unhappy with their king, who said that everyone must pray how he prayed. These unhappy Englishmen left their homes and went to Holland. Pilgrims, you know, are people who are always traveling to find something they love, or to find a land where they can be happier; and these English men and women were journeying, they said, "from place to place, toward heaven, their dearest country."
Traveling to America

After hard times in Holland, the Pilgrims hired the Mayflower to take them across the sea. There were one hundred people on board, and it was cold and uncomfortable. The sea was rough, and pitched the Mayflower about, and they were two months sailing over the water. At last the Mayflower came in sight of land. The month was cold November, and there was nothing to be seen but rocks and sand and hard bare ground. The Pilgrims first arrived in America on December 11, 1620.
Settling

At last all the tired Pilgrims landed from the ship on a spot now called Plymouth Rock. The weather was cold, the snow fell fast and thick, the wind was icy, and the Pilgrim fathers had no one to help them cut down the trees and build their church and their houses. The Pilgrim mothers helped all they could; but they were tired from the long journey, and cold, and hungry too, for no one had the right kind of food to eat, nor even enough of it.
Meeting the Indians

Indians were spotted. One of the kind Indians was called Squanto, and he came to stay with the Pilgrims, and showed them how to plant their corn, and their pease and wheat and barley. "Let us thank God for it all," they said. "It is He who has made the sun shine and the rain fall and the corn grow." So they thanked God in their homes and in their little church; the fathers and the mothers and the children thanked Him. "Then," said the Pilgrim mothers, "let us have a great Thanksgiving party, and invite the friendly Indians, and all rejoice together."
The First Meal

So they had the first Thanksgiving party, and a grand one it was! Four men went out shooting one whole day, and brought back so many wild ducks and geese and great wild turkeys that there was enough for almost a week. The friendly Indians all came with their chief Massasoit. Every one came that was invited, and more, I dare say, for there were ninety of them altogether. Each meal, before they ate anything, the Pilgrims and the Indians thanked God together for all his goodness. The Indians sang and danced in the evenings, and every day they ran races and played all kinds of games with the children.
Food

They had a traditional British harvest feast in thanks to God for surviving and for the blessings of provisions that would see them through the winter. There was more meat than vegetables, including venison, fish and wild fowl, which may or may not have actually included turkey. They probably didn't have much in the way of desserts, as they didn't have a lot of flour or sugar on hand. They probably had some fruits and corn. Instead of pumpkin pie, they probably had boiled pumpkin. The first Thanksgiving feast lasted for three days.
Overview (What you need to know)
Mental health has become a new concept to many people around the world and has started to become a more accepted term in society. However, many people still don't even understand what it means. A mental health disorder "is any disorder that causes a person to experience different behaviors than their normal or altered mood." Mental health includes things such as our emotional, psychological, and social well-being. It can also affect the way many people feel, act and think. It is also important to note that mental health is important at every stage of life. In the western region of the world mental health has become a very common and well known thing. Most kids are even learning about it in school, whereas in the eastern region it's a very uncommon term to most people. This topic matters because I am hoping to be able to educate and clarify some of the misconceptions presented about mental health around the world. An example of this is the graph below that shows what percentage of people have certain attitudes towards mental health. One thing I would like to point out from this chart is that 60% of people believe that "mental unhealthy people should have their own groups-healthy people need not be contaminated by them." This is just one of many of the misconceptions and viewpoints people have about people with mental health disorders.
Example of Cultural misconceptions
“Attitudes toward mental illness vary among individuals, families, ethnicities, cultures, and countries. Cultural and religious teachings often influence beliefs about the origins and nature of mental illness, and shape attitudes towards the mentally ill.” There was a study conducted about how culture can influence the viewpoint on mental health, and I feel as though it's important to add it here for understanding the "cultural misconceptions of mental health".
“In a 2003 study, Chinese Americans and European Americans were presented with a vignette in which an individual was diagnosed with schizophrenia or a major depressive disorder. Participants were then told that experts had concluded that the individual’s illness was “genetic”, “partly genetic”, or “not genetic” in origin, and participants were asked to rate how they would feel if one of their children dated, married, or reproduced with the subject of the vignette. Genetic attribution of mental illness significantly reduced unwillingness to marry and reproduce among Chinese Americans, but it increased the same measures among European Americans, supporting previous findings of cultural variations in patterns of mental illness stigmatization.”
How Can You Help
In the comments below, please let me know:
- Have you ever heard any types of misconceptions about mental health? If so, what was it?
- If you could tell people one thing about mental health, what would it be?
- What are some ways you, your school, or your community have found ways to spread awareness about the idea of mental health?
Please feel free to also read other peoples comments and add on to what they have already mentioned. |
Little Gopher is upset in the beginning of the story because he is smaller then the other children and he can not keep up with their strength. When he grows a bit older, he goes to the hills alone to think about becoming a man. This is where the Dream-Vision occurs. The young Indian Maiden and the old grandfather in the clouds gave Little Gopher a rolled-up animal skin, a brush made of fine animal hairs, and pots of paints. They told him to paint pictures of deeds of warriors, visions of the shaman, and a picture pure as the colors in the evening sky. Little Gopher gathered flowers and berries to make his paints, and painted pictures of great hunts and great deeds. He struggled with finding the colors of the sunset. He often looked at the colors of the sky and did not give up on this task. One night he heard voices in the sky telling him to go to the hillside where he sees the sun set and he will find what he needs. The next evening, in this place Little Gopher found brushes filled with paint the colors of the sunset on the ground all around him. Little Gopher finally painted a picture pure as the colors in the evening sky. He left his brushes on the ground and returned to the village. The next morning, the hillside was covered with plants of brilliant reds, oranges, and yellows. The brushes had taken root and multiplied. Now every spring the ground is covered with these beautiful plants and Little Gopher is praised for being the person who brought the sunset to the earth.
The Native American culture is best described through its use of traditional literature. Much understanding of their ways and beliefs can be found through the study of their legends. Although stories of Native American warriors' brutality, war, and fighting do exist, these people were mostly about peace with others and kindness toward our earth. "The Legend of the Indian Paintbrush" is a beautiful, well-written example of how the Native Americans believe the people, earth, and sky are all connected. The beginning of the story also reflects the true value that each tribe places upon each individual person in that tribe. dePaola writes, "The wise shaman of the tribe understood that Little Gopher had a gift that was special." The Native Americans believe that each person, animal, plant, etc. has a purpose and can be used to benefit the well-being of others.
This story, along with other dePaola stories, would be excellent for a genre study in the classroom. It is easy to pick out the elements of a legend, and it would be fun to see kids compare these legends.
High School Trigonometry/Applications of Right Triangle Trigonometry
In this lesson we will return to right triangle trigonometry. Many real situations involve right triangles. In your previous study of geometry you may have used right triangles to solve problems involving distances, using the Pythagorean Theorem. In this lesson you will solve problems involving right triangles, using your knowledge of angles and trigonometric functions. We will begin by solving right triangles, which means identifying all the measures of all three angles and the lengths of all three sides of a right triangle. Then we will turn to several kinds of problems.
- Solve right triangles.
- Solve real world problems that require you to solve a right triangle.
Solving Right Triangles
You can use your knowledge of the Pythagorean Theorem and the six trigonometric functions to solve a right triangle. Because a right triangle already has a 90 degree angle, solving it requires that you find the measures of one or both of the other angles, as well as the lengths of any missing sides. How you solve will depend on how much information is given. The following examples show two situations: a triangle missing one side, and a triangle missing two sides.
Solve the triangle shown below.
We need to find the lengths of all sides and the measures of all angles. In this triangle, two of the three sides are given. We can find the length of the third side using the Pythagorean Theorem:
(You may have also recognized the "Pythagorean Triple", 6, 8, 10, instead of carrying out the Pythagorean Theorem.)
You can also find the third side using a trigonometric ratio. Notice that the missing side, b, is adjacent to angle A, and the hypotenuse is given. Therefore we can use the cosine function to find the length of b:
We could also use the tangent function, as the opposite side was given. It may seem confusing that you can find the missing side in more than one way. The point is, however, not to create confusion, but to show that you must look at what information is missing, and choose a strategy. Overall, when you need to identify one side of the triangle, you can either use the Pythagorean Theorem, or you can use a trig ratio.
To solve the above triangle, we also have to identify the measures of all three angles. Two angles are given: 90 degrees and 53.13 degrees. We can find the third angle using the triangle angle sum:
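The angle sum gives B = 180° − 90° − 53.13° = 36.87°. As a quick numeric check, the short Python sketch below confirms that the Pythagorean route and the cosine route agree on the third side:

```python
import math

# Example 1: hypotenuse c = 10 and leg a = 8 are given, with angle A = 53.13 deg.
c, a = 10.0, 8.0
A_deg = 53.13

b_pythagorean = math.sqrt(c ** 2 - a ** 2)        # third side from a^2 + b^2 = c^2
b_cosine = c * math.cos(math.radians(A_deg))      # same side from cos A = b / c
B_deg = 180 - 90 - A_deg                          # triangle angle sum

print(b_pythagorean, round(b_cosine, 3), B_deg)
```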
Now let's consider a triangle that has two missing sides.
Solve the triangle shown below.
Solution: In this triangle, we need to find the lengths of two sides. We can find the length of one side using a trig ratio. Then we can find the length of the third side either using a trig ratio, or the Pythagorean Theorem.
We are given the measure of angle A, and the length of the side adjacent to angle A. If we want to find the length of the hypotenuse, c, we can use the cosine ratio:
If we want to find the length of the other leg of the triangle, we can use the tangent ratio. (Why is this a better idea than to use the sine?)
Now we know the lengths of all three sides of this triangle. In the review questions, you will verify the values of c and a using the Pythagorean Theorem. Here, to finish solving the triangle, we only need to find the measure of angle B:
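The figure for this example is not reproduced here, but the review answers (sides 6, 5.03 and 7.83) are consistent with A = 40° and an adjacent side of length 6. Under that reading, the computation looks like this:

```python
import math

# Inferred from the review answers (sides 6, 5.03, 7.83): A = 40 deg, adjacent b = 6.
A = math.radians(40)
b = 6.0

c = b / math.cos(A)     # hypotenuse, from cos A = adjacent / hypotenuse
a = b * math.tan(A)     # opposite leg, from tan A = opposite / adjacent
B = 180 - 90 - 40       # remaining angle via the triangle angle sum

print(round(c, 2), round(a, 2), B)
```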
Notice that in both examples, one of the two non-right angles was given. If neither of the two non-right angles is given, you will need a new strategy to find the angles. You will learn this strategy in chapter 4.
Angles of Elevation and Depression
You can use right triangles to find distances, if you know an angle of elevation or an angle of depression. The figure below shows each of these kinds of angles.
The angle of elevation is the angle between the horizontal line of sight and the line of sight up to an object. For example, if you are standing on the ground looking up at the top of a mountain, you could measure the angle of elevation. The angle of depression is the angle between the horizontal line of sight and the line of sight down to an object. For example, if you were standing on top of a hill or a building, looking down at an object, you could measure the angle of depression. You can measure these angles using a clinometer or a theodolite. People tend to use clinometers or theodolites to measure the height of trees and other tall objects. Here we will solve several problems involving these angles and distances.
You are standing 20 feet away from a tree, and you measure the angle of elevation to be 38°. How tall is the tree?
The solution depends on your height, as you measure the angle of elevation from your line of sight. Assume that you are 5 feet tall. Then the figure below shows the triangle you are solving.
The figure shows us that once we find the value of T, we have to add 5 feet to this value to find the total height of the triangle. To find T, we should use the tangent value:
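Using the tangent value, T = 20 tan 38° ≈ 15.63 feet, and adding the assumed 5-foot eye height gives a tree of about 20.63 feet. As a check:

```python
import math

distance = 20.0     # feet from the tree
elevation = 38.0    # degrees, the measured angle of elevation
eye_height = 5.0    # feet; assumed height of the observer, as in the figure

T = distance * math.tan(math.radians(elevation))  # height above the line of sight
tree_height = T + eye_height

print(round(tree_height, 2))
```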
The next example shows an angle of depression.
You are standing on top of a building, looking at a park in the distance. The angle of depression is 53°. If the building you are standing on is 100 feet tall, how far away is the park? Does your height matter?
If we ignore the height of the person, we solve the following triangle:
Given the angle of depression is 53°, angle A in the figure above is 37°. We can use the tangent function to find the distance from the building to the park:
If we take into account the height of the person, this will change the value of the adjacent side. For example, if the person is 5 feet tall, we have a different triangle:
If you are only looking to estimate a distance, then you can ignore the height of the person taking the measurements. However, the height of the person will matter more in situations where the distances or lengths involved are smaller. For example, the height of the person will influence the result more in the tree height problem than in the building problem, as the tree is closer in height to the person than the building is.
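For the building-and-park example, the two versions differ only in the adjacent side: 100 tan 37° ≈ 75.4 feet ignoring the observer, versus 105 tan 37° for an assumed 5-foot observer. A quick check:

```python
import math

depression = 53.0
A = 90.0 - depression    # angle at the top of the triangle: 37 degrees

building = 100.0
person = 5.0             # feet; assumed height of the observer

d_ignoring_person = building * math.tan(math.radians(A))
d_with_person = (building + person) * math.tan(math.radians(A))

print(round(d_ignoring_person, 1), round(d_with_person, 1))
```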
Right Triangles and Bearings
We can also use right triangles to find distances using angles given as bearings. In navigation, a bearing is the direction from one object to another. In air navigation, bearings are given as angles rotated clockwise from the north. The graph below shows an angle of 70 degrees:
It is important to keep in mind that angles in navigation problems are measured this way, and not the same way angles are measured in the unit circle. Further, angles in navigation and surveying may also be given in terms of north, east, south, and west. For example, N70° E refers to an angle from the north, towards the east, while N70° W refers to an angle from the north, towards the west. N70° E is the same as the angle shown in the graph above. N70° W would result in an angle in the second quadrant.
The following example shows how to use a bearing to find a distance.
A ship travels on a N50° E course. The ship travels until it is due north of a second port, which is 10 nautical miles due east of the port from which the ship originated. How far did the ship travel?
The angle between the ship's course d and the eastward leg is the complement of 50°, which is 40°. Therefore we can find d using the cosine function:
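That is, cos 40° = 10/d, so d = 10 / cos 40° ≈ 13.05 nautical miles. In code:

```python
import math

east_leg = 10.0   # nautical miles: how far the turnaround point lies east of the start
angle = 40.0      # degrees: the complement of the N50E course angle

d = east_leg / math.cos(math.radians(angle))   # the hypotenuse, i.e. distance traveled

print(round(d, 2))
```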
Other Applications of Right Triangles
In general, you can use trigonometry to solve any problem that involves a right triangle. The next few examples show different situations in which a right triangle can be used to find a length or a distance.
In lesson 3 we introduced the following situation: you are building a ramp so that people in wheelchairs can access a building. If the ramp must have a height of 8 feet, and the angle of the ramp must be about 5°, how long must the ramp be?
Given that we know the angle of the ramp and the length of the side opposite the angle, we can use the sine ratio to find the length of the ramp, which is the hypotenuse of the triangle:
This may seem like a long ramp, but in fact a 5° ramp angle is what is required by the Americans with Disabilities Act (ADA). This explains why many ramps are comprised of several sections, or have turns. The additional distance is needed to make up for the small slope.
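Since sin 5° = 8 / length, the ramp must be about 8 / sin 5° ≈ 91.8 feet long. The arithmetic:

```python
import math

rise = 8.0        # feet, the required height of the ramp
ramp_angle = 5.0  # degrees, the maximum allowed angle

ramp_length = rise / math.sin(math.radians(ramp_angle))  # hypotenuse of the triangle

print(round(ramp_length, 1))
```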
Right triangle trigonometry is also used for measuring distances that could not actually be measured. The next example shows a calculation of the distance between the moon and the sun. This calculation requires that we know the distance from the earth to the moon. In chapter 5 you will learn the Law of Sines, an equation that is necessary for the calculation of the distance from the earth to the moon. In the following example, we assume this distance, and use a right triangle to find the distance between the moon and the sun.
The earth, moon, and sun create a right triangle during the first quarter moon. The distance from the earth to the moon is about 240,002.5 miles. What is the distance between the sun and the moon?
Let d = the distance between the sun and the moon. We can use the tangent function to find the value of d:
Therefore the distance between the sun and the moon is much larger than the distance between the earth and the moon.
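The measured angle at Earth does not survive in this excerpt; a value commonly used with this textbook problem is about 89.85°, and that assumption is what the sketch below uses:

```python
import math

earth_moon = 240002.5   # miles, the distance given above
angle = 89.85           # degrees; ASSUMED value of the angle measured at Earth

d = earth_moon * math.tan(math.radians(angle))   # distance from the moon to the sun

print(f"{d:.3e} miles")
```

With this assumed angle the result is roughly ninety million miles, which is indeed far larger than the Earth-moon distance.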
In this lesson we have returned to the topic of right triangle trigonometry, to solve real world problems that involve right triangles. To find lengths or distances, we have used angles of elevation, angles of depression, angles resulting from bearings in navigation, and other real situations that give rise to right triangles. In later chapters, you will extend the work of this chapter: you will learn to find missing angles using trig ratios, and you will learn how to determine the angles and sides of non-right triangles.
Points to Consider
- In what kinds of situations do right triangles naturally arise?
- Are there right triangles that cannot be solved?
- Trigonometry can solve problems at an astronomical scale as well as an earthly one, and even problems at a molecular or atomic scale. Why is this true?
- Solve the triangle:
- Two friends are writing practice problems to study for a trigonometry test. Sam writes the following problem for his friend Anna to solve:
- In right triangle ABC, the measure of angle C is 90 degrees, and the length of side c is 8 inches. Solve the triangle.
- Anna tells Sam that the triangle cannot be solved. Sam says that she is wrong. Who is right? Explain your thinking.
- Use the Pythagorean Theorem to verify the sides of the triangle in example 2.
- The angle of elevation from the ground to the top of a flagpole is measured to be 53°. If the measurement was taken from 15 feet away, how tall is the flagpole?
- From the top of a hill, the angle of depression to a house is measured to be 14°. If the hill is 30 feet tall, how far away is the house?
- An airplane departs city A and travels at a bearing of 100°. City B is directly south of city A. When the plane is 200 miles east of city B, how far has the plane traveled? How far apart are city A and city B?
- The modern building shown below is built with an outer wall (shown on the left) that is not at a 90−degree angle with the floor. The wall on the right is perpendicular to both the floor and ceiling.
What is the length of the slanted outer wall, w? What is the length of the main floor, f?
- A surveyor is measuring the width of a pond. She chooses a landmark on the opposite side of the pond, and measures the angle to this landmark from a point 50 feet away from the original point. How wide is the pond?
- Find the length of side x:
- Anna is correct. There is not enough information to solve the triangle. That is, there are infinitely many right triangles with hypotenuse 8. For example:
- 6² + 5.03² = 36 + 25.3009 = 61.3009 ≈ 7.83²
- About 19.9 feet tall
- About 120.3 feet
- The plane has traveled about 203 miles. The two cities are 35 miles apart.
- w ≈ 165.63 ft, f ≈ 114.44 ft
- About 41.95 feet
- About 7.44
- angle of depression
- The angle between the horizontal line of sight, and the line of sight down to a given point.
- angle of elevation
- The angle between the horizontal line of sight, and the line of sight up to a given point.
- bearing
- The direction from one object to another, usually measured as an angle.
- clinometer
- A device used to measure angles of elevation or depression.
- theodolite
- A device used to measure angles of elevation or depression.
- nautical mile
- A nautical mile is a unit of length that corresponds approximately to one minute of latitude along any meridian. A nautical mile is equal to 1.852 kilometers.
This material was adapted from the original CK-12 book that can be found here. This work is licensed under the Creative Commons Attribution-Share Alike 3.0 United States License |
Summary

Accessibility, defined as the ease with which people with some degree of disability can access the contents of a website, is an issue that web designers and programmers must take into account in their projects. This will be possible only after recognizing the right of access to content for people with disabilities, and after learning about their limitations and their particular ways of accessing the Internet.

What is Accessibility?

Often (though not as much as we would like) we discover that the infrastructure of big cities has changed in the sense of promoting access and mobility for people with some degree of physical disability. Ramps where there were only stairs, wide doors and specially adapted sanitary facilities are just some examples of this new way of integrating our fellows. This healthy phenomenon is also expressed on the Web, where designers have begun to consider issues in their projects that make their content accessible to people with disabilities.
Thus arises the concept of Accessibility, broadly defined as the ease with which people with disabilities interact with websites. An accessible site is one in which design elements such as color, font size and the arrangement of elements do not make the content of the site difficult to understand.

Definition of disability

Disability can be divided into the following categories: visual impairment, hearing impairment, motor impairment and cognitive impairment. Each of these categories includes its own set of conditions. For example, visual impairment includes low vision, blindness and color blindness.
Decorating a Tree with a Ribbon
This Demonstration calculates the length of a ribbon wrapped around a tapered post or a tree (geometrically, a frustum, which is a truncated cone) in two ways:[more]
1. approximately, by unfolding the cone and adding the lengths of all n segments, where n is the number of turns.
2. exactly, using the formula for arc length, L = ∫₀^H √(x′(h)² + y′(h)² + z′(h)²) dh, where the derivatives are taken with respect to h, the height, of the components of the parametric equation of the ribbon (as a curve): x(h) = r(h) cos(2πnh/H), y(h) = r(h) sin(2πnh/H), z(h) = h,

where H is the height of the frustum, n is the number of turns, and r(h) is the radius of the frustum at height h, determined by the slope s.
In addition, the Demonstration calculates the surface area and the volume for different heights and slopes.[less]
The result of revolving the line y = s x + b about the y axis is a cone. Here s is the slope and b is the intercept of the line. In order to keep the base radius of the frustum constant at 3 feet while changing the slope, the intercept value has to be a function of the slope.
By unfolding the cone, the subtended angle θ is calculated by the simple formula θ = c/ℓ.

Here c is the arc length of the base of the cone and ℓ is its side length.
The length of one turn of the ribbon is the arc length of the ribbon on the unfolded frustum. It is approximated by the length of the segment on the unfolded frustum.
The length of the side of the frustum is ℓ = √((R − r)² + H²), where R and r are the base and top radii and H is the height.
The distance between two consecutive turns of the ribbon on the unfolded frustum is ℓ/n.
The lateral surface area of the frustum is A = π (R + r) ℓ, where r is the radius at the top of the frustum, R is the base radius, and ℓ is the side length.
Finally, the volume of the frustum is V = (π H / 3)(R² + R r + r²).
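A numerical sketch of both approaches is below. The dimensions are assumed for illustration (the Demonstration's interactive controls are not reproduced here): a 6-foot frustum with base radius 3 feet, top radius 1 foot, and a ribbon wound in 8 turns. The ribbon length is approximated by summing many short chords, which converges to the arc-length integral.

```python
import math

# Assumed dimensions for illustration: a frustum 6 ft tall, base radius 3 ft
# (as in the Demonstration), top radius 1 ft, with a ribbon wound in n = 8 turns.
H, R_base, R_top, n = 6.0, 3.0, 1.0, 8

def ribbon_point(h):
    r = R_base + (R_top - R_base) * h / H   # radius tapers linearly with height
    t = 2 * math.pi * n * h / H             # winding angle at height h
    return (r * math.cos(t), r * math.sin(t), h)

# Ribbon length: sum many short chords, a numerical version of the arc-length integral.
N = 20000
pts = [ribbon_point(H * i / N) for i in range(N + 1)]
length = sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# Frustum side length, lateral surface area and volume (standard formulas).
side = math.sqrt((R_base - R_top) ** 2 + H ** 2)
area = math.pi * (R_base + R_top) * side
volume = math.pi * H / 3 * (R_base ** 2 + R_base * R_top + R_top ** 2)

print(round(length, 2), round(area, 2), round(volume, 2))
```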
Moving Dipper
Most of the time, the stars that form a "connect-the-dots" pattern in the sky aren't related to each other -- they just happen to line up in the same direction in our sky.
One exception is the five stars in the middle of the Big Dipper -- Mizar, Alioth, Megrez, Phad, and Merak. They're all about the same distance from Earth -- about 80 light-years. And they're moving in the same direction and at about the same speed. What's more, a lot of other stars in that region of the sky are moving along with them.
All of these stars are members of the Ursa Major Moving Group -- Ursa Major because that's the constellation to which the Big Dipper belongs.
All of these stars were probably born about 500 million years ago, from a single vast cloud of gas and dust. They formed a large star cluster. As they orbited the center of the Milky Way galaxy, though, the cluster was slowly pulled apart. Stars closer to the center of the Milky Way moved a little faster than those farther away, stretching the cluster into a long streamer of stars.
Today, the stars are too widely spread to form a cluster -- their gravity no longer binds them to one another. But not enough time has passed to disperse them through the galaxy. So the stars still move through the galaxy in the same way -- forming a wide but rapidly spreading group.
The two stars at the ends of the Big Dipper aren't members of the group, so they go their own way.
Script by Damond Benningfield, Copyright 2010 |
Once upon a time, a small worm mucking about on the Cambrian seafloor did something really, really careless: it lost its legs.
As the old saying goes, “use it or lose it”. Since the worm – a squirmy creature belonging to the Facivermis genus - was not using its legs for locomotion, it evolved into a more primitive, legless animal.
It’s not just a brilliant example of an animal evolving from a more complex creature to a more streamlined one. At 518 million years old, this case of ‘backwards’ evolution, so to speak, is the earliest known example of this occurring.
“We generally view organisms evolving from simple to more complex body plans, but occasionally we see the opposite occurring,” said palaeobiologist Xiaoya Ma of the University of Exeter in the UK.
“What excited us in this study is that even at this early stage of animal evolution, secondary-loss modifications – and in this case, reverting ‘back’ to lose some of its legs – had already occurred.”
Multicellular life is thought to have emerged during the Ediacaran Period, which started 635 million years ago. But it wasn’t until the Cambrian, which started around 541 million years ago, that life really started to diversify.
In an event called the Cambrian explosion, most of the major animal phyla appeared on the fossil record over a period of about 25 million years in marine ecosystems around the world. These would eventually diversify further, and evolve to produce pretty much all multicellular life surviving today.
Back then, those Cambrian critters were some real oddballs, evolving some of the features that we find really useful today – eyes, spines, heads, bilateral symmetry, and, of course, legs. But not every creature that evolves a trait necessarily needs it forever.
Think of snakes, which had legs, and eventually lost them again, because they developed a different, perfectly functional style of locomotion.
Or stick insects, which seem to have evolved and lost wings several times over the course of their history. This loss of an evolved feature to revert to a previous state is called secondary loss or reversion.
But the Facivermis genus – creatures we’ve known about for several decades – have been something of a mystery. The worm has a long body with a bulbous bottom, hooks around the anus, and five pairs of feathery appendages up the top near its head.
Some studies thought the worm might be the missing link between legless marine worms of the Cycloneuralia clade, and ancient extinct creatures called lobopodians – a group of worms that includes the famous Burgess Shale beastie Hallucigenia, and typically have pairs of legs along the length of their bodies.
But exquisite new fossils from China have revealed more information about the strange critter. In particular, a fossil with a tube around the lower portion of the animal, indicating that Facivermis was anchored, and lived a lot like modern tubeworms do.
“Living like this, its lower limbs would not have been useful, and over time the species ceased to have them,” said palaeobiologist Richard Howard of the University of Exeter.
“Most of its relatives had three to nine sets of lower legs for walking, but our findings suggest Facivermis remained in place and used its upper limbs to filter food from the water.”
The team’s phylogenetic analyses concluded that Facivermis was not related to cycloneuralians – it’s pure lobopodian. Other lobopodians have two distinct kinds of legs – longer ones used for grasping at the front of the body, and shorter clawed ones at the back for crawling. Facivermis lost the back set, keeping the front ones for filter feeding.
Lobopodians eventually evolved into arthropods (insects, crabs, shrimps and spiders), tardigrades and onychophorans (velvet worms). The team’s analysis found that Facivermis was likely derived from the onychophoran stem group, rather than the basal group from which all three emerged.
“We therefore conclude,” they wrote in their paper, “that Facivermis provides a rare early Cambrian example of secondary loss to accommodate a highly specialised tube-dwelling lifestyle.”
The research has been published in Current Biology. |
What Is Grand-Pré?
Grand-Pré is a community located in the rural regions of Kings County, Nova Scotia, Canada. It sits on a peninsula that protrudes into the Minas Basin and is surrounded by marshland. It includes the archaeological ruins of aboiteau wooden sluice systems and dyked farmland. These systems were created by the Acadian population during the 1600s. Since then, they have continued to be developed and utilized. This site also displays the most extreme tidal movements in the world, with tides reaching 38 feet. Grand-Pré and its surrounding area have been listed as a monument to the Acadian culture and lifestyle. The site was inscribed on the UNESCO World Heritage List on June 30, 2012.
Why Is Grand-Pré A UNESCO World Heritage Site?
The landscape of Grand-Pré is protected as a UNESCO World Heritage Site because it provides a unique look into the way of life and adaptation techniques of some of the first European settlers on the northern Atlantic coast of North America. This site displays the hard work and dedication that went into developing farmland for crops in the harsh climate of the Nova Scotian coastline and its extreme tidal activity. These settlers used the polder technique to create farmable land. This technique involves building dykes around a low-lying plot of land so that the tide can no longer reach it. Additionally, polders require sluices (water channels) so that excess water from groundwater seepage and tidal infiltration can be drained during low tide. The Acadians made a community effort to manage these polders and control water levels at all times. This system was later taken over by the Planter settlers and has been continued for over 300 years to present day.
This site is not only important for its use of the polder system, but also because it provides a memorial to the Acadian Diaspora through Nova Scotia, New Brunswick, and Prince Edward Island and their forced removal by the British during the Great Deportation (also known as the Grand Dérangement). The Acadians were forced off their colonized lands between 1755 and 1764 as part of British military efforts against France. They were first sent to the original 13 colonies of present-day United States and later deported to Britain and France. The landscape of Grand-Pré is the principal memorial site for this historic event.
Management Of Grand-Pré
The landscape of Grand-Pré is managed by various public entities, including the Federal Government Parks Canada Agency, the Grand-Pré Marsh Body, and the Stewardship Board. Other authorities include local farmers, the Grand-Pré municipal government, and regional technical experts. They are also included in decision making concerning uses of the site. Of note is that these management efforts are not only appropriate but also effective because of their inclusion of a wide range of stakeholders, all of whom (including the Acadian diaspora) agree with and abide by the regulations set for the site’s practical management. Together, all of these groups have worked to extend the site’s buffer zone in order to ensure the visual authenticity and integrity of the site as viewed over the coastal area from nearby Horton’s Landing. The memorial sites located throughout the landscape are managed by the Société Promotion Grand-Pré. |
Geography mission statement
Geography enables pupils to learn about their local area, their region, their country and the wider world.
Pupils will develop skills of enquiry, discussion, questioning and research, and will gain an understanding of globalisation, sustainability, multiculturalism, their place in the world and how they can help shape the future.
Current Human, Physical and Environmental topics will inspire pupils and encourage motivation to learn.
Geography today is exploring and explaining the world we live in, by looking at the relationship between people and the environment.
The new geography national curriculum has been slimmed down but is based upon essential knowledge pupils need to acquire.
There is an increased emphasis on different regions and places around the world, understanding the importance of their location.
There is a renewed emphasis on human and physical processes: pupils learn how these processes operate and gain an environmental understanding of how human and physical geography work together.
Technical procedures of map work and fieldwork are reinforced.
Pupils at Kirkburton Middle School are taught:
Locational knowledge to
- extend their locational knowledge and deepen their spatial awareness of the world’s countries using maps of the world to focus on Africa, Europe, Asia (including China and India), focusing on their environmental regions, including polar and hot deserts, key physical and human characteristics, countries and major cities
Place Knowledge to
- understand geographical similarities, differences and links between places through the study of human and physical geography of a region within Europe, Africa, and of a region within Asia
Human and physical geography to
- understand, through the use of detailed place-based exemplars at a variety of scales, the key processes in:
- physical geography relating to: plate tectonics; rocks, limestone, weathering and soils; weather and climate, hydrology and coasts
- human geography relating to: population and urbanisation; international development; economic activity in the primary, secondary, tertiary and quaternary sectors; and the use of natural resources
- understand how human and physical processes interact to influence, and change landscapes, environments and the climate; and how human activity relies on effective functioning of natural systems.
Geographical skills and fieldwork to
- build on their knowledge of globes, maps and atlases and apply and develop this knowledge routinely in the classroom.
- interpret Ordnance Survey maps in the classroom, including using grid references and scale, topographical and other thematic mapping, and aerial and satellite photographs
- use fieldwork to collect, analyse and draw conclusions from geographical data.
- to support physical geography, practise fieldwork skills and complete decision-making exercises.
Siberia (possibly from the Mongolian for "the calm land") is a vast region of Russia and northern Kazakhstan constituting almost all of northern Asia. It extends eastward from the Ural Mountains to the Pacific Ocean and southward from the Arctic Ocean to the hills of north-central Kazakhstan and the borders of both Mongolia and China. All but the extreme south-western area of Siberia lies in Russia, and it makes up about 75% of that country's territory.
Siberia was occupied by differing groups of nomads such as the Yenets, the Nenets, the Huns, and the Uyghurs. The Khan of Sibir in the vicinity of modern Tobolsk was known as a prominent figure who endorsed Kubrat as Khagan in Avaria in 630. The area was conquered by the Mongols in the 13th century and eventually became the autonomous Siberian Khanate.
The growing power of Russia to the east began to undermine the Khanate in the 16th century. First groups of traders and Cossacks began to enter the area, and then the imperial army began to set up forts further and further east. By the mid-17th century, the Russian-controlled areas had been extended to the Pacific.
Siberia remained a mostly unexplored and uninhabited area. During the following few centuries, only a few exploratory missions and traders ventured into Siberia. The other group sent to Siberia consisted of prisoners exiled from western Russia.
The first great change to Siberia was the Trans-Siberian railway, constructed in 1891 - 1905. It linked Siberia more closely to the rapidly-industrializing Russia of Nicholas II. Siberia is filled with natural resources and during the 20th century these were developed, and industrial towns cropped up throughout the region.
With an area of over 9,653,000 km2, Siberia makes up roughly three-quarters of the total area of Russia. Major geographical zones include the West Siberian Plain and the Central Siberian Plateau.
The West Siberian Plain consists mostly of Cenozoic alluvial deposits and is extraordinarily flat, so much so that a rise of fifty metres in sea level would cause all land between the Arctic Ocean and Novosibirsk to be inundated. Many of the deposits on this plain result from ice dams’ having reversed the flow of the Ob and Yenisei Rivers, so redirecting them into the Caspian Sea (perhaps the Aral as well). It is very swampy and soils are mostly peaty Histosols and, in the treeless northern part, Histels. In the south of the plain, where permafrost is largely absent, rich grasslands that are an extension of the Kazakh steppe formed the original vegetation (almost all cleared now).
The Central Siberian Plateau is an extremely ancient craton (sometimes called Angaraland) that formed an independent continent before the Permian (see Siberia (continent)). It is exceptionally rich in minerals, containing large deposits of gold, diamonds, and ores of manganese, lead, zinc, nickel, cobalt and molybdenum. Only the extreme northwest was glaciated during the Quaternary, but almost all is under exceptionally deep permafrost and the only tree that can thrive, despite the warm summers, is the deciduous Siberian Larch (Larix sibirica) with its very shallow roots. Soils here are mainly Turbels, giving way to Spodosols where the active layer becomes thicker and the ice content lower.
Eastern and central Sakha comprise numerous north-south mountain ranges of various ages. These mountains extend up to almost three thousand metres in elevation, but above a few hundred metres they are, to an extraordinary degree, devoid of vegetation. The Verkhoyansk Range was extensively glaciated in the Pleistocene, but the climate was too dry for glaciation to extend to low elevations. At these low elevations are numerous valleys, many of them deep, and covered with larch forest except in the extreme north, where tundra dominates. Soils are mainly Turbels and the active layer tends to be less than a metre deep except near rivers.
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Siberia". |
Most of our nearest relatives are, like us, social beings, forcing them to find ways to cooperate. Chimpanzees are no exception, and a study of them performing group tasks has found strong parallels with the way humans combine, suggesting this behavior has deep evolutionary roots.
To test chimpanzee cooperation, a team led by famed primate researcher Professor Frans de Waal of the Yerkes National Primate Research Center challenged a group of 11 adult chimpanzees to cooperate for a reward. One chimp had to remove a barrier while another pulled in a plate carrying a piece of fruit. No individual could manage both roles on their own. Subsequently, the challenge was altered to require a third participant.
The scenario was obviously different from one the apes would have been familiar with, and they took some time to understand how it worked. Nevertheless, they managed well. In Proceedings of the National Academy of Sciences, the team reports that some chimps tried to “freeload” by letting others do the work and then stealing the food themselves, or not sharing with their co-worker. However, effective mechanisms developed to punish this, and over the thousands of successful fruit captures, cooperative acts outnumbered those where one chimpanzee cheated by five to one.
After a piece of fruit had been delivered the plate would be immediately refilled, and this went on for an hour at a time. Consequently, fights that interfered with this would eat into the total amount of fruit that could be captured in the space of an hour. This seldom happened. In the space of 94 test sessions, 3,656 pieces of fruit were captured, despite the fact that the chimpanzees were relatively new to each other and still in competition to establish social rank. |
Asthma is a disease that inflames and narrows the airways in your lungs. No one is sure what causes asthma. But with the help of your healthcare team, you can keep your asthma under control. This sheet will tell you more about what happens inside your lungs when you have asthma.
Inside the lungs there are branching airways made of stretchy tissue. Each airway is wrapped with bands of muscle. The airways get smaller as they go deeper into the lungs. The smallest airways end in clusters of tiny balloonlike air sacs (alveoli). These clusters are surrounded by blood vessels. When you inhale (breathe in), air enters the lungs. It travels down through the airways until it reaches the air sacs. When you exhale (breathe out), air travels up through the airways and out of the lungs. The airways produce mucus that traps particles you breathe in. Normally, the mucus is then swept out of the lungs to be swallowed or coughed up.
The air you inhale contains oxygen, a gas your body needs. When this air reaches the air sacs, oxygen passes into the blood vessels surrounding the sacs. Oxygen-rich blood then leaves the lungs and travels to all parts of the body. As the body uses oxygen, carbon dioxide (a waste gas) is produced. The blood carries this back to the lungs. Carbon dioxide leaves the body with the air you exhale. The process of getting oxygen into the body and carbon dioxide out is called gas exchange.
When you have asthma, your airways are more sensitive than those of other people. This means your airways react to certain things called triggers and become inflamed. Inflammation makes the airways swollen and narrowed. This is a chronic (long-lasting or recurring) problem. The airways may not be narrowed enough for you to notice breathing problems. But the inflammation makes the lungs more sensitive: Inflamed airways react to triggers even more easily, causing a flare-up.
Symptoms of chronic inflammation: You may not notice any symptoms. Or, you may have mild symptoms such as:
Shortness of breath
Wheezing (a whistling noise, especially when breathing out)
Effects of chronic inflammation: Over time, chronic mild inflammation can lead to permanent scarring of airways and loss of lung function. This can cause permanent breathing problems. This is one reason asthma needs to be treated even if there are no symptoms.
When sensitive airways are irritated by a trigger, the muscles around the airways tighten (bronchospasm). This squeezes the airways so that they become narrower. The lining of the airways swells. Thick, sticky mucus increases and begins to clog the airways. All of this decreases lung function—that is, it makes emptying the lungs more difficult. You have to work hard to keep breathing and getting needed oxygen into the lungs.
Symptoms of moderate flare-ups: Your symptoms may include the following:
Coughing, especially at night
Getting tired or out of breath easily
Fast breathing when at rest
A life-threatening flare-up is due to severe muscle spasm, severe swelling, and large amounts of thick, sticky mucus. Together, these block the airway. Lung function is severely decreased. Waste gas is trapped in the alveoli, and gas exchange can’t occur. The body is not getting enough oxygen. Without oxygen, body tissues, especially brain tissue, begin to die. If this goes on for long, it can lead to brain damage or death.
Symptoms of severe flare-ups: Call 911, or have someone call for you, if you have any of these symptoms and they are not relieved right away by taking your quick-relief medication as prescribed:
Severe difficulty breathing
Being too short of breath to speak a full sentence or walk across a room
Lips or finger tips or nails turning blue
Feeling as though you are about to pass out
Peak flow less than 50 percent of your personal best |
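The 50-percent rule above is simple arithmetic once you know your personal-best peak flow. As an illustrative sketch only (the function name and the example numbers are hypothetical, not from this sheet), the check can be written as:

```python
def is_severe_flare(peak_flow, personal_best):
    """Return True if a peak-flow reading falls below 50% of the
    personal-best value, one of the severe flare-up warning signs
    listed above."""
    return peak_flow < 0.5 * personal_best

# Example: with a hypothetical personal best of 400 L/min,
# a reading of 180 L/min is below the 50% threshold (200 L/min),
# while 250 L/min is not.
print(is_severe_flare(180, 400))
print(is_severe_flare(250, 400))
```

This is only a restatement of the threshold in code form; the sheet's instruction stands on its own: any reading under half your personal best is an emergency sign.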
Welcome to the Spina Bifida Resource Center! Here you will find helpful videos by medical professionals, in-depth articles and clear explanations of SB. In addition, you’ll find state-specific information such as local clinics, charities, camps for children and more. We have everything you need to make your life easier as you begin to learn more about SB.
Neural tube defects (NTDs) are birth defects that occur due to faulty development of the brain and spinal cord. These defects occur when the brain and spinal cord are not completely covered with membranes and bone, leaving the brain and spinal cord exposed. NTDs occur in the 3rd week of pregnancy, when a special type of cell on the dorsal aspect of the fetus begins to form the neural tube. NTDs can be classified into two types, namely open NTDs and closed NTDs. Open NTDs are the most common type of these defects.
NTDs are one of the most common congenital birth abnormalities, occurring in almost one in 1,000 live births in the United States. An NTD occurs very early in the developmental period, since the neural tube is formed 28 days after conception. When the neural tube does not close adequately, an NTD develops. NTDs develop in the fetus before most women even know they are pregnant.
Types/examples of the Open Neural Tube Defect
As mentioned earlier, there are two types of NTDs: open, which are more common, and closed. A defect is said to be an open NTD if the brain or spinal cord is exposed at birth through an opening in the skull or vertebral column. Examples of open NTDs are given below:
The term anencephaly means “without head”. It is a type of open NTD that develops when the cranial end of the neural tube fails to close completely, usually between the 23rd and 26th days of pregnancy, resulting in the lack of a major part of the brain and cranium. Infants born with this defect do not have the main parts of the forebrain and are usually blind, deaf and unconscious; the absence of the forebrain ensures that the baby will never gain consciousness [3,4]. Infants having this defect are either stillborn or usually die within a few hours or days after birth.
Encephalocele is another open neural tube defect, characterized by herniation of a part of the brain through the skull in a sac-like protrusion covered with meninges. Encephaloceles can be classified into various types depending upon their location. Encephaloceles are often evident and diagnosed immediately, although in some cases small encephaloceles remain undiagnosed [3,4].
Hydranencephaly is an open type of NTD characterized by the absence of the cerebral hemispheres; instead, the skull is filled with sacs of cerebrospinal fluid.
Iniencephaly is a rare type of open NTD that causes an extreme backward bending of the head toward the spine. Diagnosis of this abnormality can usually be made on antenatal ultrasonography; however, immediately after birth “the bent head with face looking superior” will indicate the diagnosis. Usually the neck is absent. The skin of the face connects directly to the chest and the scalp connects to the upper back. The infant will usually not survive more than a few hours [3,4].
Spina bifida is an NTD characterized by incomplete closure of the bone around the spinal cord, resulting in a defect in the spinal cord. It can be further divided into two classes: spina bifida cystica and spina bifida occulta.
Spina Bifida Cystica
Spina bifida cystica can be further divided into two types, namely meningocele and myelomeningocele. Meningocele means herniation of the meninges (but not the spinal cord) through the defect in the spinal canal. Myelomeningocele means the protrusion of the meninges as well as the spinal cord through the opening [3,4].
Spina Bifida Occulta
By definition, spina bifida occulta refers to a hidden split spine. In this NTD the spinous process and the neural arch appear abnormal on X-rays, and the defect is generally harmless. In most cases the spinal cord and spinal nerves are not involved [3,4].
Causes and risk factors of the Open Neural Tube Defects
The exact cause of NTDs is still under debate but there are certain environmental and genetic factors which are thought to be important in causation of NTDs. These are (5):
NTDs in the antenatal period can be diagnosed with the following tests:
Serum alpha fetoprotein levels
Diagnosis after birth can be made by:
History and examination
X-rays of the spine (6).
There is no treatment available for anencephaly, as the babies usually do not survive more than a few hours. There is a certain role for aggressive surgical management in cases of spina bifida, meningoceles and mild myelomeningoceles. The outcome of surgical repair often depends on the amount of brain tissue involved in the encephalocele. The aim of treatment for NTDs is to allow the patient to achieve the maximum level of function [5,7].
Open neural tube defects are of various types. Folic acid deficiency in pregnant women and certain genetic factors are involved in the causation of NTDs. The prognosis of an NTD depends upon the type of defect and adequate surgical treatment.
Through our content we want to empower the lives of people with SB and to promote the prevention of it through education, public awareness and research. Working together with local organizations we aim to enhance the lives of those who are affected with SB. We want to build a stronger community and create a better future for those with SB. |
Dry mouth (Xerostomia)
- What is dry mouth?
- What causes it?
- What are the symptoms?
- How is it diagnosed?
- What is the treatment?
- Are there any self-help remedies?
What is dry mouth?
The medical term for dry mouth is xerostomia, and it is a condition which occurs when the production of saliva sharply decreases or stops altogether. Saliva is the clear, watery solution which is present in the mouth at all times, and its function is to lubricate the mouth so that we can speak and taste our food. It also aids in preventing tooth decay as it washes away food and plaque from the surface of the teeth.
People who suffer from dry mouth are at increased risk of tooth decay, gum disease and a range of other illnesses affecting the soft tissues of the mouth. The diet may also be severely affected because food cannot be tasted as it normally would.
What causes it?
There are several reasons why dry mouth may occur. Among them are the following:
- A side effect of medication: Medications are by far the most common cause of dry mouth, and the range of medications which can give rise to this condition include everything from pain relieving drugs to anti-depressants. Because older people tend to take a wider range of medication than their younger counterparts, there is a much higher rate of dry mouth among this age group. Patients who are undergoing chemotherapy/radiation therapy, or other forms of treatment for cancers in the head and neck area, are found to suffer particularly from dry mouth or xerostomia.
- A side effect of diseases and infections: Many diseases and infections can give rise to dry mouth, particularly those associated with HIV. There is also another type of auto-immune disease called Sjögren's disease which gives rise to severe xerostomia. Patients affected by Sjögren's disease find that, not only does their saliva disappear, but they are also unable to shed tears because their lachrymal glands (which secrete tears) are also attacked by the disease.
- Surgical removal of the salivary glands: In rare cases where the salivary glands have been removed surgically, xerostomia will be a permanent feature of life, and steps will be taken to treat it.
What are the symptoms?
The symptoms to watch out for in cases where xerostomia may be present are:
- Frequent and severe thirst.
- Red, raw tongue.
- A painful, burning sensation in the mouth, and especially on the tongue.
- Pain on swallowing food or water, or difficulty in speaking.
- Sores inside the mouth or at the corners of the lips.
- Any alteration in taste.
- Sore throat or hoarseness.
- Halitosis (bad breath).
- The sudden onset of dental problems, or difficulty in wearing dentures.
- Recurring yeast infections in the mouth.
Many of the above symptoms are quite common and, taken in isolation, they rarely indicate the onset of a condition as serious as xerostomia. However, if a number of them are present at the same time, medical investigation is warranted.
How is it diagnosed?
Dry mouth can be diagnosed either by your doctor or dentist. As well as taking a thorough medical history and asking for a description of your symptoms, your dentist will also look for signs of cavities and gum disease and may suggest a referral to your GP if the onset of dry mouth is suspected.
Your GP will carry out a thorough examination of your mouth to assess the flow of saliva and will also look for cracks and sores inside the mouth and around the area of the lips. They will also take account of what medication, if any, you are currently taking either with or without prescription, and if you are receiving any medical treatment for a particular condition.
What is the treatment?
The treatment of xerostomia depends on the severity of the problem, and very often the condition will remain a problem as long as its cause is present. For example, if you are receiving radiation treatment for head or neck cancer, xerostomia will remain a side-effect for as long as the treatment continues. In some cases, radiation therapy may permanently affect the ability of the salivary glands to produce saliva.
Having said that, the treatment of xerostomia focuses on three main areas:
- Relieving the symptoms of the condition.
- Preventing the onset of dental decay.
- Trying to increase the flow of saliva.
Sometimes, your doctor and dentist will work in tandem in the management of xerostomia, with your dentist concentrating on oral hygiene and the prevention of dental decay and your GP working on reducing the more unpleasant side effects of the condition, and trying to increase the flow of saliva.
Are there any self-help remedies?
The symptoms of xerostomia may be relieved to some degree if you follow some of these self-help tips:
- Give up smoking.
- Take sips of water on a regular basis to keep the mouth moist. Carry water at all times throughout the day, and keep a glass of water close by your bed at night.
- If you like chewing gum, try to stick to the sugar-free variety.
- Over-the-counter saliva substitutes and oral moisturisers are now available in many pharmacies, so they are worth checking out.
- If you feel you need to suck on something to relieve the symptoms of dry mouth, try sugar-free hard sweets, ice chips or sugar-free ice pops.
- Always use a mouth rinse which is alcohol and peroxide-free.
- Things to avoid include salty foods, dry foods, and food and drink which contain high doses of sugar. Caffeine and alcohol should also be avoided, if possible, as these increase water loss by triggering frequent urination, thereby causing further dehydration.
- Use a moisturiser on your lips to prevent them from becoming dry and chapped, and use a soft-bristle toothbrush on teeth and gums to prevent further damage inside your mouth.
Types of Magma
1. Felsic magma is rich in feldspar and silica (quartz), as opposed to mafic magma, which is rich in magnesium and iron (Fe to chemists). Felsic magma used to be called ‘acid’. It is pale grey or pinky-grey in color and derives from a melt of continental (rather than oceanic) crust.
2. Intermediate magma most commonly transforms into andesite due to the transfer of heat at convergent plate boundaries. Andesitic rocks are often found at continental volcanic arcs, such as the Andes Mountains in South America, after which they are named.
3. Mafic describes a silicate mineral or igneous rock with a chemical makeup rich in magnesium and iron. Mafic rocks and minerals are usually dark in color. Mafic magma has a low viscosity, and mafic eruptions tend to be less violent because water and other volatiles can escape more easily.
4. Ultramafic rocks (also referred to as ultrabasic rocks, although the terms are not wholly equivalent) are igneous and meta-igneous rocks with a very low silica content (less than 45%), generally more than 18% MgO, high FeO and low potassium, and are composed of usually greater than 90% mafic minerals (dark colored, high in magnesium and iron). Ultramafic magmas in the Phanerozoic are rarer, and there are very few recognized true ultramafic lavas in the Phanerozoic. Dunite is an example of an ultramafic rock.
1. Stress causes rocks to deform, meaning the rocks change size or shape. There are different kinds of stress that rocks experience, and these determine how the rocks deform. Tensional stress is when rock is stretched apart, while compressional stress is when rock is pressed together.
Lithostatic stress: Overburden pressure, also called lithostatic pressure, confining pressure or vertical stress, is the pressure or stress imposed on a layer of soil or rock by the weight of overlying material.
Stress is the force applied to a rock, which may cause deformation. The three main types of stress go along with the three types of plate boundaries: compression is common at convergent boundaries, tension at divergent boundaries, and shear at transform boundaries. |
Of the representations which Egyptian Antiquity presents us with, one figure must be especially noticed, viz., the Sphinx -- in itself a riddle -- an ambiguous form, half brute, half human. The Sphinx may be regarded as a symbol of the Egyptian Spirit. The human head looking out from the brute body, exhibits Spirit as it begins to emerge from the merely Natural -- to tear itself loose there from and already to look more freely around; without, however, entirely freeing itself from the fetters Nature had imposed. The innumerable edifices of the Egyptians are half below the ground, and half rise above it into the air. The whole land is divided into a kingdom of life and a kingdom of death.
G. W. F. Hegel - The Philosophy of History, page 218 (first published in 1837)
The Great Sphinx of Giza (Construction Date Unknown)
A Sphinx was a symbolic creature that played an important role in both Ancient Egypt and Greece.
In Egyptian Art:
The Sphinx seems to have originated in Egypt, no later than the beginning of the Old Kingdom (circa 2700 B.C.). In Egyptian art it was usually portrayed as a recumbent male lion with the head of a ram, bird, or human. The largest and most famous Egyptian sphinx is the Great Sphinx of Giza, situated on the west bank of the Nile, facing due east, with a small temple between its paws. Other famous Egyptian sphinxes include the Alabaster Sphinx of Memphis, the Avenue of the Ram-Headed Sphinxes and the Avenue of the Human-Headed Sphinxes that line the roadway linking the huge New Kingdom Temples of Karnak and Luxor (ancient Egyptian Thebes) respectively.
The name "Sphinx" is the term used by the ancient Greeks for these creatures. The ancient Egyptian word for Sphinx is unknown. The Arabic name for the Great Sphinx is Abu al-Hôl, which translates as "Father of Terror."
In Greek Art:
The symbol was exported to Mycenaean Greece during the Late Bronze Age; however, in Greece, the gender of the Sphinx was always female. In fact, the name "Sphinx" is derived from the Greek and means "to strangle" (Σφιγξ).
The most famous Sphinx in Greek mythology appears in the story of Oedipus, the legendary King of the Bronze Age Greek City of Thebes. In this story, the Sphinx was a demon of destruction and bad luck. She was usually represented in Greek vase-painting and bas-reliefs seated in an upright rather than a recumbent position, and was frequently depicted as a winged lion with a woman's head and breasts. In the myth, the Goddess Hera, to punish the Thebans, had sent the Sphinx from her Ethiopian homeland to sit on a high rock along the main road leading to Thebes and ask all passersby a most famous riddle:
Which creature in the morning goes on four feet, at noon on two, and in the evening upon three?
She strangled anyone unable to answer. Oedipus was the first person to solve the riddle. His correct answer was "a man" -- a man crawls on all fours as a baby, walks on two feet as an adult, and then walks with a cane in old age. Defeated at last, the Sphinx, in a fury, threw herself from her high rock and died.
The subject of Oedipus and the Sphinx has been a popular one for artists to portray over the centuries. For example: in European art, two famous paintings with this title were created by the 19th century French painters Ingres (in 1808) and Moreau (in 1864). See the following:
The Great Sphinx:
The Great Sphinx is a mixture of both sculpture and architecture which guards the Giza Plateau in Egypt; it probably is the most famous and mysterious object of art on the planet Earth. Since time immemorial it has been regarded in awe by numerous philosophers, mystics, military conquerors and archaeologists. The Sphinx is located within the pyramid complex of Pharaoh Khafre (in Greek = Khephren) of the Fourth Dynasty. Khafre ruled from about 2558 - 2532 B.C. and his pyramid is second only to the Great Pyramid in size. The Sphinx is also huge, being 240 feet long and 66 feet high; its paws are 50 feet long and its head is 30 feet long and 14 feet wide.
The Sphinx at Giza is one of the few works of art that was specifically mentioned by Gurdjieff as being an example of Objective Art. In P. D. Ouspensky's In Search of the Miraculous (at page 27), Gurdjieff is recorded as making the following remarks:
... "At the same time the same work of art will produce different impressions on people of different levels. And people of lower levels will never receive from it what people of higher levels receive. This is real, objective art. Imagine some scientific work -- a book on astronomy or chemistry. It is impossible that one person should understand it in one way and another in another way. Everyone who is sufficiently prepared and who is able to read this book will understand what the author means, and precisely as the author means it. An objective work of art is just such a book, except that it affects the emotional and not only the intellectual side of man.”
“Do such works of objective art exist at the present day?” I [Ouspensky] asked.
“Of course they exist,” answered Gurdjieff. “The great Sphinx in Egypt is such a work of art, as well as some historically known works of architecture, certain statues of gods, and many other things. ...
Cyril Aldred's Description of the Great Sphinx
Most Egyptologists believe that the Sphinx was constructed by Pharaoh Khephren and originally represented a composite of the god-king Khephren, the sun god Re-Atum and the ferocious powers of a lion; its initial purpose was to protect the Khephren pyramid-tomb. The following is a description of the sculpture by the noted museum curator and Egyptologist, Cyril Aldred (1914-1991), from his book entitled The Egyptians (Third Edition - 1998), page 106:
The immense necropolis of the Giza plateau, a veritable city of the dead elite of their time, was protected by a guardian colossus, the Great Sphinx. This huge statue of a recumbent lion with the head of a king, in this example, Khephren, is hewn out of a knoll of rock left after the extraction of stone from a local quarry. It is the first known version, and the largest, of a type of representation that haunted the imagination of the ancients, and attracted legends around it in Egypt, before the days of Oedipus. According to Egyptian belief, the lion and its derivative, the sphinx, were the protectors of thresholds and would seize any intruder who violated sacred precincts. By the New Kingdom at the latest, however, the Great Sphinx was regarded as a manifestation of Re-Herakhty, the sponsor of the ruling king, and its connection with Khephren had been almost entirely lost.
Comments of René Adolphe Schwaller de Lubicz
Schwaller de Lubicz in 1960
Since he spent more than fifteen years studying the monuments of ancient Egypt, the comments of Schwaller de Lubicz concerning the Great Sphinx should be of particular importance. Unfortunately, his published works provide few comments concerning the aesthetic meaning of the Sphinx; however, he does venture some observations concerning its great antiquity. In his book entitled Sacred Science (pages 96-97) he tells us the following:
In any case, there was an unbroken tradition concerning an alluvial origin for the Delta and the existence of a maritime gulf before this ushering in of earth by the Nile. A great civilization must have preceded the vast movements of water that passed over Egypt, which leads us to assume that the Sphinx already existed, sculptured in the rock of the west cliff at Giza, that Sphinx whose leonine body, except for the head, shows indisputable signs of aquatic erosion.
We have no idea when and how the submersion of the Sphinx took place. Both ancient and modern texts concerning this monument are rare and remain evasive. No Greek traveler makes mention of it and Pliny devotes only a few lines to it after having described the Pyramids:
In front of them is the Sphinx, which deserves to be described even more, and yet the Egyptians have passed it over in silence. The inhabitants of the region regard it as a deity.
It is known that the great hollow carved into the rock around the Sphinx was filled up several times during the course of history by sand dunes which submerged all but the head. A commemorative stela erected between the paws of the Sphinx recounts how Thothmes IV (1425 B.C.) had the sand cleared away from it during the first year of his reign. Wishing to perform an act of worship to Harmachis once while he was out hunting, the king stopped and drew near to the Sphinx:
Now, a great magical power had existed in this place from the beginning of all time and it extended over all the region. . . . And at this time, the Sphinx-form of the most mighty god Khepera came to this place and the greatest of Souls, the holiest of the holy ones, rested therein.
The sun being at its zenith, Thothmes IV became drowsy, and the Sphinx spoke to him, saying:
Behold me, O my son Thothmes. . . the sand whereon I have my being hath enveloped me on all sides; say unto me that thou wilt do for me all that I desire.
It is thus that the Sphinx was liberated from the sands, but according to Maspero, it seems that this was not the first time:
The stela of the Sphinx bears, on line 13, the cartouche of Khephren in the middle of a gap. . . . There, I believe, is the indication of an excavation of the Sphinx carried out under this prince, and consequently the more or less certain proof that the Sphinx was already covered with sand during the time of Cheops and his predecessors.
A legend affirms that even in Cheops’ day, the age of the Sphinx was already so remote that it was impossible to situate it in time. This Sphinx is a human and colossal work. There is an enigma about it that is linked with the very enigma posed by the Sphinx itself. The account of Diodoros is the only document that sheds any light on the Nile’s flooding of the valley during a remote epoch when the land was already inhabited by a great people. This took place during the reign of Osiris, after this Neter had founded several famous cities:
Then it happened that the Nile, at the time of the rising of Sirius which is the season when the river is usually at flood, breaking out of its banks inundated a large section of Egypt, particularly that part where Prometheus was governor. Few inhabitants escaped from this deluge.
Observe that in the myth, Osiris represents the waters of the West and of renewal. Rather than seeing a simple symbolization of the territory in the mythical legends of Osiris and Horus, as Plutarch does, would it not be wiser to see them as the traditional description of the great events which formed the land of Egypt? Then the reign of Osiris would evoke the flowering of a great civilization become legendary, which preceded the destruction by the waters of the river. The myth’s death and resurrection of Osiris would admirably motivate the fact we have just described. Following this destruction by the flooding came the reign of Horus.
The Osirian principle is that of karmic religion—whose profound significance we will later examine—while the revelation of Horus is reserved for the intimate teaching of the temple, for those who have renounced the illusions of this earth.
The above ideas of Schwaller de Lubicz have been seriously analyzed by a Gurdjieff student, John Anthony West, and a professional geologist, Robert Schoch. Their findings are summarized in the book by West entitled Serpent in the Sky.
P. D. Ouspensky's Assessment
P. D. Ouspensky in about 1935
A few years before he met Gurdjieff, P. D. Ouspensky visited Egypt and spent some time viewing its ancient monuments, including the Great Sphinx of Giza. The following is his assessment of the Sphinx, written in about 1914, as published in his book entitled A New Model of the Universe (pages 362-365): |
Inside your skull is a massive supercomputer. You own it free and clear. With its 100 billion neurons, and with a typical neuron linking to 1000 to 10,000 other neurons, your highly networked brain is incredibly powerful and capable.
Pick up a simple object nearby like a pen or a spoon, and look at it. Turn it upside down. Spin it around. Notice that your brain is able to recognize the object no matter how you position it. You can change the lighting by putting the object in shadow. You can obscure part of it from view. You can bend or break it. And your brain still recognizes that object simply and easily. Even a child can do this.
But what’s happening under the hood? Your visual cortex, consisting of about 538 million neurons, is doing an enormous amount of parallel processing on the signals it’s receiving from your eyes. Your visual cortex detects edges, evaluates color, tracks motion, interprets reflection, and more — all in real time. |
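To make the idea concrete, here is a toy sketch (my own illustration, not drawn from any neuroscience source) of one such operation — detecting a vertical edge with a small convolution kernel, something the visual cortex does across the whole visual field at once. The image and kernel are made up for the example:

```python
# A toy version of one operation the visual cortex performs massively in
# parallel: detecting a vertical edge with a small convolution kernel.

# 6x6 "image": dark left half, bright right half -> one vertical edge
image = [[0.0] * 3 + [1.0] * 3 for _ in range(6)]

# Sobel-style kernel that responds to left-to-right brightness jumps
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]

def convolve(img, k):
    """Naive 'valid' 2-D cross-correlation, as used in image processing."""
    kh, kw = len(k), len(k[0])
    rows = len(img) - kh + 1
    cols = len(img[0]) - kw + 1
    return [[sum(img[i + di][j + dj] * k[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(cols)]
            for i in range(rows)]

edges = convolve(image, kernel)
# The response peaks at the columns straddling the brightness jump
print(edges[0])   # [0.0, 4.0, 4.0, 0.0]
```

Real vision is vastly more complex, but the point stands: your brain runs billions of tiny computations like this one in parallel, effortlessly.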
Muscles of the Neck
Deep muscles of the neck (not illustrated) are responsible for swallowing. Superficial muscles of the neck move the head (see Table 7.2 and Figure 7.13).
Swallowing is an important activity that begins after we chew our food. First, the tongue (a muscle) and the buccinators squeeze the food back along the roof of the mouth toward the pharynx. An important bone that functions in swallowing is the hyoid. The hyoid is the only bone in the body that does not articulate with another bone. Muscles that lie superior to the hyoid, called the suprahyoid muscles, and muscles that lie inferior to the hyoid, called the infrahyoid muscles, move the hyoid. These muscles lie deep in the neck and are not illustrated in Figure 7.13. The suprahyoid muscles pull the hyoid forward and upward toward the mandible. Because the hyoid is attached to the larynx, this pulls the larynx upward and forward. The epiglottis now lies over the glottis and closes the respiratory passages. Small palatini muscles (not illustrated) pull the soft palate backward, closing off the nasal passages. Pharyngeal constrictor muscles (not illustrated) push the bolus of food into the pharynx, which widens when the suprahyoid muscles move the hyoid. The hyoid bone and larynx are returned to their original positions by the infrahyoid muscles. Notice that the suprahyoid and infrahyoid muscles are antagonists.
Muscles That Move the Head
Two muscles in the neck are of particular interest: The sternocleidomastoid and the trapezius are listed in Table 7.2 and illustrated in Figure 7.13. Recall that flexion is a movement that closes the angle at a joint and extension is a movement that increases the angle at a joint. Recall that abduction is a movement away from the midline of the body, while adduction is a movement toward the midline. Also, rotation is the movement of a part around its own axis.
Sternocleidomastoid muscles ascend obliquely from their origin on the sternum and clavicle to their insertion on the mastoid process of the temporal bone. Which part of the body do you expect them to move? When both sternocleidomastoid muscles contract, flexion of the head occurs. When only one contracts, the head turns to the opposite side. If you turn your head to the right, you can see how the left sternocleidomastoid shortens, pulling the head to the right.
Each trapezius muscle is triangular, but together, they take on a diamond or trapezoid shape. The origin of a trapezius is at the base of the skull. Its insertion is on a clavicle and scapula. You would expect the trapezius muscles to move the scapulae, and they do. They adduct the scapulae when the shoulders are shrugged or pulled back. The trapezius muscles also help extend the head, however. The prime movers for head extension are actually deep to the trapezius and not illustrated in Figure 7.13.
Muscles of the Trunk
The muscles of the trunk are listed in Table 7.3 and illustrated in Figure 7.14. The muscles of the thoracic wall are primarily involved in breathing. The muscles of the abdominal wall protect and support the organs within the abdominal cavity.
Muscles of the Thoracic Wall
External intercostal muscles occur between the ribs; they originate on a superior rib and insert on an inferior rib. These muscles elevate the rib cage during the inspiration phase of breathing. The diaphragm is a dome-shaped muscle that, as you know, separates the thoracic cavity from the abdominal cavity (see Fig. 1.5). Contraction of the diaphragm also assists inspiration. Internal intercostal muscles originate on an inferior rib and insert on a superior rib. These muscles depress the rib cage and contract only during a forced expiration. Normal expiration does not require muscular action.
Figure 7.14 Muscles of the anterior shoulder and trunk. The right pectoralis major is removed to show the deep muscles of the chest.
Muscles of the Abdominal Wall
The abdominal wall has no bony reinforcement (Fig. 7.14). The wall is strengthened by four pairs of muscles that run at angles to one another. The external and internal obliques and the transversus abdominis occur laterally, but the fasciae of these muscle pairs meet at the midline of the body, forming a tendinous area called the linea alba. The rectus abdominis is a superficial medial pair of muscles. All of the muscle pairs of the abdominal wall compress the abdominal cavity and support and protect the organs within the abdominal cavity. External and internal obliques occur on a slant and are at right angles to one another between the lower ribs and the pelvic girdle. The external obliques are superior to the internal obliques. These muscles also aid trunk rotation and lateral flexion. Transversus abdominis, deep to the obliques, extends horizontally across the abdomen. The obliques and the transversus abdominis are synergistic muscles.
Rectus abdominis has a straplike appearance but takes its name from the fact that it runs straight (rectus means straight) up from the pubic bones to the ribs and sternum. These muscles also help flex and rotate the lumbar portion of the vertebral column. |
House of Aluminium
What is Aluminium?
The chief ore of aluminium is bauxite. Bauxite was named after the village Les Baux in southern France, where it was first recognised as containing aluminium and named by the French geologist Pierre Berthier in 1821. Bauxite is usually strip mined because it is almost always found near the surface of the terrain, with little or no overburden. Approximately 70% to 80% of the world’s dry bauxite production is processed first into alumina, and then into aluminium.
Aluminium is the third most abundant element (after oxygen and silicon), and the most abundant metal, in the Earth’s crust. It makes up about 8% by weight of the Earth’s solid surface. Aluminium is a relatively soft, durable, lightweight, ductile and malleable metal with appearance ranging from silvery to dull gray, depending on the surface roughness. It is non-magnetic and does not easily ignite.
Aluminium is known for its corrosion resistance, due to a thin surface layer of aluminium oxide that forms when the metal is exposed to air, effectively preventing further oxidation. Owing to its resistance to corrosion, aluminium is one of the few metals that retain silvery reflectance in finely powdered form, making it an important component of silver-coloured paints.
Newton’s theory of gravitation assumes that the speed of gravity is infinite and the gravitational interaction is instantaneous. However, Einstein’s theory postulates that it is exactly equal to the speed of light. A team of Chinese physicists led by Tang Ke Yun, at the Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, China, has measured the speed of gravity with a relative error of about 5% by using Earth tides during three solar/lunar eclipses [1]. The resulting value, between 0.93 and 1.05 times the speed of light, confirms the result postulated in Einstein’s theory. It is expected that new measurements using the same method, but with better gravimeters, could reduce the error by about an order of magnitude.
The Earth tides are essentially the motions in the solid earth induced by the lunar and solar tidal forces. These tidal forces cause a mass redistribution of the solid Earth which results in an additional force. The vector sum of the lunar tidal force, the solar tidal force and this additional force creates a phase lag which can be used to determine the speed of gravity during a solar/lunar eclipse, when the Sun, Moon and Earth are on the straight line observed from a point on the Earth.
The true position of the Sun cannot be observed directly, so the solar tidal force is calculated from its apparent position. Using a formula that incorporates the difference between the time of gravity issued from the Sun and the time of gravity received at ground station, the speed of gravity could be inferred. This time difference has two origins: the first one is the phase lag from the delayed response of Earth to solar tidal force due to its anelasticity and the second one is the time difference caused by the travel speed from the Sun to the ground station, if the real travel speed cg is different from the speed of light c. Both effects can be incorporated in the formula used in the analysis.
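For a rough sense of the scales involved (my own back-of-envelope arithmetic, not the authors' analysis): gravity, like light, takes about 8.3 minutes to travel from the Sun, and a speed of gravity even a few percent below c would shift the arrival time by tens of seconds — the kind of offset the measured phase lag is sensitive to.

```python
# Back-of-envelope only; rounded constants, not the authors' data.
AU = 1.496e11        # mean Earth-Sun distance in metres
c = 2.998e8          # speed of light in m/s

travel_time = AU / c                  # ~499 s one-way delay for light (and gravity)
cg = 0.93 * c                         # lower bound of the reported speed ratio
extra_delay = AU / cg - travel_time   # additional lag if gravity were slower

print(f"one-way travel time: {travel_time:.0f} s")
print(f"extra delay at cg = 0.93c: {extra_delay:.1f} s")
```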
Observational data were recorded on July 7, July 22, and August 5 of 2009 by two spring gravimeters coupled to rubidium clocks located near the western border of China, very far away from the effect of tides in the ocean: the Shiquanhe (SQH) station in Tibet and the Wushi (WS) station in Xinjiang, China. On these days the phase of the Moon was either “new moon” or “full moon,” and the Sun, the Earth and the Moon were almost collinear. A low-pass filter was used to filter the high-frequency disturbances in the signal of the spring gravimeters, as shown in Figure 1. The speed ratios of gravity to light are presented in Table 1, resulting in the quoted value from 0.93 to 1.05 for the speed ratio of gravity to light.
The 5% error in the speed of gravity can be reduced in the future by several means. First, the use of either superconducting gravimeters or atom interferometers will reduce the systematic error in the tidal gravity measurements. Second, more Earth-tide data from those stations and the use of additional stations will reduce the statistical errors. And third, other methods for the measurement of the speed of gravity, such as comparison with the Coulomb speed as proposed by Yin Zhu [2], could be used.
For some readers, the use of Newton’s theory instead of Einstein’s in order to measure the speed of gravity could be surprising. However, the measurement of the speed of gravity by using Einstein’s theory has no meaning, since within general relativity gravity propagates along null geodesics. The speed of light in vacuum, c, and the speed of gravity, cg, are both determined by exactly the same metric information, hence it is impossible to declare c and cg to be two separate parameters in general relativity. The only possibility is the use of a parametrized post-Newtonian (PPN) theory, a relativistic theory of gravitation on a fixed Minkowskian spacetime which introduces corrections to Newton’s theory by using an expansion in powers of 1/c².
The problem with the use of PPN theories for the measurement of the speed of gravity is that such a test is model dependent, since the speed of gravity enters the PPN expansion at powers of 1/c⁴. The corresponding PPN parameters are known with a large experimental uncertainty, and small deviations due to c ≠ cg can be accommodated into a PPN theory with instantaneous gravity waves by properly changing its parameters within current experimental limits. For this reason, previous claims of the measurement of the speed of gravity [3] have been strongly criticized.
Finally, let me remark that a direct measurement of the speed of gravity could be obtained after the direct observation of gravitational waves. But, currently, only indirect observation has been obtained using binary pulsars, like PSR 1913+16, whose orbital decay agree at the ~1% level with general relativity (when it is assumed that energy is emitted in the form of quadrupole gravitational radiation). These results yield experimental insights into the speed of the gravitational waves, but cannot result in a direct measurement.
1. K. Y. Tang, C. C. Hua, W. Wen, S. L. Chi, Q. Y. You, D. Yu, "Observational evidences for the speed of the gravity based on the Earth tide," Chinese Science Bulletin, Open Access, Available Online, December 2012. DOI: 10.1007/s11434-012-5603-3
2. Y. Zhu, "Measurement of the Speed of Gravity," Chinese Physics Letters 28: 070401, 2011. DOI: 10.1088/0256-307X/28/7/070401
3. E. B. Fomalont, S. M. Kopeikin, "The measurement of the light deflection from Jupiter: experimental results," The Astrophysical Journal 598, 704–711, 2003. DOI: 10.1086/378785
When it comes to healthy soil, the answer is an unequivocal “Yes!”
Organic matter is the decomposed organic material that is added to soil – leaves, plant parts and composted manure – basically, plant and animal remains.
Organic matter is an important component of healthy soil because it improves overall soil structure, increases nutrient levels, improves water-holding capability, helps stabilize pH levels and aids in soil erosion reduction. Organic matter is also the food source for the many microorganisms and earthworms that work away at decomposing the added soil amendments. All of this, in turn, provides the optimum environment for both healthy plants and healthy microbes. Regularly adding organic matter throughout the season will help ensure the soil remains healthy and has the power to keep your garden glowing.
Plant roots also play an important role in organic matter contribution. Dead roots feed the soil microorganisms and live ones release carbon dioxide, oxygen and organic elements that aid in nutrient development and its availability to the plants.
The organic matter in soil provides most of the needed nitrogen for plant growth and the relationship between what lives in the soil and organic matter decomposition is closely intertwined – one cannot function without the other. If that interaction failed to exist the needed nutrients would not be available for the plant life and the soil structure would decline, resulting in an unproductive (and unattractive) garden bed.
There are a number of factors that influence the activity of the microbial community and, therefore, the rate of decomposition of the organic matter; these should be considered when adding amendments to the soil. Microorganisms are most effective, and the rate of decomposition most rapid, when temperatures are between 0°C and 45°C. Temperatures above or below this range will slow, or stop, the decomposition process.
Determine your soil’s pH (acid/alkaline). The soil pH directly affects the type of microbe that lives there, which in turn, impacts the level of decomposition activity. The rate of decomposition is greater in neutral soils – pH of 7, so if need be, adjust the pH level to welcome the right microorganisms and improve their ability to do the job quickly and efficiently.
Adequate soil moisture is key for the proper decomposition of organic matter. Most microbes prefer a damper home. However, excessive water will lead to reduced microbe activity due to reduced aeration – the soil pores are filled with water instead of oxygen, a much-needed component for faster and complete decomposition. Waterlogged areas will decompose slowly.
So, keep composting those carrot tops. Shred the late season leaves and dig them into your beds and borders! And don’t consider adding soil amendments as just spring or summer chores. Give your vegetable beds a nutrition boost, reduce winter erosion and keep weeds at bay by planting cover crops (aka “green manure”) such as hairy vetch or winter rye when the cool fall weather arrives. As the summer heat hits once again, your tomatoes and peppers will thank you.
May 14, 11:00 am - Strathroy Public Library - Gardening Myths and Legends.
March 22 - May 12 - London Middlesex Master Gardener "Seeds to Table" Instructor. |
In 1993, the movie "Jurassic Park" popularised the modern study of dinosaurs. Dinosaurs roamed during the Mesozoic Era, which lasted from about 252 to 66 million years ago. Children are curious, and teaching them about dinosaurs using displays is a simple yet enjoyable way for them to learn.
Create a landscape wall display using coloured paper and children's artwork. Decorate the wall in a simple blue and green landscape environment. Involve your students in decorating a dinosaur of their choice and labelling it with the scientific name, its common name and era and period it roamed the earth. For example, the Tyrannosaurus Rex lived during the Cretaceous Period of the Mesozoic Era.
Create a Timeline
Create a timeline of the Mesozoic Era and the periods it encompassed. Line one or two walls in your room with a border of a timeline. It should be low enough for children to reach. Start the timeline with the Early Triassic Period and extend it to the Late Cretaceous Period, which is where the Mesozoic ends. Have your students find illustrations either online or in magazines of dinosaurs from each period and have them label each one appropriately.
Create a Realistic Diorama
Create a realistic display of a dinosaur. For example, the Herrerasaurus is not well-known, but lived in the Late Triassic Period in regions of South America, such as Argentina. Find a picture or toy of your chosen species and fill your diorama with leaves and fake trees that may have been common for the time. If you choose to use an illustration, cut it out, paste it to cardboard and make a simple stand so it stays upright. Use this to start a discussion of dinosaurs and the different characteristics of each.
Deadly Tides Mean Early Exit for Hot Jupiters
Bad news for planet hunters: most of the "hot Jupiters" that astronomers have been searching for in star clusters were likely destroyed long ago by their stars. In a paper accepted for publication by the Astrophysical Journal, John Debes and Brian Jackson of NASA's Goddard Space Flight Center in Greenbelt, Md., offer this new explanation for why no transiting planets (planets that pass in front of their stars and temporarily block some of the light) have been found yet in star clusters. The researchers also predict that the planet hunting being done by the Kepler mission is more likely to succeed in younger star clusters than older ones.
"Planets are elusive creatures," says Jackson, a NASA Postdoctoral Program fellow at Goddard, "and we found another reason that they're elusive."
When astronomers began to search for planets in star-packed globular clusters about 10 years ago, they hoped to find many new worlds. One survey of the cluster called 47 Tucanae (47 Tuc), for example, was expected to find at least a dozen planets among the roughly 34,000 candidate stars. "They looked at so many stars, people thought for sure they would find some planets," says Debes, a NASA Postdoctoral Program fellow at Goddard. "But they didn't."
More than 450 exoplanets (short for "extrasolar planets," or planets outside our solar system) have been found, but "most of them have been detected around single stars," Debes notes.
"Globular clusters turn out to be rough neighborhoods for planets," explains Jackson, "because there are lots of stars around to beat up on them and not much for them to eat." The high density of stars in these clusters means that planets can be kicked out of their solar systems by nearby stars. In addition, the globular clusters surveyed so far have been rather poor in metals (elements heavier than hydrogen and helium), which are the raw materials for making planets; this is known as low metallicity.
Debes and Jackson propose that hot Jupiters—large planets that are at least 3 to 4 times closer to their host stars than Mercury is to our sun—are quickly destroyed. In these cramped orbits, the gravitational pull of the planet on the star can create a tide—that is, a bulge—on the star. As the planet orbits, the bulge on the star points a little bit behind the planet and essentially pulls against it; this drag reduces the energy of the planet's orbit, and the planet moves a little closer to the star. Then the bulge on the star gets bigger and saps even more energy from the planet's orbit. This continues for billions of years until the planet crashes into the star or is torn apart by the star's gravity, according to Jackson's model of tidal orbital decay.
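The runaway character of this in-spiral can be sketched with a toy calculation. In the standard equilibrium-tide scaling (a textbook approximation, not the specific model of Debes and Jackson), the orbit shrinks as da/dt ∝ −a^(−11/2), so the decay rate grows steeply as the planet moves inward. Units and constants below are arbitrary, chosen only to show the behaviour:

```python
# Toy tidal orbital decay: Euler-integrate da/dt = -k * a**(-11/2).
# All units are arbitrary; this is an illustration, not the paper's model.
def crash_time(a0, k=1e-3, dt=0.1, t_max=2.0e4):
    """Integrate until the planet reaches the stellar surface
    (taken here as a = 0.1 in these arbitrary units)."""
    a, t = a0, 0.0
    while t < t_max and a > 0.1:
        a -= k * a**(-5.5) * dt
        t += dt
    return t

t_close = crash_time(1.0)   # a hot Jupiter
t_far = crash_time(2.0)     # the same planet starting twice as far out
print(t_close, t_far)
```

Doubling the starting distance lengthens the survival time by roughly two orders of magnitude, which is why only the closest-in planets are destroyed within a cluster's lifetime.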
"The last moments for these planets can be pretty dramatic, as their atmospheres are ripped away by their stars' gravity," says Jackson. "It has even been suggested recently the hot Jupiter called WASP-12B is close enough to its star that it is currently being destroyed."
Debes and Jackson modeled what would have happened in 47 Tuc if the tidal effect were unleashed on hot Jupiters. They recreated the range of masses and sizes of the stars in that cluster and simulated a likely arrangement of planets. Then they let the stars' tides go to work on the close-in planets. The model predicted that so many of these planets would be destroyed, the survey would come up empty-handed. "Our model shows that you don't need to consider metallicity to explain the survey results," says Debes, "though this and other effects will also reduce the number of planets."
Ron Gilliland, who is at the Space Telescope Science Institute in Baltimore and participated in the 47 Tuc survey, says, "This analysis of tidal interactions of planets and their host stars provides another potentially good explanation—in addition to the strong correlation between metallicity and the presence of planets—of why we failed to detect exoplanets in 47 Tuc."
In general, Debes and Jackson's model predicts that one-third of the hot Jupiters will be destroyed by the time a cluster is a billion years old, which is still juvenile compared to our solar system (about 4-1/2 billion years old). 47 Tuc has recently been estimated to be more than 11 billion years old. At that age, the researchers expect more than 96% of the hot Jupiters to be gone.
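These two figures are roughly consistent with a constant-rate destruction picture (a crude check of my own, not the paper's calculation): calibrating an exponential survival law so that one-third of the planets are gone at 1 billion years predicts about 99% destroyed at 11 billion years, comfortably above the quoted 96%:

```python
import math

# Crude consistency check, not the paper's model: assume a constant
# per-planet destruction rate, so the surviving fraction is exp(-t / tau).
tau = -1.0 / math.log(2.0 / 3.0)   # e-folding time (Gyr) so 1/3 die by 1 Gyr
destroyed_at_11 = 1.0 - math.exp(-11.0 / tau)
print(f"e-folding time: {tau:.2f} Gyr")
print(f"fraction destroyed by 11 Gyr: {destroyed_at_11:.1%}")
```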
The Kepler mission, which is searching for hot Jupiters and smaller, Earth-like planets, gives Debes and Jackson a good chance to test their model. Kepler will survey four open clusters—groups of stars that are not as dense as globular clusters—ranging from less than half a billion to nearly 8 billion years old, and all of the clusters have enough raw materials to form significant numbers of planets, Debes notes. If tidal orbital decay is occurring, Debes and Jackson predict, Kepler could find up to three times more Jupiter-sized planets in the youngest cluster than in the oldest one. (An exact number depends on the brightness of the stars, the planets' distance from the stars, and other conditions.)
"If we do find planets in those clusters with Kepler," says Gilliland, a Kepler co-investigator, "looking at the correlations with age and metallicity will be interesting for shaping our understanding of the formation of planets, as well as their continued existence after they are formed."
If the tidal orbital decay model proves right, Debes adds, planet hunting in clusters may become even harder. "The big, obvious planets may be gone, so we'll have to look for smaller, more distant planets," he explains. "That means we will have to look for a much longer time at large numbers of stars and use instruments that are sensitive enough to detect these fainter planets."
The Kepler mission is managed by NASA's Ames Research Center, Moffett Field, Calif., for the Science Mission Directorate at NASA Headquarters in Washington.
20 Years of Hubble: Exoplanets
NASA's Goddard Space Flight Center |
Confucianism is a Chinese ethical and philosophical system developed from the teachings of the Chinese philosopher Confucius (Kǒng Fūzǐ, or K'ung-fu-tzu, lit. "Master Kong", 551–479 BC). Confucianism originated as an "ethical-sociopolitical teaching" during the Spring and Autumn Period, but later developed metaphysical and cosmological elements in the Han Dynasty. Following the abandonment of Legalism in China after the Qin Dynasty, Confucianism became the official state ideology of China, until it was replaced by the "Three Principles of the People" ideology with the establishment of the Republic of China, and then Maoist Communism after the ROC was replaced by the People's Republic of China in Mainland China.
The core of Confucianism is humanism, the belief that human beings are teachable, improvable and perfectible through personal and communal endeavour especially including self-cultivation and self-creation. Confucianism focuses on the cultivation of virtue and maintenance of ethics, the most basic of which are ren, yi, and li. Ren is an obligation of altruism and humaneness for other individuals within a community, yi is the upholding of righteousness and the moral disposition to do good, and li is a system of norms and propriety that determines how a person should properly act within a community. Confucianism holds that one should give up one's life, if necessary, either passively or actively, for the sake of upholding the cardinal moral values of ren and yi. Although Confucius the man may have been a believer in Chinese folk religion, Confucianism as an ideology is humanistic and non-theistic, and does not involve a belief in the supernatural or in a personal god.
Cultures and countries strongly influenced by Confucianism include mainland China, Taiwan, Korea, Japan and Vietnam, as well as various territories settled predominantly by Chinese people, such as Singapore. Although Confucian ideas prevail in these areas, few people outside of academia identify themselves as Confucian, and instead see Confucian ethics as a complementary guideline for other ideologies and beliefs, including democracy, Marxism, capitalism, Christianity, Islam and Buddhism.
Humanism is at the core of Confucianism. A simple way to appreciate Confucian thought is to consider it as being based on varying levels of honesty, and a simple way to understand Confucian thought is to examine the world by using the logic of humanity. In practice, the primary foundation and function of Confucianism is as an ethical philosophy to be practiced by all the members of a society. Confucian ethics is characterized by the promotion of virtues, encompassed by the Five Constants, or the Wuchang (五常), extrapolated by Confucian scholars during the Han Dynasty. The five virtues are Ren (仁, Humaneness), Yi (義, Righteousness or Justice), Li (禮, Propriety or Etiquette), Zhi (智, Knowledge), and Xin (信, Integrity). They are accompanied by the classical Sizi (四字) with four virtues: Zhong (忠, Loyalty), Xiao (孝, Filial piety), Jie (節, Continency), and Yi (義, Righteousness). There are still many other elements, such as Cheng (誠, honesty), Shu (恕, kindness and forgiveness), Lian (廉, honesty and cleanness), Chi (恥, a sense of shame and of right and wrong), Yong (勇, bravery), Wen (溫, kind and gentle), Liang (良, good, kindhearted), Gong (恭, respectful, reverent), Jian (儉, frugal), and Rang (讓, modesty, self-effacement). Among all these elements, Ren (Humaneness) and Yi (Righteousness) are fundamental.
See main article: Ren (Confucianism).
Ren is one of the basic virtues promoted by Confucius, and is an obligation of altruism and humaneness for other individuals within a community. Confucius' concept of humaneness is probably best expressed in the Confucian version of the Ethic of reciprocity, or the Golden Rule: "Do not do unto others what you would not have them do unto you."
Confucius never stated whether man was born good or evil, noting only that "By nature men are similar; by practice men are wide apart", implying that whether good or bad, Confucius must have perceived all men to be born with intrinsic similarities, but that man is conditioned and influenced by study and practice. Xunzi's opinion is that men originally just want what they instinctively want, regardless of the positive or negative results it may bring, so cultivation is needed. In Mencius' view, all men are born to share goodness such as compassion and a good heart, although they may become wicked. The Three Character Classic begins with "People at birth are naturally good (kind-hearted)", which stems from Mencius' idea. All these views ultimately lead to recognizing the importance of human education and cultivation.
Rén also has a political dimension. If the ruler lacks rén, Confucianism holds, it will be difficult if not impossible for his subjects to behave humanely. Rén is the basis of Confucian political theory: it presupposes an autocratic ruler, exhorted to refrain from acting inhumanely towards his subjects. An inhumane ruler runs the risk of losing the "Mandate of Heaven", the right to rule. A ruler lacking such a mandate need not be obeyed. But a ruler who reigns humanely and takes care of the people is to be obeyed strictly, for the benevolence of his dominion shows that he has been mandated by heaven. Confucius himself had little to say on the will of the people, but his leading follower Mencius did state on one occasion that the people's opinion on certain weighty matters should be considered.
See main article: Li (Confucianism). In Confucianism, the term "li", sometimes translated into English as rituals, customs, rites, etiquette, or morals, refers to any of the secular social functions of daily life, akin to the Western term for culture. Confucius considered education and music as various elements of li. Li were codified and treated as a comprehensive system of norms, guiding the propriety or politeness which colors everyday life. Confucius himself tried to revive the etiquette of earlier dynasties.
It is important to note that, although li is sometimes translated as "ritual" or "rites", it has developed a specialized meaning in Confucianism, as opposed to its usual religious meanings. In Confucianism, the acts of everyday life are considered rituals. Rituals are not necessarily regimented or arbitrary practices, but the routines that people often engage in, knowingly or unknowingly, during the normal course of their lives. Shaping the rituals in a way that leads to a content and healthy society, and to content and healthy people, is one purpose of Confucian philosophy.
Loyalty is the equivalent of filial piety on a different plane. It is particularly relevant for the social class to which most of Confucius' students belonged, because the only way for an ambitious young scholar to make his way in the Confucian Chinese world was to enter a ruler's civil service. Like filial piety, however, loyalty was often subverted by the autocratic regimes of China. Confucius had advocated a sensitivity to the realpolitik of the class relations in his time; he did not propose that "might makes right", but that a superior who had received the "Mandate of Heaven" (see below) should be obeyed because of his moral rectitude.
In later ages, however, emphasis was placed more on the obligations of the ruled to the ruler, and less on the ruler's obligations to the ruled.
Loyalty was also an extension of one's duties to friends, family, and spouse. Loyalty to one's family came first, then to one's spouse, then to one's ruler, and lastly to one's friends. Loyalty was considered one of the greater human virtues.
Confucius also realized that loyalty and filial piety can potentially conflict.
See main article: Filial piety. "Filial piety" is considered among the greatest of virtues and must be shown towards both the living and the dead (including even remote ancestors). The term "filial" (meaning "of a child") characterizes the respect that a child, originally a son, should show to his parents. This relationship was extended by analogy to a series of five relationships:
The Five Bonds: ruler to ruled, father to son, husband to wife, elder brother to younger brother, and friend to friend.
Specific duties were prescribed to each of the participants in these sets of relationships. Such duties were also extended to the dead, where the living stood as sons to their deceased family. This led to the veneration of ancestors. The only relationship where respect for elders wasn't stressed was the Friend to Friend relationship. In all other relationships, high reverence was held for elders.
The idea of filial piety influenced the Chinese legal system: a criminal would be punished more harshly if the culprit had committed the crime against a parent, while fathers often exercised enormous power over their children. A similar differentiation was applied to other relationships. Today filial piety is even written into law: people have a legal responsibility to provide for their elderly parents.
The main source of our knowledge of the importance of filial piety is the Classic of Filial Piety, a work attributed to Confucius and his son but almost certainly written in the 3rd century BCE. The Analects, the main source of the Confucianism of Confucius, actually has little to say on the matter of filial piety and some sources believe the concept was focused on by later thinkers as a response to Mohism.
Filial piety has continued to play a central role in Confucian thinking to the present day.
Relationships are central to Confucianism. Particular duties arise from one's particular situation in relation to others. The individual stands simultaneously in several different relationships with different people: as a junior in relation to parents and elders, and as a senior in relation to younger siblings, students, and others. While juniors are considered in Confucianism to owe their seniors reverence, seniors also have duties of benevolence and concern toward juniors. This theme of mutuality is prevalent in East Asian cultures even to this day.
Social harmony—the great goal of Confucianism—therefore results in part from every individual knowing his or her place in the social order, and playing his or her part well. When Duke Jing of Qi asked about government, by which he meant proper administration so as to bring social harmony, Confucius replied:
There is government, when the prince is prince, and the minister is minister; when the father is father, and the son is son. (Analects XII, 11, trans. Legge)
Mencius says: "When being a child, yearn for and love your parents; when growing mature, yearn for and love your sweetheart; when having wife and child(ren), yearn for and love your wife and child(ren); when being an official (or a staffer), yearn for and love your sovereign (and/or boss)."
See main article: Junzi. The term jūnzǐ is crucial to classical Confucianism. Confucianism exhorts all people to strive for the ideal of a "gentleman" or "perfect man". A succinct description of the "perfect man" is one who "combines the qualities of saint, scholar, and gentleman." The traditional English translation is masculine and is still frequently used. Elitism was bound up with the concept, and gentlemen were expected to act as moral guides to the rest of society.
They were to:
The great exemplar of the perfect gentleman is Confucius himself. Perhaps the tragedy of his life was that he was never awarded the high official position which he desired, from which he wished to demonstrate the general well-being that would ensue if humane persons ruled and administered the state.
The opposite of the Jūnzǐ was the Xiǎorén. The character 小 in this context means petty in mind and heart, narrowly self-interested, greedy, superficial, or materialistic.
See main article: Rectification of Names. Confucius believed that social disorder often stemmed from failure to perceive, understand, and deal with reality. Fundamentally, then, social disorder can stem from the failure to call things by their proper names, and his solution to this was Zhèngmíng. He gave an explanation of zhengming to one of his disciples.
Zi-lu said, "The ruler of Wei has been waiting for you, in order with you to administer the government. What will you consider the first thing to be done?"
The Master replied, "What is necessary to rectify names."
"So! indeed!" said Zi-lu. "You are wide of the mark! Why must there be such rectification?"
The Master said, "How uncultivated you are, Yu! A superior man, in regard to what he does not know, shows a cautious reserve.
If names be not correct, language is not in accordance with the truth of things.
If language be not in accordance with the truth of things, affairs cannot be carried on to success.
When affairs cannot be carried on to success, proprieties and music do not flourish.
When proprieties and music do not flourish, punishments will not be properly awarded.
When punishments are not properly awarded, the people do not know how to move hand or foot.
Therefore a superior man considers it necessary that the names he uses may be spoken appropriately, and also that what he speaks may be carried out appropriately. What the superior man requires is just that in his words there may be nothing incorrect."
(Analects XIII, 3, tr. Legge)
Chapter 22 of the Xunzi, "On the Rectification of Names", claims the ancient sage-kings chose names that directly corresponded with actualities, but later generations confused terminology, coined new nomenclature, and thus could no longer distinguish right from wrong.
To govern by virtue, let us compare it to the North Star: it stays in its place, while the myriad stars wait upon it. (Analects II, 1)

Another key Confucian concept is that in order to govern others one must first govern oneself. When developed sufficiently, the king's personal virtue spreads beneficent influence throughout the kingdom. This idea is developed further in the Great Learning, and is tightly linked with the Taoist concept of wu wei: the less the king does, the more gets done. By being the "calm center" around which the kingdom turns, the king allows everything to function smoothly and avoids having to tamper with the individual parts of the whole.
This idea may be traced back to early Chinese shamanistic beliefs, such as the king being the axle between the sky, human beings, and the Earth. Another complementary view is that this idea may have been used by ministers and counselors to deter aristocratic whims that would otherwise be to the detriment of the state's people.
In teaching, there should be no distinction of classes. (Analects XV, 39)

The main basis of his teachings was to seek knowledge, study, and become a better person.
Although Confucius claimed that he never invented anything but was only transmitting ancient knowledge (see Analects VII, 1), he did produce a number of new ideas. Many European and American admirers such as Voltaire and H. G. Creel point to the revolutionary idea of replacing nobility of blood with nobility of virtue. Jūnzǐ (君子, lit. "lord's child"), which originally signified the younger, non-inheriting, offspring of a noble, became, in Confucius' work, an epithet having much the same meaning and evolution as the English "gentleman". A virtuous plebeian who cultivates his qualities can be a "gentleman", while a shameless son of the king is only a "small man". That he admitted students of different classes as disciples is a clear demonstration that he fought against the feudal structures that defined pre-imperial Chinese society.
Another new idea, that of meritocracy, led to the introduction of the Imperial examination system in China. This system allowed anyone who passed an examination to become a government officer, a position which would bring wealth and honour to the whole family. The Chinese Imperial examination system seems to have been started in 165 BC, when certain candidates for public office were called to the Chinese capital for examination of their moral excellence by the emperor. Over the following centuries the system grew until finally almost anyone who wished to become an official had to prove his worth by passing written government examinations.
Confucius' achievement was the founding of a school that produced statesmen with a strong sense of patriotism and duty, known as Rujia. During the Warring States Period and the early Han Dynasty, China grew greatly and the need arose for a solid and centralized cadre of government officers able to read and write administrative papers. As a result, Confucianism was promoted by the emperor and the men its doctrines produced became an effective counter to the remaining feudal aristocrats who threatened the unity of the imperial state.
During the Han Dynasty, Confucianism developed from an ethical system into a political ideology used to legitimize the rule of the political elites. Most Chinese emperors used a mix of Legalism and Confucianism as their ruling doctrine, often with the latter embellishing the former. The practice of using the Confucian meritocracy to justify political actions continues in countries in the Sinosphere, including post-economic liberalization People's Republic of China, Chiang Kai-Shek's Republic of China, and modern Singapore.
The works of Confucius were translated into European languages through the agency of Jesuit scholars stationed in China. Matteo Ricci was among the very earliest to report on the thoughts of Confucius, and father Prospero Intorcetta wrote about the life and works of Confucius in Latin in 1687. Translations of Confucian texts influenced European thinkers of the period, particularly among the Deists and other philosophical groups of the Enlightenment who were interested in the integration of the system of morality of Confucius into Western civilization. Confucianism influenced Gottfried Leibniz, who was attracted to the philosophy because of its perceived similarity to his own. It is postulated that certain elements of Leibniz's philosophy, such as "simple substance" and "preestablished harmony", were borrowed from his interactions with Confucianism. The French philosopher Voltaire was also influenced by Confucius, seeing the concept of Confucian rationalism as an alternative to Christian dogma. He praised Confucian ethics and politics, portraying the sociopolitical hierarchy of China as a model for Europe.
From the late 17th century onwards a whole body of literature known as the Han Kitab developed amongst the Hui Muslims of China, who infused Islamic thought with Confucianism. The works of Liu Zhi in particular, such as the Tianfang Dianli (天方典禮), sought to harmonize Islam with not only Confucianism but also Daoism, and are considered to be among the crowning achievements of Chinese Muslim culture.
Important military and political figures in modern Chinese history continued to be influenced by Confucianism, such as the Muslim warlord Ma Fuxiang. The New Life Movement relied heavily on Confucianism. The Kuomintang party purged China's education system of western ideas, introducing Confucianism into the curriculum. Education came under the total control of the state, which meant, in effect, the Kuomintang party, via the Ministry of Education. Military and political classes on the Kuomintang's Three Principles of the People (三民主義) were added. Textbooks, exams, degrees and educational instructors were all controlled by the state, as were all universities.
Ever since the era of Confucius, various critiques of Confucianism have arisen, including Laozi's philosophy and Mozi's critique. Lu Xun also criticised Confucianism heavily for shaping Chinese people into the condition they had reached by the late Qing Dynasty: his criticisms are well portrayed in two of his works, A Madman's Diary and The True Story of Ah Q.
In modern times, new waves of criticism and vilification of Confucianism arose. The Taiping Rebellion, the May Fourth Movement and the Cultural Revolution were among the high points of these waves in China. Taiping rebels described many sages in Confucianism, as well as gods in Taoism and Buddhism, as mere legends. Marxists during the Cultural Revolution described Confucius as the general representative of the class of slave owners. Numerous interpretations of Confucianism were invented, many of which Confucianism itself actually opposes.
Confucianism has a related guiding principle called "He Er Bu Tong" (和而不同, peaceful while differing, or harmonious while diversified). Although people differ in opinions, interests, preferences, and backgrounds, they should first keep peace among themselves, and people should live in harmony with each other while maintaining their diversity. There are other aphorisms expressing Confucian ideas, e.g. "If what others say is right and you are truly at fault, change it; if not, be ever more careful of committing that kind of fault" (有則改之,無則加勉), and "Seek to emulate others' virtues, and reflect on your own weak points when you see the failings of others" (見賢思齊焉,見不賢而內自省).
Confucianism "largely defined the mainstream discourse on gender in China from the Han dynasty onward," and its strict, obligatory gender roles as a cornerstone of family, and thus, societal stability, continue to shape social life throughout East Asia. Confucians taught that a virtuous woman was supposed to uphold 'three subordinations': be subordinate to her father before marriage, to her husband after marriage, and to her son after her husband died. Men could remarry and have concubines, whereas women were supposed to uphold the virtue of chastity when they lost their husbands. Chaste widows were revered as heroes during the Ming and Qing periods, and were deemed so central to China's culture and the fate of all peoples that the Yongle Emperor distributed 10,000 copies of the Biographies of Exemplary Women (Lienü Zhuan) to various non-Chinese countries for their moral instruction. The book served as Confucianism's seminal textbook for Chinese women for two millennia, but cementing the "cult of chastity" as an exemplar of Chinese superiority also condemned many widows to lives of "poverty and loneliness."
However, recent reexaminations of Chinese gender roles suggest that Daoism and the yin-yang dichotomy played an even greater part in stifling female roles, and that many women flourished within Confucianism. During the Han dynasty period, the important Confucian text Lessons for Women (Nüjie), was written by Ban Zhao (45-114 CE): by a woman, for women.
She wrote the Nüjie ostensibly for her daughters, instructing them on how to live proper Confucian lives as wives and mothers. Although this is a relatively rare instance of a female Confucian voice, Ban Zhao almost entirely accepts the prevailing views concerning women's proper roles; they should be silent, hard-working, and compliant. She stresses the complementarity and equal importance of the male and female roles according to yin-yang theory, but she clearly accepts the dominance of the yang-male. Her only departure from the standard male versions of this orthodoxy is that she insists on the necessity of educating girls and women. We should not underestimate the significance of this point, as education was the bottom line qualification for being a junzi or "noble person,"...her example suggests that the Confucian prescription for a meaningful life as a woman was apparently not stifling for all women. Even some women of the literate elite, for whom Confucianism was quite explicitly the norm, were able to flourish by living their lives according to that model.
Ever since Europeans first encountered Confucianism, the issue of how Confucianism should be classified has been subject to debate. In the 16th and the 17th centuries, the earliest European arrivals in China, the Christian Jesuits, considered Confucianism to be an ethical system, not a religion, and one that was compatible with Christianity. The Jesuits, including Matteo Ricci, saw Chinese rituals as "civil rituals" that could co-exist alongside the spiritual rituals of Catholicism. By the early 18th century, this initial portrayal was rejected by the Dominicans and Franciscans, creating a dispute among Catholics in East Asia that was known as the "Rites Controversy". The Dominicans and Franciscans argued that ancestral worship was a form of pagan idolatry that was contradictory to the tenets of Christianity. This view was reinforced by Pope Benedict XIV, who ordered a ban on Chinese rituals.
This debate continues into the modern era. There is consensus among scholars that, whether or not it is religious, Confucianism is definitively non-theistic. Confucianism is humanistic, and does not involve a belief in the supernatural or in a personal god. On spirituality, Confucius said to Chi Lu, one of his students, that "You are not yet able to serve men, how can you serve spirits?" Attributes that are seen as religious—such as ancestor worship, ritual, and sacrifice—were advocated by Confucius as necessary for social harmony; however, these attributes can be traced to the traditional non-Confucian Chinese beliefs of Chinese folk religion, and are also practiced by Daoists and Chinese Buddhists. Scholars recognize that classification ultimately depends on how one defines religion. Using stricter definitions of religion, Confucianism has been described as a moral science or philosophy. But using a broader definition, such as Frederick Streng's characterization of religion as "a means of ultimate transformation", Confucianism could be described as a "sociopolitical doctrine having religious qualities." With the latter definition, Confucianism is religious, even if non-theistic, in the sense that it "performs some of the basic psycho-social functions of full-fledged religions", in the same way that non-theistic ideologies like Communism do.
Strictly speaking, there is no term in Chinese which directly corresponds to "Confucianism." Several different terms are used in different situations, several of which are of modern origin:
Three of these use the Chinese character 儒 rú, meaning "scholar". These names do not use the name "Confucius" at all, but instead center on the figure or ideal of the Confucian scholar; however, the suffixes of jiā, jiào, and xué carry different implications as to the nature of Confucianism itself.
Rújiā contains the character jiā, which literally means "house" or "family". In this context, it is more readily construed as meaning "school of thought", since it is also used to construct the names of philosophical schools contemporary with Confucianism: for example, the Chinese names for Legalism and Mohism end in jiā.
Rújiào and Kǒngjiào contain the Chinese character jiào, the noun "teach", used in such terms as "education" or "educator". The term, however, is notably used to construct the names of religions in Chinese: the terms for Islam, Judaism, Christianity, and other religions in Chinese all end with jiào.
Rúxué contains xué 'study'. The term is parallel to -ology in English, being used to construct the names of academic fields: the Chinese names of fields such as physics, chemistry, biology, political science, economics, and sociology all end in xué. |
Warfare between Europeans and Indians was common in the seventeenth century. In 1622, the Powhatan Confederacy nearly wiped out the struggling Jamestown colony. Frustrated at the continuing conflicts, Nathaniel Bacon and a group of vigilantes destroyed the Pamunkey Indians before leading an unsuccessful revolt against colonial authorities in 1676. Intermittent warfare also plagued early Dutch colonies in New York. In New England, Puritan forces annihilated the Pequots in 1636-1637, a campaign whose intensity seemed to foreshadow the future. Subsequent attacks inspired by Metacom (King Philip) against English settlements sparked a concerted response from the New England Confederation. Employing Indian auxiliaries and a scorched-earth policy, the colonists nearly exterminated the Narragansetts, Wampanoags, and Nipmucks in 1675-1676. A major Pueblo revolt also threatened Spanish-held New Mexico in 1680.
Indians were also a key factor in the imperial rivalries among France, Spain, and England. In King William’s (1689-1697), Queen Anne’s (1702-1713), and King George’s (1744-1748) wars, the French sponsored Abnaki and Mohawk raids against the more numerous English. Meanwhile, the English and their trading partners, the Chickasaws and often the Cherokees, battled the French and associated tribes for control of the lower Mississippi River valley and the Spanish in western Florida. More decisive was the French and Indian War (1754-1763). The French and their Indian allies dominated the conflict’s early stages, turning back several English columns in the north. Particularly serious was the near-annihilation of Gen. Edward Braddock’s force of thirteen hundred men outside of Fort Duquesne in 1755. But with English minister William Pitt infusing new life into the war effort, British regulars and provincial militias overwhelmed the French and absorbed all of Canada.
But eighteenth-century conflicts were not limited to the European wars for empire. In Virginia and the Carolinas, English-speaking colonists pushed aside the Tuscaroras, the Yamasees, and the Cherokees. The Natchez, Chickasaw, and Fox Indians resisted French domination, and the Apaches and Comanches fought against Spanish expansion into Texas. In 1763, an Ottawa chief, Pontiac, forged a powerful confederation against British expansion into the Old Northwest. Although his raids wreaked havoc upon the surrounding white settlements, the British victory in the French and Indian War combined with the Proclamation of 1763, which forbade settlement west of the Appalachian Mountains, soon eroded Pontiac's support.
Most of the Indians east of the Mississippi River now perceived the colonial pioneers as a greater threat than the British government. Thus northern tribes, especially those influenced by Mohawk chief Thayendanegea (Joseph Brant), generally sided with the Crown during the American War for Independence. In 1777, they joined the Tories and the British in the unsuccessful offensives of John Burgoyne and Barry St. Leger in upstate New York. Western Pennsylvania and New York became savage battlegrounds as the conflict spread to the Wyoming and Cherry valleys. Strong American forces finally penetrated the heart of Iroquois territory, leaving a wide swath of destruction in their wake.
In the Midwest, George Rogers Clark captured strategic Vincennes for the Americans, but British agents based at Detroit continued to sponsor Tory and Indian forays as far south as Kentucky. The Americans resumed the initiative in 1782, when Clark marched northwest into Shawnee and Delaware country, ransacking villages and inflicting several stinging defeats upon the Indians. To the south, the British backed resistance among the Cherokees, Chickasaws, Creeks, and Choctaws but quickly forgot their former allies following the signing of the Treaty of Paris (1783).
By setting the boundaries of the newly recognized United States at the Mississippi River and the Great Lakes, that treaty virtually ensured future conflicts between whites and resident tribes. In 1790, Miami chief Little Turtle routed several hundred men led by Josiah Harmar along the Maumee River. Arthur St. Clair’s column suffered an even more ignominious defeat on the Wabash River the following year; only in 1794 did Anthony Wayne gain revenge at the Battle of Fallen Timbers. Yet resistance to white expansion in the Old Northwest continued as a Shawnee chief, Tecumseh, molded a large Indian confederation based at Prophetstown. While Tecumseh was away seeking additional support, William Henry Harrison burned the village after a stalemate at the Battle of Tippecanoe in 1811.
Indian raids, often encouraged by the British, were influential in causing the United States to declare war on Great Britain in 1812. The British made Tecumseh a brigadier general and used Indian allies to help recapture Detroit and Fort Dearborn (Chicago). Several hundred American prisoners were killed following a skirmish at the River Raisin in early 1813. But Harrison pushed into Canada and won the Battle of the Thames, which saw the death of Tecumseh and the collapse of his confederation. In the Southeast, the Creeks gained a major triumph against American forces at Fort Mims, killing many of their prisoners in the process. Andrew Jackson led the counterthrust, winning victories at Tallushatchee and Talladega before crushing the Creeks at Horseshoe Bend in 1814.
Alaska and Florida were also the scenes of bitter conflicts. Native peoples strongly contested the Russian occupation of Alaska. The Aleuts were defeated during the eighteenth century, but the Russians found it impossible to prevent Tlingit harassment of their hunting parties and trading posts. Upon the Spanish cession of Florida, Washington began removing the territory’s tribes to lands west of the Mississippi River. But the Seminole Indians and runaway slaves refused to relocate, and the Second Seminole War saw fierce guerrilla-style actions from 1835 to 1842. Osceola, perhaps the greatest Seminole leader, was captured during peace talks in 1837, and nearly three thousand Seminoles were eventually removed. The Third Seminole War (1855-1858) stamped out all but a handful of the remaining members of the tribe.
In the United States, the removal policy met only sporadic armed resistance as whites pushed into the Mississippi River valley during the 1830s and 1840s. The Sac and Fox Indians were crushed in Black Hawk’s War (1831-1832), and tribes throughout the region seemed powerless in the face of the growing numbers of forts and military roads the whites were constructing. The acquisition of Texas and the Southwest during the 1840s, however, sparked a new series of Indian-white conflicts. In Texas, where such warfare had marred the independent republic’s brief history, the situation was especially volatile.
On the Pacific Coast, attacks against the native peoples accompanied the flood of immigrants to gold-laden California. Disease, malnutrition, and warfare combined with the poor lands set aside as reservations to reduce the Indian population of that state from 150,000 in 1845 to 35,000 in 1860. The army took the lead role in Oregon and Washington, using the Rogue River (1855-1856), Yakima (1855-1856), and Spokane (1858) wars to force several tribes onto reservations. Sporadic conflicts also plagued Arizona and New Mexico throughout the 1850s as the army struggled to establish its presence. On the southern plains, mounted warriors posed an even more formidable challenge to white expansion. Strikes against the Sioux, Cheyennes, Arapahos, Comanches, and Kiowas during the decade only hinted at the deadlier conflicts of years to come.
The Civil War saw the removal of the Regulars and an accompanying increase in the number and intensity of white-Indian conflicts. The influence of the Five Southern, or “Civilized” Tribes of the Indian Territory was sharply reduced. Seven Indian regiments served with Confederate troops at the Battle of Pea Ridge (1862). Defeat there and at Honey Springs (1863) dampened enthusiasm for the South, although tribal leaders like Stand Watie continued to support the Confederacy until the war’s end. James H. Carleton and Christopher (“Kit”) Carson conducted a ruthlessly effective campaign against the Navahos in New Mexico and Arizona. Disputes on the southern plains culminated in the Sand Creek massacre (1864), during which John M. Chivington’s Colorado volunteers slaughtered over two hundred of Black Kettle’s Cheyennes and Arapahos, many of whom had already attempted to come to terms with the government. In Minnesota, attacks by the Eastern Sioux prompted counterattacks by the volunteer forces of Henry H. Sibley, after which the tribes were removed to the Dakotas. The conflict became general when John Pope mounted a series of unsuccessful expeditions onto the plains in 1865.
Regular units, including four regiments of black troops, returned west following the Confederate collapse. Railroad expansion, new mining ventures, the destruction of the buffalo, and ever-increasing white demand for land exacerbated the centuries-old tensions. The mounted warriors of the Great Plains posed an especially thorny problem for an army plagued by a chronic shortage of cavalry and a government policy that demanded Indian removal on the cheap.
Winfield S. Hancock’s ineffectual campaign in 1867 merely highlighted the bitterness between whites and Indians on the southern plains. Using a series of converging columns, Philip Sheridan achieved more success in his winter campaigns of 1868-1869, but only with the Red River War of 1874-1875 were the tribes broken. Major battlefield encounters like George Armstrong Custer’s triumph at the Battle of the Washita (1868) had been rare; more telling was the army’s destruction of Indian lodges, horses, and food supplies, exemplified by Ranald Mackenzie’s slaughter of over a thousand Indian ponies following a skirmish at Palo Duro Canyon, Texas, in 1874.
To the north, the Sioux, Northern Cheyennes, and Arapahos had forced the army to abandon its Bozeman Trail forts in Red Cloud’s War (1867). But arable lands and rumors of gold in the Dakotas continued to attract white migration; the government opened a major new war in 1876. Initial failures against a loose Indian coalition, forged by leaders including Crazy Horse and Sitting Bull, culminated in the annihilation of five troops of Custer’s cavalry at the Little Bighorn. A series of army columns took the field that fall and again the following spring. By campaigning through much of the winter, harassing Indian villages, and winning battles like that at Wolf Mountain (1877), Nelson A. Miles proved particularly effective. The tribes had to sue for peace, and even Sitting Bull’s band returned from Canada to accept reservation life in 1881. Another outbreak among the Sioux and Northern Cheyennes, precipitated by government corruption, shrinking reservations, and the spread of the Ghost Dance, culminated in a grisly encounter at Wounded Knee (1890), in which casualties totaled over two hundred Indians and sixty-four soldiers.
Less spectacular but equally deadly were conflicts in the Pacific Northwest. In 1867-1868, George Crook defeated the Paiutes of northern California and southern Oregon. In a desperate effort to secure a new reservation on the tribal homelands, a Modoc chief assassinated Edward R. S. Canby during an abortive peace conference in 1873. Canby’s death (he was the only general ever killed by Indians) helped shatter President Ulysses S. Grant’s peace policy and resulted in the tribe’s defeat and removal. Refusing life on a government-selected reservation, Chief Joseph’s Nez Percés led the army on an epic seventeen-hundred-mile chase through Idaho, Wyoming, and Montana until checked by Miles just short of the Canadian border at Bear Paw Mountain (1877). Also unsuccessful was armed resistance among the Bannocks, Paiutes, Sheepeaters, and Utes in 1878-1879.
To the far southwest, Cochise, Victorio, and Geronimo led various Apache bands in resisting white and Hispanic encroachments, crossing and recrossing the border into Mexico with seeming impunity. Many an officer’s record was scarred as repeated treaties proved abortive. Only after lengthy campaigning, during which army columns frequently entered Mexico, were the Apaches forced to surrender in the mid-1880s.
The army remained wary of potential trouble as incidental violence continued. Yet, with the exception of another clash in 1973 during which protesters temporarily seized control of Wounded Knee, the major Indian-white conflicts in the United States had ended. Militarily, several trends had become apparent. New technology often gave the whites a temporary advantage. But this edge was not universal; Indian warriors carrying repeating weapons during the latter nineteenth century sometimes outgunned their army opponents, who were equipped with cheaper (but often more reliable) single-shot rifles and carbines. As the scene shifted from the eastern woodlands to the western plains, white armies found it increasingly difficult to initiate fights with their Indian rivals. To force action, army columns converged upon Indian villages from several directions. This dangerous tactic had worked well at the Battle of the Washita but could produce disastrous results when large numbers of tribesmen chose to stand and fight, as at the Little Bighorn.
Throughout the centuries of conflict, both sides had taken the wars to the enemy populace, and the conflicts had exacted a heavy toll among noncombatants. Whites had been particularly effective in exploiting tribal rivalries; indeed, Indian scouts and auxiliaries were often essential in defeating tribes deemed hostile by white governments. In the end, however, military force alone had not destroyed Indian resistance. Only in conjunction with railroad expansion, the destruction of the buffalo, increased numbers of non-Indian settlers, and the determination of successive governments to crush any challenge to their sovereignty had white armies overwhelmed the tribes.
The Reader’s Companion to American History. Eric Foner and John A. Garraty, Editors. Copyright © 1991 by Houghton Mifflin Harcourt Publishing Company. All rights reserved. |
Fermat's spiral is a special type of Archimedean spiral. Archimedean spirals are described by the equation r = a * (theta^(1/n)), where "r" is the radial distance, "theta" is the polar angle and "n" is a constant that alters how tightly the spiral is wrapped. When n = 2, r^2 = a^2 * theta, and the spiral is called Fermat's spiral. For any given positive value of theta, there are two values of "r": r = a * (theta^(1/2)) and r = -a * (theta^(1/2)). This results in a symmetrical spiral about the origin.
MATLAB is a software application developed by MathWorks for technical computing. Many scientists and engineers use MATLAB to perform data analysis and data visualisation. You can use MATLAB to plot Fermat's spiral.
- Skill level:
- Moderately Easy
Type "a = 2" in the Command window.
Type "theta = 0:(2*pi)/100:(10*pi)" to generate a range of values of "theta."
Type "r_pos = a * (theta.^(1/2))" to calculate the positive value of "r" for each value of "theta."
Type "r_neg = -a * (theta.^(1/2))" to calculate the negative value of "r" for each value of "theta."
Type "polar(theta,r_pos,'k-')" to plot the positive part of the spiral on polar coordinates in black.
Type "hold on, polar(theta,r_neg,'r-')" to plot the negative part of the spiral on the same polar coordinates in red.
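For readers without MATLAB, the same calculation can be sketched in Python with NumPy. This is an illustrative equivalent of the steps above, not part of the original tutorial; the variable names mirror the MATLAB commands.

```python
# Hypothetical Python/NumPy equivalent of the MATLAB steps above.
import numpy as np

a = 2.0
# Same range as the MATLAB example: 0 to 10*pi in steps of 2*pi/100.
theta = np.arange(0, 10 * np.pi + 1e-9, 2 * np.pi / 100)

r_pos = a * np.sqrt(theta)   # positive branch of Fermat's spiral
r_neg = -a * np.sqrt(theta)  # negative branch

# Convert to Cartesian coordinates (the equivalent of MATLAB's pol2cart).
x_pos, y_pos = r_pos * np.cos(theta), r_pos * np.sin(theta)
x_neg, y_neg = r_neg * np.cos(theta), r_neg * np.sin(theta)

# To see the spiral, uncomment the following (requires matplotlib):
# import matplotlib.pyplot as plt
# plt.plot(x_pos, y_pos, 'k-', x_neg, y_neg, 'r-')
# plt.axis('equal'); plt.show()
```

Note that the negative branch is the point reflection of the positive branch through the origin, which is why the full plot is symmetrical about the origin.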
Tips and warnings
- You can also plot Fermat's spiral on Cartesian coordinates instead of polar coordinates. Once you have calculated your values of "theta," "r_pos" and "r_neg," convert them to Cartesian coordinates using the "pol2cart" function, e.g. "[x_pos, y_pos] = pol2cart(theta,r_pos)." Then plot the points using the "plot" function, e.g. type "plot(x_pos, y_pos)." Repeat the same steps for the negative part of Fermat's spiral.
Water covers two-thirds of the surface of the Earth, but fresh water makes up only 0.002% of it. Producing one cup of coffee takes 140 liters of water. The water footprint of China is about 700 cubic meters per year per capita; only about 7% of its water footprint falls outside China. The USA's water footprint is 2,500 cubic meters per year per capita. India's water footprint is 980 cubic meters per year per capita, with just 2% of its total water footprint outside the borders of the country. The International Water Management Institute predicts that by 2025 in India alone, one in three people will live under “scarce water” conditions.
People use lots of water for drinking, cooking and washing but even more for producing things such as food, paper and cotton cloth. A water footprint is an indicator that looks at both the direct and indirect water use of a consumer or producer. The water footprint of an individual, community or business is defined as the total volume of freshwater that is used to produce the goods and services consumed by the individual or community or produced by the business.
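The definition above — total footprint as the sum of direct and indirect freshwater use — can be illustrated with a small calculation. All of the item names and volumes below are made-up placeholders for illustration, except the 140-liter coffee figure mentioned earlier; they are not official water-footprint data.

```python
# Hypothetical sketch of a daily water footprint as direct + indirect use.
# All numbers are illustrative placeholders, not real data.
direct_use_litres = 150           # e.g. drinking, cooking, washing
indirect_use_litres = {           # "virtual" water embedded in goods consumed
    "coffee_cup": 140,            # the article's figure for one cup of coffee
    "bread_slice": 40,            # placeholder value
}

daily_footprint = direct_use_litres + sum(indirect_use_litres.values())
print(daily_footprint)            # total daily water footprint in litres
```

The point of the indicator is that the indirect, "hidden" component usually dwarfs the water we see coming out of the tap.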
Ways to save water include:
- Faucet aerators, which reduce splashing while washing hands and dishes.
- Wastewater reuse or recycling systems, allowing reuse of gray water for watering gardens and recycling of wastewater through purification at a water treatment plant.
- Rainwater harvesting.
- High-efficiency clothes washers and weather-based irrigation controllers.
- Garden hose nozzles that shut off water when it is not being used, instead of letting a hose run.
- Low-flow taps in wash basins.
- Using waste water for the growth of plants and trees.
Drip irrigation is the most effective method, offering the ability to deliver water to plant roots with minimal losses. It is also increasingly affordable, especially for the home gardener and in light of rising water rates. There are cheaper methods similar to drip irrigation, such as the sprinkler system, in which water is sprinkled over the plants; this reduces the wastage of water and helps when there is a water shortage.
The goals of water conservation efforts include:
- Sustainability: to ensure availability for future generations, the withdrawal of water from an ecosystem should not exceed its natural replacement rate.
- Energy conservation: water pumping, delivery and wastewater treatment facilities consume a significant amount of energy; in some regions of the world, 15% of total electricity is consumed for water management.
- Habitat conservation: minimizing human water use helps to preserve freshwater habitats for local wildlife and migrating waterfowl.
These games teach valuable skills and have a high fun and educational rating.
Your child develops spatial reasoning skills while helping carry seed balls from a silo to a car.
Your child learns about singular and plural word families by matching the correct words in this game of memory.
Your child develops attention and civic-education skills by watching an inspirational fictional video about a poor couple who survive thanks to an unexpected invention.
Your child develops pattern recognition skills and learns the names of common foods by helping Cookie Monster complete food patterns in the check-out line.
Your child practices telling time by matching numbers on a digital clock to the correct wall clock. Your child helps make sure the airplanes are leaving at the correct time, a real world math concept.
Your child develops abstract thinking and counting skills by connecting the dots and solving puzzles.
Your child learns the flags of Spanish-speaking countries and practices memory skills in this matching game.
Your child will develop basics in reading books, sighting words and attention/listening skills by helping the Wonder Pets save a baby triceratops.
Your child develops an understanding of the different food groups and healthy eating by sorting different types of foods and adding them to the plate.
Your child develops exploration and curiosity skills as well as building narratives and object identification by playing a variety of games and being a creative contributor to the environment. |
Looking at the history of the chair is a fascinating way to cross-section the society out of which that chair came. The chair likely evolved in the Stone Age - Neolithic building sites studied by archaeologists have revealed bench-like sitting areas, perhaps used for rituals.
It's not certain when exactly the first seat with a backrest emerged, but we do know that these more supportive seats that were higher off the ground were reserved for the wealthy and powerful.
Early Egyptian Chairs
Some of the earliest examples we have of chairs as we know them today, with four legs, armrests and a back, were found in Egyptian tombs. These date back to around 2,700 BC. The most famous are the seat of Tutankhamun and, pictured below, the chair of Hetepheres, an Egyptian Queen of the 4th Dynasty.
Also found in the Egyptian tombs were early examples of stools - simpler seats reserved for the lower classes.
Chairs in Ancient Greece and Rome
In ancient Greece, chairs, stools, and benches were used by all levels of society, and are depicted or referenced in much surviving art and literature. Chairs were also used, especially by the powerful, in ancient Rome. Particularly elaborate, marble chairs were reserved for use by nobility or religious rulers.
Chairs in China
The first depictions of chairs in China are in sixth-century Buddhist murals, though the practice of actually using chairs was rare at the time. By the 12th century, chairs had become more widespread. Traditionally, however, the Chinese way was to sit in the lotus position or to kneel on a "sitting mat". This tradition of sitting close to the ground has led scholars to disagree about the roots of the chair in China. Some argue that the chair originally arrived there as a form of Buddhist monastic furniture. Today, Chinese people no longer usually sit on mats, unlike in Korea and Japan.
From the European Renaissance On
With the Renaissance in Europe, chairs became a common feature of households, for all those who could afford them. So they were no longer reserved only for the nobility or ecclesiastical rulers. Early European chairs tended to be solid and made of heavy woods. More graceful, lighter designs, like the still famous Louis XV chairs, were introduced in Paris in the 1700s.
In modern times, people have experimented with almost every imaginable form and material, making rocking chairs, reclining chairs, metal folding chairs, office chairs that swivel and move on wheels, chairs for outdoor use only, chairs made from moulded plastic and even chairs specifically for sunbathing.
Today the chair is more popular than ever before. Office seating, seats in cars, lounging sofas, bar stools and more all serve various purposes and are available in a wide range of unique designs. We might not always pay attention to chairs, but it's worth considering that every chair has a history and a story to tell about the context it came from and what was important to its designer, such as form or function. As society and its art and architecture have evolved, so too has the chair! |
Dealing With Epilepsy
When we think of epilepsy, seizure is the first thing that comes in our mind. Epilepsy is a disorder in the nervous system that strikes in all ages. This occurs when abnormal electric impulses are activated by neural cells in the brain transmitting incorrect signals to the body.
Partial seizure occurs when abnormal charges hit one area of the brain, while generalized seizure involves the whole brain. Seizures are also known as convulsions. A person having a seizure experiences jerking movements or loss of consciousness. Symptoms vary depending on the type of seizure involved. But seizures don't always mean that a person has epilepsy. They are sometimes caused by head injury or trauma. Further tests are needed to determine whether or not a person has epilepsy.
Seizures can last for seconds to several minutes. Sometimes seizures can run for hours but this only happens in rare cases.
Here are the types of seizures defined:
- Simple partial seizure – abnormal senses may occur even if you are conscious.
- Complex partial seizure – abnormal movements are experienced together with an alteration of consciousness.
- Absence seizure – also recognized as “petit mal”. It may begin in childhood and persist into adulthood. The person may look as if daydreaming, but staring spells lasting a few seconds can recur every day.
- Myoclonic seizure – this is the sudden jerking movement of the limbs.
- Atonic seizure – a person with this seizure unconsciously collapses to the ground.
- Tonic-clonic seizure – this is also identified as a “grand mal” seizure. This kind of seizure is what people know as convulsions. The effects are severe, causing stiff and rigid limbs, loss of breathing and consciousness, violent jerking, and tongue biting.
Secondary generalization can occur when partial seizures develop into generalized seizures. People who encounter more than one seizure are mostly diagnosed with epilepsy.
What Causes Epilepsy?
Epilepsy has no single known cause. The disorder sometimes runs in families, although few studies have been conducted to confirm that it is inherited. Factors like head trauma or injury, stroke, lack of oxygen during near drowning or birth, tumors, meningitis, prolonged convulsions during fever, and a history of epilepsy in the family increase the risk of developing epilepsy in the early or late stages of life.
How is Epilepsy Diagnosed?
The doctor will track any history of head injury or epilepsy in the family member. Detailed information about your seizures will be noted and the doctor might recommend some laboratory tests like EEG (Electroencephalogram) and MRI (Magnetic Resonance Imaging). EEG is a machine capable of measuring abnormal electrical impulses in the brain while MRI detects brain abnormalities like tumor in order to identify the cause of epilepsy.
Treatment for Epilepsy
Reducing seizures is the main goal of treatment for epileptic patients. Medications are available, but some have complicated side effects and may not work for all patients. Visiting a specialist can help in managing epilepsy. Different methods or therapies may be offered so you can find which one works best for you. Treatment ranges from surgical operations and neural stimulation to prescription drugs.
Natural remedies for epilepsy may also be beneficial. Traditional preparations like Passiflora and Scullcap, and homeopathic remedies like Cuprum metallicum and Cicuta virosa, have been used for centuries and tried and tested by users. They are said to relieve stress and help maintain normal hormone levels in the brain.
One that we would recommend is Epi-Still, as it is 100% natural, safe and proven compound remedy containing natural ingredients chosen for their beneficial effects on the health and functioning of the brain, neurons, and entire nervous system.
You can try other natural supplements but it is important to consult your doctor first before taking it. Remember to maintain a healthy lifestyle as much as possible to reduce stressors that might trigger epileptic symptoms.
If you encounter someone in the street, at school, or at work having a seizure, you can help by staying calm. Remove any sharp objects that may harm the person and put a pillow, or anything soft, under his or her head. Position the person lying on his or her left side. Do not restrain the person or put anything in his or her mouth, because there is a big chance you will be bitten. If the seizure continues for more than 5 minutes, call the emergency hotline immediately.
Homer was an ancient Greek epic poet, traditionally said to be the author of the Iliad and the Odyssey. He lived probably before 700 B.C.
Homer is the earliest known poet in European literature. His Iliad and Odyssey are among the greatest masterpieces ever written, and they set the standard by which all later epic poetry has been judged. They are heroic, swift-moving tales of adventure, with vividly drawn characters taken from ancient Greek legend. Homer's poetic style is a rare combination of simplicity, rapid movement, and loftiness of tone. It has seldom been equaled in any language.
The works of Homer occupied a position of authority in ancient Greece similar to that of the Bible in Christian societies. Homer's epics were regarded as the fountainhead of religion, morals, literature, and rhetoric, and they were studied wherever there were Greeks. The poet himself was venerated for his artistry and wisdom, and his poems were often memorized, quoted, and imitated. Plato called Homer "the best and most divine of poets".
No certain facts about Homer's life are available. Seven ancient cities claimed to be his birthplace. Of these, Chios and Smyrna (now Izmir) are the most likely. The date of Homer's lifetime was placed by ancient authorities anywhere from the 12th century B.C. to the 7th century B.C. Modern scholars generally place him in the 8th century B.C. According to tradition, Homer was blind, but there is no real proof of this. In fact, his expert handling of visual detail in the poems indicates the opposite. He is said to have died on the island of Ios, but there is no conclusive evidence.
The Epic Tradition
Homer relied on a wealth of traditional material for both style and subject matter. His stories were taken directly from Greek heroic legend and may even have been adapted from works by earlier poets. It is known, for example, that for many generations before Homer there was a class of professional epic poets, or minstrels, who composed and recited epic poems at the homes of aristocrats and even before the public.
These bards appear to have handed down their poems and techniques by word of mouth from Mycenaean times to that of Homer. For example, Homer's poems include reliable information about places and things that ceased to exist after the almost total destruction of Mycenaean civilization in about 1100 B.C. Some authorities believe that such knowledge of the Mycenaean period could have been preserved only through an oral epic tradition.
Another argument in favor of an epic tradition that preceded Homer is his language, which is a unique combination of three Greek dialects and contains many words that were out of use by his time. The language of Homer was never spoken in conversation but was an artificial literary language, apparently a development of the epic tradition. Homer's skillful use of the dactylic hexameter as a metrical form also suggests that it had been employed by earlier poets. He could scarcely have achieved such perfection in this meter if he had been the first poet to use it.
The Homeric Question
The lack of reliable information has led some scholars to deny Homer's existence. The most famous statement of this point of view was made by the German scholar Friedrich Wolf in 1795. Wolf asserted that the Iliad and the Odyssey in their present form were not composed by one man. Rather, he argued, they were the products of a group of men working under the orders of the Athenian tyrant Pisistratus in the 6th century B.C. According to Wolf, the two epics were pieced together from many shorter, earlier poems by various authors.
According to another theory the Iliad and the Odyssey were composed by two different poets. Modern scholars, however, tend to believe that a single poet wrote the bulk of the two epics and that only minor additions and changes were made by other writers. In support of this view, scholars point out the consistency of character development and the remarkable unity of structure in both epics. The similarities between the two poems are greater than the differences.
Influence of Homer
Despite Homer's indebtedness to tradition, he was in many ways an original writer. His portrayal of character, construction of plot, interpretation of events, and amplification of traditional material are believed to be his own contributions. The poems of Homer were meant to be heard, and although the art of writing was known in Greece long before 700 B.C., his style was especially suited for oral recitation. However, because of the great length of both the Iliad and the Odyssey, they were probably recited in their entirety only on special occasions.
Homer remains one of the most widely read authors in Western literature. His work greatly influenced the Latin writer Virgil, whose Aeneid was modeled on both the Iliad and the Odyssey. Through Virgil, Homer indirectly influenced writers of Renaissance epics. Perhaps the most famous English translations of Homer are those by George Chapman in the 17th century and by Alexander Pope in the 18th century. |
This week on the Physics Central Podcast, I talk with physicist Dan Stamper-Kurn about making the smallest measurement of a force ever recorded. He and his group (including lead author Sydney Schreppler) applied a force to a cloud of 1200 atoms, using a laser. Their measurement came out to 40 yoctonewtons: that's 40 x 10^-24 newtons (if you drop an apple from a third story window, it hits the ground with about 1 newton of force).
The reason this measurement is significant is because it gets to within a factor of 4 of the standard quantum limit, or SQL. This is a natural limit to how precisely scientists can measure certain variables. (The proof for this is in the Heisenberg Uncertainty Principle). The limit arises through various means, but scientists reach it when the system itself has an uncertainty greater than the measurement. In many cases, the observer imposes this limit: for example, if a scientist uses photons to study a single atom, the photons may start to influence the motion of the atom. So at some point the scientist can't discern the natural motion of the atom from the motion imposed on it by the photons.
Reaching this limit is important for many experiments, including LIGO, the Laser Interferometer Gravitational Wave Observatory. LIGO is searching for ripples in space time, known as gravitational waves. When a gravitational wave passes by, it may stretch or contract space itself. A distance of 1 meter may suddenly be shorter by something like 10^-21 meters. LIGO scientists want to measure these changes, but they are bumping up against the standard quantum limit. Nergis Mavalvala also chats on the podcast about how experiments like the one by the Stamper-Kurn group will help LIGO anticipate challenges that may arise as they approach the standard quantum limit.
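To get a feel for these scales, here is a quick back-of-the-envelope check in Python. The 4 km LIGO arm length is an assumption added here for illustration; the other numbers come from the text above.

```python
# Compare the measured force with an everyday force, and estimate the
# length change LIGO must detect. The 4 km arm length is an assumption
# added for illustration; the other figures come from the text.
measured_force = 40e-24      # 40 yoctonewtons, expressed in newtons
apple_force = 1.0            # roughly 1 newton, as in the text

ratio = apple_force / measured_force
print(f"{ratio:.1e}")        # the apple's force is ~2.5e22 times larger

strain = 1e-21               # fractional stretch from a passing wave
arm_length_m = 4000          # assumed LIGO arm length (4 km)
delta_l = strain * arm_length_m
print(f"{delta_l:.1e} m")    # ~4e-18 m of absolute length change
```

Seen this way, both measurements are chasing effects some twenty orders of magnitude below everyday experience, which is why the standard quantum limit becomes the dominant obstacle.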
Participating teachers post reflections and share ideas in the Forum.
TAH Workshop and Reading Reflection Guideline Questions
- Write about one aspect of the workshop or readings that has affected your understanding of historical thinking skills.
- Write about one aspect of the workshop that has affected your understanding of how to locate, evaluate, and use primary sources.
- Write about three historical topics that you found to be the most interesting or enlightening this week.
- Write about one aspect of the workshop readings that has affected your understanding of the past.
- How has your thinking on these topics changed?
- How might what you learned from the workshops affect your teaching? Consider the historical content of your teaching as well as methods and strategies for teaching.
- Write about one aspect of the film presentation that has affected your understanding of the past.
- How has your thinking on the topic changed?
- How might what you have learned affect your teaching of this topic?
- How might what you have learned affect your use of film in the classroom?
- What were the central themes of this lecture?
- How did the lecture affect your understanding of the past?
- How has your thinking on these topics changed or grown?
- How might you incorporate the themes of this lecture with your teaching?
- What are the goals or themes of this book?
- Describe the sources that the historian used to write this book. Whose voice(s) are being heard through these sources? Why do you think the historian chose to present these voices/tell this story this way?
- How is the story similar to or different from what is presented in the history textbook your students use? Does it change your understanding of this historical period or of historical events? |
Prehistoric Trackways National Monument
Rewriting Scientific Understanding
of Permian Footprints
Track your way to the Permian Period, 280 million years ago—a time when Las Cruces, New Mexico was located near the equator. It was a tropical coastal environment next to the inland Hueco Sea. The vegetation was thick with ancient conifer trees. Fern-like plants grew densely from the ground. Large amphibians and reptiles were on the top of the food chain. Smaller amphibians and reptiles fought for survival. Various insects made their homes amongst the swampy landscape. This was tens of millions of years before the dinosaurs and hundreds of millions of years before humans.
At Prehistoric Trackways National Monument, this ancient world has been preserved as tracks, traces, and various sedimentary features in the red mudstones of the Hueco Group. Hundreds of fossil sites in and around the approximately 5,280 acre Monument preserve different parts of this ancient ecosystem. In particular, the Monument includes a major deposit of Permian-aged fossilized footprint megatrackways.
The trackways at the Prehistoric Trackways National Monument rewrote scientific understanding of Permian footprints. They represent an instant in time almost 300 million years ago, and you can look at it and understand how animals were behaving.
—Dr. Spencer Lucas, Curator of Paleontology,
New Mexico Museum of Natural History and Science
Prehistoric Trackways National Monument (PTNM) was the 100th national monument established in the United States. It was established in 2009 to conserve, protect, and enhance the unique and nationally important paleontological, scientific, educational, scenic, and recreational resources and values of the Robledo Mountains in southern New Mexico. Managed by the Bureau of Land Management (BLM), Prehistoric Trackways National Monument is part of the National Landscape Conservation System (NLCS), which is managed for preservation, public enjoyment, and scientific study.
The Story of Prehistoric Trackways National Monument
In the mid-1980s, amateur paleontologist Jerry MacDonald discovered the trackways that would become Prehistoric Trackways National Monument. Photograph courtesy McKinney Briske (Prehistoric Trackways National Monument).
In the mid-1980s Jerry MacDonald, an amateur paleontologist, brought national recognition to the Robledo Mountains when he found intact, Permian-aged fossilized trackways in the Las Cruces area. Under the supervision of the Smithsonian Institution National Museum of Natural History and the Carnegie Museum of Natural History, MacDonald excavated thousands of specimens from these trackways from what is now known as the Discovery Site. The majority of the over 2500 slabs removed in the excavation comprises "The Jerry MacDonald Paleozoic Trackways Collection" at the New Mexico Museum of Natural History and Science in Albuquerque, New Mexico.
Further research conducted by scientists from all over the world confirms that the Robledo trackways represent one of the most important Late Paleozoic fossil records in the world. A 2010 scientific survey, headed by the New Mexico Museum of Natural History and Science, located over 150 fossil sites in and around the Monument. All of these sites date to the same time period—280 million years ago during the Permian. These sites consist of tracks and imprints made by reptiles such as Dimetrodon, amphibians, fish, arachnids, and insects, along with marine fossils, plant fossils, and petrified wood, as well as rare eurypterid traces shown below.
Eurypterids swimming in freshwater pools were likely responsible for these interesting traces, identified as Palmichnium. A reconstruction of a eurypterid making the traces is illustrated below the slab. Eurypterids are featured on the 2013 National Fossil Day Artwork. New Mexico Museum of Natural History and Science photograph and diagram, courtesy McKinney Briske (Prehistoric Trackways National Monument).
The trace fossils preserved in Prehistoric Trackways National Monument provide a window into a single instant in time, hundreds of millions of years in the past. Now that the site is under the management and protection of the BLM, scientists and the public will be able to study, enjoy, and learn about these fossils for generations to come.
Ranger-led hikes are one way to experience Prehistoric Trackways National Monument. Photograph courtesy McKinney Briske (Prehistoric Trackways National Monument).
The BLM promotes the excellent educational opportunities of Prehistoric Trackways National Monument. Park rangers lead guided hikes for the public approximately twice a month on Saturdays (weather permitting), regularly give programs for local community groups and partnering agencies, host an annual K-5 paleontology day camp, visit classrooms, and host field trips. BLM has partnered with the New Mexico Museum of Natural History and Science and New Mexico State University STEM and Creative Media Institute to create roving school-kits in which students work with paleontologists, through a media component, to solve some of the mysteries of this ancient fossil environment. In 2011, videos were created to document the research of various scientists working together and with BLM staff. Filming was a partnership with faculty and students from the Creative Media Institute at New Mexico State University, Illinois State Geological Survey–Prairie Research Institute, and Royal Holloway, part of the University of London. View the videos here. This project was made possible in part with funds provided through a BLM NLCS grant.
BLM has partnered with the City of Las Cruces Museum of Nature and Science to feature the Monument's fossils in their Permian Trackways Exhibit, including display of a 30-foot-long trackway.
To celebrate National Fossil Day 2013, Monument staff, along with the National Park Service, U.S. Forest Service, Las Cruces Museum of Nature and Science, Asombro Institute for Science Education, and Mesilla Valley Bosque State Park, will host hands-on, interactive programs at a local school where students rotate through stations and learn about different fossils from various time periods and locations that record the natural history of southern New Mexico.
More information about Prehistoric Trackways National Monument, including location and upcoming events can be found at the Bureau of Land Management's Prehistoric Trackways National Monument website.
Article and photographs provided by McKinney Briske (PTNM Park Ranger).
2013 Paleozoic Partner feature articles:
- January: Fossils of the 2013 National Fossil Day Artwork
- February: Paleontological Research Institution, Museum of the Earth
- March: Falls of the Ohio State Park
- April: Field Museum of Natural History, Mazon Creek Collection
- May: Prehistoric Trackways National Monument
- June: Cincinnati Museum Center
- July: Glacier Bay National Park & Preserve
- August: University of Michigan Museum of Paleontology, Silica Formation Fossils
- September: Yale Peabody Museum of Natural History, Beecher's Trilobite Bed
- October: Guadalupe Mountains National Park
- November: Utah Geological Survey, Millard County Cambrian Fossils
- December: Denver Museum of Nature and Science, High-Altitude Mass Extinction
The term "parasomnia" is used in reference to a wide range of disruptive sleep-related events. These behaviors and experiences generally occur during sleep, and in most cases are infrequent and mild. At times, however, they may occur often enough or become so bothersome that medical attention is indicated.
What Are Disorders of Arousal?
The most common of the parasomnias are "disorders of arousal," which include confusional arousals, sleepwalking (somnambulism), and sleep terrors. Experts believe that the various arousal disorders are related and share some characteristics. Essentially, these occur when a person is in a mixed state, both asleep and awake, and often emerging from the deepest state of non-dreaming sleep. The sleeper is awake enough to act out complex behaviors, but is still asleep and not aware of or able to remember these activities.
Parasomnias are very common in young children, and do not usually indicate significant psychiatric or psychological problems. Such disorders tend to run in families, and can be more severe when a child is overly tired, has a fever, or is taking certain medications. They may occur during periods of stress, and may increase or decrease with "good" and "bad" weeks.
Confusional Arousals: Confusional arousals are most common in infants and toddlers, but are also seen in adults. These episodes may begin with crying and thrashing around in bed. The sleeper appears to be awake, and seems to be confused and upset, but resists attempts to comfort or console. It is difficult to awaken a person in the grips of a parasomnia episode. A confusional arousal can last up to half an hour, and usually ends when the agitation subsides and the sleeper awakens briefly, wanting to return to sleep.
Sleepwalking: This disorder is commonly seen in older children, and can range from simply getting out of bed and walking around the bedroom to prolonged and complex actions, such as going to another part of the house or even outdoors. A sleepwalker will sometimes speak, but is unlikely to be clearly understood. Sometimes complicated behaviors take place during a sleepwalking episode (such as rearranging furniture), but these activities are usually not purposeful. While injuries during sleepwalking are uncommon, sleepwalkers may put themselves in harm's way -- such as walking outside in bedclothes during the winter. Simple precautions enhance safety.
In most cases, no treatment is necessary. The sleepwalker and family can be assured that these events rarely indicate any serious underlying medical or psychiatric problem. In children, the number of events tends to decrease with age, although they can occasionally persist into adulthood or even originate during the adult years.
Sleep-related Eating: A rare variation of sleepwalking is "sleep-related eating." This disorder manifests itself as recurrent episodes of eating during sleep, without conscious awareness. Sleep-related eating can occur often enough to result in significant weight gain. Although it can affect both sexes and all ages, it is most common in young women.
Sleep Terrors: These are the most extreme and dramatic of the arousal disorders, and the most distressing to witness. A sleep terror episode often begins with a "blood-curdling" scream or shout, and can produce signs that suggest extreme terror, such as dilated pupils, rapid breathing, racing heart, sweating, and extreme agitation. During a sleep terror episode, the victim may bolt out of bed and run around the room or house. In the course of the frenzied event, sleepers can hurt themselves or others.
As disturbing and frightening as these episodes are to an observer, the sleeper usually has no conscious awareness of the event, and generally does not remember it upon awakening.
Unlike typical nightmares or bad dreams, sleep terror episodes are not usually associated with vivid dream images that are recalled after awakening.
How Are Arousal Disorders Evaluated?
In typical childhood occurrences of arousal disorders, medical evaluation is unlikely to be needed. You should, however, contact a healthcare professional if a child's disturbed sleep causes:
1. Potentially dangerous behavior, such as that which is violent or could cause injury;
2. Extreme disturbance of other household members;
3. Excessive sleepiness during the day. In these cases, formal evaluation at a sleep center is warranted.
Because disorders of arousal are relatively uncommon after childhood, episodes in adults may be triggered by other conditions, such as sleep apnea, heartburn, or periodic limb movements during sleep. A sleep specialist should evaluate the patient's behaviors and medical history.
Are There Treatments for Arousal Disorders?
Simple precautions can add a degree of security for the individual and the family: clearing the bedroom of obstructions, securing windows, sleeping on the ground floor, and installing locks or alarms on windows and doors.
In cases severe enough that the sleep disorder leads to injury or involves violence, excessive eating, or disturbance to others, treatment may be warranted. Therapy can include medical intervention with prescription drugs, or behavior modification through hypnosis or relaxation/mental imagery.
What Are Some Other Parasomnias?
While the great majority of sleep-related complex behaviors and experiences are due to disorders of arousal, several other conditions can be frightening or disturbing to those who experience them.
Hypnagogic Hallucinations and Sleep Paralysis: Hypnagogic hallucinations are episodes of dreaming while awake, usually just before falling asleep. These dreams can be frightening because the setting reflects reality (for example, the bedroom), and the content of the dream is often threatening.
Sleep paralysis is the experience of waking up -- usually following a dream -- with a feeling that the muscles of the body (except those used to breathe and move the eyes) are paralyzed. Hypnagogic hallucinations and sleep paralysis may occur together. They are common in people with narcolepsy, but can also affect others, particularly individuals who are sleep-deprived. While they can be terrifying, these events are not physically harmful.
Nocturnal Seizures: These seizures, which occur only during sleep, can cause the victim to cry, scream, walk or run about, curse, or fall out of bed. Like other seizures they are usually treated with medication.
Rapid Eye Movement (REM) Sleep Behavior Disorder: All body muscles -- except those used in breathing -- are normally paralyzed during REM sleep. In some people, most commonly older men, this paralysis is incomplete or absent, allowing dreams to be "acted out." Such dream-related behavior can be violent and can result in injury to the victim or bed partner. In contrast to those who experience sleep terrors, the victim will recall vivid dreams. REM sleep behavior disorder can be controlled with medication.
Sleep Starts: Most people have experienced the common "motor" sleep start -- a sudden, often violent jerk of the entire body upon falling asleep. Other forms of sleep starts can occur just as sleep begins. A "visual" sleep start is a sensation of blinding light coming from inside the eyes or head. An "auditory" sleep start is a loud snapping noise that seems to come from inside the head. Such occurrences, while they can be frightening, are harmless.
Teeth grinding (bruxism): Grinding of teeth during sleep is a very common occurrence, and little evidence suggests that teeth grinding is associated with any significant medical or psychological problems. In severe cases, mouth devices can help reduce the risk of dental injury.
Rhythmic-movement Disorder: This condition, seen most frequently in young children, can also occur in adults. It takes the form of recurrent head banging, head rolling and body rocking. The individual may also moan or hum. These activities can occur just before sleep begins, or during sleep. Medical or psychological problems are unlikely to be associated with rhythmic movement disorder. Behavioral treatments may be effective in severe cases.
Sleep Talking (somniloquy): Sleep talking is a normal phenomenon and is of no medical or psychological importance.
When Do I Ask a Healthcare Professional's Help?
Since most of these sleep-related behaviors are due to disorders of arousal -- which are not medically significant -- medical evaluation and treatment is often not necessary. Medical attention should be considered, however, if the parasomnia behaviors: 1) are violent or may cause injury; 2) are disturbing to other household members; or 3) result in excessive daytime sleepiness.
Where Do I Seek Assistance?
Minor sleep problems can be handled by a primary care professional, often with a telephone consultation with a sleep medicine specialist experienced with these conditions. Due to the complex nature of some parasomnias, however, proper diagnosis requires expert clinical evaluation and sleep laboratory monitoring of many body functions during sleep. A sleep specialist with experience in such cases should direct these evaluations.
In most cases of bothersome parasomnias, a specific cause can be identified and effectively treated.
Courtesy of the American Academy of Sleep Medicine (http://www.aasmnet.org)
In general it is true that the younger people start learning a language, the higher their level of proficiency in that language will ultimately be. People who start acquiring a new language after the age of twelve, therefore, usually do not reach the same level of proficiency in that language as native speakers do. According to the critical period hypothesis this is due to maturational changes in the brain before puberty, which make people less and less sensitive to language input. Because of this reduced sensitivity, a native-like level of proficiency should not be attainable after puberty. This hypothesis, applied to the domain of syntax, is the basis of the first research question in this dissertation:
- Are there any late second language learners who fall within the native speaker range in their command of grammatical constructions that are known to be very difficult for second language learners and which can only be acquired on the basis of the input?
This dissertation also contains an investigation into the relationship between a native-like level of proficiency (if attainable at all) on the one hand and the typological distance between the language pairs involved and the background characteristics of the participants on the other hand. This is expressed in the following research questions:
- How is the level attained in L2 grammar after the age of twelve related to the typological distance between the L1 and the L2?
- What are the input and background characteristics of late learners who perform within the native speaker range (if they exist)?
For pronunciation, there are a number of previous studies that have identified second language learners who could not be distinguished from native speakers (see, e.g., Bongaerts, 1999). For morphosyntax, results have been less clear and more controversial (compare, for example, Coppieters, 1987; Birdsong, 1992; Hyltenstam, 1992; Ioup et al., 1994; White & Genesee, 1996). Moreover, there were methodological problems with many of these studies, and little attention had been paid to the role of the mother tongue. In the study presented in this dissertation, these problems were addressed and the relation between proficiency level at the end state and differences between the L2 and the first languages involved was systematically investigated.
In this dissertation, a study is presented in which 43 native speakers of German, French and Turkish participated, who arrived in the Netherlands after the age of twelve and who were highly proficient in Dutch. Their performance on two grammar tests was compared to the performance on the same tests of (highly educated) native speakers of Dutch.
To assess the (implicit) grammatical knowledge of these participants, their command of dummy subject constructions in Dutch was tested. In these constructions the logical subject is not in its normal syntactic position for semantic or pragmatic reasons. Instead, this position is occupied by het, er or 0.
In our study, we distinguish three types of dummy subject constructions:
- (active) sentences with er or 0 in which the (logical) subject is a noun phrase (DP) (DP-type)
- active sentences with er, het or 0 and a sentential (logical) subject (AS-type)
- passive sentences with er, het or 0 and a sentential (logical) subject (PS-type)
The native speakers of Dutch in this study revealed a general preference pattern for each type (a preference for het, er and/or 0). This pattern is disturbed by certain factors. Therefore, each type consists of two or three categories with different judgement patterns. Examples are presented in (1)–(6):
(1) Men beseft niet altijd dat 0 een pinguïn een vogel is. (DP-type, general pattern) “One does not always realise that a penguin is a bird.”
(2) Ik vind het vervelend dat er boven een raam open staat. (DP-type, non-specific subject in intransitive sentence) “It bothers me that there is a window open upstairs.”
(3) Meestal valt het niet mee om kaartjes voor een concert te krijgen. (AS-type, general pattern) “Usually it is not easy to get tickets for a concert.”
(4) Nu schiet 0 mij ineens te binnen dat ik nog boodschappen moet doen. (AS-type, change of state) “Now it suddenly occurs to me that I still have to go out shopping.”
(5) In de krant wordt 0 beweerd dat hij dronken achter het stuur gezeten heeft. (PS-type, general pattern) “It is claimed in the newspaper that he was drunk while he was driving.”
(6) Door haar vrienden wordt het bewonderd dat ze ook in moeilijke tijden vrolijk blijft. (PS-type, dummy object in equivalent active sentence) “Her friends admire her for remaining cheerful, even in difficult times.”
There were two important reasons for choosing dummy subject constructions. First, they are known to be very difficult to acquire for second language learners. Second, they are hardly covered in Dutch grammars and L2 text books. This means that learners, having no access to explicitly formulated rules, can only acquire these constructions on the basis of processing language input.
To test the participants' command of dummy subject constructions in Dutch, two tasks were used in this study: a sentence imitation task and a sentence preference task. In the sentence imitation task, participants had to repeat orally presented sentences literally. It turns out that participants often unconsciously change elements that are phonologically non-salient and ungrammatical (from the point of view of the participant). In the sentence preference task, participants had to indicate on a scale which sentence of a minimal pair they preferred. We also gave all participants a questionnaire with questions about background characteristics, such as age of arrival in the Netherlands, self-reported proficiency in various languages, level of education, and questions about usage of Dutch and the L1.
The results on the tasks described above show that there are second language learners in each L1 group who have reached a native level in L2 grammar after the age of twelve. On the sentence preference task there were eight second language learners who performed within the native speaker range: three native speakers of German, four native speakers of French and one native speaker of Turkish. For the sentence imitation task there were eleven second language learners who performed within the native speaker range: seven native speakers of German, three native speakers of French and one native speaker of Turkish. As can be seen from these results, the role of the typological distance between the L1 and Dutch seemed to be greater for the sentence imitation task than for the sentence preference task.
A comparison of the learners who fell within the native speakers range on the sentence preference task according to our (strict) criteria with the other second language learners suggested that the role of factors such as input, attending Dutch classes and age of arrival (after the age of twelve) were rather limited. At the same time, there did seem to be a meaningful relation with level of education, proficiency in some other language and pleasure in learning languages. In addition, it appeared that many participants within the native speaker range had a linguistic background.
On the basis of these results it was concluded that reaching a native level after the age of twelve is possible for constructions that are difficult to learn and for which no explicit knowledge is available. The results thus falsify the critical period hypothesis. We also established that reaching this level is even possible for second language learners with an L1 which is very different from the L2 (both typologically and with respect to the constructions investigated). It should be noted, though, that the people who reach a native level constitute only a small percentage of second language learners. One should, therefore, exercise caution and not have unrealistic expectations for the majority of second language learners.
Finally, it was argued that the results with respect to the background characteristics of the second language learners suggest that factors on which learners can exert most influence seem to play a rather limited role, while something like language aptitude or language awareness seems to play a more important role. It seems plausible that people with higher aptitude or language awareness should be better able to notice and process details in the form of the L2 input than average L2 learners. This might have contributed to their greater success in acquiring difficult constructions that are phonologically non-salient and do not contribute much to the meaning of a sentence, as is the case for dummy subject constructions.
SAMPLING DESIGN: PROBABILITY SAMPLING AND NON-PROBABILITY SAMPLING
Muhammad Bilal (R.No. 06), Ali Hussnain Syed (R.No. 18), Abbas Ali (R.No. 31)
A population is the set of data of all possible measurements (or observations) of individuals or items, e.g., the heights of all students in a junior college, or the lengths of life of all the light bulbs produced by a manufacturer. A sample is a set of data chosen from a population and is a subset of the population. A sampling unit is an individual member of a sample.
Definition of Sampling: Measuring a small portion of something and then making a general statement about the whole thing. Process of selecting a number of units for a study in such a way that the units represent the larger group from which they are selected.
Why We Need Sampling (Purposes and Advantages of Sampling): Sampling makes possible the study of a large population with different characteristics. Sampling is for economy. Sampling is for speed. Sampling is for accuracy. Sampling saves the sources of data from being all consumed.
SAMPLING DESIGN
1. What is the target population? The target population is the aggregation of elements (members of the population) from which the sample is actually selected.
2. What are the parameters of interest? Parameters are summary descriptions of a given variable in a population.
3. What is the sampling frame? The sampling frame is the list of elements from which the sample is actually drawn: a complete and correct list of population members only.
4. What is the appropriate sampling method? Probability or non-probability sampling.
5. What size sample is needed? There are no fixed rules in determining the size of a sample needed, but there are guidelines that should be observed. When the population is more or less homogeneous and only the typical, normal, or average is desired to be known, a smaller sample is enough; however, if differences are desired to be known, a larger sample is needed. When the population is more or less heterogeneous and only the typical, normal, or average is desired to be known, a larger sample is needed; however, if only the differences are desired to be known, a smaller sample is sufficient.
The size of a sample varies inversely with the size of the population: a larger proportion is required of a smaller population, and a smaller proportion may do for a bigger population. For greater accuracy and reliability of results, a greater sample is desirable. In biological and chemical experiments, the use of a few persons is more desirable to determine the reactions of humans. When subjects are likely to be destroyed during the experiment, it is more feasible to use non-humans.
General Types of Sampling
1. Probability sampling
2. Non-probability sampling
PROBABILITY SAMPLING The sample is a proportion (a certain percent) of the population and such sample is selected from the population by means of some systematic way in which every element of the population has a chance of being included in the sample. Randomization is a feature of the selection process rather than an assumption about the structure of the population. More complex, time consuming and more costly
Non-probability sampling The sample is not a proportion of the population and there is no system in selecting the sample. The selection depends upon the situation. No assurance is given that each item has a chance of being included as a sample There is an assumption that there is an even distribution of characteristics within the population, believing that any sample would be representative.
A. PURE RANDOM SAMPLING This type of sampling is one in which every one in the population of the inquiry has an equal chance of being selected to be included in the sample. Also called the lottery or raffle type of sampling. This may be used if the population has no differentiated levels, sections, or classes. Done with or without replacement
The main advantage of this technique of sampling is that it is easy to understand and easy to apply. The disadvantage is that it is hard to use with too large a population because of the difficulty encountered in writing the names of all the persons involved.
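The lottery selection described above can be sketched in a few lines of Python. The population list, sample size, and seed here are purely illustrative, not taken from any real survey.

```python
import random

# Illustrative population: 30 hypothetical student names
population = [f"Student-{i}" for i in range(1, 31)]

# Simple random sampling WITHOUT replacement: every member has an
# equal chance of being selected (the "lottery" or "raffle" method).
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=6)

# Sampling WITH replacement would use random.choices instead,
# allowing the same member to be drawn more than once.
sample_with_replacement = random.choices(population, k=6)

print(sample)
```

The same idea scales to any list-like sampling frame, which is exactly why the technique becomes unwieldy on paper but stays trivial in code.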
B. SYSTEMATIC SAMPLING A technique of sampling in which every nth name in a list may be selected to be included in a sample (similar to the old system of counting off). Also called interval sampling: there is a gap, or interval, between each selected unit in the sample. Used when the subjects or respondents in the study are arrayed or arranged in some systematic or logical manner, such as alphabetical arrangement or geographical placement from north to south.
The main advantage of systematic sampling is that it is more convenient, faster, and more economical. The disadvantage is that the sample becomes biased if the persons in the list belong to a class by themselves, whereas the investigation requires that all sectors of the population be involved.
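The interval selection can be sketched as follows; the sampling frame of 100 names and the desired sample size are made up for illustration.

```python
import random

def systematic_sample(frame, sample_size):
    """Select every k-th element from an ordered sampling frame,
    where k (the sampling interval) = population size // sample size."""
    k = len(frame) // sample_size
    # Start at a random point within the first interval so the
    # selection is not always anchored at the very first element.
    start = random.randrange(k)
    return frame[start::k][:sample_size]

# Illustrative frame: an alphabetically ordered list of 100 names
frame = [f"Name-{i:02d}" for i in range(1, 101)]
sample = systematic_sample(frame, 10)
print(sample)
```

Note that the bias warning above applies directly: if the frame has a periodic pattern that lines up with the interval k, every draw lands on the same class of units.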
C. STRATIFIED SAMPLING The process of randomly selecting samples from the different strata of the population used in the study. Its advantage is that it contributes much to the representativeness of the sample.
D. CLUSTER SAMPLING Also called multistage cluster sampling. Used when the population is so big or the geographical area of the research is so large. Advantage: efficiency. Disadvantage: reduced accuracy or representativeness, on account of the fact that in every stage there is a sampling error.
A. ACCIDENTAL SAMPLING/CONVENIENCE SAMPLING No system of selection but only those whom the researcher or interviewer meet by chance are included in the sample. Process of picking out people in the most convenient and fastest way to immediately get their reactions to a certain hot and controversial issue.
Accidental/convenience sampling is not representative of the target population because samples are selected only if they can be accessed easily and conveniently. Advantage: easy to use. Disadvantage: bias is present. It can deliver accurate results when the population is homogeneous.
B. PURPOSIVE SAMPLING The respondents are chosen on the basis of their knowledge of the information desired.
TYPES OF PURPOSIVE SAMPLING
1. QUOTA SAMPLING: A specified number of persons of certain types is included in the sample. Its advantage over accidental sampling is that many sectors of the population are represented. But its representativeness is doubtful because there is no proportional representation and there are no guidelines in the selection of the respondents.
2. JUDGEMENT SAMPLING: The sample is taken based on certain judgements about the overall population. The critical issue is objectivity: how much can judgement be relied upon to arrive at a typical sample? Advantage: reduced cost and time involved in acquiring the sample.
Acknowledgment: We are very thankful to Sir Dr. Iftikhar Hussain.
Why is the Y chromosome slowly disappearing in human DNA?
It’s a question that has baffled scientists since they began understanding DNA, and it’s one that Michael E. Hood, associate professor of biology, believes he can help answer—using a fungus that infects a carnation-like flower.
The problem lies in the unusual nature of the human male sex chromosomes, the only two in human DNA that are unequally matched, with an X and Y pairing up to create a male. In all other chromosomes, humans typically have two copies, but that’s not true for the Y, which is always paired with an X.
This combination means that changes to the Y are sheltered by its X counterpart; they never appear either as expressed mutations or get corrected, as similar changes would in other parts of the DNA sequence. It’s the evolutionary equivalent of a stalagmite in a cave, slowly building up defects one tiny change at a time.
“There seem to be certain places in the genome where harmful mutations accumulate,” Hood says. “The X can’t degenerate in that way because, in females, that’s all you’ve got.”
While no one is really worried about men going extinct (as the tabloid interpretation would have readers believe), scientists are interested in understanding the gradual process of genetic deterioration and where it might eventually lead. It’s a difficult question to explore, since the answer involves tracking human DNA and the harmful mutations genomes accumulate over many generations.
By using the anther smut fungus as a working model, and a rapidly evolving one at that, Hood believes he can offer key insights into genome function. The National Institutes of Health agrees: It recently awarded Hood a $444,651 grant to investigate the issue using the fungus.
There is no such thing as a male or female anther smut, but the fungi do have two mating types with a pair of chromosomes (remarkably similar to XY) that are undergoing degeneration in the same way.
There are other advantages to anther smut: the fungal models are safe for researchers to use, the fungus is easy to grow, and, unlike humans, fungi have small genomes that make the sequencing process simple. Hood can grow a generation in a few weeks and also has access to a large group of related species.
This is not the first time Hood has used anther smut to test theoretical problems. In the fall, he received a $1.7 million grant, together with colleagues at other schools, to use the fungus in the study of sexually transmitted diseases.
“Computer simulations, theoretical work, are really good at indicating what might happen in nature, but that’s quite different from saying what does happen in nature,” Hood says. “The empirical evidence, with a real biological system, finishes off that investigation in a way.”
By tackling fundamental questions—does the degeneration of the Y chromosome slow over time, for example, or does it simply continue until the chromosome is gone?—Hood hopes his work will help clear the way toward solving basic problems that humans face at the genetic level and in disease.
“One of the things I like most about biology is that there’s an answer in there somewhere, and it takes real investigation in order to pull out the explanation,” Hood says. “It’s fun to do the work to try to resolve a mystery.” |
Ultrasound is a cyclic sound pressure wave with a frequency greater than the upper limit of the human hearing range. Ultrasound is thus not separated from audible sound by any difference in physical properties, only by the fact that humans cannot hear it.
Ultrasound is used in many different fields. Ultrasonic devices are used to detect objects and measure distances. Ultrasonic imaging is used in human and veterinary medicine. In non-destructive testing of products and structures, ultrasound is used to detect invisible flaws. In industry, ultrasound is used for cleaning and mixing, and to accelerate chemical processes.
Ultrasonics is the application of ultrasound. Ultrasound can be used for imaging, detection, measurement, and cleaning. At higher power levels ultrasonics are useful for changing the chemical properties of substances.
Figure: Ranges of ultrasound
A common use of ultrasound is in range finding; this use is also called SONAR (sound navigation and ranging). An ultrasonic pulse is generated in a particular direction. If there is an object in the path of this pulse, part or all of the pulse will be reflected back to the transmitter as an echo and can be detected through the receiver path. By measuring the difference in time between the pulse being transmitted and the echo being received, it is possible to determine the distance.
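The timing calculation just described is simply distance = speed × time, halved because the pulse travels out and back. A minimal sketch (Python used for illustration; the 343 m/s figure assumes sound in room-temperature air, a value not given in the text):

```python
def echo_distance(round_trip_seconds, speed_of_sound=343.0):
    """One-way distance to the reflecting object, in meters.

    The measured delay covers the trip out *and* back, so the one-way
    distance is half of speed * time. 343 m/s is the speed of sound in
    air at about 20 degrees C; underwater sonar would use ~1480 m/s.
    """
    return speed_of_sound * round_trip_seconds / 2.0

# A 10-millisecond round trip in air puts the object about 1.7 m away.
distance_m = echo_distance(0.010)
```

The same two-line calculation underlies medical ultrasound imaging, where echo delays from tissue boundaries are converted to depths using the speed of sound in tissue instead of air.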
Principle of Active SONAR
Genetic engineering (GE) is the modification of an organism’s genetic composition by artificial means, often involving the transfer of specific traits, or genes, from one organism into a plant or animal of an entirely different species. When gene transfer occurs, the resulting organism is called transgenic or a GMO (genetically modified organism).
Genetic engineering is different from traditional cross breeding, where genes can only be exchanged between closely related species. With genetic engineering, genes from completely different species can be inserted into one another. For example, scientists in Taiwan have successfully inserted jellyfish genes into pigs in order to make them glow in the dark. 1
All life is made up of one or more cells. Each cell contains a nucleus, and inside each nucleus are strings of molecules called DNA (deoxyribonucleic acid). Each strand of DNA is divided into small sections called genes. These genes contain a unique set of instructions that determine how the organism grows, develops, looks, and lives.
During genetic engineering processes, specific genes are removed from one organism and inserted into another plant or animal, thus transferring specific traits.
Nearly 400 million acres of farmland worldwide are now used to grow GE crops such as cotton, corn, soybeans and rice. 2 In the United States, GE soybeans, corn and cotton make up 93%, 88% and 94% of the total acreage of the respective crops. 3 The majority of genetically engineered crops grown today are engineered to be resistant to pesticides and/or herbicides so that they can withstand being sprayed with weed killer while the rest of the plants in the field die.
GE proponents claim genetically engineered crops use fewer pesticides than non-GE crops, when in reality GE plants can require even more chemicals. 4 This is because weeds become resistant to pesticides, leading farmers to spray even more on their crops. 4 This pollutes the environment, exposes food to higher levels of toxins, and creates greater safety concerns for farmers and farm workers.
Some GE crops are actually classified as pesticides. For instance, the New Leaf potato, which has since been taken off grocery shelves, was genetically engineered to produce the Bt (Bacillus thuringiensis) toxin in order to kill any pests that attempted to eat it. The actual potato was designated as a pesticide and was therefore regulated by the Environmental Protection Agency (EPA), instead of the Food & Drug Administration (FDA), which regulates food. Because of this, safety testing for these potatoes was not as rigorous as with food, since the EPA regulations had never anticipated that people would intentionally consume pesticides as food. 5
Adequate research has not yet been carried out to identify the effects of eating animals that have been fed genetically engineered grain, nor have sufficient studies been conducted on the effects of directly consuming genetically engineered crops like corn and soy. Yet despite our lack of knowledge, GE crops are widely used throughout the world as both human and animal food.
Scientists are currently working on ways to genetically engineer farm animals. Atlantic salmon have been engineered to grow to market size twice as fast as wild salmon, 6 chickens have been engineered so that they cannot spread H5N1 avian flu to other birds, 7 and research is being conducted to create cattle that cannot develop the infectious prions that can cause bovine spongiform encephalopathy (aka mad cow disease). 8 At this point, no GE animals have been approved by the FDA to enter the food supply. 9 Genetic engineering experiments on animals do, however, pose potential risks to food safety and the environment.
In 2003, scientists at the University of Illinois were conducting an experiment that involved inserting cow genes into female pigs in order to increase their milk production. They also inserted a synthetic gene to make milk digestion easier for the piglets. Although the experimental pigs were supposed to be destroyed, as instructed by the FDA, 386 offspring of the experimental pigs were sold to slaughterhouses, where they were processed and sent to grocery stores as pork chops, sausage, and bacon. 10
University of Illinois representatives claimed that the piglets did not inherit the genetic modifications made to their mothers, but there was still a clear risk to the people who purchased products made from the 386 piglets. Since no genetically engineered animal products have ever been approved by the FDA, the pork products that reached supermarket shelves were technically illegal for human consumption. As a result of the accident, the FDA sent letters in May 2003 to all land-grant universities, reminding researchers that their work "may require" licensing under the animal drug law. 10
Many concerns have been raised over the inadequate testing of the effects of genetic engineering on humans and the environment. Genetic engineering is still an emerging field, and scientists do not know exactly what can result from putting the DNA of one species into another. The introduction of foreign DNA into an organism could trigger other DNA in the plant or animal to mutate and change. 11 In addition, researchers do not know if there are any long-term or unintended side effects from eating GE foods. 12
Critics of genetic engineering believe that GE foods must be proven safe before they are sold to the public. Specific concerns over genetic engineering include: 11
Once released into the environment, genetically engineered organisms cannot be cleaned up or recalled. So, unlike chemical and nuclear contamination, which can at least be contained, genetic pollution cannot be isolated and separated from the environment in which it is spreading.
GE crops can cross-pollinate related weed species, passing on their ability to survive the application of weed killers. Even without passing on that specific genetic trait, the widespread adoption of GE crops that are resistant to herbicides like Roundup has led to dramatic increases in the use of this weed killer, and weeds have gradually developed resistance to the herbicide. This leads to the evolution of superweeds that are very difficult to control. Already, superweeds have infested 12 million acres in the United States. 13 At least 20 weed species worldwide are resistant to Roundup, including aggressive weeds like ragweed, pigweed and waterhemp. 14
Some GE seeds are engineered so that plants cannot reproduce their seeds. In many parts of the world, saving seeds from season to season is the only way farmers are able to survive and continue growing food. However, with GE technology, seeds can be sterile, forcing farmers to rely on seed companies for their livelihood, an expense they may not be able to bear.
* These GE crops were approved by the federal government, but are not known to be commercially available. 16 |
Outline of Lecture:
Definition of pathology
It is the “scientific study of disease”.
“scientific study of the molecular, cellular, tissue, or organ system response to injurious agents.”
What is a disease?
It is the “State in which an individual exhibits an anatomical, physiological, or biochemical deviation from the normal”
Classification of Diseases:
Developmental – genetic, congenital.
Inflammatory – trauma, infections, immune, etc.
Neoplastic – tumors, cancers.
Degenerative – ageing.
Iatrogenic – drug induced.
Basic Language of Pathology
Common changes in all tissues, e.g., inflammation, cancer, ageing, edema, hemorrhage, etc.
Discussing the pathologic mechanisms in relation to various organ systems, e.g., CVS, CNS, GIT, etc.
What Should We Know About a Disease?
Epidemiology – Where & When.
Etiology – What is the cause?
Pathogenesis – Evolution of disease.
Morphology – Structural Changes
Pathology focuses on 4 aspects of disease:
Knowledge of etiology remains the backbone of:
Understanding the nature of diseases
Treatment of diseases.
“Study of the cause of a disease”
An etiologic agent :
is the factor (bacterium, virus, etc.) responsible for lesions or a disease state.
Predisposing Causes of Disease:
Factors which make an individual more susceptible to a disease (damp weather, poor ventilation, etc.)
Exciting Causes of Disease:
Factors which are directly responsible for a disease (hypoxia, chemical agents…. etc.).
What is the cause?
The sequence of events in the response of the cells or tissues to the etiologic agent, from the initial stimulus to the ultimate expression of the disease, “from the time it is initiated to its final conclusion in recovery or death.”
Clinical Symptoms & Signs
Clinical signs are seen only in the living individual.
“Functional evidence of disease which can be determined objectively or by the observer” (fever, tenderness, increased respiratory rate, etc.)”
Prognosis: the expected outcome of the disease. It is the clinician’s estimate of the severity and possible result of a disease.
Study of what is abnormal, wrong, or diseased!
“Scientific Study of Disease”
Normal → Abnormal → Treat
“Is the foundation of medical science and practice. Without pathology, the practice of medicine would be reduced to myths and folklore”
What is “Diagnosis”?
The formal name(s) used to describe a patient’s disease
The process of identifying a disease based on the patient’s symptoms, the doctor’s findings, and the results of investigations and laboratory tests
What do you need to make a diagnosis?
A system of classification that supplies the necessary names, definitions, and criteria
The means to ascertain the defining characteristics of a disease in the individual patient
Past and Present….!
In the past, .. people mistook magic for medicine…!
Now people mistake medicine for magic….!
Subdivisions of clinical Pathology:
Common changes in all tissues,
e.g., inflammation, cancer, ageing.
Specific changes in organs,
e.g., goiter, pneumonia, breast cancer.
Study of Disease: (Pathology)
Etiology – Causes
Pathogenesis – Evolution
Morphology – Structural Changes
Clinical Significance – Functional Changes
Techniques in Pathology:
Cell Cultures, Medical Microbiology
Right neck mass
Diffuse pattern – no follicles.
Large cells with moderate cytoplasm
Plenty of mitotic figures, Nuclei are vesicular prominent nucleoli
Features suggest T-cell NHL – malignant lymphoma.
Needs further marker studies for typing & management.
Carcinogenesis. DNA Damage, Mutation.
Uncontrolled cell division, tumor.
Enlarged lymph nodes, liver, spleen; microscopically – lymphoma cells.
Fever, weight loss, tumor in lymph nodes, liver, spleen.
Chapter 10: Motivation
1. Define the following terms: motivation; self-actualization; cognitive dissonance; attribution theory; locus of control; achievement motivation; learned helplessness;
2. What is the distinction between: reward & reinforcer; internal v. external locus of control; "seeking success" v. "avoiding failure"; learning v. performance goals; intrinsic v. extrinsic motivation;
3. How do intensity and direction apply to motivation?
4. Describe motivation as a result of reinforcement history. (NOTE: how could motivation be a specific result of schedule effects?) Your text states that an examination of the behavioral approach to motivation might answer many questions that teachers have, "...but it is usually easier to speak in terms of motivations to satisfy various needs". Is this acceptable for a data-based discipline?
5. The reinforcing value of stimuli used as a reward cannot be assumed. What does this mean? Describe how one can use a contingency analysis to determine if a reward is a reinforcer.
6. Describe Maslow's hierarchy of needs (the specific levels) in terms of growth and deficiencies. Describe the educational implications of Maslow's needs as described in the text. Is his need theory specific enough to be of any use to educators (or business, etc.)?
7. When beliefs and one's behaviors are incongruent, one experiences problems. Teachers can either attempt to change the student's beliefs and attitudes, or their behavior. How can one change beliefs and attitudes since they are private events? Does it make sense that private events are merely another type of behavior so teachers should focus on the emitted (overt) behaviors of students and assume that their collateral products (e.g., private events) will also change? Explain.
8. How could focusing on a student's behavior rather than their qualities as a person (i.e., what Rogers calls the self) help to avoid cognitive dissonance or faulty attribution when students, for example, receive poor grades. (We don't want the student ending up hating a subject!) ["Your performance on the quiz, Teresa, was not up to your usual work"].
9. List the four explanations for success/failure associated with attribution theory.
10. Self-concept/esteem seems related to achievement. Many educators feel that we should increase students' self-concept so that their achievement also goes up. What is faulty with this assumption?
11. Describe the differences between students with high/low achievement motivation. Does the term achievement motivation describe or explain academic success/failure? How do the terms "achieving to seek success or to avoid failure" lend themselves to a behavioral interpretation?
12. List the respective characteristics of success seekers and failure avoiders.
13. Explain why you agree/disagree with the text's declaration that "...teachers should try to convince students that learning rather than grades is the purpose of academic work". Also, do you agree that teachers should avoid using competitive or incentive systems of grading?
14. Describe how learned helplessness develops according to your text, and compare it with how your instructor says it develops.
15. Briefly describe the expectancy model of motivation. Why is it important to program academic activities (based on the students' abilities) so that the student has a moderate chance of success (rather than a high or low one)?
16. Summarize the 5-step process through which teacher expectations affect student performance. List the three ways teachers can communicate positive expectations to students.
17. Is there a qualitative difference between intrinsic and extrinsic reward? Can/does use of extrinsic reward decrease intrinsic motivation? (Be careful, this is a loaded question!)
18. Arousing students' interests is one of the four methods teachers can use to increase a subject's intrinsic motivation. Describe the other three. Explain how arousing student interest relates to many other concepts covered thus far (e.g., lesson plans, receptive learning, etc.).
19. List/describe the six basic principles of incentives teachers should use to promote student learning.
20. List the categories of reinforcers available for use in the classroom.
21. (A) Describe Slavin's ILE. (B) Explain why you agree/disagree with Slavin's position that teachers should reward effort and improvement rather than absolute or relative performance. Think about it, should a smart kid (and we are alluding to ability!) who is capable of doing a perfect paper only receive an A if the paper is perfect, when another (less able) student receives an A for a paper that is less satisfactory than the other students? Does Slavin's ILE "sound like" a variation of DRH (differential reinforcement of high rates of responding)? [There is a sample lesson plan on ILE starting on page 15 of this guide.]
22. Describe the features associated with cooperative learning.
Describe the difference(s) between cooperative, individual, and competitive grading systems. How can one establish a cooperative system?
REVIEW OF CHAP. 10: MOTIVATION
1. Motivation, in the most general sense of the term, is an attempt to explain why behavior occurs (e.g., why people do what they do). Many educators believe motivation is a prerequisite for learning; behaviorists, though, view it as a collateral product of learning because it is, itself, learned. Behavioral educators view the term (motivation) as a descriptive (though sloppy) construct. They prefer to speak in terms of one's history and schedules of reinforcement, and the consequences/contingencies under which one currently operates. The text is correct when it states, "..it is usually easier to speak in terms of motivations to satisfy various needs," but this reflects a lack of parsimony and encourages non-empirical analyses of the dynamics of classrooms.
A. The four situations described on pp. 348-349 illustrate this. The $50 is characterized as not being a good reinforcer for situations 1, 3, & 4. This is an assumption/conclusion that hasn't been demonstrated (no data are available to show that the behaviors in those situations have been weakened!).
2. Non-behavioral psychologists/educators view motivation as having two dimensions: intensity (strength) and direction (how it is focused; into which behaviors-complementary or competing ones).
Additionally, these educators often describe motivation in terms of needs and drives. Needs come in a variety of types: deficiency needs (physical, safety and security, belongingness, and esteem) and growth needs (knowing/understanding, esthetic, and self-actualization). The most current/popular need/drive theories (others abound and I will briefly present them in class) are Maslow's humanistic approach and the cognitive ones.
A. Maslow's theory has an intuitive appeal to it, but there is little empirical support that this philosophy can be translated successfully into real-life settings such as schools and businesses. Until there is a data base that demonstrates the utility of the perspective, it can only play a minimal role in a science whose primary goal is data-based instruction. Still, there are two areas where this humanistic theory has specific and general implications for schools. Specifically, teachers have to accept that children whose physiological (e.g., food and physical) and safety (e.g., shelter and physical health) needs are not being met regularly and adequately will not be able to learn academic content. Therefore, teachers may have to intervene with social services agencies (often through the school or district's offices) to ensure that students are receiving medical attention and/or meals and shelter. Too many people, including educators, believe that children's safety and physiological needs are easily met in our society; ask inservice teachers if they know of students who are abused, come to school hungry, or don't have proper clothes or housing. These are real problems for too many of our youth. Maslow's theory, though, is weak because it offers nothing concrete about how teachers are supposed to meet these needs. In his theory teachers are encouraged to help meet children's need for love, belongingness, and self-esteem. These are important aspects that teachers should be aware of, but they will likely have to borrow/use techniques from other areas (cognitive and behavioral recommendations) to determine how to best do this.
B. Cognitive: Schacter-Singer theory. There are several cognitively oriented need theories. They are all based to some extent on Schacter and Singer's work. Their theory, essentially, says that what one does is based on your previous experiences (and your ability to remember them) and your perceptions of the current situation.
1. Cognitive dissonance (Festinger) fits neatly into this orientation. When we behave in a way inconsistent with our beliefs, we feel uncomfortable and must either alter our beliefs or our actions in order to reduce the internal tension. This finding should influence how educators provide feedback re: academic performance.
2. Attribution theory looks at individuals' personality characteristics to see if they attribute/blame their performance on personal (internal) or external (others) factors for success or failure. It uses the concept of locus of control (internal v. external). Even though this approach is more descriptive than explanatory, it does provide implications for education specific to self-fulfilling prophecies, teacher expectations, and the need to (a) begin instruction at the learner's entry level; (b) provide them with sufficiently small steps that are achievable; (c) prescriptively diagnose/remediate; (d) provide sufficient levels of reinforcement; & (e) ensure adequate corrective feedback.
3. Ignore material based on research using the TAT. This is an invalid projective instrument. Thus, studies that have used it to generate data are highly suspect.
4. Achievement Motivation: (McClelland and Atkinson). A very good description (which has high predictive and concurrent validity) of how people strive for success and choose goals/activities that have, historically, been associated with success or failure.
A. Utilizes the distinction between performance (recognition of attaining a goal) and learning (emphasizes knowledge and self-improvement) goals.
B. Learned Helplessness: "no matter what I do I will fail, so I will do nothing." Produced, in labs with animals, via non-contingent aversive stimulation.
3. Research on the relationship(s) between motivation and learning, in general, suggests that for each person and task, there is an optimum level of motivation that supports optimum performance; if the motivation is too high or too low, performance suffers.
4. Expectancy Theories of Motivation. This is a stochastic model that attempts to predict the strength/direction of motivation based on (a) the person's perception of how likely they are to be successful in a task and (b) the importance of successfully achieving the task to the person. This type of approach is (a) prone to a relatively high error rate (how do we accurately measure these probabilities?), but (b) it is still useful because it does allow researchers to fairly accurately predict an outcome (a goal of science) as long as users remember (c) that this type of theory is ONLY describing/predicting a relationship (it is not explaining it). The most basic implication from these (and other diverse) studies is that the tasks assigned to students must (a) take into consideration their entry-level skills so that (b) the task is neither too difficult nor too easy for the student (e.g., individualized instruction-a very good thing).
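The expectancy idea in the paragraph above is often formalized (for example, in Atkinson's version) as motivation = probability of success × incentive value, with incentive modeled as 1 − P so that success at harder tasks is worth more. That specific functional form is one common assumption, not something the text commits to, but it reproduces the "moderate difficulty is best" implication:

```python
def motivation(p_success, incentive=None):
    """Expectancy-value product.

    If no incentive is given, model it as 1 - p_success: succeeding
    at a harder task is assumed to be proportionally more valuable.
    """
    if incentive is None:
        incentive = 1.0 - p_success
    return p_success * incentive

# The product p * (1 - p) peaks at p = 0.5: tasks that are too easy
# or too hard both yield weak motivation under this assumption.
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, round(motivation(p), 2))
```

This is one way to make precise why assigned tasks should give a student a moderate, rather than very high or very low, chance of success.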
5. Pygmalion effect: Teacher expectations affect students' self-concepts and their perceptions of their abilities and their performances. Communicate high (realistic) performance expectations of a positive nature. (Hummel's opinion: If you can't be mainly positive with all students most of the time, you shouldn't be a teacher!)
6. Types/classes of reinforcers: social (e.g., praise); activity (free time; library time; recess, etc.-use with the Premack Principle); tangible (games, toys, stars, etc.); exchangeable (tokens, points, etc.); and consumable (pop corn, gum, candy, etc.)
7. Distinguishing between intrinsic and extrinsic reinforcers (and reinforcement) with respect to academic achievement. Most educators believe that intrinsic motivation/reinforcement for learning is qualitatively and quantitatively "better" than external ones. There is no research that shows intrinsic "satisfaction/reinforcement/motivation" produces higher performance or better attitudes than external reinforcement. Still, we do want people to "internalize" a love for learning. How do we accomplish this? Provide lots of reinforcement for early learning; gradually fade the external support from the task so that engaging in the task itself becomes reinforcing (i.e., intrinsic). Ex.: I want my kids to read. Reading is very reinforcing for many people (but not all). Why? Those of us who were taught to read with a "good" method and who experienced lots of appropriate reinforcement that accompanied this data-based instruction, (a) learned to read in a pleasant way and (b) the reading activity, itself (because it was associated with reinforcement and we are successful at the task), has become a reinforcing activity in and of itself. So, one "form" of what many people would call intrinsic motivation/reinforcement is really a product of extrinsic reinforcement, and once the skill was learned, it possesses its own reinforcing properties. Intrinsic reinforcement can also be produced through negative reinforcement. Two "types" of behavior are generated when negative reinforcement is used: escape [e.g., R - (-) = +] and avoidance responding. In avoidance responding, one learns to emit a response in order to avoid the onset of the negative reinforcer (the unpleasant stimulation one escapes from if one is actually experiencing it). 
Basic and applied research shows that avoidance behaviors are very resistant to extinction (because the response is emitted so consistently the person doesn't experience that the contingency no longer exists!), so it could appear that a person was doing a response in the absence of external contingencies, but the response pattern is really an example of behavior developed via external consequences. Finally, many observers will mistakenly conclude that a response that is not immediately followed by a consequence must be an example of a response that is intrinsically reinforcing. Not necessarily. Many behaviors humans do are maintained on intermittent schedules of reinforcement. These schedules are very resistant to extinction, and the person learns to emit the response numerous times before it is reinforced. If the response is not observed over a long enough period (and perhaps across settings) one wouldn't observe the behavior being consequated [e.g., R + (+)=+] and might conclude that it is an example of intrinsic reinforcement when really it is an example of extrinsic.
A. Some strategies of instruction:
1. Reinforce the idea of why it is important to learn stuff.
2. Use variety in presentation (maintains arousal/interest).
3. Communicate clear expectations and requirements (behavioral objectives, do task analyses, etc.).
4. Provide frequent, immediate, and clear corrective feedback, both formal and informal, that incorporates what they did right, what was wrong, and how to avoid making the same type of mistake in the future.
5. Utilize a formal reward structure (token system?) in the class for appropriate academic performance (and don't neglect praise as a powerful informal reward for all types of behavior).
6. When possible, let students play some role in setting the goals of their class (individualize goals for students within the larger context of what everyone is supposed to achieve during a lesson and for the year; the larger context almost invariably focuses on minimum standards, so there is lots of room for individual performance).
Last Updated: July 16, 1997 |
In addition to viruses and bacteria, there are three other major types of microbes that can cause infectious disease—and one newly discovered type. As with viruses and bacteria, not all of the species in each of these categories are infectious to humans. But many of the world’s most prevalent infectious diseases are caused by microbes that are included in the groupings below.
Fungi cause a wide variety of diseases in humans, ranging from athlete’s foot to ringworm to deadly histoplasmosis. Some fungi, such as yeasts, are comprised of a single cell, but most are multicellular. They are found in the air, soil, on plants, and in water. Only about half of all fungi are harmful. Many perform vital functions, such as helping materials decay and decompose in the environment. They reproduce primarily by forming spores that float in the air. These spores can land on human skin or be inhaled, which is why most fungal infections start on the skin or in the lungs. Weakened immune systems can make people more prone to fungal infection. So can taking antibiotics, which reduce the bacteria in the body that keep some fungal communities, such as yeast, from growing unchecked.
Protozoa: Amoebas and paramecia may be the most familiar examples of these single-celled microbes. Able to move rapidly and flexibly because they do not have cell walls, the different species that fall under this category otherwise have little in common. Protozoa typically enter human hosts through contaminated water or food or by the bite of an infected arthropod, such as a mosquito. They are able to multiply in humans, so the presence of just one protozoan can lead to serious infection. These parasites cause some of the deadliest infectious diseases worldwide, including malaria and dysentery.
Helminths: Parasitic worms, or helminths, cause mild diseases such as swimmer’s itch but also more serious illnesses such as schistosomiasis, a disease spread to humans via snails. Tapeworms, flukes, and roundworms comprise the three main categories of helminths. Unlike lice and fleas, which are external parasites, helminths live inside a host. Their presence typically disrupts the host’s nutrient absorption, causing weakness and a greater vulnerability to disease. Helminth eggs can contaminate food, water, soil, feces, air, and surfaces such as doorknobs and toilet seats. The eggs enter the human body through the mouth, anus, or nose and often hatch, grow, and multiply in the human intestine, though they may infect other areas of the body. Proper sanitation and thorough cooking of meat can help prevent the transmission of helminths.
A newly recognized class of infectious agents—the prions, or proteinaceous infectious particles—consist only of protein
. Prions are thought to cause variant Creutzfeldt-Jakob disease
in humans and “mad cow disease” in cattle. These proteins are abnormally folded and, when they come in contact with similar normal proteins, turn them into prions like themselves, setting off a chain reaction that eventually riddles the brain with holes. Prions evoke no immune response and resist heat, ultraviolet light, radiation, and sterilization, making them difficult to control. |
Star clusters
Star clusters are groups of stars that share a common origin and remain gravitationally bound for a certain period of time. They are a useful tool for astronomers because they help in studying and modeling stellar evolution. There are two main types of star clusters: open clusters and globular clusters.
Types of star clusters
Open star clusters
Open star clusters are so named because their individual stars can be resolved easily. For example, the Pleiades and Hyades are close enough that individual stars can be seen with the naked eye. They are sometimes called galactic clusters because they are located in the dusty spiral arms of the galaxy. Stars in an open cluster have a common origin: they formed from the same initial molecular cloud. A typical cluster contains several hundred stars, though some hold several thousand.
The stars are bound by gravity, but only weakly. As a cluster orbits the galaxy, it is eventually dispersed by gravitational encounters with more massive objects. The Sun is believed to have formed in an open cluster that no longer exists. Open clusters are therefore always relatively young objects; nebulosity is still visible around the Pleiades, hinting at their recent formation.
Open clusters are filled with Population I stars, which are young and metal-rich, and span roughly 2 to 20 parsecs across.
Globular star clusters
Globular clusters contain from a few thousand to a million stars arranged in a spherical, gravitationally bound system. They reside in the galactic halo and represent the most ancient stars: Population II (evolved but metal-poor). These clusters are so old that any star above roughly spectral class G or F has already evolved off the main sequence. Globular clusters contain little dust and gas because new stars no longer form there. The stellar density in their interiors is much higher than in the Sun's neighborhood.
In globular clusters, stars also share a common origin, but here gravity holds the members firmly, so the stars do not disperse. There are approximately 200 globular clusters in the Milky Way, including 47 Tucanae, M4, and Omega Centauri, although it has been suggested that the last of these may actually be a dwarf spheroidal galaxy.
Age of star clusters
Star clusters are incredibly valuable to astronomers because they can help determine the age of a star and track its evolution.
Because the stars of an open cluster have a common origin, their metallicities are similar, which means all members pass through the same evolutionary stages. In addition, they all lie at essentially the same distance, so differences in apparent magnitude reflect real differences in luminosity: stars that appear brighter really are more luminous than their fainter neighbors.
With this information, scientists construct colour-magnitude diagrams for the clusters, plotting the apparent magnitude V on the vertical axis against the colour index B − V on the horizontal axis. Using spectroscopic parallax, the diagram can be calibrated to display absolute magnitudes.
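The calibration from apparent to absolute magnitude rests on the distance modulus. A minimal sketch (the Pleiades distance of ~136 pc is an assumed round figure for illustration):

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """Convert apparent magnitude to absolute magnitude
    via the distance modulus: M = m - 5 * log10(d / 10 pc)."""
    return apparent_mag - 5 * math.log10(distance_pc / 10)

# A star in the Pleiades (assumed distance ~136 pc) observed at m_V = 10.0:
print(round(absolute_magnitude(10.0, 136), 2))  # about 4.33
```

Because every member of a cluster lies at essentially the same distance, a single distance modulus shifts the whole diagram from apparent to absolute magnitudes.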
Building such diagrams for several clusters yields the graph below; since the clusters lie at different distances, it is calibrated to absolute magnitudes.
A second scale appears on the right vertical axis: the age of the cluster in years. The Double Cluster in Perseus is so young that most of its stars are still on the main sequence. The Pleiades are slightly older and have no stars bluer than colour index 0 (spectral type A0); their more massive members have already moved onto the giant branch. M67 has no star hotter than a colour index of 0.4. Most significant is the turn-off point in the diagram, where the cluster's stars leave the main sequence: the lower the turn-off point on the main sequence, the older the cluster.
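The link between the turn-off point and age can be sketched with the classic main-sequence lifetime scaling, t ≈ 10 Gyr × (M/M☉)^(-2.5). This is a crude textbook approximation, not a substitute for real isochrone fitting:

```python
def ms_lifetime_gyr(mass_solar):
    """Rough main-sequence lifetime from the scaling
    t ~ 10 Gyr * (M / M_sun)**-2.5 (a crude approximation)."""
    return 10.0 * mass_solar ** -2.5

# A cluster whose turn-off sits at ~2 solar masses (roughly an A star)
# is of order a couple of billion years old:
print(round(ms_lifetime_gyr(2.0), 2))  # ~1.77 Gyr
```

Lower-mass turn-off stars imply longer lifetimes and thus older clusters, which is exactly why globular clusters, with turn-offs near one solar mass, come out at ages of roughly ten billion years.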
Globular clusters are usually much older than open ones, so their colour-magnitude diagrams show more evolved stars and lack high-mass objects. This is illustrated below for M55.
Above the turn-off point there is a group of hot stars still on the main sequence, called blue stragglers. Scientists believe that, because of the high stellar densities in globular clusters, some stars are able to merge; the combined mass makes the merged star hotter and brighter than the main population. Star clusters are not eternal structures, and they are eventually destroyed.
A situation whose outcome is ironic in the context of the storyline. Situational irony is a common feature of storytelling, particularly in "moral tales" about the values of the characters. It often works as humor, but it is also a storytelling method that uses analogies and logic to make an abstract point, and it can function as a kind of metaphor: a logical exercise with a premise and a payoff in the ironic outcome.
Examples of Situational Irony:
Herodotus: Croesus and the Delphic oracle
Citizen Kane: Rosebud, contrasting with the meaning of Citizen Kane's status
New Testament: The Good Samaritan, the Samaritan as the unlikely hero |
Educational Toys and Life Skills
Educational toys do more than promote developmental skills in children; they also help children acquire and improve essential life skills. Creativity, self-confidence, independence, responsibility, and integrity can all be cultivated through the use of carefully selected educational toys.
One hallmark of educational toys is how well they support creative, open-ended play. A tray of wooden food can inspire a child to spend a whole afternoon running a pretend restaurant or planting and harvesting crops on a pretend farm. A set of blocks can be turned into a tower, a road system, a fort, a car, or even different animals. And the possibilities for a pound of modeling clay are endless! The more time a child spends exploring all the different things a toy can become, the more developed the child’s powers of imagination will be. This fosters an open-mindedness to new possibilities that will help the child think of creative and innovative solutions to any challenges he or she ends up facing as an adult.
One way to build self-confidence is through play that encourages a child to assert him or herself. Singing, performing, and acting in front of an audience all help children assert themselves both in the planning stage and during an actual performance. Children also learn to assert themselves by acting out scenarios or performing informally with peers. Open-ended toys such as musical instruments and dress-up clothes and props encourage this type of play.
Taking risks that pay off will also develop a child’s self-confidence. Susan G. Solomon, author of American Playgrounds: Revitalizing Community Space, notes that “Children need a chance to take acceptable risks, learn cause and effect, make choices and see consequences. If they don’t learn to take risks, we’ll lose a generation of entrepreneurs and scientists.”
To take such risks, children must develop powers of risk assessment and decision making so that they can be sure that the risks they plan to take are, in fact, acceptable. The act of riding and controlling large toys such as bicycles requires children to calculate physical risks. The logic needed to play certain strategy-based board games like Monopoly, chess, and checkers involves risk assessment such as whether or not to invest in a property or risk one piece for a future, greater gain.
To improve their ability to calculate risk, children should also develop their decision-making skills. Science and engineering kits can help by requiring children to use observations and directions to make decisions about how to run an experiment or build a working machine. Puzzles and building construction sets can also hone this skill.
In general, allowing children to direct their own play and be in charge of what to do during their free time helps them become more self-sufficient and resilient. In particular, certain educational toys foster skills such as problem solving, taking charge of a situation, and leadership.
One aspect of being independent is being able to solve a problem on your own. Working with a construction toy system allows a child to explore different solutions to the challenge of building various items. Logical challenges faced on your own, such as figuring out how to use a set of pattern blocks to replicate certain complicated patterns, also build problem-solving skills.
Another aspect of being independent is taking charge of a situation. This can be as simple as providing your baby with two toy choices and allowing the baby the autonomy to make his or her own decision about which to play with. Beyond that, you can also encourage the development of independence by allowing your child to direct what roles you will take on when playing with your child or letting your child be in charge of how a toy will be played with. Providing your child with open-ended play sets such as farms, fire and police stations, pirate ships, tree houses, and train stations creates a situation where your child can control what scenarios he or she will act out that day.
A third aspect of being independent is taking a leadership role. While unit blocks and communal building sets of oversized hollow wood blocks, huge foam blocks, or sturdy cardboard blocks can foster cooperation skills, they can also offer opportunities for one child to lead others in a positive way to build a specific construction that that child has in mind. Educational toys can also help children become self-motivated and self-directed so that they can lead themselves to accomplishments without always relying on outside support and affirmation.
To become good citizens, all children should develop a sense of personal, societal, and environmental responsibility. In general, trusting children to take good care of their toys, to play nicely with them and put them back where they belong when play is done, can begin to foster a sense of responsibility. At the most basic level, a chart such as Melissa and Doug’s Magnetic Responsibility Chart can help a child keep track of his or her personal obligations. Beyond that, specific toys can develop other kinds of responsibility.
When a child is provided with an open-ended toy such as a construction set that must be assembled by the child, he or she will take on the personal responsibility of following the directions and making sure the toy is put together correctly. This will train the child to take a sense of pride and personal responsibility in any future jobs he or she is expected to do. And, when a child takes care of a doll or pretend pet, he or she also develops a sense of personal responsibility for fulfilling his or her obligations to someone else.
Role-playing of obligations can extend to creating a sense of responsibility to society. When a child pretends that he or she is a construction worker or a doctor, that child is practicing taking on adult responsibilities that must be fulfilled if people are to live together in communities. Such role-playing socializes the child and allows him or her to get used to the idea of becoming a contributing member to such a community once he or she is grown.
Finally, science kits that encourage children to study the earth can educate children about why people must take care of animals, land, resources, and so forth. Plus, toys that are crafted from sustainable materials (such as Plan Toys) or bioplastics (such as Green Toys), or designed to use recycled materials (such as the Uberstix Scavenger sets), encourage a respect for the conservation of natural resources. This in turn leads to a developed sense of responsibility for caring for the environment.
Educational toys can also help children develop integrity. Using costumes and props to role-play situations such as customer and server can help children practice politeness and manners. Acting out scenarios such as taking care of an injured doll or animal can foster compassion and empathy. And playing competitive games fairly by taking turns and following the rules develops a child’s appreciation for right and wrong.
The educational benefit of toys for child development cannot be overstated. The childhood pursuit of play and discovery continues into adulthood: children develop a fascination with their surroundings through playing with toys, and they continue to pick up hobbies well into their adult lives.
With antibiotic resistance a growing threat, scientists are on the hunt for new ways to treat bacterial infections. One of these, called phage therapy, uses a special kind of virus that only infects and kills bacteria. (These viruses are called "bacteriophage" or simply "phage.")
The original idea for this therapy is actually quite old. It was pioneered by Félix d'Herelle in the 1920s (and is still used in Eastern Europe today) but it fell mostly out of favor with the advent of antibiotics like penicillin. However, with antibiotics becoming less effective today, scientists are increasingly turning to unconventional treatments.
Acinetobacter baumannii, often referred to as "Iraqibacter," gained notoriety in recent years for causing wound infections in soldiers returning from Iraq and Afghanistan. Like so many bacteria these days, it is resistant to multiple antibiotics and is easily transmitted in hospitals.
To address the problem posed by A. baumannii, a team of American military scientists turned to phage for assistance. Just as with antibiotics, bacteria can become resistant to phage. So, instead of using a single phage to target A. baumannii, the team created a very clever cocktail of five wild phages.
The team created wounds on the backs of mice and infected them with bioluminescent (glowing) A. baumannii. Then, the mice were injected with a control solution (PBS), a one-phage solution (AB-Army1), or the five-phage cocktail (AB Cocktail). Bioluminescence was monitored, which indicated roughly how many bacteria were present and how far the infection had spread, and a heat map was generated. (See figure.)
As shown, mice that were given a control solution had a difficult time clearing the infection. On the other hand, mice given the five-phage cocktail cleared the infection after roughly three days. Additionally, these mice lost less weight than the other mice during the course of the infection.
The phages in the cocktail used different mechanisms to kill the bacteria. The AB-Army1 virus, which was part of the cocktail, did not kill all of the bacteria on its own. Instead, it disarmed them. Some bacteria, like A. baumannii, produce a slimy capsule that inhibits the immune response. Because the AB-Army1 virus likely binds to a protein found in this capsule, it infected and killed only the bacterial cells that produced it, leaving the capsule-free bacteria as survivors. These "naked" bacteria were then exposed to assault by the other four phages, which infected them and subsequently caused them to explode.
The upside and the downside of phage therapy are one and the same: the viruses are extremely specific. The upside is that a phage will kill only a very specific type of bacterium, whereas antibiotics lay waste to many different bacteria, including friendly ones. The downside is that the phages are too specific. In fact, when the authors tested their five-phage cocktail on 92 clinical isolates of A. baumannii, it killed only 10 of them. Thus, the authors conclude that phage therapy must be highly personalized: different cocktails would be required to treat different A. baumannii infections.
As daunting as that sounds, it could be overcome if bacterial screening technology improves. Once that occurs, finding the viruses is the easy part. The researchers obtained all of the phage in their cocktail from sewage water.
Source: James M. Regeimbal et al. "Personalized Therapeutic Cocktail of Wild Environmental Phages Rescues Mice from Acinetobacter baumannii Wound Infections." Antimicrob. Agents Chemother. 60 (10): 5806-5816. Published: October 2016. doi: 10.1128/AAC.02877-15
(Image: Phage via Shutterstock) |
Wetlands and Biodiversity is the theme of World Wetlands Day for 2020. Wetlands are rich with biodiversity and are a habitat for a dense variety of plant and animal species. Latest estimates show a global decline of biodiversity, while wetlands are disappearing three times faster than forests. This year’s theme is a unique opportunity to highlight wetland biodiversity, its status, why it matters and promote actions to reverse its loss.
World Wetlands Day
2 February each year is World Wetlands Day, which raises global awareness about the vital role of wetlands for people and our planet. The date also marks the adoption of the Convention on Wetlands on 2 February 1971 in the Iranian city of Ramsar, on the shores of the Caspian Sea.
Wetlands are land areas that are saturated or flooded with water either permanently or seasonally. Inland wetlands include marshes, ponds, lakes, fens, rivers, floodplains, and swamps. Coastal wetlands include saltwater marshes, estuaries, mangroves, lagoons and even coral reefs. Fish ponds, rice paddies, and saltpans are human-made wetlands. |
Skinner and Behaviorism
Considered the father of Behaviorism, B.F. Skinner was the Edgar Pierce Professor of Psychology at Harvard from 1959 to 1974. He completed his PhD in psychology at Harvard in 1931. He studied the phenomenon of operant conditioning in the eponymous Skinner Box, still used today.
Quite the opposite of a neuroscientific approach, Behaviorism does not look under the hood. In its time, the theory was revolutionary because it deployed an experimental approach to the study of psychology, in contrast with the prevailing psychoanalytic approach. Under Skinner's leadership, Behaviorists subjected psychology to quantifiable, stringent measurement and the application of the scientific method.
Skinner was interested in how environmental experience and learning caused modification of certain behaviors. He developed the Operant Conditioning Pigeon Chamber and other devices to enable him to conduct controlled experiments. Stimuli were typically in the form of rewards (positive) or punishments (negative). The experiments revealed how behaviors could be increased with rewards or decreased with the application of punishments. |
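The basic dynamic described above, where rewarded behaviors strengthen and unrewarded ones fade, can be sketched as a toy simulation. This is purely an illustrative assumption for intuition, not Skinner's actual experimental model; the update amounts (0.05 and 0.02) are arbitrary:

```python
import random

def run_trials(n_trials, reward_prob, seed=0):
    """Toy operant-conditioning sketch: each rewarded 'lever press'
    increases the tendency to press again; each unrewarded press
    weakens it slightly (extinction)."""
    rng = random.Random(seed)
    press_strength = 0.5  # initial response tendency, between 0 and 1
    for _ in range(n_trials):
        if rng.random() < press_strength:      # the animal presses the lever
            if rng.random() < reward_prob:     # reinforcement delivered
                press_strength = min(1.0, press_strength + 0.05)
            else:                              # no reward: mild extinction
                press_strength = max(0.0, press_strength - 0.02)
    return press_strength

# With frequent reward the behavior strengthens; with none it extinguishes.
print(run_trials(200, reward_prob=0.9) > run_trials(200, reward_prob=0.0))  # True
```

The comparison mirrors the experimental finding in the paragraph above: the same starting behavior ends up frequent or rare depending only on the schedule of consequences.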