Flag of New Zealand
The flag of New Zealand is a defaced Blue Ensign with the Union Flag in the canton, and four red stars with white borders to the right. The stars represent the constellation of Crux, the Southern Cross.
New Zealand's first flag, the flag of the United Tribes of New Zealand, was adopted before New Zealand became a British colony. Chosen by an assembly of Māori chiefs in 1834, the flag was of a St George's Cross with another cross in the canton containing four stars on a blue field. After the formation of the colony in 1841, British ensigns began to be used. The current flag was designed and adopted for restricted use in 1869 and became the national flag in 1902. It is the British Blue Ensign, incorporating a stylised representation of the Southern Cross showing the four brightest stars in the constellation. Each star varies slightly in size. The Union Flag in the canton recalls New Zealand's colonial ties to Britain.
The flag proportion is 1:2 and the colours are red (Pantone 186C), blue (Pantone 280C) and white. Proportion and colours are identical to the Union Flag.
Flag of the United Tribes
The need for a flag of New Zealand first became clear when the trading ship Sir George Murray, built in the Hokianga, was seized by Customs officials in the port of Sydney. The ship had been sailing without a flag, a violation of British navigation laws, and New Zealand-built ships could not sail under a British flag because New Zealand was not yet a British colony. Among the passengers on the ship were two high-ranking Māori chiefs, believed to be Patuone and Taonui. The ship's detainment was reported as arousing indignation among the Māori population. Unless a flag was selected, ships would continue to be seized.
The first flag of New Zealand was adopted on 9 March 1834 by a vote made by the United Tribes of New Zealand, a meeting of Māori chiefs, who later made the Declaration of Independence of New Zealand at Waitangi in 1835. Three flags were proposed, all purportedly designed by the missionary Henry Williams, who was to play a major role in the translation of the Treaty of Waitangi in 1840. The chiefs rejected the two proposals that included the Union Flag, in favour of a modified St George's Cross or the White Ensign. This flag became known as the flag of the United Tribes of New Zealand. The flag of the United Tribes of New Zealand was officially gazetted in New South Wales in August 1835, with a general description not mentioning fimbriation or the number of points on the stars. The need for a flag was pressing, not only because New Zealand-built ships were being impounded in Sydney for not flying a national flag, but also as a symbol of the independence declared by the Māori chiefs.
The flag is still flown on the flagpole at Waitangi, and can be seen on Waitangi Day. Its colours are red, white, blue and black.
After the signing of the Treaty of Waitangi, the British Union Flag was used, although the former United Tribes flag was still used by a number of ships from New Zealand and in many cases on land. The New Zealand Company settlement at Wellington, for example, continued to use the United Tribes flag until ordered to replace it by Governor William Hobson in 1841.
Flags based on the defaced Blue Ensign
The first flag of New Zealand to be based on the British blue ensign was introduced in 1867 following the Colonial Navy Defence Act 1865, which required all ships owned by colonial governments to fly the defaced Royal Navy blue ensign with a colonial badge. New Zealand did not have a colonial badge, or indeed a coat of arms of its own, at this stage, and so the letters "NZ" were simply added to the blue ensign.
The current flag was introduced in 1869. It was initially used only on government ships, but was adopted as the de facto national flag in a surge of patriotism arising from the Second Boer War in 1902. To end confusion between various designs of the flag, the Liberal Government passed the Ensign and Code Signals Bill, which was approved by King Edward VII on 24 March 1902, declaring the flag as New Zealand's national flag. The United Tribes flag design also features on the back of the Second Boer War medals presented to soldiers who served in the war, which indicates that the United Tribes flag was used widely in New Zealand until around this time.
The national flag is officially defined in the Flags, Emblems, and Names Protection Act 1981. Section 5(2) declares it to be "the symbol of the Realm, Government, and people of New Zealand." This law, like most other laws, can be changed by a simple majority in Parliament.
In March 1994 the Prime Minister of New Zealand Jim Bolger made statements supporting a move towards a New Zealand republic. In response Christian Democrat MP Graeme Lee introduced a Flags, Anthems, Emblems, and Names Protection Amendment Bill. If passed, the Bill would have entrenched the Act that governs the flag and added New Zealand's anthems, requiring a majority of 65 percent of votes in Parliament before any future legislation could change the flag. The Bill passed its first reading but was defeated at its second reading, 26 votes to 37.
New Zealand flag debate
Debate on keeping or changing the New Zealand flag started before November 1979, when the Minister of Internal Affairs, Allan Highet, suggested that the design of the flag should be changed and sought an artist to design a new flag with a silver fern on the fly. However, the proposal attracted little support. In 1998 Prime Minister Jenny Shipley backed Cultural Affairs Minister Marie Hasler's call for the flag to be changed. Shipley, along with the New Zealand Tourism Board, backed the quasi-national silver fern flag, using a white silver fern on a black background as a possible alternative flag, along the lines of the Canadian Maple Leaf Flag.
|
Propaganda is a specific type of message presentation, aimed at serving an agenda. Even if the message conveys true information, it may be partisan and fail to paint a complete picture. The book Propaganda And Persuasion defines propaganda as "the deliberate, systematic attempt to shape perceptions, manipulate cognitions, and direct behavior to achieve a response that furthers the desired intent of the propagandist." The Center for Media and Democracy (CMD) was launched in 1993 to create what at the time was the only public interest and media organization dedicated to exposing organized corporate and government propaganda and its impacts on democracy and democratic social change.
- 1 Kinds of Propaganda
- 2 From the dictionary
- 3 History of Propaganda
- 4 Nazi Germany
- 5 Cold War Propaganda
- 6 Techniques of Propaganda Generation
- 7 Techniques of Propaganda Transmission
- 8 Recognizing Propaganda
- 9 Propaganda organisations
- 10 Resources and articles
Kinds of Propaganda
Propaganda shares many techniques with advertising or public relations; in fact, advertising and PR can be said to be propaganda promoting a commercial product. As commonly understood, however, the term usually refers to political or nationalist messages. It can take the form of leaflets, posters, TV broadcasts or radio broadcasts.
In a narrower and more common use of the term, propaganda refers to deliberately false or misleading information that supports a political cause or the interests of those in power. The propagandist seeks to change the way people understand an issue or situation, for the purpose of changing their actions and expectations in ways that are desirable to the interest group. In this sense, propaganda serves as a corollary to censorship, in which the same purpose is achieved, not by filling people's heads with false information, but by preventing people from knowing true information. What sets propaganda apart from other forms of advocacy is the willingness of the propagandist to change people's understanding through deception and confusion, rather than persuasion and understanding. The leaders of an organization know the information to be one sided or untrue but this may not be true for the rank and file members who help to disseminate the propaganda.
Propaganda is a mighty weapon in war. In this case its aim is usually to dehumanize the enemy and to create hatred against a particular group. The technique is to create a false image in the mind. This can be done by using certain words, deliberately avoiding words, or by saying that the enemy is responsible for things he never did. Every propaganda war needs two things: an injustice and a feint. The feint or the injustice may be fictitious or may be based on facts, but the aim is always to create hate.
Propaganda is also one of the methods used in psychological warfare. More in line with the religious roots of the term, anti-cult activists accuse the leaders of cults of using propaganda extensively to recruit followers and keep them.
Examples of political propaganda:
- English propaganda against Germany in the First World War, see RMS Lusitania
- German propaganda against Poland to start the Second World War, see the staged attack on the Gleiwitz radio station (Sender Gleiwitz)
In an even narrower, less commonly used but legitimate sense of the term, propaganda refers only to false information meant to reassure people who already believe. The assumption is that, if people believe something false, they will constantly be assailed by doubts. Since these doubts are unpleasant (see cognitive dissonance), people will be eager to have them extinguished, and are therefore receptive to the reassurances of those in power. For this reason propaganda is often addressed to people who are already sympathetic to the agenda.
Propaganda has sometimes been classified as "white," "black" or "gray." White propaganda generally comes from an openly identified source and is not intentionally deceptive. Black propaganda pretends to be from a friendly source, but is actually from an adversary and is intended to deceive its audience. Gray propaganda falls somewhere between white and black.
Other general methods used for controlling populations:
1) Isolation/control: Isolating groups can take many forms, whether racial, demographic or social. Isolating groups politically can be a simple or complex process, but it always results in leveraged control and potential political marginalization, with ultimate control possible as in a one-party state. Propaganda is an essential tool in providing the information that allows a particular group of people to be isolated from the masses.
2) Confusion/diversion: Splitting a major issue into separate components can work to resurrect failed but desired outcomes: when one contentious element of an issue fails, related or independent components of the issue serve as new justifications. For example, the original goal in Iraq was the quest for WMDs, but when no WMDs were found the issue was transformed into providing "freedom and liberty" for the Iraqi people, and later simply toppling Saddam Hussein became the stated goal.
3) Separation: Related to isolation and control, behavioral psychologists sometimes refer to the principle of "divide and conquer". Divide and conquer is an extremely useful tool for maintaining control over disparate groups, and propaganda provides the information upon which the separation is based.
4) Reaction: Strength is based upon action, and it is desirable to place people and unruly groups in positions where they must react; propaganda is a useful tool and adjunct in forcing people to react as a large group. Government takes its strength from action: the strong act upon certain information, while the weak and unwary are left to react.
5) Disinformation as weakness: Weakness is indicated by reaction, and reaction is induced by misinformation and disinformation. Strength is manifest in action, to which the supply of misinformation or disinformation may be an adjunct. Individuals must not be allowed to act or think independently, and must not be permitted to act in the face of government coercion. By forcing people to react to disinformation and misinformation, individuals in power can pursue their own private agendas.
6) Coercion: A government's capability is determined by its ability to coerce citizens into adopting certain behaviors; unless it can control and condition its people in this way, the government cannot be successful. Propaganda is an essential tool and sometimes directs the manner in which the coercion is focused.
From the dictionary
From the Department of Defense Dictionary of Military and Associated Terms, a 742-page and growing work, most recently amended in November; cited in Peter Edidin, "Give a Blood Chit to the Confusion Agent" (New York Times, January 30).
- "Any thought or idea expressed briefly in a plain or secret language and prepared in a form suitable for transmission by any means of communication."
—Definition of "message"
- "Any form of communication in support of national objectives designed to influence the opinions, emotions, attitudes, or behavior of any group in order to benefit the sponsor, either directly or indirectly."
—Definition of "propaganda," in above cited dictionary
- "Those overt international public information activities of the united states government designed to promote united states foreign policy objectives by seeking to understand, inform, and influence foreign audiences and opinion makers, and by broadening the dialogue between american citizens and institutions and their counterparts abroad."
—Definition of "public diplomacy," in above cited dictionary
History of Propaganda
Examples of propaganda from an earlier authoritarian and militaristic culture are the writings of Romans such as Livy, which are considered masterpieces of pro-Roman statist propaganda. The term itself, however, originated in Europe in 1622, shortly after the start of the Thirty Years' War, which pitted Catholics against Protestants. Pope Gregory XV founded the Sacred Congregation for the Propagation of the Faith (sacra congregatio christiano nomini propagando or, briefly, propaganda fide), the department of the pontifical administration charged with the spread of Catholicism and with the regulation of ecclesiastical affairs in non-Catholic countries (mission territory). Originally the term was not intended to refer to misleading information.
The modern political sense of the term "propaganda" dates from World War I, and was not originally pejorative. Propaganda techniques were first codified and applied in a scientific manner by journalist Walter Lippmann and psychologist Edward Bernays (nephew of Sigmund Freud) early in the 20th century. During World War I, Lippmann and Bernays both worked for the Committee on Public Information (known informally as the Creel Committee after its director, George Creel), which was created by U.S. President Woodrow Wilson to sway popular opinion toward entering the war on the side of Britain.
The Creel Committee's pro-war propaganda campaign produced within six months an intense anti-German hysteria. Its success permanently impressed American business (and Adolf Hitler, among others) with the potential of large-scale propaganda to control public opinion. Bernays coined the terms "group mind" and "engineering consent", important concepts in practical propaganda work.
The current public relations industry is a direct outgrowth of the Creel Committee's work and is still used extensively by the United States government. Several of the early figures in the public relations industry were members of the Creel Committee, including Bernays, Ivy Lee and Carl Byoir.
Nazi Germany
Most propaganda in Germany was produced by the Ministry for Public Enlightenment and Propaganda ("Promi" in German abbreviation). Joseph Goebbels was placed in charge of this ministry shortly after Hitler took power in 1933. All journalists, writers, and artists were required to register with one of the Ministry's subordinate chambers for the press, fine arts, music, theater, film, literature, or radio.
The Nazis believed in propaganda as a vital tool in achieving their goals. Adolf Hitler, Germany's Führer, was impressed by the power of Allied propaganda during World War I and believed that it had been a primary cause of the collapse of morale and revolts in the German home front and Navy in 1918. Hitler would meet nearly every day with Goebbels to discuss the news and Goebbels would obtain Hitler's thoughts on the subject; Goebbels would then meet with senior Ministry officials and pass down the official Party line on world events. Broadcasters and journalists required prior approval before their works were disseminated. Hitler and other powerful high ranking Nazis such as Reinhard Heydrich had no moral qualms about spreading propaganda which they themselves knew to be false. Nazi disinformation came to be known as the Big Lie (ironically, a term that Hitler coined initially to describe what he characterized as dishonest propaganda by Jews).
Nazi propaganda before the start of World War II had several distinct audiences:
- German audiences were continually reminded of the struggle of the Nazi Party and Germany against foreign enemies and internal enemies, especially Jews.
- Ethnic Germans in countries such as Czechoslovakia, Poland, the Soviet Union, and the Baltic states were told that blood ties to Germany were stronger than their allegiance to their new countries.
- Potential enemies, such as France and Great Britain, were told that Germany had no quarrel with the people of the country, but that their governments were trying to start a war with Germany.
- All audiences were reminded of the greatness of German cultural, scientific, and military achievements.
Until the Battle of Stalingrad's conclusion on February 2, 1943, German propaganda emphasized the prowess of German arms and the humanity German soldiers had shown to the peoples of occupied territories. In contrast, British and Allied fliers were depicted as cowardly murderers, and Americans in particular as gangsters in the style of Al Capone. At the same time, German propaganda sought to alienate Americans and British from each other, and both these Western belligerents from the Soviets.
After Stalingrad, the main theme changed to Germany as the sole defender of Western European culture against the "Bolshevist hordes." The introduction of the V-1 and V-2 "vengeance weapons" was emphasized to convince Britons of the hopelessness of defeating Germany.
Goebbels committed suicide on May 1, 1945, the day after Hitler. In his stead, Hans Fritzsche, who had been head of the Radio Chamber, was tried and acquitted by the Nuremberg war crimes tribunal.
Cold War Propaganda
The United States and the Soviet Union both used propaganda extensively during the Cold War. Both sides used film, television and radio programming to influence their own citizens, each other and Third World nations. The United States Information Agency operated the Voice of America as an official government station. Radio Free Europe and Radio Liberty, in part supported by the Central Intelligence Agency, provided gray propaganda in news and entertainment programs to Eastern Europe and the Soviet Union respectively. The Soviet Union's official government station, Radio Moscow, broadcast white propaganda, while Radio Peace and Freedom broadcast grey propaganda. Both sides also broadcast black propaganda programs around particular crises.
One of the most insightful authors of the Cold War was George Orwell, whose novels Animal Farm and Nineteen Eighty-Four are virtual textbooks on the use of propaganda. Though not set in the Soviet Union, their characters live under totalitarian regimes in which language is constantly corrupted for political purposes. Those novels were used for explicit propaganda. The CIA, for example, secretly commissioned an animated film adaptation of Animal Farm in the 1950s.
Techniques of Propaganda Generation
Saddam Hussein pictured as a decisive war leader in an Iraqi propaganda picture
A number of techniques are used to create messages which are persuasive, but false. Many of these same techniques can be found under logical fallacies since propagandists use arguments which, although sometimes convincing, are not necessarily valid.
Some time has been spent analyzing the means by which propaganda messages are transmitted, and that work is important, but it's clear that information dissemination strategies only become propaganda strategies when coupled with propagandistic messages. Identifying these propaganda messages is a necessary prerequisite to studying the methods by which those messages are spread. That's why it is essential to have some knowledge of the following techniques for generating propaganda:
Appeal to fear: Appeals to fear seek to build support by instilling fear in the general population - for example, Joseph Goebbels exploited Theodore Kaufman's Germany Must Perish! to claim that the Allies sought the extermination of the German people.
Appeal to authority: Appeals to authority cite prominent figures to support a position, idea, argument, or course of action.
Bandwagon: Bandwagon-and-inevitable-victory appeals attempt to persuade the target audience to take a course of action "everyone else is taking." "Join the crowd." This technique reinforces people's natural desire to be on the winning side. This technique is used to convince the audience that a program is an expression of an irresistible mass movement and that it is in their interest to join. "Inevitable victory" invites those not already on the bandwagon to join those already on the road to certain victory. Those already, or partially, on the bandwagon are reassured that staying aboard is the best course of action.
Obtain disapproval: This technique is used to get the audience to disapprove an action or idea by suggesting the idea is popular with groups hated, feared, or held in contempt by the target audience. Thus, if a group which supports a policy is led to believe that undesirable, subversive, or contemptible people also support it, the members of the group might decide to change their position.
Glittering generalities: Glittering generalities are intensely emotionally appealing words so closely associated with highly valued concepts and beliefs that they carry conviction without supporting information or reason. They appeal to such emotions as love of country, home; desire for peace, freedom, glory, honor, etc. They ask for approval without examination of the reason. Though the words and phrases are vague and suggest different things to different people, their connotation is always favorable: "The concepts and programs of the propagandist are always good, desirable, virtuous."
Rationalization: Individuals or groups may use favorable generalities to rationalize questionable acts or beliefs. Vague and pleasant phrases are often used to justify such actions or beliefs.
Intentional vagueness: Generalities are deliberately vague so that the audience may supply its own interpretations. The intention is to move the audience by use of undefined phrases, without analyzing their validity or attempting to determine their reasonableness or application.
Transfer: This is a technique of projecting positive or negative qualities (praise or blame) of a person, entity, object, or value (an individual, group, organization, nation, patriotism, etc.) to another in order to make the second more acceptable or to discredit it. This technique is generally used to transfer blame from one member of a conflict to another. It evokes an emotional response which stimulates the target to identify with recognized authorities.
Oversimplification: Favorable generalities are used to provide simple answers to complex social, political, economic, or military problems.
Common man: The "plain folks" or "common man" approach attempts to convince the audience that the propagandist's positions reflect the common sense of the people. It is designed to win the confidence of the audience by communicating in the common manner and style of the audience. Propagandists use ordinary language and mannerisms (and clothes in face-to-face and audiovisual communications) in attempting to identify their point of view with that of the average person.
Testimonial: Testimonials are quotations, in or out of context, especially cited to support or reject a given policy, action, program, or personality. The reputation or the role (expert, respected public figure, etc.) of the individual giving the statement is exploited. The testimonial places the official sanction of a respected person or authority on a propaganda message. This is done in an effort to cause the target audience to identify itself with the authority or to accept the authority's opinions and beliefs as its own.
Stereotyping or Labeling: This technique attempts to arouse prejudices in an audience by labeling the object of the propaganda campaign as something the target audience fears, hates, loathes, or finds undesirable.
Scapegoating: Assigning blame to an individual or group that isn't really responsible, thus alleviating feelings of guilt from responsible parties and/or distracting attention from the need to fix the problem for which blame is being assigned.
Virtue words: These are words in the value system of the target audience which tend to produce a positive image when attached to a person or issue. Peace, happiness, security, wise leadership, freedom, etc., are virtue words.
Slogans: A slogan is a brief striking phrase that may include labeling and stereotyping. If ideas can be sloganized, they should be, as good slogans are self-perpetuating memes.
Techniques of Propaganda Transmission
Some of the most effective propaganda techniques work by misdirecting or distracting the public's finite attention away from important issues. It's important to read between the lines of the news and see what isn't being reported, or what is reported once, quietly, and not followed up. In an age of information overload, distraction techniques can be as effective as active propaganda. One way to test for distraction is to look for items that appear repeatedly in the foreign press (from neutral and hostile countries) and that don't appear in your own. But beware of deliberately placed lies that are repeated in the hope that people will believe them if they are heard often enough.
All active propaganda techniques can be tested by asking whether they lead the target audience to act in the best interests of the distributor of the propaganda. Propaganda presents one point of view as if it were the best or only way to look at a situation.
Sometimes propaganda can be detected by the fact that it changes before and after a critical event, whereas more honest information - like medicine, science or any training manual - should remain largely the same after the event as before. If there are big disparities, or if some "valuable lesson" or "wake-up call" has occurred, it means that what was provided before the fact was not really "instruction" but "guessing" - or, if there is no consistent explanation that survives, propaganda.
Propaganda organisations
US government examples
- 4th Psychological Operations Group (Airborne)
- Counter-Information Team
- Office of Global Communications
- Office of Public Diplomacy
- Office of Strategic Communication
- Office of Strategic Influence
- Psychological Strategy Board
British government examples
- 15 (UK) Psychological Operations Group
- British Satellite News
- British Forces Broadcasting Service
- Central Office of Information
- Civil Contingencies Secretariat, Cabinet Office
- D-Notice Committee
- Government Communication Network
- Government Information Service
- Government Information and Communication Service
- Information Department, Foreign Office
- Information Policy, Army/Intelligence
- Information Research Department, Foreign Office
- Lobby system
- London Press Service
- London Radio Service, Central Office of Information
- Northern Ireland Information Service, Northern Ireland Office
- Public Diplomacy Policy Department, Foreign Office
- Services Sound and Vision Corporation
Australian government examples
Resources and articles
Related SourceWatch articles
- Brett Gary
- Alex Carey
- Center for Media and Democracy
- conservative news outlets (list)
- George Creel
- Hi Magazine
- Historical engineering
- Information warfare
- Institute for Propaganda Analysis
- liberal news outlets (list)
- Obama Propaganda
- Logical fallacy
- manufactured journalism
- Music Corporation of America
- Pentagon military analyst program
- Power of persuasion
- Propaganda film
- Propaganda glossary
- Propaganda Model
- Propaganda posters
- Propaganda techniques
- Public diplomacy, the term used by the United States Information Agency to describe its mission
- Public relations
- Resources for studying propaganda
- State of Deception: The Power of Nazi Propaganda
- John Stauber
- Thought control
- War propaganda
- The Propaganda Model: a retrospective, Journalism Studies, Volume 1, Number 1, 2000, pp. 101–112, Edward S. Herman, University of Pennsylvania, USA
- Jowett, Garth S. and Victoria O'Donnell, Propaganda and Persuasion. 4th ed. Thousand Oaks: Sage Publications, 2006. ISBN 1-4129-0898-1.
- Howe, Ellic. The Black Game: British Subversive Operations Against the Germans During the Second World War. London: Futura, 1982.
- Edwards, John Carver. Berlin Calling: American Broadcasters in Service to the Third Reich. New York, Prager Publishers, 1991. ISBN 0-275-93705-7.
- Linebarger, Paul M. A. (aka Cordwainer Smith). Psychological Warfare. Washington, D.C.: Infantry Journal Press, 1948.
- Shirer, William L. Berlin Diary: The Journal of a Foreign Correspondent, 1934-1941. New York: Albert A. Knopf, 1942.
- Much of the information found in Propaganda techniques is taken from "Appendix I: PSYOP Techniques" of "Psychological Operations Field Manual No. 33-1", published by Headquarters, Department of the Army, Washington DC, 31 August 1979.
- Alex Carey, "Taking the risk out of democracy: Corporate propaganda in the US and Australia", NSW Press/ Illinois Press, 1995.
- Edward S. Herman, The Propaganda Model: a retrospective, Journalism Studies, Volume 1, Number 1, 2000, pp. 101–112, University of Pennsylvania, USA
- propaganda critic: A website devoted to propaganda analysis.
- David Welch: Powers of Persuasion
- Documentation on Early Cold War
- U.S. Propaganda Activities in the Middle East, by the National Security Archive. Collection of 148 documents and overview essay.
- Bibliography on the British Political Warfare Executive
- Sacred Congregation of Propaganda from the Catholic Encyclopedia.
- Jacques Ellul, Propaganda: The Formation of Men's Attitudes--excerpts
- Stefan Landsberger's Chinese Propaganda Poster Pages
- Randal Marlin, Propaganda: the ethics of persuasion, Broadview Press, 2002. ISBN: 1551113767
- Propaganda Communist Chinese Paintings (site in French)
- Bytwerk, Randall, "Nazi and East German Propaganda Guide Page". CAS Department, Calvin College.
- Jim Boyd, "Editorial Pages: Why Courage is Hard to Find," Nieman Reports, Spring 2006.
- Ruth Walker, "How 'propaganda' lost its good name", Christian Science Monitor, May 25, 2006.
- Jerry Landay, "The 'Civil War' squabble: Waging combat with words", Media Transparency, December 9, 2006.
- Manuel Valenzuela, "The Unearthing: An Awakening Has Arrived. With Truth Comes Awakening," Information Clearing House, May 3, 2007.
- Kenneth A. Osgood, "Propaganda," Encyclopedia of American Foreign Policy, 2002, accessed August 14, 2008.
- Eric Alterman and Danielle Ivory, "Think Again, Blogosphere to Mainstream Media: Get Off the Bus," Center for American Progress, May 21, 2009.
- Toxic Sludge Is Good For You! Lies, Damn Lies and the Public Relations Industry
- Trust Us, We're Experts! How Industry Manipulates Science and Gambles With Your Future
- Weapons of Mass Deception: The Uses of Propaganda in Bush's War on Iraq; |
If the Reverend Nevil Maskelyne came back to life, the 18th-century Astronomer Royal of Great Britain would probably have no trouble grasping the idea behind NASA’s remote sensing GRACE mission. Maskelyne proposed a remarkably similar experiment himself in a presentation to the Royal Society in 1772. “If the attraction of gravity be exerted, as Sir Isaac Newton supposes, not only between the large bodies of the universe, but between the minutest particles of which these bodies are composed . . . it will necessarily follow, that every hill must, by its attraction, alter the direction of gravitation in heavy bodies in its neighborhood ....”
That’s exactly what GRACE, the Gravity Recovery and Climate Experiment, detects. Every 94 minutes or so, twin satellites whip once around Earth at an altitude of 310 miles, taking 30 days to cover the planet’s entire surface, then they do it again and again, sensing variations in local gravity. GRACE maps local variations in the force of gravity over Earth’s surface, revealing mountain ranges and ocean trenches as well as underground watersheds and other hidden concentrations of mass. A joint venture by NASA and the DLR (Deutsches Zentrum für Luft- und Raumfahrt, or German Aerospace Center), GRACE looks right past the familiar oceans, continents, and clouds, showing our planet in a fresh light—as a knobby, blobby globe of gravitational ups and downs.
Among other things, GRACE may have found a crater deep under the Antarctic ice that may mark an asteroid impact greater than the one that doomed the dinosaurs, measured the seafloor displacement that triggered the tsunami of 2004, and quantified changes in subsurface water in the Amazon and Congo river basins. “This is really an entirely new kind of remote sensing,” says project scientist Michael Watkins, of NASA’s Jet Propulsion Laboratory. “It’s like when radar or photography was first invented—you start realizing that it can be applied in all sorts of unanticipated ways. We’re still discovering them.”
The notion that Earth’s gravity field could be measured with satellites dates back to the dawn of the space age. In 1958 ground controllers tracking the first American satellite, Explorer 1, noted that its path faithfully traced the planet’s equatorial bulge (created by centrifugal forces generated by the planet’s rotation). By the 1960s rocket scientists realized that smaller, local variations in gravity could have further, unforeseen effects. Missiles carrying nuclear warheads, for example, could be thrown off course if no allowance was made for mountain ranges or valleys.
If Earth were a perfect sphere, perfectly uniform in density and covered to a uniform depth with ocean, the geoid—a word coined by geologists to refer to an imaginary surface located at the average level of the sea’s surface—would be a perfect sphere as well. Since the geoid is everywhere perpendicular to the pull of gravity, that force would always pull you directly toward the precise center of the Earth. But Earth is nowhere near perfect or uniform, which means that gravity doesn’t always point straight down; a mountain range, for example, might divert it slightly to the left.
Understanding the subtleties of Earth’s gravitational field would be useful in many ways. Scientists could learn a lot about the structure of the planet, what it’s made of, and where the crust is thick or thin. A deposit of high-density underground rock, or an undersea mountain, is utterly invisible—yet it, too, skews the geoid away from a perfect sphere. Even when the ocean is utterly calm, it isn’t flat. Measurements reveal that some parts of the ocean are a remarkable 390 feet lower than average, and others are 300 feet higher.
While scientists began to appreciate just how useful a map of the geoid could be, engineers were realizing that the most sensible way to measure the variations would be with a pair of satellites, instead of just one. A single orbiter would bob and weave with the gravity field just fine—but monitors would have to measure the ups and downs from the ground continuously by beaming radio waves back and forth. That would require an enormous network of ground stations. Yet two satellites flying far enough apart would experience different gravitational effects, so that only the distance between them must be measured. As the lead satellite approaches a place with more mass than average, it speeds up just a bit from the extra gravitational pull. Shortly thereafter, so does the second. Then, as the higher-mass region falls behind, each satellite is held back a little—again, first the leading, then the trailing satellite. By sending microwaves between the two, it would be possible to calculate that staggered acceleration, and thus infer the change in gravitational pull on Earth’s surface.
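For a rough sense of the scale involved, here is a back-of-the-envelope sketch in Python. The buried anomaly, its mass, and the one-minute dwell time are all illustrative assumptions; only the orbital altitude comes from the article.

    # Back-of-the-envelope sketch: the extra pull a buried mass anomaly exerts
    # on a satellite passing overhead, treated as a point mass. The anomaly
    # figures below are assumptions for illustration only.
    G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
    altitude_m = 310 * 1609.34     # ~310 miles (from the article), in metres

    # Hypothetical anomaly: about 1e13 kg of "extra" mass, roughly a 1.5 km
    # cube of rock at typical crustal density.
    anomaly_mass_kg = 1e13

    # Extra along-track acceleration as the satellite passes directly above it.
    extra_accel = G * anomaly_mass_kg / altitude_m ** 2    # m/s^2

    # Acting over roughly a minute of the pass, the resulting velocity change
    # is on the order of 100 nanometres per second -- hence the need for
    # extraordinarily precise satellite-to-satellite ranging.
    dwell_time_s = 60
    delta_v = extra_accel * dwell_time_s

    print(f"extra acceleration: {extra_accel:.1e} m/s^2")
    print(f"velocity change over one minute: {delta_v:.1e} m/s")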
Unfortunately, the variation in distance between the two satellites is so small that in the early 1960s it would have been virtually impossible to detect using any technology then available. In 1976 NASA launched a satellite called LAGEOS (Laser Geodynamics Satellite), which began to address the problem, albeit crudely. It carried no instruments at all. In essence, LAGEOS was a two-foot-diameter shiny brass golf ball; by bouncing laser beams off the satellite from different places on the surface of Earth, geologists could measure the precise distances between widely separated places on the planet. They could, for example, see the gradual separation of continents, due to plate tectonics, year by year.
In the early 1990s the TOPEX (Topography Experiment for Ocean Circulation)/Poseidon satellite, a joint American-French mission, shot into orbit armed with radar altimeters to measure the height of the sea surface. “What they’ve basically done,” Watkins says, “is to look at changes in the sea surface over time, on the assumption the geoid itself doesn’t change.” Except that sometimes it does. Along with its measurements of continental drift, LAGEOS also detected a very gradual change in the gravity field over Canada and northern Europe as the crust continues to rebound—10,000 years later—from the weight of the massive glaciers that pinned it down during the last ice age. It also revealed annual variations in local gravity due to the natural storage and depletion of water during rainy and dry seasons in different parts of the world.
Laser beams fired at LAGEOS were not sensitive enough to pinpoint variations in orbit smaller than a centimeter or so and were too imprecise to pick out the subtler differences in gravity. For that, a double-satellite mission was needed. Finally, in the mid-1990s, the technology to pull it off became available in two forms. The first was microwave transmitters and receivers small, efficient, and reliable enough to be mounted on small spacecraft and used to gauge the distance between the satellites. The second: the Global Positioning System (GPS). “If I’m sending a signal from me to you,” says Watkins, “and I want to know the time of flight, it’s crucial that our clocks be perfectly synchronized.” By checking in constantly with whatever GPS satellite is in view at a given time, a pair of gravity satellites can use its single clock rather than trying to synchronize their own.
With the technology finally in place, Watkins, together with aerospace engineer Byron Tapley of the University of Texas at Austin and several other scientists and engineers, proposed the GRACE mission. In partnership with the German space agency, NASA sent the dual GRACE satellites into orbit in March 2002. Since then, they have been zipping around Earth in a polar orbit, one satellite about 137 miles ahead of the other. To an observer in space, they would appear to be tracing out the same circle over and over, but since the planet is continuously rotating beneath them, the intrepid satellites orbit over every slice of the surface once every 30 days.
Their instruments measure not the distance between the two satellites but rather the change in distance, and thus the acceleration due to gravity. They do it through interferometry—watching how beams of microwaves interfere with each other. One satellite shoots out a continuous stream of microwaves, which is received by the second satellite, and the signals from both are sent to the ground. The outgoing and incoming beams are superimposed, creating an interference pattern that varies depending on how close the waves are to being perfectly in phase—that is, how close the waves’ peaks and valleys are lined up. A tiny difference in satellite-to-satellite distance—and thus an increase or decrease in gravitational pull from Earth’s surface—makes a marked difference in the interference pattern. If the satellites are moving together or apart at as little as 150 nanometers per second, the GRACE scientists can see it.
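To get a feel for what that sensitivity means for the phase measurement, the sketch below converts the quoted range rate into a carrier-phase drift. The 24 GHz carrier is an assumed figure for illustration; the article does not say which microwave band is used.

    # Illustrative conversion of the quoted range-rate sensitivity into a
    # carrier-phase drift. The 24 GHz carrier frequency is an assumption.
    c = 2.998e8                      # speed of light, m/s
    carrier_hz = 24e9                # assumed microwave carrier frequency
    wavelength_m = c / carrier_hz    # about 1.25 cm

    range_rate = 150e-9              # 150 nanometres per second, from the article

    # One carrier cycle of phase corresponds to one wavelength of range change,
    # so the measured phase drifts this many cycles per second.
    cycles_per_second = range_rate / wavelength_m

    print(f"wavelength: {wavelength_m * 100:.2f} cm")
    print(f"phase drift: {cycles_per_second:.1e} cycles per second")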
That is not quite the end of the story. Even though 310 miles up is technically outer space, a few air molecules still float around—not enough to make the slightest difference to astronauts on a space shuttle or the space station, which orbit considerably lower, but sufficient to slow the GRACE satellites perceptibly. A clump of air molecules could fool an observer into thinking that something lies below—perhaps a glacier—so each satellite has what’s known as a “proof mass” floating in a chamber inside, untethered to the satellite itself. The proof mass is itself in orbit, so when one of the satellites speeds up or slows down due to gravity variations, the mass does too. But when a satellite slows due to air drag, the proof mass inside, blissfully unaware, keeps moving at its original speed. It doesn’t hit the interior wall of the satellite because onboard electric plates keep it from doing so—but sensitive electronics keep track of the discrepancy so the engineers can subtract it from the real signal.
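A minimal sketch of that correction, with invented numbers: the proof-mass electronics sense only non-gravitational forces such as drag, so subtracting their reading from the total acceleration leaves the gravitational signal.

    # Sketch of the drag correction: subtract the non-gravitational
    # acceleration sensed via the proof mass from the total along-track
    # acceleration. The values below are invented for illustration.
    total_accel_mps2 = -2.5e-7     # total acceleration inferred from tracking
    nongrav_accel_mps2 = -1.0e-7   # drag sensed via the proof-mass electronics

    gravity_signal_mps2 = total_accel_mps2 - nongrav_accel_mps2

    print(f"gravitational part of the signal: {gravity_signal_mps2:.1e} m/s^2")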
GRACE’s data are open to any scientist on the planet. “That,” says Byron Tapley, “led to a whole range of people outside the standard community who used GRACE results to do things that weren’t possible before.” In January 2005, for example, Ohio State University geophysicist Ralph von Frese and his colleagues noticed a concentration of higher-than-average-density material in the rock about a mile under the surface of the East Antarctic ice sheet. Mass concentrations like this often accumulate when giant impacts from space pound the crust. When the crust rebounds, it carries higher-density mantle materials up toward the surface and holds them there. Comparing the GRACE data with radar imagery of the icebound bedrock, von Frese found it was centered perfectly inside a ring some 300 miles wide—just what you’d expect from an impactor 30 or so miles across. “It just jumped out at us,” he says.
An asteroid that big would be about four to five times the diameter of the object that killed off the dinosaurs 65 million years ago. This crater is much older, arguably dating back to a time, some 250 million years ago, when something—perhaps a projectile from outer space—wiped out the majority of the species on Earth, including most reptiles, sponges, corals, starfish, clams, sea scorpions, and fish, thereby clearing the evolutionary decks for dinosaurs to become dominant. That was the greatest mass extinction in history, and thanks to GRACE, paleontologists and evolutionary biologists now have an idea of how it may have happened.
But GRACE’s greatest contribution comes from the fact that it remeasures the geoid every month or so. That enabled geologists to make before-and-after assessments of how the seafloor rearranged itself in the Sumatra-Andaman earthquake of December 26, 2004, which triggered the awful Indian Ocean tsunami. “When a major quake happens on land,” Watkins says, “you can go out and look at the changes. With GRACE, we can now look thousands of feet underwater as well.”
The satellites can also reveal movement of water itself, in ways never possible before. “It’s very cool, because water can go underground, it can move around the ocean, it can change from ice to liquid and runoff, but it can’t hide its mass from us,” says Watkins. Imagine, he says, a gigantic hockey puck made of water. “It could be in the form of an ice sheet, or an aquifer, or a piece of ocean. GRACE has the sensitivity to pick up a puck about a centimeter thick and 400 kilometers [half an inch and 250 miles] across.” All the water on Earth can be divided into hockey pucks, he says, and GRACE takes note of how they move around every 30 days.
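As a quick check of that figure, the arithmetic for Watkins's hockey puck works out as follows; the density of water is the only added assumption.

    import math

    # Mass of the detectable "hockey puck" of water quoted above:
    # about 1 cm thick and 400 km across.
    thickness_m = 0.01
    diameter_m = 400e3
    water_density = 1000.0          # kg per cubic metre (assumed)

    volume_m3 = math.pi * (diameter_m / 2) ** 2 * thickness_m
    mass_kg = volume_m3 * water_density

    print(f"volume: {volume_m3:.2e} m^3")                  # ~1.3e9 m^3
    print(f"mass:   {mass_kg / 1e12:.2f} billion tonnes")  # ~1.26 billion tonnes

In other words, one detectable puck holds roughly a cubic kilometre of water.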
Last March, geophysicists Isabella Velicogna and John Wahr at the University of Colorado at Boulder published a paper in Science Express that used GRACE data to show that the ice sheet covering Antarctica has shrunk by an average of 36 cubic miles of ice per year—surprising, given that many climate models predict a thickening of the ice as higher global temperatures lead to more evaporation and precipitation. “It’s very difficult for models to reproduce the physics of glaciers, and this shows that the models aren’t as good as we’d like them to be,” Velicogna says.
Velicogna and her colleagues also measured a dramatic loss of Greenland ice, as much as 38 cubic miles per year between 2002 and 2005—even more troubling, given that an influx of fresh melt water into the salty North Atlantic could in theory shut off the system of ocean currents that keep Europe relatively warm. (A separate group at the University of Texas published figures extrapolated from GRACE data showing that Greenland lost as much as 57 cubic miles of ice each year between 2002 and 2005; NASA shortly plans to publish data reconciling the two studies.) “It’s a wake-up call,” says Velicogna, “because there is a lot of water that can go from the ice sheets into the ocean. Both ice sheets are significantly losing mass, and that affects sea level. If sea level is going to rise, that will affect a lot of coastal areas.”
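To put the Antarctic and Greenland figures on a common scale, here is the unit conversion as a short Python sketch. The cubic-mile conversion, ice density, and ocean surface area are standard reference values, and the sea-level number is a simple spread-over-the-ocean estimate, not a result from either study.

    # Convert the quoted ice-loss rates to metric units and a rough
    # sea-level-rise equivalent by spreading the meltwater over the oceans.
    CUBIC_MILE_KM3 = 4.168     # one cubic mile in cubic kilometres
    ICE_DENSITY = 917.0        # kg per cubic metre
    OCEAN_AREA_M2 = 3.61e14    # approximate global ocean surface area

    def ice_loss_summary(cubic_miles_per_year):
        km3 = cubic_miles_per_year * CUBIC_MILE_KM3
        mass_gt = km3 * 1e9 * ICE_DENSITY / 1e12             # gigatonnes of ice
        water_volume_m3 = mass_gt * 1e12 / 1000.0             # as liquid water
        sea_level_mm = water_volume_m3 / OCEAN_AREA_M2 * 1000.0
        print(f"{cubic_miles_per_year} mi^3/yr ~ {km3:.0f} km^3/yr "
              f"~ {mass_gt:.0f} Gt/yr ~ {sea_level_mm:.2f} mm/yr of sea level")

    ice_loss_summary(36)   # Antarctic estimate quoted above
    ice_loss_summary(38)   # Greenland estimate quoted above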
This past December an entire session of the American Geophysical Union’s fall meeting was devoted to movement of water in and out of giant watersheds all over the world. Speakers presented eight papers, on topics ranging from the hydrologic impact of the Three Gorges Dam in China to the impact of climate change on Siberian river systems. All the new findings were based entirely on data from GRACE. Notable results included a report from researchers at MIT that Alaska lost an average of 10 and a half cubic miles of ice each year from 2003 to 2005.
Oceanographers, geologists, and climatologists are scrambling to update their models of the planet based on the flood of GRACE data. But these will start to look positively primitive when a new, upgraded version of GRACE comes along in several years. Armed with laser interferometers more sensitive than the microwave type, GRACE scientists will be able to attain much better resolution, and thus to find even subtler gravity variations and more exquisite detail, or “smaller hockey pucks,” in Watkins’s words. Nevil Maskelyne never managed to make his own experiment work, but with GRACE his idea has been vindicated beyond even his wildest imaginings. |
We recently discovered the first direct evidence of comets bringing water to alien planets, and now we can go one step further. The planetary disk around the star TW Hydrae has enough water in it to support thousands of oceans.
Located 176 light-years from Earth in the constellation Hydra, TW Hydrae is the closest star system to us that is still developing into a full-fledged solar system. That makes it an invaluable resource for exploring the origins of solar systems in general, ours very much included. The Herschel Space Observatory has found a huge cloud of water vapor in the far reaches of TW Hydrae's planet-forming disk.
While previous research has turned up evidence of similar water vapor clouds in the inner regions of the disk, this is the first time we've seen so much water so far away from the star. That means, unlike those earlier clouds, that this one is cold enough to eventually freeze and form comets. And it's those future comets that could one day bring water to the planets of TW Hydrae, and just maybe pave the way for the development of life in that solar system.
University of Michigan researcher Ted Bergin explains:
"This tells us that the key materials that life needs are present in a system before planets are born. We expected this to be the case, but now we know it is because have directly detected it. We can see it."
And fellow researcher Michiel Hogerheijde of Leiden University in the Netherlands adds:
"The detection of water sticking to dust grains throughout the planet-forming disk would be similar to events in our own solar system's evolution, where over millions of years, these dust grains would then coalesce to form comets. These would be a prime delivery mechanism for water on planetary bodies."
Via the European Space Agency. Artist's impression by Tim Pyle, Spitzer Science Center, CalTech. |
Key Stage 1 English (Ages 5-7)
Our KS1 English course is based upon the requirements of the National Literacy Strategy which is now followed in all schools in England. Pupils are taught the following skills by studying a variety of texts – fiction, non-fiction and poetry.
Reading
The emphasis is on a wide range of strategies to make sense of their reading. Children are taught:
- to decode words using sounds and blends.
- to read familiar high frequency words in texts
- to develop comprehension skills
- to talk about the stories they read, identifying key elements including: title, author and illustrator.
Writing
This is linked very much to reading in that the two activities reinforce each other.
Children are taught to:
- form letters correctly
- use their knowledge of sounds and blends to build phonetic words
- begin to use high frequency irregular words in their writing
- independently string words together to form a sentence, using spaces between each word
- sequence sentences to form a short narrative
- use capital letters and full stops; recognise and begin to use question marks and exclamation marks
For more information download our Key Stage 1 Parent Pack. |
As many of you are likely already aware, this is an El Nino year. In fact, studies have shown that this could be the strongest El Nino we have seen in 20 years! So what does this mean for the weather patterns we will be seeing over the next several months? Our goal here is to introduce you to the basics of El Nino, as well as answer the question “how does it impact the environment?” This quick overview will help to introduce you to what you need to know about this fascinating weather pattern.
What is El Nino? How Does It Differ From La Nina?
If you have been wondering what exactly El Nino is, you are not alone, as the terms “El Nino” and “La Nina” confuse many people. Basically, El Nino is an event that occurs when the water temperatures in the central Pacific Ocean become warmer than normal. So what is so monumental about this? While an increase in ocean temperatures may not sound like a big deal, it can have a profound effect on weather patterns by causing severe weather worldwide for about 12 months. El Nino occurs roughly every 3-5 years, but it is rare to see an El Nino as large as the one we are currently experiencing.
It is important to note that El Nino and La Nina are not the same thing. La Nina is a weather cycle caused by colder than normal water temperatures in the Pacific Ocean. La Ninas happen much less frequently than El Ninos do, but they are more likely to occur after an El Nino cycle.
How does an El Nino impact global weather patterns?
In general, El Nino causes global temperatures to rise. However, some areas can experience extreme and unusual cold weather. Depending on one’s location, El Nino can lead to unusually warm and dry winters, or extremely cold and wet winters. In the United States for instance, the northern states are expected to see unusually dry and hot weather, while the south is expected to be colder and wetter than most winters.
Thus, on a global scale, El Nino has been shown to lead to fires or flooding due to these unusually extreme conditions. Past El Nino cycles have also led to extensive property damage due to wind, rain, frost, fire, lightning, and flooding. In fact, studies estimate that on a worldwide scale, the 1997-1998 El Nino resulted in $35 billion in damage.
The Potential Effects of El Nino on Agriculture
Due to flooding and drought in various agricultural centers of the world, El Nino can cause a reduction of supply in certain crops including sugar, coffee, rice, and cacao. Past El Nino cycles have seen these products rise in cost by as much as 10% due to a reduction in supply of these crops.
The Importance of Tracking El Nino
The various impacts of El Nino make the monitoring of weather patterns through the installation and maintenance of environmental monitoring instrumentation vital. Such technology is the best way to help us to track the effect the current El Nino pattern is having on our environment. Will some locations see a flood risk in the near future? Can certain areas expect a greater threat from fires this summer? Are certain agricultural industries going to be impacted by extreme weather? These are all questions that tracking El Nino’s environmental impact can help us to answer, which can help us to better prepare for any extreme weather conditions.
Contact us to learn more about the impact of El Nino, as well as to learn about how our technology is working to monitor the effect El Nino is having on our environment. |
Addiction has been called a disease of learning. Once addicted, it is very difficult to unlearn the behavior. For drug addiction, the drugs themselves mimic or alter the ability of neurotransmitters to function properly and for synapses to remodel. A number of the materials on this website address this subject.
BrainU lesson plans on the subject of addiction
C. elegans and Alcohol is a laboratory experiment in which students test the effects of alcohol on the roundworm.
What's the Deal? Card Game Students can explore this subject via a fun card game.
Dendritic Spines Lab is a lesson related to drugs and their effects on the brain.
Lesson plans on the subject of addiction from other organizations
Online activities at the Genetic Science Learning Center let students explore the effects of drugs and addiction on the brains of mice and humans in a fun and interactive way.
NIDA Goes Back to School offers science-based drug abuse publications and teaching materials free of charge. Several of these materials are available in Spanish.
The Brain: Understanding Neurobiology Through the Study of Addiction is an NIH Curriculum Supplement for Grades 9-12. At this link, you may access the Web version or request printed materials. Chapter 4: Drug Abuse and Addiction is one chapter in this curriculum supplement.
The AAAS Science Inside Alcohol Project E-Book offers an online interactive experience that guides students (grades 6-12) through the effects of alcohol on the body.
The study of addiction and the brain is being carried out at numerous sites across the country. Notable websites include:
Genetic Science Learning Center at the University of Utah
NIDA The National Institute on Drug Abuse, part of the National Institutes of Health (NIH)
Also from NIDA: NIDA for Teens: The Science Behind Drug Abuse features games and stories for students in grades 5-9 as well as parents and teachers.
Interesting Article on Addiction
Party animals: People aren't alone in the quest to get high - puffer fish make a toxin called TTX that dolphins use to party. "The behaviour was captured on camera by the makers of Dolphins: Spy in the Pod, a series produced for BBC One by the award-winning wildlife documentary producer John Downer," according to an article in The Independent.
Seen any interesting articles? Please let us know and they may appear on a page on BrainU.
Podcast on Decision Making
BrainU instructor and UMN neuroscientist David Redish spoke about his recent book, The Mind within the Brain: How We Make Decisions and How those Decisions Go Wrong, with Eric Zimmer of The One You Feed. In this 37-minute interview, they discuss the types of systems involved in making decisions (deliberative, procedural, Pavlovian...). During the last 15 minutes, the discussion turns to addiction which is essentially a breakdown of the decision-making system. Listen to the podcast.
Cocaine Use Animation from NIDA
This animation addresses the question: Why do people lose control over their cocaine use? Researchers monitored the activity of two types of neurons in mice: “urge” neurons, which promote feelings of reward and repeating behaviors that have produced rewards, and “control” neurons, which dampen those feelings and inhibit behavior.
In simplifying these experimental results for public presentation, the video does not explain that the black line is the ratio of the slope of the green line divided by the slope of the red line. The slope represents how fast the D1 (green) and D2 (red) neurons are firing.
The Workings of the Adolescent Brain
Teenagers are wired to learn — but this same wiring also makes them more vulnerable to addiction. Neuroscientist and BrainFacts.org editor Frances Jensen discusses how the biology of the teen brain presents a double-edged sword in this 3-minute video. |
Overclocking: The Answers
General Overclocking Topics
What is overclocking?
Overclocking is the process of increasing the clock frequency of your Central Processing Unit (CPU), Graphical Processing Unit (GPU), Memory, PCI, and/or AGP devices. In other words, making your computer run faster without having to spend the money on upgrades.
What are the risks of overclocking?
- In most situations, Overclocking will void the warranty of your equipment.
- The lifespan of the CPU and other devices will be shortened.
- You could potentially destroy your CPU, memory, motherboard, and other expensive items.
- Room temperature is likely to increase.
- Your system could become unstable
- You might become an OC Addict
What is the Front Side Bus (or FSB)?
FSB is also known as the Memory BUS or System BUS and connects the CPU with the main memory and is used to connect to other components within the computer. The FSB can range from speeds of 66MHz, 100MHz, 133MHz, 266 MHz, 400MHz, 533MHz and beyond.
What is system bus?
The bus that connects the CPU to main memory on the motherboard. I/O buses, which connect the CPU with the system's other components, branch off of the system bus.
What is backside bus?
A microprocessor bus that connects the CPU to a Level 2 cache. Typically, a backside bus runs at a faster clock speed than the frontside bus that connects the CPU to main memory. For example, the Pentium Pro microprocessor actually consists of two chips -- one contains the CPU and the primary cache, and the second contains the secondary cache. A backside bus connects the two chips at the same clock rate as the CPU itself (at least 200 MHz). In contrast, the frontside bus runs at only a fraction of the CPU clock speed.
What is a bus?
Usually a big long yellow thing that.... err... A Bus is a collection of wires through which data is transmitted from one part of a computer to another. You can think of a bus as a road or highway on which data travels within a computer. When used in reference to personal computers, the term bus usually refers to internal bus. This is a bus that connects all the internal computer components to the CPU and main memory. There's also an expansion bus that enables expansion boards to access the CPU and memory.
All buses consist of two parts -- an address bus and a data bus. The data bus transfers actual data whereas the address bus transfers information about where the data should go.
The size of a bus, known as its width, is important because it determines how much data can be transmitted at one time. For example, a 16-bit bus can transmit 16 bits of data, a 32-bit bus can transmit 32 bits of data, and a 64-bit bus can transmit 64 bits of data. - If you're still thinking about the highway analogy, a larger road can allow more cars to travel than a smaller road.
How does FSB determine CPU speed?
CPU speed is determined by the following formula:
FSB x Multiplier = CPU Speed
For example, if you had a FSB setting of 133MHz and a 20x Multiplier, your CPU speed would be 2660MHz or 2.66GHz.
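If you want to check the arithmetic yourself, here is a minimal sketch (Python, with purely illustrative values - not tied to any particular CPU) of the formula above; raising either the FSB or the multiplier raises the core clock.

    def cpu_speed_mhz(fsb_mhz, multiplier):
        # Core clock = front side bus frequency x multiplier
        return fsb_mhz * multiplier

    print(cpu_speed_mhz(133, 20))    # 2660 MHz, i.e. 2.66 GHz
    print(cpu_speed_mhz(166, 12.5))  # 2075 MHz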
How is a processor overclocked?
The most common (and simplest) way to Overclock a processor is by simply raising the Front Side Bus (FSB) from within the BIOS. The process is the same with both AMD and Intel CPUs. Simply enter the BIOS on system startup, find the setting for the FSB and increase that value. - If you are unsure where this option is in the BIOS, take a look at your motherboard manual, as it should have that information for you.
My CPU says that it has a 266, 333, 400, 533, or 800 BUS speed, how is that?
Newer CPUs now "double pump" or even "quad pump" the FSB; this is similar to how DDR memory works. For example, the 333MHz BUS on a processor takes a 166MHz FSB and "double pumps" it. 166MHz FSB x 2 = 333MHz
Similar to that, the 533MHz BUS processors are "quad pumped". 133MHz FSB x 4 = 533MHz
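The same quick arithmetic covers the pumped bus speeds described above (again, just an illustrative sketch; the small rounding difference is why 133MHz x 4 is marketed as "533MHz"):

    def effective_bus_mhz(fsb_mhz, pump_factor):
        # "double pumped" = 2 transfers per clock, "quad pumped" = 4
        return fsb_mhz * pump_factor

    print(effective_bus_mhz(166, 2))  # 332, marketed as the "333MHz" bus
    print(effective_bus_mhz(133, 4))  # 532, marketed as the "533MHz" bus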
What's up with the AMD XP names?
AMD processors are not named according to the speed at which the processor runs; instead they are named according to how that particular processor matches up against an Intel P4 processor. For example, the AMD XP 1800+ runs at 1.53GHz, but its performance is equivalent to an Intel P4 1.8GHz.
Here's a few quotes from AMD:
"Over the past 20 years end users have come to view higher performance � as being synonymous with higher frequency. AMD believes that what people really care about, however, is not the frequency of their processor, but the performance it delivers from their applications. While processor frequency contributes to overall CPU performance, it is not the only factor."
"To the end-user, the ultimate benefit of processor performance is how fast their applications run. Performance to them, simply put, is the amount of time it takes to perform a given task. With that in mind, the processor that performs a given task in the least amount of time has the highest performance. Increased performance implies reduced execution time. Historically, this has been measured through a variety of benchmarks. When comparing the performance of processors that execute the same instruction set, such as the x86 instruction set in PCs, performance is defined as: The work done by the processor in each clock cycle (represented as instructions per clock - IPC) times the number of clock cycles (represented by frequency)."
"PC buyers usually rely on the clock speed (megahertz) of a PC's microprocessor to determine their purchasing decision. Because the industry lacks a simple, universally accepted way to judge performance, users have become conditioned to substituting clock speed to gauge how fast their applications will run."
Can the multiplier be changed?
This answer is twofold:
Intel Processors: On older Intel processors (P2? and earlier), the multiplier could be changed in the same way that the FSB can be, allowing you to OC by increasing one or both values. This caused a problem, as many resellers started Overclocking the CPUs and selling slower CPUs as if they were faster. Because of this, Intel locked the multiplier inside the CPU and it cannot be changed.
AMD Processors: AMD CPUs come from the factory with the Multiplier locked, however unlike Intel CPUs, an AMD can be unlocked. This was done by AMD for those people like us, who want to Overclock.
How is the multiplier unlocked on an AMD?
Duron and Thunderbird CPUs can be unlocked via a method that has been titled "The Pencil Trick," mainly because all it requires is a standard pencil. You can find the Overclockers Club guide on how to do the "Pencil Trick" here
Athlon XP/MP Processors are a bit more complicated when it comes to unlocking. However, the good folks at HighspeedPC have developed a kit that can be purchased to make unlocking the Athlon XP/MP CPUs much easier. The kit can be found here.
Can the CPU be overclocked without going into the BIOS? -or-
The BIOS has no FSB setting, can the CPU still be overclocked?
In most situations, the CPU can still be Overclocked. There are several programs available that allow you to OC without having to enter the BIOS. Two of the most common are CPUFSB and CPUCool. These programs may not work on all motherboards, and this guide does not go into detail on how to use them. Some motherboard manufacturers also include overclocking tools; Giga-Byte, for example, bundles EasyTune with most of its motherboards.
How can a system be stabilized after overclocking it?
If the system becomes unstable after increasing the FSB and/or Multiplier, there are two options:
- Lower the FSB/Multiplier slightly till it becomes stable
- Increase the Core Voltage (aka vCore) of the CPU
Increasing the vCore of a CPU may help stabilize the system by providing the CPU with an extra boost of current. This increase to the vCore has one nasty side effect: increased heat. The increase of heat is explained by Joule's Law, which I'm not going to cover.
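As a rough rule of thumb (a simplification, and not something this guide goes into): the heat a CPU dumps grows roughly in proportion to clock speed and to the square of core voltage, which is why even a small vCore bump makes temperatures jump. A purely illustrative sketch, with made-up numbers:

    def estimated_power_watts(p_old, f_old, f_new, v_old, v_new):
        # Dynamic power scales roughly with frequency and voltage squared
        return p_old * (f_new / f_old) * (v_new / v_old) ** 2

    # Example: a 60 W CPU pushed from 2.0 GHz @ 1.50 V to 2.2 GHz @ 1.65 V
    print(round(estimated_power_watts(60, 2.0, 2.2, 1.50, 1.65), 1))  # ~79.9 W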
How does cooling play into overclocking?
Like all Integrated Chips (ICs) and electronic devices, the CPU will perform better and last longer when it stays cool. When you Overclock a CPU it creates more heat than it would under normal conditions. A cool CPU is a happy CPU. This also applies to other devices in your computer: Video card, RAM, sound card, and other devices.
What is a safe temperature for a CPU?
AMD and Intel both have maximum temperature ratings for their CPUs listed around 80C. If your CPU gets this hot, you've got some serious problems. Most people try and keep the CPU temperature below 40C at idle and below 55C at load.
What is the best heat sink available?
This is a relative subject, and very dependent on preference. Many heat sinks have come about that provide fantastic cooling under specific situations, such as those made by Noctua.
As technology improves, companies are always coming out with a new heat sink or Fan that has the edge over the competition. Some of the most commonly used heat sinks among Overclockers included those made by Thermaltake, Swiftech, and Thermalright. Check out the Overclockers Club review section for reviews on various heat sinks to see what is right for you.
How can temperature be lowered more? - "Super cooling"
If standard air cooling isn't getting the job done, or has become too loud for you, there are a few more options that can help cool a hot CPU, though these methods tend to be a bit more expensive than a regular heatsink/fan.
- Water cooling
- Peltier/TEC cooling
- Vapor Cooling
- Liquid Nitrogen Cooling
The methods listed above are beyond the scope of this document, and may be covered in future guides/articles.
In addition to the "Super Cooling" methods listed above, a few things can be done to help lower the temp of your system a few degrees.
- The use of rounded cables, or cable ties to allow air to move easier through the case
- Additional or larger case fans to bring in cooler air, and exhaust the hot air from your case
- Removing the side panels of the case
- Using an Aluminum case instead of a thick steel case
- Lowering the room temperature will also help
What is thermal compound? -or-
Why is thermal compound used?
Due to the machining process used in making heat sinks, just about every heat sink will have a rough surface. To the human eye it may look flat or even feel smooth, but there are microscopic grooves in the surface. These grooves will trap air between the heat sink and the CPU, and cause a poor transfer of heat.
Thermal compounds such as Arctic Silver and Nanotherm are used to fill these grooves and help transfer the heat from the CPU to the heat sink.
Is thermal compound required?
Thermal compound is a must, especially when overclocking a processor, as it is a key component in transferring heat from the CPU to the heat sink or CPU cooler. Thermal paste/grease acts as a conduit for the heat, drawing it away from the CPU and passing it on to the cooler.
What are the different types of memory?
- DDR RAM (Double Data Rate) pretty much the only thing used anymore. Runs @ FSBx2
- SD RAM (Synchronous Dynamic Random Access Memory) Old school, still used in some servers and older computers. Not used on newer systems.
Are there any tools I can use to Overclock?
While there are apps that will assist in the process of overclocking, these tools provide for a limited amount of speed improvement. Care must be taken when using these tools. There are also a lot of tools that can assist in testing your overclock, such as temperature monitors, and stress tests. |
Thanks to satellite and microscopic imaging we know more about and are able to do more for the world than we ever have been before. Here are just some of the discoveries and advances we’ve been able to make lately thanks to these technologies:
Human Protein Mapping
Once upon a time the mapping of the human genome was seen as a far off goal. That’s been accomplished and now scientists are looking at specific human proteins. One company in particular, Abgent, has focused on this area of study and is in the process of developing an antibody for every human protein. The scientists at the company say that they could accomplish this goal as early as next year.
What does this mean? It opens dozens if not hundreds of doors in medical research, not the least of which is advancing the treatment and potential curing of diseases.
The Mars Curiosity Rover
Not only can we now land on Mars; thanks to telescopic imaging and satellite relay, we could watch the landing in almost real time (there is a roughly thirteen-minute delay between Mars and Earth) as it happened, and we can see exactly what the Mars Curiosity rover sees as it explores the planet. We can almost literally look through the eyes of Curiosity as it does its tasks, tests soil, rocks and air, and sends data back to Earth. The high-res images Curiosity sends back of the Martian landscape are the most detailed and highly defined that we have ever seen here on Earth.
Why is this important? Beyond the coolness factor, what Curiosity is doing is giving us ample evidence for whether or not we could send a manned mission to Mars and bring it back. If Curiosity is able to find something that astronauts can use as a fuel source, we might be able to see man set foot on Mars in our lifetimes.
The James Webb Space Telescope
Right now the most famous NASA space telescope is Hubble. Unfortunately, though still traveling in space, the Hubble telescope is getting old and starting to degrade, as is the other deep space telescope, Spitzer. The James Webb Space Telescope is designed to replace them and is being built to go farther and see farther than the telescopes it is succeeding. This is expected to revolutionize the field and lead to a gamut of astronomical discoveries.
The primary difference between the JWST and Hubble is that the JWST is going to be focused on infrared observation. It will also be able to see farther than Hubble or Spitzer. NASA scientists say that it will potentially be able to see almost all the way back to the Big Bang (aka the beginning of the Universe). The things we could learn by being able to see almost to the dawn of our Universe are going to be astounding.
These are just three of the ways that science has advanced in the field of imaging. It's more than being able to detect quarks, look into the building blocks of protons and neutrons, and understand how to turn other cells into stem cells. While in our daily lives imaging touches us mostly when we use GPS services and Google satellite view to see the places we want to explore, in the world of science and of truly understanding our universe, we are closer than we have ever been. |
The Transantarctic Mountains (TAM), which separate the West Antarctic rift system from the stable shield of East Antarctica, are the largest mountains developed adjacent to a rift. The cause of uplift of mountains bordering rifts is poorly understood. One notion based on observations of troughs next to many uplifted blocks is that isostatic rebound produces a coeval uplift and subsidence. The results of an over-snow seismic experiment in Antarctica do not show evidence for a trough next to the TAM but indicate the extension of rifted mantle lithosphere under the TAM. Furthermore, stretching preceded the initiation of uplift, which suggests thermal buoyancy as the cause for uplift.
Additional Publication Details
Geophysical investigations of the tectonic boundary between East and West Antarctica |
The numerics in Theloskrit are based on the number 7, with compounded multiplication in alternating sets of 7 and 5. The numbers themselves are represented by columns of dots. Each column represents a range of numbers, and is in this manner somewhat of a layered binary system. The columns go from right to left, the first column representing 1-7, the second, 7-35, the third, 35-245, and so on. For example, here is the Theloskrit numeric representation for the number 144,000:

And here, though arranged backwards for easier learning, are the numbers represented by the dots in each column, from right to left.

So, using the above chart as a guide, this would be the explanation of the numeric.

Whereas the basic units in most numeral systems are 1, 10, 100, 1000, etc., the basic units in the Theloskrit numeral system are 1, 7, 35, 245, 1225, 8575, 42875, etc. The unit of a column can be found by multiplying the unit of the preceding column by 7 or 5, depending on if it is a balek or somek, respectively. The first column is a balek, the second is a somek, and so on in an alternating pattern. |
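On one possible reading of the description above, this is a mixed-radix system whose place values grow by alternating factors of 7 and 5. The sketch below (Python; the function name, and the assumption that a column simply holds the quotient for its place value, are mine rather than from the original chart) converts a number into dot counts per column:

    def theloskrit_digits(n):
        # Place values 1, 7, 35, 245, 1225, 8575, 42875, ... built by
        # alternately multiplying by 7 (balek) and 5 (somek).
        units, factors, i = [1], [7, 5], 0
        while units[-1] * factors[i % 2] <= n:
            units.append(units[-1] * factors[i % 2])
            i += 1
        digits = []
        for u in reversed(units):
            digits.append(n // u)  # dots in this column
            n %= u
        return list(reversed(digits))  # lowest (rightmost) column first

    print(theloskrit_digits(144000))  # [3, 1, 5, 2, 5, 1, 3]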
Recent articles on The Conversation and in The Guardian question whether inclusive education can do more harm than good – but neither article presents examples of inclusion. Rather, they present tragic examples of exclusion that are claimed to be inclusion-not-working.
What does ‘inclusion’ really mean?
There seems to be a lot of confusion and misinformation about what inclusion actually means. Inclusive education involves the full inclusion of all children. No children are segregated.
Supports for inclusion are embedded within everyday practices. If aides are employed they circulate around the classroom, or spend time assisting the teacher and making adaptations to materials, rather than being off in a corner with one particular child.
There are no separate areas or curricula for children who experience disability. All children are supported to be involved in all aspects of learning.
At one school I visited in my research, a young boy with Down syndrome was learning a modified version of sign language, which supplemented his spoken language, with the rest of his class.
His teachers completed a one-day keyword sign workshop at the start of the year. His teacher introduced a unit on Auslan (Australian sign language) where all of the students learn about Auslan and learn new signs together each week.
Learning sign language in this way did not single him out. However, it did create the opportunity for him to share his knowledge with his peers and support their learning, while also supporting him in his communication.
This example provides only one snapshot of inclusion within a classroom experience, but it illustrates some key elements of inclusion in action. The child in this example participates in the classroom experiences with the other children in the class, but with supports and adaptations as needed (for him and his peers).
That each child has individual differences is not ignored. It is embraced and valued as what makes each person unique. The goal is not to make any child “normal”, but rather to grow and learn together.
The child who experiences disability could be sitting in the same classroom, separate to his peers, with an aide who may or may not be using sign language. However, this would not be inclusion – this would be exclusion.
Common misunderstandings of inclusion
Common misunderstandings of inclusion relate to (incorrectly) considering integration and inclusion to be synonyms; viewing inclusion as simply the presence of a child who is labelled “disabled” or “different” in a mainstream setting; thinking that inclusion is only about some people (instead of about everyone); and viewing inclusion as a process of assimilation.
These misunderstandings of inclusion lead to macro or micro exclusion, which is sometimes mistaken for – or misappropriated as – inclusion. Macro exclusion is where a child is segregated into a separate classroom, unit, or school.
Micro exclusion is where, for example, a child is enrolled in a mainstream setting, but is segregated into a separate area of the classroom or school for all or part of the day; where a child is only permitted to attend for part of the day; present but not participating in the activities along with the other children in the setting; or present but viewed as a burden and not an equally valued member of the class or setting.
While the recent article on The Conversation claims to explore research on inclusive education, studies cited in that article explicitly represent examples of macro or micro exclusion. It is alarmingly common in research and practice for examples of exclusion (micro and macro) to be reported as being about inclusion.
The journey from full segregation to inclusion
Special education commenced (gradually in the 1900s) as a then-revolutionary idea that children who experience disability can and should receive some form of education.
In the main, this was an important first step towards social justice for children who experience disability, who were previously routinely denied any formal education at all (albeit with some exceptions).
Following this commencement of formal education for children who experience disability, the 1960s and 1970s saw the development of ideas of “normalisation” and “integration”, as questions began to be raised about whether segregation was actually the best approach to education.
The 1992 Disability Discrimination Act made it unlawful for any setting to discriminate against a person on the basis of disability (though with some caveats). This paved the way for much greater integration and, eventually, for inclusion.
Since then, philosophical arguments and relevant research progressed from the initial recognition that children who experience disability can and should receive some form of education to the idea that children are of equal value; that the education of all children (including children labelled disabled) should be of high quality; and, therefore, that education should be inclusive.
Inclusive education vs special schools
Contrary to what could logically be expected (given the higher teacher-to-student ratios and the special education training for teachers in special schools), there is no evidence that special schools have any benefits over mainstream schools.
Inclusive education has been found to have equal or better outcomes for all children – not just for children who experience disability. This includes better academic and social outcomes.
It is common for parents and teachers to worry that the inclusion of a child who experiences disability will lower the standard of education for children who do not experience disability. However, research clearly demonstrates that this is not the case.
By contrast, along with myriad other benefits of inclusion (including social and communication development and more positive understandings of the self), inclusive teachers engage with all children more frequently and at a higher cognitive level, with important benefits to all.
Frequent claiming of micro (and even macro) exclusion as inclusion creates significant barriers to, and confusion about, inclusion. Lack of understanding of what inclusion is, and subsequent unwarranted fear of inclusion, are also significant barriers.
Inclusive education involves supporting each child in belonging, participating, and accessing ongoing opportunities, being recognised and valued for the contribution that he or she makes, and flourishing. |
Stanford University News Service
425 Santa Teresa Street
Stanford, California 94306-2245
Tel: (650) 723-2558
Fax: (650) 725-0247
February 7, 2006
David Orenstein, School of Engineering: (650) 736-2245, [email protected]
Scanning electron microscopes are the workhorses of imaging structures on the scale of billionths of a meter. Typically, they work by shooting a beam of electrons at the specimen and then detecting newly generated electrons as they bounce off and scatter. But carbon nanotubes, essentially rolled up sheets of chicken wire a billionth of a meter in diameter, are so narrow and their sides so thin, that scientists haven't properly understood why they are visible using a scanning electron microscope, or SEM. Now, Stanford engineers have solved the mystery, and its explanation not only could help researchers understand what they see in nanotube images but also suggests new nanotube applications such as ultra-sensitive detection of electrons and ultra-precise electron beams for microelectronics manufacturing.
"Based on our traditional view of scanning electron microscopy, it doesn't make any sense that we do see nanotubes," says electrical engineering doctoral student Alireza Nojeh. Engineers care about sighting the small structures because nanotubes have potential applications in computer chips, novel materials and even medical treatments. In the Feb. 9 online edition of the journal Physical Review Letters, Nojeh and co-authors electrical engineering Professor Fabian Pease, mechanical engineering Assistant Professor Kyeongjae Cho and applied physics doctoral student Bin Shan offer a theoretical model explaining how the electrons in an SEM beam cause nanotubes to emit their own electrons, making them detectable with the scope.
Assuming that all specimens in the path of the microscope's electron beam look like bulky solids, traditional theoretical models statistically predict how electrons will scatter off samples. But the thin, hollow nanotubes don't look solid to an incoming electron beam. Using the traditional model, one might expect electrons in the microscope beam to pass right through the nanotube as if it weren't there.
But electron beams don't just pass through and they don't just scatter. According to the researchers' new model, when beam electrons pass into the nanotube, they give nearby electrons inside the nanotube's carbon atoms enough energy to escape. These liberated electrons are emitted out of the tube and are easily visible to the electron microscope's detectors.
Scientists have observed nanotubes emitting electrons in this manner before, but now that there is a model to explain it, they might learn how to exploit this behavior for new technologies. For example, in their research in collaboration with chemistry Associate Professor Hongjie Dai at Stanford and electrical and computer engineering Assistant Professor Wai-Kin Wong at the National University of Singapore, Nojeh and Pease have discovered how to use an electric field to predispose a nanotube to emit as many as 100 electrons for each electron that strikes it. This amplification could lead to a technology for an ultra-sensitive electron detector.
Similarly, the understanding provided by the new model could help scientists control the timing of electron emission from a nanotube. That would enable more precise and consistent electron beams than are currently available. This new capability could lead to improved machines for "e-beam lithography," which is a technique for patterning finer integrated circuits than can be produced using light.
"Carbon nanotubes have great promise because of their unique properties arising out of their molecular structure," Pease says. "But to realize that promise we still need to improve the understanding of their interaction with external stimuli, such as beams of electrons."
David Orenstein is the communications and public relations manager at the Stanford School of Engineering.
Fabian Pease, Electrical Engineering: (650) 723-0959 [email protected]
Photos of Pease and Nojeh are available on the web at http://newsphotos.stanford.edu.
Email [email protected] or phone (650) 723-2558. |
The concept of conditional distribution of a random variable combines the concept of distribution of a random variable and the concept of conditional probability.

If we are considering more than one variable, restricting all but one [1] of the variables to certain values will give a distribution of the remaining variables. This is called a conditional distribution. For example, if we are considering random variables X and Y and 2 is a possible value of X, then we obtain the conditional distribution of Y given X = 2. This conditional distribution is often denoted by Y|(X = 2).

A conditional distribution is a probability distribution, so we can talk about its mean, variance, etc. as we could for any distribution. For example, the conditional mean of the distribution Y|(X = x) is denoted by E(Y|(X = x)).
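As a concrete illustration (the joint table below is made up for this example, not taken from the original page), here is how the conditional distribution Y|(X = 2) and its conditional mean could be computed from a small joint distribution:

    # P(X = x, Y = y) for a small made-up joint distribution
    joint = {
        (1, 0): 0.10, (1, 1): 0.20,
        (2, 0): 0.30, (2, 1): 0.40,
    }

    p_x2 = sum(p for (x, y), p in joint.items() if x == 2)         # P(X = 2) = 0.7
    cond = {y: p / p_x2 for (x, y), p in joint.items() if x == 2}  # Y | (X = 2)
    mean = sum(y * p for y, p in cond.items())                     # E(Y | X = 2)

    print(cond)  # {0: 0.4285..., 1: 0.5714...}
    print(mean)  # 0.5714...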
1. More generally, if we restrict just some of the variables to specific values or ranges, we obtain a joint conditional distribution of the remaining variables. For example, if we consider random variables X, Y, Z and U, then restricting Z and U to specific values z and u (respectively) gives a conditional joint distribution of X and Y given Z = z and U = u. |
This site has evolved from a doctoral research project investigating the design of a computer tool that would help children write more effective narratives, undertaken by Kate Holdich, supervised by Professor Paul Chung, Department of Computer Science, Loughborough University, U.K. It is provided here for the benefit of parents, teachers and children. If you have found it useful, please let us know.
Part of the research investigated models of writing. There is an enormous difference in the way mature and novice writers approach the task of narrative writing. Mature writers employ a range of literary techniques and revise their writing continually. Children have limited knowledge of narrative writing techniques and rarely revise. The research literature suggests that children will benefit from being guided through the process of reflecting about ideas, vocabulary and sentence constructions as they compose their narratives. In addition they require assistance with proof reading - identifying and correcting grammar weaknesses. Children also need to be made aware of the many features which contribute to the creation of successful stories, such as how to create an effective opening, or how to make characters seem realistic.
HARRY's interactive story writing tutorial aims to guide children through the process of revision whilst explaining literary techniques as and when they are appropriate.

HARRY's conversational approach was inspired by ELIZA, the first computer system able to 'hold a turn-taking conversation' with the user. ELIZA was named after Eliza Doolittle in the play 'Pygmalion' by G.B. Shaw, who learned to speak, but never became any smarter. HARRY is named after Eliza's teacher, who was more knowledgeable - and has evolved on this site into a writing wizard.
The CHECK TEXT facility is a separate tool which indicates strengths and weaknesses in children's written grammar and style. In time, HARRY may be able to assess other important features of story writing such as imaginative plot, convincing characterisation, descriptive detail etc. For now, this task must be performed by a human!
Several refereed academic journal papers have been published relating to the HARRY writing tool and CHECKTEXT analysis tool. Here are two Teacher Learning Academy case studies:
A computer tutor for story writing
Revising and editing with HARRY.
**For parents** Subscribe for 6 months for just 10 UK pounds
**For parents** See some testimonials from previous users
**For parents** Boost your child's story writing skills with personal feedback
**For parents and teachers** More interactive story themes now available with instant access
**For teachers** Top five tried and trusted strategies for improving children's writing
**For teachers** An opportunity to take part in a new research project |
Jacques Descloitres, MODIS Land Rapid Response Team, NASA/GSFC
In the Persian Gulf, two tectonic plates (rigid pieces of the Earth's crust) are colliding; the Arabian plate (lower left) is running up on the Eurasian plate (upper right). At lower left in the MODIS image is the younger Arabian plate, and it is moving northward to collide with the Eurasian plate. The Persian Gulf (top) and the Gulf of Oman (bottom) were once the site of a rift, a place where two plates pull apart from each other, and the Indian Ocean filled in the widening gap between the two plates; however, the process then reversed, and about 20 million years ago, the gulf began to close up. The collision of the two continental plates gives Iran its mountainous terrain. This image was made from data acquired on December 30, 2001.
Note: Oftentimes, due to the size, browsers have a difficult time opening and displaying images. If you experience an error when clicking on an image link, please try directly downloading the image (using a right click, save as method) to view it locally. |
This month marks the hundredth anniversary of the outbreak of the First World War, arguably the most important turning point in modern European history. The Great War destroyed the old European order that had lasted since the settlement reached at the end of the Napoleonic Wars in 1815. The war also ushered in a new and dangerously volatile era of insecurity and conflict, creating the conditions for regimes that were bent on violence and conquest and were prepared to practise mass killing on an unprecedented scale. The First World War was the Urkatastrophe, the original catastrophe without which the great dictators and mass murderers of the mid-twentieth century - Hitler, Stalin and their imitators - would not have been possible.
Whereas the fate of the Jews of Europe became a central issue during the Second World War, given that Nazi Germany, the power principally responsible for launching that war, wished to destroy them in their entirety, the role of Jews in the First World War is at first sight harder to pinpoint. Nevertheless, the Jews who fought in the armies of the chief European belligerent powers numbered around one million, to which must be added some 200,000 who served in the American forces from 1917. The attitudes of these combatant Jews varied from country to country. In Tsarist Russia, which contained the largest concentration of Jews in the world, Jews were subject to severe discrimination and persecution. Jews had long sought to escape conscription into the Russian army and, though many fought loyally even in the face of the ingrained anti-Semitism of the Tsarist officer corps, others were disaffected; after the enormous casualties suffered by the Russian armies in their unsuccessful campaigns of 1914-15, Jews were among those who turned towards the parties hostile to the war and the Tsarist autocracy.
Russia’s enemies benefitted from that country’s record of reactionary excesses. In Germany, the Kaiser’s government portrayed its decision to go to war in August 1914 in part as a defensive measure justified by the expected onslaught of the ‘Russian steamroller’ from the east. Russia was the natural enemy of the Jews and of the liberal, democratic institutions on which their gradual integration into the more advanced societies of Western Europe was predicated. Many German Jews allowed themselves to be persuaded that the preservation of the civil and political rights they had been granted over the decades was bound up with the struggle against Russia. It is, however, undeniable that Germany’s Jews were mostly motivated to flock to the colours by pure patriotism. It has long been known that German Jews equalled, or even excelled, their gentile compatriots in their eagerness to fight for their country in time of war.
While their parents sank their savings into German government war bonds, young Jews like the writer Ernst Toller, who was studying at the University of Grenoble in France when war broke out and only got back to Germany with difficulty, proved their patriotism by joining up, inspired by the mood of national euphoria in August 1914. About 100,000 Jews served in the German forces during the First World War, and some 12,000 died. The writer Thomas Mann, whose attitude to Jews had previously been somewhat ambivalent, movingly recorded in his diary the shock he felt when, after the war’s end, he saw how many men with the name Cohen were listed among the fallen. In recognising the patriotism displayed by Germany’s Jews, Mann was, however, an exception among non-Jewish German patriots and nationalists. As early as 1916, the belief that Jews were failing to support the German war effort was so widespread in right-wing quarters that the Prussian Ministry of War undertook its notorious Judenzählung (census of Jews in the German forces), pandering to the swelling tide of war-fuelled anti-Semitism; when the census showed that Jews were serving in proportion to their numbers in the population, its findings were suppressed.
Many AJR members will have had fathers, uncles, grandfathers and other relatives who fought in the First World War and kept their decorations and certificates as proud mementoes of their service to the country of their birth, even though no amount of Iron Crosses could save a Jew from discrimination and persecution after 1933. Before 1914, Jews had not been admitted to the German officer corps; but by 1918, some 2,000 Jews had been commissioned as officers, and a further 1,200 served as medical officers. This was a source of great pride to the individuals themselves, to their families and to their entire community. Herbert Sulzbach, a German-Jewish refugee who served with distinction in the British army in the Second World War, reaching the rank of captain, remained equally proud of having attained the rank of lieutenant in the Kaiser’s army in the First World War. Geoffrey Perry, born Horst Pinschewer in Berlin, who also distinguished himself in the British forces in the Second World War – he captured the traitor William Joyce (Lord Haw-Haw) - had as a child had to listen so often to his father’s patriotic stories of his First World War exploits in the Kaiser’s army that he refused to talk about his own wartime experiences until well into the 1970s.
Rabbi Jonathan Wittenberg has recently written movingly about the deep-felt patriotism of his grandfather, Rabbi Dr Georg Salzberger, who served as a Jewish chaplain in the German army in the First World War and, after emigrating to Britain in 1939, was for many years the minister at Belsize Square. Salzberger, argues his grandson, saw wartime service as the ultimate proof that German Jews had, through their patriotic contribution to the national cause, achieved equality of status with their gentile compatriots. This Jewish patriotism reflected a belief that, as Germans, Jews and Christians shared a set of moral, social and civic values that bound them together in the name of distinctively German ideals. That form of patriotism could also descend into virulent nationalism: it was a German Jew, Ernst Lissauer, who penned the notorious Hassgesang gegen England (Hymn of Hate against England) in 1914.
The situation in Austria-Hungary, with its many competing national groups - almost all of them hostile to Jews - was different. Here Jews felt loyalty to the Empire and the Emperor, Kaiser Franz Joseph, who had come to symbolise the supranational character of the Habsburg Monarchy, standing above the ethnic strife that threatened to engulf the Jews and acting as guarantor of the civic rights that they had been granted under the constitution. In Austria-Hungary, the army, like the monarchy, transcended ethnic divisions, at least to the extent that some Jews were admitted to the officer corps. Jews had little problem in fighting as loyal citizens of the Empire for they feared, all too presciently, that the defeat and disintegration of the Habsburg Empire would endanger their position across Central and Eastern Europe.
In 1914, Russian armies advanced into Austrian Poland, taking cities like Lviv (Lemberg) and Przemysl and causing a mass flight of Jews. While the Germans concentrated on the western front, Austria-Hungary bore the brunt of the fighting against Russia in the east, a cause with which its Jewish population could readily identify. However, partly thanks to the incompetence of Habsburg strategists, the Empire also found itself fighting on two other fronts. Unable to overcome the stubborn resistance of the Serbs, Austrian forces became bogged down in a campaign that ended only in autumn 1915, when Bulgaria invaded Serbia. In May 1915, Italy came into the war on the opposite side, involving Austria-Hungary in a long and costly campaign conducted on the mountainous terrain of the Alps on the frontier between the two warring states. The huge losses suffered by the Austrians on this largely forgotten front, principally in the 11 battles fought on the river Isonzo, were in large measure responsible for the war-weariness that eventually swept the Empire away.
Probably the most significant development affecting Jews during the First World War occurred in the Middle East, where British forces faced the Ottoman Empire, Germany’s ally. As General Allenby advanced from Egypt into Turkish-held territory to capture Jerusalem, the British government issued in November 1917 the Balfour Declaration, in which it made its celebrated promise of a national home for the Jewish people in Palestine, previously under Ottoman rule. The First World War thus created the conditions under which the foundations of the future state of Israel were laid. But it also created the conditions for the Holocaust, and not only through the fateful rise of anti-Semitism in Germany, a society radicalised and traumatised by its defeat in 1918 and by subsequent political and economic instability. The Turks had already practised genocide against the Armenians in 1915. In the wake of the collapse of Tsarist Russia in 1917, large-scale killings, notably of Jews, occurred across Eastern Europe as rival national and political factions, Poles and Ukrainians, Reds (Bolsheviks) and Whites (anti-Bolsheviks), sought to assert themselves, often by the radical means of eliminating en masse the groups they perceived as supporters of their rivals. |
Hink Pinks can be used to reinforce word learning in an entertaining manner. The teacher says, "Hink Pink" to signal students that the words he/she is looking for have one syllable each and that they rhyme. For example, for a Hink Pink, a teacher might give out the definition, "an overweight feline." The students would answer with "fat cat." To increase the difficulty of this activity, ask for a Hinky Pinky (2-syllable words) or a Hinkity Pinkity (3-syllable words). Have students come up with their own definitions with which to play the game. This stretches their vocabulary and increases their interest in words.
Websites on Hink Pinks:
Utah Education Network: Hink Pinks
The Teacher's Desk: Hinky Pinky |
Agnostic, in an information technology (IT) context, refers to something that is generalized so that it is interoperable among various systems. The term can refer not only to software and hardware, but also to business processes or practices.
The word agnostic comes from the Greek a-, meaning without and gnōsis, meaning knowledge. In IT, that translates to the ability of something to function without “knowing” the underlying details of a system that it is working within. As with interoperability, agnosticism is typically enabled by either compliance with widely-used standards or added elements (such as coding) that will enable one system to function in a variety of environments.
Some examples of agnosticism in IT:
- Platform-agnostic software runs on any combination of operating system and underlying processor architecture. Such applications are sometimes referred to as “cross-platform.”
- Device-agnostic software operates across various types of devices, including desktop computers, laptops, tablet PCs and smartphones.
- Database-agnostic software functions with any vendor’s database management system (DBMS). Typical database-agnostic products include business analytics (BA) and enterprise resource planning (ERP) software.
- Protocol-agnostic software is independent of communication protocols. It negotiates a protocol with its peer and begins communication.
- Business process-agnostic software functions in different business environments. One example is a business process-agnostic business service that encapsulates logic associated with a specific business entity, such as "invoice" or "claim".
- Vendor-agnostic middleware can mediate between software from multiple vendors, rather than between two specific applications.
- Hardware-agnostic licensing is a per-device or per-user model, rather than having each license tied to a specific device or virtual machine (VM).
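As a minimal sketch of the idea (all class and function names below are hypothetical, not from any particular product), code written against a small interface stays database-agnostic, because each vendor-specific implementation hides its own details behind the same methods:

    from abc import ABC, abstractmethod

    class Database(ABC):
        @abstractmethod
        def query(self, sql: str) -> list:
            """Run a query and return rows."""

    class PostgresDatabase(Database):
        def query(self, sql: str) -> list:
            # vendor-specific driver calls would go here
            return [("row-from-postgres",)]

    class SqliteDatabase(Database):
        def query(self, sql: str) -> list:
            return [("row-from-sqlite",)]

    def count_invoices(db: Database) -> int:
        # This function "knows" nothing about the underlying DBMS.
        return len(db.query("SELECT * FROM invoices"))

    print(count_invoices(PostgresDatabase()), count_invoices(SqliteDatabase()))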
Thomas Henry Huxley coined the term agnostic in 1869 as part of his philosophy rejecting the validity of claims of spiritual knowledge, particularly in reference to the existence or non-existence of a deity or deities. |
Koine Greek is the popular form of Greek which emerged in post-Classical antiquity (c.300 BC – AD300), and marks the third period in the history of the Greek language. Other names are Alexandrian, Hellenistic, Common, or New Testament Greek. Koine is important not only to the history of the Greek people, for being their first common dialect and main ancestor of modern Greek but also for its impact on the Church. It was the original language of the New Testament of the Bible as well as the medium for the teaching and spreading of Christianity - unofficially the second language of the Roman Empire.
"Biblical Koine" refers to the varieties of Koine Greek used in the Bible and related texts. Its main sources are:
- the Septuagint;
- the New Testament, compiled originally in Greek (although some books may have had a Hebrew-Aramaic substrate and contain some Semitic influence on the language).
The term Patristic Greek is sometimes used for the Greek written by the Church Fathers, the early Christian theologians in late antiquity. Christian writers in the earliest time tended to use a simple register of Koiné, relatively close to the spoken language of their time, following the model of the Bible. After the 4th century, when Christianity became the official state religion of the Roman Empire, more learned registers of Koiné influenced by Atticism came also to be used. |
In the prime of their lives, when stars burn hydrogen in their core, there’s a clear and simple relationship between a star’s color and brightness. Nearly a century ago, astronomers developed a way to illustrate this relationship with what’s now called the H-R Diagram, a critical tool for understanding how stars evolve. Here’s how it works.
• In a past article, you learned how astronomers classify stars according to their color and surface temperature. The massive bright O and B-type stars burn hot, blue, and bright; mid-sized A, F, and G-type stars are cooler and emit white-to-yellow light; and K and M-type stars burn cool and red like a lump of coal in a campfire.
• Around 1911-1913, the Danish astronomer Ejnar Hertzsprung and American Henry Norris Russell studied star clusters, in which all the stars are roughly the same age, and noticed a clear and surprising relationship between the stars’ brightness and color. They plotted the color and temperature of each star on a graph and came up with something that, in the modern day, looks like this:
• As you can see, most of the stars lie in a band from the upper left to the lower right. In this band, blue stars are brighter and red stars are fainter, with white and yellow stars in between. This band is called the “main sequence”. Although Hertzsprung and Russell didn’t know it at the time, stars on the main sequence are in their youth and middle age during which they burn hydrogen in their cores.
• Stars spend most of their lives on the main sequence. At one time, astronomers believed stars evolved along the main sequence, moving from hotter to cooler as they expelled energy over their lifetimes. But this is not how it works. Once a star begins burning hydrogen through nuclear fusion, it settles onto a particular spot on the main sequence and stays there until the hydrogen runs out.
• For historical reasons, stars along the main sequence are called “dwarfs” and are given the additional symbol “V”. So the Sun is a G2-type dwarf star, or G2V. Our sun has absolute magnitude of +5.0.
• As a star begins to burn helium and heavier elements in the core, it quickly evolves off the main sequence into other types of stars like giants, supergiants, and eventually white dwarfs. We’ll cover that in a future issue.
Good To Know
Here is something interesting. An astronomer can measure the light from a star to determine its spectral type. If the star lies on the main sequence, the astronomer determines its absolute magnitude (the true brightness) from a standard HR diagram. He can measure the star’s apparent magnitude. Then, using a simple mathematical relationship, he can calculate the distance to the star.
Is this not amazing? Simply by measuring the color of light from a star, we can find the distance to the star using the HR diagram. For you keeners, this is called the method of “spectroscopic parallax”.
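The "simple mathematical relationship" referred to above is the standard distance modulus, m - M = 5*log10(d) - 5, with d in parsecs. A quick sketch with illustrative numbers:

    def distance_parsecs(apparent_mag, absolute_mag):
        # Distance modulus m - M = 5*log10(d) - 5, solved for d (in parsecs)
        return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

    # A G2V (Sun-like) star, absolute magnitude about +5, seen at apparent
    # magnitude 10 works out to roughly 100 parsecs away:
    print(distance_parsecs(10.0, 5.0))  # 100.0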
Armed with this knowledge, you’re in a much better position to understand what you’re looking at through your telescope and how astronomers know what they know. And you can dip into more complex and rewarding books like Burnham’s Celestial Handbook and more complex articles in Sky and Telescope without getting terrified. Knowledge is power, dear reader. |
About 70% of Earth's surface is covered by oceans. Yet, they remain mysterious: we have more detailed maps of the surface of other planets than of the sea floor, and we know even less about the mechanisms ticking inside the intricate oceanic system.
An interdisciplinary team of researchers from the University of California, San Diego (UCSD) Scripps Institute of Oceanography showed that swarms of small, autonomous robot probes could peer into the waters, shedding light on the interaction between the physics of the ocean and the life in it.
The team, led by UCSD oceanographers Peter Franks and Jules Jaffe developed small underwater robots, called Miniature Autonomous Underwater Explorers or M-AUEs: stocky plastic cylinders (about 20 cm long for 1.5 liters of volume), equipped with a number of sensors.
The robots can move up and down by changing how much they float, but otherwise drift with the current. They keep track of their position using an ingenious underwater version of GPS (the regular one doesn't reach underwater), receiving acoustic signals from special floating buoys instead of satellites.
The first case assigned to these small detectives of the deep was the origin of red tides: outbreaks of microscopic algae, which are normally part of the plankton. When they grow out of control, these algae form dense red or tan patches in the water---hence the name red tide---and poison the water, killing marine wildlife (from birds to fish to manatees), and possibly harming people.
According to Franks, the crowded patches are key to the outbreak, because they help the algae find partners to mate with, kind of like singles bars.
Some 20 years ago, Franks proposed a model predicting that patches would form when physical processes in the ocean worked the right way with the behavior of the algae. Since tracking myriads of microorganisms in the sea is impossible, the hypothesis remained untested. Up until now. M-AUEs can be tracked, and move just like the algae do. So the researchers deployed 16 of them in a 300m-diameter area off the southern California coast, programmed them to swim like algae (maintaining a constant depth), then analyzed their wanderings.
Just as the model predicted, the robots formed patches when they got pushed around by the ocean's internal waves---large, slow waves that flow underwater, invisible from above the surface. The wave crests squish the layer of water where the robots stand, packing them into the troughs, and forming the same patterns seen in red tides. The full results were published in the journal Nature Communications.
According to Jaffe, this first success is just the beginning. "I think that a swarm of vehicles that can measure ocean data in 3D and time is how we should look at the future of ocean sensing". M-AUEs, in fact, are relatively cheap, so they can be deployed in large numbers to observe large swaths of water at once. That will vastly improve the understanding of how local currents work, which helps control red tides, and could also contain oil spills.
Dense swarms of M-AUEs equipped with hydrophones (underwater microphones) could listen to sounds from the deep -- from the songs of whales to the signals from a lost plane's black box, the researchers say. "Receiver density is really important for resolving direction, and thereby creating a helpful recording," says Jaffe.
Additionally, the drifting M-AUEs would not have the current flowing on them, like the wind on a microphone, further improving the recording.
From currents to life under the sea, it seems we'll be exploring the mysteries of the ocean through the eyes of little robot armadas.
Cover image credit: Jeremy Bishop/unsplash.com |
ADD is attention deficit disorder. It is characterized by a poor or short attention span and impulsiveness inappropriate for the child’s age, with or without hyperactivity. (With hyperactivity, it is called ADHD.) Hyperactivity is a level of activity and excitement in a child so high that it concerns the parents or caregivers. The diagnosis of ADD usually requires that the child display at least eight of the following symptoms.
- Often fidgets with hands or feet or squirms while sitting (restlessness).
- Has difficulty remaining seated when required to do so.
- Is easily distracted by extraneous stimuli.
- Has difficulty waiting for his or her turn in games or group situations.
- Has difficulty following instructions from others, even if the instructions are understood.
- Has difficulty sustaining attention in tasks or play activities.
- Often shifts from one uncompleted task to another.
- Often talks excessively.
- Often interrupts or intrudes on others.
- Often doesn’t seem to listen to what’s being said.
- Often loses things necessary for tasks or activities at school or at home.
- Often engages in physically dangerous activities without considering possible consequences.
Diagnosis is based on the number, frequency and severity of symptoms. Of course this “diagnosis” depends on the subjective opinion of the observer. The symptoms are not unique to a child with ADD and a child without ADD may have one or more of the symptoms.
What is Ritalin?
Ritalin is methylphenidate hydrochloride. It is a central nervous system stimulant used to treat ADD. Possible side effects of the drug include nervousness and insomnia; hypersensitivity (including skin rash, hives, fever, joint pain and dermatitis); anorexia; nausea; dizziness; palpitations; headache; dyskinesia; drowsiness; blood pressure and pulse changes, both up and down; angina; cardiac arrhythmia; abdominal pain; and weight loss during prolonged therapy. There have been rare reports of Tourette's syndrome. Toxic psychosis has been reported. Instances of abnormal liver function, isolated cases of cerebral arteritis and/or occlusion, leukopenia and/or anemia, transient depressed mood, and a few instances of scalp hair loss have also been reported.
In children, loss of appetite, abdominal pain, weight loss during prolonged therapy, insomnia, and rapid heart rate may occur more frequently; however, any of the other adverse reactions listed above may also occur.
Suppression of growth has been reported with the long-term use of stimulants in children. Methylphenidate should not be used for severe depression. Methylphenidate should not be used for the prevention or treatment of normal fatigue states. There is some clinical evidence that methylphenidate may lower the convulsive threshold (that is, increase the likelihood of seizures) in patients with a prior history of seizures, with prior EEG abnormalities in the absence of seizures, and, very rarely, in the absence of a history of seizures and no prior EEG evidence of seizures. Safe concomitant use of anticonvulsants and methylphenidate has not been established. In the presence of seizures, the drug should be discontinued.
Visual disturbances have been encountered in rare cases. Difficulties with accommodation and blurring of vision have been reported. Marked anxiety, tension and agitation are contraindications to methylphenidate hydrochloride, since the drug may aggravate these symptoms.
Clearly the decision to take this drug should not be taken lightly. Unfortunately, many times children are placed on this drug based on their symptoms and without much of a medical examination. There are many reasons for a child to have problems concentrating, and there are even physical reasons for behavioral problems. Before a child is placed on a drug that so drastically affects the nervous system, some of these other health issues should at least be considered.
What Kind of Physical Exam Was Performed?
Too often a diagnosis of ADD or ADHD is handed down without any physical exam or lab work. We are not even talking about “alternative” medicine here, just good old-fashioned traditional medical diagnosis. A few of the medical problems that can cause a child to have poor concentration are as follows:
- Anemia—Anemia can cause symptoms that may be mistaken for ADD. A simple, inexpensive blood test called a CBC (complete blood count) should be taken.
- Low thyroid function—A child with an underfunctioning thyroid will have symptoms similar to ADD. A simple blood test can rule this out.
- Hypoglycemia—Low blood sugar. This is determined with more extensive blood testing. Also, if a child eats a lot of sugar and a lot of refined white flour, B vitamins are depleted—easily causing symptoms that can be described as ADD. A poor diet can also cause hypoglycemia. For many children, diet is the first, best place to look.
- Heavy metal toxicity—We have gotten better about screening children for lead. Children are not routinely screened for mercury or cadmium toxicity. Cadmium is found in cigarette smoke.
If a child is labeled with the ADD diagnosis, at the very minimum the doctor should have ruled out the above conditions.
A child may have problems with reading, and the reading problem may not become evident until fourth or fifth grade. Don’t expect teachers or administrators to be well informed about this type of concern. A child can have a reading problem, but the teachers and administrators may think that everything is fine because the grades are good and the standard test scores are within the normal range. Often the problem comes to a head in the fourth grade.
Between first and third grades most children learn to read. From fourth grade forward, children read to learn. A dyslexic child will rely on memory to get through school work, and since most students with dyslexia are of above average intelligence, this works for a while. In fourth grade this becomes almost impossible. This is when many students exhibit “symptoms” or behavior problems.
Don’t expect the teachers and administrators to be on top of this. Some are, others are not. You may suspect a reading problem and be told that your child is fine—the test scores said so. Sometimes the reading scores are just a little below the grade level, but the IQ score may indicate that the child is very intelligent. An intelligent child will make up for a reading problem by memorizing or using other skills.
The child may have trouble spatially recognizing letters and organizing them into sounds. In other words, the learning strategy that works for most everyone else does not work for this particular child. Clues to a future problem occur early. When the child learns to talk, you may have a very hard time understanding him or her. The child may omit syllables from words or insert syllables that don’t belong.
In preschool, an inability to rhyme words or to tell right from left, along with the earlier language problems, can be a clue to a future reading problem.
Another clue is extremely poor spelling (usually around fourth grade). The child may not only misspell the words, but if no one told you, you would have no idea what word he or she was trying to spell. Usually when a child misspells a word you can tell what that word is. Often when a dyslexic child makes an attempt to spell a word that he or she hasn’t memorized, it is very difficult to tell what the word is.
If you suspect a reading problem, you can contact the International Dyslexia Association. It can provide you with information about testing for dyslexia and other learning problems. The association can also help you find a tutor.
Children with learning problems may develop behavior problems or simply let their minds wander. They can become poor students, and it is easy for them to get labeled as ADD or ADHD.
Children with sensory integration problems do not properly process information from the environment. It can lead to unusual, even bizarre behavior. A simple example would be a child having trouble paying attention in class because he is focused on his uncomfortable shoes. It is hard to give a complete picture of sensory integration problems in this short section. To read more about sensory integration, get a copy of The Out-of-Sync Child by Carol Stock Kranowitz, M.A. The following information is taken from that book. If you know a child who exhibits strange behavior, buy this book.
A child may be oversensitive or undersensitive to a particular stimulus. Inappropriate processing of touch, movement, body position, sight, sound, smell and taste can all affect the behavior of the child.
Oversensitive to touch: The child avoids touching. He or she may have a fight-or-flight response to getting dirty, to the textures of clothes or food, or to another person’s light touch.
Undersensitive to touch: The child may be unaware of pain, temperature, or how things feel. He or she may wallow in mud, paw through toys purposelessly, chew on objects, rub against walls or furniture, and bump into people.
Oversensitive to movement: The child avoids moving or being unexpectedly moved, and may be anxious when tipped off balance. He or she may avoid running, climbing, sliding, or swinging, and may feel seasick in cars or elevators.
Undersensitive to movement: The child may crave fast and spinning movement. The child may move constantly, fidget, enjoy getting into upside-down positions, and be a daredevil.
Oversensitive to body position: The child may be rigid, tense, stiff, and uncoordinated. He or she may avoid playground activities that require good body awareness.
Undersensitive to body position: The child may slump or slouch. His or her actions may be clumsy and inaccurate. He or she may bump into objects, stamp feet, or twiddle fingers.
Oversensitive to sight: The child may be overexcited when there is too much to look at and may cover his or her eyes or have poor eye contact. He or she may be inattentive when drawing or doing deskwork, or overreact to bright light. He or she may be hyper-vigilant—on the alert and ever watchful.
Undersensitive to sight: The child may touch everything to learn because vision is not sufficiently coordinated. He or she may miss important cues such as facial expressions and gestures, as well as signposts and written directions.
Oversensitive to sound: The child may cover his or her ears to close out sounds or voices. He or she may complain about noises, such as vacuum cleaners and blenders.
Undersensitive to sound: The child may ignore voices and have difficulty following verbal directions. The child may not listen well to himself or herself and may speak in a booming voice. He or she may want the TV or radio to be loud.
Oversensitive to smell: The child may object to odors, such as a ripe banana, that other children do not notice.
Undersensitive to smell: The child may ignore unpleasant odors like soiled diapers. He or she may sniff food, people, or objects.
Oversensitive to taste: The child may strongly object to certain textures and temperatures of foods. He or she may often gag when eating.
Undersensitive to taste: The child may lick or taste inedible objects like clay and toys. He or she may prefer very spicy or very hot foods.
The Out-of-Sync Child gives examples of the difficulty the children with the various sensory integration problems have. It explains instances of unusual behavior in school and in play. The book gives drug-free strategies for parents. It helps parents to understand their children and gives them ways to help. If you know any child with a behavior problem, difficulty learning, playing or fitting in, buy this book. You will recognize children that you know by the behavior described in this book.
One thing worth noting: Sensory integration problems have been associated with low serotonin levels. Exercise increases serotonin. There are some doctors who think that we are seeing so much ADD (also possibly a serotonin problem) and sensory integration problems because children spend too much time in front of the TV, computers and video games and not enough time playing. Children need physical activity.
Hypoglycemia, thyroid problems, anemia, learning disabilities and sensory integration problems may all be misdiagnosed as ADD or ADHD. So far, we have only discussed things that should be recognized by a traditional medical doctor (although sensory integration is not yet a recognized diagnosis). The message is that even if you do not believe in alternative therapies, at least do a thorough investigation of the child’s problems before resorting to a mind-altering drug. Too often a drug is prescribed after a short interview, with no exam, no lab work and no investigation into the source of the child’s problem. You do not have to believe in alternative medicine to know that this is not right. Ritalin may affect the behavior of the learning-disabled fifth grader, but not improve grades. Ritalin may have no effect on the child with a sensory integration problem. Sometimes Prozac or heavier drugs are used—this is sad and unnecessary. New research indicates that children given anti-depressant drugs have an increased risk of suicide.
Alternative health care may offer some answers for children diagnosed with ADD or ADHD. Ritalin may offer symptomatic control—but no one knows why it works and it certainly does not address the cause. The idea of holistic care is to treat the patient, not the disease.
It seems strange to think of nutrition as “alternative care,” but many doctors see it that way. It is not uncommon to hear, “Vitamins do not cure disease,” from medical doctors. In a sense, they are right. Vitamins do not cure disease, but there is one very important exception. Vitamins cure vitamin deficiency. What constitutes vitamin deficiency is where all the controversy lies.
A recent survey conducted by the National Cancer Institute asked Americans about their diet from the previous day. Only 9% of those asked consumed three or more servings of vegetables or two or more servings of fruit on the previous day. One in nine surveyed had no servings of fruits or vegetables on the previous day.
Clearly, such eating habits create nutrient deficiency. When a diagnosis of ADD or ADHD is handed down, it is important to consider the child’s diet. Don’t think of it from an overly simplistic point of view (“he eats sugar, he gets wired”). Think of it as a poor diet creating a health problem.
Essential fatty acids: Packaged food, fried food and junk foods are loaded with hydrogenated oils and partially hydrogenated oils. Cells, especially nerve cells, need oil (fat) for the integrity of the cell membrane. Hydrogenated oils contain trans fats that do not belong in the diet and do not resemble anything in nature. One idea nutritionists have about the cause of ADD is that the trans fats become incorporated into the nerve cells in the brain, making transmission of nerve impulses faulty. Cell membranes built from trans fats may also be more permeable to chemical toxins and viruses. The solution is to give the child flax oil or DHA and remove all hydrogenated or partially hydrogenated oil from the diet. Even if the nerve cell theory is not true, this is an excellent suggestion for the health of your child.
Sugar: The New England Journal of Medicine recently published a flawed study that ostensibly disproved the link between hyperactive children and sugar consumption. This subject needs to be more closely examined. Children who eat a lot of sugar are vitamin deficient—especially in B vitamins and in minerals. A large percentage of their food is starch, which is turned to sugar by the body. Lots of parents think that bagels, English muffins, and sugar-free cereals are healthy. What they need to realize is that starch and sugar are essentially the same thing. Starch and sugar deplete B vitamins, vitamin C and minerals. Sugar also stresses the adrenal glands. Some holistic practitioners think that Ritalin mimics the output of the adrenal gland and if you give up sugar and support the adrenals, you will get a better result.
B vitamins: Deficiency in B vitamins causes neurologic symptoms. Traditional medicine only recognizes a thiamine deficiency as beriberi or a niacin deficiency as pellagra. What about subclinical deficiencies? Nervousness, poor concentration, fatigue, depression, poor sleep, forgetfulness and other symptoms can all be caused by not having enough B vitamins. B vitamins are very important for mental function. Eating a lot of sugar and refined carbohydrate depletes B vitamins. One of the most common deficiencies is folic acid. Folic acid is necessary to produce serotonin and norepinephrine (important neurotransmitters, or brain chemicals). Very often a child with sensory integration problems needs serotonin. Folic acid is found in fresh green produce. How many children get enough green vegetables? You can get liquid folic acid and a liquid multivitamin that can be placed in juice. Often the results are amazing. Of course nothing replaces a good diet, but that is sometimes difficult to accomplish.
Chemical additives: Read Ruth Winter’s book on chemical additives. You will see that many of them cause poor concentration, fatigue and trouble with the nervous system. Aspartame, sold under the brand names NutraSweet and Equal and found in many sugar-free snacks, creates methanol (a neurotoxin) in the body.
Minerals: ADHD has been linked to zinc/copper imbalance. Trace mineral deficiency has been linked to allergies. Minerals are often the cofactors that enable enzymes to work. Once again, a poor diet will be deficient in minerals.
Amino acids: Amino acids are the building blocks of protein. A diet high in junk food, poor digestion and vegetarianism can cause a deficiency of certain amino acids. There are lab tests to determine amino acid status.
Hidden allergies, Candidiasis and heavy metal toxicity: If you have gone to a nutritionist or an alternative health practitioner, you may have heard one or all of these terms. Candida albicans is a yeast that grows in the intestine. A diet high in sugar or heavy use of antibiotics can cause high levels of Candida, which causes nutritional deficiency and toxicity. The chemical toxins from the yeast can cause fatigue, nervousness and poor concentration (among a wide variety of symptoms). Hidden allergies can also be a problem. A favorite food, eaten every day, often is the culprit causing the ADD or ADHD. Great improvement is often achieved by following simple, basic nutritional rules.
Hands-on therapies: Chiropractors, craniosacral therapists, and therapists who work with muscle and fascia can all treat ADD and ADHD. The nervous system is involved, and structure can affect the nervous system. Many times jamming in the upper cervical spine affects the dura (a membrane covering the brain and spinal cord), affecting the entire nervous system. This can happen from the trauma of birth. Chiropractors and other practitioners treat this, often with great success. Craniosacral therapy is often useful; cranial bones move, much the same way that the gills of a fish move. This movement is vital to the correct function of the central nervous system. Birth trauma, head trauma or jaw dysfunction may interfere with this movement, creating the symptoms of ADD or ADHD. Babies who are born by Cesarean section are often in need of craniosacral therapy. The contractions of the birth canal serve to pump the craniosacral system during birth; babies born by C-section do not have this benefit. A chiropractic adjustment of the upper cervical vertebrae or the sacrum often addresses the craniosacral system.
Many times parents will try nutrition, put their child on a hypoallergenic diet or try some alternative therapy without getting the desired result. Then the parents are frustrated. The point is, all of the pertinent issues must be addressed. Giving a child who has a learning disability a dairy- and wheat-free diet may benefit his or her health, but it will not correct the learning disability. You can give vitamins to a child with sensory integration issues and still not solve the problem. By all means, improve the health and nutrition of your child—there’s a very good chance that it will improve the ADD. If not, there may be other issues that need to be addressed. The idea is not to treat ADD or ADHD, but rather treat the patient who has the condition. The goal is not merely to get rid of the symptom, but to find the cause and correct it. Health is not merely the absence of disease; health is optimal function.
The history of Cambodian art stretches back centuries to ancient crafts. Traditional Cambodian arts and crafts include textiles, non-textile weaving, silversmithing, stone carving, lacquerware, ceramics, wat murals, and kite-making. Beginning in the mid-20th century, a tradition of modern art began in Cambodia, though in the later 20th century both traditional and modern arts declined for several reasons, including the killing of artists by the Khmer Rouge. The country has experienced a recent artistic revival due to increased support from governments, NGOs, and foreign tourists.
The history of Cambodian art stretches back centuries to ancient pottery, silk weaving, and stone carving. The height of Khmer art occurred during the Angkor period; much of the era's stone carving and architecture survives to the present. In pre-colonial Cambodia, art and crafts were generally produced either by rural non-specialists for practical use or by skilled artists producing works for the Royal Palace. In modern Cambodia, many artistic traditions entered a period of decline or even ceased to be practiced, but the country has experienced a recent artistic revival as the tourist market has increased and governments and NGOs have contributed to the preservation of Cambodian culture.
Silk weaving in Cambodia has a long history. The practice dates to as early as the first century, and textiles were used in trade during Angkorian times. Even modern textile production evidences these historic antecedents: motifs found on silk today often echo clothing details on ancient stone sculptures.
There are two main types of Cambodian weaving. The ikat technique (Khmer: chong kiet), which produces patterned fabric, is quite complex. To create patterns, weavers tie and dye portions of weft yarn before weaving begins. Patterns are diverse and vary by region; common motifs include lattice, stars, and spots. The second weaving technique, unique to Cambodia, is called "uneven twill". It yields single or two-color fabrics, which are produced by weaving three threads so that the "color of one thread dominates on one side of the fabric, while the two others determine the colour on the reverse side." Traditionally, Cambodian textiles have employed natural dyes. Red dye comes from lac insect nests, blue dye from indigo, yellow and green dye from Prohut bark, and black dye from ebony bark.
Cambodia's modern silk-weaving centers are Takeo, Battambang, Banteay Meanchey, Siem Reap and Kampot provinces. Silk-weaving has seen a major revival recently, with production doubling over the past ten years. This has provided employment for many rural women. Cambodian silk is generally sold domestically, where it is used in sampot (wrap skirts), furnishings, and pidan (pictorial tapestries), but interest in international trade is increasing.
Cotton textiles have also played a significant role in Cambodian culture. Though today Cambodia imports most of its cotton, traditionally woven cotton remains popular. Rural women often weave homemade cotton fabric, which is used in garments and for household purposes. Krama, the traditional check scarves worn almost universally by Cambodians, are made of cotton.
Cambodia's best-known stone carving adorns the temples of Angkor, which are "renowned for the scale, richness and detail of their sculpture". In modern times, however, the art of stone carving became rare, largely because older sculptures survived undamaged for centuries (eliminating the need for replacements) and because of the use of cement molds for modern temple architecture. By the 1970s and 1980s, the craft of stone carving was nearly lost.
During the late 20th century, however, efforts to restore Angkor resulted in a new demand for skilled stone carvers to replace missing or damaged pieces, and a new tradition of stone carving is arising to meet this need. Most modern carving is traditional in style, but some carvers are experimenting with contemporary designs. Interest in using stone carving in modern wats is also reviving. Modern carvings are typically made from Banteay Meanchey sandstone, though stone from Pursat and Kompong Thom is also used.
Linux printf command
printf prints a formatted string to the standard output. Its roots are in the C programming language, which uses a function by the same name. It is a handy way to produce precisely-formatted output from numerical or textual arguments.
printf FORMAT [ARGUMENT]...
|FORMAT||FORMAT controls the output, and defines the way that the ARGUMENTs will be expressed in the output. See the Format section, below.|
|ARGUMENT||Each ARGUMENT will be inserted into the formatted output according to the definition of FORMAT.|
|--help||Display a help message, and exit.|
|--version||Display version information, and exit.|
The FORMAT string contains three types of objects:
- ordinary characters, which are copied verbatim to the output.
- interpreted character sequences, which are escaped with a backslash ("\").
- conversion specifications, which define the way in which ARGUMENTs will be expressed as part of the output.
Here is a quick example which uses these three types of objects:
printf "My name is \"%s\".\nIt's a pleasure to meet you." "John"
This command produces the output:
My name is "John". It's a pleasure to meet you.
Here, FORMAT is enclosed in double-quotes ("). There is one conversion specification: %s, which interprets the argument "John" as a string and inserts it into the output. There are three escaped character sequences: two occurrences of \" and one occurrence of \n. The sequence \" translates as a literal double-quote; it is escaped with a backslash so that printf knows to treat it as a literal character, and not as the end of the FORMAT string. \n is the sequence for a newline character, and tells printf to begin a new line and continue the output from there.
The power of printf lies in the fact that for any given FORMAT string, the ARGUMENTs can be changed to affect the output. For example, the output of the command in the above example can be altered just by changing the argument, "John". If used in a script, this argument can be set to a variable. For instance, the command
printf "Hi, I'm %s.\n" $LOGNAME
Each conversion specification begins with a % and ends with a conversion character. Between the % and the conversion character there may be, in order:
|-||A minus sign. This tells printf to left-adjust the conversion of the argument.|
|number||An integer that specifies field width; printf will print a conversion of ARGUMENT in a field at least number characters wide. If necessary it will be padded on the left (or right, if left-adjustment is called for) to make up the field width.|
|.||A period, which separates the field width from the precision.|
|number||An integer, the precision, which specifies the maximum number of characters to be printed from a string, or the number of digits after the decimal point of a floating-point value, or the minimum number of digits for an integer.|
|h or l||These differentiate between a short and a long integer, respectively, and are generally only needed for computer programming.|
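For example, a command such as the following (with arbitrary illustrative values) combines a left-adjusted field width of 10 for a string with a field width of 5 for an integer:

printf "%-10s|%5d|\n" "apples" 42

...should produce the following output:

apples    |   42|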
The conversion characters themselves, which tell printf what kind of argument to expect, are as follows:
|conversion character||argument type|
|d, i||An integer, expressed as a decimal number.|
|o||An integer, expressed as an unsigned octal number.|
|x, X||An integer, expressed as an unsigned hexadecimal number|
|u||An integer, expressed as an unsigned decimal number.|
|c||An integer, expressed as a character. The integer corresponds to the character's ASCII code.|
|f||A floating-point number, with a default precision of 6.|
|e, E||A floating-point number expressed in scientific notation, with a default precision of 6.|
|p||A memory address pointer.|
|%||No conversion; a literal percent sign ("%") is printed instead.|
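To see several of these conversion characters side by side, a command along these lines (using the arbitrary values 255 and 3.14159) can be tried:

printf "%d %o %x %X %f %e\n" 255 255 255 255 3.14159 3.14159

...should print:

255 377 ff FF 3.141590 3.141590e+00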
A width or precision may be represented with an asterisk ("*"); if so, the asterisk reads in an argument, which must be an integer, and uses that value. For example,
printf "%.*s" 5 "abcdefg"
...produces the following output:

abcde

The following table represents the way that printf would output its ARGUMENT, "computerhope", using various FORMAT strings. Each string is enclosed in quotes so that it's easier to see the exact extent of each:
|FORMAT string||ARGUMENT string||output string|
Please note that printf requires the number of conversion strings to match the number of ARGUMENTs; it maps them one-to-one, and expects to find exactly one ARGUMENT for each conversion string. The only exception is a conversion string which uses an asterisk; such strings require two arguments each.
Conversion strings are always interpreted from left to right. For example, the following printf command:
printf "%d plus %5f %s %.*f." 5 5.05 "equals" 3 10.05
...produces the following output:
5 plus 5.050000 equals 10.050.
Interpreted Escaped Character Sequences
The following character sequences are interpreted as special characters by printf:
|\"||prints a double-quote (")|
|\\||prints a backslash (\)|
|\a||issues an alert (plays a bell)|
|\b||prints a backspace|
|\c||instructs printf to produce no further output|
|\e||prints an escape character (ASCII code 27)|
|\f||prints a form feed|
|\n||prints a newline|
|\r||prints a carriage return|
|\t||prints a horizontal tab|
|\v||prints a vertical tab|
|\NNN||prints a byte with octal value NNN (1 to 3 digits)|
|\xHH||prints a byte with hexadecimal value HH (1 to 2 digits)|
|\uHHHH||prints the unicode character with hexadecimal value HHHH (4 digits)|
|\UHHHHHHHH||prints the unicode character with hexadecimal value HHHHHHHH (8 digits)|
|%b||prints ARGUMENT as a string with "\" escapes interpreted as listed above, with the exception that octal escapes take the form \0 or \0NN|
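A brief example combining a few of these sequences (using GNU printf; the gap between "Name:" and "Value" is a tab, and \101, \102, and \103 are the octal codes for A, B, and C):

printf "Name:\tValue\n\101\102\103\n"

...should produce:

Name:   Value
ABC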
Quoting In The Shell
Be careful with the way your shell interprets quoted strings. If your shell is not interpreting your quoted string correctly, try using single-quotes rather than double-quotes.
printf "hello\nworld\n!"

Prints the following output:

hello
world
!
printf "%b" 'hello\nworld\n!'
Prints the same output as the above example.
printf "Your home folder is %s.\n" $HOME
Prints a string telling you the location of your home directory.
The Montessori Method is the method of teaching developed by Dr. Maria Montessori. She was one of Italy’s first female physicians, and was given the assignment to work with young children. Over her more than three decades of educating children, she established many ideals and a philosophy to help children learn. She encouraged individualized teaching through the use of concrete materials rather than abstract theory. Through the use of the five senses and manipulative (hands-on) materials, the child is allowed to progress at his or her own individual learning rate. The prepared environment encourages the child to use the abilities that are innate to him or her.
A study by Angeline Lillard at the University of Virginia, published in the Journal of School Psychology in 2012, compared children in Montessori programs with children in conventional programs. She found that children in Classic Montessori programs showed significantly greater school-year gains on measures of executive function, reading, math, vocabulary, and social problem-solving.
Social and Behavioral Deficits
Social Deficits and Empathy Deficits. Compounding their communication difficulties, many children with autism also show profound empathy deficits. They develop only a very limited appreciation, or no appreciation at all, of other people's feelings and ideas. They don't recognize and respond to faces as do normal children, and they thus do not learn that each face belongs to an individual separate person. To the children with severe autism, their own feelings and ideas are the only feelings and ideas that appear to exist. Children with autism may have no reaction to another person's crying, for example. They may have no idea that their words and actions affect other people. Many children with autism are completely unaware of their surroundings and other people in their surroundings. It is impossible for some children with autism to take another person's perspective without deliberate training.
Given their empathy and communication deficits, children with autism experience the social world as unpredictable and frightening. They find social interactions to be unnatural and quite stressful. Rather than embracing relationships, most children with autism try to avoid them, choosing instead to take refuge and comfort in their own isolated worlds. They do not reciprocate play and they do not engage in normal play activities without prompting. They also avoid meeting other people's gaze, and tend instead to fixate their eyes away from people, on inanimate objects or parts of objects.
When higher functioning children with autism do choose to be social, their deficits in social understanding and empathy prevent them from smoothly engaging with others. For example, a high functioning child with autism may know he is supposed to use words to initiate a conversation with other children, but not know quite how to use them appropriately. Correspondingly, he may walk up to a group of children and attempt to initiate a conversation by echoing an out-of-context phrase he heard previously such as, "It was a dark and stormy night" rather than by making eye contact and saying hello. Though well intentioned, such odd behavior is, of course, quite baffling to children who don't already understand about autism.
Behavioral Deficits. In addition, children with autism may exhibit odd emotional behavior that is not easily understood by others. The social fears of children with autism can manifest as compulsive behaviors and/or aggression. Many require order and routine to be maintained as they transition from one activity to another. They may endlessly repeat certain ordering behaviors that serve a self-soothing function. Changes to routine can easily frighten them, resulting in tantrums and aggression. Aggression is not always directed outward, but instead may result in self-injurious behaviors.
In the legal sense, a penumbra is a logical extension of a rule, law, or legal statement that provides people with rights not explicitly delineated in the law. This concept dates to 19th century legal precedents in the United States. Justice Oliver Wendell Holmes contributed significantly to the body of legal discussion on this concept and referred to it in several court cases. One of the most famous invocations of the legal penumbra occurred in the 1965 Griswold v Connecticut case.
Under the logic of this legal theory, a law can imply rights without stating them outright. As long as a reasonable interpretation of a law could provide for a given right, a judge could argue a legal matter falls within the penumbra of the law. While the reasoning may be somewhat shaky and the legal basis can be difficult to prove, if attorneys and judges can argue the matter persuasively, people may accept it.
The right to privacy is an excellent example of a penumbra. Many people believe this right is enshrined in the Constitution of the United States. It is actually not. Instead, judges and legal scholars argue that clauses like the First Amendment include a right to privacy in their penumbra, and numerous legal cases have established a body of case law to support this belief, making it difficult to challenge. In Griswold v Connecticut, a challenge to a ban on selling contraceptives, the argument was that this law violated marital privacy and, by extension, the First Amendment.
This term is borrowed from astronomy, where the penumbra is the region of partial shadow surrounding the complete shadow (the umbra) cast during an eclipse. Rather than being definitively stated in a law, the rights are implied in the penumbra, making it a bit of a legal gray area. It is possible to challenge the logic an attorney or scholar uses when laying out the evidence for attaching a given right to a particular rule of law, using supporting documentation like other laws, records from people who participated in the drafting of the law, and so forth.
Legal scholars, attorneys, and judges rely on theories like this to interpret the law, adding meaning and depth to it over time. If people have to read the law literally, they may find loopholes making it difficult to judge certain kinds of cases fairly. The law often has trouble keeping pace with society, and being able to extend logical rights to people on the basis of precedent and implications in existing law is an important legal tool.
Origin and History
Unlike the common English walnut, the black walnut is native to North America, specifically the Mississippi drainage basin. Today, the common eastern black walnut is grown in the Southeast and California.
The nut shells and wood of the black walnut tree are prized perhaps more than the nutmeat itself. Native Americans made dye from the nut husks, and woodworkers value the extreme hardness and straight grain of the timber. Other industries use the hard shells of black walnuts in plastics, glues, sand-blast cleaners and metal polishers. Because their flavor is so strong, most people don't find black walnuts appealing as a snack. They work wonderfully, however, in baked goods or confections such as cookies, fudge, brownies, cakes and candies. They can even be used as a meat substitute in certain dishes.
Diamond has taken the hassle out of black walnuts: we make them ready to use straight from the package. Black walnuts have more protein than English walnuts and contain arachidonic fatty acid.
I found this in a PowerPoint presentation on the net and am trying to solve it.
It's a conic-section problem from the ellipse chapter.
- The mean distance of Mars from the Sun is 142 million miles.
- Perihelion = 128.5 million miles
- 1. What is the aphelion?
- 2. What is the equation for Mars's orbit?
- Round to nearest tenth if necessary.
- Hint: The orbit of a planet about the Sun is an ellipse, with the Sun at one focus. The aphelion of a planet is its greatest distance from the Sun and the perihelion is its shortest distance. The mean distance of a planet from the Sun is the length of the semi-major axis of the elliptical orbit.
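Working from that hint, one way to set it up (assuming the mean distance is the semi-major axis a, distances in millions of miles, and the ellipse centered at the origin with the major axis along the x-axis):

\[
a = 142, \qquad \text{perihelion} = a - c = 128.5 \;\Rightarrow\; c = 13.5
\]
\[
\text{Aphelion} = a + c = 2a - \text{perihelion} = 2(142) - 128.5 = 155.5 \text{ million miles}
\]
\[
b^2 = a^2 - c^2 = 142^2 - 13.5^2 = 20164 - 182.25 = 19981.75 \approx 19981.8
\]
\[
\frac{x^2}{20164} + \frac{y^2}{19981.75} = 1
\]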
Lesson Plans and Worksheets
Browse by Subject
Radius Teacher Resources
Find Radius educational ideas and activities
Tenth graders explore and define the radius and diameter of a circle. For this geometry lesson, 10th graders calculate the circumference and area of a circle using real world objects and shapes. They discuss chords and lines tangent to a circle as it relates to tangency.
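For reference, the relationships that lesson (and several below) exercises are the standard circle formulas, where r is the radius and d = 2r the diameter:

\[
C = \pi d = 2\pi r, \qquad A = \pi r^2
\]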
For this Sombrero Galaxy worksheet, learners observe infrared images taken by the Spitzer Infrared Telescope and the Hubble Space Telescope. They answer 9 questions about the details of the images such as the radius of the stellar component, the thickness of the dust disk and the diameter of the bright nuclear core.
Students investigate the relationship between the radius and area of circles that change size over time. They examine a puddle at a crime scene, noting that it is certain to be irregularly shaped and that the floor would be neither smooth nor level. This raises several questions about the mathematics of true reality, as well as the physical chemistry involved.
In this volume and surface area worksheet, 10th graders solve and complete 12 different types of problems. First, they find the volume and total surface area of a given figure. Then, students find the height and radius of a cylinder illustrated. They also find the volume of the remaining solid shown.
In this black holes worksheet, learners read about black holes and solve 3 problems where they calculate the Schwarzschild radius of one, the number of grams/second a quasar luminosity implies for another and the number of suns per year a supermassive black hole consumes at different efficiencies.
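For readers unfamiliar with the quantity calculated in that worksheet, the Schwarzschild radius of a non-rotating mass M is

\[
r_s = \frac{2GM}{c^2},
\]

where G is the gravitational constant and c is the speed of light; the worksheet presumably supplies the mass values to plug in.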
In this planet models activity, students use the radius of a given planet and its mass to answer questions using a model of a planet with a given radius. They determine the volume of the planet's inner core, the volume of its outer shell, and the total mass of the planet.
Sixth graders listen to the story of Sir Cumference and the Dragon of Pi. They measure the circumference and diameter of circular objects, record measurements in a table, and use calculators to divide the circumference by the diameter. They complete a homework assignment in which they find the radius of a pizza.
Students construct circles using Cabri Jr. In this circle construction using Cabri Jr. lesson, students construct a circle using Cabri Jr. Students measure the diameter and radius. Students find points on the circle in order to find an equation to model their circle. Students move their circle and discuss the differences in the equation.
In this spherical tanks and fuel level worksheet, students are given a diagram showing the effects of gravity or acceleration on the level of fuel in a spherical tank. Students solve 3 problems including finding the general formula for the radius of a disk, finding the integral for the volume of fluid in a tank and finding the equation for the change in height of the fluid with respect to time as fluid is being withdrawn from a tank.
Tenth graders explore circles. In this geometry lesson students investigate the relationships between the diameter and circumference of a circle and between the radius and area of a circle. The dynamic nature of TI-Nspire provides opportunity for conjecture and verification.
Electrodiagnostic Testing (NCV)
Nerve Conduction Velocity Testing (NCV) is used to diagnose nerve damage or dysfunction and to confirm a particular diagnosis. It can usually differentiate injury to the nerve fiber (axon) from injury to the myelin sheath surrounding the nerve, which is useful in diagnostic and therapeutic strategies.
How is it performed?
During the test, flat electrodes are placed on the skin at intervals over the nerve that is being examined. A low intensity electric current is introduced to stimulate the nerves.
The velocity at which the resulting electric impulses are transmitted through the nerves is determined when images of the impulses are projected on an oscilloscope or computer screen. If a response is much slower than normal, damage to the myelin sheath is implied. If the nerve’s response to stimulation by the current is decreased but with a relatively normal speed of conduction, damage to the nerve axon is implied. NCV testing is useful in arriving at an accurate diagnosis when combined with other diagnostic tests, patients’ symptoms and clinical testing.
NCV Testing is useful in the diagnosis of:
- Peripheral Neuropathies
- Carpal Tunnel Syndrome
- Tarsal Tunnel Syndrome
- Other nerve ailments
Abstraction: Nancy Cassell
Description: Nancy Cassell discusses the importance of landscape, nature, and place to her painting; the influence of the Surrealists and Abstract Expressionists; and the sexual nature of her subject matter. She also mentions how her childhood in Tennessee; images of creek beds, tornadoes, and orchids; and the unconventional techniques she has developed using black ink on white canvas have given her a unique method and source of visions to channel her subconscious into her work. This segment may be appropriate for more mature students.
- Use to spark a discussion about artists' methods and intentions as well as the creative process in general. Have students create art based on the video and your discussion.
- Use the segment in conjunction with a discussion of the purposes of art (e.g., recalling memories from childhood).
- Use in a discussion of color. Compare Cassell's palette (black and white) to Gerald Ferstman's.
- Compare and contrast the techniques and methods different artists use in their approach to the same medium, using painter profiles from Spectrum of Art and Through Artists' Eyes.
- Use the segment as part of a careers in the arts unit or as an introductory activity before an artist-in-residence visits your classroom.
Change agricultural techniques:
A large share of the causes of desertification lies in agriculture itself. Farming should be done in a sustainable way that prevents soil runoff. Crops should be rotated and legumes planted to maintain the nitrogen level in the soil. Farming on terraces prevents soil erosion in hillside areas. Irrigation methods should also be sustainable, and not all water should be drawn out of the land. Every effort to minimize water use in our lives and to conserve it is necessary, because it is the lack of water in the ground that turns land into desert.
Goats and sheep should not be allowed to overgraze the land. Uncontrolled grazing leads to exposed topsoil. Encourage in-barn feeding, which is more controlled.
Planting trees can decrease and stop the rate of desertification. Trees prevent soil erosion by wind and water and sustain the topsoil layer. This way they protect the land from becoming deserted. Planting trees and plants that are drought resistant is also helpful because they do not require that much water from the ground. Any form of vegetation cover protects the soil underneath.
Make an overall change:
All the environmental factors are interrelated. Desertification is indirectly linked to global warming and all other environmental changes. Changing lifestyles and adopting sustainable practices like saving energy, saving water and reducing waste eventually leads to a decrease in desertification. A shift to renewable energy sources like solar power is another thing we can do.
A scalar field such as temperature or pressure, where intensity of the field is graphically represented by different hues of color.
In mathematics and physics, a scalar field associates a scalar value to every point in a space. The scalar may either be a mathematical number, or a physical quantity. Scalar fields are required to be coordinate-independent, meaning that any two observers using the same units will agree on the value of the scalar field at the same point in space (or spacetime). Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields, such as the Higgs field. These fields are the subject of scalar field theory.
A scalar field oscillating as a parameter increases; red represents positive values, purple represents negative values, and sky blue represents values close to zero.
Physically, a scalar field is additionally distinguished by having units of measurement associated with it. In this context, a scalar field should also be independent of the coordinate system used to describe the physical system—that is, any two observers using the same units must agree on the numerical value of a scalar field at any given point of physical space. Scalar fields are contrasted with other physical quantities such as vector fields, which associate a vector to every point of a region, as well as tensor fields and spinor fields. More subtly, scalar fields are often contrasted with pseudoscalar fields.
Uses in physics
In physics, scalar fields often describe the potential energy associated with a particular force. The force is a vector field, which can be obtained as the gradient of the potential energy scalar field. Examples include the gravitational potential energy of an object in the Earth's gravitational field and the electrostatic potential energy of a charge in an electric field.
Scalar fields like the Higgs field can be found within scalar-tensor theories, using as scalar field the Higgs field of the Standard Model. This field interacts gravitationally and Yukawa-like (short-ranged) with the particles that get mass through it.
Scalar fields are found within superstring theories as dilaton fields, breaking the conformal symmetry of the string, though balancing the quantum anomalies of this tensor.
Scalar fields are supposed to cause the accelerated expansion of the universe (inflation), helping to solve the horizon problem and giving a hypothetical reason for the non-vanishing cosmological constant of cosmology. Massless (i.e. long-ranged) scalar fields in this context are known as inflatons. Massive (i.e. short-ranged) scalar fields are proposed, too, using for example Higgs-like fields.
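As a minimal worked sketch of the potential-energy relationship mentioned above (using the familiar near-surface gravitational potential, with m the mass, g the gravitational acceleration, and z the height):

\[
V(x, y, z) = mgz, \qquad \mathbf{F} = -\nabla V = (0,\; 0,\; -mg)
\]

The scalar field V assigns a single number to every point in space, while its negative gradient yields the downward-pointing force vector field.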
When I am first teaching shapes it's fun to focus on one at a time every now and then. Not only is it easy reinforcement of shape recognition, the crafts always turn out to be adorable. Depending on your individual child you can choose to focus on just colors, just comparing/sorting big and small shapes, both, or simply let them create. This is a versatile activity with lots to offer.
1. Gather your materials. You will need 1 full size piece of construction paper, multiple colors of smaller (scrap is perfect) pieces, glue, and either scissors or paper punches.
2. Draw a large circle on your paper.
3. Cut out. If your child is able to do this themselves, by all means they should be doing it!
4. Cut out or punch 2 sizes of smaller circles from the colored scrap paper. If you want to make it more challenging use 3. Remember to label the circles as "Large" and "Small".
5. Add a designated number of glue dots. We are learning about colors, shapes and sizes, so why not throw in some counting? Of course, if this is too much to take in, back off; a frustrated child isn't learning a thing.
6. Add the dots. Here you have choices to make: you can say "Can you find the small green dot and add it?" or maybe "Choose your favorite large dot." Mix up the questions so there is a small challenge for them. Take turns if you want. My son liked being the asker, but by asking the questions he was still learning the same concepts.
7. Let dry.
When we think of foods, we often picture fruit and vegetables. So, where do mushrooms fit in? They are not actually plants, and don’t require pollination or sunlight for energy. Instead, they obtain their nutrients from decaying matter and the root systems of living plants. This makes mushrooms part of the fungi kingdom, which also includes yeast and mold. Let’s look at what mushrooms are and where they fit in the pyramid.
What category are mushrooms on food pyramid?
Mushrooms fall into the vegetable category on the food pyramid; they don’t fit neatly into any other food group. Because of their nutritional value and their role as decomposers, they are classified as a vegetable. The food pyramid is divided into four food groups: grains, fruits, vegetables, and meat, and the bottom level of the pyramid is made up entirely of plant foods. Mushrooms are also of interest for brain health: as we get older, we need to protect our brains from neurological diseases, and mushrooms may play a role in that protection.
First of all, it’s important to know what mushrooms are. Although they are often grouped with fruits and vegetables, they are technically classified as fungi. They don’t grow out of plants, they don’t need to be pollinated, and they don’t need to convert sunlight into energy. Instead, they get their nutrients from decaying matter or the root systems of living plants. While they do resemble plants, mushrooms don’t have the same cell structure that plants do. For example, they don’t have cellulose, the substance that gives plants their structure.
Is mushroom a protein or carb?
As a member of the fungi kingdom, mushrooms are rich in essential amino acids such as leucine and lysine, which are lacking in many cereal foods. Proteins are important for the growth and repair of body tissues. Some mushrooms contain anticancer and antibacterial properties. Mushrooms are also low in calories and contain only a small amount of fat. The average serving size is about a cup raw or half a cup cooked.
Mushrooms contain small amounts of vitamin D. Vitamin D is needed by the body to fight cancer and other diseases, and taking supplements of vitamin D may help prevent cancer. Choline, another compound found in mushrooms, has been linked to a lower risk of some cancers, though high intakes have also been associated with an increased risk of prostate cancer. It is important to remember that a mushroom’s nutritional content is different from a supplement’s. In addition, mushrooms contain a moderate amount of unsaturated fats.
Consuming about a cup of cooked white button mushrooms provides only 15 calories and 0.2 grams of fat. However, oyster mushrooms are higher in carbs and contain 3.6 grams of fiber per serving. Portabella mushrooms have four grams of carbs per cup, while shiitake mushrooms contain about 19 grams. Overall, mushrooms are a low-carb food that is high in fiber. They also contain no saturated fat.
Are mushrooms in the meat family?
Mushrooms are a popular vegetable to use as a substitute for meat. Their flavor and texture can mimic the taste of meat, while only containing about 15 calories per cup. Unlike other vegetables, mushrooms have a low energy density and can be used in many recipes. However, you should keep in mind that mushrooms do not have the same nutritional value as meat. For example, they do not provide the same amount of iron as meat does. However, they do offer a wide range of nutrients that make them a great alternative to meat. And because they have so few calories, you can easily substitute mushrooms for meat in most recipes.
If you are wondering if mushrooms are considered a member of the meat family, you may be surprised to learn that they are not. As fungi, mushrooms are classified as a vegetable but offer nutrients in other food groups. In fact, mushroom consumption is gaining steam as more consumers adopt a plant-based diet. However, this doesn’t mean that mushrooms should be thrown out. Rather, they can be incorporated into meals in many ways, including as main entrees for plant-based diets.
What is a mushroom considered?
When considering what foods to include in your diet, mushrooms are a great option. Though they are not technically plants or vegetables, they are still considered a vegetable and contain a variety of important nutrients. You can enjoy mushrooms on your favorite pizza or stir fry and get a variety of nutrients in your daily diet. However, you should keep in mind that mushrooms are not technically vegetables because they are fungi.
While mushrooms fall into the vegetable group, they are not technically plants. Their lack of photosynthesis means they can’t use the sun to produce food. Instead, they obtain their nutrients from decayed matter and the root systems of living plants. While they are not considered to be fruits, mushrooms are part of the fungi kingdom, which includes mold and yeast. Therefore, they are a great addition to your meals!
As an added bonus, mushrooms are low in calories and contain little to no fat or carbs. Their nutritional content depends on the variety, substrate, development, and processing conditions. Adding mushrooms to your diet is a great way to boost your health. Mushrooms are a versatile, high-nutrient food that is low-carb and low-fat, while still providing a small amount of protein. A serving of mushrooms is approximately one cup raw or half cup cooked.
Do mushrooms count as 5 a day?
Although the myth that mushrooms count toward the 5-a-day target has long persisted, they are not fruits and should not be counted toward your daily fruit allowance. Mushrooms are not plants or fruits, and they don’t require sunlight to grow. They are instead treated as a vegetable by the U.S. Department of Agriculture.
Mushrooms are low-carb, almost fat-free, and have a great umami flavor, similar to that of meat. While they don’t contain as much protein as meat, they are still low-calorie and a good source of vitamin B. One serving of mushrooms has about 20 calories and zero grams of fat, making them the perfect substitute for meat. And because they’re low-calorie and low-fat, they’re perfect for people on a diet.
The USDA’s Food Patterns report that adding mushrooms to daily meals increases the amount of potassium, fiber, and other nutrients. However, the impact on overall calories, sodium, and saturated fat was minimal. In addition, eating more mushrooms fits in with the USDA’s “Make Every Bite Count” campaign and 2020-2025 DGA. This is great news for vegetarians who’ve been avoiding mushrooms for a while.
Are mushrooms a vegetable serving?
Although mushrooms are classified as fungi, they are part of the vegetable group and provide many of the same nutritional attributes as vegetables. They are a good source of copper, selenium, riboflavin, and niacin. Unlike other vegetables, however, mushrooms do not contain any fat or cholesterol, and they are low in sodium. Hence, they are an excellent vegetable choice.
Mushrooms are low in calories, and they provide fiber, protein, and antioxidants. They may even reduce the risk of some serious diseases. They can be found in many supermarkets and are commonly substituted for meat in several dishes. Mushrooms are also commonly used in traditional medicine. They are loaded with antioxidants, vitamins, minerals, and amino acids. Although they do not constitute a complete vegetable serving, a single cup of cooked mushrooms has around 50 calories and is a great option in a low-calorie diet.
Although the FDA has not yet approved their inclusion on the food pyramid, their increasing popularity among consumers supports the concept of making mushrooms a healthy vegetable. Studies have shown that they help improve the quality of a diet, and they can also enhance flavor without adding sodium. Consequently, they deserve a prominent place in dietary guidance. To make the best possible impact on public health, researchers must stay abreast of the latest research linking mushrooms to human health. Additionally, they should become familiar with food kingdoms and their nutritional profiles.
Why are mushrooms not good for health?
Although they can be a bit scary to some, mushrooms are actually very healthy and have many uses. Mushrooms have long been used for their nutritional value and medicinal benefits. They are a great source of protein, vitamins, and minerals, and are especially high in vitamin D, which promotes calcium absorption. They also have fiber and are rich in immune-boosting antioxidants. You should always cook mushrooms before eating them, though, to get the most nutritional benefits.
Mushrooms are rich in antioxidants, which help the body eliminate harmful free radicals. Free radicals are produced during the metabolic process, but too many of them can damage cells and lead to various health issues. In addition, the antioxidant content in mushrooms may help prevent different kinds of cancer. The presence of selenium has also been linked to a lower risk of some types of cancer. In addition, mushrooms contain high levels of glutathione, which is an anti-inflammatory and may prevent certain types of cancer.
Is mushroom a vegetable or protein?
Many health advocates consider mushrooms a vegetable, but there is more to their benefits than simply being a light, low-calorie food. In fact, mushrooms contain over 15 different vitamins and minerals, including folate and vitamin B6, as well as magnesium, zinc, and potassium. Mushrooms are also packed with anti-inflammatory compounds and antioxidants. These properties make mushrooms an excellent food for those suffering from inflammation and autoimmune disorders.
Mushrooms are commonly classified as vegetables, but they actually belong to the fungus kingdom and are not part of either the plant or animal kingdom. Although they share some features with both plants and animals, they are cut, cooked, and eaten in the same way as vegetables, so they are usually considered a vegetable and treated as such. In this article, I’ll explain why mushrooms count as both protein and vegetable, and highlight some of the benefits of eating mushrooms.
The most important thing to note about mushrooms is that they are fungi, not plants. They do, however, contain protein and dietary fiber, along with potassium and iron. Because they are low in calories, they can be prepared in the same manner as vegetables, ideally sautéed with garlic and olive oil. Mushrooms are used for food, as well as for medicinal purposes and as a source of energy. |
Throughout history, Native Americans have utilized several different types of guns for hunting and warfare. In the early 1600s, the first guns were introduced to the indigenous people of North America by the Europeans. These guns were not only used for hunting, but also for protection from other tribes and the Europeans.
The earliest guns used by Native Americans were matchlock muskets, which were generally reserved for the tribes' leaders and warriors. These muskets were heavy, cumbersome, and difficult to reload. Despite this, they were incredibly accurate and had a longer effective range than most bows and arrows.
By the late 1700s, the flintlock musket had become the most commonly used gun among Native Americans. These guns were lighter, more efficient, and easier to reload than the matchlock musket. They also had a quicker firing rate and were more accurate at longer distances. These guns were used for both hunting and warfare, and were the most common type of gun used by Native Americans until the late 1800s.
In the mid-1800s, the introduction of the repeating rifle changed the way Native Americans used guns. These repeating rifles could fire several shots in quick succession, making them ideal for hunting and warfare. The repeating rifle also allowed for more accuracy and longer ranges than the flintlock musket.
The most commonly used gun by Native Americans in the late 1800s and early 1900s was the lever-action rifle. These rifles were lighter and easier to reload than the earlier muskets and repeating rifles. Lever-action rifles were not only used for hunting, but also proved to be very effective in warfare, as they allowed warriors to fire quickly and accurately from a distance.
In the early 1900s, semi-automatic and fully automatic weapons were introduced to Native Americans. These weapons were heavier and more expensive than earlier guns, but offered a much higher rate of fire, allowing for more accurate and effective warfare. These weapons were not widely used by Native Americans, however, and were generally only used by the tribes' leaders and warriors.
Throughout history, Native Americans have used a wide variety of guns for hunting and warfare. From the matchlock musket of the early 1600s to the semi-automatic and fully automatic weapons of the early 1900s, Native Americans have always been innovative in their use of guns.
Native Americans have used guns for centuries, but the type of guns and their effectiveness have varied greatly. Traditional Native American guns were typically single-shot, muzzle-loaded weapons, such as the flintlock rifle. These guns were effective for hunting, but had limited accuracy and range. Modern firearms, such as semi-automatic rifles, are far more accurate and have a much greater range.
The flintlock rifle was the most common type of gun used by Native Americans. It was a single-shot, muzzle-loaded weapon that used flint and steel to ignite the powder in the barrel. The flintlock was effective for hunting, but it was not very accurate or powerful. It had a short range and was unreliable in wet weather. The flintlock was also slow to reload, so it was not effective in close combat.
Modern firearms are far more accurate and powerful than traditional Native American guns. Semi-automatic rifles, such as the AR-15, are capable of firing multiple rounds in rapid succession. These guns are very accurate and have a much greater range than traditional guns. They are also much more reliable in wet weather. Furthermore, modern firearms are much easier to reload, making them more effective in close combat.
In conclusion, traditional Native American guns were effective for hunting, but were limited in terms of accuracy and range. Modern firearms are far more accurate and powerful, and are much more reliable in wet weather. They also have a much greater range and are easier to reload, making them more effective in close combat. For these reasons, modern firearms are far more effective than traditional Native American guns. |
1. Teach Students to Manage Digital Planners
With a template in OneNote and a bit of patience, Alberto Herraez taught his fifth grade students to manage their time with a digital planner.
“If you implement this, it will oftentimes be the first time a student has had a choice in their learning, and they won’t know how to do it,” he said. “It will be a process. At the beginning, it’s very hard.”
However, his students soon learned to manage their time with the help of their planners, inputting certain activities and how long they would take on Monday and Tuesday, then adjusting tasks for Wednesday and Thursday based on what wasn’t done and what new tasks they needed to complete.
The students also used the planners to create and track long-term goals. Using the SMART goals approach, the students learned to set actionable and measurable goals.
“I then taught Alberto’s fifth graders, and by the end of sixth grade, they had been in this environment for two years, and you could see the results,” Mario Herraez said.
2. Create a Personalized Learning Playlist
Much like a music playlist, a student-created learning playlist can showcase their preferences while still ensuring they’re learning all the necessary materials.
Using a site designed to look like Netflix’s web interface, students in the Herraezes’ classes could choose lessons to complete based on their learning styles. The user profiles correlated to how well a student was doing in the class. The “recently watched” section featured lessons from the previous week, and the “keep watching” row offered different types of activities that students could choose from based on their learning preferences.
The eTwinz used athlete comparisons to make their point. “What if we made Tom Brady do snowboarding?” they asked. “What if we took Simone Biles to swimming class?”
Instead of forcing students to learn in a particular way, the playlist curates choices that respect students’ differences and ability to choose.
3. Personalize Learning Activities with Choice Boards
Similar to the playlists, the Herraezes also used choice boards in their classrooms. These activity sheets feature a grid of learning activities, each worth a different number of points ranging from 10 to 35. The number of points depends on the amount of effort the activity takes, with “rewrite the preamble of the constitution” counting for 10 points and “write and perform a rap of the constitution” counting for 20 points.
“They needed to reach 100 points by the end of the term, and they need to complete at least one activity from each category,” Alberto Herraez explained. The categories referred to different learning styles.
4. Design Classrooms to Reflect Modern Work Environments
School is designed to prepare students for the workforce. The traditional model of students sitting at desks in rows facing the front of the room was implemented during a time when it reflected the workplace of the era. The workplace of today is not the same. |
Enteroviruses can cause severe infections and can occur in large outbreaks. Young children are especially vulnerable. These viruses cause a wide variety of diseases, with aseptic meningitis being the most common in both adults and children. Non-polio enterovirus infections are not notifiable in most countries.
Enteroviruses affect more than a billion people worldwide each year, although exactly how many people are infected is unknown: some infections are asymptomatic, and screening for these diseases is not routinely performed. With numbers increasing every year, these viruses point to an urgent need for better screening to improve child health in many low-resource countries.
What causes Enteroviral diseases
Enteroviruses are a group of viruses that cause a number of illnesses, usually mild febrile illness or mild respiratory illness like the common cold; they may also cause some acute diarrheal outbreaks. They can, however, occasionally lead to severe disease. Their main route of transmission is fecal-oral.
On the basis of their pathologies, enteroviruses were originally classified into four groups, polioviruses, Coxsackie A viruses, Coxsackie B viruses and echoviruses.
Enteroviruses isolated more recently are named with a system of consecutive numbers based on the VP1 capsid region of the genome. Enterovirus serotypes are not exclusively associated with particular disease syndromes, but sometimes have a propensity to cause particular symptoms.
Echovirus serotypes are frequently reported to be responsible for meningitis, but also are responsible for most enterovirus infections. Among coxsackieviruses, the leading serotypes associated with central nervous system diseases are B1 to B6, A7 and A9.
Acute hemorrhagic conjunctivitis, a highly contagious infection characterized by eye pain, eyelid swelling, and subconjunctival haemorrhage has been associated with enterovirus 70 and coxsackievirus A24. Enterovirus 71 has been associated with major outbreaks of hand, foot and mouth disease with concurrent fatal encephalitis among very young children.
Historically, poliomyelitis was the most significant disease caused by an enterovirus. While wild polioviruses have largely been eradicated, an alarmingly large number of non-polio cases are reported annually, with the number increasing each year.
Where are Enteroviruses found
Enteroviruses are found worldwide. Enterovirus A types EV-A71, EV-A6, EV-A16, and EV-A10 are the most commonly found types in East Asia and Southeast Asia. Enterovirus B is found most commonly in Western Asia, Europe, Africa, South America, Southern Asia, and Oceania. Most of the available data come from countries that are actively suppressing polio, which may make any attempt to map them inaccurate.
Climate and socio-economic factors are likely contributors to the geographic distribution of different types. Many enteroviruses are transmitted through the oral-fecal route. Poor sanitary conditions and overcrowding are important factors driving epidemics related to these viruses.
What are the symptoms of Enteroviral infections
Enteroviruses cause a wide range of symptoms, from rashes in small children, to summer colds, to encephalitis, to blurred vision, to pericarditis. They also have a great range in presentation and seriousness.
- Nonspecific febrile illness is the most common presentation of enterovirus infection. Other than fever, symptoms include muscle pain, sore throat, gastrointestinal distress/abdominal discomfort, and headache. In newborns the picture may be that of sepsis and can be severe and life-threatening.
- Poliomyelitis, transmitted primarily via the fecal-oral route, affects the central nervous system and causes muscle weakness.
- A polio-like syndrome has been found in children who tested positive for enterovirus 68.
- Aseptic meningitis in children. The symptoms are diverse and non-specific, and may include vomiting, headaches, and neck stiffness.
- Acute hemorrhagic conjunctivitis can be caused by enteroviruses.
- Acute flaccid paralysis is one of the most serious conditions attributed to enterovirus B.
- Bornholm disease or epidemic pleurodynia is characterized by severe paroxysmal pain in the chest and abdomen, along with fever, and sometimes nausea, headache, and emesis.
- Pericarditis and/or myocarditis are commonly caused by enteroviruses; symptoms consist of fever with dyspnea and chest pain.
- Arrhythmias, heart failure, and myocardial infarction have also been reported.
- Herpangina is caused by Coxsackie A virus, and causes a vesicular rash in the oral cavity and on the pharynx, along with high fever, sore throat, malaise, and often dysphagia, loss of appetite, back pain, and headache. It is also self-limiting, with symptoms typically ending in 3–4 days.
- Hand, foot and mouth disease is a childhood illness most commonly caused by infection by Coxsackie A virus or EV71.
- Encephalitis is a rare manifestation of enterovirus infection. When it occurs, the enterovirus most frequently found to cause it is echovirus 9.
- Myocarditis is characterized by inflammation of the myocardium (cardiac muscle cells). Over the last couple of decades, numerous culprits have been identified as playing a role in myocarditis pathogenesis in addition to enteroviruses, which at first were the most commonly implicated viruses in this pathology. One of the enteroviruses most commonly found to be responsible for myocarditis is the Coxsackie B3 virus.
- Possibly chronic fatigue syndrome. Acute respiratory or gastrointestinal infections associated with enterovirus may be a factor in this disease.
How can Enteroviral infections be prevented
Most enteroviruses are spread through the faecal-oral route, which is why environmental sanitation plays a vital role in preventing transmission. Good hygiene practices, such as frequent hand-washing, are essential to reducing the risk of becoming infected.
Some enteroviruses can be prevented with vaccines:
- Polio vaccines are used to prevent poliomyelitis. Two types are used: an inactivated poliovirus given by injection and a weakened poliovirus given by mouth. A potential adverse effect of the oral vaccine is its ability to recombine into a form that causes neurological infection and paralysis. This does not affect the person who was originally vaccinated, but it can allow polio to spread in areas with poor sanitation and low vaccination coverage.
- For hand foot and mouth disease a vaccine is available in China for the EV71 virus.
How are Enteroviral infections diagnosed
Most infections are diagnosed based on symptoms. However, this is a highly unreliable way to identify these complex disease presentations. Enteroviruses can be detected by polymerase chain reaction (PCR) tests, such as the ones made available by the WoIDM. These molecular tests offer several advantages: the results are fast, the tests have high sensitivity, and they make it possible to type the virus, including viruses that cannot be typed with other serological tests.
How are Enteroviral infections treated
There is no specific treatment for non-polio enterovirus infection. People with mild illness typically only need to treat their symptoms.
Some illnesses caused by non-polio enteroviruses can be severe enough to require hospitalization. Also here, supportive treatment is used based on symptoms. There are no specific antiviral drugs for enteroviruses.
|
Early Dynastic Mesopotamia Essay
The end of their reign is thought to have been a result of the invasion by the Persian Achaemenid Empire. The people of Mesopotamia worshiped the same gods and traded with one another from their different cities. Important innovations were also made in the cities, such as the wheel, the plow, mathematics, and agriculture. The citizens of Mesopotamia were divided into social classes, the highest in rank being the nobles and priests, then the commoners, with slaves the lowest. Each city had a tall tower at its center, which was usually the temple of the patron god of the city. Transport in each city was well organized into three categories: the wide processional streets, the public through streets, and private blind alleys.
Canals, however, were preferred over roads for transport. This period became known as the Uruk period, and it was when the wheel, the plow, and the cylinder seal for marking property were invented. The people also learnt how to forge weapons and bowls from metals such as copper and bronze. It is thought that each city had a priest-ruler who governed it. The Sumerians were very strict about how public and private spaces were divided, and houses were mainly around 90 m2. During the Early Dynastic I period, the people of Mesopotamia started building palaces that grew in size and complexity over time. The earliest palaces were large-scale complexes owned by the Mesopotamian elites. One of the earliest known palaces is the one at Khafajah, which is thought to have served as a socio-economic institution.
The palace was also used as a residence for the rulers and elite; it had chambers used as workshops by craftsmen, stores where surplus food was kept, and courtyards where important ceremonies were held, and some palaces had shrines. The high temple, which was believed to house the king of the gods, was used as a distribution center and also housed the priests. The temples as institutions had various roles, such as advising people on the timing of planting and harvesting, which gave the temple a central role in agricultural production. The temple also controlled the distribution of irrigation water, which gave it economic power, as it was believed to have supernaturally sanctioned power over water. The temple priests also employed agricultural workers and owned land.
The early dynastic cities warred against each other, and sometimes they formed leagues or submitted to rule by another city. The king and the council made the final decisions on crucial issues such as war and whom to form an alliance with. The council also had the authority to override the king, as in the case of Gilgamesh. The population of the early city-states totaled about one million at the peak of the civilization. The Sumerians were able to practice intensive agriculture and maintain permanent settlement all year. They were also able to invent the oldest known writing system, called cuneiform. The tops of these pyramids were flat until later on, when the step-pyramid style came into use. These buildings were built with sun-baked bricks on the inside, and fired bricks were used on the outside.
These large buildings were mostly used to support temples and other huge houses (Mallowan, 1965, 78). Some scholars believe that the ziggurats may have formed the foundation of the biblical Tower of Babel. There were various offices in Mesopotamia which were political, religious, or social. Institutions such as the temple and the palace also developed and were well organized. The temple played a key role in uniting the people of the city, as they respected it as the home of the god of their city (Richardson, 2012, 213). Besides religion, the institution of the temple also organized irrigation and agriculture and was responsible for the distribution of irrigation water in the city. The priests and priestesses were respected and were responsible for appointing the kings of the cities.
The temples also served as ceremonial burial places for the priests and the priestesses.
References:
- Culture and Values: A Survey of the Humanities (7th ed.). Boston, MA: Wadsworth/Cengage Learning.
- Postgate, J. N. (1994). Early Mesopotamia: Society and Economy at the Dawn of History.
- Mallowan, M. E. L. (1965). Early Mesopotamia and Iran. Thames and Hudson.
- Steinkeller, P. Early political development in Mesopotamia and the origins of the Sargonic Empire. American Journal of Archaeology.
- Stone, E. City-states and their centers. In The Archaeology of City-States: Cross-Cultural Approaches. Smithsonian Institution Press, Washington, DC.
- Stein, G. Ancient Mesopotamia. Cambridge University Press.
- Leick, G. Mesopotamia: The Invention of the City. Penguin UK.
|
Implementation: A cooking activity that helps participants come closer, as the act of baking allows them to touch one another in a way acceptable to all, and serves as the basis for introducing them to the use and importance of wholemeal grains.
Requirements: Wholemeal flour of different kinds, water, yeast, salt, spices, basins and baking plates, oven. Black or whiteboard. Projector.
How to: We divide the group into subgroups.
- Participants discuss the different types of bread from their homeland/place of origin.
- Yeast or sourdough? How is the sourdough made?
- How can the basic bread recipe given by the facilitator be modified to become healthier?
- Spices that can be used: poppy seeds, nigella sativa seeds, coriander, and Pistacia lentiscus seeds (a plant native to Greece); their origin and how they affect the taste and nutritional value of the bread
- Wholemeal vs. “white” grains: basic information about nutritional facts and cooking methods, using a PowerPoint presentation
- Knead the bread and enjoy the smell while baking
- favours a high level of interaction
- overcomes language and cultural barriers
- gives important nutritional information in an enjoyable way
- helps the group come closer by sharing a meal together |
The Mali Empire is my favorite African empire. At its historical height, the empire had everything – wealth, power, and a high accumulation of educational resources. Several kings that led the empire were legendary – Sundiata, Musa, Abubakari. Despite being comparable to Mediterranean empires such as the Ottoman or Roman empires, the Mali Empire doesn’t grace the history pages alongside European kingdoms. The most one may see about the Mali Empire is how Europeans tried to find the source of its wealth – presumably to exploit it.
The Mali Empire developed in the Sudan region of West Africa, an area inhabited since the Neolithic Period. It began as a small Mandinka kingdom at the upper reaches of the Niger River. This was a prime area because the Niger River flooded, providing fertile land for agriculture. The people – the Mandinka – fished and herded cattle.
To the north, the Ghana Empire was in decline during the 11th and 12th centuries. The Sosso Empire took it over and imposed trade restrictions on the Mali region, leading the native Mandingo tribes to rebel.
Sundiata Keita centralized the government and maintained diplomacy and a well-trained army. This led to a massive military expansion.
Height of the Mali Empire
The Mali Empire prospered thanks to trade and its prime location, between the rain forests of southern West Africa and the powerful Muslim caliphates of North Africa. The Mali rulers had a triple income: they taxed the passage of trade goods, bought goods, and sold them on at much higher prices, and had access to their own valuable natural resources. Significantly, the Mali Empire controlled the rich gold-bearing regions of Galam, Bambuk, and Bure; one of the main trade exchanges was gold dust for salt from the Sahara.
The Mali Empire rose to power primarily through trade. Control and taxation of trade pumped wealth into the imperial treasury and sustained the Mali Empire’s existence. The most profitable commodities traded were gold and salt. Gold was mined first at Bambuk, on a tributary of the upper Senegal River, and later at Bure on the headwaters of the Niger River. The location of the gold mines shifted as the mines in the west became exhausted and alternative sources were discovered further east. The Mansa (king) claimed all the gold nuggets, but gold dust was available for trade.
Salt was mined deep in the Sahara. Slabs of salt brought by camel were traded in the markets of Timbuktu, Mopti, and other Niger River towns. Great camel caravans brought salt, iron, copper, cloth, books, and pearls from the north and northeast, which were exchanged for gold, kola nuts, ivory, leather, rubber, and slaves from the south.
Although salt and gold dust served as currency, cowrie shells from the Indian Ocean were introduced as currency during the fourteenth century. Their use improved the collection of taxes and the exchange of goods.
Acting as a middle-trader between North Africa via the Sahara Desert and the Niger River to the south, Mali exploited the traffic in gold, salt, copper, ivory, and slaves across West Africa.
The Mali Empire grew and prospered by monopolizing the gold trade and developing the agricultural resources along the Niger River.
Muslim merchants were attracted to the commercial activity and converted Mali rulers who in turn, spread Islam.
The Mali Empire demanded strong leadership to continue its prosperity. Sundjata established himself as a great religious and secular leader, claiming the greatest and most direct link with the spirits of the land and thus the role of guardian of the ancestors. After Sundjata, most of the rulers of Mali were Muslim, and some made the hajj; one of the most famous, Mansa Musa, was the grandson of one of Sundjata’s sisters.
This federation prospered, developing over the next century into one of Africa’s richest ever empires whose wealth would astound both Europe and Arabia.
King Abubakari II is perhaps one of the most historically significant kings of the Mali Empire. What little information we have about Abubakari comes from griots, West African storytellers and advisors to royal personages. They record an ocean expedition, organized by Abubakari, that still causes controversy today.
King Abubakari longed to explore the ocean, often longingly looking out over the open sea that bordered his kingdom. He thought it possible to find the edge of the Atlantic Ocean. He sent an expedition out over the Atlantic only to have one of his generals return. Frustrated, King Abubakari outfitted a second expedition in 1311 totalling 2,000 ships with another 1,000 ships loaded with food to last him and his team two years.
Various historians believe that Abubakari arrived in what is now Haiti in 1312. Ivan Van Sertima’s They Came Before Columbus gives an interpretation of such events based on Christopher Columbus’s personal diary. Abubakari never returned to his kingdom, which allowed Mansa Musa to ascend the throne.
World renowned for his wealth, Mansa Musa is the most remembered of Mali kings. During Mansa Musa’s reign – 1307 to 1337 – he extended Mali’s boundaries to their farthest limits. Within the kingdom, there were fourteen provinces ruled by governors or emirs who were usually famous generals. Sheiks governed Berber provinces. They all paid tribute to Musa in gold, horses, and clothes.
In 1324, Mansa Musa, accompanied by 60,000 people, traveled across the Sahara to Cairo and then to Mecca and Medina carrying large quantities of gold.
He ruled impartially with a great sense of justice. Musa established diplomatic relationships with other African states, especially Morocco, with whom he exchanged ambassadors. Perhaps he was best known as a ruler who firmly established the Islamic religion in Mali along with peace, order, trade, and commerce. Mansa Musa started the practice of sending students to Morocco for studies and he laid the foundation for what later became the city of Timbuktu, the commercial and educational center of the western Sudan.
Timbuktu was the most important city in the kingdom. The center of culture and trade, it was home to one of the first universities in Sub-Saharan Africa and included a comprehensive library complete with books from places like Greece and Rome. Timbuktu also housed mosques for Islamic worship and prayer.
Three of western Africa’s oldest mosques – Djinguereber, Sankore, and Sidi Yahia – were built there during the 14th and early 15th centuries. Islam at the time in the area was not uniform, its nature changing from city to city. Timbuktu’s bond with the religion grew strong through its openness to strangers, which attracted religious scholars.
Decline of the Mali Empire
The Mali Empire collapsed when several states, including Songhai, proclaimed and defended their independence. Around the 1430s, the rulers could not prevent rebellions from breaking out. The Tuareg people took back their city of Timbuktu in 1433, and by 1500, the Mali Empire comprised just a small portion of land.
The empire of Mali reached its zenith in the fourteenth century, but its power and fame depended on the personal power of the ruler. After the death of Mansa Musa and his brother Mansa Sulayman, Timbuktu was raided and burned. Several states revolted and seized their independence, including the Tuareg, Tukulor, and Wolof. The Mossi attacked trading caravans and military garrisons in the south. In the east, the Songhai gathered strength. Mali lasted another 200 years, but its glory days were over.
In the fifteenth century Mali lost its control over the Sahel and became cut off from direct contact with the trans-Saharan routes and the larger Muslim world. The capital declined and the foreign Muslim community deserted the city. By 1500, the Mali Empire became little more than the beginnings of a Malinke heartland. By the seventeenth century, Mali had broken up into several small independent chiefdoms.
|
After reading the last few posts, you know about the series of events that led up to the Great Depression. The collapse of worldwide economies led to the rise of political tensions that kept growing throughout the 1930s, eventually leading to World War II.
Let’s take a look at the role the Fed played during World War II. Did it do any better than it did during the Great Depression?
The role of the Fed during the War
Before the war, America’s military was small and its weapons were increasingly obsolete. After Japan attacked Pearl Harbor and America entered the war, the American military needed to purchase thousands of ships, tens of thousands of airplanes, hundreds of thousands of vehicles, millions of guns, and hundreds of millions of rounds of ammunition. Moreover, it needed to recruit, train, and deploy millions of soldiers across six different continents.
In order to accomplish all this, the US needed to pay entrepreneurs, inventors, and companies to get the job done. As a result, military expenditures rose from a few hundred million a year before the war to $91 billion in 1944.
Financing the war became the Federal Reserve’s wartime mission, which was a far cry from its original mission. The treasury and the Fed came up with a plan to fund the war primarily through taxation and domestic borrowing.
During the war years there was a huge increase in incomes, employment, and the money supply, but most production was going to wartime goods and services, rather than consumer goods. Basic economic principles tell us that when too much money chases too few goods, prices typically rise (i.e. inflation).
Hence, the treasury decided that it was better to tax people so that less money would be chasing consumer goods, which would theoretically prevent inflation from rising too high.
Why domestic borrowing?
To keep the costs of the war reasonable, the Fed began selling wartime bonds.
Moreover, it pegged interest rates to low levels so that the cost of borrowing for the government would be smaller in the end.
Overall, both the Fed and the treasury believed taxation would help keep inflation in check and borrowing money directly from American citizens would make it easier to reach economic stability once the war was over.
Ad hoc changes to the Federal Reserve Act
In addition to taxation and selling low interest rate bonds to consumers, there were many changes made to the original Federal Reserve Act in order to make funding the war more feasible:
One amendment allowed the board to change reserve requirements in banks in New York City and Chicago without changing requirements for other banks.
A second amendment allowed the Fed to purchase government securities directly from the Treasury.
A third amendment exempted the deposits that people made for purchasing war loans from bank reserve requirements. In other words, if you came to the bank to buy wartime bonds, the bank was not required to keep ANY of your money in its reserves (recall that banks are typically required to keep some portion of the money in the bank as reserves — i.e., the fractional reserve system). In this case, all of your money was used to fund the war, and none of it was kept in reserve (yikes!).
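As a rough illustration of that exemption (a sketch only; the 10% reserve ratio below is an assumed figure for the example, not the actual historical requirement):

```python
# Compare how much of a $1,000 deposit a bank must hold in reserve under an
# assumed 10% requirement versus the wartime exemption for war-bond purchases.

def reserve_split(deposit, reserve_ratio):
    """Return (required_reserve, amount_available_to_lend_or_invest)."""
    required = deposit * reserve_ratio
    return required, deposit - required

ordinary = reserve_split(1_000, 0.10)  # ordinary deposit, assumed 10% requirement
war_bond = reserve_split(1_000, 0.00)  # deposit earmarked for war bonds: exempt

print(f"Ordinary deposit: ${ordinary[0]:,.0f} in reserve, ${ordinary[1]:,.0f} free to lend")
print(f"War-bond deposit: ${war_bond[0]:,.0f} in reserve, ${war_bond[1]:,.0f} goes to the war effort")
```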
Ad hoc executive orders
Not only were there ad hoc changes to the Federal Reserve act, but FDR also issued a series of executive orders that essentially made it much easier for banks to lend and for people to borrow. All of this was intended to speed up the rate of industrial expansion.
All hands on deck
Handling war savings bonds was the largest single operation ever performed by the Federal Reserve. The Fed nearly doubled its workforce, from 11,000 employees in 1939 to 24,000 employees in 1943.
Different regional banks played different roles in processing, approving, and issuing all these savings bonds. Moreover, different regional banks took responsibility for engaging in foreign transactions and handling remittances from the US to foreign countries during the war.
Reserve banks also acted as agents for the treasury’s foreign funds control operations. The goal was to ensure that the transactions happening across borders were legitimate, without Axis manipulation of dollar assets and Axis access to international markets.
In sum, the Federal Reserve played a big role during World War II by helping to finance the war, fund America’s allies, and embargo our enemies.
Now consider the following questions:
- Was helping fund wars the original mission of the Fed?
- Was it important for the Fed to play such a large role in helping fund the war?
- Was it necessary to make ad hoc changes to the Federal Reserve Act or to sign executive orders behind closed doors?
There is no right or wrong answer. However, this quote by F. A. Hayek comes to mind:
Students soared through the second theme learning about flight.
In Reading, students used their strategies to identify main idea, problem & solution, story elements, cause and effect, and non-fiction text features. Students collaborated to discuss facts learned about the Wright Brothers, Amelia Earhart, hot air ballooning and parachuting. Did you know that roosters and sheep were the first passengers to ride in a hot air balloon?? Students experienced first hand how it feels to soar through the sky in a parachute using the Google Expeditions VR Glasses! Afterwards they created a 3D hot air balloon of their own.
Students were very busy in writing this week! They soared through the sky with many activities that focused on hot air balloons, kites, parachutes and gliders. Writers imagined where they would go if they were granted a hot air balloon ride. Students mentioned places such as Italy, Disney World and Hollywood! They explained who they’d bring, how they’d feel, and how long their trip would be through a short narrative. Later, they switched their focus to kites. Students used Google Translate to find the word “kite” in other languages. Students created 3D “All About Me” hot air balloons which featured their personal interests. They participated in a Virtual Reality tour of a hot air balloon ride using VR glasses as a culminating activity!
In math, students learned about the two main forces to make an airplane move: thrust and lift. They then worked together to make their own paper airplanes, and flew them outside to measure their distances with rulers!
They also made foam glider planes and estimated the distance they could throw them.
Students also learned about symmetry. They went outside to look for symmetrical shapes, using rulers to make sure their shapes were indeed symmetrical. Then they came back inside to create their own symmetrical shapes. Lastly, students learned how to measure units of capacity, identify the three states of matter, and observe that a liquid will always take the form of the container it is placed in, including water balloons! Students were splashed when tossing the water balloons around!
In science students were given an opportunity to conduct an experiment to test the parachutes that they made. Each group carefully chose different materials to engineer a parachute for a fictional character, Jack. Then they were given a chance to test their parachutes by carefully dropping them from a high point. Each group tested their parachutes and measured the time it took their parachutes to land. They were all very successful creations!
In physical education class Mr. Potor had the students experiencing the push and pull of force using a huge parachute! Students had to work together to create the right movement to trap the air underneath the parachute.
We will be ROAD TRIPPIN’ next week across the United States!! |
Helen Keller Archive Lesson Plans
Using the rich trove of information contained in the digital Helen Keller Archive (www.afb.org/HelenKellerArchive), the American Foundation for the Blind (AFB) has created lesson plans that teach middle and high school students about using digital and physical archives and the difference between primary and secondary sources and how to use them.
Aligned with Common Core curriculum standards, each lesson contains a review of the lesson as a whole, as well as teacher and student activity pages. The goal is to enable teachers to guide students in using digital archival collections while discovering Keller’s work as a leading author, activist, and advocate.
Helen Keller helped raise AFB’s profile when she began working with the organization in 1924. In addition to serving as AFB’s counselor on national and international relations, she made countless speeches and appearances at home and in more than 39 countries around the world on behalf of the organization. Keller remained active with AFB until her death in 1968, by which time she had radically changed perceptions of deafblindness and left a rich legacy upon which AFB continues to expand.
The lessons currently available are: |
To explore the different activities that women and men do each day and how these contribute to the local economy.
The tool asks participants to think about all the activities they did the day before and maps this out on cards for participants to categorise. This includes activities such as cooking breakfast, collecting water, resting, working in the fields, selling goods at the market, or participating in a community meeting.
- Participants see that care for people and the environment is a critical part of the economy even if this is not paid work.
- Participants begin to discuss the division of labour between women and men and why some activities are more often done by women rather than men and vice versa.
Steps in the process
- In a group discussion (can be in small groups), participants list ALL of the activities that they did yesterday.
- Participants draw, or write if they can, one activity per card. Men and women will be given different coloured cards – for instance, men may receive green cards while women receive yellow cards.
- The facilitator then asks: ‘Which of these activities helped you to take care of your family and friends?’
- The participants then group these activities together including the four categories - housework, collection of water and firewood, care of children, care of adults. The facilitator places a card above these activities titled ‘Care for people’
- The facilitator then asks, ‘Which of these activities helped you to take care of the natural resources that are around you?’
- Participants then group these activities together and the facilitator places a card above these activities titled ‘Care for the environment’.
- The facilitator then asks, ‘Which of these activities are paid or generate income?’
- Participants then group these activities together and the facilitator places a card above these titled ‘Paid work’.
- ‘Which activities contribute to the life of the community?’
- Participants then group these activities together and the facilitator places a card above these titled ‘social and cultural activities’.
- ‘Which activities are considered to be personal rest and leisure?’
- Participants then group these activities together and the facilitator places a card above these titled ‘rest and leisure’.
Questions for analysis
- Is there anything missing from this activities mapping?
- Does this activity mapping capture the main activities that you see in your community?
- Identify those activities that take up the most time for you.
- As women’s cards and men’s cards will be different colours it will be visually clear which activities men and women spend more time doing.
- What activities do men and women do that are the same? What activities do men and women do that are different and why?
- What activities do girls and boys participate in?
- How much time do women and men spend on different activities?
- Can both men and women do the care activities listed here?
- Are there activities that are done more by younger women?
- Are there activities that are done more by older women?
- How does the amount of money you have affect how much time you spend on care activities?
- Which of these activities do you do at the same time?
Power issues to consider
Gender: Having different colour cards for women and men will immediately show the similarities and differences between their activities. In most cases women and girls will be more involved in care work activities than men and boys. You will likely find that men have more time for paid work either as agricultural labourers, factory workers, traders etc. Many women will be involved in paid work and in unpaid work such as subsistence agriculture. Here facilitators want to show that women are involved in paid and unpaid work alongside unpaid care work.
To deepen the analysis facilitators can ask:
- What is the value of the unpaid and care activities?
- How does that impact on how we see women’s and girls’ contribution to the economy/community?
Age: Children and youth may have different activities than women and men as they may be in school rather than working. However, for some girls and young women their age may mean that they have to carry a heavier workload because of their low status in the household. For instance, young wives may not be able to ask their husbands to support them with their housework. Young women are also more likely to have younger children that require more care. Older women may also have to take on more care work, particularly in countries badly affected by HIV and AIDS.
Status: Widows and single women will often have more work to do than other women unless they have support at home. Widows and single women are likely to be involved in paid work or subsistence agriculture to meet their basic needs while also having to do most, if not all, of the care work at home.
Disability: People who are disabled or physically or mentally challenged, and those who may be sick (due to old age or disease), often need to be cared for by other members of the household. This implies increased unpaid care work for those household members, and the family may need to access community care and support.
Class: Some people in the community will also be able to pay for care services and goods while others will not. For instance, richer community members might be able to pay for electricity or hire domestic workers in their households to help with the cooking and taking care of children. This will mean they spend less time on care work than poorer households.
Suggestions for use
- The process of using the activity mapping will help participants think about and discuss power dynamics within the family, and appreciate that the work people do to care for others and the environment is a critical part of the economy, even if it is unpaid.
- If the process is rigorously recorded across time or across communities, then the evidence can be used to answer research questions, and as part of an influencing or advocacy strategy.
For facilitators only
The table below shows an example of some of the different activities that might come up and how they could be categorised.
Comments
Maria Nandago: The Activity mapping tool works very well in identifying and categorising the work done by women and men, enabling people to recognise how much women contribute as unpaid care work. I would like to suggest that more guiding questions be listed to help the facilitators in leading the discussion on bringing about change. The focus would be on the 3Rs of addressing issues of unpaid care work, i.e. Recognition, Redistribution and Reducing the burden.
Kas Sempere: This is a tool that takes time. It often happened to me that lots of energy was spent on setting up the tool (cards, time for groups, etc.) and then little time (and energy) was left for analysis. A balance of both is ideal. And it is a nice alternative to time surveys!
Amade Suca: Yes, that is true. This tool does help for the change we want.
Karen Jørgensen: I would like to do a mapping of activities and, through this, understand who has the power in a given community. Does anyone have experience with this? Can this tool be relevant? Thanks, Karen
In its most fundamental form, fuel efficiency refers to a vehicle's ability to extract energy from fuel. The more energy a vehicle can draw from a given amount of fuel, the more fuel-efficient the vehicle is; the less energy it draws, the less fuel-efficient it is.
Fuel efficiency measures the distance a motor vehicle can travel on a single gallon of gasoline.
As a result, increasing the efficiency of these vehicles can help limit the impact on climate change. According to the Office of Energy Efficiency and Renewable Energy, the MIT School of Engineering, and HowStuffWorks, the United States' environment and economy can benefit significantly from fuel efficiency improvements. These sources point out that cars, trucks, and off-road motor vehicles account for nearly 60 percent of oil consumption and more than 25 percent of greenhouse gas emissions in the U.S.
There are a lot of things we think about before buying a car, and one of them is fuel efficiency. But what is fuel efficiency, and why is it important? Fuel efficiency is the car's ability to derive energy from fuel. The more energy the car can extract from its fuel, the higher its fuel efficiency; the less energy it can extract, the less fuel-efficient it will be.
Cars with good fuel efficiency consume and carry less fuel, which brings great advantages not only to our lives but also to the environment. In addition, fuel-saving technologies and hybrid cars offer greater fuel efficiency than other vehicles, which can help us in the long term.
Fuel-efficient vehicles require less gasoline to travel a given distance. When we burn less gas, we reduce emissions that cause global warming, produce less pollution, and spend less on gas.
A higher compression ratio is useful for increasing energy efficiency, but diesel fuel also contains approximately 10% more energy per unit volume than gasoline, helping to reduce fuel consumption for a given power output. Fuel efficiency can also be improved using fuel additives; some additives improve performance by increasing the octane level.
According to the Office of Energy Efficiency and Renewable Energy, the MIT School of Engineering, and HowStuffWorks, fuel economy measures do not give a completely accurate picture of changes and improvements in a vehicle's efficiency. The National Aeronautics and Space Administration (NASA) has investigated fuel consumption in microgravity. Many luxury cars have been adopting lightweight materials for several years to improve overall fuel efficiency and maintain their competitive edge in the industry.
Vehicle diesel engines are generally tuned for optimal operation at travel speeds, so they burn fuel less efficiently when idling. Reducing external drag and load (e.g., removing roof bars), accelerating and decelerating smoothly, and using high gears at a constant speed are some examples of practices that reduce fuel consumption. In direct-injection engines, fuel is injected directly into the cylinder, providing more efficient combustion than when fuel and air are mixed outside the cylinder. When looking for the car of your dreams, you keep a lot of things in mind, one of which is the fuel efficiency of your car.
Within the Office of Energy Efficiency and Renewable Energy, the Office of Vehicle Technologies is dedicated to research focused on reducing emissions and improving efficiency. In microgravity or zero gravity, such as in outer space, convection no longer occurs, and a flame becomes spherical, with a tendency to burn bluer and more efficiently. Selling those cars in the United States is difficult because of emissions standards, says Walter McManus, a fuel economy expert at the University of Michigan Transportation Research Institute. Fuel consumption can be reported as gallons per 100 miles (GPM) instead of miles per gallon, indicating how many gallons of gasoline you will use when driving 100 miles. Fuel consumption is a more accurate measure of a vehicle's performance because it is a linear relationship, while fuel economy distorts efficiency improvements.
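As a rough sketch (not tied to any official test procedure), the conversion and the nonlinearity can be shown in a few lines of Python:

```python
# Convert miles per gallon (MPG) to gallons per 100 miles and compare how much
# fuel two equal "+5 MPG" improvements actually save over the same distance.

def gallons_per_100_miles(mpg):
    return 100.0 / mpg

for old_mpg, new_mpg in [(15, 20), (40, 45)]:
    saved = gallons_per_100_miles(old_mpg) - gallons_per_100_miles(new_mpg)
    print(f"{old_mpg} -> {new_mpg} MPG saves {saved:.2f} gallons per 100 miles")

# 15 -> 20 MPG saves about 1.67 gallons per 100 miles, while 40 -> 45 MPG saves
# only about 0.28 gallons: the same MPG gain means very different fuel savings.
```
|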
A fiber-optic cable provides a pipeline that can carry large amounts of information. Copper wires or copper coaxial cable carry modulated electrical signals but only a limited amount of information, due to the inherent characteristics of copper cable.
Free-space transmission, such as radio and TV signals, provides information transmission to many people, but this transmission scheme cannot offer private channels. Also, the free-space spectrum is becoming a costly commodity, with access governed by the FCC. Fiber-optic transmission offers high bandwidth and data rates, and it does not add to the crowded free-space spectrum.
Information Modulation Schemes
The modulation scheme is the manner in which the information to be transported is encoded. Encoding information can improve the integrity of the transmission, allow more information to be sent per unit time, and in some cases, take advantage of some strength of the communication medium or overcome some weakness.
Three basic techniques exist for transmitting information such as video signals over fiber optics:
- Amplitude modulation (AM) includes baseband AM, radio frequency (RF) carrier AM, and vestigial sideband AM.
- Frequency modulation (FM) includes sine wave FM, square wave FM, pulse FM, and FM-encoded vestigial sideband.
- Digital modulation of the optical light source with the ones and zeros of a digital data stream. A simplified explanation is that the light or laser source is off for a digital zero and on for a digital one. In actual practice, the light source never completely shuts off; it modulates darker and lighter for digital zero and one information.
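A minimal sketch of the digital case (simple on-off keying with idealized intensity levels, not a model of any particular transmitter):

```python
# Toy model of digital modulation of a light source: each bit maps to a lower
# or higher optical intensity (the source dims for 0 and brightens for 1, but
# never switches fully off, as noted above).

LOW_LEVEL = 0.2   # assumed normalized intensity representing a digital 0
HIGH_LEVEL = 1.0  # assumed normalized intensity representing a digital 1

def modulate(bits):
    """Map a bit string such as '1011' to a sequence of intensity levels."""
    return [HIGH_LEVEL if b == "1" else LOW_LEVEL for b in bits]

def demodulate(levels, threshold=0.6):
    """Recover the bit string by comparing each sample against a threshold."""
    return "".join("1" if level > threshold else "0" for level in levels)

signal = modulate("10110010")
print(signal)                # [1.0, 0.2, 1.0, 1.0, 0.2, 0.2, 1.0, 0.2]
print(demodulate(signal))    # '10110010'
```
|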
In Chemistry Made Easy - Trial Edition, students will use TI-Nspire™ technology to explore common chemistry problems utilizing step-by-step processes.
- Boyle’s Law
- Periodic System of Elements
About the Lesson
In "Chemistry Made Easy- Trial Version", there are four different activities for students to practice solving common chemistry problems, utilizing step-by-step processes.To access the different activities within the TNS file, from the title screen, MENU >
- Stoichiometry> (1) Compound Mass: Step by Step
Compute Mass of Compound Step-by-Step: Learn how to compute the mass of any compound by entering the compound in the box and viewing the steps below to find its mass.
- Gas Laws> (2) Boyle’s Gas Law
Apply Boyle’s Gas Law Step-by-Step: Learn to apply Boyle’s Law by entering any three of the given quantities in the boxes and solving for the unknown.
- Basics> (1) Periodic Table of Elements: Symbol, Atomic#
Search the Periodic System of Elements by Symbol or Atomic Number: Enter any element’s symbol or atomic number to view that element’s information.
- Moles and More> (3) Atoms to Moles Conversion
Atoms to Moles and Back - Conversion Step-by-Step: Learn how to convert atoms to moles and back by entering the unknown in the box and viewing the steps below to find the atoms or moles.
Note that all items with a check mark next to them are functional in this trial version, so there are more activities available to preview in the file than those listed above. For more information on purchasing the full version, visit SMARTSOFT.
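For readers without the handheld, here is a rough sketch of the same kinds of step-by-step calculations (Boyle’s Law and an atoms-to-moles conversion) in Python; the numbers are arbitrary examples, not values taken from the TNS file:

```python
# Boyle's Law: P1 * V1 = P2 * V2 at constant temperature and amount of gas.
# Given any three of the four quantities, solve for the unknown (here V2).
p1, v1, p2 = 1.0, 2.5, 0.5          # atm, L, atm (example values)
v2 = p1 * v1 / p2
print(f"V2 = {v2:.1f} L")            # 5.0 L

# Atoms to moles and back, using Avogadro's number.
AVOGADRO = 6.022e23                  # particles per mole
atoms = 3.011e23
moles = atoms / AVOGADRO
print(f"{atoms:.3e} atoms = {moles:.2f} mol")             # 0.50 mol
print(f"{moles:.2f} mol = {moles * AVOGADRO:.3e} atoms")  # back to 3.011e+23
```
|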
Our Choosing a Book lesson plan empowers students to pick the right book for them. Students will learn the steps to identify the right “level” of book for them. During this lesson, students will practice evaluating several different books to find one that’s a good fit for them.
Students will learn the steps to identify whether a book is at the independent, instructional, or frustrational level (referred to as “easy, just right, or too difficult” in this lesson). Students will select a “just right” text by using the acronym PICK.
Common Core State Standards: CCSS.ELA-LITERACY.RF.1.4, CCSS.ELA-LITERACY.RF.1.4.A, CCSS.ELA-LITERACY.RF.2.4, CCSS.ELA-LITERACY.RF.2.4.A, CCSS.ELA-LITERACY.RF.3.4, CCSS.ELA-LITERACY.RF.3.4.A |
The Dalecarlian runes, or dalrunes, were a late version of the runic script that remained in use in the Swedish province of Dalarna until the 20th century. The province has consequently been called the “last stronghold of the Germanic script”.
When Carl Linnaeus visited Älvdalen in Dalarna in 1734, he made the following note in his diary: “The peasants in the community here, apart from using rune staves, still today write their names and ownership marks with runic letters, as is seen on walls, corner stones, bowls, etc. Which one does not know to be still continued anywhere else in Sweden.”
*Cuneiform is a logo-syllabic script that was used to write several languages of the Ancient Near East. The script was in active use from the early Bronze Age until the beginning of the Common Era. It is named for the characteristic wedge-shaped impressions (Latin: cuneus) which form its signs. Cuneiform originally developed to write the Sumerian language of southern Mesopotamia (modern Iraq). Along with Egyptian hieroglyphs, it is one of the earliest writing systems.
‘Learn to Write Cuneiform’ is next in the line-up of my LTW (Learn To Write) series. Keep a look out for it… |
Japan conquered many colonies and other states during World War II. Special currency notes were officially issued by Japan in these states to replace local currency; these notes were known as Japanese Invasion Money. Both the Wartime Finance Bank and the Southern Development Bank used bonds to raise money. The Wartime Finance Bank gave loans to military industries, and the Southern Development Bank gave loans for hydroelectric generators, electric power companies, shipbuilding, and petroleum. In March 1945, the outstanding balance of Southern Development Bank notes stood at more than 13 billion. |
When designing port infrastructure projects, coastal constructions or oil rigs, it is important to know the minimum and maximum sea level over long periods (up to a century). While maximum tide heights are known, it is vital to examine the likelihood of positive and negative surges. Such studies require long series of good quality measurements.
Definition of positive and negative storm surge - extreme levels
Instantaneous storm surge is the difference at time t between the observed water level and the predicted water level. A surge is positive if the water level is higher than the expected tide, and negative if lower. Storm surge is mainly meteorological in origin: it is generated during the passage of low pressure systems or anticyclones, by changes in atmospheric pressure and winds. It may also have other origins: waves, seiches, tsunamis, etc.
High tide surge is the difference between the high water level observed and the predicted high tide (astronomical tide); the observed and predicted maxima do not necessarily occur at the same time. Similarly, low tide surge is the difference between the low tide observed and the low tide predicted.
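In symbols (the notation here is added for clarity and is not taken from the original), with h_obs the observed water level and h_pred the predicted astronomical tide:
- instantaneous storm surge: S(t) = h_obs(t) - h_pred(t)
- high tide surge: S_HT = max(h_obs) - max(h_pred), with both maxima taken over the same tidal cycle, regardless of the times at which they occur
- low tide surge: defined the same way, using the minima of the observed and predicted curves.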
To study extreme surges, the appropriate variables are the high tide surge and the low tide surge, not the instantaneous storm surge. By definition these values are free from the effects of any phase difference between observation and prediction. Thus, if a phase difference exists between the predicted and observed heights, whether for physical reasons or because the harmonic constants are poorly determined (e.g., short or poor-quality records), the instantaneous storm surge will be artificially inflated, which is not the case for the high tide and low tide surges.
As a result, the highest high water level, a component of which is random, is a concept that makes sense only if it is estimated in terms of probability. We must determine the mean interval of time, called the return period, between two rare events with sea levels above a certain threshold.
Determining the return period
Expressed in this way, the return period would seem to be determined simply by calculating the mean. However, for the mean to be significant, the observation periods must be much longer than the return periods being calculated. Given the available observations, we can hardly estimate return periods longer than two or three years in most cases.
However, it is possible to effectively address this problem for ports where more than 10 years of tidal observations are available, based on the fact that storm surges and the tide are largely independent. If there are many tide observations available, it is easy to calculate the probability distributions governing rare but not exceptional events like high spring tides or major storm surges. But the two types of events may never have been observed simultaneously. However, the return period for this type of very rare event can be calculated with a good degree of confidence by combining the probability distributions for the tide and for surges. The figures show the results for Brest and present the probability for a predicted high tide to be equal to a given value to within 1 cm.
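A minimal sketch of that combination step, assuming the tide and surge distributions have already been estimated from observations (the arrays below are placeholders, not SHOM data, and the figure of roughly 705 high tides per year assumes a semidiurnal regime):

```python
import numpy as np

# Placeholder inputs: probability of each predicted high tide height (1 cm bins)
# and probability that the high tide surge exceeds each value; in practice both
# would be estimated from long series of observations and predictions.
tide_heights = np.arange(300, 451)                         # cm above chart datum
p_tide = np.full(tide_heights.size, 1.0 / tide_heights.size)
surge_values = np.arange(0, 201)                           # cm
p_surge_exceed = np.exp(-surge_values / 15.0)              # toy exceedance curve

def prob_level_exceeded(z_cm: float) -> float:
    """P(predicted high tide + surge > z), assuming tide and surge are independent."""
    total = 0.0
    for h, ph in zip(tide_heights, p_tide):
        needed = z_cm - h                                   # surge required to reach z
        if needed <= 0:
            total += ph                                     # tide alone already exceeds z
        elif needed <= surge_values[-1]:
            total += ph * p_surge_exceed[int(needed)]
    return total

# The return period is the reciprocal of the per-high-tide exceedance probability;
# with roughly 705 high tides per year, divide by 705 to express it in years.
p = prob_level_exceeded(460)
print(p, 1.0 / (p * 705))
```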
The problem that arises with storm surges is calculating the probability associated with a storm surge higher than a given value. One difficulty is that very high but rare surges cannot be ignored. Events that have never been observed must be taken into account using an extrapolation model. The model used is the Gumbel distribution, originally developed for estimating river flood peaks. This model was applied to the longest available series of tide measurements (Brest) and the distribution was found to be well suited to the marine environment.
The Gumbel distribution results from the study of the extreme values of independent random variables drawn from the same arbitrary distribution. Extreme value variables were first analysed by Fisher and Tippett, and the theory was completed by Gumbel. The Gumbel distribution is just one of several distributions derived from extreme value theory. It turns out to be well suited to flooding events, which explains its success.
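For reference, the Gumbel cumulative distribution function and the associated return period take the standard textbook form (not specific to this study): F(x) = exp(-exp(-(x - u)/b)) and T(x) = 1/(1 - F(x)), where u is the location parameter, b the scale parameter, and T(x) is expressed in units of the sampling interval (for example one value per high tide, or per year). Plotted against the reduced Gumbel variable -ln(-ln F(x)), the distribution is a straight line, which is why the experimental points discussed below are expected to align when the model fits.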
Because the tide and surges are addressed separately, the choice of an extrapolation model is not really critical for estimating return periods of extreme levels.
The figure above shows the probability that storm surge will exceed a certain value, in the coordinate system defined by Gumbel. If the distribution defined by the relation were observed, the experimental points, giving a staircase curve here, would be aligned. The dotted line is the one that best passes through the cluster of experimental points. The two thin lines on either side of the dotted line limit the zone where 90% of the data points should fall if the extrapolation model is selected properly.
The figure is the result of combining the probability distributions of storm surges and predicted high tides. The area is bounded by the extreme values of predicted high tides.
Here the function gives the probability of observing high tide levels greater than a given value, which can be translated in terms of associated return periods. This example focuses only on the presentation of heights greater than the highest astronomical tide (negative surges are not addressed here).
Mapping the estimated return periods
Return periods can be calculated, under certain conditions, from height measurements covering more than one month. Currently, the hydrographic sounding points using pressure sensors placed on the sea floor provide this type of measurement series. It is therefore possible to use the relationships between these measurements and heights observed simultaneously at the nearest reference port to estimate the probability distributions needed to calculate the return periods of extreme tides at these points. Due to the spatial variability of the events studied, the geographic distribution of these sites must be sufficiently dense to compute, by interpolation, these values at all points of the area in question. In some areas, there are charts showing the contour lines of extreme levels.
Curves of the heights reached by the sea for a given return period are shown for the Iroise Sea and the Brest roadstead. This example is the result of exploiting the data available in this area to map extreme levels corresponding to the given return period.
It is also possible to calculate the uncertainties on the values obtained. The results often highlight gaps in the measurement network and help to identify areas where additional tidal observations are desirable.
To find out more:
- Storm Surges
- Storm surge flood warning system (Météo-France, SHOM)
- ANR DISCOBOLE - calculation of the height of extreme water levels on the French coast by P. A. Pirazzoli, 200
- NIVEXT Project : Extreme sea levels
- "Extreme levels" product co-produced by SHOM - CETMEF
- Simon B. (2007). La Marée - La marée océanique et côtière. Edition Institut océanographique, 434pp.
Last updated: 12/12/2012 |
Zion National Park is located along the edge of a region known as the Colorado Plateau. The rock layers have been uplifted, tilted, and eroded, forming a feature called the Grand Staircase, a series of colorful cliffs stretching between Bryce Canyon and the Grand Canyon. The bottom layer of rock at Bryce Canyon is the top layer at Zion, and the bottom layer at Zion is the top layer at the Grand Canyon.
The Utah Geologic Survey produced this free interactive geologic map of the state. Zoom in to identify rock types and ages, as well as volcanic eruptions.
Zion was a relatively flat basin near sea level 240 million years ago. As sands, gravels, and muds eroded from surrounding mountains, streams carried these materials into the basin and deposited them in layers. The sheer weight of these accumulated layers caused the basin to sink, so that the top surface always remained near sea level. As the land rose and fell and as the climate changed, the depositional environment fluctuated from shallow seas to coastal plains to a desert of massive windblown sand. This process of sedimentation continued until over 10,000 feet of material accumulated.
Mineral-laden waters slowly filtered through the compacted sediments. Iron oxide, calcium carbonate, and silica acted as cementing agents, and with pressure from overlying layers over long periods of time, transformed the deposits into stone. Ancient seabeds became limestone; mud and clay became mudstones and shale; and desert sand became sandstone. Each layer originated from a distinct source and so differs in thickness, mineral content, color, and eroded appearance.
In an area from Zion to the Rocky Mountains, forces deep within the earth started to push the surface up. This was not chaotic uplift, but very slow vertical hoisting of huge blocks of the crust. Zion’s elevation rose from near sea level to as high as 10,000 feet above sea level.
Uplift is still occurring. In 1992 a magnitude 5.8 earthquake caused a landslide visible just outside the south entrance of the park in Springdale.
This uplift gave the streams greater cutting force in their descent to the sea. Zion’s location on the western edge of this uplift caused the streams to tumble off the plateau, flowing rapidly down a steep gradient. A fast-moving stream carries more sediment and larger boulders than a slow-moving river. These streams began eroding and cutting into the rock layers, forming deep and narrow canyons. Since the uplift began, the North Fork of the Virgin River has carried away several thousand feet of rock that once lay above the highest layers visible today.
The Virgin River is still excavating. Upstream from the Temple of Sinawava the river cuts through Navajo Sandstone, creating a slot canyon. At the Temple, the river has reached the softer Kayenta Formation below. Water erodes the shale, undermining the overlaying sandstone and causing it to collapse, widening the canyon.
Volcanoes of Zion
Zion National Park sits at the boundary between the Basin and Range geologic province and the Colorado Plateau. This transition zone is part of a volcanic arc from near Delta, Utah, south through Cedar Breaks, St. George, Zion National Park, Parashant National Monument on the Arizona Strip, to the volcanoes of the Flagstaff, Arizona area, and east to Albuquerque, north to Capulin Volcano National Monument, and even up into southern Colorado. There have been several eruptions in the last million years. The Kolob Volcano (near Lava Point) is the oldest inside the park at 1.1 million years old. Four others erupted along the Kolob Terrace Road 220,000 to 310,000 years ago, including Firepit and Spendlove Knolls. The most recent eruption inside the park, known as Crater Hill, occurred just below the West Temple along the Virgin River about 120,000 years ago. However, eruptions in St. George were as recent as 41,000 and 32,000 years ago. The most recent eruptions in the region were near Cedar Breaks about 1,000 years ago, in Parashant 950 years ago, and at Sunset Crater 920 years ago. The most recent eruption in Utah was Ice Spring, near Fillmore, only 660 years ago. This multi-state volcanic story is explained on the Grand Canyon-Parashant National Monument website in much further detail. The next volcanic eruption in or near the park could happen any time, but also may not happen for tens of thousands of years.
Geology-in-Action
A landslide once dammed the Virgin River, forming a lake. Sediments settled out of the quiet waters, covering the lake bottom. When the river breached the dam and the lake drained, it left behind a flat-bottomed valley. This change in the character of the canyon can be seen from the scenic drive south of the Zion Lodge near the Sentinel Slide. This slide was active again in 1995, damaging the road.
Flash floods occur when sudden thunderstorms dump water on exposed rock. With little soil to absorb the rain, water runs downhill, gathering volume as it goes. These floods often occur without warning and can increase water flow by over 100 times. In 1998 a flash flood increased the volume of the Virgin River from 200 cubic feet per second to 4,500 cubic feet per second, again damaging the scenic drive at the Sentinel Slide. |
NEW DISCOVERY OF MicroDNA CIRCLES
Your genes and mine differ at thousands of individual points (SNPs, or single nucleotide polymorphisms) and in the number of copies of individual genes (CNV, or copy number variations). But did you know that there are differences in DNA between the individual cells of your body?
• Searching for evidence of genetic mosaics within the brain, researchers found tens of thousands of tiny circles of DNA representing a completely different form from the usual chromosomes. These microDNA circles are typically 200-400 base pairs long, and represent only 0.2% of the total chromosomal DNA, but given that we have 3,156,105,057 base pairs of DNA that’s still a lot. They are thought to arise from genes due to errors in replication or repair. Both single- and double-stranded circles were found. The electron microscope image shows a microDNA circle with a larger molecular model in color on the outside.
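• A quick back-of-the-envelope check of those figures (this arithmetic is ours, not from the paper): 0.2% of 3,156,105,057 bp is roughly 6.3 million bp, and at 200-400 bp per circle that corresponds to roughly 16,000-32,000 circles, consistent with the “tens of thousands” reported.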
• Researchers reasoned that microDNA would leave behind microdeletions, or small gaps in the chromosome. Indeed, when they set about looking for these gaps, they found them at a rate of 1 in 2000, giving rise to considerable genetic variation between cells. Not only could these deletions potentially affect gene function, but they may also serve as a genetic cache of information that may play a role in non-Mendelian inheritance.
• Fun fact: Do you know the origin of the word genome? It is thought to come from the Greek (γίνομαι) for "I become, I am born, to come into being"
• Source: Extrachromosomal microDNAs and chromosomal microdeletions in normal tissues. Shibata Y, Kumar P, Layer R, Willcox S, Gagan JR, Griffith JD, Dutta A. Department of Biochemistry and Molecular Genetics, University of Virginia School of Medicine, Charlottesville, VA, USA. Science. 2012 Apr 6;336(6077):82-6. http://www.ncbi.nlm.nih.gov/pubmed/22403181 |
By Mayara Matos and Thauan Santos |
Data show that more than 3 billion people suffer from water shortages around the globe. The report indicates that the availability of the resource per person has fallen by a fifth over two decades. Among those affected, nearly half face severe water scarcity and even catastrophic droughts due to climate change. The Food and Agriculture Organization of the United Nations (FAO) reported that the number of people impacted in Sub-Saharan Africa alone was around 50 million in 2020.
Impacts on Defense and Security: Water availability is a key factor in assuring the safety of human activities, such as agriculture. Changing rainfall patterns, as well as the shortage of water for irrigation, threaten food production and therefore food security.
Source: HARVEY, Fiona. More than 3 billion people affected by water shortages, data shows. The Guardian, 11/06/2020. Available at: https://amp.theguardian.com/environment/2020/nov/26/more-than-3-billion-people-affected-by-water-shortages-data-shows |
Why do we teach our pupils Mathematics?
Mathematics is crucial for everyday life. It underpins most of science, technology, finance and engineering. We want pupils to appreciate the beauty and power of mathematics, and develop curiosity and enjoyment of it. At Woodlands we want pupils to enjoy the challenge of Mathematics. We want students to be confident that they can think critically, and engage in regular problem solving to allow them to apply their skills in real life. We want pupils to understand data in the world around them and be able to reason and communicate Mathematically. But above all we want them to be numerate, confident, enjoy Mathematics and experience success in the subject, whatever their ability.
What pupils will learn in KS3
At KS3, we offer a broad range of study closely aligned to the National Curriculum, including:
- Number (e.g. order of operations, fractions, decimals and percentages, use of a calculator etc.)
- Algebra (expressions, equations, substitution, graphs, formulae and sequences etc.)
- Ratio, proportion and rates of change
- Geometry and Measures (converting units, area, volume, perimeter, estimation etc.)
- Probability (chance and likelihood of events happening etc.)
- Statistics (averages, graphs and charts etc.)
Pupils revisit these topics over the two years and study them in greater depth from year 7 to year 8.
What pupils learn in KS4
At KS4, we have selected the Edexcel exam board, believing it to have the most accessible language and layout for our pupils’ needs and the best exam experience.
During years 9-11 pupils study a spiral curriculum which continuously develops and builds upon the strands they first visited in KS3; however, there is a greater emphasis on, and deeper learning of:
- Mathematical fluency
- Problem solving
Beyond the classroom
In year 11 pupils have the opportunity to go on an outdoor activity weekend. During this time pupils stay the night at an activity centre and spend the weekend revising and studying mathematics, whilst engaging in abseiling, obstacle courses and other team building activities.
Some other activities that we’ve run at Woodlands are:
- Maths in action trip to London to watch guest speakers
- Inter schools FMSP maths competition
- UKMT maths challenge
- Trips to London Metropolitan, Cambridge and other Universities
- 7 Days to go checkmat Edexcel Higher
- 7 Days to go checkmat Edexcel Foundation
- Click here for Hegarty Maths. |
Growing in STEM: STEM Resources and Materials for Engaging Learning Experiences
Early STEM experiences should include exploratory learning, allowing children to learn content through the processes of inquiry. The STEM experiences teachers provide for young children can involve a variety of learning materials, including children’s literature, consumables and manipulatives, and web-based resources. In this issue we offer suggestions and examples to guide teachers’ selection of classroom STEM resources and materials.
Children’s literature and STEM learning
Use of high-quality STEM-focused children’s literature supports introducing and examining science, technology, engineering, and mathematics concepts in the early childhood classroom (Hong 1996; Patrick, Mantzicopoulos, & Samarapungavan 2008; Sharkawy 2012; Varelas et al. 2014). With the amount of children’s literature aimed at building concept knowledge, teachers may find it difficult to select appropriate books to introduce these ideas. Children’s picture books may contain information that is so oversimplified it can become misleading (Dagher & Ford 2005), leave out key scientific components (Smolkin et al. 2008), or contain little representation of practical or natural sciences (Ford 2005). In addition, finding literature that represents the physical sciences (like motion or astronomy) can be much harder than locating books on life sciences (plants or animals) (Ford 2005; Smolkin et al. 2008). The difficulties in finding high-quality literature can be intensified when seeking literature about technology, engineering, and mathematics.
In addition to selecting books with high-quality artwork/pictures and text, below are two essential questions to consider before introducing STEM-focused children’s literature into the classroom:
- Does the book present content that is technically sound and appropriate for children’s developing understandings?
- Does the book effectively help students build both inquiry and content understandings?
Add loose parts and nonstandard materials for STEM explorations
Another important factor to consider when planning early STEM experiences is the role open-ended materials can play in classroom learning experiences. STEM experiences often involve many different materials for exploration, but you do not need to purchase manufactured curricular materials. Giving children access to open-ended materials can broaden and extend children’s explorations while also limiting expenditures. Consider the theory of loose parts first proposed by architect Simon Nicholson in the 1970s. Loose parts are materials without a predetermined purpose that can be moved, combined, reformed, taken apart, and put back together in numerous ways (Nicholson 1972). Loose parts can be used alone or combined with other materials and can be both manufactured and natural. Nicholson wrote, “In any environment, both the degree of inventiveness and creativity, and the possibility of discovery, are directly proportional to the number and kind of variables in it” (1972, 6). Expanding children’s access to nontraditional STEM materials can serve to provide a wider range of learning opportunities in the classroom.
Consider Remida, the reclaimed materials center in Reggio Emilia, Italy, that houses a wide variety of loose parts and unique items for teachers to bring into their classrooms. This center can serve as a guide for teachers as they build a classroom collection of loose parts for use in STEM explorations.
In order to guide the selection of materials for inclusion in the classroom, think about the following essential questions:
- Can this material be added into a STEM experience or learning center to support student thinking?
- Does the material allow for exploration and inquiry?
- In what ways might children use it to explore the topic at hand?
- How does the material complement the items the children are familiar with?
Examples of High-Quality Children's Books
Life sciences. Looking Closely in the Rain Forest, by Frank Serafini. All ages.
Frank Serafini has a wonderful series of books called Looking Closely. ... This book, focusing on the rain forest, won the National Science Teachers Association’s (NSTA) Outstanding Science Trade Book (OSTB) award in 2011. The writing is almost poetic in its repetition but remains simple and clear enough for very young children to understand. All the books in this series introduce students to the notion of hypothesizing through close inspection, as each plant or animal is preceded by a close-up partial image and the text poses questions about what it might be.
Physical sciences. I Fall Down, by Vicki Cobb. Ages 3–6.
Vicki Cobb introduces the potentially difficult concept of gravity through easy-to-understand language and activities for young children. This NSTA 2005 OSTB award winner challenges children to drip, dribble, and drop different materials while experimenting with the forces of gravity. Other books in Cobb’s Science Play series include I See Myself (light and reflection), I Get Wet (properties of water), and I Face the Wind (properties of air and air currents).
Technology. Ada Lovelace, Poet of Science: The First Computer Programmer, by Diane Stanley. Ages 4–8.
This is the story of Ada Lovelace, the daughter of Lord and Lady Byron. Credited for having her mother’s mathematical and scientific brain, coupled with her father’s creative imagination, Ada Lovelace was the first recognized computer programmer in history. The text offers insight into the history of computers, the Industrial Revolution, and the mechanical loom. Teachers and librarians can draw connections to history and other forms of technology.
Engineering. The Boy Who Harnessed the Wind (picture book edition), by William Kamkwamba and Bryan Mealer. Ages 5 and up.
This book is based on the inspirational, award-winning memoir of 14-year-old William Kamkwamba, who built a windmill from scrap materials to produce electricity for his African village during a drought and famine. This true story about ingenuity, creativity, and persistence in the face of severe adversity will inspire children to imagine their own capabilities. An NSTA OSTB winner in 2012. (There are several books published with this title, so be sure to select the picture book edition.)
Mathematics (numbers and counting). Lifetime: The Amazing Numbers in Animal Lives, by Lola M. Schaefer. All ages.
Winner of NSTA’s OSTB in 2013, Lifetime is filled with charming mixed-media illustrations in numerically accurate pictures. Readers can count how many times a spider spins an egg sac (one) or how many baby seahorses a father seahorse carries in a lifetime (1,000). While younger children may not be able to count to the higher numbers, they can conjecture about more and less, and the text and visual representations make it suitable for even very young audiences. This book also allows for incorporation of science concepts such as life spans and life cycles.
Mathematics (measurement). How Tall, How Short, How Far Away?, by David A. Adler. Ages 5 and up.
Adler delves into the history of measurement and encourages practical applications for students. This book, which won NSTA’s OSTB award in 1999, encourages readers to learn about and try ancient methods of measurement, as well as design their own tools to measure height and distance.
High-quality STEM web resources
While there are a lot of STEM-related resources available, here are three free tools we think early childhood teachers should know about.
PBS Kids (supports all STEM learning)
PBS Kids provides educational programs, resources for parents and teachers, and a thorough list of STEM games for children ages 2–8. Resources are built around all STEM subjects and encourage students to problem solve through activities that include characters from PBS’s televised programs. www.pbskids.org
Peep and the Big Wide World (science, mathematics, and engineering)
A clever and endearing look into science through the lens of a large urban park, Peep and the Big Wide World is an animated series aimed at 3- to 5-year-olds that can captivate viewers of all ages. Peep introduces children to easy-to-replicate science experiments using everyday materials. Renowned early childhood science educator Karen Worth is the show’s educational science advisor, and actress Joan Cusack narrates. The website includes videos, student experiments, and computer activities. www.peepandthebigwideworld.com/en/
Code.org (technology)
This nonprofit organization is dedicated to teaching K–12 students how to code. While all students can benefit, a primary goal of Code.org is to provide technology opportunities to females and other students underrepresented in the computer science field. Videos explain what coding is, and users can access early childhood activities and lesson plans after creating a free account. https://code.org/
We hope early childhood teachers are inspired to think creatively as they plan STEM integration. Teachers can support children’s exploration and learning by ensuring that they have many opportunities for playful engagement. Diversifying STEM materials and resources to include both traditional and nontraditional tools, high-quality STEM literature, and web-based resources deepens children’s daily engagement in science, technology, engineering, and mathematics.
Dagher, Z.R., & D.J. Ford. 2005. “How Are Scientists Portrayed in Children’s Science Biographies?” Science & Education 14 (3): 377–93.
Ford, D.J. 2005. “Representations of Science Within Children’s Trade Books.” Journal of Research in Science Teaching 43 (2): 214–35.
Hong, H. 1996. “Effects of Mathematics Learning Through Children’s Literature on Math Achievement and Dispositional Outcomes.” Early Childhood Research Quarterly 11 (4): 477–94.
Nicholson, S. 1972. “The Theory of Loose Parts: An Important Principle for Design Methodology.” Studies in Design Education Craft and Technology 4(2): 5–14.
NSTA (National Science Teachers Association). 2016. NSTA Recommends. www.nsta.org/recommends/.
Patrick, H., P. Mantzicopoulos, & A. Samarapungavan. 2008. “Motivation for Learning Science in Kindergarten: Is There a Gender Gap and Does Integrated Inquiry and Literacy Instruction Make a Difference.” Journal of Research in Science Teaching 46 (2): 166–91.
Reggio Children. 2005. Remida Day. Reggio Emilia, Italy: Reggio Children.
Sharkawy, A. 2012. “Exploring the Potential of Using Stories About Diverse Scientists and Reflective Activities to Enrich Primary Students’ Images of Scientists and Scientific Work.” Cultural Studies of Science Education 7 (2): 307–40.
Smolkin, L.B., E.M. McTigue, C.A. Donovan, & J.M. Coleman. 2008. “Explanation in Science Trade Books Recommended for Use With Elementary Students.” Science Education 93 (4): 587–610.
Varelas, M., L. Pieper, A. Arsenault, C.C. Pappas, & N. Keblawe-Shamah. 2014. “How Science Texts and Hands-On Explorations Facilitate Meaning Making: Learning From Latina/o Third Graders.” Journal of Research in Science Teaching 51 (10): 1246–74.
Bree Laverdiere Ruzzi is a PhD student and graduate teacher assistant at Old Dominion University in Norfolk, Virginia. Bree was previously a K–12 school librarian for Virginia Beach City Public Schools. Her research includes working with school librarians and early childhood/elementary teachers to collaboratively create inquiry-based science instruction enjoyable to young students. [email protected]
Angela Eckhoff, PhD, is an associate professor of teaching and learning–early childhood at Old Dominion University, in Norfolk, Virginia. |
Quercus alba, White Oak
The white oak is a large, strong tree. It has a short stocky trunk with massive horizontal limbs. The fall foliage is often quite beautiful and showy.
Sunlight Preference: Full sun and partial shade are best for this tree, meaning it prefers a minimum of four hours of direct, unfiltered sunlight each day.
It can adapt to a variety of soil textures, but prefers deep, moist, well-drained sites. High pH soil will cause chlorosis. White oak is less susceptible to oak wilt than the red oak species.
New transplants should receive plenty of water and mulch beneath the canopy to eliminate grass competition.
The acorns are one of the best sources of food for wildlife and are gathered, hoarded and eaten by birds, hoofed browsers and rodents.
Oaks, in general, are the best pollinator plants there are! In Ohio, Oaks support over 477 different species of butterflies and moths – more than any other plant!
Acorns are produced generally when the trees are between 50-100 years old. Open-grown trees may produce acorns as early as 20 years. Good acorn crops are irregular and occur only every 4-10 years.
- Can live for centuries.
- Features alternating leaves that are 4–8" long with 3–4 rounded, finger-like lobes on each side and one at the tip. Intervening sinuses sometimes reach almost to the mid-rib.
- Produces long, yellowish-green catkins drooping in clusters in the spring.
- Develops a deep taproot, making it difficult to transplant.
- Is extremely sensitive to soil compaction and grade changes.
- This information has been adapted from Arbor Day Foundation. Please find more information here. |
How are ice cores dated? First, there is some uncertainty in linking Taylor Glacier samples to ice core records due to analytical uncertainties and the possible nonuniqueness of the match. Second, the ice core chronologies themselves are subject to uncertainties. For the last 60 ka, an annual layer-counted age scale is available for Greenland, to which Antarctic records can be tied using globally well-mixed CH4; beyond this age, ice flow modeling is used to reconstruct the chronology. The uncertainty in the ice core chronologies can be evaluated by comparing them to independently dated speleothem records showing concomitant events. Third, the Kr samples have a spread in ages due to their finite size. We estimate this last effect is only important for the oldest sample, where the layers are strongly compressed. The first sample (Kr-1) was obtained along the main transect. The sample is from the Younger Dryas period, which is clearly identified by its characteristic CH4 sequence. The top axis of the figure shows the distance along the transect in meters. We assign the sample a stratigraphic age. Going down-glacier the ice gets progressively older; ice with ages between 10 and 55 ka is found in stratigraphic order.
Although it has been possible to construct a new index of global volcanism using ice core acidity and sulphate records for the period from to the present, for the year period, there are fewer ice cores available, and dating problems become more serious, especially for Antarctic cores. An Ice core-Volcanic Index constructed for the period A. Except for a very few eruptions, the ice core record currently available is insufficient to delineate the climatic forcing by explosive volcanic eruptions before about for the Northern Hemisphere and before about for the Southern Hemisphere.
Additional ice cores, however, combined with geological and biological information, will allow this to be done in the future.
Dating of the ice cores is essential in order to reconstruct the temporal development of the various dry extraction techniques (2) after having problems with CO2.
An ice core is a core sample that is typically removed from an ice sheet or a high mountain glacier. Since the ice forms from the incremental buildup of annual layers of snow, lower layers are older than upper, and an ice core contains ice formed over a range of years. Cores are drilled with hand augers for shallow holes or powered drills; they can reach depths of over two miles (about 3.2 km).
The physical properties of the ice and of material trapped in it can be used to reconstruct the climate over the age range of the core. The proportions of different oxygen and hydrogen isotopes provide information about ancient temperatures , and the air trapped in tiny bubbles can be analysed to determine the level of atmospheric gases such as carbon dioxide.
Since heat flow in a large ice sheet is very slow, the borehole temperature is another indicator of temperature in the past. These data can be combined to find the climate model that best fits all the available data. Impurities in ice cores may depend on location. Coastal areas are more likely to include material of marine origin, such as sea salt ions.
Greenland ice cores contain layers of wind-blown dust that correlate with cold, dry periods in the past, when cold deserts were scoured by wind. Radioactive elements, either of natural origin or created by nuclear testing , can be used to date the layers of ice. Some volcanic events that were sufficiently powerful to send material around the globe have left a signature in many different cores that can be used to synchronise their time scales.
Ice cores have been studied since the early 20th century, and several cores were drilled as a result of the International Geophysical Year (1957–58).
These are kilometres-long cylinders of ice drilled in short sections from the Greenland and Antarctic icecaps. The theory is that air bubbles trapped in the ice are samples of the ancient atmosphere, and thus give an accurate reading of CO2 levels in those ancient times. The ice cores tell us that the pre-industrial level of CO2 was about 280 parts per million (ppm).
By Michael Le Page. See all climate myths in our special feature. How should past CO2 levels compare with past temperatures? If there is no relation between CO2 and temperature, there should be no correlation at all. If CO2 is the only factor determining temperature, there should be a very close correlation. If CO2 is just one of several factors, the degree of correlation will depend on the relative importance of CO2 and will vary depending on how much other factors change.
So what has actually happened? The best evidence comes from ice cores. As the snow falling on the ice sheets in Antarctica or Greenland is slowly compressed into ice, bubbles of air are trapped, making it possible to work out the concentration of CO2 in the atmosphere going back hundreds of thousands of years. There is no way to work out the global temperature at the time the ice formed, but clues to the local temperature come from the relative amount of heavy hydrogen (deuterium) in the water molecules of the ice compared with seawater, or from the amount of the heavy oxygen isotope oxygen-18.
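For reference, such isotope ratios are conventionally reported in delta notation (standard practice in ice core science, not specific to this article): δ18O = (R_sample / R_standard - 1) × 1000 ‰, where R is the ratio of oxygen-18 to oxygen-16 (or of deuterium to hydrogen for δD) and the standard is mean ocean water; more negative values generally indicate colder temperatures at the time the snow fell.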
Ice sheets have one particularly special property. They allow us to go back in time and to sample accumulation, air temperature and air chemistry from another time. Ice core records allow us to generate continuous reconstructions of past climate, going back at least 800,000 years.
By looking at past concentrations of greenhouse gases in layers in ice cores, scientists can calculate how modern amounts of carbon dioxide and methane compare to those of the past, and, essentially, compare past concentrations of greenhouse gases to temperature. Ice coring has been around since the 1950s. Ice cores have been drilled in ice sheets worldwide, but notably in Greenland and Antarctica [4, 5]. Through analysis of ice cores, scientists learn about glacial-interglacial cycles, changing atmospheric carbon dioxide levels, and climate stability over the last 10,000 years.
Despite these issues, ice cores from Greenland and Antarctica show how to independently date the air and the ice, and to improve temperature reconstructions.
We know what global temperatures are like now, from direct measurement around the globe. And we know quite a lot about what temperatures were like over the past few hundred years thanks to written records. But what about further back than that? How do we know what temperatures were like a thousand years ago, or even hundreds of thousands of years ago? There is, of course, no written record that far back in history — but there is a chemical record, hidden in the ice of Antarctica and Greenland.
While I was there, I had the opportunity to visit their ice core lab, where they analyse sections of ice cores brought back from Antarctica. From these unassuming columns of ice scientists can determine past temperatures and climates, and can also give a humbling perspective on how human activities can have serious impacts on our atmosphere. These early cores were only drilled to a depth of around metres, and the low quality of the cores recovered prevented any significant analytical work once they were recovered.
The first ever continuous ice core all the way down to the bedrock in Greenland was drilled in , and was metres long.
How far into the past can ice-core records go? Scientists have now identified regions in Antarctica they say could store information about Earth’s climate and greenhouse gases extending as far back as 1.5 million years.
This, in turn, allows them to make better predictions about how climate will change in the future. Now, an international team of scientists wants to know what happened before that.
These dating problems undoubtedly have some negative effect on the poor correlations among ice core records as seen below but cannot be the primary cause.
Deep ice core chronologies have been improved over the past years through the addition of new age constraints. However, dating methods are still associated with large uncertainties for ice cores from the East Antarctic plateau where layer counting is not possible. Consequently, we need to enhance the knowledge of this delay to improve ice core chronologies. It is especially marked during Dansgaard-Oeschger 25 where the proposed chronology is 2. Dating of 30m ice cores drilled by Japanese Antarctic Research Expedition and environmental change study.
Introduction: It is possible to reveal past climate and environmental change from ice cores drilled in polar ice sheets and glaciers. The 54th Japanese Antarctic Research Expedition conducted several shallow core drillings up to 30 m depth in the inland and coastal areas of the East Antarctic ice sheet.
Ice core samples were cut at a thickness of about 5 cm in the cold room of the National Institute of Polar Research, and analyzed for ions, water isotopes, dust and so on. We also conducted dielectric profile (DEP) measurements. The ages of key layers from large-scale volcanic eruptions were based on Sigl et al.
Marmots are species of medium-sized robust, short-legged burrowing herbivorous rodents in the genus Marmota, family Sciuridae, order Rodentia. Marmots are closely related to the ground squirrels and gophers. Marmots live in burrows that they dig themselves, or sometimes in the deep crevices of rock piles and talus slopes beneath cliffs. Most species of marmots occur in alpine or arctic tundra or in open forests of North America, Europe, and Asia. The woodchuck or groundhog of North America is also a familiar species of marmot found in agricultural landscapes within its range.
Marmots have a plump, sturdy body, with a broad head, and small but erect ears. The legs and tail of marmots are short, and their fingers and toes have strong claws, and are useful tools for digging burrows. Marmots commonly line their subterranean dens with dried grasses and other haylike materials. Marmots are rather slow, waddling animals, and they do not like to venture very far from the protection of their burrows and dens. Marmots can climb rock faces and piles quite well. The pelage of marmots is short but thick, and is commonly brown or blackish colored.
Marmots often sit up on their haunches, and in this position they survey their domain for dangerous predators. Marmots are rather vocal animals, emitting loud, harsh squeaks and squeals as warnings whenever they perceive a potential predator to be nearby. As soon as any marmot hears the squeak of another marmot, it dashes back to the protection of its burrow. Marmots also squeak when communicating with each other, or if they are injured. Marmots are loosely social animals, sometimes living in open colonies with as many as tens of animals living in a communal maze of interconnected burrows.
Marmots are herbivores, eating the above-ground tissues and tubers of a wide range of herbaceous plants, as well as buds, flowers, leaves, and young shoots of shrubs. They store food in their dens, some of which is consumed during the wintertime.
Marmots gain weight through the growing season, and are very fat when they go into hibernation at the onset of winter. The hibernation occurs in dens that are thickly hay-lined for insulation, and the entrance to their den is plugged with hay or dirt at this time. Some alpine populations of marmots migrate to traditional winter-den sites lower in altitude than their summer range. Marmots typically winter in tightly huddling family groups. Marmots may occasionally waken from their deep sleep to feed, sometimes outside if the day is relatively warm and sunny.
Various animals are predators of marmots, including golden eagles, hawks, foxes, and coyotes.
Humans are also predators of marmots in some parts of their range, using the animals as a source of meat, and sometimes as a source of medicinal oils.
The most familiar marmot to most North Americans is the woodchuck or groundhog (Marmota monax), a widespread and common species of open woodlands, prairies, roadsides, and the edges of cultivated fields. The woodchuck is a relatively large, reddish or brownish, black-footed marmot, with animals typically weighing about 7-13 lb (3-6 kg), although one exceptionally fat captive animal reached 37 lb (17 kg) in the late autumn. Woodchucks dig their burrow complexes in well-drained soil, generally on the highest ground available to them.
The hoary marmot (M. caligata ) is a species of alpine tundra and open montane forests of the mountains of northwestern North America, also occurring in the northern tundra of Alaska, Yukon, and the western Northwest Territories. There are various subspecies of hoary marmots, including the small dark-brown Vancouver Island marmot (M. c. vancouverensis ), the Olympic marmot (M. c. olympus ) of northwestern Washington state, and the Kamchatkan marmot (M. c. camtscharica ) of the mountains of far-eastern Siberia.
The yellow-bellied marmot (M. flaviventris ) is a yellow-brown species of alpine and open montane habitats in the western United States.
The alpine marmot (Marmota marmota ) occurs in the Alps of northern Italy, southeastern France, and Switzerland. The habitat of this species is alpine tundra and meadows, where it lives in rock piles and in burrows. This species is subjected to a sport hunt, the male animals being referred to as bears, and the females as cats. The meat of these marmots is eaten, and their fat is a highly regarded folk medicine in some parts of its European range.
The bobak marmot (M. bobak) occurs rather widely in high-altitude grasslands and alpine tundra of the Himalayan Mountains of central Asia. This species is hunted as food and for its fat throughout its range.
"Marmots." The Gale Encyclopedia of Science. . Encyclopedia.com. (November 17, 2018). https://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/marmots-0
"Marmots." The Gale Encyclopedia of Science. . Retrieved November 17, 2018 from Encyclopedia.com: https://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/marmots-0 |
Everyone is exposed to ultraviolet (UV) radiation from the sun and other artificial sources used in industry, commerce and recreation. Emissions from the sun include light, heat and UV radiation. The sun is the strongest source of UV radiation in our environment. Small amounts of UV radiation are essential for the production of vitamin D in humans. However, over-exposure to it may lead to short and long term adverse effects on the skin, eyes and immune system.
UV radiation is electromagnetic radiation with wavelengths between 100 and 400 nm. It is divided into three bands: UVA (315–400 nm), UVB (280–315 nm) and UVC (100–280 nm).
As sunlight passes through the atmosphere, all UVC and about 90% of UVB is absorbed by ozone, water vapour, oxygen and carbon dioxide. UVA is not filtered as significantly by the atmosphere. Therefore, the UV radiation reaching the Earth’s surface is largely composed of UVA with a smaller UVB component.
UV Radiation levels are influenced by:
The higher the sun in the sky, the higher the UV radiation level. Thus UV radiation levels vary with time of the day and time of year.
The closer to the equator, the higher the UV radiation level. Closer to the equator, the sun’s rays have a shorter distance to travel through the atmosphere and therefore less radiation can be absorbed.
UV radiation levels are highest under cloudless skies. However, light or thin clouds have little effect in reducing exposure and may even enhance UV levels because of scattering.
The higher the altitude, the higher the UV radiation level, because there is less atmosphere available at higher altitudes to absorb the radiation. With every 1000 m increase in altitude, UV radiation levels increase by around 10% (a rough numerical sketch of this scaling is given after the list of factors below).
Ozone in the stratosphere absorbs some of the UV radiation that would otherwise reach the Earth’s surface. Ozone levels vary over the year and even across the day.
UV radiation is reflected or scattered to varying extents by different surfaces, e.g. fresh snow can reflect as much as 80% of UV radiation, dry beach sand about 15% and sea foam about 25%.
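As a rough numerical illustration of the altitude effect noted above (the 10%-per-1000 m figure is taken from the text; treating it as compounding multiplicatively is an assumption, and the function below is purely illustrative):

```python
def uv_altitude_factor(altitude_m: float, per_km_increase: float = 0.10) -> float:
    """Relative UV level at a given altitude compared with sea level,
    assuming roughly +10% per 1000 m of elevation, compounded."""
    return (1.0 + per_km_increase) ** (altitude_m / 1000.0)

# Example: at 3000 m the UV level is roughly 1.1**3, i.e. about a third
# higher than at sea level.
print(round(uv_altitude_factor(3000), 2))  # -> 1.33
```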
What is ultraviolet?
Ultraviolet (“UV”) light is one of the frequencies of light that is given off by the sun. Ultraviolet light is not visible to the human eye; it is an invisible part of the “electromagnetic spectrum”. Ultraviolet radiation, visible light and infrared energy are all given off by the sun. The image below shows the electromagnetic spectrum, which includes ultraviolet, infrared and visible light along with other types of energy.
Some colors of wet pour surfacing are more prone to higher levels of degradation from UV (ultraviolet) light exposure. This means that certain colors used in the manufacturing process of the rubber can fade over time because they have been exposed to ultraviolet radiation from the sun. Generally speaking, the brighter colors are more prone to UV degradation; during this process the sunlight radiation attacks the synthetic polymers within the EPDM material, leading to a loss of the original vibrant color. |
A mineral is a naturally occurring, homogeneous solid with a crystalline atomic structure. Crystallinity implies that a mineral has a definite and limited range of composition, and that the composition is expressible as a chemical formula. Some definitions of minerals describe them as inorganic materials; however, both diamond and graphite are considered minerals, and both are primarily composed of carbon, which would make them organic. So this leads me, as an engineer, to believe that mineralogists do not have a good, precise definition of a mineral, but rather a loose definition. The definition above is the most inclusive and would include all substances currently described as minerals. The key items that make something a mineral are occurring naturally and having a definite crystal structure that is expressible as a chemical formula. Rocks that do not meet these criteria are referred to as amorphous - not having a definite structure or being expressible as a chemical formula. Some elements that occur naturally and are minerals are arsenic, bismuth, platinum, gold, silver, copper, and sulphur.
THE DEFINITION OF ORGANIC: Organic chemistry is the study of those substances containing carbon in combination with hydrogen (H), and a few other nonmetals, namely oxygen (O), nitrogen (N), sulfur (S) and the halogens (F2, Cl2, Br2, and I2).
Calcite is one of the most important carbonate minerals. It is often referred to as an industrial mineral, and is found in over 300 forms. The specimen in the photo, from the Southern California deserts, is white. Calcite may be any color, from colorless to black, depending upon the other minerals contained in it. Calcite has a hardness of 3, a specific gravity (SG) of 2.72, and rhombohedral cleavage. Calcite is used in cement making, aggregate, quicklime manufacture, metallurgical uses and a number of other industrial uses.
Limestone is a sedimentary rock composed primarily of the mineral calcite (CaCO3). Limestone is predominately calcium carbonate, with minor amounts of magnesium carbonate, silica, clay, iron oxide, or carbonaceous material. Limestone is widely distributed and may be fine grained, compact, coarse grained, or composed of fragmented material. Limestones comprise approximately 10 percent of the sedimentary rocks.
It forms either by direct crystallization from water (usually seawater) or by accumulation of shell and shell fragments. Most limestones were formed by organisms such as corals, mollusks, brachiopods, etc. In the first case, it carries a record of the chemical composition of seawater and it provides evidence of how that composition has changed with time. In the second case, limestone provides a record of the evolution of many important fossils. Limestone usually forms in shallow water less than 20 m (70 ft) deep and thus also provides important geological information on the variation in sea level in the past. Limestone rocks are frequently riddled with caves. All limestone forms from the precipitation of calcium carbonate from water. Calcium carbonate leaves solution in many ways and each way produces a different kind of limestone. All the different ways can be classified into two major groups: either with or without the aid of a living organism.
White limestone is relatively rare, and has many commercial uses: fillers for paint, rubber products, putty, pottery, paper manufacturing, plastics, food, flooring, PVC pipes, white ink, toothpaste, wire coating, caulking, glue, caulking compounds, resins, polyester, ceiling and wall textures, drywall, mud, joint compounds, stucco, fiberglass, roofing shingles, and a few more.
The white limestone specimen was provided by Specialty Minerals, Lucerne Valley CA mine. |
Gasoline Fuel Cell Would Boost Electric Car Range
The advanced fuel cell could eliminate range anxiety and make electric cars more practical, while keeping carbon-dioxide emissions low.
If you want to take an electric car on a long drive, you need a gas-powered generator, like the one in the Chevrolet Volt, to extend its range. The problem is that when it’s running on the generator, it’s no more efficient than a conventional car. In fact, it’s even less efficient, because it has a heavy battery pack to lug around.
Now researchers at the University of Maryland have made a fuel cell that could provide a far more efficient alternative to a gasoline generator. Like all fuel cells, it generates electricity through a chemical reaction, rather than by burning fuel, and can be twice as efficient at generating electricity as a generator that uses combustion.
The researchers’ fuel cell is a greatly improved version of a type that has a solid ceramic electrolyte, and is known as a solid-oxide fuel cell. Unlike the hydrogen fuel cells typically used in cars, solid-oxide fuel cells can run on a variety of readily available fuels, including diesel, gasoline, and natural gas. They’ve been used for generating power for buildings, but they’ve been considered impractical for use in cars because they’re far too big and because they operate at very high temperatures—typically at about 900 °C.
By developing new electrolyte materials and changing the cell’s design, the researchers made a fuel cell that is much more compact. It can produce 10 times as much power, for its size, as a conventional one, and could be smaller than a gasoline engine while producing as much power.
The researchers have also lowered the temperature at which the fuel cell operates by hundreds of degrees, which will allow them to use cheaper materials. “It’s a huge difference in cost,” says Eric Wachsman, director of the University of Maryland Energy Research Center, who led the research. He says the researchers have identified simple ways to improve the power output and reduce the temperature further still, using methods that are already showing promising results in the lab. These advances could bring costs to a point where they are competitive with gasoline engines. Wachsman says he’s in the early stages of starting a company to commercialize the technology.
Wachsman’s fuel cells currently operate at 650 °C, and his goal is to bring that down to 350 °C for use in cars. Insulating the fuel cells isn’t difficult since they’re small—a fuel cell stack big enough to power a car would only need to be 10 centimeters on a side. High temperatures are a bigger problem because they make it necessary to use expensive, heat-resistant materials within the device, and because heating the cell to operating temperatures takes a long time. By bringing the temperatures down, Wachsman can use cheaper materials and decrease the amount of time it takes the cell to start.
Even with these advances, the fuel cell wouldn’t come on instantly, and turning it on and off with every short trip in the car would cause a lot of wear and tear, reducing its lifetime. Instead, it would be paired with a battery pack, as a combustion engine is in the Volt, Wachsman says. The fuel cell could then run more steadily, serving to keep the battery topped up rather than providing bursts of acceleration.
The researchers achieved their result largely by modifying the solid electrolyte material at the core of a solid-oxide fuel cell. In fuel cells on the market, such as one made by Bloom Energy, the electrolyte has to be made thick enough to provide structural support. But the thickness of the electrolyte limits power generation. Over the last several years, researchers have been developing designs that don’t require the electrolyte to support the cell so they can make the electrolyte thinner and achieve high power output at lower temperatures. The University of Maryland researchers took this a step further by developing new multilayered electrolytes that increase the power output still more.
The work is part of a larger U.S. Department of Energy effort, over the past decade, to make solid-oxide fuel cells practical. The first fruits of that effort likely won’t be fuel cells in cars—so far, Wachsman has only made relatively small fuel cells, and significant engineering work remains to be done. The first applications of solid-oxide fuel cells in vehicles may be on long-haul trucks with sleeper cabs.
Equipment suppliers such as Delphi and Cummins are developing fuel cells that can power the air conditioners, TVs, and microwaves inside the cabs, potentially cutting fuel consumption by 85 percent compared to idling the truck’s engine. The Delphi system also uses a design that allows for a thinner electrolyte, but it operates at higher temperatures than Wachsman’s fuel cell. The fuel cell could be turned on Monday and left to run at low rates all week, and still deliver the 85 percent reduction. Delphi has built a prototype and plans to demonstrate its system on a truck next year. |
- The definition of a trench is a long, narrow ditch sometimes dug by troops during wartime to hide from enemies.
A long narrow ditch dug in World War I to protect troops from being seen by the enemy is an example of a trench.
- Trench means to dig a long and narrow ditch.
When you dig a long, narrow ditch to place a pipe, this is an example of a time when you trench.
- to cut, cut into, cut off, etc.; slice, gash, etc.
- to cut a deep furrow or furrows in
- to dig a ditch or ditches in
- to surround or fortify with trenches; entrench
Origin of trench: Late Middle English trenchen; from Old French trenchier (Fr trancher), to cut, hack; probably from Classical Latin truncare, to cut off: see truncate
- to dig a ditch or ditches, as for fortification
- to infringe (on or upon another's land, rights, time, etc.)
- to verge or border (on); come close
- a deep furrow in the ground, ocean floor, etc.
- a long, narrow ditch dug by soldiers for cover and concealment, with the removed earth heaped up in front
Origin of trench: ME < OFr trenche (Fr tranche, a slice) < trencher
- a system of trenches dug as fortifications, as in WWI
- a situation characterized by the heavy or physical work of any struggle or enterprise
- A deep furrow or ditch.
- A long narrow ditch embanked with its own soil and used for concealment and protection in warfare.
- A long, steep-sided valley on the ocean floor.
verb: trenched, trench·ing, trench·es
- To dig or make a trench or trenches in (land or an area, for example).
- To place in a trench: trench a pipeline.
- To dig a trench or trenches.
- To encroach. Often used with on or upon: “The bishop exceeded his powers, and trenched on those of the king” (Francis Parkman).
- To verge or border. Often used with on or upon: “a broad playfulness that trenched on buffoonery” (George Meredith).
Origin of trench: Middle English trenche, from Old French, from trenchier, to cut, perhaps from Vulgar Latin *trincare, variant of Latin truncare, from truncus, trunk; see terə-2 in Indo-European roots.
(third-person singular simple present trenches, present participle trenching, simple past and past participle trenched)
- (usually followed by upon) To invade, especially with regard to the rights or the exclusive authority of another; to encroach.
- (military, infantry) To excavate an elongated pit for protection of soldiers and/or equipment, usually perpendicular to the line of sight toward the enemy.
- (archaeology) To excavate an elongated and often narrow pit.
- To have direction; to aim or tend.
- To cut; to form or shape by cutting; to make by incision, hewing, etc.
- To cut furrows or ditches in.
- to trench land for the purpose of draining it
- To dig or cultivate very deeply, usually by digging parallel contiguous trenches in succession, filling each from the next.
- to trench a garden for certain crops
From Old French trenche. |
When your child is sick, it can take a toll not only on them, but on you as a parent. Your pediatrician is available to help you restore the health of your child. Whooping cough is an infection of the respiratory system that is caused by the bacterium Bordetella pertussis (or B. pertussis). This sickness is characterized by severe coughing spells, which can sometimes end in a “whooping” sound when the person breathes in.
Whooping cough mainly affects infants younger than 6 months old, before immunizations adequately protect them, and kids 11 to 18 years old whose immunity has started to fade. With help from your pediatrician, you can find relief for your infant from whooping cough.
The Signs and Symptoms
The first symptoms of whooping cough are similar to those of a common cold:
- Runny nose
- Mild cough
- Low-grade fever
After about 1 to 2 weeks, the dry, irritating cough evolves into coughing spells, which can last for more than a minute. When a coughing spell occurs, the child might turn red or purple, and at the end of the spell, they may make a characteristic whooping sound when breathing in.
By visiting your pediatrician, you can take the next step toward helping your child feel better once again. |
General Overview of Concussions
Most people in the general population will have at least a vague idea of what the diagnosis of a concussion entails, at least to the extent of knowing that it is due to a head injury. Truly understanding what a concussion is, however, depends on a more detailed definition of what takes the diagnosis from just a simple bump of the head to an actual concussion.
One very significant distinction that makes a brain injury a concussion is that there is a change in brain function, not in the actual structure itself. A variety of causative factors, from being hit in the head to whiplash, can trigger changes in the brain that affect normal functioning. This is a large reason why so much focus is put into concussion prevention for young children, since this functional change could affect their developmental growth.
Although a concussion can be labeled as a mild traumatic brain injury, it actually involves a very complex pathophysiological process, which affects the brain. This process interferes with the regular function of the brain, which can be the part of the concussion that is most alarming to individuals suffering from a concussion. There are metabolic changes that take place within the brain following the concussion.
Many people might incorrectly associate a concussion with loss of consciousness following a blow to the head, but although loss of consciousness is possible, it is not as common as one might think. In reality, brain damage can still occur without the patient losing consciousness, and this is actually the most common situation involving concussions. An individual might even experience a concussion without realizing it, especially without the obvious sign of losing consciousness. It is important to check for damage after any severe head injury, even if consciousness was maintained the entire time.
Loss of consciousness should not be the sole diagnostic factor for any head injury, particularly concussions. In order to be thorough in the diagnosis, a health care provider might perform a physical exam, focusing on abilities that might be affected by a concussion. This includes thinking abilities, coordination, and reflexes. If necessary, this could be followed up by an EEG, MRI, or CT scan of the head to determine whether there is further damage. Recovery from such injuries depends on the severity of the particular case and specific symptoms that the patient experiences.
For a general definition, one should understand that concussions could produce symptoms in many different ways. The neurobiological changes can affect the injured person in multiple ways mentally, whether by hindering their memory, by making it difficult to concentrate, or a variety of other ailments that will be discussed later. There are also physical symptoms that can occur, from headaches to lack of energy. On top of these two primary affected areas, the injured person might also suffer from emotional conflicts or sleep disturbance. Symptoms in patients are also shown to worsen significantly with repeated concussions or head injuries, as the injured person is already at significant neurobiological risk. Taking the right steps towards prevention is the primary method for the safety and protection against concussions.
One reason that brings concussions to the forefront of discussion in current society is the impact that this injury has on athletes. For many years, the severity of receiving multiple concussions went unrecognized, but now the awareness is growing. NFL football players are drawing attention to this situation by retiring as young as 23 due to repeated concussions, and many who criticize the decision are unaware of the reality of the situation. There are life-changing conditions that can arise with a large number of concussions. Football players, hockey players, and many other contact sport players will experience repeated head impacts that could lead to brain injury. Even after a brain injury, the players get back in the game, and their coaches and sport leagues often encourage the continued playing. The dangers of having more and more concussions, however, have led to an increase in young athletes retiring early.
Other topics that will be discussed in-depth in later chapters are specific symptoms, causes, risk factors, and complications of concussions. Each of these has many potential outcomes for an individual who suffers a brain injury, so it is important to know all of the possible outcomes before somebody puts their future at risk by playing sports with a brain injury.
Here is a brief outline, including some of what has been mentioned above just to provide a thorough overview of what is to come:
The Brain: Understanding the structure and function of the human brain can help paint a complete picture for how a concussion or any brain injury truly affects the hurt individual. As mentioned above, the changes occurring are functional rather than structural, and this article explains how these changes take place. Making the connection between these neurobiological changes and the symptoms experienced can shed a new light on why certain symptoms take place.
Symptoms: The symptoms for a single concussion are often mild and go away after a short period of time, typically anywhere from a week to a month. If symptoms persist, however, this could be indicative of more severe injury or a developing condition such as CTE. There are also more mild conditions, such as post-concussion syndrome, that will typically resolve themselves in six months to a year. Knowing the symptoms is the first step in raising awareness, as it allows hurt individuals, teammates, coaches, family, and bystanders to be aware when somebody might be suffering from a concussion. Without this awareness of symptoms, a hurt individual might not seek medical help for what they are experiencing and instead continue to put themselves at risk for further injury.
Causes: There are numerous ways in which a concussion can occur. Some of the most common causes found are accidental falls, contact sports, automobile accidents, and recreational activities. Certain precautionary measures can be taken in these areas to reduce the number of concussions as a result.
Risk Factors: What might put an individual more at risk for serious brain injuries? There are different variables and risk factors, all of which can change depending on age, sex, and occupation.
Complications: Brain injuries are no simple matter, no matter how mild the injury is. There are numerous complications that can arise for somebody with a brain injury.
Treatment and drugs: The most common treatment for a concussion is simply rest, in order to allow the brain the time it needs to recover. For some, such as those suffering from CTE or worse conditions, treatment might be focused more on easing the damage rather than healing entirely, as this condition is irreversible.
Prevention: The best step towards dealing with concussions is prevention, and awareness is the key factor of this. Raising awareness about the risks of multiple concussions is the best way to keep individuals from suffering from worse degenerative brain conditions, such as CTE.
Long-term Effects: Mild, one-time concussions won't often incur long-term effects. If they do, this often indicates that there is something worse than just a mild injury. Brain damage can lead to loss of occupational and emotional skills, both of which can have a negative impact on certain aspects of an individual's life.
Costs: Treating a brain injury is no cheap matter. For somebody with a serious condition, these costs can add up to over a million dollars long-term. Since medical attention is necessary and can't be avoided for someone with brain damage, these costs can create a huge financial burden for the individual and his or her family. With long-term conditions, rehabilitation and therapy could be necessary costs in order to regain skills or recover from certain symptoms. Rehabilitation over a long period of time can incur great financial costs.
The Structure and Function in the Human Brain
Concussions cause neurobiological changes within the brain. In order to understand concussions on a deeper scientific level, it is necessary to dive into the biology behind the changes that the brain undergoes with a mild traumatic brain injury (TBI). The details and connections behind these changes can unveil correlations between injuries and symptoms or why specific precautions are necessary to take regarding concussions. The damage caused by a concussion is specific to functional changes not structural damage within the brain, and these functional changes manifest in different ways and in different degrees depending upon the severity of the injury.
The Initial Impact
Beginning with that first blow to the head or the jolt that moves the body, it is believed that this sudden movement or impact creates a wave of energy that passes through the brain tissue.1 This initiates a cascade within the brain that affects areas of ionic, metabolic, and physiologic function. If the concussion is less severe, then these issues should correct themselves in a short amount of time and be back to normal. The initial impact that causes the concussion is not fatal in itself, but it is the following aftereffects that present potential health problems.
In ordinary circumstances, the skull is present to protect the brain from damage or injury. In the event of a concussion, however, the skull is actually doing the reverse by being a blockade for the brain to ram into. Imagine a car accident, in which the car runs into a wall and stops moving, but the motion propels the driver's body forward still. This is typically the case with a concussion as well, in which the skull stops moving but the wave of energy propels the brain still, and it has a rough impact with the skull. In this case, the body part intended to protect the brain is actually harming it. Studies have shown that the brain can actually stretch or twist during this motion and this contortion of the brain strains many of the parts that deliver important functions. This puts stress on the brain, as will be discussed in more detail below.
Role of Recovery Time
The length of time that recovery takes plays a role in the vulnerability of the brain to a second injury. The quicker the brain can recover, the quicker blood flow increases back to the brain. However, during the recovery process, the cerebral blood flow is reduced and this can present many issues. The lack of sufficient blood to the brain contributes to cell dysfunction, which also increases the cell's vulnerability to a second injury. Repeated damage in such a short amount of time, before the brain has had the chance to fully recover, creates further problems and worsens the damage from the first impact. A closer look at the damage that repeat concussions can cause will be explored later in the article, but first it is necessary to understand just what is happening with cell dysfunction and the changes that concussions cause.
As mentioned, the recovery period after the initial impact can result in what is called cell dysfunction. In order to understand what is different about the function of the brain after a concussion, one must first understand what it is like in regular functioning circumstances, or before the concussion occurs.
Within the brain, there are billions upon billions of nerve cells called neurons. When the brain is operating properly and all functions are in working order, neurons act as signaling transmitters sending out electrical and chemical signals to and from the rest of the body. The process occurs when a signal arrives at the neuron and travels down the axon, which you might imagine as a sort of tunnel or pathway to another cell. Axons connect neurons to other neurons via their dendrites, which extend out from the neuron's cell body. The signal that is traveling down the axon then reaches the synapse, which is the gap between each neuron that consists of neurotransmitters. These neurotransmitters are released in an organized manner, which the next cell receives as a specific coded message. This whole process of transmitting signals and messages to the brain via neurons is highly organized and detailed, but a concussion disrupts the process until recovery has the time to fully restore the organization. Nerve cells are highly sensitive and fragile, which makes any sudden movements of the head potential for injury.
When the neuron process is disrupted by an injury, the fragile axons are likely to swell and even break away from the cell body and degenerate. This disrupts the cell communication, as the axons are the means of communication from neuron to neuron, and the degeneration of an axon releases toxic levels of the neurotransmitters mentioned above into the rest of the cell. Even after the injury, damage can continue to take place, as neurons surrounding the injured area can continue to die and worsen the injury over the next day. The contortion of the brain during the initial impact can also contort all of these neurons and axons, which in turn impacts the function ability of the brain.
The sudden, overwhelming release of neurotoxins creates an energy crisis in the brain, which explains why the pace of the brain is much slower following a concussion. The brain is then forced to work harder to accomplish ordinary tasks, so the injured person will experience a sluggish state until recovery is fully accomplished. Many of the symptoms experienced during this stage can be attributed to this energy crisis and the overworking of the brain while it attempts to recover.
Cognitive function is impaired because many of the neural pathways are either damaged or destroyed, which makes it difficult to access certain memories. In a properly functioning brain, the pathways between neurons would be the way that a person remembers certain words or shapes, so with the axons either twisted or degenerated, this pathway is inaccessible.2
An individual suffering from a concussion might experience difficulty remembering what happened in the minutes following the initial impact, or they might have difficulty with comprehension or finding the right words to say. This confusion and incomprehension is often the first evident sign to bystanders that the injured person is experiencing a concussion, even when no outward physical signs are present.
Science of Recovery
With all of this damage occurring and causing a cascade of further problems, how does the brain begin to repair itself? Under normal function, the neurons operate properly by maintaining a certain concentration of specific ions on either side of their cellular membrane. These ions are primarily composed of Sodium (Na+) and Potassium (K+). The most common state for neurons is having a higher concentration of these ions outside of the cell membrane, meaning that the inside has a more negative charge than the outside does. One can imagine a neuron at rest as being comparable to a battery, which has two ends of opposite charge, one end being positive (+) and the other negative (-). This is called polarization. The state of polarization provides potential for a flow between the two ends to produce enough charge or energy to send chemical messages across the axon.
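As a rough quantitative illustration of the resting ion gradients described above, the Nernst equation relates an ion's concentration difference across the membrane to the voltage that gradient can produce. The sketch below uses typical textbook concentrations for a resting mammalian neuron; these specific numbers are illustrative assumptions, not values taken from this overview.

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 310.0      # roughly body temperature in kelvin (about 37 degrees C)

def nernst_potential_mV(conc_outside_mM, conc_inside_mM, charge=+1):
    """Equilibrium voltage (millivolts) produced by one ion's concentration gradient."""
    return 1000.0 * (R * T) / (charge * F) * math.log(conc_outside_mM / conc_inside_mM)

# Typical textbook concentrations for a resting mammalian neuron (illustrative values only).
print(f"K+  equilibrium potential: {nernst_potential_mV(5.0, 140.0):+.0f} mV")    # about -89 mV
print(f"Na+ equilibrium potential: {nernst_potential_mV(145.0, 15.0):+.0f} mV")   # about +61 mV
```

Run as-is, it reports roughly -89 mV for potassium and +61 mV for sodium, which matches the description of an interior that sits at a negative charge relative to the outside at rest.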
In order for a neuron to send a message, since that is the job of the axons, it is necessary to excite the ions by removing the neuron from this resting state of polarization. This involves making the inside of the cell membrane more positively charged, which is accomplished by sodium-potassium pumps that move ions in and out of the cell membrane. For the ions to enter the axon, there are multiple ion channels within the axon that allow for this to occur.
During a concussion, however, these ion channels are not carefully maintained as they usually are, and they instead open to release the ions from within the axon. This forces the cell to kick into overdrive and try to restore the ion level back to normal, which can use a lot of the brain's already limited energy. This restorative process requires time and nutrients that the brain may have difficulty acquiring, due to so many pathways and functions being shut down.
Repeated Concussions and CTE
With enough time, function can be restored and all of this damage repaired. The lasting cognitive damage, however, comes with repeat injuries, as a second or third concussion would only worsen the injury and prevent restoration. These repeated concussions can accelerate the decline of cognitive function, and even bring on the onset of dementia or Alzheimer's in patients who are too young to be experiencing this just by natural aging of the brain.
While the obvious answer might be that an individual with a concussion should just rest to avoid further damage, a critical factor playing a role is the fact that so many concussions go undiagnosed. Without the obvious display of loss of consciousness, many young people or children may not realize that their head trauma was truly a concussion. Without any further injuries, this concussion would heal over time and return to normal. However, if they are unaware of the vulnerability of their injured brain, the individual may not take the necessary caution or give their brain the rest it needs to recover. This unawareness exposes them to potential accidents or injuries, and the severity of a second concussion could be far worse. Many young athletes are simply unaware of what exactly determines a concussion, so these brain injuries continue to go underdiagnosed. Athletes at risk may also be afraid to report a concussion in fear of losing their status or position on their team, and thus continue to put themselves in danger despite a concussion.
Awareness of the dangers of repeated concussions has increased in the last couple of decades, as medical issues have arisen surrounding boxers, football players, and other sports players who have experienced multiple concussions. The rise in awareness of a condition called chronic traumatic encephalopathy (CTE) has sparked further interest and research into the changes that the brain undergoes with multiple concussions. CTE can be difficult to diagnose because the symptoms manifest in behavioral mechanisms, often similar to the symptoms of other brain disorders such as Alzheimer's or Parkinson's.3 Symptoms affect emotions, memory, and mood, just to name a few of the primary reactions.
Due to the many neurobiological changes caused by concussions, the brain responds with what is called cerebral atrophy, which is also seen in Alzheimer's. This is, put simply, the actual shrinking of the brain. Along with this is the development of neurofibrillary tangles, which are formed by a protein called tau. Further research is still necessary to fully understand these tau tangles, because much remains unknown by scientists and researchers about the precise details regarding this formation. Despite the lack of detail, research does show that these neurofibrillary tangles are present in all cases of Alzheimer's and CTE.
As the two changes mentioned above take place (the brain shrinks and the tau tangles form), a correlated increase in aggression and oftentimes depression takes place as well. The research is incomplete as to exact causative agents behind this increase in aggression, but a correlation between the increase in tangles and aggression is evident.
The Challenge in Studying the Changes |
Punctuation—Hyphens and Dashes
These resources provide guidelines for using punctuation in your writing.
Hyphens (-) are used to connect two or more words (and numbers) into a single concept, especially for building adjectives. Likewise, some married women use hyphens to combine their maiden name with their spouse’s name:
- There are fewer Italian-American communities these days.
- The family’s money-saving measures have been helping them to build their savings.
- She has stopped buying 2-liter bottles and has started buying 0.5-liter bottles, instead.
- I had a conversation with Mrs. Skinner-Kcrycek this morning.
They are also a necessary component of the numbers 21 through 99:
- Before the exam, Tomas studied for thirty-three hours without sleep.
Although they can be used as substitutes for the word “to” when discussing value ranges and scores in games, it is better to use the word in formal writing situations than the punctuation:
- The high temperature will be 87-89 degrees.
Hyphens are also used in syllable breaks when words cannot fit completely on a line, and must be continued on the following line. With word processors and the ability to automatically move whole words, though, this has become less common:
- This opinion is based on sales figures for the past few months, and con-
- versations I have had with customers.
Dashes (—) can be used to indicate an interruption, particularly in transcribed speech:
The chemistry student began to say, “An organic solvent will only work with—” when her cell phone rang.
They can also be used as a substitute for “it is,” “they are,” or similar expressions. In this way they function like colons, but are not used for lists of multiple items, and are used less frequently in formal writing situations:
- There was only one person suited to the job—Mr. Lee.
They can also be used as substitutes for parentheses:
- Mr. Lee is suited to the job—he has more experience than everybody else in the department—but he has been having some difficulties at home recently, and would probably not be available.
Note that dashes are double the length of hyphens. When you type two hyphens together (--), most word processors automatically combine them into a single dash.
The Purdue OWL maintains a number of resources on punctuation you can visit to learn more. |
Sensory Art for Babies and Toddlers
The minds of babies and toddlers are always growing and sensory art is a particularly helpful way to let them explore! Sensory art for babies and toddlers is different from a standard art class in that the focus is on the experience of creating using all five senses, instead of the final product. Here is why sensory play such as finger painting, play dough, sand, mud, and water play is important for children’s development:
- Engaging sensory play uses all five senses and promotes integration: the ability of their brains to process all the information they receive via touch, smell, hearing, vision and occasionally taste.
- As children pour, dump, build, and scoop, they explore and learn about spatial concepts like full/empty, over/under, in/out.
- They learn pre-math concepts along with language and vocabulary for important cognitive development.
- Art is an essential component to learning that encourages exploration and discovery through creative play.
- Art stimulates both sides of the brain.
- 33% of children are visual learners.
- Art develops hand-eye coordination, stimulates perception, and encourages children to pay attention to the physical space around them.
- Art is a natural activity giving children the freedom to manipulate different materials in an organic and unstructured way that allows for exploration and experimentation.
- Children can share and reflect on their works of art with loved ones, which is wonderful for their self-esteem and for connecting with family and friends.
All of our sensory art classes are carefully designed with creative exploration in mind. Come join in the fun! |
Have you ever wondered what Leonardo da Vinci, Albert Einstein and Thomas Edison had in common? These great geniuses and inventors all had personalities that were filled with curiosity.
Curiosity is a quality related to inquisitive thinking such as exploration, investigation and learning. It is heavily associated with all aspects of human development, from which stems the process of learning and the desire to acquire knowledge and skills. Characteristics associated with curiosity include learning, memory and motivation.
Why is curiosity important for learning?
- Helps prepare us for learning and enables greater retention
Researchers have discovered that students are more likely to learn and retain information about a subject if their curiosity has been stimulated. Boykin (1981) found that asking students unusual and interesting questions before exposing them to the material enabled these students to retain it better, as their curiosity had first been stimulated by the questions asked.
When you’re curious about something, your brain absorbs all information presented around that topic. Pairing less interesting material with more interesting information will naturally assist students in retaining more of it.
- Helps reward learning
The drive to learn information or perform some action is often initiated by the anticipation of reward. In this way, the concepts of motivation and reward are naturally tied to the notion of curiosity. The idea of reward is defined as the positive reinforcement of an action that encourages a particular behavior by using the emotional sensations of relief, pleasure and satisfaction that correlate with happiness. Many areas in the brain are used to process reward and come together to form what is called the reward pathway. In this pathway, areas of the brain that are linked to the reward sensation are activated.
Dopamine is linked to the process of curiosity, as it is responsible for assigning and retaining reward values of information gained. Research suggests higher amounts of dopamine are released when the reward is unknown and the stimulus is unfamiliar than when the stimulus is familiar. Therefore, curiosity releases dopamine, which activates the sensation of happiness and reward.
5 ways to use curiosity in your teaching practice
- Emphasize questions, not answers
Questions are an excellent indicator of curiosity. Create an introductory lesson to a topic which allows learners to ask questions, and reward with points for questions based on quantity, quality, etc. The quality of a question not only reveals curiosity, but background knowledge, literacy level, confidence and student engagement.
- Personalise curiosity
Let students choose a topic for an essay, then refine that topic/theme until it’s authentic and personal to them. You could start with a general topic—climate change, for example—and then have each student refine that topic based on their unique background, interests, and curiosity until it’s truly personal and ‘real.’
- Allow for autonomy
Let students lead. Allow students to use a self-directed learning model and to take ownership of their learning. Curiosity is not sparked if learning is passive.
- Reward curiosity
Curiosity stimulates the feelings of rewards and gratification through intrinsic motivation. Therefore, offer students extrinsic rewards by incorporating gamification elements in your lessons.
- Design for curiosity
Don’t leave curiosity as a last-minute add-on to your lesson plan. Build it into the instructional design of your learning material. Build in time for learners to question and explore material on their own. Plan activities that spark curiosity.
What is Competency Education?
Why Competency Education
The time for competency education has come. It is vitally important for our country to move away from the restrictions of a time-based system. The reasons are many:
- To ensure that all students succeed in building college and career readiness, consistent with the Common Core of world class knowledge and skills;
- To take advantage of the extraordinary technological advances in online learning for personalization, allowing students to learn at their own pace, any time and everywhere;
- To provide greater flexibility for students who would otherwise not graduate from high school because they have to work or care for their families.
Many states are adjusting their state policies to allow for competency education innovations. Ohio’s Credit Flex policy requires districts and schools to provide different ways to earn high school credit. New Hampshire is implementing sweeping reforms to make all high school competency-based.
What is Competency Education?
“In a proficiency system, failure or poor performance may be part of the student’s learning curve, but it is not an outcome.”
and Assessment, Oregon
Competency education builds upon standards reforms, offering a new value proposition for our education system. Frequently, competency education is described as simply flexibility in awarding credit or defined as an alternative to the Carnegie unit. Yet, this does not capture the depth of the transformation of our education system from a time-based system to a learning-based system. Competency education also holds promise as districts explore new ways to expand and enrich support to students, challenging the assumption that learning takes place within the classroom. Competency-based approaches are being used at all ages from elementary school to graduate school level, focusing the attention of teachers, students, parents, and the broader community on students mastering measurable learning topics.
In 2011, 100 innovators in competency education came together for the first time. At that meeting, participants fine-tuned a working definition of high quality competency education:
- Students advance upon mastery.
- Competencies include explicit, measurable, transferable learning objectives that empower students.
- Assessment is meaningful and a positive learning experience for students.
- Students receive timely, differentiated support based on their individual learning needs.
- Learning outcomes emphasize competencies that include application and creation of knowledge, along with the development of important skills and dispositions.
Click here for an in depth look at the working definition.
A Note on Language
The issue of language is always a challenge when new concepts or paradigms are introduced. As you learn about competency education you will encounter multiple phrases used to capture the practice of students advancing upon mastery: standards-based, outcomes-based, performance-based, or proficiency-based. Federal policy is using the phrase competency-based in Race to the Top and other programs.
CompetencyWorks uses the phrase competency education. Why? It furthers the institutionalization of the concept by building on federal policy.
What we call it isn’t important. What is important is that we share a working definition that drives policy and practice towards a learner-centered system in which success is the only option. |
Aerial surveys are conducted to learn about manatee distribution, relative abundance, and use of habitat.
Aerial surveys are valuable for acquiring information on manatee distribution, relative abundance, and use of habitat types. The Florida Fish and Wildlife Conservation Commission (FWC) uses three types of surveys to assess manatee populations: distributional surveys, synoptic surveys, and power plant surveys.
AERIAL DISTRIBUTIONAL SURVEYS
Marine mammal biologists from FWC and other agencies use aerial distribution surveys to determine the seasonal distribution and relative abundance of manatees. Aerial surveys are sometimes flown to document the abundance of dolphins, right whales, and sea turtles.
Surveys are typically conducted in nearshore waters around the state. Flights are usually between four and six hours long and are most commonly flown every two weeks for two years. Most surveys are flown from small, four-seat, high-winged airplanes (Cessna 172 or 182) flying at a height of 150 m (500 ft) at a speed of 130 km/hr (80 mph). The flights are designed to maximize manatee counts by concentrating on shallow nearshore waters, where manatees and their primary food source, seagrasses, are located. Flight paths are parallel to the shoreline, and when manatees are sighted, the airplane circles until the researchers onboard are able to count the number of animals in each group. Scientists usually do not survey deeper waters. In urban areas or where waters are particularly opaque, some studies are made using small helicopters.
All aerial data are recorded on maps and entered into the Fish and Wildlife Research Institute's Marine Resources Geographic Information System (MRGIS) for spatial analysis. Survey data in the MRGIS are used as a primary source of data for management planning and decisions. The FWC Atlas of Marine Resources CD-ROM includes 31 data sets of manatee aerial distribution survey sightings, detailed aerial flight paths, and related coverages of bathymetry, shorelines, seagrasses, county boundaries, and aids to navigation.
Five other research groups are currently conducting manatee aerial distribution surveys in Florida:
- Jacksonville University surveys Duval County.
- Kennedy Space Center surveys upper Banana River.
- Dade County Department of Environmental Resource Management.
- Mote Marine Lab surveys Sarasota and Charlotte Counties.
- Chassahowitzka National Wildlife Refuge surveys the Crystal River and Big Bend areas.
The FWC's Imperiled Species Management and many other groups make frequent use of the data in the MRGIS system for making management decisions.
"Synoptic" means covering a large area. The synoptic surveys are winter aerial surveys that cover all of the manatees' known wintering habitats in Florida. FWC coordinates the interagency team conducting each synoptic survey.
These statewide interagency surveys are conducted after cold fronts pass through Florida, when the manatees gather at warm springs and thermal discharges from power plants and industrial plants. These surveys are useful in determining minimum estimates of manatee populations.
Manatees are counted during the coldest winter weather (December through March) because they congregate near known warm-water sites, such as natural springs, power plants, and deep canals, when temperatures drop. Counts are believed to be most accurate just after a cold front, when it is a bit warmer, clear, and windless, because manatees move to the surface to warm in the sun, making them more visible.
POWER PLANT SURVEYS
The waters around Tampa Bay area power plants were surveyed (1999-2003) for manatees each year from November through March. These surveys were flown near the Florida Power Corporation's Bartow plant and at the Tampa Electric Company's Big Bend and Port Sutton plants. Most of these counts were made to document manatee use of the thermal discharges in the waters near power plants. These surveys were also used as part of a manatee-boat interaction study to document manatee presence and boat use at certain areas.
Transect Aerial Surveys
In August 1997, as part of a long-term study to develop improved aerial survey techniques, FWC conducted transect aerial surveys in the Banana River, Brevard County. These counts will be used as part of a long-term assessment of population trends. A publication on the transect aerial surveys describes the benefits of transect methods for standardizing aerial survey counts and assessing population trends in wide, shallow bodies of water, like the Banana River (Miller et al. 1998).
Tampa Bay Power Plant Calibration Study
Researchers believe aerial surveys underestimate manatee populations, largely because some animals go undetected by observers. Design of most past aerial surveys focused on producing maximum counts rather than standardizing them. Since all manatees are not detected during surveys due to surface water conditions like turbidity and glare, some animals are not counted. Results of surveys that do not account for manatees not seen during the flight are not comparable over time or between locations. To improve surveys, scientists must develop means of accounting for manatees not detected during the survey, and counts must be adjusted to improve their accuracy.
In winter 1999-2003, FWC conducted aerial survey research at the TECO Big Bend Power Plant in Tampa Bay, Florida. The purpose of the study was to develop a mathematical model to formulate a correction factor to adjust winter counts of manatees at the TECO Big Bend power plants.
Between December and March 2003, researchers flew three sets of repeated aerial surveys over the TECO Big Bend Power Plant on the first, windless day following three different cold fronts. Aerial observers surveyed the plant to test the effectiveness of counting manatees during a 20-40 minute flight. One flight was flown in the morning (approximately 10:00 a.m.), and one was flown in the afternoon (approximately 2:00 p.m.) at an altitude of 500 ft-700 ft and speed of 70 kt.
Comparing the percentage of animals counted with the percentage of animals undetected by the observer provides data that are used to develop a correction factor, which can be applied to the initial count to adjust for animals missed during the survey. FWC staff members will apply knowledge gained from this study to obtaining better count estimates in locations throughout the state.
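As a simplified sketch of how such a correction factor might be applied once a detection probability has been estimated (the counts and the probability below are hypothetical illustrations, not FWC data, and the agency's actual model is more elaborate):

```python
def corrected_count(raw_count, detection_probability):
    """Scale a raw aerial count up to account for animals the observers likely missed."""
    if not 0 < detection_probability <= 1:
        raise ValueError("detection probability must be between 0 and 1")
    return raw_count / detection_probability

# Hypothetical numbers: one flight counts 150 manatees, and repeated surveys suggest
# observers detect about 60 percent of the animals actually present.
raw = 150
p_detect = 0.60
print(round(corrected_count(raw, p_detect)))   # -> 250 animals estimated to be present
```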
All state manatee research and management activities are funded only by sales of the "Save the Manatee" license tag, boat registration fees, and voluntary contributions. |
Stable quantum information could make possible solving problems in record time
Quantum computing has overcome an important barrier: Scientists have achieved nearly perfect control over a bit of quantum information in a way that could bring them a step closer to error-free calculations.
All digital information comes in tiny packets called bits. In consumer devices, bits are chunks of magnetic or electric material that flip between two distinct states. But thanks to quantum weirdness, certain minuscule objects called quantum bits, or qubits, can exist in two states at once. Physicists have connected multiple qubits with each other to share one overall “entangled” state. Using entanglement, rudimentary quantum computers can run multiple calculations at once and solve simple problems like factoring 15 into 3 and 5 (SN: 3/10/12, p. 26). Because each additional qubit doubles a device’s processing power, future quantum computers should complete tasks far more rapidly than conventional machines do.
But quantum computing has a downside: Quantum states are easily shattered, especially as the number of entangled qubits increases. John Martinis, a physicist at the University of California, Santa Barbara, compares a classical bit to a coin resting flat on a table: The coin won’t flip unless the table gets a really hard shake. A qubit, by contrast, is like a coin standing on edge — the slightest jiggle topples it. Theorists in the 1990s suggested that qubits arranged in a checkerboard could overcome this fragility by monitoring and correcting errors in their neighbors, creating communal stability. Even in this scheme, however, individual qubits’ states would need to come out correctly after at least 99 out of 100 state-changing computations; otherwise, errors would multiply throughout the grid. No device containing more than three qubits has yet achieved this 99 percent stability threshold for each qubit.
Seeking to make such an unflappable qubit, Martinis and colleagues report in the April 24 Nature that they built tiny electrical circuits, each roughly the size of a grain of sand, from superconducting aluminum wire and ultrathin barriers of aluminum oxide. When cooled to 30 thousandths of a degree Celsius above absolute zero, electrons slosh back and forth, or resonate, around the circuits without encountering resistance. Information can be encoded in this resonance to make a qubit.
Building on the checkerboard grid idea, Martinis and colleagues lined up five of their qubits and electrically linked each to its nearest neighbors. The researchers then etched larger circuits that allowed them to change individual qubits’ states with tiny pulses of electricity. Using these pulses, the scientists found they could control one qubit’s state more than 99.9 percent of the time. For two entangled neighboring qubits, the fidelity dropped to 99.4 percent, still above the 99 percent threshold. When they entangled all five at once, the researchers could control the qubits’ state 81.7 percent of the time.
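As a back-of-the-envelope illustration of why the 99 percent figure matters, one can treat each state-changing operation as succeeding independently with a fixed fidelity and watch how quickly errors compound. This is a deliberate simplification of the error-correction threshold idea, not the researchers' actual error model:

```python
def survival_probability(fidelity, n_operations):
    """Chance that every one of n operations succeeds, assuming independent errors."""
    return fidelity ** n_operations

for fidelity in (0.999, 0.994, 0.99, 0.90):
    p = survival_probability(fidelity, 100)
    print(f"per-operation fidelity {fidelity}: chance of 100 clean operations = {p:.1%}")

# per-operation fidelity 0.999: chance of 100 clean operations = 90.5%
# per-operation fidelity 0.994: chance of 100 clean operations = 54.8%
# per-operation fidelity 0.99:  chance of 100 clean operations = 36.6%
# per-operation fidelity 0.9:   chance of 100 clean operations = 0.0%
```

The steep drop between 99.9 percent and 99 percent per-operation fidelity is, in this simplified picture, why crossing the threshold with every qubit matters so much for a large grid.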
Achieving such precise control in a system with so many qubits is “a great milestone for quantum information processing,” says physicist Raymond Laflamme of the University of Waterloo in Canada.
“It’s quite a spectacular achievement,” agrees Simon Devitt, a theoretical physicist at Ochanomizu University in Tokyo. He says the result provides a clear path to a quantum computer: “Once you satisfy the error correction requirements, then the rest is engineering.”
Yale University’s Robert Schoelkopf, who invented the sloshing-electron qubit that Martinis’ team used, says the team has made “a significant advance.” But he says a practical quantum computer would require even stabler qubits.
Editor's Note: This story was updated May 6, 2014, to correct the approximate size of the qubit circuits and to correct the temperature those circuits were cooled to. |
Malignant cells can also break away from the original (primary) tumor and spread through the blood or lymphatic system to form a new tumor in another part of the body. The spread of cancer to other parts of the body is called metastasis. Cancers that have metastasized are more likely to cause pain, as this represents a more advanced disease.
Signs and Symptoms of Cancer Pain
There are three main types of pain seen in cancer: somatic, visceral and neuropathic cancer pain. Somatic pain is pain felt by the pain receptors of the body and is often described as aching, dull, sharp, or throbbing. Examples are pain in an incision after surgery, or from a cancer that has spread to the bones.
Visceral pain is caused by tissues or organs in the abdomen that are being stretched or enlarged by cancer. The pain is commonly described as deep, squeezing or as a feeling of pressure. The pain might also be referred to other areas. For example, pain from gallbladder disease is often felt in the right shoulder.
Neuropathic pain is caused when nerves are damaged by cancer or cancer treatments. This pain is usually described as burning, shooting or stabbing. |
Course in Logic 101
Welcome to Logic 101
Everyone has an opinion, and in a democratic country everyone also has an equal right to state an opinion, but not everyone's opinion is of equal value. This page exists to spread this truth - some opinions are worth more than others*. And if you want your opinion to be the rational opinion, you must learn how to argue your points using the tenets of Logic.
- Douglas Adams called this “opinion inequality”
The course in Logic 101 now proceeds to the opening page on Logic. Students ought to begin by reviewing this page.
Here is how the course proceeds. Students may also take note that proceeding in this fashion will also allow them to have a basic grasp of the history of logic.
This section presents the axioms of Classical Logic. Common myths about these axioms are also explored. Students will come to learn that these axioms only apply to certain logics.
The two concepts are explained and differentiated.
It's one thing to know how to argue; it's quite another to know when to argue, and when to remain silent.
This section defines the term argument.
What is deduction? What is Induction? What are their limits?
In this section, the various ways to assess Deductive and Inductive arguments are explored.
This section briefly explores the two main ways in which an argument can go wrong - i.e. problems with the form of an argument, and problems with the premises in an argument.
This is both the longest section of the site, and most likely the most interesting. It presents a nearly exhaustive list of the most common informal fallacies.
In order to learn Classical Logic one must first learn about the premises used in Classical arguments: Categorical Propositions.
The traditional Square of Opposition is a diagram specifying logical relations among the four types of categorical propositions described in the preceding section.
It was eventually discovered that the Traditional Square of Opposition required a correction. This is it.
Categorical Syllogisms are explained. Once a student makes it to this point, they can claim to grasp the basics of Classical Logic.
However, there are other types of syllogisms. These are explored in this section.
Students will come to learn about the limits of Classical Logic. Propositional Logic represents one attempt to overcome these obstacles. Propositional Logic deals with using symbols to represent logical arguments.
Truth tables provide a useful method of assessing the validity or invalidity of the form of any argument. Once a student grasps Propositional Logic, they can begin to use Truth Tables.
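As a small illustration of how a truth-table check can be mechanized (the helper names below are illustrative, not part of the course), the sketch enumerates every assignment of truth values and looks for a row where all premises are true but the conclusion is false:

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion, n_vars):
    """True if every row that makes all premises true also makes the conclusion true."""
    for row in product([True, False], repeat=n_vars):
        if all(premise(*row) for premise in premises) and not conclusion(*row):
            return False  # counterexample found: the argument form is invalid
    return True

# Modus ponens: P -> Q, P, therefore Q  (a valid form)
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q, 2))     # True

# Affirming the consequent: P -> Q, Q, therefore P  (a formal fallacy)
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p, 2))     # False
```

Modus ponens comes back valid, while affirming the consequent, one of the formal fallacies reviewed below, is exposed by a counterexample row.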
Formal Fallacies: A review of the valid and invalid forms of Propositional Logic.
Students will then come to see the limits of Propositional logic! Predicate logic helps overcome the shortcomings of both Propositional Logic and the problems found in the Traditional Square of Opposition.
There is more to logic than trading in tautologies. Inductive logic allows us to make statements about the real world. Included is a discussion of a possible basis of inductive logic: Bayesian Theory.
This section explores the nature of emotional appeals, and illustrates how they differ from logical arguments.
For future consideration:
Common argument forms |
The European Honeybee (Apis Mellifera)
The First 21 Days of a Bee's Life
This was a great TED Talk video, revealing the life-cycle of our Domestic Honeybee (Apis Mellifera) as witnessed by Anand Varma while filming for National Geographic.
Anand Varma's 60 second video for National Geographic is here:
Why are the Honeybees Disappearing?
There are several factors contributing to the bee deaths and the Colony Collapse Disorder (CCD) we are witnessing.
- American Foul Brood - A bacterial growth of spores that kills larvae, produces a rotting smell, and transmits to other hives easily.
- Colony Collapse Disorder (CCD) - Sudden loss of a colony's worker bee population, with few dead bees found near the colony, is the main sign of this disorder. The direct cause is unknown.
- Varroa Mites - These mites feed on the body fluids of bees they inhabit, spread quickly, and can be difficult to detect.
- Tracheal Mites - These mites infest the breathing tubes of adult bees, including the large tracheae near the base of the bees' wings, and make it more difficult for the bees to keep warm in winter.
- Wax Moths - The moths attack weak colonies by chewing through combs of a hive. Larvae tunnel into combs and destroy the colony.
- Pesticides (particularly, Neonicotinoids) - Commercial pesticides are strictly regulated, but ones used by suburban homeowners are not. Bees land on plants which have been sprayed with pesticides and poison themselves, along with their colonies.
- Natural Pathogens (Bacteria/Molds/Fungus) |
Wisconsin engineers ready a blueprint for a nanomechanical computer
Aug. 3, 2007
If efforts now under way by a team of University of Wisconsin-Madison engineers pan out, the age of the nanomechanical computer may be at hand.
Instead of relying on solid-state transistors and other electronic components to compute ones and zeroes, such a machine would depend purely on moving parts - gates and pillars and levers and pistons - to create switches, logic gates and memory units, the building blocks of digital computers.
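To make concrete what switches, logic gates, and memory units amount to in computational terms, the sketch below composes the standard gates and a one-bit memory element from a single NAND primitive. It illustrates ordinary Boolean logic only; it is not a model of Blick's mechanical components:

```python
# A single universal gate (NAND) is enough to build the rest of Boolean logic.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

# A one-bit memory element, described behaviourally: "set" stores 1, "reset" stores 0,
# and with neither asserted the previously stored value is held.
def memory_bit(set_, reset, previous):
    if set_ and not reset:
        return True
    if reset and not set_:
        return False
    return previous

# A half adder, the arithmetic kernel of any calculator, mechanical or electronic.
def half_adder(a, b):
    return xor(a, b), and_(a, b)   # (sum, carry)

print(half_adder(True, True))      # -> (False, True), i.e. 1 + 1 = binary 10
```

Whether the underlying switch is a transistor or a moving nanoscale part, these are the kinds of compositions a working circuit would have to realize.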
"The aim is to have a new type of device for computing applications," says Robert Blick, a UW-Madison professor of electrical and computer engineering and senior author of a paper in the July 24 New Journal of Physics that outlines a plan for making a computer based on microscopic moving parts.
Conventional devices use electrons that travel in circuits to perform the calculations that drive the functions of computer chips. A nanomechanical computer would also depend on electrons, but instead of the solid state electronic components used in conventional computers to channel them into working circuits, the nanomechanical device would rely on the push and pull of millions of microscopic parts to control the flow of electrons.
Inspiration for the Wisconsin effort resides in the purely mechanical computers of the past. The most famous is the "difference engine" produced by 19th century English mathematician Charles Babbage. Hand-held mechanical calculators, Blick notes, were developed in the 1950s and were sold as recently as the early 1970s.
Computer chips based on nanomechanical parts are not likely to compete with conventional electronic devices, Blick says, but they would have key advantages that could lead to hybrid chips or specialized roles for all-mechanical nanodevices.
For example, nanomechanical chips promise to be more rugged and durable than conventional silicon chips, making them potentially useful for extreme environments such as space, car engines, battlefields and children's toys. What's more, they would require less power to operate and could perform at much higher temperatures - up to 500 degrees Celsius - obviating the need for the energy-eating cooling systems required for electronic, silicon-based computers.
It's estimated that between 15 and 20 percent of total energy use in the United States is devoted to operating and cooling computers. "That's one of our motivations. That's why we have this dream to attack the problem at the root," Blick says.
More energy-efficient chips would also have potential for portable computers, as battery power tends to be the limiting factor for laptops.
Blick's group has already made a working silicon model of a mechanical transistor, the basic switch at the heart of all computers, and is now in the process of trying to align several elements into a working circuit.
"We've tested these single devices and we've shown that a single element works," says Blick. "The next step is to demonstrate memory. We're starting with the basics of information engineering."
The components of a nanomechanical computer, according to Blick, would likely be made of materials other than silicon. Ultra-hard diamond film is one possible material, as it can be chemically treated and is amenable to the methods used to mass-produce integrated circuits.
An important consideration, Blick explains, is developing a system that can achieve industrial-scale production and uses existing industrial lithographic techniques.
"We have some idea of how to mass-fabricate (these devices) in the clean room," Blick says. "We think it might be four years to having a product." |
Presentation transcript: "Terms Used to Describe Direction and Surface"
Terms Used to Describe Direction and Surface
Ventral: Refers to the belly or underside of a body or body part
Dorsal: Refers to the back – also refers to the cranial surface of the manus (front of paw) and pes (rear of paw)
Cranial: Front of the body
Posterior: Rear of the body
Rostral: Nose end of the head
Cephalic: Pertaining to the head
Caudal: Toward the tail
Medial: Toward the midline
Lateral: Away from the midline
Superior – uppermost, above, or toward the head
Inferior – lowermost, below, or toward the tail
Proximal – nearest the midline or nearest to the beginning of a structure
Distal – farthest from the midline or farthest from the beginning of the structure
Superficial (also called external) – near the surface
Deep (also called internal) – away from the surface
Palmar – the caudal surface of the manus (front paw), including the carpus
Plantar – the caudal surface of the pes (rear paw), including the tarsus
Planes – imaginary lines that are used descriptively to divide the body into sections
Midsagittal (median, midline) – divides the body into equal right and left halves
Sagittal – divides the body into unequal right and left parts
Dorsal (frontal, coronal) – divides the body into dorsal (back) and ventral (belly) parts
Transverse (horizontal or cross-sectional plane) – divides the body into cranial and caudal parts
Planes are imaginary lines that are used descriptively to divide the body into sections. Midsagittal: the plane that divides the body into equal right and left halves. * median * midline Sagittal: the plane that divides the body into unequal right and left parts
Dorsal: the plane that divides the body into dorsal (back) and ventral (belly) parts * frontal * coronal Transverse: the plane that divides the body into cranial and caudal parts * horizontal plane * cross-sectional plane
The terms anterior, posterior, superior and inferior can be confusing when used with quadrupeds. In quadrupeds, ventral is a better term than anterior, and dorsal is a better term than posterior.
Study.... -ology: study of
physiology: the study of body function
pathology: the study of the nature, causes and development of abnormal conditions
pathophysiology: the study of changes in function caused by disease
etiology: the study of the cause of disease
The Mouth
Arcade – describes how teeth are arranged in the mouth
Lingual surface – aspect of the tooth that faces the tongue
Maxilla – upper jaw
Mandible – lower jaw
Palatal surface – tooth surface of the maxilla that faces the tongue
Lingual surface – tooth surface of the mandible that faces the tongue
Buccal surface (vestibular surface) – aspect of the tooth that faces the cheek
Occlusal surface – the aspects of the teeth that meet when you chew
Labial surface – the tooth surface facing the lips
Contact surface – the aspects of the tooth that touch other teeth
Mesial contact – the contact surface closest to the midline of the dental arcade or arch
Distal contact – the contact surface farthest from the midline of the dental arcade
The dental arcade is the term used to describe how teeth are arranged in the mouth. Teeth Surfaces The lingual surface is the aspect of the tooth that faces the tongue. The palatal surface is the tooth surface of the maxilla that faces the tongue, and the lingual surface is the tooth surface of the mandible that faces the tongue.
The buccal surface is the aspect of the tooth that faces the cheek (Bucca means cheek). * sometimes called the vestibular surface (Vestibule means cavity or entrance)
The occlusal surfaces are the aspects of the teeth that meet when you chew. Hint: think of the teeth occluding, or stopping, things from passing between them when you clench them. The labial surface is the tooth surface facing the lips. (labia means lip) Contact surfaces are divided into * mesial : the one closest to the midline of the dental arcade * distal: furthest from the midline of the dental arcade
HOLES = CAVITIES A body cavity is a hole or hollow space in the body that contains and protects organs. The cranial (crani = skull) cavity is the hollow space within the skull that contains the brain. The spinal cavity is the hollow space that contains the spinal cord within the spinal column. The thoracic cavity (thorac = chest) is the hollow space between the neck and the diaphragm, bounded by the ribs, that contains the heart and lungs.
The abdominal cavity is the hollow space that contains the major organs of digestion located between the diaphragm and pelvic cavity. The peritoneal cavity is the hollow space within the abdominal cavity between the parietal peritoneum and the visceral peritoneum. The pelvic cavity is the hollow space, bounded by the pelvic bones, that contains the reproductive organs and some organs of the excretory system.
TERMS YOU NEED TO KNOW... REGIONS Abdomen – the portion of the body between the thorax and the pelvis containing the abdominal cavity. Thorax – is the chest region located between the neck and the diaphragm. Groin – the lower region of the abdomen adjacent to the thigh (also known as inguinal area)
MEMBRANES... Membranes – are thin layers of tissue that cover a surface, line a cavity or divide a space or an organ. Peritoneum – the membrane lining the walls of the abdominal and pelvic cavities and covering some of the organs in this area. (the peritoneum may be further divided in reference to its location) * parietal (side) peritoneum – outer layer of the peritoneum that lines the abdominal and pelvic cavities * visceral (organ) peritoneum – the inner layer of the peritoneum that surrounds the abdominal organs.
Peritonitis – inflammation of the peritoneum ABDOMEN... Umbilicus (navel) – the pit in the abdominal wall marking the point where the umbilical cord entered the fetus. Mesentery – the layer of the peritoneum that suspends parts of the intestine in the abdominal cavity. Retroperitoneal – superficial to the peritoneum.
LYING AROUND... Recumbent – lying down Dorsal recumbency – lying on the back – also known as supine Ventral recumbency (sternal recumbency) – lying on the belly – also known as prone Left lateral recumbency – lying on the left side Right lateral recumbency – lying on the right side
(Slide figures: the recumbency positions illustrated – dorsal/supine, sternal/ventral, right lateral, left lateral.)
MOVING ALONG... Adduction – movement toward the midline Abduction – movement away from the midline Flexion – closure of a joint angle or reduction of the angle. Extension – straightening of a joint or an increase in the angle between two bones * hyperflexion and hyperextension occur when the joint is flexed or extended too far.
CELLS... Cyte = cell Ology = study of * cytology = involves studying cell origin, structure, function and pathology Prot = first Plasm = formative material of cells * protoplasm = the cell membrane, cytoplasm, and nucleus
GENES... Genetic – term used to denote something that pertains to genes or heredity. Genetic Disorder – any inherited disease or condition caused by defective genes Congenital – denotes something that is present at birth Anomaly – deviation from what is regarded as normal (used instead of defect)
Tissue... Hist/o = tissue Ology = study of * histology = the study of structure, composition and function of tissue Tissue – a group of specialized cells that are similar in structure and function
Four types of tissue: 1. epithelial (epithelium) – covers internal and external body surfaces and is made up of tightly packed cells a. Endothelium – lining of the internal organs b. Mesothelium – covering that forms the lining of serous membranes such as the peritoneum
2. Connective - adds support and structure to the body by holding the organs in place and binding body parts together Examples: bone, cartilage, tendons, ligaments a. Adipose – fat (connective) 3. Muscle – contains cell material with the specialized ability to contract and relax a. Skeletal b. Smooth c. cardiac
4. Nervous - contains cells with the specialized ability to react to stimuli and conduct electrical impulses -plasia = formation, development, growth and cell numbers -Trophy = formation, development, and increase in size of tissue and cells
Anaplasia – a change in the structure of cells and their orientation to each other Aplasia – lack of development of an organ or tissue or a cell Dysplasia – abnormal growth or development of an organ or a tissue or a cell. Hyperplasia – abnormal increase in the number of normal cells in normal arrangement in an organ or a tissue or a cell Hypoplasia – incomplete or less than normal development of an organ or a tissue or a cell.
Neoplasm – any abnormal new growth of tissue in which multiplication of cells is uncontrolled, more rapid than normal, and progressive * usually form a distinct mass of tissue called a tumor * benign – not cancerous or not recurring * malignant – tending to spread and be life threatening (cancerous) -oma = tumor or neoplasm
Atrophy – decrease in size or complete wasting of an organ or tissue or cell
Dystrophy – defective growth in the size of an organ or tissue or cell
Hypertrophy – increase in the size of an organ or tissue or cell
Reminder: a- = without; dys- = bad; hypo- = less than normal; hyper- = more than normal; ana- = without; neo- = new
5.Glands: groups of specialized cells that secrete material used elsewhere in the body Aden/o = gland Exocrine gland: groups of cells that secrete their chemical substances into ducts that lead out of the body or to another organ. (sweat glands, sebaceous glands) Endocrine gland: groups of cells that secrete their chemical substances directly into the bloodstream, which transports them throughout the body. They are ductless (thyroid glands, pituitary and the portion of the pancreas that secretes insulin).
6.Organ: part of the body that performs a special function or functions.
NUMBERS Medical terms can be further modified by the use of prefixes to assign number value, numerical order, or proportions.
Topics covered: Work, heat, first law
Instructor/speaker: Moungi Bawendi, Keith Nelson
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional material from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: So last time we talked about the zeroth law, which is the common-sense law, which says that if you take a hot object next to a cold object, heat will flow from the hot to the cold in a way that is well defined, and it allows you to define temperature. It allows you to define the concept of a thermometer. You have three objects, one of them could be a thermometer. You have two of them separated at a distance. You take the third one, and you go from one to the other, and you see whether heat flows, when you touch one object, the middle object, between those two objects.
Let me talk to you about temperature scales. We talked about the Celsius scale then the Fahrenheit scale. The late 1800's were a booming time for temperature scales. People didn't really realize how important it was to properly define the reference points: Fahrenheit's warm-blooded man at 96 degrees, and Romer's 7.5 degrees, Romer because he didn't want to go below zero degrees measuring temperature outside in Denmark. Those are kind of silly. But they're the legacy that we have today, and that's what we use.
In science, we use somewhat better temperature scales. And the temperature scale that turns out to be well-defined and ends up giving us the concept of an absolute zero is the ideal gas thermometer. So, let's talk about that briefly today first.
The ideal gas thermometer. It's based on Boyle's law. Boyle's law was an empirical law that Mr. Boyle discovered by doing lots of experiments, and Boyle's law says that the limit of the quantity pressure times the molar volume, so this quantity here, pressure times the molar volume, as you let pressure go to zero. So, you do this measurement, you measure with the gas, you measure the pressure and the molar volume. Then you change the pressure again, and you measure the pressure and the volume, and you multiply these two together, and you keep doing this experiment, getting the pressure smaller and smaller, and you find that this limit turns out to be a constant, independent of the gas. It doesn't care what the gas is. You always get to the same constant. And that constant turns out to be a function of the temperature. The only thing it depends on -- it doesn't care what the gas is -- is the temperature.
All right, so now we have the makings of a good thermometer and a good temperature scale. We have a substance. The substance could be any gas. That's pretty straightforward. So now we have a substance, which is a gas, with a property. Instead of the volume of mercury, or the color of something which changes with temperature, or the resistivity, in this case here our property is the value of the pressure times the molar volume. That's the property. The property is the limit as p goes to zero of pressure times molar volume. It's a number. Measure it. It's a number. It's going to come out. That's the property that's going to give us the change in temperature.
Then we need some reference points. And Celsius first used the boiling point of water, and called that 100 degrees Celsius, and the freezing point of water and called that zero degrees Celsius. And then we need an interpolation scale. How to go from one reference point to the other with this property. This property, which we're going to call f(t).
There are many ways you can connect those two dots. If I draw a graph, and on one axis I have this temperature. The idea of temperature with two reference points, zero for the freezing point of water, 100 degrees for the boiling point of water. And on the y-axis I've got the property f(t). It has some value corresponding to t equals zero. So let's get some value right here. There's another value connected to this property here, when t is equal to 100, a reference point here.
Now there many ways I can connect these two points together. The simplest way is to draw a straight line. It's called the linear interpolation. My line is not so straight, right here. You could do a different kind of line. You could do a quadratic, let's say. Something like this. That would be perfectly fine interpolation. All right, we choose to have a linear interpolation. That's a choice, and that choice turns out to be very interesting and really important, because if you connect these two points together, you get a straight line that has to intercept the x-axis at some point.
Now what does it mean to intercept the x-axis here? It means that the value of f(t) for this temperature is zero. That means that at this point right here, f(t)=0. That means the pressure times the volume equals zero, for that gas. And if you're below this temperature here, this quantity, p times v it would be negative. Is that possible? Can we have p v negative? Yes? No, it can't be. Negative pressure doesn't make any sense, right? Negative volume doesn't make any sense. That means that this part here, can't happen. That means that this temperature right here is the absolute lowest temperature you can go to that physically makes any sense. That's the absolute zero. So the concept of an absolute zero, a temperature below which you just can't go, that's directly out of the scheme here, this linear interpolation scheme with these two reference points. If I had taken as my interpolation scheme, my white curve here, I could go to infinity and have the equivalent of absolute zero being at infinity, minus infinity.
So, this temperature, this absolute zero here, which is absolute zero on the Kelvin scale. The lowest possible temperature in the Celsius scale is minus 273.15 degrees Celsius. So that begs the notion of re-referencing our reference point, of changing our reference points. To change a reference point from this point here being zero, instead of this point here being zero. And so redefining then the temperature scale to the Kelvin scale, where t in degrees Kelvin is equal to t in degree Celsius, plus 273.15. And then you would get the Kelvin scale.
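As a quick sketch of the rescaling just described (the function names are purely illustrative, not something from the lecture):

# Convert between the Celsius and Kelvin scales: T_K = T_C + 273.15.
def celsius_to_kelvin(t_celsius):
    return t_celsius + 273.15

def kelvin_to_celsius(t_kelvin):
    return t_kelvin - 273.15

print(celsius_to_kelvin(-273.15))  # 0.0, absolute zero
print(celsius_to_kelvin(0.01))     # 273.16, the triple point of water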
All right, it turned out that this thermometer here wasn't quite perfect either. Just like Fahrenheit measuring 96 degrees being a warm-blooded, healthy man, right, that's not very accurate. Our temperature probably fluctuates during the day a little bit anyways, it's not very accurate. And similarly, the boiling point, defining that at a 100 degrees Celsius, well that depends on the pressure. It depends whether you're in Denver or you're in Boston. Water boils at different temperatures, depending on what the atmospheric pressure is; same thing for the freezing point.
So that means, then, you've got to define the pressure pretty well. You've got to know where the pressure is. It would be much better if you had a reference point that didn't care where the pressure was. Just like our substance doesn't care where the gas is. It's kind of universal. And so now, instead of using these reference points for the Kelvin scale, we use the absolute zero, which isn't going to care what the pressure is. It's the lowest number you can go to. And our other reference point is the triple point of water -- reference points become zero Kelvin, absolute zero, and the triple point. The triple point of water is going to be defined as 273.16 degrees Kelvin. And the triple point of water is that temperature and pressure -- there's a unique temperature and pressure where water exists in equilibrium between the liquid phase, the vapor phase, and the solid phase. So the triple point is liquid, solid, gas, all in equilibrium.
Now you may think, well, I've seen that before. You take a glass of ice water and set it down. There's the water phase, there's the ice cube, which is the solid phase, and there's some water gas, vapor, and that's at one bar. Where am I going wrong here? The partial pressure of the water, of gaseous water, above that equilibrium of ice and water is not one bar, it's much less.
So the partial pressure or the pressure by which you have this triple point, happens to be 6.1 times 10 to the minus 3 bar. There's hardly any vapor pressure above your ice water glass. So this unique temperature and unique pressure defines a triple point everywhere, and that's a great reference point.
Any questions? Great. So now we have this ideal gas thermometer, and out of this ideal gas thermometer, also comes out the ideal gas law. Because we can take our interpolation here, our linear interpolation, the slope of this line. Let's draw it in degrees Kelvin, instead of in degrees Celsius. So we have now temperature in degrees Kelvin. We have the quantity f(t) here. We have an interpolation scheme between zero and 273.16 with two values for this quantity, and we have a linear interpolation that defines our temperature scale, our Kelvin temperature scale.
And so the slope of this thing is f(t) at the triple point, which is this point here, this is the temperature of the triple point of water, divided by 273.16. That's the slope of that line. The quantity here, which is f (t of the triple point), divided by the value of the x-axis here. So that's the slope, and the intercept is zero, so the function f(t), you just multiply by t here. This is the slope. f(t) is just the limit. As p goes to zero of p times v bar. And so now we have this quantity, p times v bar, and the limit of p goes to zero is equal to a constant times the temperature.
That's a universal statement. It's true of every gas. I didn't say this is only true of hydrogen or nitrogen. This is any gas, because I'm taking this limit p equals to zero. Now this constant is just a constant. I'm going to call it r. It's going to be the gas constant, and now I have r times t is equal to the limit, p goes to zero, of p v bar. It's true for any gas, and if I remove this limit here, r t is equal to p v bar, I'm going to call that an ideal gas.
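Restating that chain of reasoning compactly in standard notation (my summary, not part of the transcript):

\[
f(T) \;=\; \lim_{p \to 0} p\bar{V} \;=\; \frac{f(T_{\mathrm{tp}})}{273.16\ \mathrm{K}}\, T \;\equiv\; R\,T ,
\]

and for an ideal gas, where the relation holds at all pressures, \( p\bar{V} = RT \).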
See, this is the property of an ideal gas. What does it mean, ideal gas? It means that the molecules or the atoms and the gas don't know about each other.
They effectively have no volume. They have no interactions with each other. They occupy the same volume in space. They don't care that there are other atoms and molecules around. So that's basically what you do when you take p goes to zero. You make the volume infinitely large, the density of the gas infinitely small. The atoms or molecules in the gas don't know that there are other atoms and molecules in the gas, and then you end up with this universal property. All right, so gases that have this universal property, even when the pressure is not zero, those are the ideal gases. And for the sake of this class, we're going to consider most gases to be ideal gases. Questions?
So now, this equation here relates three state functions together: the pressure, the volume, and the temperature. Now, if you remember, we said that if you had a substance, if you knew the number of moles and two properties, you knew everything about the gas. Which means that you can re-write this in the form, volume, for instance, is equal to a function of n, p, t. In this case, V = (nRT)/P. Having two quantities and the number of moles gives you the other property. You don't need to know the volume. All you need to know is the pressure and temperature and the number of moles to get the volume. This is called an equation of state. It relates state properties to each other. In this case it relates the volume to the pressure and the temperature.
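As a worked example of using this equation of state (the numbers are illustrative, not values from the lecture):

# Ideal-gas equation of state: V = n*R*T / p
R = 8.314  # gas constant, J / (mol K)

def ideal_gas_volume(n_moles, pressure_pa, temperature_k):
    # Return the volume, in cubic metres, occupied by an ideal gas.
    return n_moles * R * temperature_k / pressure_pa

# One mole at 298.15 K and 1 bar (100,000 Pa):
print(ideal_gas_volume(1.0, 1.0e5, 298.15))  # ~0.0248 m^3, about 24.8 litres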
Now, if you're an engineer, and you use the ideal gas law to design a chemical plant or a boiler or an electrical plant, you know, a steam plant, you're going to be in big trouble. Your plant is going to blow up, because the ideal gas law works only in very small range of pressures and temperatures for most gases.
So, we have other equations of state for real gases. This is an equation of state for an ideal gas. For real gases, there's a whole bunch of equations of state that you can find in textbooks, and I'm just going to go through a few of them.
The first one uses something called a compressibility factor, z. Compressibility factor, z. And instead of writing PV = RT, which would be the ideal gas law, we put a fudge factor in there. And the fudge factor is called z. Now we can put real instead of ideal for our volume. z is the compressibility factor, and z is the ratio of the volume of the real gas divided by what it would be were it an ideal gas. So, if z is less than 1, then the real gas is more compact than the ideal gas. It's a smaller volume. If z is greater than 1, then the atoms and molecules in the real gas are repelling each other and want to have a bigger volume.
And you can find these compressibility factors in tables. If you want to know the compressibility factor for water, for steam, at a certain pressure and temperature, you go to a table and you find it. So that's one example of a real equation of state. Not a very useful one for our purposes in this class here. Another one is the virial expansion. It's a little bit more useful. What you do is you take that fudge factor, and you expand it out into a Taylor series. So, we have p v real over r t is equal to z. Now, we're going to take z and say, all right, under most conditions it's pretty close to 1, when it's close to an ideal gas. And then we have to add corrections to that, and the corrections are going to be more important the smaller the molar volume is. Remember, the ideal behavior is the limit as p goes to zero of p times v, so if you have a small volume with a large pressure, then you're out of the ideal gas regime.
So let's take a Taylor series in one over the volume: there's a term in one over the volume, one over the volume squared, etcetera. And these factors on top, which are going to depend on the temperature, are the virial coefficients, and those depend on the substance. So you have this B(t) here. This is called the second virial coefficient. And then you can actually find a graph of this B(t). It's going to look something like this. It's a function of temperature, B(t). There's going to be some temperature where B(t) is equal to zero. In that case, your gas is going to look awfully like an ideal gas. Above that temperature it's going to be positive, below that temperature it's going to be negative.
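Written out as an equation, the expansion being described is usually given as (a standard form, truncated where the lecture truncates it):

\[
z \;=\; \frac{p\bar{V}}{RT} \;=\; 1 + \frac{B(T)}{\bar{V}} + \frac{C(T)}{\bar{V}^{2}} + \cdots
\]

with B(T) the second virial coefficient; at the temperature where B(T) = 0, the gas behaves very nearly ideally.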
Generally, we ignore the high order terms here. So again, if you do a calculation where you're close enough to the ideal gas, and you need to design your, if you have an engineer designing something that's got a bunch of gases around, this is a useful thing to use. Now, the most interesting one for our class, the equation of state that's the most interesting, is the Van der Waals equation of state, developed by Mr. Van der Waals in 1873. And the beauty of that equation of state is that it only relies on two parameters.
So let's build it up. Let's see where it comes from. Let me just first write it down, the Van der Waals equation of state. p plus a over v bar squared times v bar minus b equals r t. All right, if you take a equal to zero, these are the two parameters, a and b. If you take those two equal to zero, you have p v is equal to r t. That's the ideal gas law.
Let's build this up. Let's see where this comes from, where these parameters a and b come from. So, the first thing we're going to do is we're going to take our gas in our box, let's build a box full of gas here. We've got a bunch of gas molecules or atoms. OK, there's the volume of a box here. Well, these gas molecules or atoms, to a first approximation, are like hard spheres. They occupy a certain volume. Each atom or molecule occupies a particular volume. And so we can call b the volume per mole of the hard spheres, the volume per mole of those little spheres that the molecules are. So the volume that is available to any one of those spheres is actually smaller than v, because you've got all these other little spheres around; the actual volume seen by any one of those spheres is smaller than v. So when we take our ideal gas law, p v bar is equal to r t, we have to replace v bar by the actual volume available to this hard sphere. So instead of v bar, we write p times (v bar minus b) equals r t.
OK, that's the hard sphere volume of the spheres. Now, those molecules or atoms that are in here, also feel each other. There are a whole bunch of forces that you learn in 5.112, 5.111 like with Van der Waals' attractions and things like this. So there are attractive forces, or repulsive forces that these molecules feel, and that's going to change the pressure that the molecules feel.
For instance, if I have, what is pressure? Pressure is when you have one of these hard spheres colliding against the wall. There's the hard sphere. It wants to collide against the wall to create a force on the wall, and I have a couple of the hard spheres that are nearby, right, and in the absence of any interactions, I get a certain pressure. This thing would just careen into the wall, kaboom! You'd have this little force, but in the presence of these interactions, you've got these other molecules here that are watching this, you know, their partner sort of wants to do damage to themselves, like hitting that wall, and they say, no! Come back, come back, right?
There is an attractive force. There are no other molecules on that side of the wall. So there's an attractive force that makes the velocity toward the wall not quite as fast. The force on the wall is not quite as strong as it would be without this attractive force. So the real pressure is not quite the same as it would be without the attractive forces. The pressure is a little bit less in this case here.
So instead of this p here, now if I re-write this equation here as p is equal to r t divided by (v bar minus b), just re-writing this equation as it is, the pressure is going to depend on how strong this attractive force is. So the pressure is going to be less if there's a strong attractive force, and that's the a over v bar squared term that gets subtracted. And the 1 over v bar squared is statistical, it's basically the probability of having another molecule, a second molecule, in that volume of space.
So, if the molar volume is small, then one over v bar is large, and there's a large probability of having two spheres together in the same volume. If the molar volume is large, that means that there's a lot of room for the molecules, they're not going to be close to each other, and so this term isn't going to be as important.
So, a is the strength of the interaction, v bar is how likely they are to be close to each other. And that's going to affect the actual pressure seen by the gas. And a is greater than zero when you have the attraction. And that gives us the Van der Waals' equation of state, with two parameters, the hard sphere volume and the attraction. You don't have to go look up in tables or books. You don't have to have all the values of the second virial coefficient, or the fudge factor, just two variables that make physical sense, and you get an equation of state which is a reasonable equation of state, and that's the power of the Van der Waals' equation of state, and that's the one we're going to be using later on in this class to describe real gases. Question?
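To get a feel for how the two correction terms shift the pressure, here is a small numerical sketch. The a and b values used are commonly tabulated ones for CO2, which is my own choice of example and not a gas discussed in the lecture:

# Compare the ideal-gas and Van der Waals pressures for one mole of CO2.
R = 8.314        # gas constant, J / (mol K)
a = 0.3640       # attraction parameter for CO2, Pa m^6 / mol^2 (tabulated value)
b = 4.267e-5     # hard-sphere (excluded) volume for CO2, m^3 / mol (tabulated value)

def pressure_ideal(v_bar, t):
    return R * t / v_bar

def pressure_vdw(v_bar, t):
    # p = RT/(v - b) - a/v^2, rearranged from (p + a/v^2)(v - b) = RT
    return R * t / (v_bar - b) - a / v_bar**2

v_bar = 1.0e-3   # molar volume of 1 L/mol, i.e. a fairly dense gas
t = 300.0        # temperature in K
print(pressure_ideal(v_bar, t))  # ~2.49e6 Pa
print(pressure_vdw(v_bar, t))    # ~2.24e6 Pa; the attraction term lowers the pressure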
OK, so we've done the zeroth law. We've done temperature, equations of state. We're ready for the first law. We're just going to go to through these laws pretty quickly here. Remember, the first law is the upbeat law. It's the one that says, hey, you know, life is all rosy here. We can take energy from fossil fuels and burn it up and make it heat, and change that energy into work. And it's the same energy, and we probably can do that with 100% efficiency. We can take heat from the air surrounding us and run our car on it with 100% efficiency.
Is this possible? That's what the first law says, it's possible; work is heat, and heat is work, and they're the same thing. You can break even, maybe. So let's go back and see what work is. Let's go back to our freshman physics. Work, work is if you take a force, and you push something a certain distance, you do work on it. So if I take my chalk here and I push on it, I'm doing work to push that chalk. Force times distance is work. The applied force times the distance. There are many kinds of work. There's electrical work, take the motor, you plug it into the wall, electricity makes the fan go around, that's electrical work. There's magnetic work. There is work due to gravity.
In this class here, we're going to stick to one kind of work, which is expansion work. So expansion work, for instance, or compression work, is if you have a piston with a gas in it. All right, you put a pressure on this piston here, and you compress the gas down. This is compression work. Now the volume gets smaller. p external here. The piston goes down by some distance l. The piston has a cross-sectional area, a, and the force -- pressure is force per unit area. So the force that you're pushing down on here is the external pressure times the area. Pressure is force per unit area. That's the force you're using to push down.
Now the work that's done when you push down with the pressure on this piston here, that work is force times distance, f times l. f is p external times a, so the work is p external times a, times the distance l. So that's p external times the change in the volume. The area times this distance is a volume, and that is the change in volume from going from the initial state to the final state. Now we need to have a convention. We've got force. Work is force times distance, it's p external times delta v, and I'm going to be stressing a lot that this is the external pressure. This is the pressure that you're applying against the piston, not the pressure of the gas. It's the pressure the external world is applying on this poor system here.
OK, but we need a convention here. The convention, and then we need to stick to it. And this convention, unfortunately, has changed over the ages. But we're going to pick one, and we're going to stick to it, which is that if the environment does work on the system, if we push down on this thing and do work on it, to compress it, then we call that work negative work. No, we call that work positive work. All right, so that means we need to put a negative sign right here, by convention. So if delta v is negative, in this case delta v is negative, OK, delta v is negative, pressure is a positive number, negative times negative is positive, work is greater than zero. We're doing work on the system, to the system. In this case here, work is positive.
If you have expansion on the other side, if the system is expanding in the other direction, if you're going this way, right, you're going to do work to the environment. There might be a mass here. This could be a car. Pistons in the car, right, so the piston goes up. That's going to drive the wheels. The car is going to go forward. You're doing work on the environment. Delta v is going to be negative. w is going to be negative. Sorry, I got it backwards again. Delta v is positive in this direction here, the work is negative. So work on the system is positive. Work done by the system is negative. Convention, OK, this negative sign is just a pure convention. You just got to use it all the time. If you use an old textbook, written when I was taking thermodynamics, they have the opposite convention, and it's very confusing. But now we've all agreed on this convention, and work is going to be with the negative sign here. OK, any questions?
This is an example where the external pressure here is kept fixed as the volume changes, but it doesn't have to be kept fixed. I could change my external pressure through the whole process, and that's the path. We talked about the path last time being very important. Defining the path. So if I have a path where my pressure is changing, then I can't go directly from this large volume to this small volume. I have to go in little steps, infinitely small steps. So, instead of writing work is the negative of p external times delta v, I'm going to write a differential. dw is minus p external dv, where this depends on the path, it depends on path and is changing as v and p change.
Now I'm going to add a little thing here. I'm going to put a little bar right here. And the little bar here means that this dw that I'm putting here is not an exact differential. What do I mean by that? I mean that if I take the integral of this to find out how much work I've done on the system, I need to know the path. That's what this means here. It's not enough to know the initial state and the final state to find what w is.
You also need to know how you got there. This is very different from the functions of state, like pressure and temperature. There's a volume, there's a temperature, and there's a pressure here. There's another volume, temperature and pressure here, corresponding to this system here. And this volume, temperature and pressure doesn't care how you got there. It is what it is. It defines the state of the system. The amount of work you've put in to get here depends on the path. It's not a function of state. It's not an exact differential. So the delta v here is an exact differential, but this dw is not. That's going to be really important. So if you want to find out how much work you've done, you take the integral from the initial state to the final state of dw, which is minus the integral from one to two of p external dv, and you've got to know what the path is. So let's look at this path dependence briefly here.
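In symbols, the distinction being drawn is (my notation, writing the barred differential as a small delta):

\[
\delta w = -\,p_{\mathrm{ext}}\,dV , \qquad w = -\int_{1}^{2} p_{\mathrm{ext}}\,dV ,
\]

where the value of w depends on the path taken from state 1 to state 2, while the integral of the exact differential dV is always just V2 minus V1, whatever the path.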
We're going to do two different paths, and see how they're different in terms of the work that comes out. So we're going to take an ideal gas, we can assume that it's ideal. Let's take argon, for instance, a nice, non-interacting gas. We're going to do a compression. We're going to take argon gas, with a certain pressure p1, volume V1, and we're going to a final state, argon, gas, p2, V2, where V1 is greater than V2, and p1 is less than p2. So if I draw this on a p v diagram, so there is volume on this axis. There's pressure on this axis. There is V1 here. There's V2 here. There's p1 here, and p2 here. So I'm starting at p1, V1. I'm starting right here. And I'm going to end right here. Initial, final -- there are many ways I can get from one state to the other. Draw any sort of line to go here, right? There are a couple obvious ones, which we can calculate, which we're going to do.
So, the first obvious one is to take V1 to V2 first with p constant. So take this path here. I take V1 to V2 first, keeping the pressure constant at p1, then I take p1 to p2 keeping the volume constant at V2. Let's call this path 1. Then you take p1 to p2 with V constant. An isobaric process followed by a constant volume process.
You could also do a different path. You could do, let me draw p v, there's my initial state. My final state here, I could take, first, I could change the pressure, and then change the volume. So the second process, if you take p1 to p2, V constant, and then you take V1 to V2 with p constant. This is path number two. Both are perfectly fine paths, and I'm going to assume that these paths are also reversible. Let's assume that both are reversible, meaning that I'm doing this pretty slowly, so as I change, let's say I'm changing my volumes here, V1 to V2, it's happening, I'm compressing it slowly, slowly, slowly so that at any point I could reverse the process without losing energy, right? It's always an equilibrium.
All right, let's calculate the work that's involved with these two processes. Remember it's the external pressure that's important. In this case, because it's a reversible process, the external pressure turns out to be always the same as the internal pressure. It's reversible, that means that p external, equals p. I'm doing it very slowly so that I'm always in equilibrium between the external pressure and the internal pressure so I can go back and forth.
So, let's calculate w1, the work for path one. First thing is I change the volume from V1 to V2. The external pressure is kept constant, p1, so it's minus the integral from V1 to V2 of p1 dv. And then the next step here is -- the pressure is changing, I'm going from V2 to V2 -- what do you think this integral is? Right, so this is the easy part, zero here. This one is also pretty easy. That's minus p1 times V2 minus V1. What that turns out to be is this area right here. It's V1 minus V2 times p1. This is w1 here. OK, I can re-write this as p1 times V1 minus V2 and get rid of this negative sign here. Now V1 is bigger than V2, so this is positive. So I am compressing, I'm doing work to the system, positive work, everything follows our convention.
Number two here, OK, the first thing I do is I change the pressure under constant volume, so it's minus the integral from V1 to V1 of p dv, and then I change the volume from V1 to V2, and this is with p2: minus the integral from V1 to V2 of p2 dv. This first integral is zero, V1 to V1, then I get minus p2 times V2 minus V1, or p2 times V1 minus V2. Again, a positive number.
I'm doing work to the system to go from the initial state to the final state. But it's not the same as w1. In this case, I have p1 times delta V. In this case here, I have p2 times delta V. And p2 is bigger than p1. w2 is bigger than w1. The amount of work that you're doing on the system depends on the path that you take.
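A small numerical sketch of that path dependence, using made-up pressures and volumes rather than values from the lecture:

# Work done ON the gas (w > 0 for compression) along the two paths described above.
def work_isobaric(p_ext, v_initial, v_final):
    # w = -p_ext * (V_final - V_initial); the constant-volume step contributes nothing.
    return -p_ext * (v_final - v_initial)

p1, p2 = 1.0e5, 2.0e5    # pressures in Pa, with p1 < p2
v1, v2 = 2.0e-3, 1.0e-3  # volumes in m^3, with V1 > V2

w_path1 = work_isobaric(p1, v1, v2)  # compress at p1 first, then raise the pressure
w_path2 = work_isobaric(p2, v1, v2)  # raise the pressure first, then compress at p2

print(w_path1)  # 100 J
print(w_path2)  # 200 J -- same initial and final states, different work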
All right, how do I, practically speaking, how do I do this? Anybody have an idea? How do I keep p1 constant while I'm lowering the volume?
STUDENT: Change the temperature?
PROFESSOR: Change the temperature, right. So what I'm doing here is I'm cooling, and then when I'm sitting at a fixed volume and I'm increasing the pressure, what am I doing? I'm heating, right? So I'm doing cooling and heating cycles. So in this case here, I cool and then I heat. In this case here, I heat and then I cool. All right, so I'm burning some energy, I'm burning some fuel to do this somehow, to get that work to happen.
All right, now suppose that I took these two paths, and coupled them together. So in this case, it's the amount of work is the area under that curve. And in this case here, the amount of work is bigger, w2 is bigger, and it's the area under this curve.
Now, suppose I took this two paths, and I took -- couple them together with one the reverse of the other. So I have my initial state, my final state, my initial state, my final state here. And I start by taking my first path here. I cool, I heat. So there's w1. So the w total that I'm going to get, is w1, and then instead of the path from V1 to, from 1 to 2 going like this as we had before, I'm going to take it backwards. If I go backwards, to work -- everything is symmetric, the work becomes the negative from what I had calculated before, so this becomes minus what I calculated before for w2.
The total work, in this case here, is p1 times V1 minus V2, minus p2 times V1 minus V2; it's p1 minus p2, times V1 minus V2. V1 minus V2 is a positive number; p1 is smaller than p2, so p1 minus p2 is a negative number. The total work is less than zero. That's the work that the system is doing to the environment. I'm doing work to the environment. The work is negative, which means that work is being done to the environment. And that work is the area inside the rectangle.
What you've built is an engine. You cool, you heat, you heat, you cool, you get back to the same place, but you've just done work to the environment. You've just built a heat engine. You take fuel, rather you take something that's warm, and you put it in contact with the atmosphere, it cools down. You take your fuel, you heat it up again. It expands. You change your constraints on your system, you heat it up some more, then you take the heat source away, and you put it back in contact with the atmosphere. And you cool it a little bit, change the constraints, cool it a little bit more, and heat, and you've got a closed cycle engine. We're going to work with some more complicated engines before.
But the important part here is that the work is not zero. You're starting at one point. You're going around a cycle and you're going back to the same point. The pressure, temperature, and volume are exactly the same here as when you started out. But the w is not zero. The w, for the closed path, and when I put a circle there on my integral that means a closed path, when you start and end at the same point, right, this is not zero. If you had an exact differential, the exact differential around a closed path, you would get zero. It wouldn't care where the path is. Here this cares where the path is. So, work is not a function of state. Any questions on work before we move on to heat, briefly?
So heat is a quantity that flows into a substance, something that flows into a substance that changes its temperature, very broadly defined. And, again, we have a sign convention for heat. So heat, we're going to call that q. And our sign convention is that if we change our temperature from T1 to T2, where T2 is greater than T1, then heat is going to be positive. Heat needs to go into the system to change the temperature and make it go up. If the temperature of the system goes down, heat flows out of the system, and we call that negative q. Same convention as for w, basically.
Now, you can have a change of temperature without any heat being involved. I can take an insulated box, and I can have a chemical reaction in that insulated box. I can take a heat pack, like the kind you buy at a pharmacy. Break it up. It gets hot. There's no heat flowing from the environment to the system. I have to define my terms. My system is whatever's inside the box. It's insulated. It's a closed system. In fact, it's an isolated system. There's no energy or matter that can go through that boundary. Yet, the temperature goes up. So, I can have a temperature change which is an adiabatic temperature change. Adiabatic means without heat. Or I could have a non-adiabatic, I could take the same temperature change, by taking a flame, or a heat source and heating up my substance. So, clearly q is going to depend on the path. I'm going from T1 to T2, and I have two ways to go here. One is non-adiabatic. One is adiabatic.
All right, now what we're going to learn next time, and Bob Field is going to teach the lecture next time, is how heat and work are related, and how they're really the same thing, and how they're related through the first law, through energy conservation. OK, I'll see you on Wednesday then.
"Then they returned to Jerusalem from the hill called the Mount of Olives, a Sabbath days walk from the city. When they arrived, they went upstairs to the room where they were staying." (Acts 1:12-13)
The building identified as the Coenaculum or the Cenacle is a small, two-storey structure within a larger complex of buildings on the summit of Mount Zion. The upper storey was built by the Franciscans in the 14th century to commemorate the Last Supper. It is also identified as the "upper room" in which the Holy Spirit descended upon the Disciples at Pentecost (Acts 2:2-3). In Christian tradition, the area of the city in which they were living at the time was the present Mount Zion (the geographical name somehow having been transferred from the Temple Mount to this hill in the southwest corner of the city possibly through a 4th-century misreading of Micah 3:12, which seems to speak of two hills: "the Mountain of the Lord and Zion").
The ground-floor room beneath the Coenaculum contains a cenotaph that since the 12th century has been known as the "tomb of King David" - even though the recorded burial place of the king was in the "City of David" on the Ophel Ridge (1 Kings 2:10). Beneath the level of the present floor are earlier Crusader, Byzantine and Roman foundations. An apse behind the cenotaph is aligned with the Temple Mount, leading to speculation that this part of the building may have been a synagogue, or even "the synagogue" mentioned by the Pilgrim of Bordeaux in 333.
This area of the hill became part of the Mother Church of Holy Zion (shown in the 6th-century mosaic Madaba Map). This basilica was destroyed by the Persians in 614. The 12th-century Crusader Monastery and Church of St. Mary were built on the foundations of this earlier church, but in 1219 it too was destroyed (probably in the demolition of the walls and strongpoints around the city ordered by the Ayyubid Sultan Al-Muazzam).
The present Chapel of the Coenaculum was built by the Franciscans on their return to the city in 1335. The ribbed vaulting of the ceiling is typical of Lusignan or Cypriot Gothic. The sculpted mihrab, the Muslim prayer niche, was added in 1523, when the Franciscans were evicted from the building and the room converted into a mosque.
Sources: Israeli Foreign Ministry
Theory Name: Advanced Organizers (Subsumption Theory)
Authors: Ausubel, David
Associated Learning Theory
Cognitive Learning Theory
This theory prescribes a way of creating instructional materials that
help the learner organize content in order to make it meaningful for the learner.
Specification of Theory
(a) Goals and preconditions
Reception (expository) learning – Basically, this is the meaningful
background knowledge that a learner needs to have before being able to take in new learning meaningfully.
1. General content should be present before increasing the detail and
2. Instructional materials should try to integrate what has already been
learned and relate that to new learning.
(c) Condition of learning
1. Different advanced organizers produce different results.
(d) Required media
Yes, this theory leads to the development of instructional materials.
Instructional materials can serve as a generic term, however, because
advanced organizers could exist in several formats.
(e) Role of facilitator
Provide a meaningful organizer that subsumes, or bridges, the gap between
what has been learned and what is going to be learned.
(f) Instructional strategies
Create a textual or graphic organizer that links new learning to prior
knowledge and experience. Advanced organizers should always be given
in advance of instruction.
There are four types of advanced organizers:
1. Expository – describes new knowledge
2. Narrative – presents new information in story format
3. Skimming – skimming through information
4. Graphic Organizers – pictographs, descriptive or conceptual patterns,
Building an advanced organizer includes the following heuristics:
1. Present information at a higher level of abstraction than the future learning
2. Bridge the gap between previous and new learning
3. Higher level advanced organizers (more abstract) produce better results
than lower level organizers (more concrete).
4. Preview new learning.
5. Use familiar terms and concepts to relate to new terms and concepts.
6. Do not review information unless it is relevant to new learning.
(g) Assessment method
Formative Research & Application
(a) Tested context - K-12, Language Learning
(b) Research method
(c) Research description
What is biogas energy?
This section is about the use of biogas in industry for the purpose of energy creation (heat and electricity) and/or non-transport fuel that can be released back into the grid for general public use.
Biogas is produced via a process called Anaerobic Digestion (AD), which results in the production of numerous gases that can then be burnt to produce energy. Anaerobic digestion is the breakdown of various plant and animal material (known as biomass) by bacteria in an oxygen-free environment. For example, the waste plant material is sealed in an airtight container, then bacteria are added and encouraged to multiply and grow, releasing methane and other gases as the by-product of the process. In addition, there are other by-products of the process which are rich in nutrients and can be used as fertiliser. The inputs to the process can be any number of biomass materials, including: food waste, energy crops, crop residues, slurry and manure. In practice the process can take on waste from households, supermarkets and industry, therefore reducing the waste that goes to landfill.
The two major gases that make up biogas are methane (CH4), which accounts for about 60%-70% of the total, and carbon dioxide (CO2), which accounts for 30%-40%. Small traces of other gases can also be found. Overall, the composition of biogas depends on the inputs, or feedstock, that go into the AD process. In industry, biogas can be upgraded by filtering out the other gases to leave nearly pure methane, which is then known as biomethane.
Biogas energy and industrial uses
Heat-only biogas energy
Biogas can simply be burned through the combustion process to produce heat only. When burned, one cubic metre of biogas produces around 2.0-2.5 kWh of thermal energy. A proportion of the heat generated in the plant can be used directly to power the digester and the nearby buildings. The remaining heat is discharged, and unless it is used to heat water and transfer it through a local pipe network into homes, it is wasted. This concept of heating water and transferring it to homes as part of central heating is popular in some Scandinavian countries.
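As a rough worked example of that figure (the daily biogas output chosen here is an arbitrary illustrative number):

# Rough thermal-energy estimate from burning biogas,
# using the quoted figure of about 2.0-2.5 kWh of heat per cubic metre of biogas.
KWH_PER_M3_LOW, KWH_PER_M3_HIGH = 2.0, 2.5

def thermal_energy_kwh(biogas_m3):
    # Return the (low, high) estimate of heat, in kWh, from burning biogas_m3 of biogas.
    return biogas_m3 * KWH_PER_M3_LOW, biogas_m3 * KWH_PER_M3_HIGH

# For example, a small digester producing 500 m^3 of biogas a day:
low, high = thermal_energy_kwh(500)
print(low, high)  # 1000.0 to 1250.0 kWh of heat per day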
In the UK, expanding district heating requires investment in new infrastructure. Generators that make this upfront investment can hopefully, by the end of 2012, make good use of the Renewable Heat Incentive support.
Electricity-only biogas energy
Electricity can be generated from the combustion of biogas, which is a relatively simple process, but this requires an upgrade of the plant. Electricity is easier to transport and measure than heat and gas supply, but requires the infrastructure to feed into the grid, which is not simple and may be expensive. Generating green electricity can benefit the generators (households and communities) by making use of Feed-in Tariffs (FiTs), or for bigger players can maximise the Renewable Obligation Certificates (ROCs) for industrial-scale production.
Combined heat and power (CHP) biogas energy
CHP or cogeneration is a process that simultaneously produces useable heat and electricity. Cogeneration makes biogas plants more efficient than other conventional power plants: the process that creates the various gases requires heat, so less of the heat generated is wasted. In addition to the heat, the plants also generate electricity, which is transported and sold like the excess heat. If generators can support the cogeneration process, then they are able to make good use of the RHI, FiTs and ROCs available to them.
Biomethane liquid biogas energy
Biogas needs to be purified by extracting the carbon dioxide and trace gases, leaving a purer form of methane, which then becomes biomethane. In the UK, this purification has to take place for the gas to be acceptable in the gas grid: the gases are dried and upgraded to a higher methane content (upwards of 95%) so that the product resembles the qualities of natural gas. This approach is already followed in the US and other Western European countries. Refer to the National Grid for more information and supporting documentation on maximising the opportunities for biomethane in the national supply. DECC has announced that when the RHI scheme is finalised, it will include support payments not only for biogas combustion (see above) but for biomethane injection too, as a consumption fuel for homes and businesses.
Biomethane transport fuel biogas energy
Biogas energy, like the process for domestic heating fuel, can be cleaned further from other gases (carbon dioxide and trace gases), then upgraded to a pure form of biomethane and used as transport fuel. Biogas is eligible for support under the Renewable Transport Fuel Obligation.
The combustion of biomethane in vehicles is more environmentally friendly than the burning of transport fuels such as modern petrol and diesel, thereby helping reduce greenhouse emissions. Examples of renewable transport fuels formed from biogas are compressed natural gas (CNG) and liquefied natural gas (LNG). In the UK, the percentage of vehicles that run on CNG or LNG is relatively low compared to Germany and Sweden. As the number of vehicles using CNG and LNG is lower in the UK, the infrastructure to support these vehicles, such as refuelling stations, is also less developed than in some of those countries mentioned.
Biogas energy policy development
Biogas energy hasn't really taken off as a fuel alternative yet in the UK. This is partly due to the lack of infrastructure to support its development and, more importantly, because there is a lot of uncertainty from the government around the RHI. Generators do not yet know what is actually going to form part of this scheme, or the level of support that will be given to both fuel and combustion. Biogas energy generation is more expensive than conventional fossil fuels, which enjoy the benefits of scale and lower fuel prices. The government needs to support these generators if it wants the investment to happen and the price to come down in the long term.
Smaller scale producers currently benefit from the FiTs and larger scale producers from the ROCs (only for electricity generation – on the assumption the AD facility has been completed post 15th July 2009). If you are an accredited generator of renewable energy, you could be entitled to benefit from the Renewables Obligation, if you are not already doing so. To find out more information and how to benefit from renewable obligation, please visit OFGEM.
- Rich leftover products that can be harnessed as fertiliser.
- Biogas creation is made from plant and animal waste that would otherwise end up in landfill sites.
- Biogas power plants can make use of FiT and ROCs for electricity generation and RHI for heating.
- Biomethane (from biogas) is a fuel that powers CNG or LNG vehicles and emits fewer greenhouse gases.
- Purifying biogas by removing CO2 and trace gases turns the gas into biomethane, which can be used as an alternative fuel for domestic consumption.
- With a piping infrastructure, biogas power plants can provide heating to local communities and districts that is greener and more sustainable.
- Biogas power plants are more efficient in using the heat they generate than conventional coal, oil and gas plants.
- Electricity distribution start-up costs can be high, as they require the producer to connect to the grid and agree a tariff with their distribution partners.
- Local infrastructure needs to be upgraded, which may be costly, so that heat generated can be effectively passed to homes as a central heating solution.
- In the UK the volume of CNG and LNG vehicles is relatively low compared to countries like Germany and Sweden.
- If the plant was built pre-July 2009, the producer will not be able to benefit from FiT and RHI.
- Poor choice of feedstock will mean poor yields of biogas.
- A regular supply of feedstock must be ensured, otherwise there may be large variations in production.
Are you looking for an effective way to teach writing for students in kindergarten through graduate school using an easy 5-step process? Look no further! This recorded webinar, How to Teach Writing by Dr. Andy Johnson, can show you how easy writing can be when you follow the 5-step process.
When was the last time you wrote or received a handwritten letter? For thousands of years, handwritten letters have played a critical part in our lives. In this age of digital communication, handwritten letters are becoming a lost art. Emails and text messages can be sent instantly; however, the impact of a good old-fashioned handwritten letter can bring a lifetime of benefits and memories.
Encouraging children to write and read letters will improve their literacy and communication skills, as well as their social and emotional development. Writing can reduce anxiety and stress, as well as decrease depression. It’s especially important during this time of virtual learning and social distancing to provide opportunities for handwritten letters. Let’s explore the academic and mental benefits of being PenPals!
Handwritten letters improve writing skills. We know that reading and writing go hand in hand… but did you know that writing by hand is just as important as reading? By definition, literacy is one's ability to read and write. Research confirms that integrating reading and writing automatizes those skills. From kindergarten standards of using a combination of drawing, dictating, and writing to compose informative/explanatory texts to twelfth-grade standards of producing clear and coherent writing, all learners must be able to write to communicate ideas.
Handwritten letters improve reading skills. Research confirms that writing by hand activates reading circuits in the brain that promote literacy. Additionally, research by McGinley and Tierney in 1989 confirmed that integrating reading and writing instruction leads to a higher level of thinking than when either process is taught alone. Providing opportunities to read a letter from a teacher or loved one will lead to improved reading achievement, better writing performance, and increased awareness of self, others, and the community.
Handwritten letters improve communication skills. It’s an old saying, but it’s true: Practice makes perfect. By habit, we mimic the voices around us – which is sometimes not the best grammar. Our speech is a direct reflection of our writing. Writing forces thought and articulation of main ideas while exploring main feelings. Letter writing provides an opportunity to improve vocabulary, knowledge, and sentence structure; and better writing creates a better speaker. What better way to practice and improve communication skills through writing than writing to someone you trust?
Handwritten letters improve self-awareness. Mental health and well-being are the core of who you are. Writing helps to clear the mind, recover memories, organize thoughts, and refine ideas. Research confirms that a person can understand his or her feelings more clearly when they are written down. Writing is a creative way to improve mental recall and well-being.
Handwritten letters improve relationships. In times like these, opportunities to connect with teachers and loved ones are important. Handwritten letters confirm the importance of relationships between educators and families with children. Daily writing opportunities provide deep connections while addressing reading, writing, and social development skills. Addressing the whole child is vital.
Use a dated notebook, versus loose paper, to keep track of how the conversation evolves. This notebook can serve as a journal, mental wellness check-in, calendar, planner, and keepsake for life (or not).
Do not edit children's writing in the journal; however, provide additional opportunities to teach correct sentence structure, etc.
Always begin and end with something positive.
So go ahead, grab a pen and notebook, and begin creating memories while positively impacting literacy, communication, social, and emotional development, simultaneously.
The young students frantically waved their hands high in the air. They couldn't wait to run to the front of the gym and participate in a game I call "5 in 10!". I recently spoke with hundreds of students in different settings (urban, rural, and suburban) and they all enjoyed "5 in 10!". The gist of this interactive game is to name 5 ______ in ten seconds.
The catch is that the students do not know what I will ask them until I say, "go!". For example, I will call someone up to the front of the class, gym, auditorium, etc. and immediately say, "Name 5 dances in ten seconds… go!". I typically have the audience be my shot clock and provide a whisper countdown… 10… 9… 8… 7… 6… 5… 4… 3… 2… 1… short buzzer sound! The choices one can use are endless. I can ask participants to name 5 dogs, 5 birds, 5 pizza toppings, 5 songs, 5 movies, 5 shoes, 5 cars, 5 words that start with the letter "A", etc.
I use "5 in 10" as an ice breaker for students, staff, parents, and families of all ages when I present. Similarly to "5 in 10", I also use "3 in 5" and "1 in 3." These are variations of the same "5 in 10" game, except that you have to name 3 ____ in 5 seconds and 1 ____ in 3 seconds. Even in virtual environments, students, staff, and parents are excited to play these games!
It was a breath of fresh air to many who were struggling with the remote learning options that were very rigid at times. These fun games get students to speak in front of others. I use it to enhance listening. I use it to help with the correlation between listening, speaking, writing, and reading as well. Before I tackle reading, I typically get students to listen. Historically, stories were told orally (speaking) and the hearer had to “listen well” to pass the story on. Many of these stories were written and these words were read from papers and books. The correlation between listening, speaking, writing, and reading must be leveraged more.
Below are a few ways you can leverage the fun to get some reading gains!
Try “5 in 10”, “3 in 5”, and “1 in 3”
Tell a story and have your students continue where you left off. For example, “It was the first day of school for Anthony. He was so excited he ran out the door and forgot….” Have a student “continue” until you have a complete story! You can interject at times to get the story to keep moving.
After the students finish their collective story, have them write down the story on paper. Allow them to change up certain parts as they see fit.
Collect the stories and make a list of words that you want to highlight for vocabulary improvements.
Encourage students to take these same ideas home and have their families do similar activities!
So here is my call to action for you! At the very least, please try “5 in 10”, “3 in 5”, and “1 in 3” with your students, colleagues, and families. Let me know how they enjoyed it! Remember to leverage the fun as you learn!
Reading and writing are skills that go hand in hand. As children develop, they learn to speak first. Reading follows, and then the ability to write in their language. Writing is a great way to reinforce the lessons they learn from reading. They start to mimic writing the words they see, much like they mimic hearing the words they hear on a daily basis. Introducing children to writing is a task that should occur early. It can start with items as simple as crayons and some paper.
Providing the opportunity to draw at an early age is one way of encouraging writing. Much like ancient cultures drew images that morphed into letters, the pictures that young children draw are their way of communicating. Getting them to put markers and crayons to paper is a way to encourage early writing skills. When they complete their drawings, you can have them tell you stories about them. As they get older, you can teach them that writing is very similar to drawing.
This playful approach to writing can be the perfect introduction to associating letters with sounds. Children can start practicing associating letterforms with sounds and words as early as the preschool years. During that time, they begin putting sounds together with the words they hear. They are starting to understand the connection between the letters they see and the sounds or ideas they represent. Picture books emphasize this connection as well, helping children to associate the images of the words with pictures.
As they become more familiar with what letters look like, those letters may start to emerge in their drawings. The letters will be random at first. Mostly they will be working on consonants and a few vowels. Each time they write down letters, spend some time talking about them. What sounds do the letters make? What words are they part of? When the letterforms start to develop, they will eventually mimic the words they see in books. This is an opportune time to continue to teach them more about the words they are seeing as they begin to write them out.
Another way that young children are encouraged to write is by seeing their parents write. Children like to repeat what their parents are doing. Before computers became such powerful communication devices, there was more writing done at kitchen tables around the country. With fewer letters and checks written, it is essential to take time out of the day to show your children that you write. This is also a chance to teach them the importance of things like thank you letters, as well as their own creative works. When children tell stories about their drawings, write them down for them. Then have them read the stories back to you. They have created their own stories to share with your help!
Developing writing is a way to reinforce what they are learning when they read. They are learning the building blocks of reading, letters, and words, while they connect what a letter looks like to how it sounds. It starts with something as simple as drawing pictures, eventually turning those pictures into full-blown stories.
One of the strongest desires that parents of young children have is the ability to communicate with them. While they know a howling baby is uncomfortable in some way, they do not know why. Years are spent modeling speech to toddlers, saying words and pointing at objects to cement a visual link to the concept they are trying to teach. Toddlers, for their part, are incredibly amusing as they learn this skill. Every adult male becomes “Daddy”. Sometimes the family pet becomes “Daddy” as well. But they learn this skill through verbal demonstration and visual connection.
The writer is an explorer. Every step is an advance into a new land.
~Ralph Waldo Emerson
Teaching a child to read is a similar process. We sit a child on our lap, or lay them down at night, and read them a story. If they can see the words, their curiosity gets the better of them and they start asking questions. They learn to read in the same way they learn to speak: repeating what the adults say until they connect the word and the concept it conveys. The visual components of reading are letters and words. Teaching students at a young age to write improves their reading skills by helping them recognize the connection between the letters they see and the sounds the letters make.
Parents are instrumental in helping children link writing to reading and speaking at a young age. As with speaking, children do not at first understand the letters they are writing; they want to mimic what they are seeing on the page. This is how young learners start to write, known as emergent writing. They start with scribbles, and over time hone those random slashes into letters. As they learn their letter forms, they are able to turn them into words. This adds another tool to their communication arsenal, linking the spoken and written word together. Adults help by encouraging this scribbling and guiding it into letters. As young writers move from scribbling, to writing poorly, to writing well, they begin to pick up other writing skills that are related to reading, like working left to right and top to bottom. Understanding how punctuation affects what is read creeps into their writing.
Why start at such a young age? Early aptitude in writing is an indicator of a child's reading ability. Up to middle school, children are sponges for information. They learn the things parents and teachers reinforce, like positive habits and important life skills. It is during this time frame that teaching them new skills is most effective. Helping them develop an aptitude for writing is a tool that will help them through their entire life, from taking notes in school to writing resumes and cover letters for jobs. It is important to keep them interested in and enjoying writing while not forcing it. Push too hard and they will get burnt out and frustrated. Writing becomes a chore, starting a bad relationship with writing and letters. This could start a bad relationship with reading as well, further hindering future prospects.
Everything we can do as educators to build a strong relationship with the written word is important for a child’s future development. Giving students the tools to write the words they are reading is a major step to improving their literacy. Building their confidence in these abilities at a young age starts them on the path of being lifelong readers and learners. Kids Read Now knows the importance of building literacy at a young age. Reading to younger children supports their desire to learn to read and write, creating better students. |
Confucius, the Chinese philosopher, said long ago that "music produces a kind of pleasure which human nature cannot do without." Playing a musical instrument brings many benefits and can give joy to you and everyone around you. Albert Einstein credited his study of the violin with inspiring him to think differently about physics, and former Federal Reserve Chairman Alan Greenspan credited his saxophone playing with balancing his mind when tackling complex economic patterns.
The act of playing a musical instrument enables our minds to engage in deeper thinking. But how does that work? Playing stimulates parts of our brain that are otherwise largely dormant. This new synergy opens pathways and connections in the mind that, like a muscle, grow stronger when exercised often.
So, in this article, let's look further at how playing stimulates other parts of our brains. Here are the top benefits of playing a musical instrument!
- Increases the capacity of your memory
Back in 2003, psychologists at the Chinese University of Hong Kong conducted a study among school students, half of whom had been musically trained, and half who had not. The test involved reading a list of words to the students and asking them to recall the words after some time had passed. The study showed that the boys who had been musically trained had a significantly better verbal memory than boys who had not. Moreover, the more musical training they had, the more words they were able to remember.
Many studies have shown that both listening to music and playing a musical instrument stimulate your brain and can increase your memory. According to an article from The Telegraph online magazine, "New research suggests that regularly playing an instrument changes the shape and power of the brain and may be used in therapy to improve cognitive skills." There is also growing evidence that musicians have organizationally and functionally different brains compared to non-musicians, especially in the areas of the brain used in processing and playing music.
To put it simply, learning to play an instrument lets the parts of your brain that control motor skills, hearing, storing audio information, and memory actually grow and become more active.
- Teaches you perseverance
Learning to play an instrument is neither easy nor simple; it takes time and effort, which is why it teaches you patience and perseverance. When you're just starting out, things won't be as perfect as you want them to be. In fact, most musicians have to work through difficult sections of music many times in a row before they can play them correctly. Although this can feel like a hardship, it is what helps you grow as a musician and keep reaching toward that goal.
The process of learning to play an instrument involves not only your mind but also your body. You will have to learn fingerings and/or chord shapes, develop technique, and memorize new information. Slowly, with consistent practice, you will find yourself getting better. With each new milestone, you gain a small reward for your efforts, and this will keep you motivated. Remember that making music requires patience: instead of getting immediate results, you will have to persevere!
- Improves your ability to discern sounds
Nina Kraus, a neuroscientist from Northwestern University in Chicago, found that early musical training has positive effects on older adults in the realm of hearing and communication. She measured the electrical activity in the auditory brainstems of 44 adults, ages 55 to 76, as they responded to the synthesized syllable "da." Although none of them had played a musical instrument in 40 years, those who had trained the longest – between four and fourteen years – responded the fastest.
According to Kraus, this finding was significant. This is because hearing tends to decline as we age, including the ability to quickly and accurately discern consonants, a skill crucial to understanding and participating in conversation.
The reason for this, she speculates, may be that musical training focuses on a very precise connection between sound and meaning. Students learning to play a musical instrument focus on the note on a page and the sound it represents, and on the ways sounds go together. In addition, they are using their motor systems to create those sounds with their fingers.
The payoff is the ability to discern specific sounds – like syllables and words in conversation – with greater clarity.
Musical training holds real promise both for those just starting out and for those who are aging. It can serve as a cognitive intervention to help aging adults preserve, and even build, skills. No matter what age you start learning, the benefits are there to be had. So, what are you waiting for? Start practicing now! |
When scientists talk about populations, they often refer to the carrying capacity of a species in a particular environment. Carrying capacity is the largest population that an environment can sustain indefinitely.
Suppose the carrying capacity of seals for a particular group of islands is
Write a differential equation that models the rate of change in the number of seals.
Write a general solution for the differential equation.
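A minimal sketch of the standard logistic model that these prompts point to, writing the carrying capacity (its value is not given in the text) as M and the growth constant as k:

    \frac{dP}{dt} = kP\left(1 - \frac{P}{M}\right)

    P(t) = \frac{M}{1 + A\,e^{-kt}}, \qquad A = \frac{M - P(0)}{P(0)}

The constants A and k are then fixed by the current population and one later observation, such as the ten-year count mentioned below.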
The initial population (at time t = 0) is the current population. Suppose that after ten years, 2,500 seals inhabit the islands. Write a formula for the seal population P in terms of the time t. |
A simulation of movement or the perception of motion created by the rapid display of a series of still images.
Persistence of Vision
Refers to the way our eyes retain images for a split second longer than they actually appear, making a series of quick flashes appear as one continuous picture.
2D or “Traditional” Animation
When an animation is created using a series of drawings in a two-dimensional (i.e. "flat") environment.
3D or “Computer Animation”
When an animation is created in a computer using software that allows for objects to be animated in a 3D environment where the camera can be moved around the environment in the X, Y, and/or Z Axis.
Stop Motion Animation
Animation where a model is moved incrementally and photographed one frame at a time. NOTE: Sometimes this is also referred to as "claymation". However, claymation is in fact a trademarked term and does not apply to the genre as a whole.
Frame Rate
The speed at which frames progress in an animation, usually measured in frames per second (fps). In animation for film the typical frame rate is 24 frames per second. Since most traditional animation is done on "twos" (i.e. each drawing is shown for TWO frames), a typical second of animation will consist of 12 unique drawings.
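As a rough illustration of the arithmetic above, here is a small sketch; the function name and the "frames per drawing" parameter are my own, not standard animation tooling.

    def unique_drawings(seconds, fps=24, frames_per_drawing=2):
        # Total frames in the shot, divided by how many frames each drawing is held for
        return seconds * fps // frames_per_drawing

    print(unique_drawings(1))    # 12 unique drawings for one second animated on twos
    print(unique_drawings(10))   # 120 drawings for a ten-second shot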
Keyframe
A frame in a timeline at which a change will occur.
A main action or drawing that is set on a key frame.
An inbetween basically fills in what is happening between the breakdowns for pose A and pose B.
A thumbnail is a very small image or sketch used as a reference or a placeholder for a final image.
Twinning
When a character or object that is symmetrical moves with both sides in sync and in unison. This "mirrored" appearance typically appears unnatural and incorrect. |
This is the Editable PowerPoint version of these Rhyming worksheets.
Rhyming is a great way to help students recognise word patterns and learn how English words are constructed.
This free pack comes with 4 different worksheets aimed at helping teachers give their students work targeted at their level. The first two are three-letter worksheets for rhyming (mostly) CVC words; the next two have longer rhyming words, which helps introduce the different spellings of vowels in English words. Students have to find the rhyming pairs and colour them different colours.
There are a few different worksheets on the site so please take a look around. These cover Phonics and general English.
I hope they are useful, and please let me know if they were. If you need an editable copy, feel free to check it out in the shop (it's going to be a dollar or two, just to help with the costs of running the website).
Most of the resources can be downloaded below as premium versions; they are all editable so you can adapt them for your classes, students and schools. It also helps me with the costs of running the website! |
The objectives of the lesson :
Educational: to enrich students' knowledge about the theme.
Practical: to practise their reading and to check their pronunciation.
Developing: to develop students' reading and listening comprehension.
Equipment: textbook, sentence cards, read-and-translate cards.
Organization moment Good morning, children! How are you today?
Who is on duty today? Who is absent today?
What date is it today?
Warm-up: Let's start our lesson. The theme of our lesson is "Schools in Kazakhstan".
Presentation: Open your books at p. 105, ex. 1. Listen and repeat.
Practice: p. 106, ex. 1. Listen to the text "My Primary School". Learn the new information about education in the primary school.
Ex.2 Answer the questions
1. What age did Aidar start school at?
2. When did he go to secondary school?
3. How many years of study do primary and secondary schools comprise?
4. How many years of study are compulsory in our republic?
Ex.3 Fill in the correct prepositions
1. Astana is the capital___________ Kazakhstan.
2. I started school __________ the age of seven
3. Nine years____________classes are compulsory.
4. _____________ the first of September we get acquainted ____ our teachers.
Ex.4 Complete the sentences
1. After four years of_________ school classes I went to _____ school.
2. Primary and secondary schools together ___________ eleven years of study.
3. Our school year begins on the first of September and ends in May. It___9 months
Reading P 108 ex.3
Checking homework p.108 ex.3 translate
Reflection: What have you learnt today? |
Andhra Pradesh Board of Intermediate Education, also known as BIEAP, is the body that governs and conducts Intermediate education for Class 11 and Class 12 in the state. Maths in Class 11 is fairly analytical, and with daily practice it can become one of the most interesting subjects for students. Important questions for AP Board Intermediate 1st year Maths are a fruitful resource for students, as there is a sudden jump in the level of difficulty of the subject. The AP Intermediate 1st Year Maths important questions given below will help students get an idea of the different types of questions that can be framed in an examination. These questions are crafted after analysing the AP Intermediate question papers of 2020 and other years.
The important questions for Maths have been designed to help students learn and understand the concepts in an interesting and easy manner. They are prepared keeping in mind the latest AP Board Intermediate 1st year Maths syllabus, with the help of independent subject experts. Class 11 (Intermediate 1st year) students can succeed in the Maths exam by making proper use of these important questions and becoming familiar with the actual exam paper.
By solving AP Board Intermediate 1st year Maths important questions, students will get a good idea of the exam pattern and the marking scheme. These questions will help students gain the knowledge needed to tackle any type of question asked in the exams. The important questions are mostly framed by referring to previous years' question papers and the AP Intermediate 1st Year Model Question Papers, so there is always a good chance that they might appear in the final Intermediate exam. They will prove to be a useful study tool during exam preparation.
1. Write the condition that the equation ax + by + c = 0 represents a non-vertical straight line. Also write its slope.
2. Transform the equation 4x − 3y + 12 = 0 into slope-intercept form and intercept form of a straight line.
3. Find the ratio in which the point C (6,-17,-4) divides the line segment joining the points A(2,3,4) and B(3,-2,2)
4. Find the interval in which f(x) = x³ − 3x² is decreasing.
5. Find the angle between the lines joining the origin to the points of intersection of the curve x² + 2xy + y² + 2x + 2y − 5 = 0 and the line 3x − y + 1 = 0.
6. Find the equation of locus of a point, the sum of whose distances from (0, 2) and (0, -2) is 6 units
7. Show that the origin is within the triangle whose angular points are (2,1), (3, -2) and (-4, 1)
8. Show that the line joining the points A(6, −7, 0) and B(16, −19, −4) intersects the line joining the points P(0, 3, −6) and Q(2, −5, 10) at the point (1, −1, 2).
9. Find the derivative of tan 2x from the first principles
10. Find the orthocentre of the triangle whose vertices are (5,-2), (-1,2) and (1,4)
11. Find the cube root of 37 − 30√3.
12. Find the area: of the triangle formed with the points A(1, 2, 3), B (2, 3, 1) and C (3, 1, 2) by vector method.
13. If f : A → B and g : B → C are bijections, then prove that gof : A → C is also bijection.
14. If A + B + C = 180°, then show that sin 2A − sin 2B + sin 2C = 4 cos A sin B cos C.
15. Find the value of x, if the slope of the line passing through (2, 5) and (x, 3) is 2.
16. Find the angle between the planes 2x-y+z=6 and x+y+2z=7
17. A (2, 3) and B (3, 4) be two given points. Find the equation of the Locus of P, so that the area of the Triangle PAB is 8.5 sq. units.
18. Find the points on the line 3x − 4y − 1 = 0 which are at a distance of 5 units from the point (3, 2).
19. Find the derivative of sin 2x from the first principle.
20. A wire of length l is cut into two parts which are bent respectively in the form of a square and a circle. Find the lengths of the pieces of the wire, so that the sum of the areas is the least.
21. Find the slopes of the lines x + y = 0 and x − y = 0.
22. Find the derivative of cot x from the first principle. |
We have been learning about 2D and 3D shapes and how they are different. We discussed the different vocabulary we would use when describing the shapes. The children then drew a shape dragon out of 2D shapes, labelling each shape. They then created their own 3D shape dragon. We had great fun creating and naming our dragons! |
21st BCS Civil Engineering Written Questions and Solutions –
Question 1(a): Write down the function of ballast in a railway track.
What is Ballast?
Ballast forms the foundation of a railway track and distributes the load to the underlying layers. Generally, crushed rock or gravel is used as ballast material. The ballast covers the sub-grade of the railway track and spreads the load from the sleepers to the layers beneath.
Functions of Ballast:
- To provide a hard layer for the sleepers to rest on.
- To distribute the loads.
- For easy maintenance of railway track without disturbing the underlying layers.
- To prevent the growth of plants and vegetation etc.
- To hold the sleepers in position during the train passing.
- To maintain the lateral stability of the railway track.
- To ensure proper drainage of water.
Question 1(b): Define yard. Discuss the important points that need to be considered in designing a marshalling yard.
A yard is an area containing a complex series of railway tracks, together with sheds for maintaining, repairing, storing and joining rail coaches and locomotives.
Marshalling yards are of three types:
- Hump Yard
- Gravity Yard
- Flat Yard
Important factors to be considered when designing a marshalling yard:
(If anyone knows, please contribute the answer to this section in the comment box.)
Question 1(c): Design the rate of super-elevation for a curve of radius 400 m and a speed of 80 km/h. Also check the coefficient of lateral friction. (G. C. Singh, p. 167)
For mixed traffic conditions, super-elevation e = (0.75V)²/(127R) = (0.75 × 80)²/(127 × 400) = 0.0709, i.e. about 1 in 14.1. Since this exceeds the maximum super-elevation of 0.067 (1 in 15), provide e = 0.067.
Check the coefficient of lateral friction: f = V²/(127R) − e = 80²/(127 × 400) − 0.067 = 0.126 − 0.067 = 0.059.
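The same check can be sketched in code. This is only a sketch of the mixed-traffic formula quoted above (super-elevation designed for 75 per cent of the design speed, capped at 0.067, with an allowable lateral friction of 0.15); the function and variable names are my own.

    def superelevation_check(speed_kmph, radius_m, e_max=0.067, f_max=0.15):
        e_required = (0.75 * speed_kmph) ** 2 / (127 * radius_m)   # mixed-traffic requirement
        e_provided = min(e_required, e_max)                        # cap at the maximum allowed
        f = speed_kmph ** 2 / (127 * radius_m) - e_provided        # lateral friction demanded
        return e_required, e_provided, f, f <= f_max

    e_req, e_prov, f, safe = superelevation_check(80, 400)
    print(f"e required = {e_req:.4f}, e provided = {e_prov:.3f}, f = {f:.3f}, safe = {safe}")
    # e required = 0.0709, e provided = 0.067 (1 in 15), f = 0.059 < 0.15, so the design is safe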
As this value is less than the safe limit of 0.15, the design is safe with a super-elevation of 1 in 15 (answer). |
Updated: Sep 18, 2022
What advice do we give to parents about how fast a pupil should read? What advice do we give to pupils? How does the speed of reading affect cognition, comprehension and the transfer of what we read to long-term memory? How many books should pupils read in a month? How many hours need to be devoted to reading to meet this target? What happens when pupils fall below an optimum reading speed?
I am particularly interested in speed because of the importance of the QUANTITY of reading, and because speed is a LIMITING FACTOR for too many pupils, particularly disadvantaged pupils. I am not particularly interested in speed reading – and this blog is not about that! I hope it is of particular interest to reading leaders, KS2 teachers, headteachers, colleagues who set up interventions, and parents. Rather than dismiss speed – build a deeper understanding.
We know that the quantity of reading makes a difference to the progress pupils make in reading comprehension. Therefore, the number of books consumed by a child is important and parents and educators should do all they can to encourage pupils to read widely and often. An extensive diet of reading is important for all sorts of other reasons, such as enjoyment, escapism, social and emotional wellbeing, not to mention the benefits reading has to our knowledge base and views of the world. Pupils benefit from extensive reading. This would suggest, therefore, that being able to read quickly means that you would be at an advantage, because you can 'eat the text' at a faster rate and consume more than someone who reads slowly. This is on the whole an ever-increasing circle, the faster you read, the more you can consume, and the more you consume, the faster your pace of reading becomes.
Reading volume is defined as the combination of time students spend reading plus the number of words they actually consume as they read (Allington, 2012). This combination affects everything from students’ cognitive abilities to their vocabulary development and knowledge of the world (Cunningham & Zibulsky, 2013).
Our slowest readers are often asked to have two or three times the stamina of pupils who read at a more typical rate. For example, Harry Potter and the Philosopher's Stone by JK Rowling is 76,944 words. A child who reads at 90 words per minute needs just over 14 hours to finish it; by comparison, a child who reads at 160 words per minute needs about 8 hours. Think about the implications of this for in-class and out-of-class reading.
A teacher asks the class to read a piece of text as part of a geography lesson. It is 600 words long. The pupil reading at 160 wpm takes 3 minutes 45 seconds; the pupil reading at 90 words per minute takes 6 minutes 40 seconds. This is one of the reasons why the lesson quite often moves on before the slower reader gets to the end of the text (and this does not take into account having to read more slowly than their normal rate because they find comprehension of the text difficult). Over the course of many lessons this affects learning and pupil confidence. Reading speed has an impact on far more than just reading!
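The arithmetic behind these examples is simple enough to sketch; the word counts are those quoted above and the function name is my own.

    def reading_time_minutes(word_count, words_per_minute):
        # Time needed to read a text at a given rate
        return word_count / words_per_minute

    for wpm in (90, 160):
        novel_hours = reading_time_minutes(76_944, wpm) / 60   # Harry Potter word count
        passage_minutes = reading_time_minutes(600, wpm)       # 600-word geography text
        print(f"{wpm} wpm: novel = {novel_hours:.1f} h, 600-word passage = {passage_minutes:.1f} min")
    # 90 wpm: roughly 14.2 h and 6.7 min; 160 wpm: roughly 8.0 h and 3.8 min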
Quantity of reading impacts on vocabulary, the ability to be an effective writer, levels of comprehension, knowledge of the world. This plus ease of word reading is linked to freeing up working memory, better performance in examinations and even improved performance in subjects such as mathematics. Ease of word reading also impacts on pupils' confidence levels and self-efficacy levels. If pupils are operating below minimum speeds it has significant implications. Join me in a quest to ensure all our pupils can read easily. I hope this blog proves useful in raising points for professional discussion.
Most adults read at about 250 words per minute. If you read often, this might be 300 words per minute. Perhaps you would like to time yourself? Reading aloud is slower. This is for two reasons - it is physically difficult to read aloud at the same pace as silent reading and reading aloud usually needs to be at a pace that is comfortable for the listener. Most audio books and presenters speak at about 160 words per minute, perhaps a little slower than you might typically read aloud yourself. Children read at a slower rate, which increases throughout their time at school. By the end of Year 2 (age 7), we would expect a child's reading rate to be about 90 words per minute (below 60 would be a concern), by Year 4 we might expect that pupils achieve around 140 words per minute (and there is some research that would suggest below 100 in Year 4 would cause pupils frustration academically), and by Year 6 (age 11) around 160-180 words per minute.
There is no 'exact science' on speed for a particular year group, but pulling together different pieces of research, information and classroom experience we can utilise these figures. We can also use our professional judgement. A child reading just under 160 words per minute in Year 6 may benefit from increasing reading rate slightly but word reading is unlikely to be a limiting factor. We might consider what we know about the child's overall academic performance and independence/confidence/self-efficacy before making a decision about if they need to improve word reading speed. However, if the child has fallen below 100wpm in Y6, it is likely not only to be impacting on comprehension but also other aspects of education, such as writing performance. At the other end of the spectrum, it is also not about speed reading. For a child reading at 250 words per minute in Year 6, we might be concerned about comprehension, whether the child varies their reading rate, and accuracy levels - which could all be checked.
I am sure that many teachers and parents would find information about typical reading speeds to be information useful. For example, are parents reading aloud to their children at a pace that is too fast for comprehension? Are they expecting their child to finish reading a novel too quickly? If the adult and the child have decided to read the same novel (a copy each), the adult will need to keep in mind that they will read at a faster rate than their child and should take steps to ensure that it does not become an uneven race to finish! If you are reading a popular novel, you will probably be able to find it on Audible and it will state how long the audio version takes in hours and minutes (which, as we have stated, will typically be at 160 wpm) so from this you can extrapolate how long is should take children/adults to read a novel. If you want to finish the novel in a week, it would enable you to work out how many minutes per day would need to be devoted to reading.
Teachers in all year groups should monitor word reading rates and accuracy levels. They should be mindful of approaches they can take to ensure their class continues to make increments in word reading levels throughout primary and secondary, e.g. modelling reading; pupils having copies of the class novel - not just the teacher reading aloud; opportunities for paired reading / silent reading / choral reading; parts of lessons which examine prosody with opportunities for repeated practice; performance poetry; in-class activities which are known to strengthen word reading such as repeated reading exercises; word study - e.g. suffixes, spelling strategies; reading aloud new vocabulary - see orthographic mapping information; time devoted to independent reading. (If lots of pupils are operating below optimum levels in KS2, seek to examine how classroom activities are contributing to word reading. It is also important to consider how teacher beliefs and knowledge of word reading may be influencing lesson design and how much time / significance is placed speed and fluency. (Get in touch if you would like more help on this issue.)
Which children might need an intervention to increase word reading rate? This is particularly a question to ask from mid-way through Year 2 and upwards throughout a child's school career. There are simple tests that can be carried out for reading speed and accuracy levels that will also provide useful diagnostic information for teachers.
Quantity of text consumed and vocabulary
One of the benefits of high volumes of reading is the impact this has on vocabulary development. There are two components of vocabulary to consider: one is breadth of vocabulary (the number of words known); the other is depth of vocabulary (different meanings for a word that depend on the context in which the word is being used). For example: If I said to you the word 'red', you might immediately think of the colour red. However, if I said the 'boy was red faced' you might think of something other than the colour red, and if I said 'we had the red carpet treatment' you might think of something else, or if I posed a question such as 'were lady Macbeth's hands as red as those of her husbands?' you would not simply be thinking of the colour red. When children read high volumes of text they are more likely to be exposed to different ways in which a word is used. Words in context make a difference to how we interpret their meaning. Children who read widely have broader and deeper vocabularies. There is empirical evidence that, for older children and adults, much learning of new words occurs through exposure to written texts (Nagy, Herman, & Anderson, 1985; Sternberg, 1987). There are many positive studies (e.g. Cain and Oakhill, 2011; Kempe, Eriksson-Gustavsson, and Samuelsson, 2011). One of the reasons for this is that print material expose pupils to words that are less frequently used in spoken everyday language (such as abrasive, omnipresent, superfluous, stipulation), including words that are perhaps more associated with bygone eras (such as hearth, wireless and stove). Reading is one of the best ways to increase vocabulary, particularly if the reading diet includes fiction and non-fiction books. Subject knowledge is a key ingredient into acquiring vocabulary, and therefore parents and educators should encourage children to read a range of different texts, including non-fiction.
Most common words:
Whilst it is an American list, this document lists some of the less frequently used words with synonyms.
For those who are really keen on word frequency lists... http://ucrel.lancs.ac.uk/bncfreq/flists.html
Most EYFS and Year 1 teachers will be familiar with the 100, 300, 800 and 1000 most frequently used words.
300 most common words
1000 most common words
If most Year 6 pupils read at 140-160 words per minute (with many reading at 180+), we can set the number of words likely to be read against the number of minutes committed to reading, with and without an extra 15 minutes of school reading a day. You can see that 15 minutes a day in class and very little home reading probably exposes pupils to less than a million words per academic year (less if they read slowly). For a pupil who is an avid home reader this is likely to be in excess of three million words per academic year.
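A rough back-of-the-envelope version of those figures, assuming a 190-day school year and a reading rate of 160 words per minute (both assumptions are mine, chosen to match the numbers quoted above; the function name is my own):

    def words_per_year(wpm, minutes_per_day, days):
        return wpm * minutes_per_day * days

    school_only = words_per_year(160, 15, 190)                    # 15 minutes a day in class
    avid_reader = school_only + words_per_year(160, 60, 365)      # plus an hour a day at home
    print(f"School reading only: about {school_only:,} words a year")     # ~456,000
    print(f"With an hour of home reading: about {avid_reader:,} words")   # ~3,960,000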
These figures illustrate how much home reading (or reading outside the normal classroom day) matters. The child who reads for an hour a day at home will be exposed to a wider vocabulary and in more contexts. Of course, I am not suggesting that this be forced reading time! This has to come from a place of 'desire to read'. Pupils who are motivated to read bring something extra to the reading process, e.g. concentration, perseverance, a desire to understand the text, a willingness to engage in thinking while reading. Intrinsic motivation is therefore an important factor. Desire to read is enhanced by being fluent and having good comprehension skills (which improve the more you read). The chicken and the egg! Do pupils and parents know how much the quantity of reading adds to vocabulary development? (And other high-impact strategies, such as expanding vocabulary via root words, prefixes, suffixes; etymology of words; word games; using new words, synonyms, antonyms, high-frequency words; engagement in conversations.) Do they know how much reading quantity adds to comprehension skills?
For younger children, vocabulary developments tends to be higher if there has been
1) repeated reading of the same book by the adult;
2) the opportunity for children to join in with the print;
3) discussion of new / interesting vocabulary after reading.
There is certainly a lot of evidence that suggests analytical talk around the text is significant for pupils making progress in word reading, vocabulary development and comprehension. Book talk engaged in by parents has been found to be variable, and schools can support parents in getting the most out of book talk by providing advice booklets, video clips and workshops.
I wonder how many parents would think about combining fiction and non-fiction texts at home in order to support vocabulary development (and reading comprehension). For example, reading a non-fiction book about rivers would expand a child's knowledge of rivers and vocabulary associated with rivers. If the child then reads a book such as The Wind in the Willows by Kenneth Grahame, the child will find it easier to read as they will already be familiar with some of the river terms, and they are more likely to be able to visualise the scenes, particularly if the adult can also provide their child with the opportunity to see a river first hand! The fiction and non-fiction vocabulary support and complement each other. Reading easy non-fiction books introduces children to unfamiliar terms and content that in turn makes it easier for them to digest harder books on the same topic. Sequencing of texts is therefore very beneficial to the reading and learning process. Is this knowledge something that parents could take advantage of? Could teachers put sets of non-fiction and fiction together for home learning packs? Could teachers provide a sequence of books on the same topic for home learning - starting with images (no text), basic books, more advanced texts? Could suggested title combinations be sent home? How is text combining and text sequencing being taken into account when planning lessons and designing schemes of work?
What impacts on the speed of reading?
Known vocabulary (it is faster to read words that you are familiar with).
The ability to discern different sounds in word. Pupils who have difficulty associating sounds with letters might have difficulty learning to read. It is for this reason that it takes a special set of teaching (and learning) skills for a hearing-impaired child to learn to read. Strong phonics programmes support the development of early reading.
Eye tracking, eye movements - the ability to read ahead, track the words from left to right. The speed at which the eye can move and flick between sections of text. Any impairments to sight can slow down reading speed. There may be exercises pupils need to complete to assist with eye movements - this should come under the advice of an expert in this field, e.g. the optician, eye specialist at a hospital.
Enjoyable practice has a great deal of influence on reading speed and comprehension. The more children read, usually the better their reading rates. Enjoyable practice should include both silent reading and reading aloud.
Types of practice (possible interventions are described later in this blog post) can increase rates of reading and fluency. Reading aloud to a real person (or a live animal, such as a dog - more so than reading to a puppet/toy although this is also beneficial) supports word reading rate and accuracy levels (LeRoux, Swartz, & Swart, 2014)
Those reading at very fast levels are not reading every word. They are scanning and skimming. They are reading in chunks and often visualising what they read as if it is a movie playing. Some high-speed readers are visualising words rather than reading each word. I am not trying to encouraging this type of reading, other than it might sometimes be useful if scanning a page of a nonfiction to see if the content is relevant and useful to read at a deeper level.
The readers mental cognition speed impacts on reading (or should - more on this later). More complex texts usually lead readers to slow down their rate of reading in order to ensure that comprehension levels are maintained. (And we do want pupils to be in a position to vary their reading pace to maintain understanding or to achieve different types of goal - but it should be that, choice).
The size of text, layout and font styles can impact on reading speed.
Needing to use a tracker or a finger to track the text slows down reading rate. Most children will eventually acquire the skills to read without needing this approach. However, some types of finger pointing are used in speed reading. This is referred to as 'meta guiding'. There are specific programmes on speed reading, mostly aimed at adults.
Familiarity with the text type.
The ability of pupils to read with sentences in mind rather than words in mind helps to speed up reading rate. Pupils reading rate might also improve with training that focus on 'seeing' or 'reading' several words at once.
Moving from reading aloud to silent reading (as silent reading, as explained easier is faster than reading aloud).
(Note: if you wish to assess a pupil's reading rate, try to test them on more than one passage of text (usually for 1 minute of reading time). Pupils' reading rate (and fluency - e.g. prosody) should be higher for texts that are at their current level of decoding. When testing pupils, select a text that is year-group appropriate (perhaps from a reading scheme, a text taken from a book of reading comprehension pieces, or text in a SATs paper). What is their reading rate and accuracy level? Generally, it is recommended that pupils are able to read 90% of the words for the text to be at the right level for them. Children who recognise less than 90% of the words in a text can generally not read the text productively without a lot of support (the frustration level). When conducting reading rate activities, it is also good to count errors. These are substitutions, omissions, insertions, self-corrections, and help provided by the teacher after a 5-second hesitation. It should be noted that accuracy rates naturally improve during KS1 and should be above 90%, and preferably above 95%, particularly as pupils move up through school. You can therefore measure the impact of any intervention in terms of reading speed, accuracy and other elements of fluency such as prosody.)
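For anyone setting up such a check, the scoring is straightforward to sketch. Conventions vary; this version treats self-corrections and teacher help as errors, as described above, and the function and variable names are my own.

    def oral_reading_scores(words_attempted, errors, seconds):
        words_correct = words_attempted - errors
        wcpm = words_correct * 60 / seconds        # words correct per minute
        accuracy = words_correct / words_attempted
        return wcpm, accuracy

    wcpm, accuracy = oral_reading_scores(words_attempted=152, errors=6, seconds=60)
    print(f"{wcpm:.0f} words correct per minute, accuracy {accuracy:.0%}")
    # Below roughly 90% accuracy the text is likely at the pupil's frustration level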
Interventions aimed at improving rates of reading, accuracy and fluency:
EYFS and Y1:
FOR EARLY YEARS, pupils need to hear lots of stories being read by adults. They also need the opportunity to join in familiar or repeated phrases. Younger pupils can engage in 'echo reading' where they repeat back modelled phrases and sentences as part of enjoying a whole class text. They can hear the same story multiple times. Hearing lots of different adults model reading can be an advantage. Pupils can also start to 'listen along' as they read, e.g. use of headphones and a recorded story, providing the opportunity for differentiated texts to be used - good for reading centres. If you buy multiple copies of the book and a headphone splitter, group sessions can be set up.
Check for automaticity in knowledge of the alphabet - pupils who can visualise the letters and distinguish between letters are better placed for reading and writing. High-impact strategies therefore also include adults helping children learn letter names, distinguish between lower and uppercase letters, letter-sound and sound-letter practice including be able to visualise letters when the name of the letter is said aloud. A set of plastic letters (both upper and lowercase can be helpful here). It is important to emphasis careful observation to help pupils really look at the differences between letters. You can find out more about which letters are more/less frequently known in my handwriting blog. A set of rainbow alphabet charts can be downloaded free - see signup form at the end of the blog - for assessing current knowledge, working on gaps and improving writing as well as reading.
Strong phonics programmes and phonics interventions are of course essential. For some pupils the sessions include too many moving parts and too many different sounds, particularly if working memory is an issue. If pupils have auditory processing issues (which is not just about hearing) they need more time to process speech and form sound - although it is often milliseconds, it makes it hard for them to get the most from normal class phonics sessions. Consider which learners are struggling with reading and seem lost in a phonics session and do not join in as much with speaking elements or appear reliant on following on from other pupils - they move their lips slightly after everyone else. These pupils are likely to need supplementary sessions focusing on single sounds 1:1 e.g. hearing and saying and writing the split digraph i-e (and only that one in a session). The discussion about this is more than can be undertaken here, but what I would highlight here is the opportunities to repeatedly hear phonics modelling (with and without visuals) in simple, separated chunks. Pupils need access to recordings (audio/video) that pupils can repeatedly listen to whilst looking at the associated visual cues. Pupils can record their attempts on a talking post card or ipad (benefits self-assessment) or to an adult (benefits expert feedback) - some pupils need far more repetition of the modelling + many more opportunities to say the sounds aloud than is possible in class phonic sessions or even in an intervention. Using technology allows for personalised learning in school and out of school and enables the child to repeatedly listen (or record themselves) without necessarily always needing an adult to be physically with the child.
From Year 2 upwards:
Several researchers have found that repeating the same passage of text aloud until a level of fluency is achieved improves rates of reading, and that improvements transfer to new pieces of text (Dowhower, 1986; Herman, 1985; Taylor et al., 1985; Schreiber, 1980). The pieces of text used in the intervention need to become more sophisticated over time for progress to be made. Dowhower found that repeated readings improved reading rate, accuracy, comprehension and prosody (expression, pausing appropriately, responding to punctuation, correct emphasis). Pupils gain confidence and produce a 'best rendition'.
A related technique is repeated 'listening-while-reading' texts. The pupil reads the text whilst listening to an adult modelling a fluent rendition (preferably more than once). Several studies have found this to be effective in supporting struggling readers. An advantage of the 'listening-while-reading' is that is can be completed by the teacher or via a recording of the passage being read, enabling more pupils to be part of an 1:1 intervention group as it is 1 pupil : 1 set of headphones and a wider range of passages can be recorded, again, enabling more pupils to be targeted. (If you would like to know more on this subject, read the article 'Effects of repeated reading and listening-while-reading on reading fluency' by Timothy E Rasinski, Journal of Educational Research 1990).
Pupils can also benefit from repeated readings with teacher feedback - the sentence or short section of text for older pupils is read by the pupil, the teacher provides feedback at the end of the piece of text, the pupil immediately re-reads. The pupil reads the next sentence/ section and repeats the process. It is useful to read the whole piece as one fluid text at the end of the session.
Timed practice (how many words are read in a set period of time - which is tracked) has been shown to positively impacted on reading rates. It does sound like a harsher process, but it has been proven to have positive results. Perhaps because it brings reading speed into the forefront of the pupil's mind and provides an opportunity to practice. Read for 1 minute, feedback, read again.
Below is a summary of high-impact, evidence-based interventions that support word reading. Please get in touch if you would like to discuss this (or any other element in the blog) - I offer 90-minute professional discussion sessions as well as bespoke training.
(You can download a PDF version of this by completing the form at the end of the blog).
Text choice for the interventions above: Where possible, texts selected for interventions should be interesting and motivating to read. It would be useful if the texts linked together, e.g. texts in the same topic or stories that are sequenced. This will help to make interventions into authentic reading opportunities. If texts selected relate to the curriculum, the intervention might kill two birds with one stone. Interventions would therefore ideally include texts that are both fiction and non-fiction. The level of challenge should move on with the child's development, so as to always offer opportunities for progression. If starting with texts below the child's year group, increase the challenge of the text as the intervention proceeds.
Implications for wider practice. In whole class reading sessions pupils need to: see the text, read the text and engage with the text. Teachers should take all steps possible for this to happen when running whole class reading lessons. It is not enough for the teacher to be the only one with a copy of the text when reading aloud to the class. Pupils need a copy of the text or at least one book between two pupils. Pupils must be able to engage with the text or whole class novel being read. They need opportunities to follow along as the teacher reads, to join in with echo reading, to read the text aloud, to engage in choral reading, to read to a partner, to engage in silent reading etc. It appears that the further one moves away from activities directly related to the reading process, the lower the correlation between the activity and reading achievement. An interesting point - some studies have shown that regular reading aloud by the teacher in class, e.g. 20 minutes every day, tends to encourage children to request that adults at home buy books / read to them, and tends to eventually increase independent reading. Good role models are needed! And when this isn't possible, try to provide pupils with audio books and a copy of the text.
It is interesting to consider what parents could take away from this. For example, the benefits of both the parent and the child having a copy of the text being read, particularly for older children. Recently, a friend of mine has been reading to her grandson remotely (due to self-isolation). She has reported more success when they both had a copy of the text (since it is hard to share when physically remote from each other). The camera can focus on her and she can, from time to time, hold up the book and importantly the same is true for the child. They are both easily able to see the print and the pictures and therefore the experience is more enjoyable. The above table of interventions might also support parents in understanding why a child may be bringing home a text that they have already read in class, and promotes not only a child reading to an adult, but the importance of the adult reading to the child. The adult reading aloud to the child also allows the child the chance to access books that are beyond their current level of independent reading. Echo reading, where the adult models the sentence or section of text, is a strategy that many parents would be able to implement. Audio books for home use might also help pupils with listening-while-reading (as long as both the printed book and the audio book are provided). - We are not saying that 'sharing a book' doesn't have a place, it certainly does, but perhaps a little of both strategies would be advantageous for the development of reading skills, particularly for pupils struggling with word reading.
NIM: Neurological Impress Method
"NIM (Neurological Impress Method), developed by Heckelman (1969), is a multisensory oral reading fluency intervention for struggling readers that involves paired choral reading. NIM was designed for “impressing mature reading behaviors upon students” (Eldredge, 1988, p. 36). Initial studies were conducted in clinical settings with an adult and a struggling reader, sitting side by side, simultaneously reading aloud at a rapid rate using challenging texts. The voice of the adult was directed toward the student’s ear. The adult used a finger to track the spoken words. This method was designed to expose struggling readers to effective reading processes and to “break the phonics-bound condition that occurs in many children who have had intensive phonics training and still have not learned to read fluently” (Heckelman, 1969, p. 281). According to Eldredge (1988), “repeated exposure to words frequently used in print probably improves the students’ sight recognition of such words, which, in turn, probably improves reading comprehension” (p. 41). Heckelman (1969) tested NIM with 24 adolescents, who achieved a mean increase of 1.9 grade levels after 7.5 hours of practice over 6 weeks. The range of increases in grade levels among participants was 0.8 to 5.9 grade levels, although the levels of text difficulty were not specified.' Eldredge and Butterfield (1986) modified NIM for whole class reading practice by using student pairs—a strong reader paired with a weaker reader—who sit side by side while simultaneously reading aloud from the same book. Similar to the original NIM process, lead readers touch each word when read, running their fingers smoothly under the words. The lead readers read at a normal speed as assisted readers say aloud as many words as they can. Both readers look at each word as it is read. Calling the process “dyad reading,” Eldredge and Butterfield found that the paired oral reading increased student achievement and improved struggling students’ attitudes toward reading. Dyad reading allowed students to effectively access and comprehend more challenging texts and increased the volume and diversity of texts read (Eldredge, 1988)." Extract from: The effects of dyad r |
In phylogenetics, an autapomorphy is a distinctive feature, known as a derived trait, that is unique to a given taxon. That is, it is found in only one taxon and not in any other taxa, including outgroup taxa and even those most closely related to the focal taxon (which may be a species, a family or, in general, any clade). It can therefore be considered an apomorphy in relation to a single taxon. The word autapomorphy, first introduced in 1950 by the German entomologist Willi Hennig, is derived from the Greek words αὐτός, aut- = "self"; ἀπό, apo = "away from"; and μορφή, morphḗ = "shape".
Because autapomorphies are present in only a single taxon, they convey no information about relationships and therefore cannot be used to infer phylogenetic relationships. However, autapomorphy, like synapomorphy and plesiomorphy, is a relative concept that depends on the taxon in question: an autapomorphy at a given level may well be a synapomorphy at a less-inclusive level. An example of an autapomorphy can be seen in modern snakes. Snakes have lost the two pairs of legs that characterize all of Tetrapoda, whereas the taxa closest to Ophidia, as well as their common ancestors, all have two pairs of legs. The absence of legs is therefore an autapomorphy of Ophidia.
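The point that an autapomorphy is confined to a single taxon, and is always relative to the taxa sampled, can be made concrete with a small character table. The sketch below is a minimal illustration under assumed data: it counts, for each derived state, how many sampled taxa show it, and treats states confined to one taxon as autapomorphies of that taxon. The taxon names and character labels are hypothetical placeholders, not a real data set.

```python
# Hypothetical presence table of derived character states per taxon.
from collections import Counter

derived_states = {
    "Ophidia":    {"loss_of_limbs", "amniote_egg"},
    "Lacertilia": {"amniote_egg"},
    "Crocodylia": {"amniote_egg"},
}

# Count how many taxa show each derived state.
state_counts = Counter(s for states in derived_states.values() for s in states)

# States seen in exactly one taxon are autapomorphies of that taxon
# (relative to this sample); shared states are candidate synapomorphies.
autapomorphies = {
    taxon: sorted(s for s in states if state_counts[s] == 1)
    for taxon, states in derived_states.items()
}

print(autapomorphies["Ophidia"])  # ['loss_of_limbs']
```

Note that "loss_of_limbs" is flagged only because no other sampled taxon shares it; adding a limbless lizard to the table would turn it into a shared (homoplastic or synapomorphic) state, illustrating why the label depends on the taxa in question.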
The autapomorphic species concept is one of many methods that scientists might use to define and distinguish species from one another. This definition assigns species on the basis of the amount of divergence associated with reproductive incompatibility, measured essentially by the number of autapomorphies. This grouping method is often referred to as the "monophyletic species concept" or the "phylospecies" concept and was popularized by D.E. Rosen in 1979. Within this definition, a species is seen as "the least inclusive monophyletic group definable by at least one autapomorphy". While this model of speciation is useful in that it avoids non-monophyletic groupings, it has also drawn criticism. N.I. Platnick, for example, considers the autapomorphic species concept inadequate because it allows for the possibility of reproductive isolation and speciation while revoking the "species" status of the mother population. In other words, if a peripheral population breaks away and becomes reproductively isolated, it would conceivably need to develop at least one autapomorphy to be recognized as a different species. If this can happen without the larger mother population also developing a new autapomorphy, then the mother population cannot remain a species under the autapomorphic species concept: it would no longer have any apomorphies not also shared by the daughter species.
Phylogenetic similarities: the following terms are used to describe the different patterns of ancestral and derived character (trait) states, and how they relate to synapomorphies.
- Homoplasy – in biological systematics, a trait that has been gained or lost independently in separate lineages during evolution. Such convergent evolution leads to species independently sharing a trait that differs from the trait inferred to have been present in their common ancestor.
- Apomorphy – a derived trait. An apomorphy shared by two or more taxa and inherited from a common ancestor is a synapomorphy; an apomorphy unique to a given taxon is an autapomorphy.
- Synapomorphy/Homology – a derived trait that is found in some or all terminal groups of a clade, and inherited from a common ancestor, for which it was an autapomorphy (i.e., not present in its immediate ancestor).
- Underlying synapomorphy – a synapomorphy that has been lost again in many members of the clade. If lost in all but one, it can be hard to distinguish from an autapomorphy.
- Autapomorphy – a distinctive derived trait that is unique to a given taxon or group.
- Symplesiomorphy – an ancestral trait shared by two or more taxa.
- Reversal – the loss of a derived trait present in an ancestor and the re-establishment of a plesiomorphic trait.
- Convergence – independent evolution of a similar trait in two or more taxa.
- Page RD, Holmes EC (14 July 2009). Molecular Evolution: A Phylogenetic Approach. John Wiley & Sons. ISBN 978-1-4443-1336-9. OCLC 609843839.
- Futuyma DJ (1998). Evolutionary Biology (3rd ed.). Sinauer Associates, Inc. p. 95.
- Appel RD, Feytmans E (2009). "Chapter 3: Introduction of Phylogenetics and its Molecular Aspects". Bioinformatics: a Swiss Perspective (1st ed.). World Scientific Publishing Company.
- Calow PP (2009). Encyclopedia of Ecology and Environmental Management. John Wiley & Sons. ISBN 978-1-4443-1324-6. OCLC 1039167559.
- Forey PL (1997). History of the Coelacanth Fishes (1st ed.). Springer.
- Howard DJ, Berlocher SH (1998). Endless Forms: Species and Speciation (1st ed.). USA: Oxford University Press. ISBN 978-0-19-510901-6. OCLC 60181901.
- Bull AT (2004). Microbial Diversity and Bioprospecting. ASM Press.
- Platnick NI (2001). "From Cladograms to Classifications: The Road to DePhylocode." (PDF). The Systematics Association.
- Gauger A (April 17, 2012). "Similarity Happens! The Problem of Homoplasy". Evolution News & Science Today.
- Sanderson MJ, Hufford L (21 October 1996). Homoplasy: The Recurrence of Similarity in Evolution. Elsevier. ISBN 978-0-08-053411-4. OCLC 173520205.
- Brandley MC, Warren DL, Leaché AD, McGuire JA (April 2009). "Homoplasy and clade support". Systematic Biology. 58 (2): 184–98. doi:10.1093/sysbio/syp019. PMID 20525577.
- Archie JW (September 1989). "Homoplasy Excess Ratios: New Indices for Measuring Levels of Homoplasy in Phylogenetic Systematics and a Critique of the Consistency Index". Systematic Zoology. 38 (3): 253–269. doi:10.2307/2992286. JSTOR 2992286.
- Wake DB, Wake MH, Specht CD (February 2011). "Homoplasy: from detecting pattern to determining process and mechanism of evolution". Science. 331 (6020): 1032–5. doi:10.1126/science.1188545. PMID 21350170. Lay summary – Science Daily.
- Simpson MG (9 August 2011). Plant Systematics. Amsterdam: Elsevier. ISBN 978-0-08-051404-8.
- Russell PJ, Hertz PE, McMillan B (2013). Biology: The Dynamic Science. Cengage Learning. ISBN 978-1-285-41534-5.
- Lipscomb D (1998). "Basics of Cladistic Analysis" (PDF). Washington D.C.: George Washington University.
- Choudhuri S (2014-05-09). Bioinformatics for Beginners: Genes, Genomes, Molecular Evolution, Databases and Analytical Tools (1st ed.). Academic Press. p. 51. ISBN 978-0-12-410471-6. OCLC 950546876.
- Williams D, Schmitt M, Wheeler Q (2016-07-21). The Future of Phylogenetic Systematics: The Legacy of Willi Hennig. ISBN 978-1-107-11764-8. OCLC 951563305.
- Avise JC, Robinson TJ (June 2008). "Hemiplasy: a new term in the lexicon of phylogenetics". Systematic Biology. 57 (3): 503–7. doi:10.1080/10635150802164587. PMID 18570042.
- Copetti D, Búrquez A, Bustamante E, Charboneau JL, Childs KL, Eguiarte LE, et al. (November 2017). "Extensive gene tree discordance and hemiplasy shaped the genomes of North American columnar cacti". Proceedings of the National Academy of Sciences of the United States of America. 114 (45): 12003–12008. doi:10.1073/pnas.1706367114. PMC 5692538. PMID 29078296.
What is Marfan syndrome? Marfan syndrome is a genetic disorder that affects the body’s ability to make healthy connective tissue, which supports the bones, muscles, organs, and tissues in your body. The condition can affect different areas of the body, including:
- Bones, ligaments, tendons, and cartilage.
- Organs, such as the heart and lungs.
- Skin.
What is Sjögren’s syndrome? Sjögren’s syndrome is a chronic (long-lasting) disorder that happens when the immune system attacks the glands that make moisture in the eyes, mouth, and other parts of the body. The main symptoms are dry eyes and mouth, but the disorder may affect other parts of the body. Many people with Sjögren’s syndrome say they feel tired often (fatigue). They also may have joint and muscle pain. In addition, the disease can damage the lungs, kidneys, and nervous system.
What is epidermolysis bullosa? Epidermolysis bullosa is a group of rare diseases that cause fragile skin that leads to blisters and tearing. Tears, sores, and blisters in the skin happen when something rubs or bumps the skin. They can appear anywhere on the body. In severe cases, blisters may also develop inside the body. The symptoms of the disease usually begin at birth or during infancy and range from mild to severe.
What is vitiligo? Vitiligo is a chronic (long-lasting) disorder that causes areas of skin to lose color. When skin cells that make color are attacked and destroyed, the skin turns a milky-white color. No one knows what causes vitiligo, but it may be an autoimmune disease. In people with autoimmune diseases, the immune cells attack the body’s own healthy tissues by mistake, instead of viruses or bacteria. A person with vitiligo sometimes may have family members who also have the disease. There is no cure for vitiligo, but treatment may help skin tone appear more even.
Junior Control Insight 2.5
Using Computers to Control Machines - A New Approach
by Laurence Rogers
School of Education
Although physics education in secondary schools in England and Wales currently has a formal alliance with chemistry and biology, in many schools there are strong links between physics and technology. The teaching of electronics has long been a common feature of physics at advanced level, and in recent years computer control has emerged as a significant application of this technology. Until now the teaching of computer control has been dominated by the need to teach computer programming skills, using either LOGO-type languages or flow chart diagrams. Control Insight takes a newly developed approach that offers easy access to problem-solving activity and eliminates the need to learn a computer language or flow chart manipulations. The essence of the approach is to describe the problem in terms of 'systems', emphasising how information is passed between systems and focusing on how information is used for making decisions. The function of each system is described in normal English rather than a stylised computer syntax. Presented in a multimedia environment with animated graphics and sound, the new program encourages progression from simple problems suitable for pupils aged 8 through to more sophisticated problems suitable for pupils aged 15.
The National Curriculum for schools in the UK includes a programme of study for Information and Communication Technology (ICT), both in terms of developing pupils' capability for using the technology and in applying the technology in their learning of other subjects. The ICT programme of study specifies the 'use of ICT to measure, record, respond to and control events by planning, testing and modifying sequences of instructions'. The type of activity which fulfils this requirement has been developing for many years; it typically involves using the computer to control machines and has been an attractive aspect of ICT for physics teachers with an active interest in the applications of electronics.
To enable a computer to control an electrical device, an interface box is necessary. Pupils build models with low voltage (4 volt) motors, bulbs, relays, switches and sensors; these are connected through the interface to the serial port of the computer. To make the model work and perform a useful task, the computer needs a program which detects signals from the interface and switches the output devices on and off. The intelligence for such actions depends upon a series of instructions formulated by pupils and stored within the program. Various methods have been devised to enable pupils to create instructions. Some use direct programming techniques based on machine code, or languages such as LOGO or BASIC. In recent years graphical methods involving the manipulation of flow charts have also been developed. All these methods, however, suffer the disadvantage that a considerable amount of operational skill is required to create instructions for quite simple tasks. The training needed to acquire such skill is an unwanted distraction from the problem-solving activity which should be the main focus of attention. The new Control Insight program has explicitly addressed the need to simplify operational aspects of the control process; it offers a new metaphor for solving control problems which frees pupils to focus their thinking on the structure of the problem and its potential solutions.
The central feature of Control Insight is the representation of the components of the control system on the screen as functional blocks which are linked to show the passage of information signals. The blocks are assigned the properties of input, process and output units characteristic of control systems. The concept builds on the 'systems' approach developed for the teaching of electronics. Pupils select the blocks they require by dragging them from toolbars surrounding the 'System' window and linking them appropriately. A typical system is shown in Figure 1.
Figure 1. Control system for a model car park barrier.
The essence of a 'systems' approach is that the user does not concern himself with the details of how a block works; the description of the block is confined to what it does in terms of providing, receiving or processing a signal. Control Insight employs three types of block, representing the range of sensors, processes and output devices shown in Table 1.
Table 1. Types of components and processes used in Control Insight.
The process modules contain a key innovative feature of the program: pupils define each module's function by making choices in a properties dialogue box, and from these choices the program builds a sentence which describes the function. The sentences for all the process modules are displayed at the top of the window, so a script describing the actions of the system appears in plain English.
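As a rough illustration of the idea, and not Control Insight's actual implementation, a process module's plain-English sentence might be assembled from a handful of dialogue choices as sketched below; the parameter names and wording are assumptions made for the sketch.

```python
def build_rule_sentence(input_block, input_state, action, output_block):
    """Assemble a plain-English description of a process module from the
    options a pupil picks in a properties dialogue box."""
    return f"When the {input_block} is {input_state}, {action} the {output_block}."

# Hypothetical choices for a model car park barrier.
print(build_rule_sentence("pressure pad", "pressed", "switch on", "barrier motor"))
# -> When the pressure pad is pressed, switch on the barrier motor.
```

The appeal of this design is that each choice maps directly onto a readable clause, so the "program" the pupils build is also its own documentation.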
When pupils have completed their design in the System window, they can test how it works by switching to a simulation 'Run' mode. Sensor blocks may be activated by pointing and clicking, and output blocks are animated to show their on or off state. The system can be further edited until it performs the task in the desired manner. When pupils are satisfied with their design they can switch to the 'Connections' window which shows how to connect real sensors and devices to the interface. Finally the real model can be connected and controlled by the program.
We now have a complete view of the problem solving process: First, the problem has to be stated. Then pupils decide what sensors and output devices are needed in a solution. The solution needs to be described in a series of single sentence instructions for the computer and one process module is needed for each sentence. All the components are assembled on the screen and the process module properties are adjusted to give the appropriate sentences. The resultant control system is then tested on the screen. Thus the designing and testing processes are all conducted in a simulation environment on the screen in a manner which emulates industrial practice; only when the design is proven is the first prototype model built. The building, connection and control of a real model is the culmination of the process.
A further special feature is designed to make the system appeal to young pupils from age eight upwards: An alternative 'Scene' window is provided for building models, but, instead of using diagrammatic blocks, life-like pictures are used to represent the switches, sensors, motors, lights and other components. All these pictures are assembled in front of a background picture to create a complete scene. The picture comes to life when the program is set to 'Run' mode. All the components are animated and each makes a sound when activated.
Experience with the program so far has confirmed that, for all ages, pupils gain confidence rapidly in solving control problems. For the youngest pupils the picture-based context is motivating and simple to use. For older pupils, systems of great sophistication can be built. Teachers appreciate that many pupils can engage with the control technology without the need for peripheral equipment until the final stage.
- The National Curriculum for England: Information and Communication Technology. Department for Education and Employment, Sanctuary Buildings, London, 1999.
- Control Insight, Logotron, 124 Cambridge Science Park, Milton Road, Cambridge, CB4 4ZS UK, 2000.
Active Fire Detection
Active fires are located on the basis of the thermal anomalies they produce. The algorithms compare the temperature of a potential fire with the temperature of the land cover around it; if the difference in temperature is above a given threshold, the potential fire is confirmed as an active fire or "hot spot."
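This contextual test can be sketched in a few lines of code. The window size and the 10 K threshold below are illustrative assumptions, not the parameters of the operational MODIS/VIIRS algorithms, which use several spectral bands and more elaborate background statistics.

```python
import numpy as np

def detect_hotspots(brightness_temp, threshold_k=10.0, window=5):
    """Flag pixels whose brightness temperature (in kelvin) exceeds the mean
    of the surrounding window by more than threshold_k."""
    rows, cols = brightness_temp.shape
    half = window // 2
    hotspots = np.zeros((rows, cols), dtype=bool)
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            block = brightness_temp[i - half:i + half + 1, j - half:j + half + 1]
            # Background temperature: mean of the window excluding the centre pixel.
            background = (block.sum() - brightness_temp[i, j]) / (block.size - 1)
            if brightness_temp[i, j] - background > threshold_k:
                hotspots[i, j] = True
    return hotspots

# Example: a flat 300 K scene with one 330 K pixel is flagged as a hot spot.
scene = np.full((20, 20), 300.0)
scene[10, 10] = 330.0
print(detect_hotspots(scene)[10, 10])  # True
```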
EFFIS uses the active fire detection provided by the NASA FIRMS (Fire Information for Resource Management System).
MODIS Active fires
The MODIS sensor, on board the Terra and Aqua satellites, identifies areas on the ground that are distinctly hotter than their surroundings and flags them as active fires. The temperature difference between actively burning areas and neighbouring areas allows active fires to be identified and mapped. The spatial resolution of the MODIS active fire detection pixel is 1 km.
Additional information on the MODIS active fire product is available at https://earthdata.nasa.gov/what-is-new-collection-6-modis-active-fire-data
VIIRS Active fires
The VIIRS (Visible Infrared Imaging Radiometer Suite) sensor on board the NASA/NOAA Suomi National Polar-orbiting Partnership (SNPP) satellite uses algorithms similar to those used by MODIS to detect active fires. The VIIRS active fire product complements MODIS active fire detection and provides improved spatial resolution: the active fire detection pixel for VIIRS is 375 m. Additionally, VIIRS is able to detect smaller fires and can help delineate the perimeters of ongoing large fires.
Additional information on VIIRS active fire products can be found at https://earthdata.nasa.gov/earth-observation-data/near-real-time/firms/viirs-i-band-active-fire-data
The mapping of active fires is performed to provide a synoptic view of current fires in Europe and as a means to help the subsequent mapping of burnt fire perimeters. Information on active fires is normally updated 6 times daily and made available in EFFIS within 2-3 hours of the acquisition of the MODIS/VIIRS images.
When interpreting the hotspots displayed in the map, the following must be considered:
- Hotspot location on the map is only accurate within the spatial accuracy of the sensor
- Some fires may be small or obscured by smoke or cloud and remain undetected
- The satellites also detect other heat sources (not all hotspots are fires)
To minimize false alarms and filter out active fires not qualified as wildfires (e.g. agricultural burnings), the system only displays a filtered subset of the hotspots detected by FIRMS. To this end, a knowledge-based algorithm is applied that takes into account the extent of the surrounding land cover categories, the distance to urban areas and artificial surfaces, and the confidence level of the hotspot.
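A simplified sketch of such a rule-based filter is shown below. The attribute names and threshold values are assumptions made for illustration; the actual EFFIS rules and parameters are not reproduced here.

```python
def is_probable_wildfire(hotspot,
                         min_confidence=30,
                         min_vegetated_fraction=0.3,
                         max_artificial_fraction=0.5,
                         min_distance_to_urban_km=1.0):
    """Keep a hotspot only if its detection confidence is high enough, the
    surrounding land cover is mostly vegetated rather than artificial, and it
    is not too close to an urban area. `hotspot` is a dict with the
    (hypothetical) keys used below."""
    return (hotspot["confidence"] >= min_confidence
            and hotspot["vegetated_fraction"] >= min_vegetated_fraction
            and hotspot["artificial_fraction"] <= max_artificial_fraction
            and hotspot["distance_to_urban_km"] >= min_distance_to_urban_km)

detections = [
    {"confidence": 80, "vegetated_fraction": 0.9,
     "artificial_fraction": 0.0, "distance_to_urban_km": 6.0},
    {"confidence": 95, "vegetated_fraction": 0.1,
     "artificial_fraction": 0.8, "distance_to_urban_km": 0.2},
]
wildfires = [d for d in detections if is_probable_wildfire(d)]
print(len(wildfires))  # 1 – the second detection is discarded as a likely false alarm
```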
With the identify feature tool, key information attached to each active fire is provided, such as geographic coordinates, administrative district (commune and province) and the main land cover category affected.
Another source for the identification of active fires is FireNews: news items are selected from a large set of RSS feeds published by various forest-fire-related sites, and from news feeds filtered with appropriate keywords. Items from the selected feeds are then geolocated and published on the EFFIS web site, in the FireNews section.
Autism, as it has been defined, is a neurodevelopmental disorder that leads to several impairments, such as an inability to use language for self-expression and communication. Moreover, such children are often socially awkward and do not know how to begin a conversation, or how to continue one, in a social setting. They are generally aloof and oblivious to the feelings of the people around them.
Moreover, they may struggle to retain facts and figures. Autistic children generally develop obsessive and repetitive patterns of behavior, such as rocking or repeating words and phrases. Some facts about autism that all parents and caregivers should be aware of are as follows:
- Originally, the term “autism” was used to refer to adult schizophrenia. However, it was only in 1943 that the medical term was given the specific definition by which it is known and understood today.
- According to surveys and reports for the year 2012 published by the CDC (Centers for Disease Control and Prevention), 1 in 88 children is diagnosed with autism. The rate is almost 300% higher than it was in 2002.
- Researchers state that mutations on chromosome 16 may cause autism. The problem lies in the DNA region containing the “morpheus” gene, which emerged during the course of human evolution. Put very simply, the genes that contributed to the development of human intelligence may also be a cause of autism.
- There are no medical examinations, such as blood tests or scans, that can detect autism. Parents and doctors need to be vigilant and observe the behavioral traits of the child to determine whether he/she is autistic.
- Research has revealed that countries with higher levels of precipitation record a greater number of autism cases. In such places, levels of atmospheric pollutants are generally high, levels of vitamin D are rather low, and a more indoor, sedentary lifestyle may contribute to higher rates of autism.
- Autism is more common than AIDS, diabetes and childhood cancer put together.
- In identical twins, if one sibling is diagnosed with autism, the likelihood that the other will also be diagnosed rises to almost 90%.
Just as medical science has been unable to pinpoint the definite causes of autism, there is no cure either. The only option is for parents and caregivers to observe the child carefully in the early stages of development, as early detection and therapeutic intervention greatly help to reduce the harmful effects of this debilitating condition. Moreover, with regular therapy, the child can also develop linguistic and social skills that will help him/her adjust better to the society in which he/she lives.
A descriptive essay is considered extremely powerful in its single purpose: it stimulates the reader to feel and to imagine evocative smells, sensations, and sights. An impression lies at the very heart of a good descriptive essay; above all, it creates the atmosphere of your essay.
Descriptive Essay: Definition and Meaning
We share our moods and impressions through descriptive essay writing. The most important thing is to involve all possible sensory details: every sight, touch, smell, sound and taste must help the reader to see the same picture that the writer has in mind. It must be mentioned that descriptive essay writing requires avoiding repetitive sentence patterns. The writer should use figurative and vivid language, combine descriptive sentences smoothly, and avoid general words. The picture must be clear, not vague.
To Write an Effective Descriptive Essay You Must:
- Provide complete and intense description of the subject;
- Involve even the tiniest details if they are significantly important;
- Demonstrate the emotional quality of the subject;
- Show your own response to, and impression of, the subject;
- Avoid irrelevant elements;
- Observe every single aspect step by step.
In this manner, we see that a descriptive essay is based on detailed observation and characterization.
Descriptive Essay Structure
The main aim of a descriptive essay is not simply to describe a particular object, place, person or situation, but to make your reader see and feel the same things you do. In other words, you should try to reproduce your thoughts on paper. Naturally, you must write in accordance with a particular structure, which usually contains:
- Introduction. This part is supposed to explain why the author has chosen a specific object or person. The introduction comprises a strong thesis statement and must capture the reader’s attention from the first lines.
- Body. Here, the author pays more detailed attention to the main points. As a rule, each point is considered and discussed in a separate section. Usually, the body consists of three paragraphs:
- The first paragraph tells the reader about the object itself, its characteristics and the most distinguishing features. It gives a full and vivid picture through the smallest details of observation.
- The second paragraph portrays surroundings. In this section, you are free to use as many stylistic devices as you want. Your reader must feel the atmosphere of the environment you describe.
- The last paragraph refers to senses and emotions. You describe everything you can feel, see, hear, touch, and smell. Your task is to make the picture alive.
- Conclusion. The last stage is the conclusion. It emphasizes the importance of your description. In this part, you sum up your emotions, attitudes, and impressions.
Here are some helpful tips for structuring a descriptive essay properly:
- Compare a few topics and choose the subject of description. Try to find an interesting thing, place, or person or use some memorable experience. Avoid vague language. Use powerful words to describe all specific details;
- Try to find the most relevant sources related to the topic;
- Do not forget about sufficiency. Your essay must contain an adequate number of idioms, metaphors, clichés, etc.;
- Always go from general to specific;
- The strongest accent must be put on the descriptions of the subject and observation details.
To compose a brilliant essay, try to adhere to all these guidelines. Remember that facts and examples are less important than your ability as a writer to create a vivid picture. Make your readers feel and imagine, and do your best to give them a complete impression. Let them build a whole picture in their minds. Illustrate the most important details using all the human senses, bring your subject to life, and you will succeed.
Étang Saumâtre and Lago Enriquillo are saline lakes found in Hispaniola’s rift valley along the border of Haiti and the Dominican Republic. Since both lakes lie in a closed depression with no outflow, their levels are at the mercy of evaporation, rainfall variability, and runoff from the surrounding countryside. Three Landsat images from 1986, 2004, and 2012 show how dramatically the lake levels can fluctuate.
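A back-of-the-envelope water balance makes the point: with no outflow, any surplus of rainfall plus runoff over evaporation has nowhere to go except into a higher lake level. The numbers below are purely illustrative, not measured values for either lake.

```python
def annual_volume_change_m3(rain_m, evap_m, lake_area_m2, catchment_runoff_m3):
    """Volume change of a closed lake over one year: direct rainfall on the
    lake plus runoff from the catchment, minus evaporation from the lake
    surface. A positive result means the lake rises."""
    return rain_m * lake_area_m2 + catchment_runoff_m3 - evap_m * lake_area_m2

# Hypothetical wet year with increased runoff after deforestation upstream.
change = annual_volume_change_m3(rain_m=1.4, evap_m=1.8,
                                 lake_area_m2=3.5e8, catchment_runoff_m3=2.0e8)
print(f"{change:.2e} m3")  # positive, so the lake level goes up
```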
In recent years, the water levels have been rising due to increased rainfall, which has been made worse by increased runoff and sedimentation from the reduction of forests. These higher lake levels have flooded the towns and agricultural lands on the shores of the lakes and have occasionally blocked the road between Haiti and the Dominican Republic.
Landsat data are useful to scientists, managers, and policy planners as they study how natural variation, such as rainfall, and human changes, such as deforestation, can affect lake levels with often unexpected consequences.