So, which terms do I use?
Terminology, particularly as it relates to Indigenous peoples, can be tricky to navigate. A term that might be acceptable to some might be offensive to others. Because of this, many people do not feel confident using certain terms when referring to Aboriginal peoples. Fear of using the "wrong" word should never stifle important dialogue and discussions that need to be had.
By taking a moment to consider the history of certain terms, it is very possible to learn and be comfortable with which words to use in which contexts. We have compiled this guide to help inform your decisions on terminology.
Terms in this section:
First Nations | Inuit | Métis | Indian | Aboriginal | Indigenous | Native | Peoples (plural)
To capitalize or not to capitalize?
Why does terminology matter?
The history of relationships between the Canadian state and Aboriginal peoples is complex, and has oftentimes been paternalistic and damaging. As a result, terminology can represent something more than just a word. It can represent certain colonial histories and power dynamics. Terminology can be critical for Indigenous populations, as the term for a group may not have been selected by the population themselves but instead imposed on them by colonizers. With this in mind, one might understand how a term can be a loaded word, used as a powerful method to divide peoples, misrepresent them, and control their identity—what we can see today in Canada with "status" and "non-status Indians," the legally defined categories of people under the Indian Act.
On the other hand, terms can empower populations when the people have the power to self-identify. It is important to recognize the potential these words may hold— but it is also important and very possible to understand these terms well enough to feel confident in using them and creating dialogue. We have included several of these general terms below, although many Aboriginal people may prefer to identify themselves by their specific cultural group. As you will see, the most respectful approach is often to use the most specific term for a population when possible.
The term "Aboriginal" refers to the first inhabitants of Canada, and includes First Nations, Inuit, and Métis peoples. This term came into popular usage in Canadian contexts after 1982, when Section 35 of the Canadian Constitution defined the term as such. Aboriginal is also a common term for the Indigenous peoples of Australia. When used in Canada, however, it is generally understood to refer to Aboriginal peoples in a Canadian context. This term is not commonly used in the United States.
“First Nation” is a term used to describe Aboriginal peoples of Canada who are ethnically neither Métis nor Inuit. This term came into common usage in the 1970s and ‘80s and generally replaced the term “Indian,” although unlike “Indian,” the term “First Nation” does not have a legal definition. While “First Nations” refers to the ethnicity of First Nations peoples, the singular “First Nation” can refer to a band, a reserve-based community, or a larger tribal grouping and the status Indians who live in them. For example, the Stó:lō Nation (which consists of several bands), or the Tsleil-Waututh Nation (formerly the Burrard Band).
The term "Inuit" refers to specific groups of people generally living in the far north who are not considered "Indians" under Canadian law.
The term Métis refers to a collective of cultures and ethnic identities that resulted from unions between Aboriginal and European people in what is now Canada.
This term has general and specific uses, and the differences between them are often contentious. It is sometimes used as a general term to refer to people of mixed ancestry, whereas in a legal context, "Métis" refers to descendants of specific historic communities. For more on Métis identity, please see our section on Métis identity.
The term "Indian" refers to the legal identity of a First Nations person who is registered under the Indian Act. The term "Indian" should be used only when referring to a First Nations person with status under the Indian Act, and only within its legal context. Aside from this specific legal context, the term "Indian" in Canada is considered outdated and may be considered offensive due to its complex and often idiosyncratic colonial use in governing identity through this legislation and a myriad of other distinctions (i.e., "treaty" and "non-treaty," etc.). In the United States, however, the terms "American Indian" and "Native Indian" are both in current and common usage.
You may also hear some First Nations people refer to themselves as "Indians." While there are many reasons for an individual to self-identify as such, this may be a deliberate act on their part to position and present themselves as someone who is defined by federal legislation.
"Indian Band" is also a legal term under the Indian Act to denote a grouping of status Indians. (For more information on this, see our section on bands.)
Indigenous is a term used to encompass a variety of Aboriginal groups. It is most frequently used in an international, transnational, or global context. This term came into wide usage during the 1970s when Aboriginal groups organized transnationally and pushed for greater presence in the United Nations (UN). In the UN, "Indigenous" is used to refer broadly to peoples of long settlement and connection to specific lands who have been adversely affected by incursions by industrial economies, displacement, and settlement of their traditional territories by others. For more on how this term was developed, please see our section on global actions.
"Native" is a general term that refers to a person or thing that has originated from a particular place. The term "native" does not denote a specific Aboriginal ethnicity (such as First Nation, Métis, or Inuit). In the United States, the term "Native American" is in common usage to describe Aboriginal peoples. In Canada, the term "Aboriginal" or "Indigenous" is generally preferred to "Native." Some may feel that "native" has a negative connotation and is outdated. This term can also be problematic in certain contexts, as some non-Aboriginal peoples born in a settler state may argue that they, too, are "native."
Is it okay to say "native"?
While "native" is generally not considered offensive, it may still hold negative connotations for some. Because it is a very general, overarching term, it does not account for any distinctiveness between various Aboriginal groups. If you are referencing a specific group, it is generally considered more respectful to use another term that more specifically denotes which peoples you are referring to.
However, "native" is still commonly used. Many people find it to be a convenient term that encompasses a wide range of populations. When wanting to use a general term in the Canadian context, one might prefer the use of the term "Aboriginal."
The plural “peoples” recognizes that more than one distinct group comprises the Aboriginal population of Canada. For example, “Aboriginal people” (singular) might mean each Aboriginal individual, whereas “Aboriginal peoples” (plural) indicates a number of separate Aboriginal populations.
There is no official consensus on when to capitalize certain terms. Some people consider capitalization a sign of respect to the people you are referring to. By this reasoning, it may not be necessary to capitalize when the term is used as an adjective rather than in direct reference to a population. (For example, compare "She is native to the area" with "She is Native American," or even "She is Native.")
Perhaps the term with the most definite capitalization "rule" is "Indian," as it denotes a legal identity defined and enforced by the Canadian government.
Ultimately, style guides have not created strict guidelines. As a result, you may find variation depending on your resources. Oftentimes, authors will explain their decision in a preface or a footnote.
The history of maple syrup is both interesting and informative, revealing much about the sociology of early North America. The first people known to have manufactured maple syrup were the Native Americans living in the northeastern part of North America, long before the arrival of the first Europeans. Using an early method of tapping the sweet sap of the maple trees, these tribes rendered the sap into a source of high-calorie winter food. The Native Americans were generous with their technology, showing the first European colonists how to extract the syrup from the trees.
The Europeans were quick studies, introducing their knowledge of metallurgy, storage, and transportation into the process. Their new knowledge became part of the history of maple syrup. The harvested sap was taken to a “sugar shack,” where it could easily be stored in river-cooled buildings. The sap had to be rendered down in a tedious process by being boiled in large cauldrons. The sap had to be stirred often to prevent crystallization. By the 1800s, the process had been refined and made more efficient. “Country Sugar,” as the maple sweetener was called, was the most common sugar available in North America for quite some time.
Various pumps and dehumidifiers were introduced into the process by the enterprising Americans (another technological addition to the history of maple sugar) but the rendering process remained slow and expensive. The U.S. started to import maple syrup from Canada, particularly from Quebec, a cold area known for its wealth of sugar maple trees, and has been doing so ever since.
In the United States, maple syrup is still a small industry in the New England states, most notably in Vermont, and on a smaller scale, Maine and various other states. In America, Canada, and Europe, syrups are broken down into “grades.” The grading system is slightly complicated. It is judged on color, sugar content, and time of harvesting. Read our Maple Syrup Grading Article for more about maple syrup grading. A continuing examination of the history of maple syrup reveals that in the Civil War, most Union households used maple sweetener to boycott cane sugar, which was primarily produced by slaves. Today, maple syrup is a favorite for a variety of dishes throughout the country. It is a perennial favorite for breakfast goodies such as pancakes, waffles, and French toast. It can also be used for cooking, replacing sugar, as maple syrup's health benefits are greater than those of brown or white sugar. Maple syrup is also used widely in vegan and vegetarian cooking.
Learners will become familiar with nuclear medicine through diagnostic imaging and its components, which will allow students to relate the material to real-life situations. The activities in this unit will foster quantitative thinking in the learner and lead the learner to develop the interest, objectivity, attitudes, and mathematical skills related to the area of nuclear medicine. It is my intent that the learner's skills will grow and knowledge will increase with adequate use of this unit: applying each concept often enough to ensure that it remains available to the learner, so that the learner becomes literate to some degree within the area of nuclear medicine.
This unit will be taught to seventh and eighth grade middle school students. It can be taught to students on a high school level, especially those interested in the professional fields of nuclear medicine technology.
The middle school student is an entity in himself, with unpredictable reactions to problems and personal situations. The middle school student should begin to take a critical look at himself, to seek direction in his life, and to establish his values. In this unit the student will explore nuclear medicine technology, its components, and its usage in the medical profession. The intent of this unit is to present the concepts of nuclear medicine in a way that exposes each student to an awareness and broad understanding of the field. A wide range of exposure will allow a student to develop a career interest.
Nuclear medicine was first used for the investigation of thyroid disease prior to the Second World War. During the past decade the field of nuclear medicine has expanded so rapidly and extensively that most practicing physicians trained prior to this growth are unaware of the numerous, valuable radioisotopic procedures now available to them.
Contemporary methods may be divided broadly into three groups. The largest division is diagnostic procedures, such as organ imaging, in which a radionuclide, in a suitable chemical form, is administered to a patient and the distribution of radioactivity in the body is determined by an external radiation detector. The second largest division of nuclear medicine utilizes radionuclide techniques to measure concentrations of hormones, antibodies, drugs, and other important substances in samples of blood or tissues. The third division is therapy, in which radionuclides are used to treat disorders and restore the normal function of an organ.
In 1896, Henri Becquerel, working in France, discovered radioactivity in uranium. Radioactivity is defined as the property by which nuclei spontaneously decay or disintegrate through one or more discrete energy steps or transitions until a stable state is reached.
Building on Becquerel's research, Marie and Pierre Curie found that both uranium and thorium possessed this property of radioactivity, and that some uranium minerals are more radioactive than uranium itself. For this work Becquerel and the Curies were jointly awarded the Nobel Prize in Physics in 1903.
Marie and Pierre Curie's work made an impact on the world of science with their discovery of the radioactive element radium, which changes into other elements as it decays. The Curies' research gave meaning to the inner world of the atom.
An element is a basic substance consisting of atoms which are chemically alike. For many years before the discovery of radium, scientists had believed that atoms were the smallest units of matter. The word atom comes from a Greek word meaning indivisible.
However, today we understand that most of an atom is empty space with particles revolving around a tiny core, or nucleus. The nucleus contains particles called protons and neutrons tightly locked together. The particles which revolve around the nucleus are called electrons.
Many elements, of which radium is one, are naturally radioactive. This means that they are made up of atoms which are unstable, that is, the nuclei, or cores of these atoms are constantly disintegrating of their own accord. In the process of disintegration or decay, the atoms automatically give off particles and radiation, and change into lighter elements.
The major forms of radiation given off by radioactive elements are alpha particles, beta particles, and gamma rays.
The alpha particle is a helium nucleus consisting of two protons and two neutrons. This particle is the same as the helium atom with the exception that there are no orbital electrons. Because there are no negative charges to neutralize the positively charged nucleus, the alpha particle possesses an electric charge of plus two upon emission. Since the particle is without electrons, it will not be satisfied until it acquires two electrons, making it an electrically neutral helium atom.
Alpha particles shoot out from the nuclei of disintegrating atoms, such as those of radium, at a speed of about 10,000 miles per second. They can be stopped by a few sheets of paper, are unable to penetrate unbroken skin, and rarely cause any damage.
Beta particles are very light and have a continuous energy spectrum. The maximum energy available to a beta particle from nuclear decay is called the endpoint energy; few beta particles are emitted with this maximum energy. Beta particles lose most of their energy ionizing atoms along their paths. The penetrating power of beta particles is high compared with that of alpha particles.
Beta particles are electrons that shoot out from certain radioactive atoms at speeds upward of 100,000 miles per second, but they can be stopped by approximately an inch of wood. Beta particles can penetrate about one-third of an inch of human tissue and can cause severe burns.
Gamma rays are a form of radiant energy, or radiation, released from the nuclei of radioactive atoms when they disintegrate. Gamma rays are part of the same electromagnetic spectrum as radio waves, visible light, and x-rays. Gamma rays are very much like x-rays, except that their wavelengths are shorter. Their penetrating power is enormous. The gamma rays of radium can be stopped by thick sheets of concrete or lead. They can pass right through the human body and therefore can be extremely dangerous, because they are capable of destroying cell life.
When a radium atom gives off an alpha particle, it becomes a radon atom. Radon, the decay product of radium, is a gas with a very short life.
In recent years, the words isotope and radioisotope have fallen into disfavor and have largely been replaced by the terms nuclide and radionuclide. A nuclide is any nucleus plus its orbital electrons. Isotopes are two or more forms of the same element: they have the same number of protons but different numbers of neutrons. A radioisotope disintegrates at a constant rate; the time required for the number of atoms to fall to fifty percent of the original is called the physical half-life. The physical half-life is one factor considered when selecting a particular isotope for a given use.
Every radionuclide has a fixed half-life ranging from seconds to years. Those used in clinical nuclear medicine have half-lives in the range of minutes to days.
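As an illustration of how the physical half-life works, the decay law A(t) = A0 x (1/2)^(t/T½) can be evaluated directly. The short Python sketch below is illustrative only; the 6-hour half-life of technetium-99m and the 15 mCi starting activity are figures that appear elsewhere in this unit.

```python
# Minimal sketch: activity remaining after a given elapsed time, from the decay law
# A(t) = A0 * (1/2) ** (t / T_half). Values are illustrative (Tc-99m, 15 mCi).

def remaining_activity(initial_mci: float, half_life_hours: float, elapsed_hours: float) -> float:
    """Return the activity (mCi) left after elapsed_hours of decay."""
    return initial_mci * 0.5 ** (elapsed_hours / half_life_hours)

a0, t_half = 15.0, 6.0                      # starting activity (mCi), half-life (hours)
for t in (0, 6, 12, 24):
    print(f"after {t:2d} h: {remaining_activity(a0, t_half, t):5.2f} mCi")
# after  0 h: 15.00; after  6 h: 7.50; after 12 h: 3.75; after 24 h: 0.94
```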
The activity of a radionuclide is measured in terms of the number of radioactive atoms that disintegrate in one second. The unit employed is the curie, named after Marie Curie. One curie of radioactive material undergoes 3.7 x 10^10 disintegrations per second. Chart No. 1 below defines the curie and its subunits.
|Unit||Symbol||Disintegrations per second||Disintegrations per minute|
|Curie||Ci||3.7 x 10^10||2.22 x 10^12|
|Millicurie||mCi||3.7 x 10^7||2.22 x 10^9|
|Microcurie||uCi||3.7 x 10^4||2.22 x 10^6|
|Nanocurie||nCi||3.7 x 10^1||2.22 x 10^3|
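The chart's values follow directly from the definition of the curie; the short sketch below reproduces them and also shows the equivalent in becquerels (1 Bq = 1 disintegration per second), which is not part of the original chart.

```python
# Reproduce Chart No. 1: disintegrations per second and per minute for curie subunits.
# 1 Ci = 3.7e10 disintegrations per second by definition.

DPS_PER_CURIE = 3.7e10

units = {"Ci": 1.0, "mCi": 1e-3, "uCi": 1e-6, "nCi": 1e-9}

for name, fraction in units.items():
    dps = DPS_PER_CURIE * fraction          # disintegrations per second (= becquerels)
    dpm = dps * 60                          # disintegrations per minute
    print(f"1 {name:>3}: {dps:.3g} per second ({dps:.3g} Bq), {dpm:.3g} per minute")
```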
Chart No. 2:
|Organ||Radiopharmaceutical||Standard adult dose|
|Liver||Technetium-99m sulfur colloid||6 mCi|
The composition of radiopharmaceuticals, how they are obtained, their characteristics, and how they are used to obtain information are some of the principles involved in all radioisotopic procedures. Radiopharmaceuticals are not used to produce a pharmacological effect; each contains a radioisotope that is used for diagnostic imaging.
Technetium-99m is one of the most important radioisotopes used in radiopharmaceuticals in nuclear medicine today.
Because of its short half-life of six hours, technetium-99m is used in nuclear medicine today for diagnosis with the gamma camera. Since technetium-99m lasts such a short time, it cannot be kept in stock; it is obtained from the beta decay of molybdenum-99, which has a longer half-life and is kept in a shielded container while it decays, yielding technetium. Technetium-99m itself decays by isomeric transition to a lower energy state. Each morning the technetium that is needed is extracted from its parent with a saline solution. This procedure is also used in other areas.
Radiopharmaceuticals play an important role in nuclear medicine. In diagnostic procedures, small amounts of isotopes may aid in gaining necessary information concerning normal and abnormal life processes. The use of these radiopharmaceuticals falls under the supervision of the U.S. Atomic Energy Commission, and licenses are issued only after an institution's facilities are inspected by the commission. Chart No. 2 above indicates the radiopharmaceuticals used in diagnostic imaging and the standard adult doses for nuclear medicine scans.
Scintillation counters consist of a detector system and a processing display unit. The detector system is made up of a sodium iodide crystal coupled to a photomultiplier tube. When gamma-ray photons strike the crystal, flashes of blue-violet light, or scintillations, occur. The crystal is transparent to light and is enclosed in a light-tight container. A reflective powder is used so that light leaves the crystal only through the area adjacent to the photomultiplier tube. Once the flashes of light reach the surface of the photomultiplier tube, electrons are released.
These electrons are amplified in the photomultiplier tube and are then transmitted through a preamplifier to the main unit, where they are amplified further. The pulses are then ready to be processed and displayed. The output signal from the detector unit is proportional to the light emitted, which in turn is proportional to the energy deposited inside the crystal by the gamma photon.
Collimators play an integral part in detector systems. Collimators are designed so that the detector can only see photons from a specific area inside a patient while rejecting those from outside this area. Each type of collimator is designed for a specific purpose. The wide-angle collimator is most commonly used; it is chosen when counts from a large field of view are needed.
Parallel collimators are used with camera systems which also view a large area but are concerned mainly with the distribution of the radioactive isotope.
Focusing collimators are used with scanning devices which view a small area, as in organ imaging. A collimator with more holes has what is called increased resolution, with decreased sensitivity; the opposite is true for a collimator with fewer holes.
The electrical pulses of electrons are directed to the processing unit from the detector system. The spectrometer is used to sort the spectrum of gamma energies and accept or reject pulses of a specific pulse height. The window must be adjusted to reject all pulses above and below certain energy levels. A window is the range of accepted energies around an isotope's photopeak. In the case of technetium-99m, the photopeak energy is 140 keV. The window is typically set at 20%, which allows 10% above and 10% below the photopeak energy. This helps reduce counts from scattered radiation.
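As a concrete check of the window arithmetic, the sketch below (an illustration, not part of the original unit) computes the accepted energy limits for a 20% window around the 140 keV technetium photopeak.

```python
# Compute the accepted energy range for a pulse-height analyzer window.
# A 20% window around a photopeak allows 10% above and 10% below the peak energy.

def window_limits(photopeak_kev: float, window_percent: float) -> tuple[float, float]:
    half_width = photopeak_kev * (window_percent / 100) / 2
    return photopeak_kev - half_width, photopeak_kev + half_width

low, high = window_limits(140.0, 20.0)      # technetium-99m photopeak
print(f"accepted energies: {low:.0f} keV to {high:.0f} keV")   # 126 keV to 154 keV
```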
From the spectrometer the information is passed on to be displayed as counts on a scaler. A scaler measures the amount of radioactivity from within the source.
Scanners are designed to produce two-dimensional pictures of the distribution of a radioactive isotope in an organ. Organ scanning is achieved by the systematic movement of a scintillation detection assembly with a focusing collimator back and forth across the organ of interest. These rectilinear scanners are now almost obsolete. The gamma camera is the primary imaging device; it can produce an image without moving the detector unit and has the ability to "see" certain organs in their entirety. A brief description of the Gamma Scintillation Camera and the Multi-crystal Camera follows.
The three types of Collimators used in Nuclear Medicine
In a liver scan, any abnormalities such as abscesses or lesions can be detected. A normal scan will demonstrate an even distribution of the radioactive isotope throughout the liver. Multiple views are taken, eight in all:
- a. Anterior inspiration with costal margin marker
- b. Anterior expiration with costal margin marker
- c. RAO—right anterior oblique
- d. LAO—left anterior oblique
- e. Anterior to include liver and spleen
- f. Rt. Lat.—right lateral
- g. Lft. Lat.—left lateral
- h. Posterior to include liver and spleen
MAA, better known as macroaggregated serum albumin, is important to all nuclear medicine departments. MAA is commercially supplied in a sterile solution that is precalibrated for emergency situations. Six views of the lungs are taken, as follows:
- a. LPO—left posterior oblique
- b. Posterior
- c. RPO—right posterior oblique
- d. Right lateral
- e. Anterior
- f. Left lateral
All nuclear medicine scans follow basically the same pattern. The patient is injected with the radioactive isotope, and the scan begins after a set waiting period. Views are taken of the organ in many different positions. The gamma camera is used for detection of all images.
It would be advantageous to use copies of scans in the classroom exercise. Copies of scans may be readily obtained through the Yale-New Haven Teachers Institute.
The following pages contain diagrams and the functions of the imaging devices in nuclear medicine.
The Anger scintillation camera is shown schematically in Figure 1 on the following page. The scintillations produced in the sodium iodide detector are “looked” at by an array of 19 or 37 photomultiplier tubes (PMT). The scintillation light produced by a gamma-ray interaction in the detector crystal is shared by the PMT in the array. The contribution of H.O. Anger was to devise an electronic circuit which would produce an image dot on the face of an oscilloscope; the dot’s location corresponds to the location of the gamma-ray interaction in the circular detector. The first problem to be solved in the scintillation camera is the determination of the energy of the gamma ray. In a single PMT device (such as the rectilinear scanner), the pulse output from the PMT is proportional to the gamma-ray energy deposited in the scintillation detector. The same situation holds in the 19-tube array: the pulse size produced by each PMT is proportional to the light seen by that PMT. The output pulses of the 19 tubes are added together algebraically in the SUM circuit to form a SUM pulse, which is proportional to the total gamma-ray energy deposited in the crystal. The SUM pulse is sent to the pulse height analyzer, which produces an output pulse (called the Z-pulse) when the system has detected a gamma ray of the proper energy. This Z pulse is sent to an oscilloscope, where it causes one dot to be written on the face of the oscilloscope. A time exposure of the dot flashes is obtained to produce a scintiphoto. If no additional information is provided by the system, a series of dots will be produced in the center of the oscilloscope; in other words, no localizing information is provided.
The information regarding where the dot is to be written on the face of the oscilloscope is produced by the X-, Y-position circuit. This circuit compares the pulse height output of each PMT with the SUM pulse. The position circuit produces X- and Y-deflection voltages which are applied to the deflection plates of the image oscilloscope. The process is illustrated schematically in Figure 2 on the following page. (The X-deflection plates are omitted for simplicity).
In the normal course of events, the multi-crystal scintillation camera sends its output to 294 scalers, where the valid counts in each crystal are collected. The contents of these scalers are scanned and presented on the face of an oscilloscope as a scintiphoto where the brightness of the oscilloscope location is a function of the number of counts in each location. In a real sense, this device is a step ahead of the Anger scintillation camera in that its output has already been digitized—the number of counts in each location is represented by a number stored in a scaler.
|IMAGING SYSTEM||DEAD TIME (microseconds)||COUNT RATE TO PRODUCE 10% LOSSES||99mTc ACTIVITY TO PRODUCE 10% LOSSES|
A good background in college-level algebra is usually the minimum requirement. Presented below are some concepts and word problems which are particularly applicable to calculations frequently used in nuclear medicine.
I. The product of two terms containing the same base raised to any power is equal to the base raised to the sum of the exponents.
- A) Each factor may be evaluated separately and the individual products multiplied.
4^2 = 4 x 4 = 16
4^3 = 4 x 4 x 4 = 64
THEN: 16 x 64 = 1024
The same results may be obtained by adding the exponents.
4^2 x 4^3 = 4^(2 + 3) = 4^5 = 1024
The above may be written algebraically as:
X^a x X^b x X^c = X^(a + b + c)
II. The quotient of two terms containing the same base raised to any power is equal to the base raised to the difference of the algebraic sum of exponents in the numerator and the algebraic sum of the exponents in the denominator.
2^4 ÷ 2^2 = ?
Evaluating the numerator and denominator separately, the results are as follows:
2^4 ÷ 2^2 = 16 ÷ 4 = 4, or 2^4 ÷ 2^2 = 2^(4 - 2) = 2^2 = 4
The algebraic rule:
Y^a ÷ Y^b = Y^(a - b)
III. To raise to a power a term whose base is already raised to a power, multiply the exponents.
(3^2)^3 = 9^3 = 729, or (3^2)^3 = 3^(2 x 3) = 3^6 = 729
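These rules are easy to check mechanically; the short Python sketch below simply verifies each worked example above.

```python
# Quick checks of the three exponent rules worked above.

# I. Product rule: x**a * x**b == x**(a + b)
assert 4**2 * 4**3 == 4**(2 + 3) == 1024

# II. Quotient rule: x**a / x**b == x**(a - b)
assert 2**4 / 2**2 == 2**(4 - 2) == 4

# III. Power-of-a-power rule: (x**a)**b == x**(a * b)
assert (3**2)**3 == 3**(2 * 3) == 729

print("all exponent-rule checks pass")
```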
1. 6 x 6 x 6 = 6^3
2. 4 x 4 x 4 x 4 = 4^4
3. P x P x P x P x P = P^5
Evaluate each expression:
1. Y = 7^2; Y = 49
2. M = 3^5; M = 243
3. T = 4^3; T = 64
4. S = 2^4; S = 16
Evaluate each expression:
1. 3y^4 if y = 2 =
2. 4r^3 if r = 3 =
3. 2m^3 if m = 5 =
Simplify the following fraction exercises (answers: 3, 9, and 81):
Gerald will be going to the hospital for a bone scan. He will arrive at the nuclear medicine department at 7:45 a.m. and be injected with 15 mCi of Technetium-MDP. Scanning takes place two hours after the injection is administered.
What time will Gerald’s scan begin? 9:45 a.m.
A survey showed that 3/4 of the patients of nuclear medicine receive liver and spleen scans. What part of the patients do not receive liver and spleen scans?
Answer: 1/4 (fraction), or 0.25 (decimal)
Given the formula below, compute a child's dosage for a renal scan:
|Child wt(kg) x Adult dose||60kg x 15|
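Only the numerators of the formula survive in the fragment above; one common weight-based approach divides by a reference adult weight. The sketch below is an illustration of that idea only: the 70 kg reference adult weight and the 30 kg child are assumptions, and the 15 mCi adult dose matches the figure shown in the fragment.

```python
# Weight-based pediatric dose estimate (a sketch, not the unit's own rule):
# child dose = child weight (kg) x adult dose / reference adult weight (kg).
# The 70 kg reference weight is an assumption; the original denominator is missing.

def child_dose(child_weight_kg: float, adult_dose_mci: float, adult_weight_kg: float = 70.0) -> float:
    return child_weight_kg * adult_dose_mci / adult_weight_kg

print(f"{child_dose(30.0, 15.0):.1f} mCi")   # a 30 kg child: about 6.4 mCi
```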
- 1. Sally mixes 260g of flour, 200g of sugar, and 245 grams of butter. How much does the whole mixture weigh? ———
- 2. Susan’s dinner consists of 150g of cooked ham, 125g of potatoes, 100g of peas and 160g of fruit salad. How much does Susan’s dinner weigh? ———
- 3. One apple weighs 100g and one orange weighs 160g. How much more than two apples do two oranges weigh? ———
- 4. A box of crackers weighs 342 grams. The crackers are packed in 3 cellophane wrappers. If each of the 3 packages has the same weight, how much does each package weigh? ———
- 5. Tammy’s popsicle weighs 74g. How much does it weigh after Tammy has eaten half of it? ———
- 6. Jeff weighs 45kg and his baby brother weighs 5kg. How much more does Jeff weigh than his brother? Change the weight in kilograms to pounds. ———
- 7. Bob can lift twice as much weight as Carl. If Carl can lift 27kg, how much can Bob lift? ———
- 8. Lisa’s bicycle weighs 3kg less than her wagon. If her bicycle weighs 8kg, how much does her wagon weigh? ———
- 9. An ocean liner weighs 31,000t. It takes on 78t of cargo, 65t of passengers and luggage, 43t of food and water. What is the total weight? ———
- 10. Cindy, Jimmy, and Julie weigh 108kg altogether. If Cindy weighs 23kg and Jimmy weighs 38kg, how much does Julie weigh? ———
20 York Street
Yale New Haven Hospital
New Haven, Connecticut
Diagrams of Imaging Devices courtesy of:
Dr. Robert C. Lange, Ph. D
Yale University School of Medicine
Technical Director—Section of Nuclear Medicine
Yale New Haven Hospital
New Haven, Connecticut
Mr. Leonard Quartarraro
Nuclear Medicine Department
Yale New Haven Hospital
New Haven, Connecticut
12 Clintonville Road
St. Raphael’s Hospital
Nuclear Medicine Department
1450 Chapel Street
New Haven, Connecticut
Bernier, Donald, Langen, James, and David Wells. Nuclear Medicine Technology and Techniques. C.V. Mosby Co., St. Louis, 1981.
Early, Razzak, Sodee. Nuclear Medicine Technology. C.V. Mosby Co., St. Louis, 1975.
Goodman, Paul and Dandamudi V. Rao. An Introduction to Physics of Nuclear Medicine. Charles C. Thomas, Springfield, Ill., 1977.
Gregory, J.N. The World of Radioisotopes. Angus and Robertson, 1966.
Kisieleski, Walter E. and Renato Baserga. Radioisotopes and Life Processes. U.S. Atomic Energy Commission, Division of Technical Information, Oak Ridge, TN, 1966.
Lange, Robert C. Nuclear Medicine for Technicians. Year Book Medical Publishers, Inc., Chicago, 1973.
Maynard, Douglas C. Clinical Nuclear Medicine. Lea & Febiger, Philadelphia, 1969.
Parker, Roy, Smith, Peter H.S., and David M. Taylor. Basic Science of Nuclear Medicine. Churchill Livingstone, New York, 1978.
Phelan, Earl. Radioisotopes in Medicine. U.S. Atomic Energy Commission, Division of Technical Information, Oak Ridge, TN, 1966.
Quimby, Edith H. Safe Handling of Radioactive Isotopes in Medical Practice. Macmillan Co., New York, 1960.
Chandra, Ramesh. Introductory Physics of Nuclear Medicine. Lea & Febiger, Philadelphia, 1982.
Selman, Joseph. The Fundamentals of X-ray and Radium Physics. Charles C. Thomas, Springfield, Ill., 1977.
Simmons, Greg H. A Training Manual for Nuclear Medicine Technologists. U.S. Department of Health, Education and Welfare, Maryland, 1970.
Stimulated emission is the process by which an incoming photon of a specific frequency can interact with an excited atomic electron (or other excited molecular state), causing it to drop to a lower energy level. The liberated energy transfers to the electromagnetic field, creating a new photon with identical phase, frequency, polarization, and direction of travel as the photons of the incident wave. This is in contrast to spontaneous emission which occurs at random intervals without regard to the ambient electromagnetic field.
However, the process is identical in form to atomic absorption in which the energy of an absorbed photon causes an identical but opposite atomic transition: from the lower level to a higher energy level. In normal media at thermal equilibrium, absorption exceeds stimulated emission because there are more electrons in the lower energy states than in the higher energy states. However, when a population inversion is present the rate of stimulated emission exceeds that of absorption, and a net optical amplification can be achieved. Such a gain medium, along with an optical resonator, is at the heart of a laser or maser. Lacking a feedback mechanism, laser amplifiers and superluminescent sources also function on the basis of stimulated emission.
Stimulated emission was a theoretical discovery by Einstein within the framework of the old quantum theory, wherein the emission is described in terms of photons that are the quanta of the EM field. Stimulated emission can also occur in classical models, without reference to photons or quantum mechanics.
Electrons and how they interact with electromagnetic fields are important in our understanding of chemistry and physics. In the classical view, the energy of an electron orbiting an atomic nucleus is larger for orbits further from the nucleus of an atom. However, quantum mechanical effects force electrons to take on discrete positions in orbitals. Thus, electrons are found in specific energy levels of an atom, two of which are considered below.
When an electron absorbs energy either from light (photons) or heat (phonons), it receives that incident quantum of energy. But transitions are allowed only between discrete energy levels such as the two considered here. This leads to emission lines and absorption lines.
When an electron is excited from a lower to a higher energy level, it will not stay that way forever. An electron in an excited state may decay to a lower energy state which is not occupied, according to a particular time constant characterizing that transition. When such an electron decays without external influence, emitting a photon, that is called "spontaneous emission". The phase associated with the photon that is emitted is random. A material with many atoms in such an excited state may thus result in radiation which is very spectrally limited (centered around one wavelength of light), but the individual photons would have no common phase relationship and would emanate in random directions. This is the mechanism of fluorescence and thermal emission.
An external electromagnetic field at a frequency associated with a transition can affect the quantum mechanical state of the atom. As the electron in the atom makes a transition between two stationary states (neither of which shows a dipole field), it enters a transition state which does have a dipole field, and which acts like a small electric dipole, and this dipole oscillates at a characteristic frequency. In response to the external electric field at this frequency, the probability of the electron entering this transition state is greatly increased. Thus, the rate of transitions between two stationary states is enhanced beyond that due to spontaneous emission. Such a transition to the higher state is called absorption, and it destroys an incident photon (the photon's energy goes into powering the increased energy of the higher state). A transition from the higher to a lower energy state, however, produces an additional photon; this is the process of stimulated emission.
Stimulated emission can be modelled mathematically by considering an atom that may be in one of two electronic energy states, a lower level state (possibly the ground state) (1) and an excited state (2), with energies E1 and E2 respectively.
If the atom is in the excited state, it may decay into the lower state by the process of spontaneous emission, releasing the difference in energies between the two states as a photon. The photon will have frequency ν and energy hν, given by:
E2 - E1 = hν,
where h is Planck's constant.
Alternatively, if the excited-state atom is perturbed by an electric field of frequency ν, it may emit an additional photon of the same frequency and in phase, thus augmenting the external field, leaving the atom in the lower energy state. This process is known as stimulated emission.
In a group of such atoms, if the number of atoms in the excited state is given by N2, the rate at which stimulated emission occurs is given by:
∂N2/∂t = -B21 ρ(ν) N2,
where the proportionality constant B21 is known as the Einstein B coefficient for that particular transition, and ρ(ν) is the radiation density of the incident field at frequency ν. The rate of emission is thus proportional to the number of atoms in the excited state N2, and to the density of incident photons.
At the same time, there will be a process of atomic absorption which removes energy from the field while raising electrons from the lower state to the upper state. Its rate is given by an essentially identical equation:
∂N1/∂t = -B12 ρ(ν) N1.
The rate of absorption is thus proportional to the number of atoms in the lower state, N1. Einstein showed that the coefficient for this transition must be identical to that for stimulated emission:
B21 = B12.
Thus absorption and stimulated emission are reverse processes proceeding at somewhat different rates. Another way of viewing this is to look at the net stimulated emission or absorption, viewing it as a single process. The net rate of transitions from E2 to E1 due to this combined process can be found by adding their respective rates, given above:
∂N2/∂t (net) = -B21 ρ(ν) (N2 - N1).
Thus a net power is released into the electric field equal to the photon energy hν times this net transition rate. In order for this to be a positive number, indicating net stimulated emission, there must be more atoms in the excited state than in the lower level: N2 > N1. Otherwise there is net absorption and the power of the wave is reduced during passage through the medium. The special condition N2 > N1 is known as a population inversion, a rather unusual condition that must be effected in the gain medium of a laser.
The notable characteristic of stimulated emission compared to everyday light sources (which depend on spontaneous emission) is that the emitted photons have the same frequency, phase, polarization, and direction of propagation as the incident photons. The photons involved are thus mutually coherent. When a population inversion (N2 > N1) is present, therefore, optical amplification of incident radiation will take place.
Although energy generated by stimulated emission is always at the exact frequency of the field which has stimulated it, the above rate equation refers only to excitation at the particular optical frequency ν0 corresponding to the energy of the transition. At frequencies offset from ν0 the strength of stimulated (or spontaneous) emission will be decreased according to the so-called line shape. Considering only homogeneous broadening affecting an atomic or molecular resonance, the spectral line shape function is described as a Lorentzian distribution:
g(ν) = (Δν / 2π) / [(ν - ν0)^2 + (Δν / 2)^2],
where Δν is the full width at half maximum or FWHM bandwidth.
The peak value of the Lorentzian line shape occurs at the line center, ν = ν0. A line shape function can be normalized so that its value at ν0 is unity; in the case of a Lorentzian we obtain:
ḡ(ν) = 1 / [1 + (2(ν - ν0) / Δν)^2].
Thus stimulated emission at frequencies away from ν0 is reduced by this factor. In practice there may also be broadening of the line shape due to inhomogeneous broadening, most notably due to the Doppler effect resulting from the distribution of velocities in a gas at a certain temperature. This has a Gaussian shape and reduces the peak strength of the line shape function. In a practical problem the full line shape function can be computed through a convolution of the individual line shape functions involved. Therefore optical amplification will add power to an incident optical field at frequency ν at a rate given by:
P = hν ḡ(ν) B21 ρ(ν) (N2 - N1).
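To make the line-shape factor concrete, here is a small numerical sketch; the transition frequency and linewidth are illustrative values only, and the function is the normalized Lorentzian given above.

```python
# Lorentzian line-shape factor normalized to 1 at line center (illustrative values).
# g_bar(nu) = 1 / (1 + (2*(nu - nu0)/fwhm)**2); emission off line center is reduced
# by this factor.

def lorentzian_factor(nu: float, nu0: float, fwhm: float) -> float:
    return 1.0 / (1.0 + (2.0 * (nu - nu0) / fwhm) ** 2)

nu0 = 4.74e14      # Hz, roughly a 633 nm optical transition (illustrative)
fwhm = 1.5e9       # Hz, an illustrative homogeneous linewidth

for detuning in (0.0, 0.5e9, 0.75e9, 1.5e9):
    factor = lorentzian_factor(nu0 + detuning, nu0, fwhm)
    print(f"detuning {detuning/1e9:4.2f} GHz: factor = {factor:.3f}")
# at half the FWHM from line center (0.75 GHz here) the factor is exactly 0.5
```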
Stimulated emission cross section
The stimulated emission cross section (in square meters) is
σ21(ν) = (A21 λ^2 / 8π n^2) g(ν), where:
- A21 is the Einstein A coefficient (in radians per second),
- λ is the wavelength in vacuum (in meters),
- n is the refractive index of the medium (dimensionless), and
- g(ν) is the spectral line shape function (in seconds).
Under certain conditions, stimulated emission can provide a physical mechanism for optical amplification. An external source of energy stimulates atoms in the ground state to transition to the excited state, creating what is called a population inversion. When light of the appropriate frequency passes through the inverted medium, the photons stimulate the excited atoms to emit additional photons of the same frequency, phase, and direction, resulting in an amplification of the input intensity.
The population inversion, in units of atoms per cubic meter, is
ΔN = N2 - (g2 / g1) N1,
where g1 and g2 are the degeneracies of energy levels 1 and 2, respectively.
Small signal gain equation
The intensity (in watts per square meter) of the stimulated emission is governed by the following differential equation:
dI(z)/dz = σ21(ν) ΔN I(z),
as long as the intensity I(z) is small enough so that it does not have a significant effect on the magnitude of the population inversion. Grouping the first two factors together, this equation simplifies as
dI(z)/dz = γ0(ν) I(z),
where
γ0(ν) = σ21(ν) ΔN
is the small-signal gain coefficient (in units of radians per meter). We can solve the differential equation using separation of variables:
dI(z)/I(z) = γ0(ν) dz.
Integrating, we find:
I(z) = I(0) exp(γ0(ν) z), where
- I(0) is the optical intensity of the input signal (in watts per square meter).
The saturation intensity IS is defined as the input intensity at which the gain of the optical amplifier drops to exactly half of the small-signal gain. We can compute the saturation intensity as
IS = hν / (σ(ν) τS), where:
- h is Planck's constant,
- ν is the frequency (in Hz), and
- τS is the saturation time constant, which depends on the spontaneous emission lifetimes of the various transitions between the energy levels related to the amplification.
General gain equation
The general form of the gain equation, which applies regardless of the input intensity, derives from the general differential equation for the intensity I as a function of position z in the gain medium:
dI(z)/dz = γ0(ν) I(z) / [1 + I(z)/IS],
where IS is the saturation intensity. To solve, we first rearrange the equation in order to separate the variables, intensity I and position z:
[1 + I(z)/IS] dI(z)/I(z) = γ0(ν) dz.
Integrating both sides, we obtain
γ0(ν) z = ln[I(z)/I(0)] + [I(z) - I(0)]/IS.
The gain G of the amplifier is defined as the optical intensity I at position z divided by the input intensity:
G = G(z) = I(z) / I(0).
Substituting this definition into the prior equation, we find the general gain equation:
ln(G) + (G - 1) I(0)/IS = γ0(ν) z.
Small signal approximation
In the special case where the input signal is small compared to the saturation intensity, in other words,
I(0) << IS,
then the general gain equation gives the small signal gain as
G = G0 = exp(γ0(ν) z),
which is identical to the small signal gain equation (see above).
Large signal asymptotic behavior
For large input signals, where
I(0) >> IS,
the gain approaches unity,
and the general gain equation approaches a linear asymptote:
I(z) = I(0) + γ0(ν) IS z.
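Because the general gain equation is implicit in G, a quick numerical sketch helps show the two limits: weak inputs recover the small-signal gain and strong inputs push the gain toward unity. The solver below uses the relation ln(G) + (G - 1) I(0)/IS = γ0 z as reconstructed above (the original equation images are missing), and the numbers are illustrative.

```python
# Numerical sketch of saturated gain: solve ln(G) + (G - 1)*(Iin/Is) = g0*z for G
# by bisection. The relation is the reconstructed general gain equation above.
import math

def saturated_gain(g0_z: float, iin_over_is: float) -> float:
    """Solve ln(G) + (G - 1)*(Iin/Is) = g0_z for the gain G."""
    f = lambda g: math.log(g) + (g - 1.0) * iin_over_is - g0_z
    lo, hi = 1e-9, math.exp(g0_z)            # G lies between ~0 and the small-signal gain
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

g0_z = 2.0                                   # small-signal gain exp(2) ~ 7.39
for ratio in (1e-3, 0.1, 1.0, 10.0):         # Iin / Is
    print(f"Iin/Is = {ratio:6g}: G = {saturated_gain(g0_z, ratio):.3f}")
# a weak input recovers the small-signal gain; a strong input drives G toward 1
```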
- Einstein, A (1916). "Strahlungs-emission und -absorption nach der Quantentheorie". Verhandlungen der Deutschen Physikalischen Gesellschaft 18: 318–323. Bibcode:1916DPhyG..18..318E.
- Fain, B.; Milonni, P. W. (1987). "Classical stimulated emission". Journal of the Optical Society of America B 4: 78. doi:10.1364/JOSAB.4.000078.
- Saleh, Bahaa E. A. and Teich, Malvin Carl (1991). Fundamentals of Photonics. New York: John Wiley & Sons. ISBN 0-471-83965-5.
- Alan Corney (1977). Atomic and Laser Spectroscopy. Oxford: Oxford Uni. Press. ISBN 0-19-921145-0. ISBN 978-0-19-921145-6.
- Silfvast, William T. Laser Fundamentals.
Unintentional bias is the result of using a weaker study design (e.g., a case series or observational study), not designing a study well (e.g., using too low a dose of the comparator drug), or not executing the study well (e.g., making it possible for participants or researchers to determine to which group they are assigned). Intentional bias also exists. Examples of study techniques that are designed to make a favorable result for the study drug more likely include a run-in phase using the active drug to identify compliant patients who tolerate the drug; per protocol rather than intention-to-treat analysis; and intentionally choosing too low a dose of the comparator drug or choosing an ineffective comparator drug.
Allocation concealment recently has been recognized as an important element of randomized controlled trial design. Allocation is concealed when neither the participants nor the researchers know or can predict to which group in a study (control or treatment) the patient is assigned. Allocation concealment takes place before the study begins, as patients are being assigned. Blinding—concealing the study group assignment from those participating in the study—occurs after the study begins. Blinding should involve the patient, the physicians caring for the patient, and the researcher. It is particularly important that the persons assessing outcomes also are blinded to the patient’s study group assignment.
Individual findings from the history and physical examination often are not helpful in making a diagnosis. Usually, the physician has to consider the results of several findings as the probability of disease is revised. Clinical decision rules help make this process more objective, accurate, and consistent by identifying the best predictors of disease and combining them in a simple way to rule in or rule out a given condition. Examples include the Strep Score, the Ottawa Ankle Rules, the Wells Rule for deep venous thrombosis, and a variety of clinical rules to evaluate perioperative risk.
In a large study, a small difference may be statistically significant. For example, does a 1- or 2-point difference on a 100-point dementia scale matter to your patients? It is important to ask whether statistically significant differences also are clinically significant. Conversely, if a study finds no difference, it is important to ask whether it was large enough to detect a clinically important difference and if a difference actually existed. A study with too few patients is said to lack the power to detect a difference.
The P value tells us how likely it is that the difference between groups occurred by chance rather than because of an effect of treatment. For example, if the absolute risk reduction was 4% with P = .04, then if the study were repeated 100 times, a difference that large would be expected to arise by chance alone about four times. The confidence interval gives a range and is more clinically useful. A 95% confidence interval indicates that if the study were repeated 100 times, the study results would fall within this interval 95 times. For example, if a study found that a test was 80% specific with a 95% confidence interval of 74% to 85%, the specificity would fall between 74% and 85% 95 times if the study were repeated 100 times.
Disease-oriented evidence refers to the outcomes of studies that measure physiologic or surrogate markers of health. This would include things such as blood pressure, serum creatinine, glycohemoglobin, sensitivity and specificity, or peak flow. Improvements in these outcomes do not always lead to improvements in patient-oriented outcomes such as symptoms, morbidity, quality of life, or mortality.
External validity is the extent to which results of a study can be generalized to other persons in other settings, with various conditions, especially "real world" circumstances. Internal validity is the extent to which a study measures what it is supposed to measure and to which the results can be attributed to the intervention of interest rather than a flaw in the research design; in other words, the degree to which one can draw valid conclusions about the causal effect of one variable on another.
Were the participants analyzed in the groups to which they were assigned originally? This addresses what happens to participants in a study. Some participants might drop out because of adverse effects, have a change of therapy or receive additional therapy, move out of town, leave the study for a variety of reasons, or die. To minimize the possibility of bias in favor of either treatment, researchers should analyze participants based on their original treatment assignment regardless of what happens afterward. The intention-to-treat approach is conservative; if there is still a difference, the result is stronger and more likely to be because of the treatment. Per protocol analysis, which only analyzes the results for participants who complete the study, is more likely to be biased in favor of the active treatment.
Likelihood ratios (LRs) correspond to the clinical impression of how well a test rules in or rules out a given disease. A test with a single cutoff for abnormal will have two LRs, one for a positive test (LR+) and one for a negative test (LR–). Tests with multiple cutoffs (i.e., very low, low, normal, high, very high) can have a different LR for each range of results. A test with an LR of 1.0 indicates that it does not change the probability of disease. The higher above 1 the LR is, the better it rules in disease (an LR greater than 10 is considered good). Conversely, the lower the LR is below 1, the better the test result rules out disease (an LR less than 0.1 is considered good).
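The arithmetic behind a likelihood ratio is easiest to see in code; the sketch below (with hypothetical numbers) converts a pretest probability to odds, applies the LR, and converts back to a post-test probability.

```python
# How a likelihood ratio revises the probability of disease (hypothetical numbers).
# post-test odds = pretest odds x LR; odds = p / (1 - p); p = odds / (1 + odds).

def post_test_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

pretest = 0.30                      # 30% pretest probability (hypothetical)
for lr in (10.0, 2.0, 1.0, 0.1):
    print(f"LR = {lr:4}: post-test probability = {post_test_probability(pretest, lr):.0%}")
# LR = 10 raises 30% to about 81%; LR = 0.1 lowers it to about 4%; LR = 1 leaves it unchanged.
```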
A multiple-treatments meta-analysis allows you to compare treatments directly (for example, head-to-head trials) and indirectly (for example, against a first-line treatment). This increases the number of comparisons available and may allow the development of decision tools for effective treatment prioritization.
The absolute risk reduction (ARR) can be used to calculate the number needed to treat (NNT), which is the number of patients who need to be treated to prevent one additional bad outcome. For example, if the annual mortality is 20% in the control group and 10% in the treatment group, then the ARR is 10% (20 – 10), and the number needed to treat is 100% ÷ ARR (100 ÷ 10) = 10 per year. That is, for every 10 patients who are treated for one year, one additional death is prevented. The same calculation can be made for harmful events: the number needed to harm (NNH) is the number of patients who need to receive an intervention instead of the alternative for one additional patient to experience an adverse event. The NNH is calculated as 1/ARI, where ARI is the absolute risk increase. For example, if a drug causes serious bleeding in 2% of patients in the treatment group over one year compared with 1% in the control group, the ARI is 1%, and the number needed to harm is 100% ÷ (2% – 1%) = 100 per year.
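Using the same figures as the paragraph above, the calculation looks like this (a minimal sketch):

```python
# Number needed to treat (NNT) and number needed to harm (NNH),
# using the event rates quoted in the paragraph above.

def number_needed_to_treat(control_rate: float, treatment_rate: float) -> float:
    arr = control_rate - treatment_rate            # absolute risk reduction
    return 1.0 / arr

def number_needed_to_harm(treatment_harm_rate: float, control_harm_rate: float) -> float:
    ari = treatment_harm_rate - control_harm_rate  # absolute risk increase
    return 1.0 / ari

print(f"NNT = {number_needed_to_treat(0.20, 0.10):.0f} per year")   # mortality 20% vs 10% -> 10
print(f"NNH = {number_needed_to_harm(0.02, 0.01):.0f} per year")    # bleeding 2% vs 1%  -> 100
```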
In an observational study of a drug or other treatment, the patient chooses whether or not to take the drug or to have the surgery being studied. This may introduce unintentional bias. For example, patients who choose to take hormone therapy probably are different from those who do not. Experimental studies, most commonly randomized controlled trials (RCTs), avoid this bias by randomly assigning patients to groups. The only difference between groups in a well-designed RCT is the treatment intervention, so it is more likely that differences between groups are caused by the treatment. When good observational studies disagree with good RCTs, the RCT should be trusted.
Observational studies usually report their results as odds ratios or relative risks. Both are measures of the size of an association between an exposure (e.g., smoking, use of a medication) and a disease or death. A relative risk of 1.0 indicates that the exposure does not change the risk of disease. A relative risk of 1.75 indicates that patients with the exposure are 1.75 times more likely to develop the disease, or have a 75% higher risk of disease. Odds ratios are a way to estimate relative risks in case-control studies, when the relative risks cannot be calculated directly. The approximation is accurate when the disease is rare, but it is not as good when the disease is common.
Patient-oriented evidence (POE) refers to outcomes of studies that measure things a patient would care about, such as improvement in symptoms, morbidity, quality of life, cost, length of stay, or mortality. Essentially, POE indicates whether use of the treatment or test in question helped a patient live a longer or better life. Any POE that would change practice is a POEM (patient-oriented evidence that matters).
Simple randomization does not guarantee balance in numbers during a trial. If patient characteristics change with time, early imbalances cannot be corrected. Permuted block randomization ensures balance over time. The basic idea is to randomize patients in blocks of size 2m, such that within each block m patients are allocated to A and m to B.
Predictive values help interpret the results of tests in the clinical setting. The positive predictive value (PV+) is the percentage of patients with a positive or abnormal test who have the disease in question. The negative predictive value (PV–) is the percentage of patients with a negative or normal test who do not have the disease in question. Although the sensitivity and specificity of a test do not change as the overall likelihood of disease changes in a population, the predictive value does change. For example, the PV+ increases as the overall probability of disease increases, so a test that has a PV+ of 30% when disease is rare may have a PV+ of 90% when it is common. Similarly, the PV changes with a physician’s clinical suspicion that a disease is or is not present in a given patient.
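The prevalence dependence can be demonstrated with Bayes' theorem; the sketch below uses a hypothetical test that is 90% sensitive and 90% specific.

```python
# Positive and negative predictive values from sensitivity, specificity, and prevalence.
# The test characteristics (90% sensitive, 90% specific) are hypothetical.

def predictive_values(sensitivity: float, specificity: float, prevalence: float) -> tuple[float, float]:
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    true_neg = specificity * (1.0 - prevalence)
    false_neg = (1.0 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

for prevalence in (0.01, 0.50):
    ppv, npv = predictive_values(0.90, 0.90, prevalence)
    print(f"prevalence {prevalence:.0%}: PV+ = {ppv:.0%}, PV- = {npv:.0%}")
# the same test: PV+ is only about 8% when disease is rare but 90% when it is common
```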
Whenever an illness is suspected, physicians should begin with an estimate of how likely it is that the patient has the disease. This estimate is the pretest probability. After the patient has been interviewed and examined, the results of the clinical examination are used to revise this probability upward or downward to determine the post-test probability. Although usually implicit, this process can be made more explicit using results from epidemiologic studies, knowledge of the accuracy of tests, and Bayes’ theorem. The post-test probability from the clinical examination then becomes the starting point when ordering diagnostic tests or imaging studies and becomes a new pretest probability. After the results are reviewed, the probability of disease is revised again to determine the final post-test probability of disease.
Studies often use relative risk reduction to describe results. For example, if mortality is 20% in the control group and 10% in the treatment group, there is a 50% relative risk reduction ([20 – 10] ÷ 20) x 100%. However, if mortality is 2% in the control group and 1% in the treatment group, this also indicates a 50% relative risk reduction, although it is a different clinical scenario. Absolute risk reduction subtracts the event rates in the control and treatment groups. In the first example, the absolute risk reduction is 10%, and in the second example it is 1%. Reporting absolute risk reduction is a less dramatic but more clinically meaningful way to convey results.
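Both example scenarios reduce to two lines of arithmetic; a minimal sketch:

```python
# Relative versus absolute risk reduction for the two scenarios above.

def risk_reductions(control_rate: float, treatment_rate: float) -> tuple[float, float]:
    arr = control_rate - treatment_rate     # absolute risk reduction
    rrr = arr / control_rate                # relative risk reduction
    return arr, rrr

for control, treatment in ((0.20, 0.10), (0.02, 0.01)):
    arr, rrr = risk_reductions(control, treatment)
    print(f"control {control:.0%} vs treatment {treatment:.0%}: ARR = {arr:.0%}, RRR = {rrr:.0%}")
# both show a 50% relative risk reduction, but very different absolute risk reductions
```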
A run-in period is a brief period at the beginning of a trial before the intervention is applied. In some cases, run-in periods are appropriate (for example, to wean patients from a previously prescribed medication). However, run-in periods to assess compliance and ensure treatment responsiveness create a bias in favor of the treatment and reduce generalizability.
The number of patients in a study, called the sample size, determines how precisely a research question can be answered. There are two potential problems related to sample size. A large study can give a precise estimate of effect and find small differences between groups that are statistically significant, but that may not be clinically meaningful. On the other hand, a small study might not find a difference between groups (even though such a difference may actually exist and may be clinically meaningful) because it lacks statistical power. The “power” of a study takes various factors into consideration, such as sample size, to estimate the likelihood that the study will detect true differences between two groups.
Sensitivity is the percentage of patients with a disease who have a positive test for the disease in question. Specificity is the percentage of patients without the disease who have a negative test. Because it is unknown if the patient has the disease when the tests are ordered, sensitivity and specificity are of limited value. They are most valuable when very high (greater than 95%). A highly Sensitive test that is Negative tends to rule Out the disease (SnNOut), and a highly Specific test that is Positive tends to rule In the disease (SpPIn).
Often, there are many studies of varying quality and size that address a clinical question. Systematic reviews can help evaluate the studies by posing a focused clinical question, identifying every relevant study in the literature, evaluating the quality of these studies by using predetermined criteria, and answering the question based on the best available evidence. Meta-analyses combine data from different studies; this should be done only if the studies were of good quality and were reasonably homogeneous (i.e., most had generally similar characteristics).
Studies of treatments, whether the treatment is a drug, device, or other intervention, must be randomized controlled trials. Because most new, relevant medical information involves advances in treatment, these studies must sustain rigorous review.
Studies of diagnostic tests, whether in a laboratory or as part of the physical examination, must demonstrate that the test is accurate at identifying the disease when it is present, that the test does not identify the disease when it is not present, and that it works well over a wide spectrum of patients with and without the disease.
Only systematic reviews (overviews), including meta-analyses, will be considered.
The main threats to studies of prognosis are initial patient identification and loss to follow-up. Only prognosis studies that identify patients before they have the outcome of importance and follow up with at least 80 percent of patients are included.
Decision analysis involves choosing an action after formally and logically weighing the risks and benefits of the alternatives. Although all clinical decisions are made under conditions of uncertainty, this uncertainty decreases when the medical literature includes directly relevant, valid evidence. When the published evidence is scant, or less valid, uncertainty increases. Decision analysis allows physicians to compare the expected consequences of pursuing different strategies under conditions of uncertainty. In a sense, decision analysis is an attempt to construct POEMs artificially out of disease-oriented evidence.
Qualitative research uses nonquantitative methods to answer questions. While this type of research is able to investigate questions that quantitative research cannot, it is at risk for bias and error on the part of the researcher. Qualitative research findings will be reported if they are highly relevant, although specific conclusions will not be drawn from the results.
These are a broadly accepted set of nine criteria to establish causality between an exposure or incidence and an effect or consequence. In general, the more criteria that are met, the more likely the relationship is causal.
Information from Hill AB. The environment and disease: association or causation? Proc R Soc Med. 1965;58(5):295-300.
4th Grade Weekly Vocabulary is a simple, yet effective way to teach students essential vocabulary critical to mastering 4th grade content as well as success on grade-level standardized tests.
Each week's lesson contains a collection of 3 math terms, 3 language arts terms, 2 science terms, and 2 social studies terms identified as essential Common Core vocabulary.
The simple, predictable format allows for students to complete the majority of learning the vocabulary words and their meanings independently, thus teaching students how to study. Parents appreciate knowing what is for homework each night, and gladly participate in routines at home to help their student successfully learn their vocabulary each week. Use in conjunction with websites like Vocabulary City and Quizlet to incorporate technology and differentiate instruction.
4th Grade Weekly Vocabulary will be available for all 36 weeks of the school year. More to come soon... |
The film The Hurricane of '38 and this website offer insights into topics in American history, including: oral histories, community responses to disasters, the growth of and reliance upon infrastructure like roads, bridges, railways, and telephone lines, the history of weather forecasting, and, indirectly, the Munich appeasement of 1938. You can use part or all of the film, or delve into the rich resources available on this website to learn more, either in a classroom or on your own.
The following activities are grouped into four categories: history, economics, geography, and civics. You can also read a few helpful hints for completing the activities at the bottom of the page.
1. Read the U.S. Storm Disasters Timeline and use it to create a map of major storm disasters in the United States. Label the year and summary of each disaster next to its location on the map.
2. Write a description of the most dramatic weather-related event you have ever witnessed or experienced. Then ask an adult -- preferably an older adult -- to describe his or her most dramatic weather-related experience. Select one of these two stories and tell it to the class. When everyone has presented a story, discuss the similarities and differences among them: Do several stories concern the same storm? Was the person's reaction to the storm primarily fear, curiosity, excitement, or something else? Did the storm bring out the best in people, or the worst?
1. Imagine that you live in Providence, Rhode Island during the Hurricane of 1938. As the storm recedes, you see a group of your friends enter stores damaged by the storm and begin to carry off merchandise. How do you respond? Why? Write a letter to a friend in which you describe the situation and your response.
2. Read about the Hurricane's aftermath to get a sense of the extent of the cleanup that was necessary after the Hurricane of 1938, and to learn about the involvement of the American Red Cross in aiding the victims. To find out how the Red Cross is aiding victims of recent disasters, visit its website. Use the information on the site to prepare a poster or collage on the Red Cross's response to a recent disaster. You might, for example, include information on the number of people the Red Cross has helped, the amount of money and blood it collected for victims, and portions of the news stories on the site.
1. Read about and examine a map of the hurricane's path. Use the information you find to draw your own map tracing the route of the hurricane from its origins in the Cape Verde Islands to its end in arctic Canada. Include the dates and times listed.
2. Visit the website of the Federal Emergency Management Agency, the part of the federal government assigned to prepare for and respond to disasters. Type the name of your state into the search engine at the site to find information on times when federal disaster relief was provided to parts of your state. As a class, create a wall map of your state and label these events on the map, including the dates, the areas that were declared disaster areas, and the specific cause of each disaster.
1. Read a brief history of the National Weather Service. Today, weather-related information is much more widely available than in 1938. To prove this point, hold a class contest: select an upcoming day (such as the coming Saturday) and see who can find weather forecasts for that day for your community from the largest number of sources. Sources can include newspapers, radio and television stations, and Web sites. Only one forecast can come from each source: for example, forecasts from two different newspapers count as two sources, but forecasts from two different editions of the same newspaper only count as one source. Students should list their sources and the specifics of the forecast itself, such as the temperature and the possibility of precipitation.
2. As the film notes, the Hurricane of 1938 was overshadowed by events in Europe, so it received relatively little press coverage. Follow radio, television, newspaper, and/or Internet coverage of world events for one week. Look for examples of disasters in other countries that were caused by weather or other factors. Report on these events to the class. Did these events receive prominent media coverage, and if not, what events received greater attention that day? Do you think the disasters should have received more coverage?
Hints for the Active Learning Questions
1. Maps should reflect the information in the timeline.
2. You might wish to broaden the discussion by asking students what weather-related stories they recall from books, movies, and television, such as the classic Jack London story, "To Build a Fire," or the book (and later movie) The Perfect Storm.
1. Students should explain the reasons for their actions. You might want to challenge students by noting that in 1938, Americans had suffered through nearly a decade of the Great Depression and/or by asking if students would be more tempted to permit or join in the looting if they knew the store owner was insured.
2. You might point out that a part of the Red Cross Web site is devoted to youth services and includes information on volunteer opportunities with the Red Cross.
1. Maps should correspond to the information in the description.
2. Note that some of the documents retrieved will not be related to specific cases of disaster relief. Students also could try searching by the name of your community.
1. You might have each student make his or her own prediction for that day, based on the forecasts, and assign bonus points to the student whose prediction is most accurate.
2. So that students understand the legitimate importance of the Munich crisis of 1938, you also might want to ask them what they know about it and discuss its role in leading to the start of World War II in Europe. |
Pterygodium is a small African genus, represented by 19 species in southern Africa and one in Tanzania. The plants are slender herbs with small to medium-sized flowers, and are often hard to recognize as orchids. They are found in a variety of different habitats, ranging from fynbos and bushveld to grassland. Some species are very common and easy to find, but others are known only from one or a few sites and are rare even there. Most grow in small to large colonies. A mass display of P. acutifolium with hundreds or thousands of plants can often be seen in mountain marshes after fire, and P. alatum is normally found in dense clusters of countless individuals. Some species are known for their rather unpleasant odour, which can be quite strong. Pollination is by oil-collecting bees. Flowering occurs in spring or summer, with some species flowering mainly after fire.
The plants grow terrestrially and have underground root tubers. Most species are less than 50 cm tall, but P. magnum has erect stems of up to 1.5 m length and is thus our tallest orchid. Leaves are lanceolate (lance-shaped) and borne all along the stem; they vary in number from one to many. Inflorescences are terminal and are laxly or densely one- to many-flowered. The small or medium-sized flowers are borne on unbranched spikes, with their colour ranging from yellow to green and white, and sometimes with a purple or maroon flush. While flowers of most species are resupinate (lip facing down), there are also two species with non-resupinate flowers. Flowers are generally open and cup-like which is an important difference to the otherwise very similar and closely related genus Corycium. The small and normally narrow lip is linear to deltate and has an elongate appendage. In structure the column is very complicated, with the two anther thecae (pollen sacs) separated and situated on the corners of an elongate horseshoe-shaped connecting part, and the two stigma pads on the median carpel.
Plants of the genus Pterygodium are difficult to obtain and to grow. Cultivation requirements are similar to those for Satyrium.
Description and images: Hubert Kurzweil
In recent history, China has seen considerable amounts of emigration. In 2012, there were more than 46 million Chinese living outside of China. However, with the rapid growth of its economy, the country has become an increasingly attractive destination for people all around the world, especially those seeking employment or a new life. In 2017, over 900,000 foreigners resided in China, mostly in large cities like Beijing, Shanghai, and Shenzhen. Although this is a significant number, it is minuscule when compared to the total population of over 1.3 billion.
The relative lack of foreigners, combined with China’s history of isolationism, and the fact that China has never been a major destination for immigrants, means that the current immigration policy is completely outdated in the age of our increasingly globalized world. For non-ethnically Chinese foreigners, it is almost impossible to obtain a permanent residency or citizenship, which means that even simple tasks such as buying a train ticket or booking a hotel become an arduous and time-consuming process. As a result of these restrictive policies, foreigners are discouraged from staying in China for long periods of time, limiting the contributions they can make to the Chinese economy.
Because foreigners are so uncommon, the vast majority of Chinese people have never interacted with a foreigner. Thus, they are often ignorant of foreign cultures, contributing to xenophobia towards any non-Chinese group and a general lack of foreign language skills. This, in turn, makes it more difficult for China to project its soft power, and prevents it from being taken as seriously as it could be on the international stage. In order to reverse this trend, China should open up its borders to economic immigrants, much like Canada or the United States, and provide a realistic, attainable path to permanent residency and citizenship.
A History of Isolation
China has experienced a long history of isolationism which dates back to the Ming dynasty. It was believed at the time that by completely banning private maritime trade, piracy on the Chinese coast, which plagued the state in the 15th century, would disappear altogether. This coincided with a popular belief that China was at the geographic centre of the world, and was so economically powerful that it did not need help from the “barbaric” outside world through trade. After the Manchurians defeated the Ming dynasty and established the Qing, this policy of isolationism was maintained, and even extended so that people who lived on the coast were forced to move further inland.
Complete isolation remained in force until 1757, when Canton (now known as Guangzhou) was designated as the only port open to foreign trade. However, this did not lead to free trade with European states, but rather to the outbreak of the Opium Wars between China and the British Empire. With a superior military, Britain quickly defeated the Chinese, leading to the signing of the Treaty of Nanking. The treaty, which forced China to open its ports and cede Hong Kong to Britain, marked the beginning of what is known as the ‘Century of Humiliation’, during which China lost almost all wars fought against foreign powers and lost land, wealth, and influence. The result was increasingly prevalent anti-Western sentiment, culminating in the Boxer Rebellion of 1899.
Between 1912 and 1949, the newly founded Republic of China made efforts to modernize and open up the country, with substantial numbers of foreigners residing in foreign concessions in Tianjin and Shanghai. Nevertheless, the country closed once again following the communists’ victory in the Chinese civil war and the establishment of the People’s Republic of China. Only with the liberalization of the Chinese economy after 1978 did significant numbers of foreigners begin to arrive in China.
The Current Situation
For most of the 900,000 foreigners who currently reside in China, obtaining permanent residency or citizenship is a task that borders on the impossible; only 7,356 individuals obtained the document from 2004 to 2014. To qualify for permanent residency, a candidate must meet one of the following criteria:
- Be a high-level foreign expert holding a post in a business that promotes China’s economic, scientific and technological development, or social progress.
- Have made outstanding contributions, or be of special importance to China.
- Have made large direct investment of over US$500,000 in China.
- Come to China to be with your family, such as husband or wife, minors dependent on their parents, and senior citizens dependent on their relatives.
Obtaining a citizenship, by comparison, is much easier, provided one is able to obtain a permanent residency in the first place. As a requirement, the candidate must be able to “integrate” into Chinese society and have knowledge of Mandarin and Chinese culture. Furthermore, after gaining Chinese citizenship, the candidate must renounce all other citizenships. In 2010, only 1448 naturalized citizens were counted in the Chinese census.
Without permanent residency or citizenship, foreigners in China are required to regularly renew their work visas, which last up to a maximum of five years. In addition, they are ineligible for a resident identity card, which serves as the key document for services such as obtaining residency permits and driver’s licenses and opening bank accounts. Without it, foreigners must use their passports, which leads to even more problems: because most Chinese businesses and local government agencies have little to no experience with the procedures for dealing with foreigners, this often causes delays or even outright refusal to provide a service.
Having recently recognized this problem, the Chinese government introduced a ‘smart’ permanent residency card in July 2017, which operates much like a resident identity card: instead of requiring staff to manually input an individual’s information on a computer, it can be quickly scanned. In reality, however, this has had little effect in easing the lives of foreigners, as so few actually hold permanent residency.
By opening China’s borders to economic immigrants through policy reform, China could profit from a wide range of long-term social and economic benefits. First, as people from different ethnicities and backgrounds settle in China, many are likely to become entrepreneurs, which would drive innovation and contribute to the creation of even more jobs. In addition, immigrants who return to their country of origin would bring with them an understanding of Chinese culture, the ability to speak Mandarin, and international contacts in China. These consequences translate to more economic and soft power for China, as well as improved relations between China and the developing world.
At the same time, the introduction of sizable immigrant populations would allow for a continual cultural exchange. This would have the benefit of exposing the Chinese population to foreign languages, encouraging them to enrol in language learning programs, while simultaneously making it easier to find qualified individuals who are suited to teach such languages. Over time, increasing proficiency in foreign languages would benefit the economy by boosting trade and business links with other countries, making China a much more attractive destination for international tourists. Furthermore, by allowing Chinese people to interact with foreigners more often, it may allow for a shift away from the traditional xenophobic view that is so prevalent in China.
Furthermore, economic immigration can help fill the increasing demand for labour as China’s working population rapidly shrinks. Due to the one child policy, China’s working population is projected to decline by 23% by 2050, as the largest age groups gradually become older. Without the necessary workforce, industries will likely struggle to keep up with demand for consumer products such as electronics, clothing, and furniture. Such a labour shortage would cause the price of labour to rise, which when combined with the rapid growth of average income in the country, would make China an attractive option for immigrants from the developing world who wish to earn more than they would be able to at home. By filling new job vacancies with immigrant labour, China can avoid any shortages and continue to experience the steady economic growth that it has experienced in the past decade.
Although China has become an economic powerhouse in the past few decades, it has remained a culturally isolated state with a severe lack of interaction with other cultures. By reforming immigration policy and giving foreigners a chance to become permanent residents and Chinese citizens, China would profit from better awareness of other languages and cultures, increase soft power in the developing world, and maintain strong economic growth. If China wants to become a regional and world leader in an increasingly globalized and connected world, it must break from tradition and take a new path.
Edited by Leila Mathy |
Historically some instances of volcanic activity have been responsible for changes in weather due to ash in the atmosphere. With the ongoing eruption of the Eyjafjallajokull volcano, will Europe have a cooler-than-normal summer? Could the U.S. be affected?
—Jeff Schmeckpeper, Lockport
Because sun-blocking particles following volcanic eruptions are eventually dispersed worldwide by high-level winds, the resulting cooling is a global rather than a local affair.
However, the particles, so-called aerosols, must be injected into the stratosphere, the atmospheric layer extending from about 6 to 30 miles aloft. The layer below that, the troposphere, is the layer in which "weather" occurs. Particles injected into the troposphere are washed out of the air by precipitation, usually within a few weeks. To date, ejecta from the Iceland volcano are mainly below 20,000 feet. |
Key Difference – Exome vs Transcriptome
A gene contains coding and non-coding regions. The coding sequences are known as exons, and the intervening non-coding sequences are known as introns. The nucleotide sequence of the exons of a gene carries the genetic code used to synthesize the specific protein; hence, only the exon sequences remain in the mature mRNA molecule. The total exon content of the genome is known as the exome, and it is an important part of the genome. The genetic code of a gene is copied into the mRNA molecule, which is needed for the production of protein. The entire set of mRNA molecules transcribed in a cell or a cell population at a given time is known as the transcriptome. The key difference between exome and transcriptome is that the exome represents the sequences of the exon regions of the genome while the transcriptome represents the total mRNA of a cell or a tissue at a given time.
What is Exome?
Genes are composed of exons, introns, and regulatory sequences. Exons are the gene regions that are retained in the mRNA sequence; introns and other non-coding regions are removed during RNA processing (splicing). The nucleotide sequence of the exons determines the genetic code of the gene and hence the specific protein it encodes; only exons remain within the mRNA of a protein. The collection of all exons in the genome is known as the exome of an organism. It represents the part of the genome that is expressed as protein. In humans, the exome accounts for about 1% of the genome; it is the protein-coding portion of the human genome.
What is Transcriptome?
The transcriptome is the collection of all protein-coding and non-coding transcripts (RNAs) in a given tissue. It represents the collection of total mRNA molecules expressed by the genes in a cell or a tissue. The transcriptome of one cell type can differ from that of another cell type. The transcriptome is also dynamic – it changes with time in response to both internal and external stimuli. Even within the same tissue or the same cell type, the transcriptome can change within a few minutes.
The transcriptome differs from the exome of an organism: it includes only the exome sequences that are actually expressed. Although the exome of every cell in an organism is the same, the transcriptome differs among cells because gene expression is not the same in all cells or tissues; only the genes needed by a given cell or tissue are expressed. Gene expression is a tissue- or cell-type-specific process and is regulated by various factors, including environmental factors. Therefore, the transcriptome can vary with external environmental conditions.
The transcriptome is used as a precursor for proteomics studies, since all proteins are derived from mRNA sequences. Post-translational modifications can result in changes to the proteins, but the transcriptome still provides important basic information for proteomic studies.
What is the difference between Exome and Transcriptome?
Exome vs Transcriptome
| Exome | Transcriptome |
| The exome is the collection of the protein-coding regions of the genes. | The transcriptome is the collection of all transcribed RNA, including mRNA. |
| The exome is studied using a DNA sample. | The transcriptome is studied using an RNA sample. |
| Whole exome sequencing is the method used to study the exome. | RNA sequencing is the method used to study the transcriptome. |
Summary – Exome vs Transcriptome
Exons are the coding sequences of the genes and determine the mRNA sequences of the proteins. The collection of these coding sequences (exons) is known as the exome of an organism. Genes are transcribed into mRNA molecules prior to making proteins. The total mRNA of a cell or a tissue at any given time is known as the transcriptome. The transcriptome represents the genes that are being actively expressed into mRNA at any given time. It is cell- and tissue-specific and is affected by environmental conditions. This is the difference between exome and transcriptome.
Anacondas are the largest snakes in the world and can reach a length of 25 to 30 feet and weigh over 300 lbs.
There are four different anaconda species, which belong to the genus Eunectes.
Habits And Breeding
Anacondas have poor eyesight and hearing but are especially adept at sensing nearby movement, and they have a very keen sense of smell and taste. Unlike most other snakes, which can "taste the air" by flicking their tongue, the anaconda's tongue is not sensitive to outside stimuli other than touch. Anacondas have a special chemical receptor, the Jacobson's organ: a pair of small blind pouches or tubes situated one on either side of the nasal septum or in the buccal cavity. These structures are reduced to rudimentary pits in adult humans but are more developed as chemoreceptors in reptiles, amphibians, and some mammals.
Anacondas can attain a length of up to 35+ feet and weigh over 300 lbs. They attain sexual maturity in 3 to 4 years and mate in December and January. The female's gestation period is around 180-200 days, with the number of resulting young depending largely on the size and overall health of the female; sometimes as many as 90 to 100 may be born, although a litter of 20 to 30 babies around two to three feet in length is the normal result.
Unlike most other snake species, which lay eggs, anacondas are viviparous and thus give birth to live young. The babies emerge as perfect miniature replicas of their parents and within several days are ready to go out in search of their first meal, which often consists of small lizards, frogs, and rodents. Small nodules are present along the anaconda's belly region (vestigial remnants of hind limbs), suggesting that the ancestors of these snakes had legs at one time.
Hunting and Diet
Anacondas are carnivores (meat-eaters). They mostly hunt at night (they are nocturnal). Anacondas kill by constricting (squeezing) the prey until it can no longer breathe. Sometimes they drown the prey. Like all snakes, they swallow the prey whole, head first. The anaconda's top and bottom jaws are attached to each other with stretchy ligaments, which let the snake swallow animals wider than itself. Snakes don't chew their food; they digest it with very strong acids in the stomach. Anacondas eat pigs, deer, caiman (a type of crocodilian), birds, fish, rodents (like the capybara and agouti), and other animals. After eating a large animal, the anaconda needs no food for a long time, and rests for weeks. The young (called neonates) can care for themselves soon after birth, including hunting, but are largely defenseless against large predators. They eat small rodents (like rats and mice), baby birds, frogs and small fish.
A member of the boa family, South America's green anaconda is, pound for pound, the largest snake in the world. Its cousin, the reticulated python, can reach slightly greater lengths, but the enormous girth of the anaconda makes it almost twice as heavy.

Green anacondas can grow to more than 29 feet (8.8 meters), weigh more than 550 pounds (227 kilograms), and measure more than 12 inches (30 centimeters) in diameter. Females are significantly larger than males. Other anaconda species, all from South America and all smaller than the green anaconda, are the yellow, dark-spotted, and Bolivian varieties.

Anacondas live in swamps, marshes, and slow-moving streams, mainly in the tropical rain forests of the Amazon and Orinoco basins. They are cumbersome on land, but stealthy and sleek in the water. Their eyes and nasal openings are on top of their heads, allowing them to lie in wait for prey while remaining nearly completely submerged. |
An international research team studying the structure and organization of the brain has found that different genetic factors may affect the thickness of different parts of the cortex of the brain.
The findings of this basic neuroscience study provide clues to better understanding the complex structure of the human brain. Ultimately, knowledge of genetic factors that underlie brain structure may help to identify individuals at risk for neuropsychiatric disorders, such as autism, schizophrenia or dementia. However, further research is necessary and the road to preventing or treating these conditions based on this work remains a long one.
The team was led by researchers at the University of California, San Diego, and included scientists from Virginia Commonwealth University, Boston University, Harvard Medical School and Massachusetts General Hospital, the University of Helsinki in Finland and the Veterans Affairs San Diego Healthcare System.
In the study, published online this week in the Proceedings of the National Academy of Sciences Online Early Edition, the team used MRI brain scan data collected from more than 200 pairs of twins between the ages of 55 and 65 and created a map based on genetic correlations between measures of thickness at different places on the cortex.
Using software developed by Michael Neale, Ph.D., professor of psychiatry and human genetics in the VCU School of Medicine, the team drew a genetic correlation map based on cortical thickness at thousands of points on the surface of the brain. These correlations were then analyzed to identify regions where the same genetic factors seem to have been operating. Twelve such regions in each hemisphere were identified, similar to an earlier study of measures of surface area.
"Our team has mapped genetic factors that influence the thickness of the cortex of the human brain," said Neale who was a study contributor and co-author.
"Knowledge of the genetic organization of brain structures may guide the identification of risk factors for psychiatric disorders," he said.
According to Neale, individuals differ in the thickness of these regions, and a twin study can help differentiate genetic from environmental factors that cause these differences at any one location. Twin studies also can estimate the degree to which the same versus different genetic factors affect two different characteristics.
Traditionally, maps of the human brain have been drawn using one of two types of information. The first is anatomical, such as the wrinkles on the surface, or cortex, of the brain. A second type of map, which may be called functional, is drawn from knowledge of how different parts of the brain are associated with particular functions. For example, Wernicke's area on the left side of the brain is associated with the understanding of language.
The research builds on work published last year in Science by the same research team. That article reported the initial development of the new software tool to study and explain how the brain works, and it was considered the first map of the surface of the brain based on genetic information.
Next steps for this research will include correlating measures of these regions with outcomes, such as change in cognitive abilities since age 20, or lifetime cigarette smoking.
For nearly 30 years, Neale, an internationally known expert in statistical methodology, has developed and applied statistical models in genetic studies, primarily of twins and their relatives, with the goal of better understanding the brain and behavior.
Explore the rich historical timeline of the Chickasaws from the nation’s roots to its present day existence. From the early days of European discovery to the dark periods of settler encroachment, the Chickasaws emerged a stronger nation, one that would once again gain sovereignty in later years. Today, in the days of our creative renaissance, the Chickasaw Nation is flourishing once again.
In the North American theatre of the larger Seven Years' War, France and England were fighting for colonial domination of America, the Caribbean, and India. The French, who were greatly outnumbered in America, relied on their Indian allies to help them.
The end of the war and the triumph of England meant the Chickasaws had survived France’s attempt to destroy them. During this war, the tribe also established peace with the Choctaws, ending decades of conflict and fighting. |
Fluid mechanics is the study of how fluids move and the forces on them. (Fluids include liquids and gases.) Fluid mechanics can be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. It is a branch of continuum mechanics, a subject which models matter without using the information that it is made out of atoms. The study of fluid mechanics goes back at least to the days of ancient Greece, when Archimedes made a beginning on fluid statics. However, fluid mechanics, especially fluid dynamics, is an active field of research with many unsolved or partly solved problems. Fluid mechanics can be mathematically complex. Sometimes it can best be solved by numerical methods, typically using computers. A modern discipline, called Computational Fluid Dynamics (CFD), is devoted to this approach to solving fluid mechanics problems. Also taking advantage of the highly visual nature of fluid flow is Particle Image Velocimetry, an experimental method for visualizing and analyzing fluid flow.
Relationship to continuum mechanics

Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following outline.

Continuum mechanics: the study of the physics of continuous materials.
- Solid mechanics: the study of the physics of continuous materials with a defined rest shape.
  - Elasticity: describes materials that return to their rest shape after an applied stress is removed.
  - Plasticity: describes materials that permanently deform after a large enough applied stress.
  - Rheology: the study of materials with both solid and fluid characteristics.
- Fluid mechanics: the study of the physics of continuous materials which take the shape of their container.

In a mechanical view, a fluid is a substance that does not support tangential stress; that is why a fluid at rest has the shape of its containing vessel. A fluid at rest has no shear stress.
Like any mathematical model of the real world, fluid mechanics makes some basic assumptions about the materials being studied. These assumptions are turned into equations that must be satisfied if the assumptions are to hold true. For example, consider an incompressible fluid in three dimensions. The assumption that mass is conserved means that for any fixed closed surface (such as a sphere) the rate of mass passing from outside to inside the surface must be the same as the rate of mass passing the other way. (Alternatively, the mass inside remains constant, as does the mass outside). This can be turned into an integral equation over the surface.
Fluid mechanics assumes that every fluid obeys the following:
- Conservation of mass
- Conservation of momentum
- The continuum hypothesis, detailed below.
Further, it is often useful (and realistic) to assume a fluid is incompressible - that is, the density of the fluid does not change. Liquids can often be modelled as incompressible fluids, whereas gases cannot.

Similarly, it can sometimes be assumed that the viscosity of the fluid is zero (the fluid is inviscid). Gases can often be assumed to be inviscid. If a fluid is viscous, and its flow is contained in some way (e.g. in a pipe), then the flow at the boundary must have zero velocity. For a viscous fluid, if the boundary is not porous, the shear forces between the fluid and the boundary result in zero velocity for the fluid at the boundary. This is called the no-slip condition. For a porous medium, by contrast, at the boundary of the containing vessel the slip condition is not zero velocity, and the fluid has a discontinuous velocity field between the free fluid and the fluid in the porous medium (this is related to the Beavers and Joseph condition).
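A minimal numerical check of the incompressibility assumption (Python/NumPy, hypothetical grid and velocity field): a rigid-rotation flow has zero divergence, which is the differential form of mass conservation for a constant-density fluid.

```python
import numpy as np

# Rigid-rotation velocity field u = -y, v = x on a small 2-D grid.
x, y = np.meshgrid(np.linspace(-1.0, 1.0, 101),
                   np.linspace(-1.0, 1.0, 101), indexing="ij")
u = -y
v = x

du_dx = np.gradient(u, x[:, 0], axis=0)   # finite-difference partial derivatives
dv_dy = np.gradient(v, y[0, :], axis=1)
divergence = du_dx + dv_dy                # du/dx + dv/dy should vanish

print(np.abs(divergence).max())           # ~0: no net mass flux out of any small volume
```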
The continuum hypothesis

Fluids are composed of molecules that collide with one another and with solid objects. The continuum assumption, however, considers fluids to be continuous. That is, properties such as density, pressure, temperature, and velocity are taken to be well-defined at "infinitely" small points, defining a REV (Reference Element of Volume) at the geometric order of the distance between two adjacent molecules of fluid. Properties are assumed to vary continuously from one point to another, and are averaged values in the REV. The fact that the fluid is made up of discrete molecules is ignored.
The continuum hypothesis is basically an approximation, in the same way planets are approximated by point particles when dealing with celestial mechanics, and therefore results in approximate solutions. Consequently, assumption of the continuum hypothesis can lead to results which are not of desired accuracy. That said, under the right circumstances, the continuum hypothesis produces extremely accurate results.

Those problems for which the continuum hypothesis does not allow solutions of desired accuracy are solved using statistical mechanics. To determine whether or not to use conventional fluid dynamics or statistical mechanics, the Knudsen number is evaluated for the problem. The Knudsen number is defined as the ratio of the molecular mean free path length to a certain representative physical length scale. This length scale could be, for example, the radius of a body in a fluid. (More simply, the Knudsen number is how many times its own diameter a particle will travel on average before hitting another particle). Problems with Knudsen numbers at or above unity are best evaluated using statistical mechanics for reliable solutions.
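The Knudsen number itself is a one-line calculation; the values below are rough, illustrative figures (the mean free path of air at sea level is on the order of 70 nanometres).

```python
def knudsen_number(mean_free_path, length_scale):
    """Kn = molecular mean free path / representative physical length scale."""
    return mean_free_path / length_scale

mfp_air = 68e-9   # approximate mean free path of air at sea level, in metres

print(knudsen_number(mfp_air, 1.0))      # ~7e-8 for a metre-scale body: continuum model is fine
print(knudsen_number(mfp_air, 100e-9))   # ~0.7 for a 100 nm particle: continuum model breaks down
```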
The Navier-Stokes equations (named after Claude-Louis Navier and George Gabriel Stokes) are the set of equations that describe the motion of fluids such as liquids and gases. These equations state that changes in momentum of fluid particles depend only on the external pressure and internal viscous forces (similar to friction) acting on the fluid. Thus, the Navier-Stokes equations describe the balance of forces acting at any given region of the fluid.

The Navier-Stokes equations are differential equations which describe the motion of a fluid. Such equations establish relations among the rates of change of the variables of interest. For example, the Navier-Stokes equations for an ideal fluid with zero viscosity state that acceleration (the rate of change of velocity) is proportional to the derivative of internal pressure.

This means that solutions of the Navier-Stokes equations for a given physical problem must be sought with the help of calculus. In practical terms, only the simplest cases can be solved exactly in this way. These cases generally involve non-turbulent, steady flow (flow that does not change with time) in which the Reynolds number is small.

For more complex situations, such as global weather systems like El Niño or lift in a wing, solutions of the Navier-Stokes equations can currently only be found with the help of computers. This is a field of science in its own right called computational fluid dynamics.
General form of the equation

The general form of the Navier-Stokes equations for the conservation of momentum is:

$$\rho \frac{D\mathbf{v}}{Dt} = \nabla \cdot \mathbb{P} + \rho \mathbf{f}$$

where ρ is the fluid density, D/Dt is the substantive derivative (also called the material derivative), v is the velocity vector, f is the body force vector, and P is a tensor that represents the surface forces applied on a fluid particle (the stress tensor).

Unless the fluid is made up of spinning degrees of freedom like vortices, P is a symmetric tensor. In general, (in three dimensions) P has the form:

$$\mathbb{P} = \begin{pmatrix} \sigma_{xx} & \tau_{xy} & \tau_{xz} \\ \tau_{yx} & \sigma_{yy} & \tau_{yz} \\ \tau_{zx} & \tau_{zy} & \sigma_{zz} \end{pmatrix}$$

where the σ terms are normal stresses and the τ terms are tangential stresses (shear stresses).
The above is actually a set of three equations, one per dimension. By themselves, these aren't sufficient to produce a solution. However, adding conservation of mass and appropriate boundary conditions to the system of equations produces a solvable set of equations.
Newtonian vs. non-Newtonian fluids

A Newtonian fluid (named after Isaac Newton) is defined to be a fluid whose shear stress is linearly proportional to the velocity gradient in the direction perpendicular to the plane of shear. This definition means that, regardless of the forces acting on a fluid, it continues to flow. For example, water is a Newtonian fluid, because it continues to display fluid properties no matter how much it is stirred or mixed. A slightly less rigorous definition is that the drag of a small object being moved through the fluid is proportional to the force applied to the object.

By contrast, stirring a non-Newtonian fluid can leave a "hole" behind. This will gradually fill up over time - this behaviour is seen in materials such as pudding and sand (although sand isn't strictly a fluid). Alternatively, stirring a non-Newtonian fluid can cause the viscosity to decrease, so the fluid appears "thinner". There are many types of non-Newtonian fluids, as they are defined to be something that fails to obey a particular property.
Equations for a Newtonian fluid

The constant of proportionality between the shear stress and the velocity gradient is known as the viscosity. A simple equation to describe Newtonian fluid behaviour is

$$\tau = \mu \frac{du}{dy}$$

where
- τ is the shear stress exerted by the fluid
- μ is the fluid viscosity - a constant of proportionality
- du/dy is the velocity gradient perpendicular to the direction of shear
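A worked numerical example of this relation, with hypothetical numbers (a thin layer of water sheared between two flat plates, SI units):

```python
mu = 1.0e-3        # dynamic viscosity of water at room temperature, Pa*s (approximate)
du = 0.5           # velocity difference between the plates, m/s
dy = 1.0e-3        # gap between the plates, m

tau = mu * du / dy # Newtonian relation: shear stress = viscosity * velocity gradient
print(tau, "Pa")   # 0.5 Pa
```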
For a Newtonian fluid, the viscosity, by definition, depends only on temperature and pressure, not on the forces acting upon it. If the fluid is incompressible and viscosity is constant across the fluid, the equation governing the shear stress (in Cartesian coordinates) is

$$\tau_{ij} = \mu \left( \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i} \right)$$

where
- τ_ij is the shear stress on the i-th face of a fluid element in the j-th direction
- v_i is the velocity in the i-th direction
- x_j is the j-th direction coordinate
If a fluid does not obey this relation, it is termed a non-Newtonian fluid, of which there are several types. |
McKinney-Vento Homeless Education Assistance Act
The face of homelessness is a child: A child who may have moved three times in a year, may be living with a family or in a shelter, and may have changed schools numerous times. A child in this situation does not worry about learning or homework, but about where to sleep or where to get something to eat.
The McKinney-Vento Homeless Assistance Act is helping children and families with enrollment, transportation, and school meals. The Bryant School District is trying to make a difference by assisting children and families with clothing, school supplies, and connecting them to resources in the community.
Causes of homelessness:
· Lack of affordable housing
· Deep poverty
· Health problems
· Domestic violence
· Natural and other disasters
· Abuse/neglect (unaccompanied youth)
Barriers to education for homeless children and youth:
· Enrollment requirements (school records, immunizations, proof of residence, and guardianship)
· High mobility resulting in lack of school stability and education continuity
· Lack of access to programs
· Lack of transportation
· Lack of school supplies, clothing, etc.
· Poor health, fatigue, hunger
Who are homeless children and youth?
Individuals who lack a fixed, adequate, and regular nighttime residence such as those:
· Living in cars, parks, public spaces, abandoned buildings, substandard housing, bus or train stations, or similar settings
· Living in hotels, motels, or trailer parks or campgrounds due to lack of affordable housing
· Awaiting foster care placement
· Children abandoned in hospitals
· Sleeping in public or private places not designed as regular sleeping accommodations for humans
· Living with others due to lack of permanent housing as a result of economic hardship
· Living in emergency or transitional shelters
· Unaccompanied Youth
· Migratory children living in the above situations |
Word problem on Percentage
In a room with 35 men, 80 percent of the occupants are women. How many women are in the room?
Answer

STEP 1: The percentage of women present in the room is given as 80. So, in a room with only women and men, the percentage of men present will be 100 – 80 = 20.

STEP 2: Since we are given the number of men in the room, it is easy to find the number of women once we know the total number of occupants. Let x be the total number of occupants. Then:

Percentage of men × total occupants = Number of men, i.e. 0.20 × x = 35, so x = 175.

STEP 3: The total number of occupants in the room is 175, and the number of men is 35. Subtract 35 from 175 to get the number of women in the room: 175 – 35 = 140 women.
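The same steps expressed as a short Python sketch:

```python
men = 35
percent_men = 1.0 - 0.80      # 80% are women, so 20% are men

total = men / percent_men     # 0.20 * total = 35  ->  total = 175
women = total - men           # 175 - 35 = 140

print(int(total), int(women)) # 175 140
```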
Pablo’s science class is growing plants. He recorded the height of his plant each day for 10 days. The plant’s height, in centimeters, over that time is listed in the table below. Pablo determines that the function is a good fit for the data. How close is his estimate to the actual data? Approximately how much does the plant grow each day?
- Create a scatter plot of the data.
- Draw the line of best fit through two of the data points.
- Find the residuals for each data point.
- Plot the residuals on a residual plot.
- Describe the fit of the line based on the shape of the residual plot.
- Use the equation to estimate the centimeters grown each day. (A worked sketch with hypothetical numbers follows below.)
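Because the original data table and fitted function are not reproduced here, the sketch below uses hypothetical heights purely to show the steps: fit a line, compute the residuals, and read the daily growth off the slope.

```python
import numpy as np

days = np.arange(1, 11)
height = np.array([2.1, 3.0, 3.9, 5.2, 6.0, 7.1, 7.9, 9.2, 10.1, 11.0])  # hypothetical data

slope, intercept = np.polyfit(days, height, 1)   # least-squares line of best fit
predicted = slope * days + intercept
residuals = height - predicted                   # observed minus predicted heights

print(f"estimated growth per day: {slope:.2f} cm")
print(np.round(residuals, 2))  # small, patternless residuals suggest a good linear fit
```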
If you plan to serve as a Bible class teacher of this material,
please be sure to use our online Teacher's Guide.
Important Note To Students And Teachers
Lesson 1 The Contrast
Lesson 2 David and Goliath
Lesson 3 David and Saul (early)
Lesson 5 David Fled
Lesson 6 David's Trials As He Fled
Lesson 7 David's Flight Became More Complex
Lesson 9 David and Nabal
Lesson 10 David, Abner, and Joab
Lesson 11 David, Uzzah, the Ark, Michal
Lesson 12 David, Bathsheba, and Uriah
Lesson 13 Why? |
Temporal range: Late Jurassic, 155–145 Ma
[Image: D. altus skeleton in Japan]
Species: † D. altus (Marsh, 1878; originally Laosaurus altus)
Dryosaurus (DRY-o-SAWR-əs; meaning 'tree lizard', Greek δρυς/drys meaning 'tree, oak' and σαυρος/sauros meaning 'lizard'; the name reflects the forested habitat, not a vague oak-leaf shape of its cheek teeth as is sometimes assumed) is a genus of ornithopod dinosaur that lived in the Late Jurassic period. It was an iguanodont (formerly classified as a hypsilophodont). Fossils have been found in the western United States, and were first discovered in the late 19th century. Valdosaurus canaliculatus and Dysalotosaurus lettowvorbecki were both formerly considered to represent species of Dryosaurus.
Discovery and naming
In 1876, Samuel Wendell Williston in Albany County, Wyoming discovered the remains of small euornithopods. In 1878, Professor Othniel Charles Marsh named these as a new species of Laosaurus, Laosaurus altus. The specific name altus, meaning "tall" in Latin, refers to it being larger than Laosaurus celer. In 1894, Marsh made the taxon a separate genus, Dryosaurus. The generic name is derived from the Greek δρῦς, drys, "tree, oak", referring to a presumed forest-dwelling life mode. Later it was often assumed to have been named after an oak-leaf shape of its cheek teeth, which, however, is absent. The type species remains Laosaurus altus, the combinatio nova is Dryosaurus altus.
The holotype, YPM 1876, was found in a layer of the Upper Brushy Basin Member of the Morrison Formation, dating from the Tithonian. It consists of a partial skeleton including a rather complete skull and lower jaws. Several other fossils from Wyoming have been referred to Dryosaurus altus. They include specimens YPM 1884: the rear half of a skeleton; AMNH 834: a partial skeleton lacking the skull from the Bone Cabin Quarry; and CM 1949: a rear half of a skeleton dug up in 1905 by William H. Utterback. From 1922 onwards in Utah, Earl Douglass discovered Dryosaurus remains at the Dinosaur National Monument. These include CM 11340: the front half of a skeleton of a very young individual; CM 3392: a skeleton with skull but lacking the tail; CM 11337: a fragmentary skeleton of a juvenile; and DNM 1016: a left ilium dug up by technician Jim Adams. Other fossils were found in Colorado. In Lily Park, Moffat County, James Leroy Kay and Albert C. Lloyd in 1955 recovered CM 21786, a skeleton lacking skull and neck. From 'Scheetz’ Quarry 1, at Uravan, Montrose County, in 1973 Peter Malcolm Galton and James Alvin Jensen described specimen BYU ESM-171R found by Rodney Dwayne Scheetz and consisting of some vertebrae, a left lower jaw, a left forelimb and two hindlimbs. Gregory S. Paul in 2010 suggested that the Utah material represented a separate species.
Apart from Dryosaurus altus, several other species have been named in the genus. The first of these was created accidentally when in 1903 Giuseppe de Stefano renamed Crocodilus phosphaticus Thomas 1893 into Dryosaurus phosphaticus; he had intended to call it Dyrosaurus phosphaticus. This was only emended by Éric Buffetaut in 1981.
Dryosaurus had a long neck, long, slender legs and a long, stiff tail. Its arms, however, with five fingers on each hand, were short. Known specimens were about 8 to 14 feet (2.4 to 4.3 m) long and weighed 170 to 200 pounds (77 to 91 kg). However, the adult size is unknown, as no known adult specimens of the genus have been found.
Dryosaurus had a horny beak and cheek teeth and, like other ornithopods, was a herbivore. Some scientists suggest that it had cheek-like structures to prevent the loss of food while the animal processed it in the mouth.
Diet and dentition
The teeth of Dryosaurus were, according to museum curator John Foster, characterized by "a strong median ridge on the lateral surface." Dryosaurus subsisted primarily on low-growing vegetation in ancient floodplains.
Growth and development
A Dryosaurus hatchling found at Dinosaur National Monument in Utah confirmed that Dryosaurus followed similar patterns of craniofacial development to other vertebrates; the eyes were proportionally large while young and the muzzle proportionally short. As the animal grew, its eyes became proportionally smaller and its snout proportionally longer.
Paleobiogeography and fossil distribution
The Dryosaurus holotype specimen YPM 1876 was discovered in Reed’s YPM Quarry 5, in the Upper Brushy Basin Member of the Morrison Formation. In the Late Jurassic Morrison Formation of western North America, Dryosaurus remains have been recovered from stratigraphic zones 2-6. A spectacular dig site near Uravan, Colorado held hundreds of D. altus fossils representing multiple stages of the animal's life cycle. This formation is a sequence of shallow marine and alluvial sediments which, according to radiometric dating, ranges from 156.3 million years old (Ma) at its base to 146.8 million years old at the top, placing it in the late Oxfordian, Kimmeridgian, and early Tithonian stages of the Late Jurassic period. In 1877 this formation became the center of the Bone Wars, a fossil-collecting rivalry between early paleontologists Othniel Charles Marsh and Edward Drinker Cope. The Morrison Formation is interpreted as a semiarid environment with distinct wet and dry seasons. The Morrison Basin, where dinosaurs lived, stretched from New Mexico to Alberta and Saskatchewan, and was formed when the precursors to the Front Range of the Rocky Mountains started pushing up to the west. The deposits from their east-facing drainage basins were carried by streams and rivers and deposited in swampy lowlands, lakes, river channels and floodplains. This formation is similar in age to the Solnhofen Limestone Formation in Germany and the Tendaguru Formation in Tanzania.
The Morrison Formation records an environment and time dominated by gigantic sauropod dinosaurs such as Camarasaurus, Barosaurus, Diplodocus, Apatosaurus and Brachiosaurus. Dinosaurs that lived alongside Dryosaurus included the herbivorous ornithischians Camptosaurus, Stegosaurus and Othnielosaurus. Predators in this paleoenvironment included the theropods Saurophaganax, Torvosaurus, Ceratosaurus, Marshosaurus, Stokesosaurus, Ornitholestes and Allosaurus; the last accounted for 70 to 75% of theropod specimens and was at the top trophic level of the Morrison food web. Other animals that shared this paleoenvironment included bivalves, snails, ray-finned fishes, frogs, salamanders, turtles, sphenodonts, lizards, terrestrial and aquatic crocodylomorphans, and several species of pterosaur. Early mammals were present in this region, such as docodonts, multituberculates, symmetrodonts, and triconodonts. The flora of the period has been revealed by fossils of green algae, fungi, mosses, horsetails, cycads, ginkgoes, and several families of conifers. Vegetation varied from river-lining forests of tree ferns and ferns (gallery forests) to fern savannas with occasional trees such as the Araucaria-like conifer Brachyphyllum.
|
The human alimentary canal, also called the gastrointestinal (GI) tract, consists of all the structures from the mouth to the anus, through which food is consumed and digested, and waste is excreted. Structures of the alimentary canal include the mouth, pharynx, esophagus, stomach, intestines, and anus. The GI tract of a mature male human measures approximately 20 feet (6 meters). The alimentary canal can be divided into the upper GI tract and the lower GI tract.
The upper GI tract is composed of the mouth, pharynx, esophagus, stomach, and the duodenum, the uppermost part of the small intestine. The mouth, also called the buccal cavity or oral cavity, contains a number of structures that help in the initial digestion of food, namely, the salivary glands, tongue, and teeth. The pharynx, the portion of the throat directly behind the mouth, serves to direct food into the esophagus and prevent it from entering the trachea, or windpipe.
The esophagus helps move ingested food towards the stomach through peristalsis, a type of wave-like muscular contraction. The second stage of digestion takes place in the stomach. As digested food passes out of the stomach, it enters the duodenum, where digestive juices from the liver and pancreas are combined.
The lower GI tract consists of most of the intestines and the anus. The intestines are divided into the small and large intestine, both of which have three subparts. Two of the sections of the small intestine are included in the lower GI tract: the jejunum and the ileum.
The jejunum is the midsection of the small intestine. It moves food from the duodenum to the ileum through peristalsis and aids in the absorption of nutrients. Much of the nutrient absorption takes place in the ileum, which is lined with villi, microscopic finger-like projections that increase surface area for greater absorption. Soluble molecules are absorbed into the blood in the ileum.
The large intestine is composed of the cecum, the colon, and the rectum. The cecum connects the small and large intestines, while the colon absorbs water and salt from the digested material before it is excreted as waste. The colon itself has four parts: the ascending colon, the transverse colon, the descending colon, and the sigmoid colon. The rectum is a temporary storage area for feces, or solid waste, before it is excreted. The last portion of the lower GI tract, the anus, is the exit point of feces, the waste product of the alimentary canal, from the body.
The liver, gallbladder, and pancreas are some other organs of the digestive system that support the function of the alimentary canal. The liver produces bile, which aids in the breakdown of ingested food in the small intestine, and the gallbladder temporarily stores bile. The pancreas secretes several digestive enzymes into the small intestine to aid in digestion.
|
Which instrument? is a listening game which helps students focus on timbre (how things sound).
- Show students the two minute video below.
- Discuss the sounds and how they are made.
- Ask students to sit in a circle. One student is in the middle and they are ‘it’. ‘It’ must close their eyes and listen to one of the instrument sounds (use the toggle on the video to allow a specific instrumental sound to be heard).
- If the student guesses correctly they select another student in the circle to take their place. |
SPSS Chi-Square Test – Quick Overview
“Chi-Square test” usually refers to the chi-square independence test: a test for investigating whether there's a relation between two categorical variables.
For example, does region ‘say anything’ about marital status? In this case, a chi-square independence test will answer whether the percentages of people who are married, never married or divorced are similar across US regions such as the West, Midwest and so on.
The reasoning behind the chi-square independence test is explained in simple language in Chi-Square Test - What Is It?.
For testing whether some hypothesized frequency distribution is likely, given your data, the one-sample chi-square test is the way to go.
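To make the two tests concrete, here is a minimal sketch of both in Python with SciPy rather than SPSS; the region-by-marital-status counts and category frequencies below are invented purely for illustration.

from scipy.stats import chi2_contingency, chisquare

# Hypothetical counts: rows = US regions, columns = married / never married / divorced.
observed = [
    [120,  90, 40],   # West
    [130,  80, 45],   # Midwest
    [110, 100, 35],   # South
]

# Chi-square independence test: is marital status related to region?
chi2, p, dof, expected = chi2_contingency(observed)
print(f"independence test: chi2={chi2:.2f}, df={dof}, p={p:.3f}")

# One-sample (goodness-of-fit) chi-square test against a hypothesized
# distribution, here simply "all three categories equally frequent".
obs = [30, 45, 25]
exp = [sum(obs) / 3] * 3
stat, p_gof = chisquare(f_obs=obs, f_exp=exp)
print(f"one-sample test: chi2={stat:.2f}, p={p_gof:.3f}")

A small p-value in the first test would suggest that marital status and region are related; in the second, that the observed frequencies do not follow the hypothesized distribution.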
SPSS Chi-Square Test Tutorials
A chi-square test evaluates whether two categorical variables are related. This tutorial explains the chi-square test in normal language. With illustrations, without mathematical formulas.
SPSS Chi-Square Independence Test is a procedure for testing whether two categorical variables are associated in any way.
SPSS one-sample chi-square test is used to test whether a single categorical variable follows a hypothesized population distribution. |
Teach your students about the basics of pronouns in a fun and engaging way that is sure to captivate your learners. 2 complete lessons, teacher lesson plans, activity pages, interactive notebook pages, teacher instructions, samples, and answer keys.
Includes the Following:
- Lesson Plans
- Interactive Notebook Student Pages
- Interactive Notebook Teacher Instructions
- Interactive Notebook Color Samples
- Student Worksheet Activities
- Answer Key
Objectives:
- Students will be able to define and identify pronouns.
- Students will be able to define and identify singular and plural pronouns.
- Students will be able to define and identify subject pronouns.
- Students will be able to replace nouns with pronouns in the subject of a sentence.
Written for 3rd grade but could easily be adapted for 2nd or 4th.
Can also be used for reteaching / reengaging and extra practice.
This product is part of my Pronouns Interactive Notebook Unit
Download the preview to check it out!
This item is a paid digital download from my TpT store
Purchasing this product grants permission for use by one teacher in his or her own classroom. If you intend to share with others, please purchase an additional license. |
This unit is part of Gilder Lehrman’s series of Common Core State Standards–based teaching resources. These units were developed to enable students to understand, summarize, and analyze original texts of historical significance. Through a step-by-step process, students will acquire the skills to analyze any primary or secondary source material.
Over the course of three lessons the students will analyze text from three documents defining American democracy: the Preamble to the United States Constitution, the...
Could you pass the US citizenship test? Take these quizzes to see how well you know the American history and civics required of people taking the naturalization test. The actual test is not multiple choice, but these are the 100 questions from which each potential citizen's 10-question civics and history exam is drawn. |
Variable quantity that mathematically describes the wave characteristics of a particle. It is related to the likelihood of the particle being at a given point in space at a given time, and may be thought of as an expression for the amplitude of the particle wave, though this is strictly not physically meaningful. The square of the wave function is the significant quantity, as it gives the probability for finding the particle at a given point in space and time. See also wave-particle duality.
Sentencelike expression that may be thought of as obtained from a sentence by substituting variables for constants occurring in the sentence. For example, “x was a parent of y” may be thought of as obtained from “Adam was a parent of Abel.” A propositional function therefore has no truth-value, becoming true or false only when its free variables are replaced by constants of appropriate syntactic categories (e.g., “Abraham was a parent of Isaac”).
Equation that expresses the relationship between the quantities of productive factors (such as labour and capital) used and the amount of product obtained. It states the amount of product that can be obtained from every combination of factors, assuming that the most efficient available methods of production are used. The production function can thus measure the marginal productivity of a particular factor of production and determine the cheapest combination of productive factors that can be used to produce a given output.
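As an illustration of these ideas (not something from the entry itself), the sketch below uses a standard textbook Cobb-Douglas form; the exponents, wage and capital rental rate are arbitrary assumptions.

# Hypothetical Cobb-Douglas production function: quantity produced from labour and capital.
def output(labour, capital, scale=1.0, alpha=0.7, beta=0.3):
    return scale * labour ** alpha * capital ** beta

# Marginal productivity of labour: extra output from a small extra unit of labour.
def marginal_product_of_labour(labour, capital, h=1e-6):
    return (output(labour + h, capital) - output(labour, capital)) / h

# Cheapest combination of factors that reaches a target output, by brute-force
# search over a grid (the wage and the rental rate of capital are invented prices).
def cheapest_mix(target, wage=20.0, rent=50.0):
    best = None
    for labour in range(1, 201):
        for capital in range(1, 201):
            if output(labour, capital) >= target:
                cost = wage * labour + rent * capital
                if best is None or cost < best[0]:
                    best = (cost, labour, capital)
    return best  # (cost, labour, capital)

print(marginal_product_of_labour(100, 50))
print(cheapest_mix(80))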
In mathematics, one of a set of functions related to the hyperbola in the same way the trigonometric functions relate to the circle. They are the hyperbolic sine, cosine, tangent, secant, cotangent, and cosecant (written “sinh,” “cosh,” etc.). The hyperbolic equivalent of the fundamental trigonometric identity is cosh²z − sinh²z = 1. The hyperbolic sine and cosine, particularly useful for finding special types of integrals, can be defined in terms of exponential functions: sinh z = (e^z − e^(−z))/2 and cosh z = (e^z + e^(−z))/2.
In mathematics, an expression, rule, or law that defines a relationship between one variable (the independent variable) and another (the dependent variable), which changes along with it. Most functions are numerical; that is, a numerical input value is associated with a single numerical output value. The formula A = πr², for example, assigns to each positive real number r the area A of a circle with a radius of that length. The symbols f(x) and g(x) are typically used for functions of the independent variable x. A multivariable function such as w = f(x, y) is a rule for deriving a single numerical value from more than one input value. A periodic function repeats values over fixed intervals. If f(x + k) = f(x) for any value of x, f is a periodic function with a period of length k (a constant). The trigonometric functions are periodic. See also density function; exponential function; hyperbolic function; inverse function; transcendental function.
In mathematics, a function in which a constant base is raised to a variable power. Exponential functions are used to model changes in population size, in the spread of diseases, and in the growth of investments. They can also accurately predict types of decline typified by radioactive decay (see half-life). The essence of exponential growth, and a characteristic of all exponential growth functions, is that they double in size over regular intervals. The most important exponential function is e^x, the inverse of the natural logarithmic function (see logarithm).
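A short numerical check of the two claims above, using Python's math module; the starting size and growth rate are arbitrary illustrative values.

import math

x = 2.5
print(math.log(math.exp(x)))      # prints 2.5: exp and the natural log are inverses

# Exponential growth N(t) = N0 * e**(k*t) doubles every ln(2)/k time units.
N0, k = 100.0, 0.35               # illustrative starting size and growth rate
doubling_time = math.log(2) / k
for n in range(4):
    t = n * doubling_time
    print(round(t, 2), round(N0 * math.exp(k * t), 1))   # 100, 200, 400, 800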
|
Climate Change: The Unseen Force Behind Rising Food Prices?
That sneaking suspicion you get every time you arrive at the grocery checkout counter is right: food generally costs more than it did just 12 months ago. According to a recent statement presented to the U.S. House Committee on Agriculture, the Consumer Price Index, a measure of average prices for household and consumer goods, is projected to rise by 3.5 to 4.5 percent by year’s end. Prices are expected to remain high as global food production struggles to keep pace with the rising demand for commodities such as wheat and corn.
While governments and consumers decry the steady increase in food prices, groups like the United Nations Food and Agriculture Organization (FAO) are taking a harder look at some of the factors contributing to this rise—including the role of climate change. Changing climatic conditions, in particular the decline in water availability, are forcing farmers to continually adapt their agricultural production. According to the FAO, climate change has both environmental and socioeconomic outcomes for agriculture: changes in the availability and quality of land, soil, and water resources, for example, are later reflected in crop performance, which causes prices to rise.
Climate change has been blamed for greater inconsistencies in agricultural conditions, ranging from more-erratic flood and drought cycles to longer growing seasons in typically colder climates. While the increase in Earth’s temperature is making some places wetter, it is also drying out already arid farming regions close to the Equator. This year’s Intergovernmental Panel on Climate Change (IPCC) assessment report states that “increases in the frequency of droughts and floods are projected to affect local production negatively, especially in subsistence sectors at low latitudes.” The decline in production in the face of growing demand can drive up prices in markets that may lack the technology to fight environmental hazards to overall production.
Such has been the case in Australia, where the once-fruitful food-production regions of New South Wales have been subject to a severe drought for the last five years. There is evidence of shifting rainfall patterns in the region, and a growing number of Australians now view this as a repercussion of climate change. The crop failures, economic hardship in rural communities, and subsequent jump in food prices are forcing the country to reassess its approach to climate change and to consider increasing food imports, a move that would drive prices up further. Speaking on the issue last year, Mike Rann, the premier of South Australia, remarked, “what we’re seeing with this drought is a frightening glimpse of the future with global warming.”
By FAO estimates, the developing world will spend $52 billion between 2007 and 2008 on imports of wheat, corn, and other cereal crops. If current trends persist, these countries will also be worst affected by climate change’s pressure on food production and pricing, while experiencing the effects of more varied and more severe environmental conditions. Advances in technology make it unlikely that overall world food production will decline due to climate change, but agricultural capacity in large parts of Africa and Asia is expected to shift dramatically. Climate-related changes in agricultural conditions will likely only increase developing countries’ dependence on imported food, a pricey prospect considering rising global transportation costs.
This story was produced by Eye on Earth, a joint project of the Worldwatch Institute and the blue moon fund. View the complete archive of Eye on Earth stories, or contact Staff Writer Alana Herro at aherro [AT] worldwatch [DOT] org with your questions, comments, and story ideas. |
Amplitude modulation (AM):
It is a technique used in electronic communication, most commonly for transmitting information via a radio carrier wave. AM works by varying the strength of the transmitted signal in relation to the information being sent. For example, changes in signal strength may be used to specify the sounds to be reproduced by a loudspeaker, or the light intensity of television pixels.
In radio communication, a continuous wave radio-frequency signal (a sinusoidal carrier wave) has its amplitude modulated by an audio waveform before transmission. The audio waveform modifies the amplitude of the carrier wave and determines the envelope of the waveform. In the frequency domain, amplitude modulation produces a signal with power concentrated at the carrier frequency and two adjacent sidebands. Each sideband is equal in bandwidth to that of the modulating signal, and is a mirror image of the other. Amplitude modulation resulting in two sidebands and a carrier is called "double-sideband amplitude modulation" (DSB-AM). Amplitude modulation is inefficient in power usage; at least two-thirds of the power is concentrated in the carrier signal, which carries no useful information (beyond the fact that a signal is present).
To increase transmitter efficiency, the carrier may be suppressed. This produces a reduced-carrier transmission, or DSB "double-sideband suppressed-carrier" (DSB-SC) signal. A suppressed-carrier AM signal is three times more power-efficient than AM.
If the carrier is only partially suppressed, a double-sideband reduced-carrier (DSBRC) signal results. For reception, a local oscillator will typically restore the suppressed carrier so the signal can be demodulated with a product detector.
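The following Python sketch illustrates these ideas numerically: it builds a DSB-AM and a DSB-SC signal and confirms that the AM spectrum contains a carrier plus two sidebands. The carrier frequency, tone frequency and modulation index are arbitrary illustrative choices.

import numpy as np

fs = 48_000                          # sample rate, Hz
t = np.arange(4800) / fs             # 0.1 s of signal

fc, fm, m = 10_000, 440, 0.5         # carrier 10 kHz, audio tone 440 Hz, modulation index 0.5
carrier = np.cos(2 * np.pi * fc * t)
message = np.cos(2 * np.pi * fm * t)

# Standard AM: the message varies the amplitude (envelope) of the carrier.
am = (1 + m * message) * carrier

# Suppressed-carrier version (DSB-SC): message times carrier, no power spent on the carrier itself.
dsb_sc = message * carrier

# The AM spectrum shows the carrier plus two sidebands at fc - fm and fc + fm.
spectrum = np.abs(np.fft.rfft(am))
freqs = np.fft.rfftfreq(len(am), 1 / fs)
print(freqs[spectrum > 0.2 * spectrum.max()])   # approximately [9560. 10000. 10440.]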
|
Babies born to mothers who experience complications during pregnancy such as preterm birth (early birth before 37 weeks of pregnancy) and intrauterine infection (infections in the uterus) have a higher risk of a movement disorder called cerebral palsy. Cerebral palsy is a broad term used to describe a non-progressive physical disorder of movement or posture that is acquired in early life, and that results from complications in brain development. It may also be associated with intellectual disabilities, behavioural disorders, sensory defects (blindness and deafness) and seizures.
Another Cochrane review found that magnesium sulphate given to mothers before preterm birth could protect the baby's brain and improve long-term outcomes into childhood. This review aimed to assess whether magnesium sulphate given to mothers before term birth (birth at 37 weeks of pregnancy or later) could also protect the baby's brain and improve long-term outcomes.
This review included one randomised controlled study involving 135 women with mild pre-eclampsia (high blood pressure and/or protein in the urine). There was not enough evidence from this study to determine the effects of magnesium sulphate on babies born at term. Women receiving magnesium sulphate were more likely to feel warm and flushed in this study than women who received a placebo, but they were not more likely to stop treatment due to side effects. The rates of haemorrhage after birth and rates of caesarean birth were similar for women who received magnesium sulphate and those who received a placebo.
More studies are needed to establish whether magnesium sulphate given to the mother at term is protective for the baby's brain. The babies in these trials should be followed up over a long period so that we can monitor the effects of magnesium on child development.
We are awaiting further information from another six studies so that they can be assessed.
There is currently insufficient evidence to assess the efficacy and safety of magnesium sulphate when administered to women for neuroprotection of the term fetus. As there has been recent evidence for the use of magnesium sulphate for neuroprotection of the preterm fetus, high-quality randomised controlled trials are needed to determine the safety profile and neurological outcomes for the term fetus. Strategies to reduce maternal side effects during treatment also require evaluation.
Magnesium sulphate is extensively used in obstetrics for the treatment and prevention of eclampsia. A recent meta-analysis has shown that magnesium sulphate is an effective fetal neuroprotective agent when given antenatally to women at risk of very preterm birth. Term infants account for more than half of all cases of cerebral palsy, and the incidence has remained fairly constant. It is important to assess if antenatal administration of magnesium sulphate to women at term protects the fetus from brain injury, and associated neurosensory disabilities including cerebral palsy.
To assess the effectiveness of magnesium sulphate given to women at term as a neuroprotective agent for the fetus.
We searched the Cochrane Pregnancy and Childbirth Group's Trial Register (31 July 2012) and the reference lists of other Cochrane reviews assessing magnesium sulphate in pregnancy.
Randomised controlled trials comparing antenatally administered magnesium sulphate to women at term with placebo, no treatment or a different fetal neuroprotective agent. We also planned to include cluster-randomised trials, and exclude cross-over trials and quasi-randomised trials. We planned to exclude studies reported as abstracts only.
Two review authors independently assessed trials for eligibility and for risk of bias. Two authors independently extracted data. Data were checked for accuracy.
We included one trial (involving 135 women with mild pre-eclampsia at term). An additional six studies are awaiting further assessment.
The included trial compared magnesium sulphate with a placebo and was at a low risk of bias. The trial did not report any of this review's prespecified primary outcomes. There was no significant difference between magnesium sulphate and placebo in Apgar score less than seven at five minutes (risk ratio (RR) 0.51; 95% confidence interval (CI) 0.05 to 5.46; 135 infants), nor gestational age at birth (mean difference (MD) -0.20 weeks; 95% CI -0.62 to 0.22; 135 infants).
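For readers unfamiliar with these statistics, the sketch below shows how a risk ratio and its 95% confidence interval are derived from a 2x2 table using the standard log (Katz) method; the counts are hypothetical and are not the data from the included trial.

import math

def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl):
    """Risk ratio with a 95% confidence interval (log / Katz method)."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se_log_rr = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lower, upper

# Hypothetical 2x2 table: 40 of 68 treated women report flushing vs 10 of 67 controls.
rr, lower, upper = risk_ratio(40, 68, 10, 67)
print(f"RR {rr:.2f}; 95% CI {lower:.2f} to {upper:.2f}")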
There were significantly more maternal side effects (feeling warm and flushed) in the magnesium sulphate group than in the placebo group (RR 3.81; 95% CI 2.22 to 6.53; 135 women). However, no significant difference in adverse effects severe enough to cease treatment was observed (RR 3.04; 95% CI 0.13 to 73.42; 135 women). There were no significant differences seen between groups in the rates of postpartum haemorrhage (RR 4.06; 95% CI 0.47 to 35.38; 135 women) and caesarean section (RR 0.80; 95% CI 0.39 to 1.63; 135 women). |
Launching on Saturday, December 9, 2006, Space Shuttle Discovery headed to the International Space Station (ISS). The mission is just one more step in completing the ISS and solidly establishing our first foothold on our journey beyond Earth.
President George W. Bush recently called for a renewed spirit of discovery, and experts have been working to match his timeline: complete the ISS by 2010, plant humans on the moon permanently by 2020, and then work on colonizing Mars.
The hurdles to reach our nearest neighbor are numerous. Identifying how to live and work in space for long periods of time is an even greater challenge. Every space mission helps scientists learn more. During this week's lesson, you will hear experts explain why getting to the moon is important. Then, you will design your own space station, take a tour on the ISS, and see how lessons learned there help pave the way for future missions.
Why the Moon?
Start your exploration by visiting NASA, where their experts explain Why the Moon? Begin by watching the video in which President Bush announces the nation's next push Into the Cosmos, located in the Related Multimedia column on the right-hand side.
Next, watch the videos in the Lunar Exploration Theme section: Human Civilization, Scientific Knowledge, Exploration Preparation, Global Partnerships, Economic Expansion, and Public Engagement.
In the Related Multimedia column, learn about the next generation of space exploration by checking out NASA's New Spacecraft. Finish off your exploration here by viewing the video, Moon, Mars and Beyond.
Think about what you have learned through these videos and write a paragraph or more explaining your opinion about colonizing space.
Preparing for Space Life
To live in space, people have to be able to meet their basic needs. These include eating, sleeping, exercising, and disposing of waste.
Let's explore how scientists have figured out how to meet these needs at CosmicQuest's Living in Space: Design a Space Station. Click Let's Find Out!, read the introduction, and then continue into the activity.
As you work through the activity, you will discover how scientists had to solve the problem of maintaining a healthy environment for the space station's residents. In what ways must weight, storage space, and the crew's isolation be considered when figuring out how to supply the station's water, food, and other necessities?
Continue through your training program to prepare for Working in Space. Why is recycling water so important? How long does NASA require their astronauts to exercise every day? What would happen if they did not?
Next, take the final challenge to design a self-sufficient space habitat. How has isolation from Earth's supply line affected the station's design and size? In what ways would living on the station be different from living on Earth? In what ways would it be similar?
On the ISS
Now let's visit the International Space Station to see how the theory compares to the reality of living in space. As you explore life on the station, make sure to watch some of the videos in each section.
Check out the station's Space Food menu, what kind of Space Wear to pack, and how to get some Space Sleep.
Space Work comes with its own challenges, of course, and there are times when the crew can have a little Space Fun. What has been the space station's primary purpose? Why is it important that the crew get time to relax and do things like disco-dancing?
Next, join a tour of the ISS as a Virtual Astronaut. Choose which version of the site you want to enter. (You may need to ask your computer administrator to install the Virtools player.) Apply, and then sign in to meet First Commander Bill Shepherd. Follow the training instructions to explore the station and all of the activities. To return to the main ISS menu after an activity, click the ISS icon in the top right corner.
How does weightlessness affect bones and the various types of muscles? Why do these effects influence how and for how long someone can live in zero gravity? In what ways can tracking changes in the crew's health help future crew members?
How will plant-growing experiments at the ISS help solve food supply problems for future space crews? As a space tourist, which sights might you be most interested in seeing? What would be the greatest challenge living at the space station? What conditions would you hope to change first, before going into space?
If you would like to learn more about the ISS, check out other sections of the International Space Station site, or explore NASA's Humans in Space site, which includes more information on Preparing for Space Travel, Getting to Space, Living in Space, Working in Space, and Traveling.
Over several months or longer, follow all of the news in The Cincinnati Enquirer that is related to the goals of colonizing space. Summarize each news event, including what new knowledge experts have gained and how the event contributes to the larger goals. Add photos or other images, as appropriate. Create a timeline to span the planned tracking period, and add each event to your timeline. Discuss the string of events with your class. |
Pulse Code Modulation audio is a digital recording of analog audio used in a range of technologies from telephones to Blu-ray discs. PCM works by taking analog signal amplitude samples at regular intervals several thousand times per second. Recording software initially uses the PCM format before converting audio into another format like MP3 or AAC. Additionally, compressed audio files like MP3s are decompressed back into PCM when being played back on speakers.
Raw Digital Audio
PCM audio recordings are raw digital audio samples. High-quality PCM recordings can be lossless because they capture every sample as-is, without using compression to cut out less important audio content and reduce file size. Two factors gauge PCM's quality: the sampling rate and the word length. PCM samples the audio waveform between 8,000 and 192,000 times per second. The word length, from 8 to 24 bits, determines the available dynamic range and signal-to-noise ratio. PCM also supports mono, stereo and multi-channel recordings for sending different feeds to different speakers. The WAV, AIFF and AU recording file types contain the unprocessed PCM data.
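A minimal sketch of the sampling-and-quantization idea in Python; the 44.1 kHz rate, 16-bit word length and 440 Hz test tone are illustrative choices, not a statement about any particular format.

import math

SAMPLE_RATE = 44_100                      # samples per second (assumed, CD-style)
WORD_LENGTH = 16                          # bits per sample
MAX_CODE = 2 ** (WORD_LENGTH - 1) - 1     # 32767 for signed 16-bit PCM

def pcm_samples(freq_hz, duration_s):
    """Signed 16-bit PCM codes for a pure sine tone."""
    codes = []
    for n in range(int(SAMPLE_RATE * duration_s)):
        t = n / SAMPLE_RATE                                  # time of this sample
        amplitude = math.sin(2 * math.pi * freq_hz * t)      # analog value in [-1, 1]
        codes.append(int(round(amplitude * MAX_CODE)))       # quantize to the word length
    return codes

tone = pcm_samples(440.0, 0.01)
print(len(tone), tone[:5])                # 441 samples for 10 ms, first few raw codes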
The History of PCM
PCM traces its origin back to 1937; Alec Reeves, a British engineer, developed the technology. Telephone companies started using the technology in the 1960s to send phone calls long distances between cities more efficiently. While PCM files are much larger than compressed audio like MP3 files, the format uses less bandwidth than analog audio. Telecoms commonly use Mu-Law, a PCM telephony technology, to send audio in 64 kbps PCM data streams. Different video and audio recording formats since the 1960s have regularly used or included PCM as an audio recording option.
Devices That Use PCM
Devices like audio/video equipment and computers have widely adopted the PCM format. The PCM technology makes an appearance in recording formats like 8mm, Hi8, VHS, S-VHS, audio CDs, DVD video and Blu-ray video. Computer sound cards use the PCM format for recording audio from the microphone jack and can convert compressed audio into PCM for playback. TV sets and audio/video equipment often sport PCM-labeled ports for sending uncompressed audio from the playback device to the TV or receiver.
Modern telephones can use Pulse Density Modulation instead of PCM to move audio from the microphone to the signal processor. PCM is easier to manipulate, but PDM benefits from picking up less noise and interference from other signals at a low cost. In the audio/video world, PCM competes against encoded formats including Dolby Digital, TrueHD, DTS and DTS-HD. It's common for audio/video technology to support more than one playback format. Sony's Super Audio CD technology uses a different recording technique called "Direct Stream Digital" which only records if the audio wave is moving up or down at sample points, rather than PCM's range of values. |
Teacher Guide to Women's History
Human history is full of instances of great actions by men and women and yet for years, the achievements of women have not always been celebrated. Women were given a limited role in the social structure and so very often history reads like a long list of triumphant actions by men. There have been, however, women throughout history who have managed to break down the barriers imposed on them. These are the women who have made history a more balanced narrative.
Women have made contributions in varied fields, at different times in history and in many different countries. Here is a list that is indicative of some of the major areas of women's contribution in history:
Jane Addams - An American human rights activist and founder of Hull House, who went on to win the Nobel Peace Prize at a time when women were still slowly making their presence felt in public life.
Madeline Albright - The first woman Secretary of State in the United States and a path-breaker in terms of women's role in government in the US.
We look at ten women that left their touch on the history of the world. We look at the life they led and their personal and professional challenges.
Emmeline Pankhurst and Susan B. Anthony - These are the two women who are associated strongly with women's suffrage. Pankhurst in the United Kingdom and Anthony in the US emphasized that democracy is meaningless if half the population does not have the right to vote.
Jane Austen - A quiet English author of the Regency era whose gentle observations about the life around her have stood the test of time and place and proven the universality of the language of good literature.
Marie Curie - The first woman to win two Nobel Prizes, Curie won them in Physics and Chemistry. She worked with her husband to discover polonium and radium.
Hatshepsut - This Egyptian Pharaoh stands as a representative of all the female monarchs across the centuries who used their power to be fair and thoughtful leaders.
This lesson / worksheet set explores the challenges, achievements, and lives of several women that helped shape history: Amelia Earhart and Clara Barton.
Margaret Mead - An American anthropologist who helped people take a fresh look at world cultures and popularized a whole new field of study.
Florence Nightingale - Also known as the Lady with the Lamp, she was a pioneer in establishing the principles of modern nursing. Known for her compassion, she was also unwavering in her commitment to improving the hygiene levels at hospitals.
Rosa Parks - An African-American woman who stands as a giant in terms of how her quiet but determined refusal to follow the laws of discrimination set the course of Civil Rights history in the United States.
Margaret Thatcher - Britain's first female Prime Minister and a world leader of unquestioned stature who removed all doubts about the seriousness of women politicians.
This lesson / worksheet set explores the challenges, achievements, and lives of several women that helped shape history: Eleanor Roosevelt and Susan B. Anthony.
Mother Teresa - She was moved by her faith to take on the Herculean task of serving the ill, the poor, and the ignored in her adopted hometown of Kolkata, India and showed that will power and determination can truly make a huge difference.
Billie Jean King - The American tennis player who played a significant part in making women athletes serious contenders in the eyes of the public.
As is obvious, there are women leaders, scientists, artists, athletes, activists and philosophers who have played critical roles through the history of time. This list of a dozen women is a good starting point for an exploration of all the significant contributions that women have made throughout history.
This lesson / worksheet set explores the challenges, achievements, and lives of several women that helped shape history: Sandra O'Connor, Barbara McClintock, and Hillary Clinton. |
Lubricating systems are used to apply calculated amounts of lubricant to machinery in order to prevent wear from friction. Lubricating systems are vital to manufacturing and industrial companies. Moving or rotating parts of machinery, such as dies, chains, spindles, pumps, cables, rails, bearings and gears, need to be lubricated in order to run smoothly and reliably.
Many different types of lubricating systems may be used in the same industrial plant in order to keep the assembly line moving without a hitch. To ensure effective operation, most moving parts require regular lubrication. Thankfully, there is a vast variety of lubricating systems to ensure that every piece of machinery is thoroughly lubricated. Air lubricators, for instance, supply lubrication and filtration to compressed air lines. These lubricators are often built into the line itself, providing constant lubrication to power tools and other mechanisms. Chain oilers, on the other hand, are units that dispense measured amounts of lubricant along the length of a chain or rail. Both of these systems can be automatic, running by way of preset programs rather than individual manual attention. Such systems are cost-effective and more productive, and therefore very popular in the industry. Another automatic system is central lubrication. This system, which also often attaches itself to the machine it lubricates, is able to cover more than one part of a machine at once. Other systems include gas pumps and constant level oilers.
|
4.5 Adaptive immunity
Adaptive immunity is due to the actions of two types of specialised leukocytes, known as T cells and B cells. (If you are interested, the letters denote ‘thymus’ and ‘bone marrow’, the tissues where each of these leukocytes mature.) We will describe their individual contributions to the adaptive immune response shortly, but first we focus on the most striking difference between innate and adaptive immunity. The clue lies in the word ‘adaptive’.
T cells and B cells have recognition methods that distinguish between different pathogens (e.g. different species of bacteria), and they adapt during their first encounter with a particular pathogen. The second time they meet it in the body, the adaptive response begins earlier, lasts longer and is more effective than it was on the first occasion. You can learn more about this by watching the following animation. (If you do not wish to see closed captions, use the 'CC' (captions) button to remove or reveal the subtitles.)
INSTRUCTOR: The immune system is said to be adaptive because, after the first encounter with a pathogen, it can develop a much faster response to repeat infection with the same pathogen. This adaptive response is important for vaccination and immunisation. Let's take a closer look at some graphs that illustrate this phenomenon.
This graph shows the first encounter with a pathogen, which might be, for example, the chicken-pox virus. If we chart the number of antibodies and leukocytes the body produces on the vertical axis over time on the horizontal axis, we can see that, after infection at time 0, it takes 10 days for antibody and leukocyte numbers to start increasing. This increase in production of antibodies and leukocytes lasts for just over 15 days.
Now let's take a look at the secondary adaptive immune response by plotting this on the same chart. This occurs with a repeat infection by the same pathogen. In our example, this would be a repeat contraction of the chicken-pox virus. In this case, after infection at time 0 it takes less than 5 days for antibody and leukocyte numbers to start increasing.
The production of antibodies and leukocytes lasts for over 30 days. And it is a noticeably larger increase, compared to the increase seen in our primary response. How is this possible?
The cells of our immune system that are responsible for this phenomenon are known as "B cells." B cells are designed to recognise only a specific pathogen, and so we have billions of them in our bodies. During a primary encounter with a pathogen, the B cell binds to the pathogen via receptors and eventually becomes activated. At this point, it starts dividing, producing copies of itself.
Some of these new cells become "plasma cells." This is the name given to the cells that function as antibody factories, producing antibodies that recognise the pathogen and flag it for destruction by other cells of the immune system. However, these plasma cells only live for a few days.
In contrast, some of the clone cells become memory cells, with a life span of decades. They circulate in the bloodstream, ready to produce antibodies much more quickly when they next encounter the same pathogen. It's the memory cells that produce the secondary immune response.
Thanks to scientific experimentation, we now know that it's possible to deliberately administer a pathogen to generate an immunological memory by the production of memory B cells. It's this process that underlies vaccination and other forms of immunisation.
So, there is a much faster and increased response to a subsequent encounter with a pathogen and this demonstrates the adaptability of the immune system. This response is due to the production of long-lived memory cells that circulate in the body after the primary adaptive immune response subsides. These memory cells are specifically programmed to recognise the same pathogens that triggered the primary response if they ever get into the body again. You will learn much more about these later in this session.
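As a purely illustrative aside, the sketch below turns the timings mentioned in the animation (a roughly 10-day lag and 15-day duration for the primary response, versus a shorter lag, longer duration and larger peak for the secondary response) into simple triangular curves; the curve shapes and magnitudes are assumptions, not measured data.

def response(day, lag, duration, peak):
    """Illustrative triangular rise-and-fall curve for antibody/leukocyte levels."""
    if day < lag or day > lag + duration:
        return 0.0
    half = duration / 2
    if day <= lag + half:
        return peak * (day - lag) / half          # rising phase
    return peak * (lag + duration - day) / half   # declining phase

for day in range(0, 41, 5):
    primary = response(day, lag=10, duration=15, peak=1.0)     # slower, smaller
    secondary = response(day, lag=4, duration=30, peak=4.0)    # earlier, longer, larger
    print(f"day {day:2d}: primary {primary:4.2f}  secondary {secondary:4.2f}")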
Overall, then, it is to be expected that one of the appropriate immune system responses to infection is an increase in the concentration of leukocytes in the blood circulation. This expected response can actually be tested in a laboratory by taking a blood sample from an individual who is suspected to be suffering from an infection. Blood from the sample is then smeared onto a microscope slide and air dried, and the sample can then be viewed at different magnifications using a light microscope to enable the number of leukocytes to be counted. In the activity in the next section you can test for the presence of the suspected infection by counting leukocytes in blood samples using our digital microscope. |
Sea ice covers millions of square kilometers of the Earth's ocean surface, so it significantly regulates the surface fluxes of water, heat and momentum between the ocean and the atmosphere. Sea ice is also important for the climate on Earth because it hampers gas exchange between ocean and atmosphere, reflects a large portion of sunlight, and contributes to the formation of deep and bottom waters which are part of the global ocean circulation. Examining the changes of sea ice has thus become an important field in Earth System Science.
Thickness and extent are the two main characteristics of a sea ice cover and are important indicators of climatic changes. Sea ice extent has been measured with microwave sensors from satellites since 1979 and shows a large-scale retreat of Arctic sea ice. The ice thickness in the Arctic has also decreased, as shown by upward sonar measurements from submarines since 1953. With the rapid decline of Arctic sea ice, the Antarctic sea ice cover has also attracted more scientific interest. The extent of Antarctic sea ice shows a small but significant positive trend for the period since satellite measurements began. But contrary to the Arctic, our knowledge about the long-term development of Southern Ocean sea ice thickness is still very limited. There are two main reasons for this lack of information: (1) the thickness of sea ice is still not routinely measured from space with sufficient accuracy, and (2) there are no submarine measurements of ice draft for the Antarctic. Various airborne and in-situ techniques - like electromagnetic induction sounding, laser altimetry, ship-based observations and drilling - have been successfully applied in different regions of the Southern Ocean. However, the data gained by these methods are often biased towards thin ice and provide only short snapshots of the ice thickness.
To date, the only way of monitoring the long-term variations of the sea ice thickness in the Southern Ocean is with moored upward looking sonars (ULSs). These instruments are attached to the upper end of a mooring rope and can measure over periods of up to two years. The basic principle of a ULS draft measurement is transmitting ultrasonic sound pulses towards the surface and measuring the travel time of the reflected sound signal. Knowing the sound velocity, the travel times can be converted into distances. With precise knowledge of the instrument depth, the detected time intervals can be used to calculate the thickness of the subsurface portion (draft) of the sea ice. The Alfred Wegener Institute (AWI) has maintained an array of 13 ULSs in the Atlantic sector of the Southern Ocean since 1990, which provides a unique dataset of Antarctic sea ice thickness.
This presentation introduces the ULS dataset and shows first results of the variability of sea ice thickness in the Weddell Sea. One goal of this project was to assimilate all available ULS data that have been processed by different methods since 1990. The obtained dataset shows that the monthly mean sea ice thickness at the tip of the Antarctic Peninsula has decreased by almost two meters since 1990. In contrast, the ice thickness near the Fimbul Ice Shelf in the southeastern Weddell Sea shows a positive trend for the period 2000-2008. As there were still gaps in the thickness record due to instrument failure or loss of moorings, the missing data for the eastern Weddell Sea were filled by an iterative method based on multichannel singular spectrum analysis (M-SSA).
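The draft calculation described above can be reduced to a very simple sketch (real ULS processing also corrects for sound-speed variations, instrument tilt and air pressure; the numbers here are illustrative only):

def ice_draft(travel_time_s, sound_speed_ms, instrument_depth_m):
    """Draft (m) = instrument depth minus the range from the ULS to the ice underside."""
    range_to_ice = sound_speed_ms * travel_time_s / 2.0   # halve the two-way travel time
    return instrument_depth_m - range_to_ice

# ULS moored at 150 m depth, sound speed 1440 m/s, echo returns after 0.2 s:
print(ice_draft(0.2, 1440.0, 150.0))   # 6.0 m of ice draft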
The resulting time series span a period of 12 years and enable the assessment of interannual variability. Whereas thickness changes in the eastern Weddell Sea show no distinct trend, significant changes occur close to the Antarctic coast, in the region of the Antarctic coastal current.
|
Detecting Fluorescence: Green, Yellow, Orange, and Red in the Deep Sea
Physical Sciences Inc.
What is "fluorescence"? Simply stated, fluorescence is the absorption of light at one wavelength (color) and its re-emission at a different wavelength, or color. Some things will glow with a new color when you shine the "right light" on them. The right light can differ, depending on the target. Most people are accustomed to seeing fluorescence produced by ultraviolet light, often called "black light" because humans can't see it. For underwater life, though, we have found that blue light is almost always better than ultraviolet for detecting bright fluorescence.
Process for Detecting Fluorescence
Fluorescence can be detected by either eye or camera. In the deep sea, the process for searching for fluorescence applies to both. First, we shine a bright blue light on the sea floor. Since not all of the blue light that hits the subject is absorbed, and fluorescence tends to be weak, a bright blue spot is all you will see at this stage. We then use a filter to get rid of the blue light that is reflected back, and let only the fluorescence (green, yellow, orange, or red) through to the imaging device, which is either the eye or a camera. The Figure 1 schematic shows what's going on.
Usually, we start with a light source that emits white light, which is composed of all colors. We put a filter (the excitation filter) in front of that so only the blue light comes out. When the blue light reaches a surface it may cause fluorescence, shown as green in Figure 1. What heads back toward the imager is a combination of green fluorescence plus the reflected blue light. We place a yellow filter (the barrier filter) in front of the imager to block the reflected blue and pass only the fluorescence.
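A toy model of that filter chain is sketched below; the specific wavelengths and the barrier-filter cut-off are assumptions chosen only to illustrate why the reflected blue is blocked while the fluorescence passes.

BLUE_EXCITATION_NM = 450      # light let through by the excitation filter (assumed value)
GREEN_EMISSION_NM = 520       # fluorescence emitted by the subject (assumed value)
BARRIER_CUTOFF_NM = 500       # yellow long-pass barrier filter cut-off (assumed value)

def passes_barrier(wavelength_nm):
    """A long-pass barrier filter transmits only light above its cut-off wavelength."""
    return wavelength_nm > BARRIER_CUTOFF_NM

print(passes_barrier(BLUE_EXCITATION_NM))   # False: reflected blue excitation is blocked
print(passes_barrier(GREEN_EMISSION_NM))    # True: the green fluorescence reaches the camera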
The photo series (Figure 2) demonstrates this process. In the top image, we see a scene illuminated by white light. (You can also see the flashlight, but it is not on in this image.) In the middle image, we have turned off the room lights and turned on the blue flashlight. It looks like a blue light on the scene. Finally, in the image on the bottom we put the yellow filter material between the subject and the camera, and we can now see red fluorescence from chlorophyll in the leaf and a bright fluorescence from a paper label.
Preparing the Submersible
Figure 3 shows powerful lights mounted on the front work basket of the Johnson-Sea-Link submersible. In this case, the two lights on the outside have filters that allow only ultraviolet light to pass through, while the two in the middle have filters that let blue light out. The Sea-Link's camera is fitted with a yellow blocking filter (Figure 4).
We mounted the filter in a way that lets us use the submersible's manipulator arm to swing the filter out of the way for taking images illuminated by white light, or to swing it in front of the camera for fluorescence images. The observer in the submersible sphere wears yellow filter glasses so that he or she can see the fluorescence stimulated by the light. |
As the written word is essential to the poet and writer, and the algorithmic formula is imperative to the mathematician, drawing is the essence of the artist and designer’s expression.
As an effective means of communication and thinking, drawing operates on many levels, and it is important for the artist and designer to not only comprehend these differences, but to also achieve a certain level of skill in the discipline of drawing. Drawing can be a tremendously empowering tool for communication and thinking. This article will briefly explore and loosely define the many different approaches, or types, of drawing.
In thinking about drawing methodologies and their respective purposes, at least eight distinct categories become apparent:
1. Life Drawing: drawing as a means of expression; drawing from direct observation, as in still-life or figure drawing
2. Emotive Drawing: drawing, like painting, as an expressive way to explore and put forth feeling, mood, self, time, and so on; drawing as a sensitive expression of personality
3. Sketching: drawing in order to explain or actively think through a problem; drawing through the act of visualizing; drawing actively and loosely
4. Analytic Drawing: drawing as a way to dissect, understand and represent; drawing from observation
5. Perspective Drawing: drawing as a way to represent volume, space, light, eye-level (horizon), surface planes, and scale
6. Geometric Drawing: drawing as a means to precisely represent all aspects of construction; drawing that shows measured scale, true sides, sections, and a variety of descriptive views.
7. Diagrammatic Drawing: drawing in order to investigate, explore, and document concepts and ideas; drawing as an active design process where ideas evolve due to adjacencies and happenstance
8. Illustration Drawing: drawing in order to document; drawing to clearly state and render intent, style, size, color, character, effect, and so on
The marks made for each of these drawing categories vary greatly, as do the materials, tools, techniques, and even substrates on which the drawing is produced. A graphite pencil makes a different mark than a marker, than a vine charcoal stick, than a ballpoint pen, and on and on. Newsprint paper is appropriate for some drawing materials, such as pencil, charcoal and crayon, whereas more wet mediums, such as markers or India ink may prove problematic.
Concurrently, the purpose for each of these drawing categories varies, as does the end result. A sketch can quickly document an idea upon first conception, whereas a geometric drawing requires a much longer gestational period. The sketch is of the moment and the geometric drawing is more labored. The sketch contains possibility and potential, whereas the geometric drawing is more like the ending chapter to a novel, final. The person who makes the drawing must weigh the truth and consequences of the effort and choose the method of drawing that is appropriate, the one that will provide the best result.
This is not to say that the act of drawing should not be experimental in nature. To the contrary, investigation is paramount to the creative process and the educational process. Practice drawing, experience using drawing, and exposure to comparative examples of drawing provide one with a greater ability to make choices regarding the appropriate drawing technique, material, surface, tool and approach to utilize when beginning a drawing.
Where words and formulas cannot quite describe the creative intent, drawing succeeds in being a tremendously empowering tool for communication and thinking. The artist and designer is much stronger in her ability to create with this skill mastered.
(Note: For the purposes of this article, computer-aided drawing techniques were not addressed specifically, though the author admires the inherent benefits of computer technology.)
Explore the many drawing reference materials available. |
Astronomy (from the Greek words astron (ἄστρον), "star", and nomos (νόμος), "law") is the scientific study of celestial objects (such as stars, planets, comets, and galaxies) and phenomena that originate outside the Earth's atmosphere (such as the cosmic background radiation). It is concerned with the evolution, physics, chemistry, meteorology, and motion of celestial objects, as well as the formation and development of the universe.
Astronomy is one of the oldest sciences. Astronomers of early civilizations performed methodical observations of the night sky, and astronomical artifacts have been found from much earlier periods. However, the invention of the telescope was required before astronomy was able to develop into a modern science. Historically, astronomy has included disciplines as diverse as astrometry, celestial navigation, observational astronomy, the making of calendars, and even astrology, but professional astronomy is nowadays often considered to be synonymous with astrophysics. Since the 20th century, the field of professional astronomy split into observational and theoretical branches. Observational astronomy is focused on acquiring and analyzing data, mainly using basic principles of physics. Theoretical astronomy is oriented towards the development of computer or analytical models to describe astronomical objects and phenomena. The two fields complement each other, with theoretical astronomy seeking to explain the observational results, and observations being used to confirm theoretical results.
Amateur astronomers have contributed to many important astronomical discoveries, and astronomy is one of the few sciences where amateurs can still play an active role, especially in the discovery and observation of transient phenomena.
Old or even ancient astronomy is not to be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects. Although the two fields share a common origin and a part of their methods (namely, the use of ephemerides), they are distinct.
The UN declared 2009 the International Year of Astronomy (IYA2009), with a focus on enhancing the public’s engagement with and understanding of astronomy. |
by Norris Chambers
Before understanding how microphones work, we have to learn just a little more about physics - sound waves, to be more specific. What makes a sound? A sound is nothing more than a vibration in the atmosphere that causes a response from your ear or other receiving device. Sounds are composed of different frequencies of vibration. Think of a vibration as you did AC current - a change from negative to positive at some rate - the rate is called the frequency and is expressed as so many cycles per second. In the matter of sound, it would be so many vibrations per second. A low rate of vibration produces a low tone, and a high rate of vibration produces a high pitched tone. The human ear normally can hear frequencies from around 30 cycles per second to well over 15,000. Some people can hear higher frequencies and some lower.
To simplify sound waves, as we did AC current with the battery that we kept changing the polarity to illustrate, we will take a piece of plywood about two feet in diameter and holding it in front of us, will push it forward the length of our forearms. Then we will draw it to our chest. We have performed one cycle, and in doing so, we pushed the air that was in front of the plywood forward, and when we brought it toward us, we pulled some back. This disturbance went forward through the air. That was a 1 cycle sound wave. Now imagine that you could do this about 50 times per second. If you did, you would be generating a sound wave that a person could hear as a very low roar. Now pretend that you could do this 2000 times per second. You would be generating a shrill whistle that could be heard for blocks. That is the way sound waves are made - a sound vibrates the air and the vibrations are picked up by the ear.
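As an aside not found in the original article, the idea of "so many vibrations per second" can be made concrete with a few lines of code. The sketch below, assuming the standard NumPy library is available, generates one second of a 50-cycle-per-second wave and a 2000-cycle-per-second wave as lists of air-pressure values; the only difference between the low roar and the shrill whistle is how many times per second the pressure swings back and forth.

```python
import numpy as np

SAMPLE_RATE = 44100            # samples per second, a common audio rate
DURATION = 1.0                 # seconds of sound to generate

def pressure_wave(frequency_hz):
    """Return one second of a pure tone as a series of air-pressure samples."""
    t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
    # A sine wave swings from negative to positive `frequency_hz` times per second.
    return np.sin(2 * np.pi * frequency_hz * t)

low_roar = pressure_wave(50)         # like pushing the plywood 50 times a second
shrill_whistle = pressure_wave(2000)
print(low_roar[:5], shrill_whistle[:5])
```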
The ear detects sound waves by means of a thin portion that vibrates, or moves in and out with the air pressure of the sound it is receiving. This vibration is transmitted to the brain and interpreted as a sound. Although the ear cannot hear extremely high frequencies because it cannot vibrate and transmit that fast, that does not mean that they are not there. For instance, a dog can hear much higher frequencies, and that is why the silent dog whistle is heard by the dog, but not by people. When you speak, you force air past your vocal cords, causing them to vibrate in accordance with the sounds you wish to transmit. Speech is composed of several different frequencies emitted at the same time, but the ear and brain translate them into intelligible sounds.
Now, back to the microphone.
We will examine four basic types of microphones: the carbon, the crystal, the dynamic and the condenser. These four types all convert sound waves into electronic components that vary in accordance with the sounds that they are expected to pick up.
The carbon microphone was the first practical mic and was the one used in early telephones and radio transmissions. As the name implies, it uses carbon granules, or small bits of carbon. Carbon is a conductor of electrons, but it offers considerable resistance to their flow. When the granules are packed together tighter, the resistance is less. When they are looser, it is more. When they are compressed, more current will flow through the carbon, and when they are not compressed, less current will flow. In the microphone, a diaphragm, or thin metal disk, is positioned against one side of the carbon grains, and one side of the battery is connected to it. The other pole is connected to the other side of the carbon. When a sound wave hits the thin diaphragm, the resistance of the carbon goes up and down in accordance with the vibrations of the sound wave, and of course the current goes up and down with the resistance. You have converted the sound waves into variations of electrical current. The battery is connected through the low resistance winding of a transformer, and as you remember, variations in the primary cause corresponding variations in the secondary. Because there are a few turns in the primary and many in the secondary, an AC voltage is produced that is equivalent to the sound wave.
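The paragraph above boils down to Ohm's law. The sketch below is a simplified model, not from the original article: it assumes the carbon button's resistance swings around a base value in step with the sound pressure, so the current through the battery circuit carries a copy of the sound. The component values are invented for illustration.

```python
import numpy as np

BATTERY_VOLTS = 6.0
BASE_RESISTANCE = 100.0        # ohms, carbon granules at rest (illustrative value)
RESISTANCE_SWING = 20.0        # ohms of change caused by the diaphragm (illustrative)

t = np.linspace(0.0, 0.01, 500)          # 10 milliseconds of time
sound = np.sin(2 * np.pi * 440 * t)      # a 440-cycle-per-second tone on the diaphragm

# Compressed granules -> lower resistance -> more current, and vice versa.
resistance = BASE_RESISTANCE - RESISTANCE_SWING * sound
current = BATTERY_VOLTS / resistance     # Ohm's law: the current now "contains" the sound

print(current.min(), current.max())      # the current varies in step with the tone
```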
The crystal microphone works on an entirely different principle. It uses a crystal-like material called "Rochelle salts." This material is in a thin strip, and has the peculiar property of causing electrons to gather on one side when it is bent. The more it is bent, the more electrons. In a crystal microphone, the thin disk or diaphragm is mechanically connected to the crystal, causing it to bend slightly when the diaphragm vibrates. As the sound waves strike the diaphragm, it in turn vibrates the crystal, causing it to produce a voltage that varies in proportion to the frequency and amplitude of the sound that is picked up. The same type of crystal has been widely used in phonograph pickups. The needle is connected to the crystal, and as it moves from side to side in the groove of the record, in accordance with the recording, a voltage is produced that is proportional to the frequency and amplitude of the recorded material.
A dynamic microphone uses a diaphragm also, but it is connected to a round coil of a very few turns that surrounds one pole of a permanent magnet. It is wound on a round, insulated form that fits closely over the magnet. When the diaphragm vibrates with the sound waves that strike it, it moves back and forth through the magnetic field. As you remember from the magnetic theory, when a coil is moved through a magnetic field, a voltage is generated in it. So as the sound moves it through the permanent magnetic field, it generates a voltage that varies with the vibration of the sound waves striking the diaphragm.
The condenser mic is just what the name suggests: a condenser. As you recall, a condenser (or capacitor, as it is sometimes called) consists of two plates separated by an insulator. In a condenser microphone, one plate is fixed and the other is a diaphragm of very thin conducting material. A DC voltage is applied to the condenser through a high value resistor. When the voltage is applied, the electrons form on one plate and retreat from the other. But when sound waves hit the thin diaphragm plate, it vibrates back and forth. When the plates are closer together, the capacity increases. When farther apart, the capacity decreases. The sound wave that causes the diaphragm to vibrate in and out causes the electrons to advance and retreat in accordance with the sound. Therefore, you have a varying voltage across the resistor that corresponds with your sound.
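The condenser mic can be sketched in the same way. This toy model, not from the article, assumes the charge on the plates stays roughly constant over a single vibration because of the high-value resistor, so the voltage across the condenser rises and falls as the gap between the plates changes. All component values are made up for illustration.

```python
import numpy as np

EPSILON_0 = 8.854e-12          # permittivity of free space, farads per meter
PLATE_AREA = 1e-4              # square meters (about 1 cm by 1 cm)
REST_GAP = 25e-6               # meters between the plates at rest
CHARGE = 2e-9                  # coulombs held on the plates (assumed constant)

t = np.linspace(0.0, 0.01, 500)
gap = REST_GAP + 1e-6 * np.sin(2 * np.pi * 440 * t)   # the diaphragm changes the gap

capacitance = EPSILON_0 * PLATE_AREA / gap            # closer plates -> more capacity
voltage = CHARGE / capacitance                        # V = Q / C rises and falls with the sound

print(voltage.min(), voltage.max())
```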
These four microphones have all done the same thing - presented you with a varying electrical voltage that is a representation of the sound waves. You could listen to this voltage with a headset, or amplify it and cause it to come over a loud speaker, as in a public address system. You could also use it to broadcast on radio or TV.
Next time we will find out how a speaker works. Aren't we having FUN?
Copyright © 2007 Norris Chambers |
Montessori is a philosophy with the fundamental tenet that a child learns best within a social environment, which supports each individual’s unique development. Dr. Maria Montessori, the creator of what is called “The Montessori Method of Education,” based this on her scientific observations of young children’s behavior. Through her observations she found that children learn best in an environment filled with developmentally appropriate materials that provide experiences contributing to the growth of self-motivated, independent learners.
Montessori’s Theories Include Premises Such As:
What Makes Montessori Education Unique?
The “Whole Child” Approach: The primary goal of a Montessori program is to help each child reach full potential in all areas of life. Activities promote the development of social skills, emotional growth, and physical coordination as well as cognitive preparation. The holistic curriculum, under the direction of a specially prepared teacher, allows the child to experience the joy of learning, gives time to enjoy the process, ensures the development of self-esteem, and provides the experiences from which children create their knowledge.
The “Prepared Environment”: In order for self-directed learning to take place, the whole learning environment (room, materials, and social climate) must be supportive of the learner. The teacher provides necessary resources, including opportunities for children to function in a safe and positive climate. The teacher thus gains the children’s trust, which enables them to try new things and build self-confidence.
The Montessori Materials: Dr. Montessori’s observations of the kinds of things which children enjoy and go back to repeatedly led her to design a number of multi-sensory, sequential, and self-correcting materials which facilitate learning skills and lead to the learning of abstract ideas.
The Teacher: Originally called a “Directress,” the Montessori teacher functions as a designer of the environment, resource person, role model, demonstrator, record-keeper, and meticulous observer of each child’s behavior and growth. The teacher acts as a facilitator of learning. Extensive training is required: a minimum of one full year following the baccalaureate degree, plus a year of supervised student teaching with the age group with which the teacher will work (i.e., infant and toddler, 3-6 year olds, elementary, or secondary level).
How Does It Work?
Each Montessori class operates on the principle of freedom within limits. Every program has its set of ground rules which differ from age group to age group, but is always based on core Montessori beliefs: respect of oneself, respect for each other, and respect for the environment.
Lessons are tailored to each child’s abilities and academic readiness. The teacher relies on his or her observations of the children to determine which new activities and materials may be introduced to an individual child or to a small group. The aim is to encourage active, self-directed learning and to strike a balance of individual mastery with small group collaboration within the whole group community. The three-year age span in each class provides a family-like grouping where learning can take place naturally. More experienced children share what they have learned while reinforcing their own learning. Because this peer group learning is intrinsic to Montessori, there are often more conversation/language experiences in the Montessori classroom than in conventional early education settings.
How Can a “Real” Montessori Classroom Be Identified?
Since Montessori is a word in the public domain, it is possible for any individual or institution to claim to be Montessori. But, an authentic Montessori classroom must have these basic characteristics at all levels:
Teachers educated in the Montessori philosophy and methodology for the age level they are teaching, who have the ability and dedication to put the key concepts into practice and to establish a partnership with the family.
Taken from the American Montessori Society home page. |
The Number Race has several goals, each described below. It has been scientifically tested, and its original design has been published (see the studies listed at the end of this section).
Strengthen the brain mechanisms of number processing
What are these brain mechanisms?
Our brain can process numbers in several different ways: visually as digits (“3”), verbally as number words (“three” - written or spoken), and concretely as a quantity (♥♥♥) or a position along a mental number line. Each of these is a different way in which the brain represents numbers, and there are specific brain circuits for handling each representation.
Different arithmetic tasks rely on different representations of number in the brain. For example, the digit representation is used when reading numbers written as digits or when writing them. The verbal representation is used when talking or listening to someone saying numbers, and also for storing multiplication facts in our memory (“three times five is fifteen”). The quantity representation is used to decide which of two numbers is larger, or to quickly approximate quantities.
Our brain can also transform numbers from one representation to another. For example, when we read aloud the number 5, our brain must understand the digit, transform it to its verbal representation, and instruct our speech system to say aloud the word “five”. At the same time, the brain also transforms 5 into a quantity, and we get a sense of how large the number 5 is.
Why is it important to strengthen these brain mechanisms?
The ability to handle the different representations of numbers is the cornerstone of numeric literacy. For example, if we could not transform digits into number words quickly and efficiently, reading digits aloud would be difficult for us. Being able to transform numbers into the quantity representation is especially important, because we usually see or hear numbers as digits or words, but it is the quantity representation that makes us understand the “meaning” of a number and have a sense of how large it is.
Like in many other domains, practice makes perfect: if we practice the brain in transforming numbers among representations, it processes numbers faster and faster, with fewer errors, and with less effort.
Whereas many mathematical games focus just on calculation skills, The Number Race is one of only a few games that were specifically designed to teach and practice the various representations of numbers and the transformations between them, with a special focus on the quantity representation.
How does The Number Race accomplish that?
The game presents numbers in all representations: they are written as digits; they are narrated as spoken number words; and they are visualized as quantities, by displaying sets of objects. The player has to choose the greater of two numbers, starting with concrete sets, and gradually moving through spoken and written numbers, to written numbers only. Comparing numbers encourages processing quantity and transforming the numbers from their symbolic representation to the quantity representation. Once the player responds, all three formats of the numbers are reinforced, and the concrete sets are placed in one-to-one correspondence. Finally, the tokens the player has won are moved to a racetrack, which demonstrates how numbers are mapped to a number-line-like structure.
Establish the mental number line
The Number Race teaches children to build up a mental number line by using a racetrack (similar to a game board) to map numbers to space.
Why is it important to establish a mental number line?
When we think about numbers, we often imagine them on a mental “number line”. When we do this, we are essentially using our brain’s ability to represent space to help us understand numbers. By picturing numbers on the number line, we can understand their relative size. The number line also helps us understand the meaning of addition and subtraction, and plan strategies for adding or subtracting past decade boundaries.
How does The Number Race help build a mental number line?
In The Number Race, players win tokens which they place onto a racetrack, or linear number board, with squares numbered up to 40, in order to move their player forward. The game teaches children to “count on” the number of tokens they have won from the square they are at, as they would in a board game. Research has shown that playing linear, numbered, board games this way has huge benefits for establishing the mental number line.
Teach and practice counting
The Number Race teaches counting of numbers 1-40, including “counting on”.
Why is it important to teach counting?
Counting is a fundamental skill which allows children to work with numbers, particularly to learn to add and subtract. Learning to count progresses through a series of stages. When children first learn to recite the counting sequence, they often do so inflexibly and without really understanding its purpose. Eventually they learn to flexibly count (e.g. stop and start at different numbers, or count by twos or tens), as well as to fully understand the purpose of counting.
Young children first learn arithmetic using counting strategies. These strategies start off slow and inefficient, and progressively become faster and more efficient. For instance when asked to give the answer to 2 + 5, children will first go through a phase of “counting all” with their fingers, in which they count “one, two” fingers, then “one, two, three, four, five” fingers, then “one, two, three, four, five, six, seven” fingers, to get the answer. Eventually they will learn to “count on” from the larger number, “six, seven”. However to be able to do this, they need to quickly identify the larger number, and to use their counting flexibly.
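A tiny sketch (not part of the original text; the function names are just illustrative) contrasts the two strategies described above for 2 + 5:

```python
def count_all(a, b):
    """Beginner strategy: count out both numbers, then count the whole pile."""
    fingers = ["finger"] * a + ["finger"] * b
    total = 0
    for _ in fingers:              # "one, two, three, ..." over every finger
        total += 1
    return total

def count_on(a, b):
    """More efficient strategy: start from the larger number and count on."""
    start, rest = max(a, b), min(a, b)
    total = start
    for _ in range(rest):          # "six, seven"
        total += 1
    return total

print(count_all(2, 5), count_on(2, 5))   # both give 7, but count_on takes fewer steps
```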
In the long run, children will memorize some frequent addition sums, e.g. 4+3=7, 2 + 8 = 10. But they will continue to use counting for subtraction, for more complicated sums, or as a back-up strategy if memory retrieval fails.
How does The Number Race teach counting?
In The Number Race, players win tokens which they place onto a racetrack with squares numbered up to 40, in order to move their player forward. The game teaches children to “count on” the number of tokens they have won from the square they are at, as they would in a board game. Thus children have repeated opportunities to practice flexible counting with numbers 1 - 40. The Number Race reinforces the associated addition facts, e.g. if a player is on square 7, and wins 3, the game tells them “7 + 3 = 10” as their player is moved forward.
Teach and practice early addition and subtraction
The Number Race teaches and practices early addition and subtraction facts, focusing on concrete sets and the meaning of the facts.
Why is it important to teach addition and subtraction facts?
When young children are asked to add two numbers, they initially use counting strategies: e.g. when presented with the exercise 2 + 5, they count on “six, seven” – or even count all the way from 1.
Eventually they learn the more efficient strategy of memory retrieval: adults just remember that 2 + 5 = 7. However this requires a large amount of practice to become fluent. At advanced levels, The Number Race provides this practice, and pushes children to recall facts faster and faster.
How does The Number Race teach addition and subtraction facts?
The initial levels of The Number Race ask the player to compare sets of objects or numbers 1-10. In advanced levels, however, the player cannot see the number of objects in each set, and each set is annotated with an addition or subtraction exercise. To know which set is larger, the player must solve the addition or subtraction (or on the most difficult level, both!).
If the difference between the compared sets is very large, estimating the result will be enough. However the software will gradually push the player to compare sets closer and closer together, so that he/she will have to calculate the exact answer. The emphasis is on numbers 1-10, which are trained for fluency, but adding a single digit number to numbers 10 – 40 is also taught in the context of moving players along the board (e.g. “27 + 3 = 30”).
Encourage Fluency (Automatic Processing)
What is fluency?
Our brain can operate in different modes. Some tasks that we do require attention - for example, playing chess. Other tasks are performed automatically, i.e., with no need to allocate attention to them - for example, walking.
Our attention resources are limited and the brain can allocate its full attention only on one task at a time. For example, most of us cannot handle two chess games at the same time. The situation is different when it comes to automatic tasks: we can usually perform several such tasks simultaneously - e.g., we can walk, eat, and tighten a loose button in our shirt, all at the same time.
Many operations may require a lot of attention when we learn them, and then gradually become automatic. For example, think about learning how to ride a bicycle.
The Number Race aims to achieve fluency in quantity and simple arithmetic, so that calculation and number sense become effortless, and cease to place a heavy burden on our attention.
Why is fluency important?
First of all, fluent processing is usually quicker. If your calculation is fluent, you get to the result more quickly.
Fluency also matters because our attentional resources are limited. If a child can’t calculate automatically, he/she has to devote a lot of attentional resources to the calculation process. Once arithmetic is fluent, the child can concentrate his/her full resources on other tasks - such as understanding a math or physics problem.
How does The Number Race help reach fluency?
The Number Race adjusts its level of difficulty according to the player’s performance, and maintains an average success rate of 75%.
The adaptive algorithm adjusts the numerical distance between the quantities to be compared and the length of the response deadline. The algorithm also adjusts the format in which the answers to choose from are shown (sets of objects, symbolic numbers, or addition or subtraction exercises).
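The published design describes this adaptive algorithm in general terms, not as code; the sketch below is a hypothetical illustration of one way such a controller can work, shrinking the numerical gap after a streak of correct answers and widening it after any error so that accuracy settles well above chance. All parameter values are assumptions, not the software's actual settings.

```python
import random

class AdaptiveComparison:
    """Toy difficulty controller inspired by adaptive 'staircase' procedures."""

    def __init__(self):
        self.distance = 5      # numerical gap between the two quantities (assumed start)
        self.streak = 0        # consecutive correct answers so far

    def next_trial(self):
        small = random.randint(1, 10)
        return small, small + self.distance        # e.g. compare 4 vs 9

    def record_answer(self, correct):
        if correct:
            self.streak += 1
            if self.streak >= 3:                       # harden only after a streak,
                self.distance = max(1, self.distance - 1)  # so accuracy stays high
                self.streak = 0
        else:
            self.streak = 0
            self.distance = min(9, self.distance + 1)  # ease off after any error

game = AdaptiveComparison()
for _ in range(200):
    a, b = game.next_trial()
    game.record_answer(correct=random.random() < 0.8)  # stand-in for a real player
print("final gap (smaller = harder):", game.distance)
```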
Helping children with dyscalculia
What is dyscalculia?
Dyscalculia is a learning disability in mathematics. It can be a selective difficulty in math that is not necessarily accompanied by a general cognitive deficit. Dyscalculia usually results from an impairment in the brain circuits involved in mathematical cognition.
Several brain mechanisms are involved in mathematical cognition, and impairment in different brain mechanisms may result in different kinds of dyscalculia, with different symptoms. For example, if you have impairment in the brain region responsible for processing digits, you may find it difficult to read or write the number 35 but have no difficulty with the words "thirty five". If you have impairment in the brain region responsible for understanding quantities, you may be able to read and write numbers, but may have difficulties understanding the quantities they represent.
You can learn more about dyscalculia on The Number Race author Dr. Anna Wilson’s website www.aboutdyscalculia.org.
How can The Number Race help children with dyscalculia?
Think about physical injuries: if your leg got injured and you can't walk, you can still be taught how to walk again, and you would probably need to practice a lot. The physiotherapist may give you some exercises to practice specific muscles.
Similarly, when the brain is impaired in a brain region involved in mathematical cognition, it can still be taught to function better. Practice may partially restore the function of the impaired circuit, and alternative regions (nearby or in the opposite hemisphere) can also be trained. This is what The Number Race aims to do.
Often children with dyscalculia have had a lot of negative experiences with mathematics. Because it adapts to the user, The Number Race provides positive experiences with math and numbers. By building up a model of the child’s knowledge and skills, the software is able to present problems that children can do 75% of the time, but which still challenge them.
We are certainly not saying that any brain impairment can be cured, and that any child with dyscalculia can become fluent in math. Still, many children who experience difficulties in math can gain from training tools such as The Number Race and The Number Catcher.
The Number Race has been scientifically tested
Overall, scientific results on the efficacy of The Number Race suggest that the software does have an impact on core numerical processing. However, it is important to note that the software only covers a small area of the math curriculum, so the impact is limited in scope, especially for older children. Therefore it is important that the software be used in conjunction with other remediation activities.
The efficacy of version 2.0 of The Number Race has been tested in several studies, and some of them have already been published:
- Wilson, A. J., Revkin, S. K., Cohen, D., Cohen, L., & Dehaene, S. (2006). An open trial assessment of “the number race”, an adaptive computer game for remediation of dyscalculia. Behavioral and Brain Functions, 2 (20). A 4-month open-trial study, in which we examined whether a group of 9 children aged 7-9 with mathematical learning difficulties showed improvement in basic numerical cognition tasks after 5 weeks of using the software for 2 hours a week. The children improved in small number perception, comparison, and simple arithmetic (subtraction).
- Wilson, A. J., Dehaene, S., Dubois, O., & Fayol, M. (2009). Effects of an Adaptive Game Intervention on Accessing Number Sense in Low-Socioeconomic-Status Kindergarten Children. Mind, Brain and Education, 3 (4), 224-234. This study was a cross-over design which used a control software. The sample was 53 kindergarten children of low SES. The results showed clear cross-over effects in symbolic and cross-format (e.g., digits to dots) numerical comparison tasks.
- Räsänen, P., Salminen, J., Wilson, A. J., Aunio, P., & Dehaene, S. (2009). Computer-assisted intervention for children with low numeracy skills. Cognitive Development, 24, 450–472. This study, conducted in Finland with kindergarten children with low numeracy, compared the effects of The Number Race to those of a different math remediation software package (Graphogame), which also focuses on numerical comparison and matching number symbols to quantities. Both software packages yielded similar improvement in simple arithmetic, but the improvement did not hold over time. The results showed that both packages were effective in increasing numeracy in at-risk kindergarten children, but neither is a "quick fix" and they should be used in conjunction with other techniques.
- Further trials testing version 3.0 of The Number Race are currently being carried out in Sweden and New Zealand. The original design of The Number Race has been published:
Wilson, A. J., Dehaene, S., Pinel, P., Revkin, S. K., Cohen, L., & Cohen, D. (2006). Principles underlying the design of “The Number Race”, an adaptive computer game for remediation of dyscalculia. Behavioral and Brain Functions, 2, 19. |
Here are some questions that the course Words in English will address. Many of these are things most people have not thought about -- have not even considered there to be a question about. Anyone with a little intellectual curiosity who sees these questions, however, might be surprised to realize just how little they know about the words in their language. And how easy it is, with a little guided study, to remedy that lack of knowledge.
Knowledge about the standard languages of Europe is more widespread among the educated populace of European countries than it is in this country, since such knowledge is part of the school curriculum there. Why should we be content to be so ignorant about something so fundamental to our life as the English language?
What are the languages most closely related to English?
What does it mean for a language to be 'related' to another language?
What is the relation of particular English words and morphemes (small, simple, and meaningful word elements) to words and morphemes found in other languages?
How many words are there in English?
What kinds of words are there in English, i.e. what are some ways they can be classified?
Is there any system or structure to the vocabulary?
Why is there often more than one word in English for what seems to be the same or almost the same concept?
Are there any patterns we can discover when we are faced with this situation?
What is the difference between 'native' and 'borrowed' words?
How are words formed?
What kinds of word parts are there in English?
How do prefixes differ from suffixes? Roots from affixes?
Why do some meaningful word parts (morphemes) occur in slightly different forms in different words? (a-morphous vs. an-aerobic: a/an = 'not')
How does sound change affect word elements? Is there any systematicity to such changes?
How does spelling relate to sounds, and to sound change?
How do we know which word parts can be combined? And how are they put together?
Where do words come from?
How do new words get into the language?
What kinds of changes do they undergo?
What happens to words when they've been in the language a long time?
Where do morphemes come from?
How has the English vocabulary evolved?
What was it like in earlier times?
What kinds of changes has it undergone?
Have even the ways of building or forming words in the language changed?
What is slang? Jargon?
How do people use words to create group solidarity?
What is 'standardization' of a language? What is linguistic prescriptivism?
© 2005-2008 Suzanne Kemmer
Last modified 15 Aug 13 |
What is needed for germination of a seed?
Germination is the process by which a plant grows from a seed into a seedling. The most common example of germination is the sprouting of a seedling from a seed of an angiosperm or gymnosperm. However, the growth of a sporeling from a spore, for example the growth of hyphae from ...
In order for seeds to germinate, they need the proper combination of oxygen, moisture, temperature, and sometimes light. As the seed takes up water, it activates enzymes that direct the germination process.
The conditions for seed germination include an appropriate amount of water in the surrounding soil, available nutrients, a suitable carbon dioxide concentration, and the absence of chemicals like abscisic acid, which can cause a seed to remain in a dormant state for a long time.
Things Needed for a Seed to Germinate. ... plays a huge role in the germination process. Some seeds germinate better when the soil temperature is 55 degrees Fahrenheit or cooler, while other seeds prefer the soil temperature to be closer to 70 or 80 degrees Fahrenheit.
Do enzymes help in seed germination? A seed contains an embryo plant. It also contains a food store on which the embryo will rely while it is germinating, until it has grown leaves and can start to photosynthesise.
Germination of Seeds The seed of a higher plant is a small package produced in a flowering plant or gymnosperm containing an embryo and stored food reserves.
For germination to take place the seed must be viable or living and should have sufficient food for its germination. The following environmental conditions must exist.
pH levels have nothing to do with seed germination. All seeds need to germinate is moisture and warmth. However, after the seed has germinated, the level of pH will be a factor in the growth of the plant.
Studies have shown that small seeds often require light for germination while large seeds are usually indifferent to their exposure ... seeds collected in south-west Queensland, Australia required a warm stratification pre-treatment to alleviate dormancy, and light to terminate dormancy and ...
What Conditions Do Seeds Need to Germinate?. Germinating seeds is an exciting and interesting way to increase your garden variety. During germination, seeds come out of their dormant state and burst through the seed coating. Some seeds have unusual germination requirements, so always research ...
Your best source of information on the germination of seeds is 'Seed Germination, Theory and Practice' by Norm Deno. ... Many seeds need a cold moist period before they will sprout. The essentials are moisture, air, cold and time.
Seed germination requires moisture and a viable growing medium. Check seed packets for germination and cultural tips.
Seeds remain dormant or inactive until conditions are right for germination. All seeds need water, oxygen, and proper temperature in order to germinate.
A seed is a small embryonic plant enclosed in a covering called the seed coat, usually with some stored food. It is the product of the ripened ovule of gymnosperm and angiosperm plants which occurs after fertilization and some growth within the mother plant. The formation of the seed completes ...
In seeds with epigeal germination, the cotyledons emerge from the seed coat green, ... However, some tiny seedlings can suspend the need to get to sunlight for quite some time. For example, orchid seeds (Orchidaceae) are minute and contain very little sustenance.
What Do Seeds Need to Start to Grow? ... The germination stage ends when a shoot emerges from the soil. But the plant is not done growing. It's just started. Plants need water, warmth, nutrients from the soil, and light to continue to grow.
A botanical seed consists of an embryonic plant that is in resting form. Seed germination is the basic phase in the growth of any plant.
Find out about the role of water, temperature, light and gases for seed germination, and how scientists at Kew's Millennium Seed Bank work to germinate seeds.
Seeds and germination. A seed may be defined as an embryonic plant in a state of arrested development, supplied with food materials, and protected by one or more seed coats.
[...] temperature can reduce seed germination and viability. Reduced ambient water potential during phase 1 results in reduced seed water content. O2 is required for respiration, but excess water limits diffusion of O2; there is a trade-off between sufficient water and too much water.
What conditions are needed for seed germination? Working in groups students need to fill out the Student Designed Experiment form and identify the variable they want to test. They have done the control in the skill building activity.
Germination of Seeds Germination is the resumption of growth of the embryo plant inside the seed.
Essentials for seed germination include: air (carbon dioxide), water and warmth. Light is also a crucial component of seed germination, though it is regarded as
Understanding the basic biology of grass seed germination will reinforce your knowledge of planting a lawn and help answer the “Why is it done this way?” line of questions.
Seeds and Seed Germination As we examine the life history of a plant we need to start somewhere. Because humans start growing plants of interest from propagules, it makes some sense to start with seeds.
Materials needed for seed germination in the dark . Materials needed for germination in the light
Do you expect every grass seed to grow? How long until grass first appears? Will old seed still sprout? Learn what will improve grass seed germination rates.
When seeds are developing, the seeds go through several stages, the last being dehydration to become an inactive seed. The inactive seed is what can be bought at the store in packets.
Describe the process of seed germination. What factors are required to "break dormancy"? Does seed germination in plants require ATP energy? Which of the following do you believe is required for seed germination? water oxygen sunlight soil?
Germination of a seed includes all the changes that take place from the time when a dry, viable seed starts to grow, when placed under suitable conditions of germination, to the time when the seedling becomes
SEED GERMINATION A seed contains an embryonic plant in a resting condition, and germination is its resumption of growth. Seeds will begin to germinate when the soil temperature is in the appropriate range and when water and oxygen are available.
Seed Germination. A seed certainly looks dead. It does not seem to move, to grow, nor do anything. In fact, even with biochemical tests for the metabolic processes we associate with life (respiration, etc.) the rate of these processes is so slow that it would be difficult to determine whether ...
Germination is the process in which a plant or fungus emerges from a seed or spore and begins growth. The most common example of germination is the sprouting of a seedling from a seed of an angiosperm or gymnosperm.
What does germinate mean? Origin: 1600–10; from Latin germinātus (past participle of germināre, to sprout or bud), equivalent to germin- (see germinal) + -ātus (-ate).
Germination Requirements. All seeds need some moisture for the germination process to begin, but how much or how little depends on the type of plant it will grow into.
Learn about seed germination with this fun science experiment for kids. ... While light can be an important trigger for germination, some seeds actually need darkness to germinate, if you buy seeds it should mention the requirements for that specific type of seed in the instructions.
Light needed germination – While many seeds need to be placed under the soil in order to germinate, there are some that actually need light in order to germinate. Burying these seeds below the soil will keep them from germinating.
Germination is the process in which a seed or spore awakens from dormancy and starts to sprout. For germination to start, a seed...
The four external requirements for seed germination are temperature, water, oxygen, and light/darkness.
Mature seeds are dormant with very few metabolic processes going on. The resumption of growth in a seed is called germination. Seeds need oxygen and water to germinate.
Seed Germination Process Most plant life starts from the humble seed, leaf through this article to understand the process of seed germination.
The dried seed will remain dormant until the outside conditions are right for germination. For one thing, this means that the temperature must be right.
Some seeds need light for germination, while in some seeds germination is hindered by light. Most wild species of flowers and herbs prefer darkness for germination and should be planted deep in the soil while most modern vegetable crops prefer light or are not affected by it, and are ...
Concepts Germination is the awakening of a seed (embryo) from a resting state. It involves the harnessing of energy stored within the seed and is activated by components in the environment.
In order to have successful seed germination, you will need to provide appropriate temperature, moisture, and relative humidity, and sterile...
What is required for germination of a seed? In order for seeds to germinate, they need the proper combination of oxyge...
Germination Process. What You Need To Know About The Germination Process. Knowing a little about the seed germination process can make a difference in the quality of crops or the health of a plant you are starting from seed, and a lack of knowledge can sometimes result in no germination at all.
In this germination experiment we studied a 4-seed bird mix. We wanted to find out what percentage of the seeds in the bird food were able to germinate.
In a word, seeds need the correct soil temperature, moisture, humidity, and light to germinate. Supplying these requirements correctly is not too difficult, but it is necessary.
Germination Temperature The effect of soil temperature on sown seeds. Percentage of Normal Vegetable Seedlings Produced at Different Temperatures.
The science behind handshakes
A new study suggests shaking hands is more than just a common greeting.
Researchers at Israel's Weizmann Institute of Science say it's actually a way of smelling each other - much like animals do to learn more about one another. Except people are much more discreet.
The researchers used hidden cameras to observe more than 270 people while they waited alone. During that time the volunteers put their hands near their nose 22 percent of the time. Using a concealed instrument that measured airflow to the nose, researchers realized they were actually sniffing their hands. Staff members eventually greeted some volunteers with a handshake. Participants who shook hands with someone of the same sex were twice as likely to subtly sniff their hands afterward.
Participants who shook hands with the opposite sex, however, were more likely to sniff their non-shaking hand.
The project's researcher told New Scientist, "People constantly have a hand at their face, they are sniffing it, and they modify that behavior after shaking hands. That demonstrates that the handshaking is a chemosignalling behavior."
Chemosignals are like pheromones that transmit information from our bodies - like sweating when we're afraid. New Scientist reports it's still unclear what chemical signals are exchanged through handshakes, or why there are different results for same-sex and cross-gender handshakes, but the team at Weizmann is now looking at how the signals might be affected in behavioral conditions like autism spectrum disorders.
They believe these results are just the "tip of the iceberg."
I Scream You Scream We All Scream for Ice Cream!!
Beginning Reading Lesson
In order for children to read and spell words, it is essential that students learn that letters stand for phonemes and that spellings of words map out the phonemes that we hear in spoken words. A child must learn to decode many different correspondences. In this lesson, children will learn the correspondence ea=/E/. Students will learn to associate ea with the long e sound, after seeing it in written text and listening for the sound /E/ that it makes. Teaching all the correspondences to students will help them in becoming more fluent.
Letter Box letters: s, t, r, e, a, k, m, c, l, p, n.
Lee and the Team (copies for each student)
1. The lesson should be introduced by reviewing with the students the sound that the letter e makes. “Today we will talk about the letter e, which we have discussed before with our creaky door. What sound does it make? That is correct… e=/e/. Today we are going to learn a new correspondence. Can anyone tell me what the two letters ‘ea’ sound like when they are put together in a word? You are right; they make the long E sound. The ‘ea’ together says EEEE, its name. To help me remember that ea makes the long E, I always think of the saying: ‘I scream, you scream, we all scream for ice cream.’ Let’s say it together. That was great!”
2. Now we are going to practice by using our letters to make words with ea. Let me do the first word. The word is sneak. I am going to get the letters s, n, e, a, and k and put them in four boxes. S goes in the first box, then the n goes into the next box, and the e and a go in the same box, then the k goes in the last box. The e and a share one box because together they make just one sound, /E/; each box stands for one sound that we hear in the word. Now it’s your turn to practice.
4 phonemes: [steam, creak, least, speak, treat, clear, smear]
3. You did a great job! Now, without using the letter boxes, I am going to use the letters to spell a word, and I want you to tell me what the word is. Let me show you: the letters s/n/e/a/k spell the word sneak. Now it’s your turn. When I use the letters to spell words, I want you to tell me the word I have spelled.
4. You are working so hard. Great job! Now let’s read the book Lee and the Team. While we read, let’s try to look for the words that have ea=/E/.
5. The last thing we are going to do is look at the pictures on the worksheet and circle the pictures that show ea=/E/.
Presentation on theme: "SYSTEMS OF LINEAR EQUATIONS"— Presentation transcript:
1. SYSTEMS OF LINEAR EQUATIONS: Solving Linear Systems Algebraically
2. Solving Systems of Equations Algebraically. When you graph, sometimes you cannot find the exact point of intersection. We can use algebra to find the exact point. Also, we do not need to put every equation in slope-intercept form in order to determine if the lines are parallel or the same line. Algebraic methods will give us the same information.
3. Methods of Solving Systems Algebraically. We will look at TWO methods to solve systems algebraically: 1) Substitution, 2) Elimination.
4. Method 1: Substitution. Steps: (1) Choose one of the two equations and isolate one of the variables. (2) Substitute the new expression into the other equation for the variable. (3) Solve for the remaining variable. (4) Substitute the solution into the other equation to get the solution to the second variable.
5. Method 1: Substitution. Example: Equation ‘a’: 3x + 4y = - 4; Equation ‘b’: x + 2y = 2. Isolate the ‘x’ in equation ‘b’: x = - 2y + 2.
6. Method 1: Substitution. Example, continued: Equation ‘a’: 3x + 4y = - 4; Equation ‘b’: x + 2y = 2. Substitute the new expression, x = - 2y + 2, for x into equation ‘a’: 3(- 2y + 2) + 4y = - 4.
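Slide 7 is not included in this transcript; the missing algebra, assuming the standard simplification steps, would be:
3(- 2y + 2) + 4y = - 4
- 6y + 6 + 4y = - 4
- 2y + 6 = - 4
- 2y = - 10
y = 5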
8. Method 1: Substitution. Example, continued: Equation ‘a’: 3x + 4y = - 4; Equation ‘b’: x + 2y = 2. Substitute y = 5 into either equation ‘a’ or ‘b’: x + 2(5) = 2, so x + 10 = 2, and x = - 8. The solution is (-8, 5).
9. Method 2: Elimination. Steps: Line up the two equations using standard form (Ax + By = C). GOAL: The coefficients of the same variable in both equations should have the same value but opposite signs. If this doesn’t exist, multiply one or both of the equations by a number that will make the same variable coefficients opposite values.
10. Method 2: Elimination. Steps, continued: Add the two equations (like terms). The variable with opposite coefficients should be eliminated. Solve for the remaining variable. Substitute that solution into either of the two equations to solve for the other variable.
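Slides 11-15 (the worked elimination example) are not included in this transcript. A minimal illustration of the steps above, using a made-up system:
Equation ‘a’: 2x + 3y = 7; Equation ‘b’: 4x - 3y = 5.
The y-coefficients are already opposite values, so adding the two equations eliminates y: 6x = 12, so x = 2.
Substitute x = 2 into equation ‘a’: 2(2) + 3y = 7, so 3y = 3 and y = 1. The solution is (2, 1).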
16. Method 2: Elimination. Example 2, continued: 36x - 24y = 0 leads to 0 = 0. When both variables are eliminated, if the statement is TRUE (like 0 = 0), then they are the same line and there are infinite solutions; if the statement is FALSE (like 0 = 1), then they are parallel lines and there is no solution.
17. Method 2: Elimination. Example 2, continued: 36x - 24y = 0 and -36x + 24y = 0; adding them gives 0 = 0. Since 0 = 0 is TRUE, there are infinite solutions.
18. Assignment: Pg 128, #1-27, every 3rd (1, 4, 7, 10, 13, etc.).
19. Solving Systems of Three Equations Algebraically. When we have three equations in a system, we can use the same two methods to solve them algebraically as with two equations. Whether you use substitution or elimination, you should begin by numbering the equations!
20. Solving Systems of Three Equations: Substitution Method. Choose one of the three equations and isolate one of the variables. Substitute the new expression into each of the other two equations. These two equations now have the same two variables. Solve this 2 x 2 system as before. Find the third variable by substituting the two known values into any equation.
21. Solving Systems of Three Equations: Linear Combination Method. Choose two of the equations and eliminate one variable as before. Now choose one of the equations from step 1 and the other equation you didn’t use and eliminate the same variable. You should now have two equations (one from step 1 and one from step 2) that you can solve by elimination. Find the third variable by substituting the two known values into any equation. |
David Kipping, an American planetary scientist, has posted a paper to the arXiv.org preprint server with calculations of the laser power required to launch a light-sail space probe to Alpha Centauri under the “Breakthrough Starshot” project. His analysis reveals problems for the “sailboat” that follow from certain aspects of Einstein’s theory of relativity and are associated with overheating of the sail. This does not rule out the flight in principle, but it does create complications.
“Breakthrough Starshot,” the joint project of British cosmologist Stephen Hawking and Russian billionaire Yuri Milner, calls for the construction of a spacecraft based on an idea of the Californian physicist Philip Lubin. The essence of the idea is that the travelers to distant planets will not be “ordinary” spaceships but flat, extremely light craft built from reflective materials, which a powerful orbiting laser would accelerate to nearly the speed of light. Thanks to this, American physicists believe it will be possible to reach Alpha Centauri in twenty years, and to cut the duration of flights between Earth and Mars to three days without a payload on board, or to about a month with a load of up to 10 tons. At the moment, the problem of safely braking such a space probe remains unsolved, and the laser beam cannot interact stably with the accelerating “sailer,” which considerably complicates the task.
According to Einstein’s theory of relativity, the properties of matter and light change as speeds approach the speed of light: objects contract in length and their mass increases. From Kipping’s calculations of how light particles striking the sail behave at ordinary and at near-light speeds, it is possible to estimate how the laser’s effectiveness changes overall. These relativistic effects will reduce the delivered laser power by ten percent even at modest acceleration. Hence, it is necessary to increase either the laser power or its operating time, but in that case the sail itself will overheat. This would cause the sail’s electronics to malfunction and make the sail less efficient, because part of the laser energy would simply be wasted as heat in the environment.
Now the project participants are thinking about creating a new material that can reflect nearly all the incoming light, or about how to protect the electronics from inevitable damage. The first goal is realistic in principle: thin coatings of certain salts and of metals such as titanium and zinc have 99.99% reflectivity, which would make it possible to accelerate the sailboat over four hours without damaging its electronic components. This task would require the use of a phased laser with a total power of 700 megawatts.
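Neither the article nor Kipping's paper is quoted here, but the scale of the problem can be sketched with the textbook radiation-pressure formula for a perfectly reflecting sail, F = 2P/c. The snippet below plugs in the 700-megawatt, four-hour figures mentioned above together with an assumed one-gram sail mass (an assumption for illustration, not a figure from the project), and it ignores relativistic corrections, beam spread, and absorption.

```python
C = 3.0e8                # speed of light, m/s
LASER_POWER = 700e6      # watts, figure quoted in the article
BURN_TIME = 4 * 3600     # seconds (four hours of acceleration)
SAIL_MASS = 1e-3         # kilograms -- assumed gram-scale probe, not an official figure

force = 2 * LASER_POWER / C              # radiation pressure on a perfect reflector
delta_v = force * BURN_TIME / SAIL_MASS  # non-relativistic estimate of final speed

print(f"force on sail: {force:.2f} N")
print(f"final speed:   {delta_v:.2e} m/s ({delta_v / C:.2f} c)")
```

Under these idealized assumptions the probe ends up at roughly a fifth of the speed of light, which is why even a ten percent relativistic loss in delivered power, or a 0.01% shortfall in reflectivity, matters so much.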
The pupil is the black “hole” in the middle of the iris (the colored part of the eye). The pupil looks black because the inside of the eye is dark. Eye doctors dilate the pupil with eye drops to help them view the retina. There are many causes for a dilated pupil outside of the doctor’s office, however. Some people have a natural anisocoria, where one pupil is naturally larger than the other. Plant irritants, pesticides, and antihistamine medications can also make the pupil dilate. Blockage of sympathetic or parasympathetic nerves to the eye can cause a pupil to dilate – there are some serious medical conditions that can cause this blockage, such as Adie’s pupil, Horner’s Syndrome, or third nerve palsy. A new onset pupil abnormality needs to be evaluated by an eye doctor.
Like tiny nomads, malaria parasites move from human to mosquito and back again. But how do they know when to pack up and move? New research from HHMI Senior International Research Scholar Alan F. Cowman suggests that when it’s time to leave their human hosts, the parasites, a type of protozoa, send each other dispatches saying it’s time to head out.
A mosquito transmits the malaria parasite to a human host when it plunges its proboscis through the skin for a sip of blood. The protozoa travel through the bloodstream to the host’s liver, where they reproduce asexually and infect red blood cells. When it’s time to go, the protozoa develop into gametes that are taken up during a second mosquito’s bite. Inside the mosquito’s gut, the protozoa reproduce sexually and the process begins again.
Cowman’s postdoctoral fellow Neta Regev-Rudzki discovered that the protozoa were talking to each other inside their human hosts, passing vesicles between the red blood cells they had infected. As the researchers reported in Cell on May 23, 2013, vesicle production increased when the protozoa were stressed—for example, when they were exposed to an antimalarial drug—and seemed to signal to the parasites to mature into their sexual form. The communication mechanism made sense: the protozoa need a way to broadcast environmental conditions and let the community know when it’s time to catch a ride with the next mosquito.
Although Cowman and his team at the Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia, have yet to determine the exact content of the vesicles, once they do, they could have some potential drug targets. “A big aim among malaria researchers is not only to develop ways to treat the disease but also to make compounds that inhibit transmission,” says Cowman. “Blocking the passage of the parasites into the form that can spread to mosquitoes is one way to do that.” |
This two-page vocabulary worksheet on Saki's (H. H. Munro) short story has 3 parts. In Part 1A, 10 vocab words are quoted in their original sentences from the text, which the students will read in context. In Part 1B the students will match the vocab words to their definitions and give the part of speech. In Parts 2A and 2B, students will do the same with 10 more words from the short story. Part 3 is "Word Families," where students change the word to another part of speech or identify the root word (i.e., What is the noun form of communicate? communication; or What is the root word of communicate? commune). The two-page answer KEY is included, making this 4 pages in all.
Machines learn through algorithms—sets of rules or instructions—that allow them to continually improve through experience. In contrast to programming, which can tell a machine to do a specific task in the same way over and over, machine learning algorithms modify themselves over time based on the original goal set and previous results.
A simple real-life example of an algorithm is the creation or improvement of a recipe. You might follow basic sets of instructions or rules when following a recipe (e.g., preheating the oven or chopping vegetables), but have the freedom to make adjustments, like adding a little more salt or substituting one ingredient for another. The order of the steps is important, and not all substitutions are acceptable, but by making small modifications within a set of guidelines, you can improve the recipe over time.
Machine learning algorithms work in a similar way, but there are different types of learning styles depending on the outcome you’re seeking. Just as you don’t follow the same set of rules when following a recipe, driving a car, or getting dressed in the morning, machines also need different rules for different types of tasks. Understanding how the different types of machine learning algorithms, or models, work will help you choose the right one to achieve your desired outcome.
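As a minimal illustration of the difference between fixed instructions and a rule that improves with experience (this sketch is not from the original article, and the numbers are invented), the loop below only receives an error signal telling it how wrong its last guess was, and it uses that feedback to adjust its next guess:

```python
# A fixed program would output the same guess every time; this loop instead
# updates its behavior from feedback, which is the essence of "learning."
target = 42.0              # value the learner is trying to reach
guess = 0.0
learning_rate = 0.1        # how strongly each piece of feedback changes the guess

for step in range(100):
    error = target - guess          # feedback: how wrong was the last guess?
    guess += learning_rate * error  # modify the output based on previous results

print(round(guess, 2))              # converges very close to 42.0
```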
What Is Supervised Learning?
With supervised learning, the training data that the model uses to learn has a known label or outcome—there is a clear, expected answer. During the training process, the model makes predictions about the answer and is corrected when the result is incorrect. The model learns based on feedback, and accuracy improves over time until it has reached an acceptable level.
A recommendation engine for an e-commerce website is a real-life example of supervised learning. The model gathers data from previous images viewed and uses it to predict what new shoppers might buy based on their browsing habits. Every time a recommended product is added to the cart or a purchase is made, the model learns how accurate its predictions are and uses that information to improve its recommendations.
Machine learning algorithms in this category include:
- Artificial neural networks (ANN): Aim to mimic the human brain by learning to perform tasks based on provided examples
- Bayesian logic (Naive Bayes): Classifies data based on the assumption that features are independent of each other
- Decision trees: Use simple decision rules to classify data based on predefined variables
- Deep learning: Recognizes patterns based on training data, validation data, and test data
- Linear discriminant analysis: Finds a linear combination of features to classify data
- Linear regression: Predicts a dependent variable value based on a given independent variable
- Logistic regression: Classifies binary variables such as pass/fail, yes/no, healthy/sick, and so on
- Random forests: Use a combination of decision trees to deliver a mean prediction based on all of the trees
- Similarity learning: Measures how similar two objects are to each other
- Support vector machines (SVMs): Use data points in multiple dimensions to classify data based on a number of different features
- Transfer learning: Uses information learned from previous data sets to inform decisions about related problems
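For readers who want to see what this looks like in practice, here is a minimal supervised-learning sketch using scikit-learn's logistic regression, one of the algorithms listed above. The shopper features and purchase labels are invented for the example; they simply stand in for any labeled training data with a known outcome.

```python
# Minimal supervised-learning sketch: logistic regression on labeled data.
# The "purchased" labels play the role of the known outcomes described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Invented features per shopper: [pages viewed, minutes on site, items in cart].
X = np.array([[3, 5, 0], [12, 30, 2], [1, 2, 0], [8, 22, 1],
              [15, 45, 3], [2, 4, 0], [9, 18, 2], [4, 6, 0]])
y = np.array([0, 1, 0, 1, 1, 0, 1, 0])  # known label: 1 = purchased, 0 = did not

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)           # learn from labeled examples
predictions = model.predict(X_test)   # predict labels for unseen shoppers
print("accuracy:", accuracy_score(y_test, predictions))
```

Because the labels are known, the model's errors can be measured directly, which is exactly the feedback loop described above.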
What Is Unsupervised Learning?
With unsupervised learning, there is no clear answer and the model learns by deducing patterns or structures in a set of data that is not labeled. The learning is unsupervised because there is no human telling it whether the outcomes are right or wrong. It is up to the algorithm to identify the solution that makes the most sense based on the data.
Unsupervised learning is like asking an open-ended question, whereas supervised learning is more like a multiple choice question that has a single answer. An unsupervised learning algorithm example in e-commerce is automatically segmenting customers into groups to provide different user experiences. The model will use the available data to determine what clusters make the most sense—product images viewed, product searches, product attributes, and so on.
Machine learning algorithms in this category include association rule learning, which is used to find patterns in large unlabeled data sets, along with clustering and dimensionality-reduction methods such as:
- Hierarchical clustering: Puts objects into distinct groups with other objects that are similar
- Independent component analysis: Separates multiple variables into individual components
- k-Means clustering: Partitions observations into k clusters, assigning each observation to the cluster with the nearest mean
- k-NN (k nearest neighbor): Classifies an object based on the similarity to its nearest neighbors
- Principal component analysis: Creates a best-fitting line in multiple dimensions given a collection of data points
- Singular value decomposition: Factorizes a matrix into singular vectors and values
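To make the contrast with supervised learning concrete, here is a minimal clustering sketch in the spirit of the customer-segmentation example above, using scikit-learn's k-means. The customer features are invented; the point is only that no labels are supplied and the algorithm finds the groups on its own.

```python
# Minimal unsupervised-learning sketch: k-means clustering for customer segmentation.
# No labels are provided; shoppers are grouped purely by similarity of their features.
import numpy as np
from sklearn.cluster import KMeans

# Invented features per customer: [product views per week, average order value].
customers = np.array([[2, 15], [3, 18], [40, 220], [38, 200],
                      [10, 60], [12, 55], [41, 210], [2, 20]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
segments = kmeans.fit_predict(customers)          # cluster index for each customer
print("segment per customer:", segments)
print("segment centers:", kmeans.cluster_centers_)
```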
What Is Semi-Supervised Learning?
Not surprisingly, semi-supervised learning is a combination of supervised and unsupervised learning. Some of the data is labeled and some is not. There is a desired outcome, but the learning process is not as simple as supervised learning. Because not all of the data is labeled, the model must also learn how to organize it.
One example of a data set that uses semi-supervised learning is image tagging for a large collection of images for an e-commerce website. Some images are tagged, others are not, and the machine has to learn how to classify the untagged images based on the existing labels and correctly group the images together.
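As a rough sketch of this idea, the snippet below uses scikit-learn's label-spreading estimator as the semi-supervised learner: a couple of points carry tags, the rest are marked unknown, and the model propagates tags to the untagged points. The two-dimensional feature vectors are invented stand-ins for image embeddings.

```python
# Minimal semi-supervised sketch: a few tagged images, many untagged (label -1).
# LabelSpreading propagates the known tags to the unlabeled points.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Invented 2-D feature vectors standing in for image embeddings.
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.25],   # one visual cluster
              [0.90, 0.80], [0.80, 0.90], [0.85, 0.95]])  # another visual cluster
y = np.array([0, -1, -1, 1, -1, -1])  # only two images are tagged; -1 = untagged

model = LabelSpreading(kernel="knn", n_neighbors=2)
model.fit(X, y)
print("inferred tags:", model.transduction_)  # labels assigned to every image
```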
Use Multiple Learning Types in One Platform
The beauty of the Skyl.ai platform is that you can create different types of machine learning models and you don't have to be a data scientist to use it. However, it helps to have some understanding of the various types of machine learning algorithms so you know what is happening behind the scenes.
Skyl.ai uses supervised learning—specifically deep learning and transfer learning—to analyze data. This makes the platform well-suited to applications that require pattern recognition and those that require an extensive existing knowledge base, such as Natural Language Processing. To learn more about how to make your machine learning project successful, download our free checklist. |
Extreme heat can occur quickly and without warning; older adults, children, and sick or overweight individuals are at greater risk from extreme heat; humidity increases the feeling of heat as measured by a heat index.
Heat-related deaths in the United States are reported most frequently among males and among adults aged 65 and older. Almost all heat-related deaths occur from May through September, with the highest numbers reported in July.
Find places in your community where you can go to get cool.
Keep your home cool by doing the following:
- Cover windows with drapes or shades.
- Weather-strip doors and windows.
- Use window reflectors, such as aluminum foil-covered cardboard, to reflect heat back outside.
- Add insulation to keep the heat out.
- Use attic fans to clear hot air.
- Install window air conditioners and insulate around them.
Learn to recognize the signs of heat-related illness.
Be Safe DURING
Never leave a child, adult, or animal alone inside a vehicle on a warm day.
Find places with air conditioning. Libraries, shopping malls, and community centers can provide a cool place to take a break from the heat.
If you’re outside, find shade. Wear a hat wide enough to protect your face.
Wear loose, lightweight, light-colored clothing.
Drink plenty of fluids to stay hydrated. If you or someone you care for is on a special diet, ask a doctor how best to accommodate it.
Do not use electric fans when the temperature outside is more than 95 degrees, as this could increase the risk of heat-related illness. Fans create air flow and a false sense of comfort, but do not reduce body temperature.
Avoid high-energy activities.
Check yourself, family members, and neighbors for signs of heat-related illness.
RECOGNIZE and RESPOND
- Heat cramps. Signs: muscle pains or spasms in the stomach, arms, or legs. Actions: Go to a cooler location. Remove excess clothing. Take sips of a cool sports drink with salt and sugar. Get medical help if cramps last more than an hour.
- Heat exhaustion. Signs: heavy sweating, paleness, muscle cramps, tiredness, weakness, dizziness, headache, nausea or vomiting, or fainting. Actions: Go to an air-conditioned place and lie down. Loosen or remove clothing. Take a cool bath. Take sips of a cool sports drink with salt and sugar. Get medical help if symptoms get worse or last more than an hour.
- Heat stroke. Signs: extremely high body temperature (above 103 degrees taken orally); red, hot, and dry skin with no sweat; rapid, strong pulse; dizziness; confusion; or unconsciousness. Actions: Call 911 or get the person to a hospital immediately. Cool the person down with whatever methods are available until medical help arrives. |
Some problems are so challenging to solve that even the most advanced computers need weeks, not seconds, to process them.
Now a team of researchers at Georgia Institute of Technology and University of Notre Dame has created a new computing system that aims to tackle one of computing’s hardest problems in a fraction of the time.
“We wanted to find a way to solve a problem without using the normal binary representations that have been the backbone of computing for decades,” said Arijit Raychowdhury, an associate professor in Georgia Tech’s School of Electrical and Computer Engineering.
Their new system employs a network of electronic oscillators to solve graph coloring tasks – a type of problem that tends to choke modern computers.
Details of the study were published April 19 in the journal Scientific Reports. The research was conducted with support from the National Science Foundation, the Office of Naval Research, the Semiconductor Research Corporation and the Center for Low Energy Systems Technology.
“Applications today are demanding faster and faster computers to help solve challenges like resource allocation, machine learning and protein structure analysis – problems which at their core are closely related to graph coloring,” Raychowdhury said. “But for the most part, we’ve reached the limitations of modern digital computer processors. Some of these problems that are so computationally difficult to perform, it could take a computer several weeks to solve.”
A graph coloring problem starts with a graph – a visual representation of a set of objects connected in some way. To solve the problem, each object must be assigned a color, but two objects directly connected cannot share the same color. Typically, the goal is to color all objects in the graph using the smallest number of different colors.
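To see the problem in code, the short sketch below applies a simple greedy heuristic (not the researchers' oscillator approach): each node takes the smallest color not already used by its neighbors. Greedy coloring is fast but does not guarantee the minimum number of colors, which is part of why the exact problem is so hard for large graphs.

```python
# Illustrative greedy coloring of a small graph, given as an adjacency list.
# Each node receives the smallest color index not used by its already-colored neighbors.
def greedy_coloring(adjacency):
    colors = {}
    for node in adjacency:                                      # color nodes one at a time
        taken = {colors[n] for n in adjacency[node] if n in colors}
        color = 0
        while color in taken:                                   # find the smallest free color
            color += 1
        colors[node] = color
    return colors

graph = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"],
         "D": ["B", "E"], "E": ["D"]}
print(greedy_coloring(graph))  # {'A': 0, 'B': 1, 'C': 2, 'D': 0, 'E': 1}
```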
In designing a system different from traditional transistor-based computing, the researchers took their cues from the human brain, where processing is handled collectively, as in a network of oscillating neurons, rather than by a central processor.
“It’s the notion that there is tremendous power in collective computing,” said Suman Datta, Chang Family professor in Notre Dame’s College of Engineering and one of the study’s co-authors. “In natural forms of computing, dynamical systems with complex interdependencies evolve rapidly and solve complex sets of equations in a massively parallel fashion.”
The electronic oscillators, fabricated from vanadium dioxide, were found to have a natural ability that could be harnessed for graph coloring problems. When a group of oscillators were electrically connected via capacitive links, they automatically synchronized to the same frequency – oscillating at the same rate. Meanwhile, oscillators directly connected to one another would operate at different phases within the same frequency, and oscillators in the same group but not directly connected would sync in both frequency and phase.
“If you suppose that each phase represents a different color, this system was essentially mimicking naturally the solution to a graph coloring problem,” said Raychowdhury, who is also the ON Semiconductor Junior Professor at Georgia Tech.
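A rough software analogue of that idea, under the assumption that repulsive phase coupling is a fair stand-in for the hardware's dynamics: oscillators that share an edge push each other's phases apart, and the settled phases are then binned into colors. This toy model is not the vanadium dioxide system from the paper and is not guaranteed to reach a proper coloring, but it shows how phase separation can map onto color assignment.

```python
# Toy phase-dynamics sketch (a software analogue, not the vanadium dioxide hardware).
# Oscillators connected by an edge repel each other's phases; phases are binned into colors.
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]  # small 5-node graph with a triangle
n, steps, dt, k = 5, 4000, 0.01, 1.0

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, n)              # random initial phases

for _ in range(steps):
    dtheta = np.zeros(n)
    for i, j in edges:                            # repulsive coupling on each edge
        push = k * np.sin(theta[i] - theta[j])
        dtheta[i] += push
        dtheta[j] -= push
    theta = (theta + dt * dtheta) % (2 * np.pi)

colors = np.round(theta / (2 * np.pi / 3)).astype(int) % 3  # bin phases into 3 colors
print("phases (radians):", np.round(theta, 2))
print("colors:", colors)
print("proper coloring:", all(colors[i] != colors[j] for i, j in edges))
```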
The researchers created a small network of oscillators that could solve graph coloring problems with as many objects (also referred to as nodes or vertices) as oscillators in the network. Even more significant, the new system theoretically proved that a connection exists between graph coloring and the natural dynamics of coupled oscillatory systems.
“This is a critical step because we can prove why this is happening and that it covers all possible instances of graphs,” Raychowdhury said. “This opens up a new way of performing computation and constructing novel computational models. This is novel in that it’s a physics-based computing approach, but it also presents tantalizing opportunities for building other customized analog systems for solving hard problems efficiently.”
That could be valuable to a range of companies looking for computers to help optimize their resources, such as a power utility wanting to maximize efficiency and usage of a vast electrical grid under certain constraints.
"This work provides one of the first constructive ways to build continuous time dynamical system solvers for a combinatorial optimization problem with a working demonstration using compact scalable post-CMOS devices," said Abhinav Parihar, a Georgia Tech student who worked on the project.
The next step would be building a larger network of oscillators that could handle graph coloring problems with more objects at play.
“Our goal is to reach a system with hundreds of oscillators, which would put us in striking distance of developing a computing substrate that could solve graph coloring problems whose optimal solutions are not yet known to mankind,” Datta said.
This material is based upon work supported by the National Science Foundation under Grant No. 1640081, the Semiconductor Research Corporation under research task Nos. 2698.001 and 2698.002, and the Office of Naval Research under award No. N00014-11-1-0665. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of those agencies.
CITATION: Abhinav Parihar, Nikhil Shukla, Matthew Jerry, Suman Datta and Arijit Raychowdhury, “Vertex coloring of graphs via phase dynamics of coupled oscillatory networks,” (Scientific Reports, April 2017). http://dx.doi.org/10.1038/s41598-017-00825-1 |
- Why do we need an extremely large telescope like the Giant Magellan Telescope?
- How do stars and planets form and evolve?
- What happened in the early universe?
- What do black holes look like?
- Why do galaxies differ so much in size, shape, composition and activity?
The MMT Observatory is the premier visible-light and infrared telescope operated by the Center for Astrophysics | Harvard & Smithsonian. Located at the CfA’s Fred Lawrence Whipple Observatory (FLWO) in southern Arizona, this 6.5-meter (21 foot) telescope is used to study objects across the field of astronomy, from the Solar System to distant galaxies. The MMT has provided a testbed for new telescope technologies developed by scientists and engineers at CfA and the University of Arizona, which jointly operate the MMT Observatory.
The Telescope and the Science
The MMT Observatory was originally built as the Multi-Mirror Telescope in 1979, with six mirrors acting together to focus light from astronomical sources. This design was chosen in part because large mirrors were more difficult and expensive to fabricate then, and because it was easier to transport smaller mirrors to the top of Mt. Hopkins, a 2600-meter (8600 foot) tall mountain in the Santa Rita Mountains of southern Arizona. In subsequent decades, engineers improved mirror-making techniques to lower both weight and cost, allowing the smaller mirrors to be replaced by a single 6.5-meter mirror in 2000. The single mirror improved the field of view — the amount of sky viewable by the telescope — by more than 300 times. The telescope retains the MMT name, though the acronym isn’t generally used anymore.
From the beginning, the MMT has been used to test new technologies and innovations for astronomy. While many large telescopes in 1979 were built to track astronomical objects along just one axis for engineering simplicity, the MMT was constructed to track objects along both axes. The modern telescope is equipped with “adaptive optics”: a flexible secondary mirror which creates extremely sharp astronomical images by correcting for distortions created by Earth’s atmosphere.
Scientists and engineers at the CfA developed a suite of instruments both to create images and to analyze the spectrum of light from astronomical sources. These include Megacam, a liquid nitrogen-cooled array of 36 charge-coupled device (CCD) detectors, the same basic technology used in digital cameras. Each of the 36 CCDs captures images 2048 x 4608 pixels in size, making Megacam one of the largest astronomical cameras in use. Other instruments include the SAO Widefield InfraRed Camera (SWIRC) designed for taking infrared images, along with the MMT and Magellan Infrared Spectrograph (MMIRS), Binospec, Hectospec, and Hectochelle spectrographs, which are capable of analyzing the spectrum from multiple astronomical objects at once.
These instruments make the MMT a powerful general-purpose optical and infrared observatory, useful for studying astronomical objects from the Solar System to distant galaxies. To list just a few, astronomers have used the MMT to identify systems for future gravitational wave observatories, follow up on exoplanet observations, analyze star-forming regions, and study the environment of supermassive black holes.
Binospec - moderate dispersion multi-slit spectrograph and imager
Hectospec - moderate dispersion 300-fiber spectrograph
Hectochelle - high dispersion 240-fiber echelle spectrograph
Blue Channel - moderate dispersion single-slit spectrograph
Red Channel - moderate dispersion single-slit spectrograph
MMT Cam - small field-of-view imager
SPOL - imaging spectropolarimeter
MMIRS - multi-slit spectrograph and imager
ARIES - adaptive optics imager and spectrograph
MMT-Pol - imaging polarimeter |
- 0.1 What Is Air Quality?
- 0.2 What Is Air Pollution?
- 0.3 What Causes Air Pollution?
- 0.4 What Makes Up Air Pollution?
- 0.5 How Can Air Pollution Be Monitored?
- 0.6 How Can Air Pollution Be Checked and Controlled?
- 0.7 Air Pollution and the Maritime Industry
- 0.8 Sinay Cares About Air Quality
- 1 Frequently Asked Questions About: AIR POLLUTION MONITORING
What Is Air Quality?
Air Quality is the condition of how clean or polluted the surrounding air is. Air pollution is present in ports and surrounding cities. If left unmonitored, air pollutants can cause many health and environmental issues. Today there exist many different types of air monitoring solutions since there is an increasing global concern for air quality and air pollution.
Air is composed of:
- Nitrogen and oxygen, which make up 99 percent of our air
- Other gases, including argon, carbon dioxide, neon, helium, krypton, hydrogen, and xenon.
What Is Air Pollution?
Air pollution is any chemical in the atmosphere that negatively affects the health of human life, wildlife, and sea life. Studies show that, like heavy smoking, air pollution can significantly reduce our lifespan and cause disease. According to the World Health Organization (WHO), 9 out of 10 people on the planet are currently breathing polluted air, and air pollution is responsible for about one third of deaths from stroke, lung cancer, and heart disease.
Air pollution is a major public health issue and a large environmental and health risk.
What Causes Air Pollution?
There are two main air pollution sources:
- Natural Sources, pollutants from fires, volcanoes, pollen, and dust
- Anthropogenic sources, pollutants from used fossil fuels, industries, and vehicles
It is estimated that in 2019 air pollution contributed to 6.7 million deaths around the world.
What Makes Up Air Pollution?
Air pollution is a composition of harmful chemicals. These chemicals include:
- Particulate matter (PM), which consists of extremely small solid particles and liquid droplets classified by the diameter of the particle. Particulate matter less than 10 micrometers (µm) across, called PM10, is extremely harmful because it can reach deep parts of the lungs and the bloodstream. To put this into perspective, 10 µm is smaller than a grain of sand or the width of a human hair. Their small size is what makes these particles so harmful to human health.
- Volatile organic compounds like gas and formaldehyde.
- Nitrogen dioxide, which is caused by fuel from vehicles or gas stoves for example.
- Carbon monoxide, which is from the exhaust such as heavy traffic.
- Sulfur dioxide, caused by sulfur-containing fuels.
- Lead, which is caused by metal processing.
- Smog ozone (different than the natural ozone layer), which is a colorless gas formed because of sunlight interacting with nitrogen dioxide and volatile organic compounds.
The above pollutants can then combine to produce secondary pollutants like nitric acid and sulfuric acid. These contribute to acid deposition, which leads to acid rain, altering the pH of soils and waterways and affecting the entire food web.
How Can Air Pollution Be Monitored?
Air pollution must be monitored to protect human health and the surrounding environment.
Air pollution monitoring by the European Environment Agency (EEA) began in the 1970s. Air pollution monitoring by the United States Environmental Protection Agency (U.S. EPA) began in the 1980s. The EPA for example reports air pollution levels and makes predictive forecasts.
Now, most countries have national laws, regulations, and programs to measure pollutants. In Europe, there is the European Copernicus program which monitors the atmosphere and maps out trace gases that affect our air, health, and climate. In the United States, there is the Clean Air Act which is a federal law that regulates air pollution emissions and concentrations.
Air pollution can be monitored using environmental monitoring systems such as stations, sensors, new technologies, and the Air Quality Index (AQI).
Ambient air monitoring programs, which are the long-term monitoring of air pollutants, use mobile, IoT, and Big Data technologies. For instance, in London, hundreds of IoT sensors combined with satellites and open sources help measure air pollution levels. Ambient air monitoring provides public and transparent information about current pollutants in the air.
Machine learning, Big Data, IoT, plus open information like traffic and weather data help cities to monitor air quality pollution in real-time.
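One widely used output of such monitoring is the Air Quality Index mentioned above. The sketch below shows the standard linear-interpolation formula applied to a PM2.5 reading; the breakpoint table is the commonly cited older US EPA 24-hour PM2.5 table, and agencies do revise these values, so treat them as illustrative rather than authoritative.

```python
# Hedged sketch of the AQI linear-interpolation formula for a PM2.5 reading (µg/m³).
# Breakpoints are the commonly cited older US EPA 24-hour PM2.5 values; check current tables.
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50),        # Good
    (12.1, 35.4, 51, 100),     # Moderate
    (35.5, 55.4, 101, 150),    # Unhealthy for sensitive groups
    (55.5, 150.4, 151, 200),   # Unhealthy
    (150.5, 250.4, 201, 300),  # Very unhealthy
    (250.5, 350.4, 301, 400),  # Hazardous
    (350.5, 500.4, 401, 500),  # Hazardous
]

def pm25_aqi(concentration):
    """Interpolate linearly within the breakpoint range that contains the reading."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= concentration <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (concentration - c_lo) + i_lo)
    raise ValueError("concentration outside the table")

print(pm25_aqi(8.0))   # a low reading falls in the "Good" band
print(pm25_aqi(48.0))  # falls in the "Unhealthy for sensitive groups" band
```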
How Can Air Pollution Be Checked and Controlled?
Air pollution can be checked and controlled thanks to regulations, legislation, and existing technological solutions. To reduce air pollution, many governments have developed standards to regulate pollution and policies to reduce the environmental impact of solutions and promote clean energy sources.
Developed technologies can now scrub pollutants out of the air, and some countries have set in place legislation that limits the number of pollutants certain industries are allowed to produce each year.
Air Pollution and the Maritime Industry
Maritime activities, such as shipping, can lead to harmful emissions that travel far distances. These emissions can in turn create pollutants like particulate matter. That is why organizations like the International Maritime Organization (IMO) have created legislation; in Europe, for example, ships cannot use fuel containing more than 0.05% sulphur.
As industrial shipping burns around 300 million metric tons of fuel each year, and fuel is one of the main contributors to air pollution, it is important that the shipping industry monitors the pollutants it releases and finds new clean energy sources for shipping. With shipping activity increasing every year, air quality will only worsen if pollutants are not reduced.
Ports and thus surrounding cities are faced with port congestion due to increased shipping. The more crowded ports are the worse the air quality is around these cities. Ports are also normally industrial environments which also contribute to air pollution.
Therefore, it is important that ports monitor the air quality of the port to ensure that harm is not done to the surrounding community.
Sinay Cares About Air Quality
At Sinay we care about air quality and its impacts, which is why we created the Sinay Air Module to help ports and the shipping industry monitor air pollution. With the Sinay Air Module, you can monitor key air quality indicators (among which SO2, NO2, PM1, PM2.5, and PM10) using sensors, be alerted in real time when you exceed certain thresholds to make the best decisions, and generate automatic monitoring reports.
Air quality monitoring allows ports to conform to increasingly stricter air pollution regulations and legislation.
Frequently Asked Questions About: AIR POLLUTION MONITORING
Pollution is monitored with environmental monitoring.
Environmental monitoring is composed of tools and techniques that identify, analyze, and establish parameters for environmental conditions to quantifiably assess the impacts of various activities on the environment.
Environmental monitoring allows pollution levels to be controlled and trends to be identified.
Air pollution is checked and controlled by government laws and legislation. The Air Quality Index is a tool used by many governments to communicate air quality levels.
An air quality monitoring system is a process that monitors the quality of the air in relation to pollutants and toxins. The system is often an application that uses data and AI to produce key indicators.
Indoor air quality is monitored with hardware, software, and services designed specifically to be placed indoors. They detect pollutants and can measure the quality of the air. |
The engine shown is a double-acting steam engine because the valve allows high-pressure steam to act alternately on both faces of the piston.
The following animation shows the engine in action:
You can see that the slide valve is in charge of letting the high-pressure steam into either side of the cylinder. The control rod for the valve is usually hooked into a linkage attached to the cross-head, so that the motion of the cross-head slides the valve as well.
(On a steam locomotive, this linkage also allows the engineer to put the train into reverse.)
You can see in this diagram that the exhaust steam simply vents out into the air.
This fact explains two things about steam locomotives:
It explains why they have to take on water at the station -- the water is constantly being lost through the steam exhaust.
It explains where the "choo-choo" sound comes from. When the valve opens the cylinder to release its steam exhaust, the steam escapes under a great deal of pressure and makes a "choo!" sound as it exits. When the train is first starting, the piston is moving very slowly, but then as the train starts rolling the piston gains speed. The effect of this is the "Choo..... choo.... choo... choo choo-choo-choo" that we hear when it starts moving.
The high-pressure steam for a steam engine comes from a boiler.
The boiler's job is to apply heat to water to create steam.
There are two approaches: fire tube and water tube.
A fire-tube boiler was more common in the 1800s. It consists of a tank of water perforated with pipes. The hot gases from a coal or wood fire run through the pipes to heat the water in the tank, as shown here:
In a fire-tube boiler, the entire tank is under pressure, so if the tank bursts it creates a major explosion.
More common today are water-tube boilers, in which water runs through a rack of tubes that are positioned in the hot gases from the fire. The following simplified diagram shows you a typical layout for a water-tube boiler:
In a real boiler, things would be much more complicated because the goal of the boiler is to extract every possible bit of heat from the burning fuel to improve efficiency.
July 25, 2006 |
Myriapods are arthropods (animals with jointed legs) that have numerous pairs of legs. The original Greek word means “10,000 feet”. The number of legs in adult myriapods varies according to species and ranges from nine pairs to nearly 200. All species are terrestrial and can generally be found in the soil, in leaf litter and under rocks and rotting wood. Many species possess specialized glands that secrete foul-tasting compounds. This large group is divided into four classes: Diplopoda, Chilopoda, Pauropoda and Symphyla.
This class is the largest, with more than 10,000 known species, most of them living in warm climates, and includes millipedes. Millipedes are arthropods that have two pairs of legs per segment as adults, hence the name diplopod or “double legs.” Most millipedes feed on decaying organic matter, although some will eat fungi and others are carnivorous.
Centipedes make up the second largest class, with over 2,500 species. They have only one pair of legs per segment. Unlike millipedes, nearly all centipedes are predators, although some will eat plant matter. The first pair of appendages on the trunk is modified into a pair of claws with poison glands, which centipedes use to capture prey (usually other arthropods). Some centipedes, when disturbed or handled by a human, can inflict a painful bite, but there are no reported cases of human fatalities from such bites. The biggest centipedes grow to a length of 30 cm.
Symphylans resemble centipedes, but are translucent and measure 2 to 8 mm long. They usually have 12 pairs of legs, sometimes 11.
These tiny arthropods vaguely resemble centipedes. They are less than 2 mm long. They have rather short, sometimes flattened bodies, and possess nine pairs of legs (rarely ten). |
About food allergies
If you have a food allergy, your immune system reacts to a particular food when the food enters your body. This food is called an allergen.
Your immune system reacts by releasing histamine and other substances into your body’s tissues. This leads to the symptoms of an allergic reaction.
Even tiny amounts of the food you’re allergic to can cause an allergic reaction. Some reactions can happen immediately, and others can happen several hours later.
Allergic reactions are common. But most reactions aren’t severe and deaths are extremely rare.
Food allergies aren’t the same as food intolerances. A food intolerance is a reaction to the food you’re eating, but the reaction isn’t caused by your immune system. Food allergies are generally more severe and have more symptoms than food intolerances.
Immediate-onset food allergies: symptoms
The symptoms of immediate-onset food allergies usually appear within a few minutes. But sometimes symptoms can appear 1-2 hours after a child has eaten the food.
Mild to moderate symptoms of immediate-onset food allergies include:
- swollen lips, face or eyes
- skin reactions like redness, hives or eczema
- tingling or itchy mouth
- vomiting, stomach pain or diarrhoea
- nose congestion.
A severe allergic reaction is called anaphylaxis, and it can also happen immediately. Symptoms of anaphylaxis include:
- breathing difficulties or noisy breathing
- tongue swelling or throat tightness
- a wheeze or persistent cough
- difficulty talking or a hoarse voice
- persistent dizziness or fainting
- paleness and floppiness (in young children)
Anaphylaxis is a life-threatening allergic reaction and needs urgent medical attention. If your child is having an anaphylactic reaction, first lay your child flat or keep them sitting. Don’t let your child stand or walk around. Next use an adrenaline auto-injector like EpiPen® if one is available. Then call an ambulance – phone 000.
Delayed-onset food allergies: symptoms
The symptoms of delayed-onset food allergies appear more than 2-4 hours after a child comes into contact with the food. Sometimes symptoms appear many hours later.
Symptoms of delayed-onset food allergies include vomiting, diarrhoea, bloating and stomach cramps. Occasionally there might be mucus or blood in the poo.
Delayed-onset allergies aren’t usually life threatening.
Common food allergies
The most common food allergies are:
- cow’s milk
- tree nuts like cashews, pistachios, walnuts, pecans or hazelnuts
Diagnosing food allergies in children
Immediate-onset food allergies
Tests for immediate-onset allergies include the following:
- Skin-prick test: your child’s skin is pricked with a special device that looks a bit like a toothpick and that contains a drop of a specific allergen. If a hive comes up where your child’s skin has been pricked, your child probably has an allergy.
- Blood tests: the serum specific IgE antibody test uses your child’s blood to see whether your child is sensitive to specific allergens. If your child’s blood has a high amount of antibodies, your child probably has an allergy. Your child might have this test if they can’t have skin-prick testing.
- Oral food challenge: sometimes your child will be given the possible allergen in a safe, supervised setting. Medical and nursing staff will watch to see whether an allergic reaction happens. This test carries a risk of anaphylaxis so should be conducted only by medical specialists in a setting where anaphylaxis can be safely and quickly treated.
Delayed-onset food allergies
If your child has a delayed-onset food allergy, diagnosis usually happens through an ‘elimination and re-challenge’ test.
This involves removing possible allergy-causing foods from your child’s diet, then reintroducing them when your child’s allergy specialist thinks it’s safe to do so. You reintroduce only one food at a time so it’s easier to identify the food that’s causing the issue.
You might hear about tests like IgG food antibody testing, Vega testing and hair analysis. These tests haven’t been scientifically proven as allergy tests. Tests and treatments that are backed up by science are most likely to work, be worth your time, money and energy, and be safe for your child.
Managing food allergies in children
There’s no cure for food allergies yet, but many children grow out of them. You can also take some steps to make it easier for you and your child to live with food allergies.
Avoid the food
It’s important for your child to avoid the food. This can be challenging, particularly as eating even tiny amounts can cause an allergic reaction. Your child also needs to avoid any foods or cutlery that could have been in contact with the food they’re allergic to.
You can do two important things to help your child avoid the food:
- Read all food labels. Be aware that some allergenic foods have different names – for example, cow’s milk protein might be called ‘whey’ or ‘casein’. But by law 10 allergens must be plainly stated on food labels – cow’s milk, soy, egg, wheat, peanut, tree nuts, sesame, fish, shellfish and lupin.
- Be careful when you eat out. Ask what ingredients each dish includes, how it was prepared, whether it has touched any other foods, and whether there’s any risk of cross-contamination. Most restaurants are happy to tell you, but they might not know about the ingredients in some foods like sauces. It’s best to avoid buffets and bain-maries (food warmers) because there’s a good chance that ingredients have been transferred from one dish to another.
Have an action plan
You should talk to your doctor about an ASCIA (Australasian Society of Clinical Immunology and Allergy) action plan. This will help you recognise and treat symptoms if your child eats something that causes an allergic reaction.
Know how to use an adrenaline auto-injector
If your child is at risk of anaphylaxis, your doctor might prescribe an adrenaline auto-injector like EpiPen®. These auto-injectors make it easy to self-inject adrenaline. Your doctor will teach you and your child (if old enough) how and when to use it.
It’s important that key people – like family, carers, babysitters and your child’s school – know how and when to use your child’s adrenaline auto-injector.
Consider a medical bracelet
Your child might wear a medical bracelet that lets people know your child has an allergy.
How long do food allergies last?
Most children grow out of their food allergies by 5-10 years of age, especially children who are allergic to milk, egg, soybean or wheat.
Allergies to peanuts, tree nuts, fish and shellfish are more likely to be lifelong.
If you think your child might have grown out of an allergy, see your GP or allergy and immunology specialist for an assessment. Don’t experiment at home to see whether your child has outgrown the allergy. Your doctor will let you know whether it’s safe for you to introduce the food at home or whether this should be done under medical supervision.
How to reduce your child’s risk of food allergies
You can take some simple steps that might help reduce your child’s risk of developing food allergies.
Eat a well-balanced and nutritious diet while pregnant or breastfeeding
When you’re pregnant or breastfeeding, it’s important to eat a wide variety of healthy foods every day including fruit, vegies, grains, protein and dairy or calcium-enriched products.
Avoiding foods that commonly cause allergies – for example, eggs and peanuts – while you’re pregnant or breastfeeding won’t reduce the risk of your baby developing allergies. In fact, avoiding too many foods can be dangerous, because your baby won’t get important nutrients.
Breastmilk is best, so it’s recommended that you exclusively breastfeed your baby until it’s time to introduce solid foods at around six months old. It’s best to keep breastfeeding until your baby is at least 12 months old.
Talk to a doctor or nurse about infant formula
For parents bottle-feeding with infant formula, there’s no evidence that giving babies hydrolysed infant formula or partially hydrolysed infant formula (which is also called hypoallergenic or HA formula) instead of standard cow’s milk formula prevents allergies.
If you’re not sure what formula is best for your baby, talk to your paediatrician, GP or child and family health nurse.
Introduce allergenic solids from around six months of age
Introducing allergenic solid foods early can reduce the risk of your child developing a food allergy. All babies, including babies with a high allergy risk, should have solid foods that cause allergies from around six months of age.
These foods include well-cooked egg, peanut butter, wheat (from wheat-based breads, cereals and pasta) and cow’s milk (but not as a main drink).
Your baby doesn’t need to avoid any particular allergenic foods.
Allergy risk facts and factors for children
Most children with food allergy don’t have parents with food allergy. But if a child’s parents have a food allergy or other allergy problems like asthma, eczema or hay fever, the child has an increased risk of food allergies.
Babies with severe eczema in the first few months of life are at an increased risk of developing food allergy. |
Biofilm builds up in water systems, appearing as slippery scum in pipes, irrigation trenches and other water transportation methods. These thin, slimy films can have a serious impact on growing conditions in agricultural production, and on dairy farming.
Because of the contamination threats from biofilm, effective water treatment is critical for agriculture, horticulture and other related sectors.
What is Biofilm?
Biofilm is a deposit of single-celled bacteria, fungi and algae. These are thin films, that are slimy to the touch. They stick to surfaces, forming a matrix of cellular materials.
A common form of bacterial biofilm include plaque, which forms on teeth, or the slimy material that can block drains. Biofilm deposits can vary in thickness, depending on environmental conditions.
In farming environments, biofilms form layers on feeding and milking equipment, animal housing, and water systems.
In fact, around 90% of the bacteria you will find on a farm will be in biofilm layers. The problem with them is that they are resistant to common forms of cleaning and disinfection.
A surface may look visibly clean, but unless you break down the biofilms sticking to it, it will not be clean on a biological level.
How Do Biofilms Form?
The formation of biofilms begins when bacteria and other free-floating organisms come into contact with surfaces. Once these micro-organisms make contact, they begin a process that unfolds in five stages:
Attachment is the first step. The micro-organisms produce an extracellular polymeric substance (EPS), which enables them to stick together, and secures the biofilm to the surface.
The EPS is made up of proteins (including enzymes), DNA, RNA, and polysaccharides. But the largest proportion of a biofilm is water, which can make up as much as 97% of it.
This water enables nutrient flow within the biofilm matrix.
Proteins in EPS help with its structure and physiology, stabilising it and enhancing its adhesive qualities.
Once attached, the biofilm cells grow and divide, forming a dense structure built up of many layers. At this stage, though, the biofilm is still too thin to be visible.
Various environmental conditions will influence biofilm growth and thickness. These include access to nutrients, oxygen content and shear stress from water flow.
In slow flowing water, biofilm can become very thick.
In the maturation stage, the cells within the biofilm excrete more extracellular polymeric substance.
This creates a complex 3D structure, with criss-crossing water channels that exchange nutrients and waste products.
After maturation, cells detach themselves from the biofilm matrix, enabling them to find new surfaces to adhere to, and thereby spreading bacteria. The enclosing slime protects these cells from the antibiotics, chemicals and immune systems which might otherwise repel or destroy them.
The final stage in the biofilm formation cycle is where the detached cells form colonies of their own.
These colonies can attract other micro-organisms, along with viruses and other lifeforms such as insect larvae, leading to widespread contamination.
What Causes Biofilm?
Biofilms are biological systems, and the bacteria they contain organise themselves into a functional community that co-ordinates how it acts and reacts.
One theory is that biofilms are a very primitive form, and may originally have arisen as a form of protection for cellular organisms.
Is Biofilm Dangerous?
Many of the bacteria and organisms in biofilms do not present a direct threat to health, but they can help spawn other more harmful things, such as E. coli, Legionella, Pseudomonas, Cryptosporidium and fungi.
Biofilm can be dangerous, therefore, in a number of settings, including agriculture, horticulture, dairy farming and food and drink production.
Agriculture and Biofilm
Even when biofilms are not directly harming human health, where they affect water used solely for agricultural purposes, they can cause serious harm to plants.
As a grower, you could be fighting stubborn fungal infections in plants, which keep recurring, without realising that the source of the problem is the water supply you’re using to sustain your crop in the first place.
Where plants are growing in indoor conditions, such as greenhouses and hot houses, these can be perfect environments for fungal spores to thrive in, via contaminated water systems.
Another way biofilm can affect plant crops is by depriving them of oxygen. Many of the fungi, algae and bacteria in biofilms need oxygen to survive. These aerobic micro-organisms get this from the water surrounding them, which then leaves the water supply deoxygenated.
A deoxygenated water supply creates agricultural problems. Plant roots cannot survive without oxygen, whereas plenty of harmful bacteria are anaerobic, which means they will grow in this environment.
A deoxygenated water supply can lead to soil that is oxygen-deficient, which in turn leads to problems with crops.
Biofilm in Dairy Farming
The impact of biofilm in farming is not limited to plant growth.
Biofilms are an issue in dairy farming because of the risk of cross-contamination. While equipment and livestock pens may appear clean, the danger is that they are not biologically clean.
The presence of biofilms can prevent cleaning agents and disinfectants from reaching cells, while providing conditions that enable these cells to thrive.
As we have seen in the formation cycle of biofilm, once they have reached maturation point, harmful bacterial cells can break free of biofilm and spread contamination.
One example is where a new-born calf is housed in a hutch that previously held a weaned calf. Biofilm can preserve any bacteria from the weaned calf, preventing their removal during the cleaning and disinfecting process. This leaves the new-born calf vulnerable to those bacteria.
Another example of how biofilm impacts dairy farming is when it forms on surfaces in contact with raw milk. Heat treatment may eliminate some bacteria, but where they are heat-stable, they can cause quality issues with end-products, affecting their shelf-life.
Other Industries Affected by Biofilm
Other industries where biofilm can cause serious issues include:
- Soft drinks and Juices
In these industries, biofilms formulate in processing equipment, including water systems, storage tanks, pumps and valves.
How Biofilm Harms Water Systems
Not only can water systems transport bacteria, via biofilm, but biofilm can also be harmful to water systems themselves.
Dissolved oxygen levels drop as biofilms thrive, leading to a spread of anaerobic bacteria which give water a sulphurous smell and flavour.
These aspects alone are not harmful to health, but the bacteria give off hydrogen sulphide as a metabolic by-product, which can corrode steel, iron and copper pipe fittings.
Biofilms can also be the source of iron-reducing bacteria, which can further reduce oxygen levels in water, allowing more biofilms to thrive.
The problem becomes one of a vicious circle of contamination that is hard to break.
How Do You Remove and Control Biofilm?
Biofilm is clearly an issue affecting water systems and the industries that depend on them, but how do you remove biofilm and prevent it from recurring?
To combat biofilms, you must break them down. This can be challenging, since they have greasy, slimy coatings to protect themselves.
Traditionally, chlorine dioxide and hydrogen peroxide have been effective in removing, preventing and controlling biofilm in water supplies.
As disinfectants they work against bacteria, fungi, viruses and protozoa, killing microbes. But chlorine will not kill cryptosporidium, and as a powerful oxidising agent, it can cause corrosion to pipework in the long-term. Hydrogen peroxide may damage some surfaces and is hazardous in high concentrations.
There is, however, now an alternative to both chemicals, which provides fast and effective destruction of biofilms and micro-organisms.
Oxyl-Pro is hydrogen peroxide-based, but it uses only food-safe ingredients to stabilise its chemical content. This ensures its safety as both disinfectant and decontamination agent.
It disinfects water systems, oxygenates compacted soils and will also disinfect surfaces and machinery. |
Multiple sclerosis, or MS, is a disorder of the central nervous system. When you have the disease, it affects both your brain and spinal cord. Normally, nerve cells are surrounded by an insulating layer called myelin. Myelin is a fatty substance that helps transmit nerve impulses. In those with MS, the myelin sheath becomes inflamed or damaged. This slows or completely disrupts the transmission of nerve impulses, leaving areas of scarring called sclerosis.
When these nerve signals are disrupted, you can experience a number of symptoms, the most common of which are blurred or double vision, tingling in the limbs, loss of balance and coordination, and tremors. MS attacks typically come and go in episodes, with relapses alternating with remissions.
MS is what medical experts call an autoimmune disease. This means it is caused by an attack by your body’s own immune system. For reasons still unknown, immune cells attack and destroy the myelin sheath. Communication is then disrupted between the brain and other parts of the body. No one knows exactly what causes the immune system to behave in this way, but many have proposed theories as to the possible triggers of MS.
Vitamin D deficiency has been linked with an increased risk for the symptoms of MS. In a clinical trial performed at the Harvard Medical School in Boston, researchers examined whether levels of vitamin D were associated with the risk of contracting MS. The researchers did a massive data review of more than seven million U.S. military personnel who had blood samples stored in the Department of Defense Serum Repository. Multiple sclerosis cases were identified through Army and Navy databases from the years 1992 through 2004. The researchers matched each of the 257 cases to two controls. Vitamin D status was then evaluated by reviewing serum samples collected before the date of initial MS symptoms.
The research group found that, among Caucasians, the risk of multiple sclerosis significantly decreased with increasing levels of vitamin D. The researchers concluded that higher levels of vitamin D were associated with a lower risk of multiple sclerosis.
The best source of vitamin D is the sun. UV rays from the sun trigger vitamin D synthesis in your skin. Ten to 15 minutes of sun exposure at least two times a week to your face, arms, hands or back is enough to give you a healthy dose of vitamin D. Any longer than that and you should put on sunblock.
Here are some food sources of vitamin D:
— Cod liver oil (the best source) |
One reason for the continuing popularity of Shakespeare's plays is that the themes don't seem to age. In Macbeth, we see an ambitious man evolve into a tyrant, where peace cannot be restored until he is destroyed. How many examples, say from the twentieth century, are there where we see this pattern played out with real historical people and events?
In this example, we are concerned with a major theme: ambition. In this activity, we will consider what this theme means, the different types of ambition, and the effects of ambition, developed step by step through Macbeth's downfall.
Other themes can be looked at individually and we will be concerned with how they overlap to support this key idea. How do appearances differ from reality? What is good or evil, can they both exist in one person?
As we consider themes we are also moving into what makes literature! One way to think about it is that the authors write about the experiences of being human. As humans, we may experience love, loss, happiness, fear, and so on. Through these experiences, no matter where or when we live, we share these connections with other humans. The plays, novels or movies that we feel most strongly about are likely to be those we can connect in some way, or at least recognise some of the feelings that we have just named, as being parts of the human condition.
You should always refer to your own text when working through these examples. These quotations are for reference only. |
The lessons housed within this unit all provide practice on specific skills or strategies. Some lessons were written to see what students remember and/or can do at the beginning of the year. Others were used to re-teach groups of students who hadn’t quite mastered the chosen skill when it was first introduced. Still others were designed to give students meaningful practice while I conducted required testing.
All lessons used texts that were familiar or easily decodable so that students’ energies were spent on skill practice rather than trying to just make sense of the text itself. Many lessons include reproducibles that were made with graphics from Kevin and Amanda’s Fonts, Teaching in a Small Town, and Melonheadz Illustrating.
In these next seven lessons, we tackle identifying fictional elements, describing main characters, summarizing, and making connections between texts by comparing and contrasting characters. The texts we are using are The Pain and the Great One (Blume, J. (1985). The pain and the great one. Bantam Publishing: New York, New York.) and My Rotten Redheaded Older Brother (Polacco, P. (1998). My rotten redheaded older brother. Simon and Schuster: New York, New York.).
Today’s lesson focuses on describing “The Pain.” In yesterday’s lesson, we re-read the first half of the book, which focused on the older sister. Today, we re-read the second part of the book, which is narrated by the brother who is called, “The Pain,” by his older sister.
Before reading, I ask students to be on the lookout for ways to describe the brother. I remind them that we aren't looking for physical characteristics or feelings, but character traits: words that describe a person on the inside. I also tell them that when a great trait comes to mind, they should think of a piece of evidence from the text that would support that idea.
Because we completed an identical lesson yesterday, I have students work on the third page of the packet together. After reading, I have students work with their partner to determine appropriate character traits for the brother and think of solid evidence that backs up their ideas. As they work, I walk the room offering support and assistance when needed. Today it appears much easier for students to determine appropriate answers such as “selfish,” “immature,” or “mean” rather than “young,” or “not nice.” In order to support their need to find evidence, I’ve placed a copy of today’s text on each table. This doesn’t allow each partnership to have their own, but does allow at least one copy of the text to be easily accessible when needed.
When they have found at least three strong traits with support, I have them move on to the bottom of the page where it asks if the character changed during the story. This seems a little too easy as the answer is basically the same as yesterday’s response for the sister.
Since I was able to visit all groups, I did not have students share their answers with the whole class. Instead, I had each partnership begin their independent practice once they had finished their work from today’s mini-lesson.
Students pull the fiction texts they are reading out of their book boxes and begin their independent work. Today they are to:
1. Write a response to today’s story - show how they connect to the story in a personal way through their own life experiences.
2. Begin reading their independent fiction text. Look for ways to describe a main character by thinking of appropriate character traits and locating evidence for their ideas. If students are towards the end of their books, they can reflect on whether a character has changed during the story and look for ways to support their answers. All answers should be recorded in their readers’ notebooks.
While students work, I conduct independent or small group conferences.
At the end of the work time, students share their work with their reading partners. As they share, I walk the room looking for great examples of both types of responses. I make notes of those who had strong connections to the text and their own life experiences or had excellent examples of character traits with textual evidence and then share these with the class. |
How to Play Quick Draw: Fun Math Card Game
Materials to Make Your Own:
- Index cards
- Deal out fifteen cards from your deck to each player. This becomes each player’s draw pile.
- Place two stacks of an additional 5 cards in the center of play.
- Each player draws 3 cards from their individual draw pile. This becomes each player’s hand during play.
- To start play each player turns over the top card from each stack of 5 cards in the center of play and places it face up.
- Players choose a card from their hand that is either a multiple higher or lower than one of the cards in play. Players can play on either stack and place a card as soon as a number from the player’s hand can be played. Players do not take turns.
- As soon as a player places a card, that player draws a card from their draw pile so that each player holds 3 cards at all times.
- If players cannot play one of the three cards in their hand, then each player turns over the top card from the 5-card pile and places that card face up in play. Play resumes.
- The first player that uses all cards from their individual draw pile wins.
Teaching Number in the Classroom with 4-8-Year-Olds, Dr. Wright et al.
Chapter 10, IA10.3, p. 194 |
In August 1996, a group of scientists announced that they had found evidence of ancient life on Mars. This evidence included bacteria-shaped objects and organic chemical molecules in the martian meteorite ALH 84001, which was collected in Antarctica. In the next few days, NASA presented the work at a press conference, the President made a statement about it, and the TV and papers were full of reports, speculation, and jokes about life on Mars.
Most of the world was unprepared for possible traces of martian life in a meteorite. Collecting meteorites in Antarctica was novel; the idea of martian meteorites was bizarre; knowledge of Mars was sketchy; and knowledge of primitive life on Earth was limited. Much of the important information is hidden in technical journals, written by specialists for specialists.
With this slide set, we hope to make some of this information accessible. The slide set and captions are divided into sections on Mars, Antarctic meteorites, ALH 84001 and its possible traces of life, and exploration of Mars and the universe. Most slides and captions can also be used independently. Terms defined in the glossary are underlined the first time they are mentioned in this booklet.
The suggested reading portion of this slide set has been updated to include research published since the time of the first edition, and some of the captions have been updated to reflect the success of recent missions.
The surface area of Mars is equivalent to the land area of Earth. Mars appears reddish because much of it is coated with iron oxide minerals (the material that forms rust on Earth). Its atmosphere, composed primarily of carbon dioxide, is very thin. Air pressure at the planet's surface is about one two-hundredth of the air pressure on Earth. As on Earth, clouds form and dissipate each day (one Mars day, called a “sol,” is 24 hours and 37 minutes long), and the global atmospheric circulation is driven by seasonal changes in temperature. Occasionally, winds raise dust storms. These are usually short-lived and local, but can grow to global proportions (most frequently when Mars is closest to the Sun).
Mars' geologic history has been much simpler than the Earth's, mostly because Mars is smaller than Earth. Mars' smaller size — approximately one-half the diameter of Earth — means that Mars loses its internal heat much faster than Earth does. Because internal heat powers geological activity on a planet, Mars now has much less activity, like volcanos and earthquakes, than Earth. Its lower gravity — approximately a third of Earth’s — allows water in its atmosphere to escape to space, so the martian surface has become desiccated over time; once it had abundant water, but now it is drier than the most arid desert on Earth.
The Mariner spacecraft destroyed Lowell's vision of intelligent Martians when they sent back images of an arid, ancient Mars, without any sign of life or its works. However, scientists still thought that Mars was the most likely place to find life in the solar system. This is why the Viking landers carried three instrument sets to search for signs of life: cameras, a gas chromatograph and mass spectrometer, and a biology (metabolism) package. None of these instruments found clear signs of life on Mars.
The results were discouraging. The Viking cameras and the biology experiments revealed an arid, lifeless desert. However, continued studies of Mars and a growing understanding of how Mars has changed over time led many scientists to believe that Mars once had the ingredients for life — an abundance of water, an atmosphere as thick as Earth's, and a warmer climate. Perhaps when the atmosphere and water were lost, martian life moved underground.
The challenge for our future is to look for signs of life from that earlier, more hospitable time, and to look for signs of recent life (or living organisms) wherever they may be — most likely in wet places underground, away from Mars’ harsh surface. |
Have you ever been asked to use three words to describe yourself? Or have you ever found it hard to describe someone to your foreign friend? In this writing, Learn English Fun Way will introduce some common English words about people’s age, appearance, characteristics and actions. Let’s begin:
I. Vocabulary for AGE:
- Young: not yet old; not as old as others
She looks much younger than her 39 years.
- Old: having lived for a long time; no longer young
He was beginning to look old.
- Middle-aged: neither young nor old
He is 40 years old. He is middle-aged.
- In someone’s twenties: between the ages of 20 and 29
She is 28 years old. She is in her twenties.
- Knee-high to a grasshopper: very small; very young
Look how tall you are! Last time I saw you, you were knee-high to a grasshopper!
- Long in the tooth: old or too old
She’s a bit long in the tooth for a cabaret dancer, isn’t she?
- Mutton dressed as lamb: used to describe a woman who is trying to look younger than she really is, especially by wearing clothes that are designed for young people
The style doesn’t suit her – it has a mutton-dressed-as-lamb effect on her!
- No spring chicken: to be no longer young
How old is the owner? I don’t know but she’s no spring chicken!
- Over the hill: old and therefore no longer useful or attractive
Oh, grandma! You say you’re over the hill, but actually you’re still a super cook!
- (Live to a) ripe old age: an age that is considered to be very old
“If you lead a healthy life you’ll live to a ripe old age,” said the doctor.
II. Vocabulary for APPEARANCE:
- Thin: not covered with much fat or muscle
He was tall and thin, with dark hair.
- Slim/ Slender: thin, in a way that is attractive
How do you manage to stay so slim?
- Skinny: very thin, especially in a way that you find unpleasant or ugly
He was such a skinny kid.
- Well-built: with a solid, strong body
He is a tall, well-built young man.
- Muscular: having large strong muscles
He has a muscular body.
- Fat: having too much flesh on it and weighing too much
You’ll get fat if you eat so much chocolate.
- Overweight: too heavy, in a way that may be unhealthy
You don’t look overweight.
- Obese: very fat, in a way that is not healthy
She is grossly obese.
- Stocky: short, with a strong, solid body
He has a stocky figure.
- Stout: rather fat
You are becoming stout.
- Fit: healthy and strong, especially because you do regular physical exercise
She tries to keep fit by jogging every day.
- Frail: physically weak and thin
Mother was becoming too frail to live alone.
- Plump: having a soft, round body; slightly fat
That dress makes you look rather plump.
- Tall: having a greater than average height
He’s grown taller since I last saw him.
- Short: small in height
He was a short, fat little man.
- Of medium/ average height: neither short nor tall
I am of medium height.
III. Vocabulary for SKIN:
- Pale: skin that is very light in color; having skin that has less color than usual because of illness, a strong emotion, etc.
Her face had grown deathly pale.
- Rosy: pink and pleasant
She had rosy cheeks.
- Sallow: having a slightly yellow color that does not look healthy
He was a small man with a thin sallow face.
- Dark: brown or black in color
Even if you have dark skin, you still need protection from the sun.
- Pasty: pale and not looking healthy
Their pasty faces were the result of long periods underground.
- Greasy skin: Skin covered in a lot of grease or oil
IV. Vocabulary for VOICE:
- Stutter: to have difficulty speaking because you cannot stop yourself from repeating the first sound of some words several times
‘W-w-what?’ he stuttered.
- Deep (voice): voice that has low sounds
I heard his deep warm voice filling the room.
We heard a deep roar in the distance.
- High (voice): at the upper end of the range of sounds that humans can hear; not deep or low
She has a high voice.
- Squeaky (voice): making a short, high sound
Her voice is squeaky.
V. Vocabulary for PERSONALITY:
- Adaptable: capable of fitting a particular situation or use
When Connie’s parents divorced, she proved herself to be adaptable. It wasn’t easy, but she learned how to cope with this big change.
- Adventurous: willing to take risks and try new ideas; enjoying being in new, exciting situations
For the more adventurous tourists, there are trips into the mountains with a local guide.
- Affectionate: showing caring feelings and love for somebody
He is very affectionate towards his children.
- Ambitious: having a strong desire for success or achievement
He is a fiercely ambitious young manager.
- Compassionate: showing or having sympathy for another’s suffering
Politicians are not usually regarded as warm or compassionate people.
- Courageous: able to face and deal with danger or fear without flinching
I hope people will be courageous enough to speak out against this injustice.
- Courteous: characterized by politeness and gracious good manners
The hotel staff are friendly and courteous.
- Diligent : showing care and effort in your work or duties
He is a diligent student/worker.
- Generous: giving or willing to give freely; given freely
They were very generous in giving help.
- Frank: honest and direct in what you say, sometimes in a way that other people might not like
He was very frank about his relationship with the actress.
- Impartial: not supporting one person or group more than another
As chairman, I must remain impartial.
- Intuitive: able to understand something by using feelings rather than by considering the facts
I don’t think that women are necessarily more intuitive than men.
- Reliable: that can be trusted to do something well; that you can rely on
He was a very reliable and honest man who would never betray anyone.
- Sensible: able to make good judgements based on reason and experience rather than emotion; practical
She’s a sensible sort of person.
- Sympathetic: kind to somebody who is hurt or sad; showing that you understand and care about their problems
She was very sympathetic when I was sick.
Above is the vocabulary list for describing people. The list includes many useful words along with their definitions and example sentences. We hope you can use these words in your daily communication. Thank you for reading and see you in the next writing! |
Welcome to The Multiplying (1 to 9) by 8 and 9 (81 Questions) (E) Math Worksheet from the Multiplication Worksheets Page at Math-Drills.com. This math worksheet was created on 2021-02-20. It may be printed, downloaded or saved and used in your classroom, home school, or other educational environment to help someone learn math.
Teachers can use math worksheets as tests, practice assignments or teaching tools (for example in group work, for scaffolding or in a learning center). Parents can work with their children to give them extra practice, to help them learn a new math skill or to keep their skills fresh over school breaks. Students can use math worksheets to master a math skill through practice, in a study group or for peer tutoring.
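For anyone who wants to produce extra practice in the same spirit as this worksheet, the short Python sketch below generates 81 questions multiplying a factor from 1 to 9 by 8 or 9. It only illustrates the worksheet's format; the exact ordering Math-Drills.com uses is not known here, so the random selection is an assumption.

```python
# Sketch: generate 81 practice questions in the style of
# "Multiplying (1 to 9) by 8 and 9" (the ordering is assumed, not Math-Drills' own).
import random

def make_questions(n=81, seed=None):
    rng = random.Random(seed)
    questions = []
    for _ in range(n):
        a = rng.randint(1, 9)   # factor from 1 to 9
        b = rng.choice([8, 9])  # focus facts: the 8s and 9s
        questions.append((a, b, a * b))
    return questions

if __name__ == "__main__":
    for a, b, answer in make_questions(seed=1)[:5]:
        print(f"{a} x {b} = ___   (answer: {answer})")
```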
The PDF version of the Multiplying (1 to 9) by 8 and 9 (81 Questions) (E) math worksheet can be printed, opened in a new browser tab, or downloaded; the file size is 52626 bytes. Preview images of the first and second (if there is one) pages are shown. If there are more versions of this worksheet, the other versions will be available below the preview images. For more like this, use the search bar to look for some or all of these keywords: math, multiplication, focus, digits, facts, factors, products, fillable, saveable, savable. Teacher versions include both the question page and the answer key; student versions, if present, include only the question page.
This worksheet is fillable and savable. It can be filled out and downloaded or printed using the Chrome or Edge browsers, or it can be downloaded, filled out and saved or printed in Adobe Reader. |
Guide to Understanding Natural Gas and Natural Gas Liquids
February 19, 2014
Natural gas is a crucial part of the United States energy sector, providing energy for fuel, heating, cooking, and much more. Our natural gas resources are also bountiful and with advances in drilling technology and methods, more accessible than ever. Despite its pervasiveness, many citizens are still unsure what exactly natural gas is, and how it relates to substances such as natural gas liquids, gas-to-liquids, and liquefied natural gas.
Overview of Natural Gas, Natural Gas Liquids, Liquefied Natural Gas, and Gas-to-Liquids
- Natural Gas - Natural gas is a hydrocarbon gas. It consists primarily of methane but may also include other alkanes, carbon dioxide, nitrogen, and hydrogen sulfide. Natural gas is flammable and can be used for energy. It may be found in reserves by itself, or associated with crude oil.
- Natural Gas Liquids - Natural gas liquids are condensable hydrocarbons that are often associated with natural gas or crude oil. They include ethane, propane, butane, isobutane, and pentane. Since they are condensable (and may be referred to as condensates), they often form in natural gas wells when the pressure begins to lessen. Alternatively, they may form at the surface, or be produced through refrigeration and distillation.
- Gas-to-Liquids - The term gas-to-liquids refers to the refining process of converting natural gas into a liquid fuel. One of the most common routes is methanol to gasoline (MTG), which converts natural gas to syngas, the syngas to methanol, and the methanol to gasoline. Other routes include the Fischer–Tropsch process and the Syngas to Gasoline Plus (STG+) process.
- Liquefied Natural Gas - Liquefied natural gas (LNG) refers to natural gas that has undergone extreme cooling to the point of becoming liquid. In its liquid state, a given amount of LNG takes up about 1/600th the space of the same gas at atmospheric conditions. This makes it much easier to transport, but it does require regasification terminals at the destination.
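As a rough illustration of the volume reduction mentioned above, the small Python sketch below converts a volume of natural gas at atmospheric conditions into an approximate LNG volume. The 1/600 factor is the approximate figure quoted in this guide, not an exact engineering constant; the example volume is made up for illustration.

```python
# Rough illustration of LNG's volume advantage for transport.
# Assumption: liquefaction shrinks natural gas to roughly 1/600th of its
# volume at atmospheric conditions, the approximate figure quoted above.

LNG_SHRINK_FACTOR = 600  # approximate; the exact ratio varies with composition

def lng_volume(gas_volume_m3):
    """Approximate liquid volume (m^3) for a given gas volume (m^3)."""
    return gas_volume_m3 / LNG_SHRINK_FACTOR

if __name__ == "__main__":
    gas = 600_000.0  # cubic metres of gas at atmospheric conditions (illustrative)
    print(f"{gas:,.0f} m^3 of gas liquefies to roughly {lng_volume(gas):,.0f} m^3 of LNG.")
```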
A Closer Look at Natural Gas
What Is Natural Gas? - Natural gas is a fossil fuel that forms when animal or plant remains are exposed for thousands of years to extremely high levels of pressure and heat. The pressure and heat break down the energy that was originally stored in the plant or animal and lock it in the chemical bonds of the natural gas. Methane, the simplest alkane (consisting of one carbon atom and four hydrogen atoms), is the predominant compound in natural gas. Natural gas may also contain more complex alkanes such as ethane, propane, butane, isobutane, or pentane. It also commonly includes small percentages of carbon dioxide, hydrogen sulfide, and nitrogen. Natural gas is considered a nonrenewable resource; however, advances in drilling and discovery technologies have led many to believe that it is much more abundant than previously thought.
Where Is Natural Gas Found? - Natural gas may be found either associated with crude oil or in formations by itself. It may also be found in coalbeds. Advances in technology have allowed natural gas to be recovered from challenging environments such as shale formations. Natural gas is found all over the world, including in the United States, Canada, Russia, Qatar, Turkmenistan, and Iran, to name just a few major producing countries.
What Is Natural Gas Used For? - Natural gas is an important source of energy for residential heating and cooking. On the industrial level, it is commonly used for electricity generation and as chemical feedstock for the manufacture of certain products such as plastics and other commercially used organic chemicals. Natural gas is rising in prominence as a fuel alternative to sources derived from crude oil. Natural gas that is associated with crude oil and recovered as a byproduct may be burned off, or pumped back into wells, if it is not economic to transport to market.
How Is Natural Gas Processed? - Prior to its use in residential or industrial applications, natural gas must be processed to remove impurities such as carbon dioxide, nitrogen, helium, water and water vapor. Natural gas is compressed at compressor stations to facilitate its travel through the pipelines. Scrubbers are also typically present at these compressor stations to begin removing water and other impurities. More extensive processing is commonly done in the downstream sector at processing plants. Natural gas may also be processed to create natural gas liquids (NGLs), gas-to-liquids (GTLs), and liquefied natural gas (LNG).
A Closer Look at Natural Gas Liquids
What Are Natural Gas Liquids? - Natural gas liquids (NGLs) are hydrocarbons that have condensed from natural gas' gaseous state into a liquid state. This may occur naturally at the wellsite when pressure is reduced, or at the surface. It may also be intentionally induced by distillation and refrigeration in gas plants and refineries. Methane is not an NGL because it is lighter and its boiling point is lower. Common NGLs include ethane, propane, butane, isobutane, and pentane.
What Are NGLs Used For? - NGLs are used for a wide range of commercial and industrial purposes. They may be used as petrochemical feedstock, for heating, for cooking, or for gasoline blending. Common uses for the different types of NGLs:
- Ethane - Ethane can be used as petrochemical feedstock to yield ethylene, which is used in the manufacturing of plastics. It is also used in the production of anti-freeze, detergent, and other commercial products. Though it usually has more value as petrochemical feedstock, it may also be used directly for fuel depending on market conditions and its geographic location.
- Propane - Propane may be used either as petrochemical feedstock to create ethylene and propylene or for fuel and energy, again depending largely on the economics of transportation and market conditions. When it is used for energy it commonly powers stoves, gas grills, clothes dryers, generators, and water heaters. As a fuel source it may be used to power lawnmowers, outboard motors, and forklifts. It is frequently sold in compressed cylinders and often mixed with butane or other hydrocarbons.
- Butane & Isobutane - Butane has two isomers. The "normal" isomer consists of four carbon atoms joined together in a continuous, unbranching chain. In the other isomer, isobutane, three of the carbon atoms are joined to the fourth, resulting in a branched structure. The two isomers have different chemical and physical properties, which make them useful in different ways.
- Butane - The standard isomer of butane is commonly used as petrochemical feedstock to produce synthetic rubber. It can also be used to make liquefied petroleum gas (LPG), lighter fuel, or as a blending component in gasoline.
- Isobutane - Isobutane is used in refinery alkylation to make gasoline and for use in other chemical processes. It is also used in refrigerants and aerosols.
- Pentane - Pentane is commonly used as a blowing agent in the production of polystyrene foam and as a chemical solvent. It is also a common component of gasoline.
- Pentane Plus - NGLs heavier than pentane are commonly referred to as pentane plus, natural gasoline, or “debutanized” natural gasoline. As the name implies they are used in gasoline as well as ethanol blends, and in oil sands production. |
A parasitic infection, or parasitosis, is an infection caused by a parasite, a small organism that lives in or on you and survives from the nutrients in your body. Parasites may enter your body through food or water or through your skin, or they may live on your skin. While some parasites are harmless, others cause disease.
Overview and Symptoms
Examples of parasitic disease include malaria, Chagas disease, intestinal worms or lice.
Symptoms of parasitic infections vary widely, depending on the type of parasite involved. Symptoms can range from swelling, rash or itching to stomach ache, nausea, vomiting, diarrhea and fever. Left untreated, some parasitic infections can be very dangerous.
Parasitic infections can often be diagnosed with a blood test, a stool test, or endoscopy to look for the presence of a parasite in your intestines, and even through diagnostic imaging such as an X-ray or CT scan, to look for evidence of a parasite inside your body.
Treatment depends on the type of parasitic infection. In general, your doctor will prescribe medication to treat your infection. Other treatments can help reduce symptoms, like drinking plenty of water to reduce the risk of becoming dehydrated when your infection causes diarrhea. |
Galileo Galilei is one of the greatest physicists and astronomers in history. He was born in Italy in the year 1564. In the early 1600s, Galileo built an improved version of the newly invented spyglass, a device that let him view distant objects such as those in space.
Galileo, who started his astronomy career as a mathematics tutor, continued to sharpen his skills. It wasn’t long before his studies led him to build ever more powerful telescopes. His telescope could magnify distant objects up to eight or nine times. As such, he became one of the first people to see the craters of the moon and to observe sunspots. This significantly shaped his understanding of the universe and the world around him. Afterward, he made a series of astronomical discoveries which were strongly opposed. For instance, Galileo claimed that the moon’s surface wasn’t smooth. This was against the widespread belief of many, including the church.
Another famous discovery he made was the rings surrounding the planet Saturn. The rings, which appeared to him like lobes, drove him further and further into space exploration. Observing Jupiter, Galileo also discovered its four largest moons and recorded their movements.
During this period, most people, including the Roman Catholic Church, accepted the geocentric model, in which the solar system as a whole was centered on the earth. In other words, the sun and all the other planets and objects in the universe were thought to move around the earth. Nicolaus Copernicus had proposed a different, sun-centered model in 1543, but the earth-centered view remained the orthodox one. After making several discoveries, Galileo rejected the geocentric theory. He claimed that the solar system was heliocentric. In other words, it was sun-centered rather than earth-centered.
In the early 1600s, Galileo was called before the Roman Catholic Church in Rome and warned against spreading his controversial theory. Nevertheless, he continued to support his belief. After he published his Dialogue in 1632, Galileo was summoned again, found guilty of heresy, and put under house arrest for the rest of his life by the powerful Catholic Church. One of the main reasons the church sought to silence him was that his discoveries could change how people of the time understood the heavens. Besides, the findings stood a high chance of threatening church territorial holdings.
Moreover, Galileo's main intention was to bring everything into the light: he wanted to demonstrate that the Copernican, sun-centered model was correct. However, the Roman Catholic Church was not ready to accept such a challenge, since it could lose trust and credibility among its followers.
Because the church had great influence, it was considered part of the Roman Empire. In order to keep its massive following, the church had to proselytize the new world. By allowing Galileo to continue spreading his heliocentric discoveries, that role could be threatened. Therefore, the church had to find a way of silencing him. Last but not least, the church also wanted to maintain its moral authority over the empire. |
So far our galactic adventures have included landing men on the moon, taking pretty pictures of Saturn, and roaming the surface of Mars. So what’s next on NASA’s to-do list? Perhaps snagging an asteroid to keep in our own backyard.
Researchers from the Keck Institute for Space Studies proposed a plan [pdf] in April to bring an asteroid into the moon’s orbit so astronauts can study it up close. How big an asteroid are we talking? Researchers said the sweet spot would be right around 500 tons and 20 feet in diameter—big enough to locate but small enough to transport. After finding such an asteroid, researchers want to send a robotic spacecraft to bag and drag the asteroid into the moon’s orbit. The asteroid would in effect become the moon’s own mini moon. The round-trip journey could take up to a decade, which would give NASA enough time to set up a manned mission to the asteroid to study it up close and personal. So far NASA has not turned the proposal down.
The proposed mission’s price tag is a couple billion dollars, comparable to that of the Curiosity mission. But this is a small price to pay for the lofty benefits the research team says the mission will have for the future of space travel: A) mining asteroids for valuable minerals like platinum, B) using asteroids’ oxygen, carbon and hydrogen to refuel spacecraft in space, C) determining how to deflect asteroids so they don’t smash into Earth, and D) bringing humans one step closer to creating permanent settlements in space.
Most importantly, the report states, “Such an achievement has the potential to inspire a nation.” Seems like the rest of the world would be pretty excited, too. |
Ancient Indian History – Indus Valley Civilization, Mohenjo Daro, Harappan Culture
The past of India goes back several thousand years. We learn about it from the evidence our ancestors left behind, even from times before paper was made.
- 1 Unique Feature of the Indus Valley Civilization
- 2 Indus Valley Civilization History
- 3 The lives of the people in Indus Valley Civilization
- 4 Indus Valley Civilization Notes
- 5 Nomenclature of Indus Valley Civilisation
- 6 Geographical Spread
- 7 Some New Discoveries
- 8 Who Built the Indus Valley Civilization
- 9 Indus Cities
- 10 Town Planning
- 11 Agriculture
- 12 Domestication of Animals
- 13 Trade
- 14 Towns Associated with Different Industries
- 15 Art and Craft
- 16 Religious Practices
- 17 Burial Practices
- 18 Script
- 19 Decline of the Civilisation
- 20 Important Harappan Sites
- 21 Possible Causes of Disappearance
- 22 Indus Valley Civilization PDF Download
Indus Valley Civilisation is a Bronze Age civilisation that was located in the north-western region of the Indian subcontinent. Since many of the Harappan settlements are found on the vast plain of the river Saraswati, which is dry today, it is also called the Indus-Saraswati Civilisation. The Harappan culture was spread over many parts of India, such as Sindh, Baluchistan, Punjab and Haryana, western Uttar Pradesh, Jammu, Rajasthan, Gujarat and northern Maharashtra. The important Harappan sites are Kotoji (located on the left bank of the river Indus opposite Mohenjodaro), Harappa, Kalibangan, Mehrgarh, Mundigak, Damb Dadaat, Amri, Gumla, Rehman Dheri, Dholavira and Lothal.
The Harappan culture lasted for more than a thousand years. The world’s earliest urban civilization came to an end around 1300 BC. Natural calamities like floods, terrible epidemic diseases, and attacks by wild animals are the suspected reasons for its decline. Although some aspects of the Harappan culture continued after its decline, those who succeeded the Harappans knew nothing of city life. Thus the decline of the Harappan culture was a negative event in the history of India.
Unique Feature of the Indus Valley Civilization
- The structure of the houses has toilet connected to a centralized system.
- Sanitation system is planned and organized by a centralized government.
- Uniform size of bricks everywhere
- Town Planning
Indus Valley Civilization History
- Mature period began around 2700 BCE.
- Harappa was established around 3300 BCE.
- Farming and agriculture became the main economic activity due to scarce metal resources.
- Trade with the Sumerian civilization progressed along the shores of the Arabian Sea and the Persian Gulf.
- Most artifacts unearthed were toys, indicating that they liked entertainment and loved to play.
- For a while, archaeologists thought that the cities of the Indus Valley Civilization were populated by children.
- The dissolution of the IVC remains a mystery.
The lives of the people in Indus Valley Civilization
- AGRICULTURE was their main economic activity.
- They had irrigation systems.
- Pottery and Jewellery making
- Houses were made of clay bricks
- Their religion was animism and polytheism. They worshipped many gods, some in animal form, such as the revered BULL.
- The upper social classes lived inside the citadels; the farmers and traders lived outside the citadel.
Indus Valley Civilization Notes
- Indus Civilisation is one of the four earliest civilisations of the world along with the civilisations of Mesopotamia (Tigris and Euphrates), Egypt (Nile) and China (Hwang Ho).
- The civilisation forms part of the proto-history of India and belongs to the Bronze Age.
- The most accepted period is 2500-1700 BC (by Carbon-14 dating).
- Dayaram Sahni first discovered Harappa in 1921.
- RD Banerjee discovered Mohenjodaro or Mound of the Dead in 1922.
Nomenclature of Indus Valley Civilisation
- Indus Valley Civilisation as it flourished along the Indus river.
- Harappan Civilisation named by John Marshall after the first discovered site, Harappa. Harappa comes in the Pakistan now.
- Saraswati-Sindhu Civilisation as most of the sites have been found at the Hakra-Ghaggar river.
- The civilisation covered parts of Sind, Baluchistan, Afghanistan, West Punjab, Gujarat, Uttar Pradesh, Haryana, Rajasthan, Jammu and Kashmir, Punjab and Maharashtra.
- Mundigak and Shortughai are two sites located in Afghanistan.
- West: Sutkagendor on the Makran coast (Pak-Iran border); East: Alamgirpur in Uttar Pradesh (River Hindon).
- North: Manda in Jammu (River Chenab); South: Daimabad in Maharashtra (River Pravara). These are the major boundary sites.
Some New Discoveries
- Ganverivala in Pakistan by Rafeeq Mugal.
- Rakhigarhi in Haryana by Rafeeq Mugal
- Dholavira, on the bank of the river Luni in Gujarat, excavated by RS Bist and JP Joshi, is the largest excavated site in India.
Who Built the Indus Valley Civilization
|Site||Region||River||Year of Discovery||Discovered by|
|Harappa||Pakistani Punjab||Ravi||1921||Daya Ram Sahni|
|Ropar||Indian Punjab||Sutlej||1953||YD Sharma|
|Alamgirpur||Uttar Pradesh||Hindon||1974||YD Sharma|
Town Planning
- Town planning was not uniform. A common feature was the grid system, i.e. streets cutting across one another at right angles, dividing the town into large rectangular blocks.
- The towns were divided into two parts : upper part or citadel and lower part.
- The fortified citadel on the western side housed public buildings and members of the ruling class. It was the venue for religious activities, public gatherings and important administrative functions.
- Below the citadel on the Eastern side lay the lower town inhabited by the common people. The streets are done in grid pattern.
- Underground Drainage System connected all houses to the street drains, which were made of mortar, lime and gypsum. They were covered with either bricks or stone slabs and equipped with manholes. This shows a developed sense of health and sanitation.
- The Great Bath (Mohenjodaro) was used for religious bathing. Steps at either end lead down into the tank, which had an inlet for water and an outlet to drain it. There were changing rooms alongside. No stone was used in its construction.
- The Granaries (Harappa) 6 granaries in a row were found in the Citadel at Harappa.
- Houses were made of burnt bricks. They were often two or more storeys high, varied in size, and were built around a square courtyard onto which a number of rooms opened. Windows did not face the main streets. The houses had tiled bathrooms.
- Lamp posts were erected at regular intervals. This indicates the existence of street lighting.
Agriculture
- Agriculture was the backbone of the civilisation. The soil was fertile due to inundation by the river Indus and regular flooding.
- The Indus people sowed seeds in the flood plains in November, when the flood water receded and reaped their harvests of wheat and barley in April, before the advent of next flood.
- They used wooden plough share (ploughed field from Kalibangan) and stone sickles for harvesting.
- Gabarbanda or nalas enclosed by dam for storing water were a feature in parts of Baluchistan. Grains were stored in granaries.
- Crops produced: wheat, barley, dates, peas, sesamum, mustard, millet, ragi, bajra and jowar. At Lothal and Rangpur rice husks were found.
- They were the first in the world to produce cotton, which the Greeks called Sindon, derived from Sind. A fragment of woven cotton cloth was found at Mohenjodaro.
- Well irrigation is evident from Allahdino, and dams and irrigation canals from Dholavira. Sugarcane was not known to the Indus people.
Domestication of Animals
- Animal rearing was practised, mainly of the humped bull. They domesticated buffaloes, oxen, sheep, asses, goats, pigs, elephants, dogs, cats etc.
- Camel bones are reported at Kalibangan and remains of horse from Surkotada.
Trade
- Agriculture, industry and forest produce provided the basis for internal and external trade.
- Trade was based on barter system. Coins are not evident, bullock carts, pack animals and boats were used for transportation.
- Weights and measures were made of limestone, steatite etc. Generally in cubical shape. They were in multiple of 16.
- Several sticks inscribed with measure marks have been discovered. It points that linear system of measurement was in use.
- Foreign trade flourished with Mesopotamia or Sumeria (Iraq), Central Asia, Persia, Afghanistan and Bahrain.
- Sumerian texts refer to trade with Meluhha (Indus), while Dilmun (Bahrain) and Makan (Makran coast) were two intermediate stations.
Towns Associated with Different Industries
- Daimabad Bronze industry
- Lothal Factory for stone tools and metallic finished goods.
- Balakot Pearl finished goods, bangle and shell industry
- Chanhudaro Beads and bangles factory
- Lothal (artificial dockyard) – Surkotada, Sutkagendor, Prabspattan, Bhatrao, Kalibangan, Dholavira, Daimabad were coastal towns of the civilisation.
- Major Exports: agricultural products, cotton goods, terracotta figurines, pottery, steatite beads (from Chanhudaro), conch-shell (from Lothal), ivory products, copper etc.
|Imported Material||Source|
|Gold||Kolar (Karnataka), Afghanistan, Persia (Iran)|
|Silver||Afghanistan, Persia (Iran), South India|
|Copper||Khetri (Rajasthan), Baluchistan, Arabia|
|Lapis Lazuli and Sapphire||Badakhshan (Afghanistan)|
|Steatite||Shaher-i-Sokhta, Kirthar Hills|
Art and Craft
- Harappans used stone tools and implements and were well acquainted with bronze. Bronze was made by mixing copper (from Khetri) with tin.
- Boat making, jewellery of gold, silver precious stone and bead making was practiced. Cotton fabrics were used in summer and woollen in winter.
- Both men and women were very fond of ornaments and dressing up.
- Pottery both plain (red) or painted (red and black) pottery was made. Pots were decorated with human figures, plants, animals and geometrical patterns and ochre was painted over it.
- Seals were made of steatite. Pictures of the one-horned bull (the most common), buffalo, tiger, rhinoceros, goat and elephant are found on the seals. They marked ownership of property.
- Mesopotamian seals were found at Mohenjodaro and Kalibangan, and a Persian seal at Lothal. The most important one is the Pashupati seal.
- Metal images: a bronze image of a nude woman dancer (identified as a devadasi) and a stone steatite image of a bearded man (both obtained from Mohenjodaro).
- Terracotta figurines Fire baked clay was used to make toys, objects of worship, animals (monkey, dogs, sheep, cattle, humped and humpless bulls), cattle toys with movable head, toy-carts, whistles shaped like birds and both male and female figurines.
- They played dice games. Gambling was a favourite pastime. There is no clear evidence of music.
Religious Practices
- Chief Female Deity: a terracotta figure in which a plant is shown growing out of the womb of a woman represents the Mother Goddess (Goddess of Earth).
- Chief Male Deity: Pashupati Mahadeva (Proto-Shiva), represented on seals as sitting in a yogic posture on a low throne, having three faces and two horns. He is surrounded by an elephant, a tiger, a rhino and a buffalo, and two deer appear at his feet.
- Lingam and yoni worship was prevalent. Trees (pipal), animals (bulls, birds, dove, pigeon), the unicorn and stones were worshipped. No temple has been found, though idolatry was practised.
- Indus people believed in ghosts and evil forces and used amulets as protection against them. Fire altars are found at Lothal and Kalibangan.
- Evidence of snake worship is also found.
Burial Practices
- The general practice was extended inhumation in a north-south direction.
- Mohenjodaro: three forms of burial (complete, fractional and post-cremation).
- Kalibangan: two forms of burial (circular and rectangular graves).
- Surkotada Pot-burial, Dholavira-Megalithic burial.
- Lothal Double burial
- Harappa: East-West axis; R-37 and H cemetery.
Script
- It was pictographic in nature. The fish symbol is the most represented.
- Overlapping of the letters show that it was written from right to left in the first line and then left to right in the second line. the style is called Boustrophedon.
Decline of the Civilisation
The Harappan culture flourished until about 1800 BC, and then it began to decline. There is no unanimity among historians regarding the reason for the decline of this urban civilisation. There are many different theories that seek to explain the decline of the Indus culture.
|Theory||Reasons for Decline|
|Gorden Childe HT, Lambrick Kur, Kendey Orell and AN Ghosh||External aggression, Unstable river systems, Natural calamity and Climate change|
|R Mortimer Wheeler, Robert Raiker, Sood & Aggarwal||Aryan invasion, Earthquake, Dryness of river and Ecological imbalance|
Important Harappan Sites
|Harappa (Gateway city)||2 row of granaries with brick platform, work men’s quarter, stone symbol of lingam and yoni, virgin-Goddess, clay figures of mother Goddess, wheat and barley in wooden mortar, copper scale and mirror, vanity box, dice. Sculpture Dogs chasing a deer (bronze), nude male and nude dancing female (stone), red sand stone male torso.|
|Mohenjodaro (Mound of the Dead)||The great bath, The great granary (largest building), multi-pillared assembly hall, college, proto-Shiva seal, clay figures of mother Goddess, Dice. Sculpture Bronze dancing girl, steatite image of bearded man.|
|Kalibangan (Black Bangle)||Decorated bricks, bangle factory, wheels of a toy cart, wells in every house. Remains of a massive brick wall around both the citadel and lower town (the lower town of Lothal is also fortified), bones of camel, tiled floor. Mother Goddess figurines are absent here.|
|Chanhudaro (Lancashire of India)||Inkpot, lipstick, carts with seated driver, ikkas of bronze, imprint of a dog's paw on a brick. The only city without a citadel.|
|Daimabad||Bronze images of Charioteer with chariot, ox, elephants and rhinoceros.|
|Amri||Actual remains of Rhinoceros.|
|Alamgirpur||Impression of cloth on a trough|
|Lothal (Manchester of Indus Valley Civilisation)||Rice husk, fire altars, grinding machine, tusks of elephant, granary, terracotta ship, houses with entrance on main streets, impressions of cloth on some seals, modern day chess, instrument for measuring 180, 90 and 45 degree angles.|
|Ropar||Buildings made of stone and soil. Dog buried with human. One inscribed steatite seal with typical Indus pictographs, oval pit burials.|
|Banawali||Oval shaped settlement, only city with radial streets, lack of systematic drainage pattern. Toy plough, largest number of barley grains.|
|Surkotada||Both citadel and lower town fortified with a stone wall. First actual remains of horse bones. Cemetery with four pot burials.|
|Dholavira||Only site to be divided into 3 parts. Giant water reservoir, unique water harnessing system, dams and embankments, a stadium, rock-cut architecture.|
|Sutkagendor||Two fold division of township-Citadel and lower town.|
Possible Causes of Disappearance
- Famine, hunger, drought
- Ecological factors/ Natural disasters (flood, typhoon, earthquake, volcanic eruption etc.)
- Periodic flooding of the Indus leading to desertion of cities.
- Invasion by another group, the Aryans.
Indus Valley Civilization PDF Download
The Indus Valley Civilization notes above are also available as a downloadable PDF.
Indus Valley Civilization Multi Choice Questions (MCQ)
Practice questions on the Indus Valley Civilization are also available.
All the best for your upcoming exam! |
Seaweed and their environment
Growing in a different medium, they have a unique physiology and chemistry, and seaweed descriptions use a different set of words. Seaweeds are among the most ancient plants on earth; algae are the source of much of Earth's oxygen. Algae are also very important ecologically because they are the beginning of the food chain for other animals. They carry the goodness of the sea to your blood!
Some seaweeds are microscopic, such as the phytoplankton that live suspended in the water column and provide the base for most marine food chains. Some are enormous, like the giant kelp that grow in abundant “forests” and tower like underwater redwoods from their holdfasts at the bottom of the sea. Most are medium-sized and come in colors of red, green and brown. Their essence includes tenacity, flexibility, endurance, cyclical balance and oneness with all energies.
Seaweed - a misnomer
The name “seaweed” is really a misnomer, because a weed is a plant that spreads so profusely it can harm the habitat where it takes hold. More recently, as their benefits are getting better understood, seaweeds are also called 'sea vegetables' or 'ocean vegetables'.
They are THE OTHER SEAFOOD: we are used to the animal side of seafood, but what about the vegetation?
For centuries they have had an esteemed place in the diet of coastal communities around the world, and they are a huge part of the diet in Asia, especially in Japan (over 15%). Seaweed is quickly gaining popularity elsewhere, thanks to the availability of sushi, which has broken down some of the reservations the uninitiated might once have had.
Sea plants' physiology is different from that of their land counterparts:
- do not have roots; they are anchored to rocks or the seabed by a holdfast
- do not have a stem or branches, but a stipe
- do not have leaves; instead they have blades (cut blades are called fronds)
- bladders are present in some species
Wakame parts are used for various purposes in a wide range of dishes. It is important to note that the vocabulary applied to sea plants is quite different from that of land plants. Knowing it helps you better understand the information on the label.
Sea plants' chemistry is also very different:
- best source of iodine (& essential co-factors) in nature
- they are much more concentrated in nutrients (10-20 times)
- they have all 5 essential nutrients necessary to a healthy diet: protein, carbs, fat (albeit extremely low), chelated minerals and all vitamins (incl. B12), as well as prebiotic fibre
- they have unique compounds that are being studied for their tremendous health-giving properties
- at the extreme of the alkaline food scale |
A quarry is a type of open-pit mine in which dimension stone, rock, construction aggregate, riprap, sand, gravel, or slate is excavated from the ground. The word quarry can also include the underground quarrying for stone, such as Bath stone .
Apr 24, 2017· How It Is Extracted. Granite usually occurs in large deposits, many times referred to as slabs, throughout the world. Mining operations use different methods of cutting to extract the different deposits from the ground in places called quarries. These slabs are then polished, put on trucks and sent to fabricators.
How is Stone Quarried. The blocks can then be drilled from the bench wall. Blocks of a given type of stone usually have a fairly uniform size, due to the size of the processing equipment used. Granite blocks usually weigh between 38-42,000 pounds, while lighter marble and travertine blocks weigh between 15-25,000 pounds.
Figure 1. Process flow diagram for granite quarrying operations. As shown in Figure 1, the first step in quarrying is to gain access to the granite deposit. This is achieved by removing the layer of earth, vegetation, and rock unsuitable for product—collectively referred to as
Quarrying Process. The mantle can be raised or lowered within the concave, allowing the gap, and therefore the size of the crushed product, to be varied to a limited degree. If the crusher is jammed by a stray bit of steel, e.g., a digger bucket tooth, the mantle automatically moves down to …
|
5.2 How is English Related to Other Languages?
Who is the most recent common ancestor of you and your siblings?
Your first cousins? Your second cousins?
Distinguish between the Kurgan and Anatolian hypothesis of Indo-European language diffusion
In terms of:
Geographical origin and cultural group livelihood strategy
Anatolian hypothesis:
Indo-European originated in Anatolia (modern-day Turkey) around 7,000 BCE
The IE cultural group created the agricultural revolution and spread into Europe
Kurgan hypothesis:
Indo-European originated near the border between present-day Russia and Kazakhstan
Indo-Europeans arrived in Europe around 3,000 BCE on horse-drawn war chariots and attacked the agriculturalists living there
Draw a family tree for English that includes group, branch, and family
How do the Anatolian and Kurgan hypotheses differ in terms of origin and livelihood strategy?
Which hypothesis for the diffusion of Indo-European do you find more compelling? Why?
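As one possible answer to the family-tree prompt above, here is a minimal Python sketch that records English's lineage as a nested structure and prints it one level per line. The levels follow the terms used in these notes (family, branch, group), matching the Germanic groups described in the slides that follow; the exact layout is an illustration, not the only valid way to draw the tree.

```python
# Sketch: English's place in the Indo-European tree, using the levels
# named in these notes (family -> branch -> group -> language).
english_lineage = {
    "family": "Indo-European",
    "branch": "Germanic",
    "group": "West Germanic (Low Germanic subgroup)",
    "language": "English",
}

def print_tree(lineage):
    """Print the lineage one level per line, indented by depth."""
    for depth, level in enumerate(["family", "branch", "group", "language"]):
        print("  " * depth + f"{level}: {lineage[level]}")

if __name__ == "__main__":
    print_tree(english_lineage)
```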
West Germanic Group – further divided by elevation (subgroups)
High Germanic – origin in southern mountains
Basis for modern standard German
Low Germanic – English, Dutch, Afrikaans, etc
North Germanic Group – Scandinavian languages descended from Old Norse
Spoken in Scandinavia before 1000 CE
Variant of a language that a country’s intellectual or political elite seek to promote as the norm
Standard English in the US
The King’s English AKA Received Pronunciation in England
Commonly used by politicians, public media figures, etc
Indic (Eastern) Group
India – 29 languages spoken by over 1 million people each
Hindi (a variety of Hindustani)
The British chose it for government business in the late 1800s
22 scheduled languages – government is obligated to encourage their use
Many spoken varieties of Hindi, but only one written tradition – Devanagari
Urdu, written in the Arabic script, reflects its speakers’ Muslim identity
Iranian (Western) Group – written in the Arabic script
Persian in Iran
Pashto in E Afghanistan and W Pakistan
How do Modern Standard English and Texas regional English differ?
How are Urdu and Persian related?
Should the US make English the official language? Why or why not?
Should more than one language be made official?
East Slavic and Baltic Groups
Russian – spoken by more than 80% of Russian people
Increased in global importance with the rise of the USSR post-WWII
Forced all citizens to learn Russian to promote unity
Pressure to speak local languages contributed to the fall of the USSR
Still used for common communication between post-soviet states
Russian is one of the six official languages of the UN: Arabic, Chinese, English, French, Russian, Spanish
West and South Slavic Groups
Most spoken: Polish, Czech, and Slovak
Czech and Slovak are mutually intelligible
During the communist era, as Czechoslovakia, the government tried to balance the Czech and Slovak languages
Twice as many speakers of Czech
In 1993 Slovakia split from the Czech Republic
Due in part to perceived domination of Czech culture in Czechoslovakia
One language, called Serbo-Croatian before the breakup of Yugoslavia
Speakers now call it Bosnian, Croatian, Montenegrin, or Serbian to assert their identity
Very minor linguistic differences
Montenegrin and Serbian use the Cyrillic alphabet
What do Russian and Swahili have in common?
Considering their linguistic sameness, should the most similar Slavic languages be consolidated?
Evolved from Latin, spoken by the Romans 2,000 years ago
Vulgar Latin – Latin spoken by common people in the provinces
Spanish and French are official languages of the UN
Less spoken romance languages
Catalán in Andorra and Catalonia, Spain
Sardinian – mix of Italian, Spanish, and Arabic
Sardinia = small Mediterranean island
Three tribes invaded the British Isles
Angles, Jutes, Saxons
Now called Anglo-Saxons for the two largest
English evolved from the language of these tribes
The Vikings also added some words when they attacked
Then the Normans, from France, successfully conquered England in 1066
The mix of English and French formed what would become modern English
Common words from German: sky, horse, woman, man, etc
Courtly words from French: celestial, equestrian, feminine, masculine
English colonies all spoke English
Including a small colony in Jamestown, VA!
Ultimately English would become the language of North America when Britain defeated France for control
Which group was displaced west when the Germanic invaders came to England? |
Fast urban growth is a modern phenomenon. Until modern times, few settlements reached a population of more than a few thousand residents. The first urban settlement to reach a population of one million was London, by about A.D. 1810. By 1982 nearly 175 cities in the world had crossed the one million population mark. At present, 54 per cent of the world’s population resides in urban settlements, compared to only 3 per cent in the year 1800.
Classification of Urban Settlements
The interpretation of urban areas differs from one country to another. Some of the general bases of classification are the size of the population, occupational composition and administrative setup.
Population size is an important criterion adopted by most countries to define urban areas. The lower limit of the population size for a settlement to be designated urban is 1,500 (Colombia), 2,000 (Argentina and Portugal), 2,500 (the U.S.A. and Thailand), 5,000 (India) and 30,000 (Japan). Besides the size of the population, the density of population and the share of non-agricultural workers are taken into consideration in India. Countries with a low density of population may prefer a lower number as the cut-off value compared to densely populated countries. In Denmark, Finland and Sweden, all places with a population of 250 persons or more are called urban. The minimum population for a town is 300 in Iceland, whereas in Venezuela and Canada it is 1,000 persons.
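To make the range of national definitions above concrete, here is a small Python sketch that looks up a country's minimum urban population threshold and classifies a settlement accordingly. The thresholds are the ones quoted in this section; real definitions also involve density and occupational criteria, which the sketch deliberately ignores.

```python
# Sketch: classify a settlement as urban or rural by population size alone,
# using the national thresholds quoted in the text (other criteria ignored).
URBAN_THRESHOLDS = {
    "Denmark": 250, "Finland": 250, "Sweden": 250,
    "Iceland": 300,
    "Venezuela": 1000, "Canada": 1000,
    "Colombia": 1500,
    "Argentina": 2000, "Portugal": 2000,
    "USA": 2500, "Thailand": 2500,
    "India": 5000,
    "Japan": 30000,
}

def classify(country, population):
    """Return 'urban' or 'rural' based on the country's population cut-off."""
    threshold = URBAN_THRESHOLDS[country]
    return "urban" if population >= threshold else "rural"

if __name__ == "__main__":
    print(classify("India", 4800))   # rural: below the 5,000 cut-off
    print(classify("Sweden", 4800))  # urban: well above the 250 cut-off
```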
In some countries, such as India, occupational composition is taken as a criterion in addition to the size of the population when designating a settlement as urban. Likewise, in Italy, a settlement in which more than 50% of the economically productive population is engaged in non-agricultural pursuits is called urban. India has set this criterion at 75%.
The administrative structure is a criterion for assigning a settlement as urban in some countries. For example, in India, a settlement of any size is categorised as urban, if it has a municipality, Notified Area Council or Cantonment Board. Similarly, in Latin American nations, such as Brazil and Bolivia, any administrative capital is recognised as urban irrespective of its population size.
The location of urban centres is analysed with reference to their function. For example, the siting requirements of a holiday resort are quite distinct from those of an industrial town, a seaport or a military centre. Strategic towns require sites offering natural defence; mining towns demand the proximity of economically valuable minerals; industrial towns usually require local energy supplies or raw materials; tourist centres need an attractive landscape, a marine seashore, a spring with medicinal water or old monuments; ports need a harbour, and so on. The sites of the earliest urban settlements were chosen for the availability of water, fertile land and building materials. Today, while these factors remain valid, modern technology plays a vital role in establishing urban settlements far from the source of these materials: piped water can be supplied to a distant settlement, and building material can be carried from long distances. Apart from the site, the situation plays an important role in the development of towns. Urban centres which are positioned close to an important trade route have undergone rapid development.
Functions of Urban Centres
The earliest towns were centres of administration, trade, industry, religious importance and defence. The importance of defence and religion as distinguishing functions has diminished in general, but other functions have entered the list. Today, several new functions, such as residential, recreational, transport, mining, manufacturing and the most recent activities associated with information technology, are conducted in specialised towns. Some of these functions do not necessarily require the urban centre to have any significant relationship with its neighbouring rural areas. Even though towns perform multiple functions, we usually refer to their principal function: for example, Sheffield as an industrial city, Chandigarh as an administrative city, London as a port city and so on. Large cities have a rather greater variety of functions.
Furthermore, all cities are dynamic and over time may evolve new functions. Most of the early 19th-century fishing ports in England have now also grown as tourist centres. Many of the old market towns are now known for manufacturing activities. Towns and cities are categorised into the following classes.
Administrative Towns
National capitals, which house the administrative offices of central governments, such as New Delhi, Canberra, Washington D.C., London, Beijing and Addis Ababa, are called administrative cities. Provincial (sub-national) capitals can also have administrative functions, for example, Albany (New York), Chennai (Tamil Nadu) and Victoria (British Columbia).
Commercial Towns and Trading
Agricultural market towns, such as Kansas City and Winnipeg; financial and banking centres like Amsterdam and Frankfurt; large inland centres like St Louis and Manchester; and transportation links such as Baghdad, Agra and Lahore have been significant trading centres.
Cultural Towns
Places of pilgrimage, such as Mecca, Jagannath Puri, Varanasi and Jerusalem, are considered cultural cities. These urban hubs are of great religious importance.
Additional functions which the cities offer are health and sport (Miami and Panaji), manufacturing (Pittsburgh and Jamshedpur), mining and quarrying (Broken Hill and Dhanbad) and transportation (Singapore and Mughal Sarai).
CLASSIFICATION OF TOWNS BASED ON FORMS
An urban settlement may be linear, star- or crescent-shaped, or square. In fact, the form of the settlement, the style of buildings and architecture and other constructions are a result of its historical and cultural traditions. In Addis Ababa, for example, the roads radiate from the government headquarters through roundabouts such as Piazza, Arat Kilo and Amist Kilo. Mercato has markets which grew with time and is said to be the largest market between Johannesburg and Cairo. A multi-faculty university, several good schools and medical colleges make Addis Ababa a good educational centre. It is the terminal station of the Djibouti-Addis Ababa rail route. Bole airport is a comparatively new airport. The city has seen rapid growth because of its multifunctional character and because it is a large nodal centre established in the centre of Ethiopia. |
Indian history dates back to 3000 BC. Excavations in Punjab and Gujarat reveal that the Indus Valley civilisation was a highly developed urban civilisation. In fact, the two cities of Harappa (on the river Ravi) and Mohenjodaro (on the Indus) are known to have been built on a similar plan. A new wave of urbanisation later took place along the Ganges around 1500 BC. This has been recorded in the Rig Veda, the earliest known literary source composed in this period that sheds light on India’s past.
The Great Dynasties
By the 6th century BC, the Magadh rulers dominated the northern plains. It was also the time when new thinking emerged in the form of Buddhism and Jainism to challenge Hindu orthodoxy. The Magadh rule was followed by the rule of Chandragupta Maurya (322-298 B.C.), one of India’s greatest emperors. The Mauryan reign peaked under Ashoka the Great, who extended his empire from Kashmir and Peshawar in the north to Mysore in the south and Orissa in the east. Not only was Ashoka a great ruler, he was also one of the most successful propagators of Buddhism in the country. After Ashoka’s death in 232 B.C. the empire began to disintegrate, and the country was repeatedly raided and plundered by foreign invaders, leaving India disunited and weak for the next 400 years. Stability returned with the reign of Chandragupta II (380-412 A.D.). His rule is considered the golden period in Indian history, when art and culture flourished and the country prospered.
India is the 7th largest country in the world. It has a total area of 3,166,414 square kilometres. Situated north of the equator, the country lies between 68°7′ and 97°25′ east longitude and between 8°4′ and 37°6′ north latitude. The second most populous country in the world, it is bordered by countries such as Nepal, Bhutan, China, Pakistan, Bangladesh and Myanmar on land and surrounded by the Indian Ocean, mainly the Bay of Bengal, the Laccadive Sea and the Arabian Sea. The highest point of India is Kangchenjunga (8,598 m/28,208.7 ft) and the lowest point is Kuttanad (−2.2 m/−7.2 ft).
India is divided into 6 physiographic regions:
- The Northern Mountains: This region consists of the Himalayas, the world’s highest mountain range.
- The Peninsular Plateaus: The largest and oldest physiographic region comprises the Vindhya Range, the Malwa Plateau, the Deccan Plateau, the Chota Nagpur Plateau, the Satpura Range, the Aravali Range, the Western Ghats and the Eastern Ghats.
- Indo-Gangetic Plains: Also known as the Great Plains, this region is dominated by three main rivers: the Indus, the Ganges and the Brahmaputra. It runs parallel to the Himalayas and covers 700,000 sq km in area.
- Thar Desert: It is one of the largest deserts in the world, with an area of about 200,000 square kilometres. Most of the desert is located in Rajasthan. It extends into Pakistan as well.
- The Coastal Plains: This region is composed of Eastern Coastal Plain, which stretches from Tamil Nadu in the south to the West Bengal in the east, and Western Coastal Plain, which lies between the Western Ghats and the Arabian Sea.
- The Islands: Two major island groups the Lakshadweep Islands off the coast of Kerala in the Arabian Sea and the Andaman and Nicobar Islands in the Bay of Bengal near the Burmese coast and other islands make up this region. Barren Island, which is the only active volcano in India, is situated in the Andaman Islands. |
Plovers are a group of small shorebirds that commonly live on beaches and in tidal zones. There are over 60 different species in the Plover, or Charadriinae, subfamily. Researchers divide the birds in the subfamily into 8 different taxonomic genera, which also contain killdeers, wrybills, and dotterels. Read on to learn about the Plover.
Description of the Plover
These birds vary from species to species, each of which has different plumage with a variety of colors and patterns. For the most part, these birds are relatively small, and usually have light or dark colored feathers to match the beach or rocks.
They usually have moderately long legs, and short beaks. The largest species is less than a foot long, and most weigh just a few ounces.
Interesting Facts About the Plover
There are many different members in this bird family, and each is unique in its own way. Learn more about some individual species below.
- Wrybill – This species, also known as the “ngutuparore,” lives in New Zealand. Wrybills have rather odd beaks, which bend to the right about halfway down the bill, hence the name “wrybill.” They are the only species of bird with a bill that naturally bends sideways.
- Piping Plover – This little bird lives along the coasts of North America. Despite its small size, the Piping Plover makes impressive migrations across multiple states every year! Just like sea turtles, Piping Plovers return to the same beaches year after year to reproduce.
- Mountain Plover – Unlike most of these birds, which usually live along beaches and shores, Mountain Plovers live in meadows and fields at high elevations. However, they do not live in the mountains themselves, but rather in the foothills and tablelands. These little birds love to nest in prairie dog towns!
- Hooded Dotterel – This species lives in Australia and Tasmania, primarily along shorelines and some inland waterways. They have dark brown feathers on their heads, bright red beaks, and the skin around their eyes is red as well. The IUCN lists this bird as Vulnerable, primarily due to habitat destruction and pollution.
Habitat of the Plover
Though some of these birds live in different habitats, most species are shore birds. They live along beaches, sand dunes, estuaries, tide pools, and more. Some species also inhabit farms, particularly flooded pastures or lakes and ponds.
Some species also inhabit tundra, meadow, grassland, and other habitat types. Each species has different preferences, though the habitats of many species overlap with one another.
Distribution of the Plover
Different species live virtually worldwide, save for some of the most extreme environments. They inhabit various regions across North, Central, and South America, including the polar regions in the north. They also live throughout areas of Eurasia, Africa, Australia, and the surrounding islands.
Some species live across vast regions, while others only live in small pockets of habitat. Each species’ distribution is different, but many species have overlapping ranges and populations.
Diet of the Plover
Plovers eat a variety of small organisms, primarily invertebrates. The vast majority of their diet consists of worms, small insects, and crustaceans. Because their beaks are quite short, they usually hunt by running along and probing into the sand when they spot a potential meal.
Some common food items include snails, worms, flies, shrimp, crabs, and more. Each species eats different types of food, based on what is available in their region.
Plover and Human Interaction
Human interaction impacts different species of Plovers to varying degrees. Some species of Plovers are plentiful and common. Those species that live across wide ranges tend to have stronger populations than species that live only in a small region. Some of the most pressing dangers to these birds are habitat destruction, pollution, and hunting by feral animals like cats, rats, and dogs.
Humans have not domesticated Plovers in any way.
Does the Plover Make a Good Pet
No, these little birds do not make good pets. They are wild animals, and are not friendly towards humans. In most places, it is illegal to own, capture, harass, or kill a Plover.
In zoos, these birds live in enclosures with shallow waters and “beaches” to forage on. Their habitats are usually sandy, and many of their habitats contain a variety of other shorebirds.
The exact specifics of their care vary from species to species. Zookeepers feed them a variety of foods based on their natural diet. For example, a Plover that lives along the beach might have a diet of small fish, krill, shrimp, and more.
Behavior of the Plover
Different Plover species have different social needs. Some species are social, and congregate in groups known as “flocks.” Some flocks contain just a handful of birds, while others reach numbers in the hundreds.
Most of these little birds spend their time running near the water in search of food. They can fly, but usually do not unless they are in danger. When chicks that cannot yet fly are threatened, the parents distract the predator by pretending to have an injury and leading it away from the chicks.
Reproduction of the Plover
Most Plovers nest on the ground, which is where their camouflaged feathers come in handy. Incubation periods and fledging rates vary from species to species.
Most species do not build nests, but simply make a small indent in the sand and lay their eggs in that. Other species line their nests with stones or pebbles. The chicks can walk and follow their parents as soon as they hatch, but the parents must protect them from predators and lead them to food. |
A wide range of skills is taught, focusing on Reading Comprehension and Grammar for Grades Four through Six. With a total of 522 engaging lessons, your child will not only cover critical Reading Comprehension and Grammar material, but will remain engaged and have fun using educational games for learning.
Click here to see screenshots of our robust Language Arts program.
- 24 fun programs covering Language Arts topics such as Reading and Grammar.
- Pre-tests identify strengths as well as areas where additional help may be necessary.
- 522 engaging lessons.
- Computer-generated assignments are based on the child’s test results.
- Study Lessons, Practice, and Time-on-Task exercises.
- Post-tests measure the child’s achievement gains.
- Rewards such as Reading and Writing games and entertaining activities help to motivate students for a job well done.
- One management system allows you to track each child’s progress automatically.
- Grade-specific preparation for state and national standardized tests. |
The topic of society, science, and ethnicity can often be filled with complexities, but we are here to break them down into bite-sized pieces which you can easily comprehend.
According to the social sciences, ethnicity is often used to distinguish between the majority and minority groups. Once the distinction has been made, sociologists and other experts dig deep and study the relations between the two groups within our society, exploring social topics such as inequalities in education, housing, income, and so on.
Further discussion digs even deeper, and ethnicities are subdivided into further groups by cultural attributes such as country of origin, language, religious practices, etc. Although the two are often conflated, ethnicity is distinct from race. Race is used to distinguish people by their physical appearance, and this genetic component does not necessarily correspond to one’s ethnicity. Ethnicity is linked to cultural background, which is reinforced by cultural customs, as previously mentioned.
There is also a very subjective nature to the topic of ethnicity, especially when it is viewed through the lens of a different society. Take the example of second-generation Mexican Americans. Although they may be natural-born US citizens and tend to share the same religious views as the dominant American majority, they will always be defined ethnically as Mexican Americans. But if these same Mexican Americans were to visit Mexico, especially as first-timers, they would be viewed ethnically as Americans, since they speak with an American accent and are very much Americanized in terms of their clothing style, food preferences, musical tastes, etc.
The topic of ethnicity in science and society is becoming ever more important in today’s diverse society, but it can be tough to bring nuance to this complex topic. Therefore, it is important to understand what ethnicity means, how it relates to science and society, and to get involved in more discussions surrounding this issue. |
The Federal Reserve plays an important role in the life of every American (and some would argue, every human), but few know of its origins or the power it wields.
The Creation of the Federal Reserve
The Federal Reserve was conceived in 1910 on Jekyll Island, off the coast of Georgia. The men who came up with the Federal Reserve (well known and wealthy; see the list below) were invited to the meeting on Jekyll Island under great secrecy, and no one knew they were there. The meeting took place over nine days, during which the men discussed and hammered out the significant details of what would become the Federal Reserve.
Men Who Created the Federal Reserve
- Nelson W. Aldrich
- Abraham Piatt Andrew
- Frank Arthur Vanderlip
- Henry Pomeroy Davison
- Charles D. Norton – President of the First National Bank of New York
- Benjamin Strong – Vice President of Bankers Trust Company
- Paul Warburg – Director of Wells Fargo & Company
Frank Arthur Vanderlip arranged the meeting and instructed the men to arrive in utmost secrecy, going so far as to request that they take pre-arranged transport and use only first names as they made the trip. The secrecy was meant to keep the identity of those behind the concept hidden: had the public known who proposed the creation of the Federal Reserve, the thinking went, no legislation creating the Federal Reserve Bank would have passed Congress.
The Federal Reserve: The Fountain-Head of Money
Despite its name, the Federal Reserve is a completely private organization that works with the government to set monetary policy in the United States. The primary relationship between the Federal Reserve and the US government is through the Treasury. It issues Federal Reserve currency (electronic money), which is used to purchase treasury bonds and bills, effectively loaning the Federal Government money.
This process is interesting (frightening) because the Federal Reserve basically creates money out of thin air and lends it to the government. And how does the borrowed money get paid back? The government taxes the working public and businesses to pay for its liabilities.
How Does the Federal Reserve Help the Federal Government
Once the Federal Reserve came into being, it was given the power to create (print) federal reserve notes (money). It can generate money (out of thin air) and lend it out to Federal Reserve banks (there are 12), which in turn can either lend it directly to the government (by buying treasury notes) or lend it to other banks, which can then lend it to the government by buying treasury notes. The treasury notes pay interest, which is how the banks make money, and since the notes are backed by the United States government, they are considered to be safe investments.
The money that the Federal Reserve creates out of thin air produces interest, which the Federal Reserve then uses to operate and lend. Whatever is left over is routed into the US Treasury. Most of this money is made from charging the US government interest.
The US Government borrows money from the Federal Reserve, the Federal Reserve charges the Federal Government interest, and it then pays out the profits of that transaction to the US Treasury. The Federal Reserve system and its members and directors make money by creating something out of nothing, and then charging the US Government for lending it ‘nothing’.
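As a purely hypothetical worked example of the flow just described (the figures below are invented for illustration only and are not actual Federal Reserve data), suppose the system held $100 billion of treasury notes paying 3% interest and ran on $1 billion of operating costs per year:

\$100\text{ billion} \times 3\% = \$3\text{ billion interest earned}, \qquad \$3\text{ billion} - \$1\text{ billion operating costs} = \$2\text{ billion remitted to the US Treasury}

The point of the arithmetic is only to show the direction of the flow: interest paid by the government comes back to the Treasury after the system's costs are covered.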
A Comprehensive Video From the Expert on the Subject
Note: George Edward Griffin is the foremost expert on the Federal Reserve, its founding, and the consequences of its creation. This article is based on his presentations and expertise. |
Not everyone uses garbage as a teaching tool, but maybe we should. Melina Kuchinov, a teacher at Green Woods Charter School in Philadelphia, dumps a trash can filled with compost onto a plastic bag spread on her classroom carpet. For weeks, her first-grade students have collected and composted banana peels, fruit rinds, and vegetables. Now, they are about to learn what happens to their food after they’re done with lunch. They separate dirt from composting trash with Popsicle sticks and examine the bugs that are eating their leftovers, recording their observations as they dig. After several weeks of composting, says Kuchinov, the students look forward to seeing what’s happening inside the trash bin.
Most other schools, Kuchinov admits, would never let her compost in her classroom, much less examine the waste on the floor. But Green Woods Charter, an environmental education school, uses the outdoors as a classroom, even bringing it indoors sometimes. Across the country, environmental education schools and the growing movement to get children outdoors are challenging the current “indoor generation” of kids. “The interest has never been greater,” says Martin LeBlanc, Sierra Club national youth education director. “People have never been more aware of the fact that children are not getting involved with the outdoors.” Even as No Child Left Behind decreases the time allotted for environmental education and field trips, research shows that children who spend time outdoors are healthier, happier, and smarter. With the global warming crisis looming, children who spend time outdoors may also be the ones who help save the planet.
The Indoor Generation
Today’s children spend far more time indoors than out. The percentage of adolescents who participated in daily physical education decreased from 42 percent in 1991 to 28 percent in 2003. Worse, up to 13 percent of schools do not have scheduled daily recess at all. And when students are in class, they’re not learning about the environment. “One of the unintended consequences of NCLB,” says Brian Day, executive director of the North American Association for Environmental Education (NAAEE), “was that a whole set of things, environmental education included, got pushed out of the classroom because of the initiative’s overwhelming focus on reading and math.”
At home, children’s time is often structured. After finishing school, sports, homework, and dinner, many children opt for television, video games, or the computer over playing outside before bedtime. Children’s free time, says Cheryl Charles, Ph.D., president of the Children and Nature Network, is “out of balance right now, and it’s to the detriment of kids.” Because of sedentary, indoor lifestyles, doctors treat more and more children for diabetes, obesity, attention disorders, and depression. They see fewer broken bones but more repetitive stress injuries from computers and video games. With too much time indoors, children also lose a certain confidence and independence. “Children used to play outside on their own for hours at a time,” says the Sierra Club’s LeBlanc. “That just doesn’t happen anymore.”
Outdoor time can remedy many indoor-generation concerns. Playing outside is natural exercise, which reduces obesity and diabetes. Playing on fields or in woods stimulates cooperation, creativity, and problem-solving skills more than playing on asphalt. “Kids who play on naturalized schoolyards tend to have fewer antisocial interactions,” says David Sobel, director of teacher certification programs at Antioch University New England. Outdoor settings and green environments also have a calming effect on children with attention disorders; children as young as five showed a decrease in ADD symptoms when they were engaged with nature.
Getting outdoors also improves student test scores. According to a 2005 study released by the California Department of Education, children who learned in outdoor classrooms increased their science test scores by 27 percent. The gains also extend to reading and math. “If you use the environment as an integrating theme across the curriculum,” says Day, “test scores go way up.” It’s reading about the environment and then exploring it that makes a difference. “It’s not merely the act of going outdoors,” says Day, “but if you tie it back to the curriculum in an applied way, then things start to happen.”
More than Hugging a Tree
Instead of simply teaching an appreciation of nature, today’s environmental education programs concentrate on comprehension and preservation. “We’re really working hard on understanding the natural world and bringing hard science into [the curriculum],” says Anne Vilen, curriculum specialist with Evergreen Community Charter School, in Asheville, North Carolina. According to Vilen, Evergreen’s students do fieldwork in which they collect data and bring it back to the classroom to produce a learning product. The students also take on annual service learning projects. The combination of classwork, fieldwork, and service learning allows Evergreen’s students to move from appreciating the environment to maintaining and sustaining it.
Combating doom and gloom
Children have always been interested in saving their favorite animals, but the current discussion about global warming, deforestation, species extinction, and other crises is often accompanied by fearful predictions and pictures of melting ice caps and drowning polar bears. This can be scary for children. Sobel has dubbed this condition ecophobia, a fear for the planet’s future. According to Sobel, emphasizing doom and gloom too early produces an ecophobia in children that distances them from the natural world. Instead of trying to save the planet, they shut down and retreat from nature. The cure for ecophobia is teaching children to love the environment outside their windows. At Hawley Environmental School in Milwaukee, kindergarten students connect to the outdoor world before they learn about science and ecology. “We connect to the real world first,” says teacher Amy Fare.
Once children enjoy the outdoors and understand how the environment works, teaching them about environmental issues can empower them. Environmental education, says Day, “is not about teaching kids what to think, it’s about teaching them how to think so they can make their own decisions.”
Environmental crises are crucial topics to discuss, and children are able to have those conversations if teachers present the issues rationally. “Our students know about global warming,” says Andrew Slater, head of The Logan School, an environmental education school in Denver.
Learning about the environment out-of-doors indirectly teaches children that they can and should save the environment. Whatever Green Woods’ students do in the future, says Jean Wallace, the school’s principal, “they will have a deep understanding of how the environment works,” including how their choices impact the environment. After all, says Wallace, “it’s difficult to conserve and protect what you don’t understand.”
Teaching Kids Outdoors
As we realize the benefits of spending time outdoors, teaching outside in outdoor classrooms and on field trips is gaining popularity. After all, says the Children and Nature Network’s Charles, there isn’t any topic that we can’t teach outdoors. Cathy Bache, a teacher at the Secret Garden nursery school in Scotland, has taken this to heart. The school is completely outdoors, and the preschool students spend every day, all day, rain or shine, playing in the school’s gardens, streams, and woods.
Of course, you don’t have to spend all year outside to reap benefits. Green Woods Charter School uses the Schuylkill Center for Environmental Education’s 350 acres of forests and miles of trails to teach students about the pond, field, and forest ecosystems before branching out into watersheds, life sciences, and physical sciences. The environmental-focused curriculum, says Wallace, allows the school to teach in “what we believe as educators is the best way for children to learn—through multisensory experiences.” All that time outdoors turns Green Woods’ students into remarkable nature observers who can recognize patterns and pick out the smallest of details.
Troy Schlegel, a fifth-grade teacher at Oakwood Environmental Education Charter School in Oshkosh, Wisconsin, uses the outdoors as a classroom as much as possible. His students compose poems in a gazebo and learn about longitude and latitude using GPS devices on the school grounds. Each student also chooses a tree on the campus to care for and write about throughout the year. This helps them with their writing. “They focus on what they’re seeing,” says Schlegel, “and they write more and use a larger vocabulary than if they were sitting in a classroom.”
Using the outdoors as a classroom also capitalizes on children’s natural curiosity and enthusiasm. “Kids have a natural love for plants and animals,” says Sarah Taylor, principal of Sunnyside Environmental School in Portland, Oregon. “Teaching students outdoors meets their kinesthetic needs to be outside.”
Learning outdoors, even without fields, woods, or trails, has its benefits. An outdoor classroom doesn’t have to be elaborate, says Charles. It can be as simple as a small, square space with rocks, water, and plants. A 15-minute neighborhood walk or play break on natural surfaces can calm antsy minds and increase both children’s ability to concentrate and their creativity.
Leave no child inside
In the future, environmental education and time outdoors may be part of every student’s school day. During this year’s NCLB reauthorization, Congress has added the Leave No Child Inside Act. If passed, this Act will change how schools approach environmental education. “It will start the process of giving more flexibility to teachers to get kids learning outside,” says the Sierra Club’s LeBlanc. It will provide time, training, and funds for environmental education in public schools. With Leave No Child Inside, says the NAAEE’s Day, environmental literacy will be an important school subject and will be integrated across the curriculum. In the meantime, our students can benefit from the smallest environmental lessons—growing school gardens, creating outdoor spaces, or building compost piles—where they can see life happen outside of the classroom. |
NIH Research Matters
December 12, 2011
Antibodies Protect Against HIV in Mice
Researchers have devised a gene transfer technique in mice that, with a single injection, protects the immune cells that HIV targets. With further development, the approach may prove effective at helping to prevent HIV infection in people.
Most vaccines work by triggering the immune system to produce antibodies to help beat back infections. But a vaccine for HIV has been elusive. Proteins on the surface of HIV mutate rapidly, changing shape and preventing most antibodies from latching onto the virus.
Scientists have discovered several antibodies that can neutralize HIV. They've gained important insights into how these antibodies bind to the virus and why they're effective. But designing a vaccine that prompts the human immune system to generate such antibodies and mount an effective attack remains a difficult challenge.
A team of researchers led by Drs. Alejandro Balazs and David Baltimore at the California Institute of Technology decided to pursue a different strategy—one that doesn't require the immune system to generate antibodies. Their work was partly supported by NIH's National Institute of Allergy and Infectious Diseases (NIAID). They described the approach, called vectored immunoprophylaxis, in the November 30, 2011, advance online edition of Nature.
The scientists began with a virus capable of expressing high levels of full-length human antibodies when injected into muscle. They modified the virus by inserting the genes that code for an HIV-neutralizing antibody called b12. When the virus was injected into mouse leg muscle, the mice produced high levels of antibodies for at least a year.
The researchers next tested whether the technique could protect against HIV. Mice aren't susceptible to HIV, so the researchers used specialized mice with human CD4 cells, the immune cells that HIV targets and infects. After exposure to the virus, mice expressing b12 antibodies showed none of the CD4 cell loss that control animals did.
The researchers tested other antibodies known to neutralize a broad range of HIV strains. Another antibody called VRC01, which was identified by scientists at NIH, produced results similar to b12. Both antibodies protected CD4 cells against HIV doses 100-fold higher than the levels that would infect most animals. This protection would be well beyond that needed to prevent HIV infection in humans.
“Normally, you put an antigen or killed bacteria or something into the body, and the immune system figures out how to make an antibody against it,” Balazs explains. “We've taken that whole part out of the equation.”
The team is now developing a plan to test the method in human clinical trials. “If humans are like mice, then we have devised a way to protect against the transmission of HIV from person to person,” says Baltimore. “But that is a huge if, and so the next step is to try to find out whether humans behave like mice.”—by Harrison Wein, Ph.D.
- Insights into How HIV Evades Immune System
- Making Antibodies That Neutralize HIV
- Antibodies Protect Human Cells from Most HIV Strains
- Antibody Gets a Grip on HIV's Potential Weak Spot
- AIDS Information
|
America is a heavy place. More than two-thirds of adults and one-third of preschoolers are either overweight or obese, and similar proportions can be found in many western countries.
One significant cause of this problem is a diet that is rich in calories and poor in nutrient-rich vegetables. Our food intake habits are shaped significantly in our younger years, and youth tend not to be overly excited about vegetables. Give a child a choice between fries and carrots, and carrots don't fare so well. However, if we cannot get children to eat healthy foods now, we are likely staring down the barrel of obesity rates that will continue to rise.
Several interventions have been proposed to encourage youth to eat more vegetables, particularly in their school cafeterias. The school is an important setting, as children are minimally supervised there yet eat a significant share of their daily food on site. Many proposed solutions are expensive and difficult to implement, and they result in only modest behavior changes.
There is one noteworthy exception, and it is delightfully simple. In a study by Joe Redden and his colleagues at the University of Minnesota published in the Journal of the American Medical Association this month, the authors showed a simple way to get students in a lower-income school cafeteria to eat more vegetables. How? Pictures.
The authors placed pictures in cafeteria lunch trays. The photo showed veggies in one of the lunch plate compartments, suggesting that other students typically placed vegetables in that compartment. The results? Children put more veggies on their plates. The cost? About $12 and two hours of time, for 600 children.
Why was this so effective? Social psychology has long shown the power of social norms. We often do what others are doing, and conforming (or fitting in) is a goal especially powerful to children. If everyone starts listening to a certain band, many will follow. Of course, if smoking or underage drinking becomes common, it is hard to stop youth from doing that too. In this case, the mere illusion that vegetable choice is common among their peers increased the likelihood of including a veggie as a part of their lunch.
Convince students that carrots are good for their health (say, through more education), and still only a few choose to eat them. Youth already know that carrots are good for their health and smoking is bad, but they rarely act in accordance with their own best interests. Neither do adults. How can we expect youth to do any better than us? However, convince students that their friends all eat carrots, and carrots become more attractive. Since the strongest predictor of what we eat is what we put on our plate, most of the additional veggies were actually eaten, rather than tossed in the trash. |
Electrons rule our world, but not so long ago they were only an idea. This month marks the 120th anniversary of a profound and influential creation, the electron theory of Dutch physicist Hendrik Antoon Lorentz. His electron was not merely a hypothesized elementary particle; it was the linchpin of an ambitious theory of nature. Today physicists are accustomed to the notion that a complete description of nature can rise out of simple, beautiful equations, yet prior to Lorentz that was a mystic vision.
For most physicists the memorable peak of 19th-century physics is the theory of electrical and magnetic fields, capped by James Clerk Maxwell’s mathematical synthesis of 1864. Then a haze settles, until the 20th-century massifs of relativity and quantum theory poke through. That foggy folk history obscures the bridge between—itself a brilliant achievement, built through heroic labor.
Lorentz’s achievement was to purify the message of Maxwell’s equations—to separate the signal from the noise. The signal: four equations that govern how electrical and magnetic fields respond to electric charge and its motion, plus one equation that specifies the force those fields exert on charge. The noise: everything else!
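In modern vector notation (a present-day restatement for reference, not the notation Lorentz himself used), that signal amounts to the four microscopic Maxwell equations plus the force law that now bears Lorentz's name:

\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}

\mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B})

Here \rho and \mathbf{J} are the charge and current densities, and the last line gives the force on a point charge q moving with velocity \mathbf{v}.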
Now one had definite equations for the behavior of tiny bodies with specified mass and charge. Could one use those equations to rebuild the description of matter on a new foundation, starting from idealized “atoms” of charge? This was the burden of Lorentz’s electron theory. Starting with his 1892 paper, Lorentz and his followers used the electron theory to explain one property of matter after another—conduction of electricity and of heat, dielectric behavior, reflection and refraction of light, and more. Thus, they laid the groundwork for the subjects we now call electronics and materials science. |
Meaning Of Quoting, Paraphrasing, And Summarizing
A quotation is a phrase or passage taken from another writer or philosopher and set off by quotation marks. The phrase or sentence must be identical to the original; the writer needs to reproduce the whole passage word for word. In books and articles, writers use quotations related to their topic to convince the audience of a point. Alongside the quotation, it is important to give the name of the original writer.
Paraphrasing
Paraphrasing is a very common skill among writers. It means putting an entire original passage into the writer’s own words: rewording the passage and turning it into a new one. If the source is not credited, this can amount to a kind of plagiarism. A paraphrase is more detailed than a summary, but it is still briefer and shorter than the original source. If the writer wants to be brief and simply convey the essential message, paraphrasing is a good choice because it focuses on the core message of the passage.
Summarizing
Summarizing means writing the main points of a whole document in a very concise form. In the writing process, two parts are especially important: the introduction and the conclusion. In the conclusion, the writer mostly restates the main points and facts of the whole document. To summarize a document briefly, the writer should first read the whole document and then re-read it, circling the main points. Another way to summarize is to divide the material into stages and then explain each stage in one's own words. After gathering the gist of the document, the writer should develop a draft and revise it. One thing the writer should not forget is to compare the summary with the original source.
Reasons For Using Quotations, Paraphrasing, And Summary
The following are reasons why it is better for authors to use these techniques while writing:
- They help writers support their assertions and increase the author’s credibility.
- Summarizing lets the author make the document more brief and concise.
- Paraphrasing pushes writers to process ideas with their own minds instead of lifting wording from reference books and articles.
- Paraphrasing and summary help the audience understand the meaning more easily, because the wording is simpler than in the original source.
Encouraging Students To Use These Skills
Instructors should encourage students, throughout their academic careers, to get involved in quoting, paraphrasing and summarizing while writing subject assignments and projects. As with anything, there is a proper way of doing it: quotations, for example, can be placed at the top of the page or within the paragraphs, among other approaches.
Quoting, summarizing and paraphrasing are the tactics of good document writing, and skilled writers are the ones who exercise these tactics regularly. These strategies are essential for an effective document in terms of solid facts and figures and conciseness. A concise document takes less of the reader’s time, and time is important to both the audience and the writer, so it deserves real consideration. |
In much of the United States, immigrants from China banded together in self-enclosed communities, “Chinatowns,” in which they retained their language, culture, and social organization. In the South, however, the Chinese began to merge into the surrounding communities within a single generation’s time, quickly disappearing from historical accounts and becoming, as they themselves phrased it, a “mixed nation.”
Lucy M. Cohen’s Chinese in the Post-Civil War South traces the experience of the Chinese who came to the South during Reconstruction. Many of them were recruited by planters eager to fill the labor vacuum created by emancipation with “coolie” labor. The planters’ aims were obstructed in part by the federal government’s determination not to allow the South the opportunity to create a new form of slavery. Some Chinese did, however, enter into labor contracts with planters—agreements that the planters often altered without consultation or negotiation with the workers. With the Chinese intent upon the inviolability of their contracts, the arrangements with the planters soon broke down.
At the end of their employment on the plantations, some of the immigrants returned to China or departed for other areas of the United States. Still others, however, chose to remain near where they had been employed. Living in cultural isolation rather than in the Chinatowns of major cities, the immigrants soon no longer used their original language to communicate within the home; they adopted new surnames, so that even among brothers and sisters variations of names existed; they formed no associations or guilds specific to their heritage; and they intermarried, so that a few generations later their physical features were no longer readily observable in their descendants.
Based on extensive research in documents and family correspondence as well as interviews with descendants of the immigrants, this study by Lucy Cohen is the first history of the Chinese in the Reconstruction South—their rejection of the role that planter society had envisioned for them and their quick adaptation into a less rigid segment of rural southern society.
|
Flood control in the Netherlands
Flood control is an important issue for the Netherlands, as about two thirds of its area is vulnerable to flooding, while the country is among the most densely populated on Earth. Natural sand dunes and human-made dikes, dams and floodgates provide defense against storm surges from the sea. River dikes prevent flooding from water flowing into the country by the major rivers Rhine and Meuse, while a complicated system of drainage ditches, canals and pumping stations (historically: windmills) keep the low lying parts dry for habitation and agriculture. Water control boards are the independent local government bodies responsible for maintaining this system.
In modern times, flood disasters coupled with technological developments have led to large construction works to reduce the influence of the sea and prevent future floods.
- 1 History
- 2 Modern developments
- 3 Current situation and future
- 4 References
- 5 External links
Original geography of the Netherlands and terp building
The flood-threatened area of the Netherlands is essentially an alluvial plain, built up from sediment left by thousands of years of flooding by rivers and the sea. About 2000 years ago, before the intervention of humans, most of the Netherlands was covered by extensive peat swamps. The coast consisted of a row of coastal dunes and natural embankments which kept the swamps from draining but also from being washed away by the sea. The only areas suitable for habitation were on the higher grounds in the east and south and on the dunes and natural embankments along the coast and the rivers. In several places the sea had broken through these natural defenses and created extensive floodplains in the north. The first permanent inhabitants of this area were probably attracted by the sea-deposited clay soil, which was much more fertile than the peat and sandy soil further inland. To protect themselves against floods they built their homes on artificial dwelling hills called terpen or wierden (known as Warft or Hallig in Germany). Between 500 BC and 700 AD there were probably several periods of habitation and abandonment as the sea level periodically rose and fell.
Dike construction in coastal areas
The first dikes were low embankments of only a metre or so in height surrounding fields to protect the crops against occasional flooding. Around the 9th century the sea was on the advance again and many terps had to be raised to keep them safe. Many single terps had by this time grown together as villages. These were now connected by the first dikes.
After 1000 AD the population grew, which meant there was a greater demand for arable land but also that there was a greater workforce available and dike construction was taken up more seriously. The major contributors in later dike building were the monasteries. As the largest landowners they had the organization, resources and manpower to undertake the large construction. By 1250 most dikes had been connected into a continuous sea defense.
The next step was to move the dikes ever-more seawards. Every cycle of high and low tide left a small layer of sediment. Over the years these layers had built up to such a height that they were rarely flooded. It was then considered safe to build a new dike around this area. The old dike was often kept as a secondary defense, called a sleeper dike.
A dike couldn't always be moved seawards. Especially in the southwest river delta it was often the case that the primary sea dike was undermined by a tidal channel. A secondary dike was then built, called an inlaagdijk. When the seaward dike collapsed, this secondary inland dike became the primary one. Although the redundancy provided security, the land between the first and second dike was lost; over the years these losses could become significant.
Taking land out of the cycle of flooding by putting a dike around it prevents it from being raised by silt left behind after a flood. At the same time the drained soil consolidates and peat decomposes, leading to land subsidence. In this way the difference between the water level on one side of the dike and the land level on the other grew. While floods became rarer, when a dike did overflow or was breached the destruction was much larger.
The construction method of dikes has changed over the centuries. Popular in the Middle Ages were 'wierdijken', earth dikes with a protective layer of seaweed. An earth embankment was cut vertically on the sea-facing side. Seaweed was then stacked against this edge, held into place with poles. Compression and rotting processes resulted in a solid residue that proved very effective against wave action and they needed very little maintenance. In places where seaweed was unavailable other materials such as reeds or wicker mats were used.
Another system, used widely and for a long time, was a vertical screen of timbers backed by an earth bank. Technically these vertical constructions were less successful, as vibration from crashing waves and the washing out of the dike foundations weakened the dike.
Much damage was done to these wood constructions with the arrival of the shipworm (Teredo navalis), a bivalve thought to have been brought to the Netherlands by VOC trading ships, which ate its way through Dutch sea defenses around 1730. The change was made from wood to stone for reinforcement. This was a great financial setback, as there is no naturally occurring rock in the Netherlands and it all had to be imported from abroad.
Current dikes are made with a core of sand, covered by a thick layer of clay to provide waterproofing and resistance against erosion. Dikes without a foreland have a layer of crushed rock below the waterline to slow wave action. Up to the high waterline the dike is often covered with carefully laid basalt stones or a layer of tarmac. The remainder is covered by grass and maintained by grazing sheep. Sheep keep the grass dense and compact the soil, in contrast to cattle.
Developing the peat swamps
At about the same time as the building of dikes, the first swamps were made suitable for agriculture by colonists. By digging a system of parallel drainage ditches, water was drained from the land so that grain could be grown. However, peat settled much more than other soil types when drained, and land subsidence resulted in developed areas becoming wet again. Cultivated lands which were at first primarily used for growing grain thus became too wet, and the switch was made to dairy farming. A new area behind the existing field was then cultivated, heading deeper into the wild. This cycle repeated itself several times until the different developments met each other and no further undeveloped land was available. All land was then used for grazing cattle.
Because of the continuous land subsidence it became ever more difficult to remove excess water. The mouths of streams and rivers were dammed to prevent high water levels from flowing back upstream and overflowing cultivated lands. These dams had a wooden culvert equipped with a valve, allowing drainage but preventing water from flowing upstream. The dams, however, blocked shipping, and the economic activity generated by the need to transship goods caused villages to grow up near the dams; famous examples are Amsterdam (dam in the river Amstel) and Rotterdam (dam in the Rotte). Only in later centuries were locks developed to allow ships to pass.
Further drainage could only be accomplished after the development of the polder windmill in the 15th century. The wind-driven water pump has become one of the trademark tourist attractions of the Netherlands. The first drainage mills, using a scoop wheel, could raise water at most 1.5 metres. By combining mills the pumping height could be increased. Later mills were equipped with an Archimedes' screw, which could raise water much higher. The polders, now often below sea level, were kept dry by mills pumping water from the polder ditches and canals to the boezem, a system of canals and lakes connecting the different polders and acting as a storage basin until the water could be let out to river or sea, either by a sluice gate at low tide or using further pumps. This system is still in use today, though the drainage mills have been replaced first by steam and later by diesel and electric pumping stations.
The growth of towns and industry in the Middle Ages resulted in an increased demand for dried peat as fuel. First all the peat down to the groundwater table was dug away. In the 16th century a method was developed to dig peat below water, using a dredging net on a long pole. Large-scale peat dredging was taken up by companies, supported by investors from the cities. These undertakings often devastated the landscape, as agricultural land was dug away and the leftover ridges, used for drying the peat, collapsed under the action of waves. Small lakes were created which quickly grew in size, every increase in surface water giving the wind more leverage to attack more land. It even led to villages being lost to the waves of human-made lakes. The development of the polder mill gave the option of draining the lakes. In the 16th century this work was started on small, shallow lakes, continuing with ever larger and deeper lakes, though it wasn't until the nineteenth century that the most dangerous of the lakes, the Haarlemmermeer near Amsterdam, was drained using steam power. Drained lakes and new polders can often be easily distinguished on topographic maps by their regular division pattern compared to their older surroundings. Millwright and hydraulic engineer Jan Leeghwater became famous for his involvement in these works.
Control of river floods
The first large construction works on the rivers were conducted by the Romans. Nero Claudius Drusus was responsible for building a dam in the Rhine to divert water from the Waal branch to the Nederrijn, and possibly for connecting the river IJssel, previously only a small stream, to the Rhine. Whether these works were intended as flood control measures or just for military defense and transportation purposes is unclear.
The first river dikes appeared near the river mouths in the 11th century, where incursions from the sea added to the danger from high water levels on the river. Local rulers dammed branches of rivers to prevent flooding on their lands (Graaf van Holland, ca. 1160, Kromme Rijn; Floris V, 1285, Hollandse IJssel), only to cause problems for others living further upstream. Large-scale deforestation upstream made river levels ever more extreme, while the demand for arable land led to more land being protected by dikes, leaving less space for the river bed and so causing even higher water levels. Local dikes built to protect villages were connected to create a ban dike to contain the river at all times. These developments meant that whereas the regular floods had been just a nuisance to the first inhabitants of the river valleys, the later, incidental floods that occurred when dikes burst were much more destructive.
The 17th and 18th centuries were a period of many infamous river floods resulting in much loss of life. They were often caused by ice dams blocking the river. Land reclamation works, large willow plantations and building in the winter bed of the river all worsened the problem. In addition to clearing the winter bed, overflows (Dutch: overlaten) were created. These were intentionally low dikes over which excess water could be diverted downstream. The land in such a diversion channel was kept clear of buildings and obstructions. As this so-called green river could essentially only be used for grazing cattle, it was in later centuries seen as a wasteful use of land. Most overflows have now been removed, the focus shifting instead to stronger dikes and more control over the distribution of water across the river branches. To achieve this, canals such as the Pannerdens Kanaal and Nieuwe Merwede were dug.
A committee reported in 1977 on the weakness of the river dikes, but there was too much resistance from the local population against demolishing houses and straightening and strengthening the old meandering dikes. It took the flood threats of 1993 and again 1995, when over 200,000 people had to be evacuated and the dikes only just held, to put plans into action. The risk of a river flooding has now been reduced from once every 100 years to once every 1,250 years. Further works in the Room for the River project are being carried out to give the rivers more space to flood, thereby reducing flood heights.
Water control boards
The first dikes and water control structures were built and maintained by those directly benefiting from them, mostly farmers. As the structures got more extensive and complex, councils were formed from people with a common interest in the control of water levels on their land, and so the first water boards began to emerge. These often controlled only a small area, a single polder or dike. Later they merged, or an overall organization was formed when different water boards had conflicting interests. The original water boards differed greatly from each other in organisation, power and the area they managed. The differences were often regional and dictated by differing circumstances, whether they had to defend a sea dike against a storm surge or keep the water level in a polder within bounds. In the middle of the twentieth century there were about 2700 water control boards. After many mergers there are currently 27 water boards left. Water boards hold separate elections, levy taxes and function independently from other government bodies.
The dikes were maintained by the individuals who benefited from their existence, every farmer having been designated part of the dike to maintain, with a three-yearly viewing by the water board directors. The old rule "Whom the water hurts, he the water stops" (Dutch: Wie het water deert, die het water keert) meant that those living at the dike had to pay and care for it. This led to haphazard maintenance and it is believed that many floods would not have happened or would not have been as severe if the dikes had been in better condition. Those living further inland often refused to pay or help in the upkeep of the dikes though they were just as much affected by floods, while those living at the dike itself could go bankrupt from having to repair a breached dike.
Rijkswaterstaat (English: Directorate General for Public Works and Water Management) was set up in 1798 under French rule to put water control in the Netherlands under a central government. Local water boards, however, were too attached to their autonomy, and for most of the time Rijkswaterstaat has worked alongside them. Rijkswaterstaat has been responsible for many major water control structures and later became, and still is, involved in building railroads and highways as well.
Water boards may also try new experiments, such as the sand engine (Zandmotor) off the coast of South Holland.
Over the years there have been many storm surges and floods in the Netherlands. Some deserve special mention as they have particularly changed the contours of the country.
A series of devastating storm surges, more or less starting with the First All Saints' Flood (Dutch: Allerheiligenvloed) in 1170, washed away a large area of peat marshes, enlarging the Wadden Sea and connecting the previously existing Lake Almere in the middle of the country to the North Sea, thereby creating the Zuiderzee. The Zuiderzee would itself cause much trouble until the building of the Afsluitdijk in 1933.
Several storms starting in 1219 created the Dollart from the mouth of the river Ems. By 1520 the Dollart had reached its largest size. Reiderland, containing several towns and villages, was lost. Much of this land was later reclaimed.
In 1421 the St. Elizabeth's flood caused the loss of 'De Grote Waard' in the south west of the country. The digging of peat near the dikes for salt production, in particular, and neglect during a civil war caused the dikes to fail. The flood created the Biesbosch, now a valued nature reserve.
The more recent floods of 1916 and 1953 gave rise to the building of the Afsluitdijk and the Delta Works, respectively.
Flooding as military defense
By flooding certain areas on purpose, a military defensive line could be created. In case of an advancing enemy army, the area was inundated with about 30 cm (1 foot) of water, too shallow for boats but deep enough to make advance on foot difficult, hiding underwater obstacles such as canals, ditches and purpose-built traps. Dikes crossing the flooded area and other strategic points were protected by fortifications. The system proved successful on the Hollandic Water Line in the rampjaar (disaster year) of 1672 during the Third Anglo-Dutch War, but was overcome in 1795 because of heavy frost. It was also used with the Stelling van Amsterdam, the Grebbe Line and the IJssel Line. The advent of heavier artillery and especially airplanes has made this strategy largely obsolete.
Technological development in the twentieth century meant that larger projects could be undertaken to further improve safety against flooding and to reclaim large areas of land. The most important are the Zuiderzee Works and the Delta Works. By the end of the twentieth century all sea inlets had been closed off from the sea by dams and barriers. Only the Westerschelde needs to remain open, for shipping access to the port of Antwerp. Plans to reclaim (parts of) the Wadden Sea and the Markermeer were eventually called off because of the ecological and recreational value of these waters.
The Zuiderzee Works (Zuiderzeewerken) are a human-made system of dams, land reclamation and water drainage works. The basis of the project was the damming off of the Zuiderzee, a large shallow inlet of the North Sea. This dam, called the Afsluitdijk, was built in 1932-33, separating the Zuiderzee from the North Sea. As a result, the Zuiderzee became the IJsselmeer (IJssel Lake).
Following the damming, large areas of land were reclaimed in the newly freshwater lake by means of polders. The works were carried out in several stages from 1920 to 1975. Engineer Cornelis Lely played a major part in the design of the project and, as a statesman, in the authorisation of its construction.
A study done by Rijkswaterstaat in 1937 showed that the sea defenses in the southwest river delta were inadequate to withstand a major storm surge. The proposed solution was to dam all the river mouths and sea inlets, thereby shortening the coast. However, because of the scale of this project and the intervention of the Second World War, its construction was delayed and the first works were only completed in 1950. The North Sea flood of 1953 gave a major impulse to speed up the project. In the following years a number of dams were built to close off the estuary mouths. In 1976, under pressure from environmental groups and the fishing industry, it was decided not to close off the Oosterschelde estuary with a solid dam but instead to build the Oosterscheldekering, a storm surge barrier which is only closed during storms. It is the most well-known (and most expensive) dam of the project. A second major hurdle was the Rijnmond area: a storm surge through the Nieuwe Waterweg would threaten about 1.5 million people around Rotterdam, yet closing off this river mouth would be very detrimental to the Dutch economy, as the Port of Rotterdam, one of the biggest seaports in the world, uses this river mouth. The solution, with economic factors in mind, was the Maeslantkering, completed in 1997: a set of two swinging doors that can shut off the river mouth when necessary but are usually open. The Maeslantkering is forecast to close about once per decade; as of January 2012 it had closed only once, in 2007. Its construction in 1997 completed the project.
Current situation and future
The current sea defenses are stronger than ever but experts warn that complacency would be a mistake. New calculation methods revealed numerous weak spots. A theoretical sea level rise (made more extreme by global warming) and continuing land subsidence might make further upgrades to the flood control and water management infrastructure necessary.
The sea defenses are continuously being strengthened and raised to meet the safety norm of a flood chance of once every 10,000 years for the west, the economic heart and most densely populated part of the Netherlands, and once every 4,000 years for less densely populated areas. The primary flood defenses are tested against this norm every 5 years. In 2010 about 800 km of dikes out of a total of 3,500 km failed to meet the norm. This does not mean there is an immediate flooding risk; it is the result of the norm becoming stricter as scientific research on, for example, wave action and sea level rise advances.
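As a rough back-of-the-envelope illustration of what such a norm implies (assuming, for simplicity, independent years and a constant annual exceedance probability p; this is not an official risk calculation), the chance of at least one norm-exceeding event over a horizon of N years is:

P(\text{at least one exceedance in } N \text{ years}) = 1 - (1 - p)^{N}

With p = 1/10{,}000 and N = 100 years this gives 1 - (1 - 0.0001)^{100} \approx 1.0\%, while the 1-in-4,000 norm works out to roughly 2.5\% over the same century.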
The amount of coastal erosion is compared against the so-called basic coastline (Dutch: BasisKustLijn), the average coastline in 1990. Sand replenishment is used where beaches have retreated too far. About 12 million m3 of sand are deposited yearly on the beaches and below the waterline in front of the coast.
The Stormvloedwaarschuwingsdienst (SVSD, English: storm surge warning service) makes a water level forecast in case of a storm surge and warns the responsible parties in the affected coastal districts. These can then take appropriate measures depending on the expected water levels, such as evacuating areas outside the dikes, closing barriers and, in extreme cases, patrolling the dikes during the storm.
The Second Delta Committee, or Veerman Committee (officially the State Committee for Sustainable Coastal Development, Dutch: Staatscommissie voor Duurzame Kustontwikkeling), gave its advice in 2008. It expects a sea level rise of 65 to 130 cm by the year 2100. Among its suggestions are:
- to increase the safety norms tenfold and strengthen dikes accordingly,
- to use sand replenishment to broaden the North Sea coast and allow it to grow naturally,
- to use the lakes in the southwest river delta as river water retention basins,
- to raise the water level in the IJsselmeer to provide freshwater.
These measures would cost approximately 1 billion euros per year.
Room for the River
Some think global warming in the 21st century might result in a rise in sea level which could overwhelm the measures the Netherlands has taken to control floods. The Room for the River project allows for periodic flooding of indefensible lands. In such regions residents have been removed to higher ground, some of which has been raised above anticipated flood levels.
|
After viewing each article, website, or video in this tutorial, each student will be able to correctly answer a series of questions based on these sources.
This tutorial is designed to help you learn more about the Holocaust and better understand the perspective of the Jews persecuted before and during World War II. After reviewing each source, you should be able to answer all the questions posed to you in this tutorial, since these sources simply give you more information about topics we have discussed in class.
Reading Standards for Literacy in History/Social Studies 6–12
Grades 9 & 10- Standard number 1:
Cite specific textual evidence to support analysis of primary and secondary sources, attending
to such features as the date and origin of the information.
Grades 9 &10- Standard number 4:
Determine the meaning of words and phrases as they are used in a text, including vocabulary describing political, social, or economic aspects of history/social science.
Source: "California Common Core State Standards." CCSS for ELA - Content (CA Dept of Education) (2013): 81. California State Board of Education, Mar. 2013. Web. 12 Dec. 2014. .
This article gives a brief introduction and overview of the Holocaust, what it was, as well as when and where it occurred.
Source: United States Holocaust Memorial Museum. “Introduction to the Holocaust.” Holocaust Encyclopedia. http://www.ushmm.org/wlc/en/article.php?ModuleId=10005143. Accessed on December 14, 2014.
This is a diary entry written by a young man named Yarden in September, 1939. He talks about the great persecution that the Jews suffered before being placed into concentration camps.
Source: Michal Unger, The Last Ghetto: Life in the Lodz Ghetto, Yad Vashem, 1992, p. 40.
This short essay question is to be answered after you have viewed the entire tutorial. You must write at least five sentences in response to this question and submit your response to me via email.
After viewing the article, diary entry, and video, do you think there is anything that the Jews could have done to try and prevent the Holocaust from occurring? Please explain your reasoning on why you do or do not think there was anything they could have done. |
8.2.2.b Analyze and solve pairs of simultaneous linear equations.
8.2.2.b.i Explain that solutions to a system of two linear equations in two variables correspond to points of intersection of their graphs, because points of intersection satisfy both equations simultaneously.
8.2.2.b.ii Solve systems of two linear equations in two variables algebraically, and estimate solutions by graphing the equations. Solve simple cases by inspection.
8.2.2.b.iii Solve real-world and mathematical problems leading to two linear equations in two variables.
8.2.3 Graphs, tables and equations can be used to distinguish between linear and nonlinear functions
8.2.3.a Define, evaluate, and compare functions.
8.2.3.a.i Define a function as a rule that assigns to each input exactly one output.
8.4.1.f Given two similar two-dimensional figures, describe a sequence of transformations that exhibits the similarity between them.
8.4.1.g Use informal arguments to establish facts about the angle sum and exterior angle of triangles, about the angles created when parallel lines are cut by a transversal, and the angle-angle criterion for similarity of triangles. |
A treatment for patients with blood-related cancers and certain blood disorders, stem cell transplantation involves replacing a patient’s unhealthy blood-forming cells with healthy ones. Patients are first treated with chemotherapy, and sometimes radiation therapy, to wipe out or diminish the bone marrow and lymph nodes where cancers such as leukemia and lymphoma form. In an “allogeneic” transplant, they then receive an infusion of healthy stem cells from a compatible donor to replenish their blood-forming elements. The infused cells also provide an anticancer effect.
Traditionally, it was necessary for donors and recipients to have closely matched tissue types. Tissue type is determined by human leukocyte antigens (HLA), proteins on the surface of the body’s cells. They inform the immune system whether cells are to be left alone or are foreign or diseased, and should be eliminated. To reduce the risk that the transplant will result in an attack on normal, healthy tissue, doctors seek donors whose HLA type is as close as possible to the recipient’s.
To broaden the pool of potential donors, researchers in the early 2000s developed a modified form of stem cell transplant, known as a haploidentical transplant, in which a healthy first-degree relative – a parent, sibling, or child – can often serve as a donor. Instead of a near-total HLA match, donors for a haploidentical transplant need be only a 50 percent match to the recipient.
In addition to making it easier to find a suitable donor, haploidentical transplants can often be performed more promptly than traditional unrelated donor transplants. Relatives may be able to make a donation on short notice, which may be less likely for unrelated donors, particularly if they live in other countries.
For donors and patients alike, the procedure for a haploidentical transplant is much the same as for a standard allogeneic transplant. The main difference is that several days after the transplant, patients receive a very high dose of the chemotherapy drug cyclophosphamide (Cytoxan). This causes a sharp decline in active T cells, key contributors to graft-versus-host disease (GVHD), a potential side effect of transplantation in which donated immune system cells mount an attack on the body.
Success rates for haploidentical transplants are similar to those for conventional transplants. Because haploidentical transplants are a relatively new procedure, it’s uncertain whether their anticancer effect persists over many years. The answer may become clearer in a clinical trial organized through the Blood Marrow Transplantation Clinical Trials Network comparing haploidentical stem cells and another source of partially matched donor cells – umbilical cord blood – in patients with certain types of leukemia and lymphoma.
Learn more from the Stem Cell Transplantation program at Dana-Farber/Brigham and Women’s Cancer Center. |
Over the past decade, we've seen lower interest rates, a broken housing market, and virtually unlimited options when it comes to personal loans. The end result is that Americans are borrowing money at a record pace; consumer debt is on the rise.
In this article, we're going to discuss the role debt plays in fueling the U.S. economy. To do that, we're first going to talk about concepts such as the money multiplier, and how this economic theory is related to consumer debt. From there, we'll be able to explain both the positive and negative effect this can have in America and on consumers.
In order to know whether or not rising debt is a problem in America, it's important to understand some basic economic rules. For example, when a consumer buys something, the money spent doesn't simply stop at that store.
In fact, experts believe that around 70% of the U.S. gross domestic product (a common measure of economic growth) is derived from consumer spending. This means even relatively small changes in spending habits can have fairly large effects on the health of the economy. Perhaps the best way to understand debt's effect on the economy is explained through an example.
Let's say a consumer decides they want to buy a new car or truck. To buy the truck, the consumer is going to increase their debt load. They're going to borrow money.
When the purchase is finalized, the money doesn't stop there; some of it keeps flowing. The salesperson collects a commission, and the dealership buys another car from the factory. The salesperson now has some extra money to spend. The factory pays its workers to produce more trucks. They purchase parts from their suppliers, who pay their workers to produce those parts...
The salesperson and the factory worker need to pay income taxes, and they may decide to save some money instead of buying another product. But the example has served its purpose. By borrowing money to purchase the truck, the original consumer has transferred wealth to others. That original loan has resulted in a multiplier effect, and the economic boom continues as money changes hands.
The money multiplier concept is often associated with Keynesian economic theory, and has been the rationale for using increased government spending or tax cuts to stimulate the U.S. economy. The example given earlier follows this same theory:
Increased consumer spending is followed by an increase in business revenues. Those revenues result in more jobs which once again result in more spending - and so the cycle continues.
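The arithmetic behind this cycle is a geometric series. The sketch below is a simplified illustration rather than a formal model; the truck price and the marginal propensity to consume (MPC) are invented for the example.

```python
# Total spending generated by an initial purchase when each recipient
# re-spends a fixed share (the MPC) of the money they receive.
def total_spending(initial_purchase: float, mpc: float, rounds: int = 1000) -> float:
    total, injection = 0.0, initial_purchase
    for _ in range(rounds):      # sum the geometric series of re-spending rounds
        total += injection
        injection *= mpc         # taxes and saving leak out; only the MPC share is re-spent
    return total

price_of_truck = 40_000
mpc = 0.8                        # 80 cents of each dollar received is re-spent (illustrative)
print(total_spending(price_of_truck, mpc))   # ~200,000
print(price_of_truck / (1 - mpc))            # closed form: a 1/(1-MPC) = 5x multiplier
```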
The reason it's important to understand the money multiplier is that the availability of personal loans and consumer debt can have a large effect on the economy. When the government is trying to jump-start a sluggish economy, one of the many tools it can use is lowering interest rates.
Lower rates make borrowing easier for consumers, and that means more money flows into and through the economy. Most consumers are more concerned with the size of their monthly loan payments than how much money they're actually borrowing. Lower interest rates translate into an increase in the ability of consumers to handle a larger debt load, which pumps more money into the system.
This discussion helps us better understand whether an increase in debt load is good or bad for consumers and the economy. On the one hand, it helps economic growth. On the other hand, many experts question how much further personal debt can expand without triggering a rise in bankruptcies.
While personal debt has been on the rise, the vast majority of that increase is associated with home mortgages. As consumers borrowed more money to buy bigger homes, mortgages have been used to purchase an appreciating asset. As home values increase, this translates into an increase in consumer wealth.
This is a good combination: rising debt, followed by an increase in wealth. The increase in wealth means a lower chance of default on a loan. After all, homeowners with large mortgages were quickly building significant amounts of equity in their homes. If they got into trouble paying back their loans, they could always pull some of the equity out of their home, or even sell their homes at a profit to pay off the money owed to creditors and lenders.
In fact, this was a "healthy" situation until 2007, when the housing market seemed to slow down. This slowdown resulted in a crisis in the sub-prime mortgage market. Borrowers with weaker credit histories, lower household income, and relatively few assets could no longer pay back their outstanding loans. This resulted in a quick rise in foreclosures and bankruptcies; eventually triggering what is known as the Great Recession.
One of the statistics published by the Federal Reserve is the Debt Service Ratio, or DSR. The household DSR is an estimate of the ratio of monthly debt payments to disposable personal income. Payments included in this ratio consist of the estimated required payments on outstanding mortgage and consumer debt.
The DSR is an indicator of the debt Americans are carrying relative to their disposable income. The higher the ratio, the larger the burden carried by the consumer. In the fourth quarter of 2007, this measure stood at 13.46, which was the highest ratio in the 27-year history of the indicator. This means that for every $100 of disposable income, nearly $14.00 is used just to make the household's mortgage and debt payments. As of the first quarter of 2016, the DSR was 10.02, which is a decrease of 24% from the 2007 high.
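The DSR calculation itself is simple division; the household figures below are hypothetical, chosen only to reproduce the ratios quoted above.

```python
# Debt Service Ratio: required monthly mortgage and consumer debt payments
# as a percentage of monthly disposable (after-tax) income.
def debt_service_ratio(monthly_debt_payments: float, monthly_disposable_income: float) -> float:
    return 100 * monthly_debt_payments / monthly_disposable_income

# Hypothetical household with $5,000 of monthly disposable income:
print(debt_service_ratio(673, 5_000))   # 13.46, the late-2007 peak cited above
print(debt_service_ratio(501, 5_000))   # 10.02, roughly the Q1 2016 level
```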
For over 25 years, the amount of money borrowed was rising, but much of it was used to buy larger homes. The housing market itself helped support this increase. Home prices were appreciating, and helping to build consumer wealth.
But in late 2007, housing prices started reversing themselves, and interest rates began to rise. Suddenly buying a larger home no longer guaranteed an abundant supply of home equity to support the out-of-control spending habits developed by some consumers. Higher interest rates only added to this problem.
What worries some economists is that consumers can no longer depend on relatively inexpensive home equity loans to satisfy their overzealous purchasing habits. The fear is that consumers will start to depend on more risky, and expensive, sources of money such as credit cards.
This type of debt is unsecured, meaning there isn't an asset (collateral) backing the loan that can be sold to repay the money owed. This type of borrowing can be quickly followed by a rapid rise in bankruptcies. Only time will tell whether or not these "doom and gloom" predictions made by some economists will come true.
About the Author - Risks of Rising Consumer Debt (Last Reviewed on November 28, 2016) |
Feeding a Hummingbird That's Really a Moth
Stephen W. Kress
PLANTING PLENTY of nectar-producing flowers can lure hummingbirds into your yard. But while on the lookout for what John James Audubon called "glittering garments of the rainbow," you might spot a hovering hummingbird moth instead. In a classic example of convergent evolution, these swept-winged, stout-bodied moths dine on flower nectar and pollinate flowers in a way remarkably similar to hummingbirds. Like their avian namesakes, hummingbird moths can hover seemingly motionless, while tapping nectar reserves with their long, coiled tongue. Some species even have a green back, further adding to their hummingbird resemblance. But unlike the birds, these moths are late risers. They don't stir into action until the sun warms their wing muscles.
Hummingbird moths are members of the sphinx moth family. This enormously varied group gets its family name from the way the caterpillars can pull their front ends up into a sphinxlike pose when disturbed. While most sphinx moths visit flowers at night, hummingbird moths—also called clearwings because of the transparent patches on their wings—frequent gardens in full daylight.
Hummingbird moth caterpillars feed mainly on honeysuckle, hawthorn, snowberry and viburnum, but different species have different tastes. The caterpillars transform into pupae, which are enclosed in well-hidden, dense brown cocoons under fallen leaves. Some pupae spend winter there, transforming into flying adults the following spring. When more than one brood is produced—as happens in southern climates—a second set of adults will emerge in late summer or fall.
Since the moths' appearance varies by location, differentiating between the species is sometimes a struggle. Taxonomists generally agree that North America has four hummingbird moth species, which cover most of the country. Each species' wings have the characteristic transparent patches, and the males have a dramatic anal tuft.
Gardeners don't have to plant special flowers to attract the adults, but the larvae do require specific shrubs for food. If you're interested in enticing these pollinators into your yard, first find out which species range in your area, then grow some of the caterpillars' favorite plants. Following are some tips to attract and recognize the species.
Common clearwing (Hemaris thysbe): The largest and most common hummingbird moths, common clearwings range from Newfoundland to Florida, across to Texas, north along the eastern Great Plains, west to southern British Columbia, then north to southern Alaska. Look for narrow or broad bands on the abdomen and a muddy-yellow or brown thorax. Males have a distinctive black tuft at the tip of their abdomen. Plant hawthorn, honeysuckle, snowberry, cherry and plum trees or shrubs to feed their caterpillars.
Snowberry clearwing (Hemaris diffinis): Ranging from Nova Scotia to Florida, across to California and north to British Columbia, snowberry clearwings have more black markings on the thorax, abdomen and legs than other clearwings. Plant dogbane, snowberries and honeysuckle, especially dwarf bush, to feed their caterpillars.
California clearwing (Hemaris senta): Found in Colorado, New Mexico, Utah, Wyoming and west to California and north to British Columbia, California clearwings have brownish-olive or olive-green heads and thoraxes. The abdomen, which has a broad yellow band, is black or olive-green above and yellow below. Their wings have a very narrow brown border and the clear parts of the wings have a steel-blue luster. Little is known about the caterpillars of this moth.
Graceful clearwing (Hemaris gracilis): The least common of the four species, graceful clearwings range from Nova Scotia to central Florida along the East Coast and west through New England to Michigan. Look for a pair of red-brown bands on the sides of the thorax to distinguish them from common clearwings. The thorax varies from green to yellow-green and sometimes brown with white underneath. They have a pale red abdomen. Foods are probably similar to those eaten by other species, but little is known about the caterpillars of this moth.
Ornithologist Stephen W. Kress is the director of Project Puffin.
Find out how you can turn your garden into a Certified Wildlife Habitat. |
Although the repeal of the Corn Laws is one of the most studied questions in 19th-century tariff politics, its historical interpretation is still disputed today. The repeal of the Corn Laws is historically relevant because of “its alleged significance as an indication of the waning of aristocratic domination of British politics” (McKeown 1989: 353). Historiography has to solve the following empirical puzzle: in 1846 Charles Villiers (a leading member of the Anti-Corn Law League in parliament) proposed total and immediate repeal of the Corn Laws, just as he had in preceding years. The motion was overwhelmingly defeated. Yet, only a few weeks later, Peel laid his motion for repeal before the House. By 16 May, Peel’s version of repeal had passed its third reading (Brawley 2006: 467). Sir Robert Peel counted on more than 300 votes for passage of repeal in 1846, implying a winning margin of 90 votes (McKeown 1989: 356). However, this shift in political support began as early as 1842. Moreover, from the beginning of their implementation the Corn Laws were not without controversy in the Tory Party itself (section 5). After sketching the historical debate (section 2) and the implementation of the Corn Laws through the Importation Act 1815 (section 3), this essay analyses to what extent external shocks (4), theoretical developments (5) or interest groups (6) contributed most to the policy reform in 1846. Another possible cause of the repeal might be found in a different understanding of the adjustment process of repeal, changing the interests of landowners (7). Finally, this essay concludes that several long-term developments, the increasing fear of a new Irish Famine, as well as the changing nature of landowners’ interests can explain why the Corn Laws were repealed. Furthermore, Peel as a person plays a role insofar as he was open to new evidence and can be considered an undogmatic politician: a typical representative of British Empiricism.
2. THE HISTORICAL DEBATE
Originally the Corn Laws were designed to protect cereal producers in the United Kingdom of Great Britain and Ireland against competition from less expensive foreign imports between 1815 and 1846. The high import duties, which were imposed by the Importation Act of 1815 (repealed by the Importation Act of 1846), prevented the import of grain from other countries. The Corn Laws improved the economic situation of landowners by increasing the price of grain and inducing cultivation on less productive land; land rents thus increased. Since grain was the major consumption good of labourers, the high price of grain necessitated an increase in the nominal wages of labourers and thus reduced the profits of manufacturers (Irwin 1996: 94). The essential matter was therefore food prices, since the price of grain influenced the price of the most important food staple: bread.
3. THE IMPORTATION ACT OF 1815
The Importation Act of 1815 thus “provided protection to British agriculture, primarily benefiting the landlords who dominated parliament” (Irwin 1989: 41). Landowners were a long-established class who were heavily represented in Parliament. The Tory Party, dominated by the landowning aristocracy, and Peel supported protection of agriculture in the form of the restrictive Corn Law of 1815. Additionally, the political representation of the landowners’ interests was reinforced by Malthus, with his concern about dependence on foreign supply (Irwin 1996: 95-97). After having sketched the political and economic logic of the implementation of the Corn Laws, this essay will deal with the question of how and why the Corn Laws were repealed in 1846. I will consider whether it was shocks or long-range developments, a shift in the political representation of interest groups, or a change in personal or public beliefs that caused the repeal.
4. AN EXTERNAL SHOCK: THE GREAT FAMINE
One of the most obvious explanations can be... |
Graphene, a form of pure carbon arranged in a lattice just one atom thick, has interested countless researchers with its unique strength and its electrical and thermal conductivity. But one key property it lacks — which would make it suitable for a plethora of new uses — is the ability to form a band gap, needed for devices such as transistors, computer chips and solar cells.
Now, a team of MIT scientists has found a way to produce graphene in significant quantities in a two- or three-layer form. When the layers are arranged just right, these structures give graphene the much-desired band gap — an energy range that falls between the bands, or energy levels, where electrons can exist in a given material.
“It’s a breakthrough in graphene technology,” says Michael Strano, the Charles and Hilda Roddey Associate Professor of Chemical Engineering at MIT. The new work is described in a paper published this week in the journal Nature Nanotechnology, co-authored by graduate student Chih-Jen Shih, Professor of Chemical Engineering Daniel Blankschtein, Strano and 10 other students and postdocs.
Graphene was first proven to exist in 2004 (a feat that led to the 2010 Nobel Prize in physics), but making it in quantities large enough for anything but small-scale laboratory research has been a challenge. The standard method remains using adhesive tape to pick up tiny flakes of graphene from a block of highly purified graphite (the material of pencil lead) — a technique that does not lend itself to commercial-scale production.
The new method, however, can be carried out at a scale that opens up the possibility of real, practical applications, Strano says, and makes it possible to produce the precise arrangement of the layers — called A-B stacked, with the atoms in one layer centered over the spaces between atoms in the next — that yields desirable electronic properties.
“If you want a whole lot of bilayers that are A-B stacked, this is the only way to do it,” he says.
The trick takes advantage of a technique originally developed as far back as the 1950s and ’60s by MIT Institute Professor Mildred Dresselhaus, among others: Compounds of bromine or chlorine introduced into a block of graphite naturally find their way into the structure of the material, inserting themselves regularly between every other layer, or in some cases every third layer, and pushing the layers slightly farther apart in the process. Strano and his team found that when the graphite is dissolved, it naturally comes apart where the added atoms lie, forming graphene flakes two or three layers thick.
“Because this dispersion process can be very gentle, we end up with much larger flakes” than anyone has made using other methods, Strano says. “Graphene is a very fragile material, so it requires gentle processing.”
Such formations are “one of the most promising candidates for post-silicon nanoelectronics,” the authors say in their paper. The flakes produced by this method, as large as 50 square micrometers in area, are large enough to be useful for electronic applications, they say. To prove the point, they were able to manufacture some simple transistors on the material.
The material can now be used to explore the development of new kinds of electronic and optoelectronic devices, Strano says. And unlike the “Scotch tape” approach to making graphene, “our approach is industrially relevant,” Strano says.
James Tour, a professor of chemistry and of mechanical engineering and materials science at Rice University, who was not involved in this research, says the work involved “brilliant experiments” that produced convincing statistics. He added that further work would be needed to improve the yield of usable graphene material in their solutions, now at about 35 to 40 percent, to more than 90 percent. But once that is achieved, he says, “this solution-phase method could dramatically lower the cost of these unique materials and speed the commercialization of them in applications such as optical electronics and conductive composites.”
While it’s hard to predict how long it will take to develop this method to the point of commercial applications, Strano says, “it’s coming about at a breakneck pace.” A similar solvent-based method for making single-layer graphene is already being used to manufacture some flat-screen television sets, and “this is definitely a big step” toward making bilayer or trilayer devices, he says.
The work was supported by grants from the U.S. Office of Naval Research through a multi-university initiative that includes Harvard University and Boston University along with MIT, as well as from the Dupont/MIT Alliance, a David H. Koch fellowship, and the Army Research Office through the Institute for Soldier Nanotechnologies at MIT. |
The most accurate thermometer in the known universe looks nothing like a thermometer. It is a copper vessel the size of a large cantaloupe, filled with ultrapure argon gas and studded with microphones and microwave antennas. The purpose of the gadget, which sits on the campus of the National Physical Laboratory (NPL) in Teddington, England, is not simply to measure temperature, however. Rather the device and others like it may allow scientists to completely overhaul the concept of temperature and recast it in terms of fundamental physics.
The plan rests on linking temperature to energy via a physical constant. Today the international standard temperature unit, the kelvin, is based on the properties of water, but scientists would like to bring it in line with other measurement units that have been liberated from the vagaries of the macro world. The second is now defined by the oscillations of a cesium atom; the meter relates to the speed of light in a vacuum. “It's bonkers that the kelvin doesn't directly relate temperature to energy,” says Michael de Podesta, who leads the research team.
The NPL device measures the Boltzmann constant, which links changes in energy to changes in temperature. De Podesta's team and its competitors hope to nail down the constant well enough to relate one kelvin to a certain number of joules of energy.
The new thermometer—technically an “acoustic resonator”—rings like a bell when the physicists feed certain sound frequencies into its microphones. From that sonic resonance, the researchers can determine the speed of sound within the gas-filled cavity and thus the average speed of the argon molecules—that is, their kinetic energy. In July, de Podesta's team reported in the journal Metrologia the most accurate measurement yet of the Boltzmann constant.
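For a monatomic gas such as argon, kinetic theory ties the speed of sound to temperature through the Boltzmann constant via c² = γkT/m, with γ = 5/3. The sketch below illustrates that relation only; it is not the NPL team's actual analysis, which involves many additional corrections.

```python
# Infer the Boltzmann constant from the speed of sound in a monatomic ideal gas:
# c^2 = gamma * k * T / m  =>  k = m * c^2 / (gamma * T)
ATOMIC_MASS_UNIT = 1.660539e-27          # kg
m_argon = 39.948 * ATOMIC_MASS_UNIT      # mass of one argon atom, kg
gamma = 5.0 / 3.0                        # heat-capacity ratio of a monatomic gas

def boltzmann_from_sound_speed(c_m_per_s: float, temperature_k: float) -> float:
    return m_argon * c_m_per_s**2 / (gamma * temperature_k)

# At the triple point of water (273.16 K, discussed below) the ideal-gas speed
# of sound in argon is about 308 m/s.
print(boltzmann_from_sound_speed(307.8, 273.16))   # ~1.38e-23 J/K
```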
The current temperature definition makes use of water's phase changes. One key threshold is the so-called triple point, 273.16 kelvins, where water ice, liquid and vapor can coexist. In 1954 an international agreement defined the kelvin as 1/273.16 of the difference between absolute zero and the triple point of water.
The 1954 definition works well in general but begins to break down for extreme temperatures, such as those found within stars. “It only happened this way because people started measuring temperature long before they knew what it actually was, before temperature was known to just be atoms and molecules buzzing around,” de Podesta remarks. “Now that we know better and have the opportunity to correct it, we should.” |
trephine
A hole saw used in surgery to remove a circle of tissue or bone.
- ‘The instrument for this is still called a trephine.’
- ‘The only difference in processing of the BM core and clot was decalcification of the trephine core biopsy.’
- ‘An instrument that acts like a cookie cutter, called a trephine, cuts a precise circular shape.’
- ‘The bone marrow aspirate was similar to the trephine, with 15% of blasts showing monocytoid differentiation together with an absolute monocytosis and mild dyserythropoiesis.’
- ‘In 2 cases, a bone marrow trephine biopsy was also available for examination.’
Operate on with a trephine.
- ‘The invention relates to a method and apparatus for trephining corneal tissue in preparation for keratoplasty.’
Early 17th century: from Latin tres fines ‘three ends’, apparently influenced by trepan. |
Note: Liverpool / New York.
Source: data from P.J. Hugill (1993) World Trade since 1431, Baltimore: Johns Hopkins University Press, p.128. Stopford, M. (2009) Maritime Economics, Third Edition, London: Routledge.
- Introduction. The steamship Great Western can be considered one of the first liners, crossing the Atlantic in 15.5 days in 1838. Early liners were made of wood and used paddle wheels, often complemented by sails, as the main form of propulsion. Their capacity was limited to fewer than 200 passengers.
- Growth. By the 1860s the introduction of iron hulls, compound steam engines and screw propulsion led to significant reductions in crossing times, to about 8-9 days. No longer limited by the structural constraints of wooden hulls, liners grew substantially in size, with tonnage exceeding 5,000 tons and a capacity of 1,500 passengers. The number and frequency of liner services across the Atlantic (and across the world) increased substantially.
- Maturity. Represents the Golden Age of the liner, when these ships dominated long-distance passenger movements. By the early 20th century (1907), the liner Mauretania, with a capacity of 2,300 passengers, was able to cross the Atlantic in 4.5 days, a record held for 30 years until the liner Queen Mary reduced the crossing time by half a day (to 4 days). Liners reached their operational capacity of around 1,500 to 2,000 passengers, and Atlantic crossing times stabilized at around 5 days. They relied on quadruple screws driven by steam turbine engines. This was also the period of peak American immigration from European countries, a process to which liners contributed substantially.
- Obsolescence. By the 1950s the prominence of the liner was challenged by the first regular transatlantic commercial flights. This challenge quickly asserted itself, and within a decade liners shifted from being the main support of transatlantic passenger movements to obsolescence. One of the last liners, the United States (mainly made of aluminum), set the transatlantic crossing speed record of 3.5 days in 1952. By the 1960s, air transportation had overtaken the supremacy of liners for transatlantic crossings, and the reference time became hours instead of days. Liner services disappeared and the surviving ships became the first cruise ships.
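Crossing times can be converted into implied average speeds once a route length is assumed; the figure of roughly 3,000 nautical miles for the Liverpool / New York run used below is an illustrative assumption, not a value taken from the sources above.

```python
# Average speed in knots implied by a transatlantic crossing time,
# assuming a route of roughly 3,000 nautical miles (illustrative figure).
ROUTE_NM = 3_000

def average_speed_knots(crossing_days: float) -> float:
    return ROUTE_NM / (crossing_days * 24)

for ship, days in [("Great Western (1838)", 15.5),
                   ("Mauretania (1907)", 4.5),
                   ("United States (1952)", 3.5)]:
    print(ship, round(average_speed_knots(days), 1), "knots")
# Great Western (1838) 8.1 knots
# Mauretania (1907) 27.8 knots
# United States (1952) 35.7 knots
```

Under that assumption, the implied averages rise from roughly 8 knots in 1838 to roughly 36 knots in 1952. |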
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2010 August 23
Explanation: Have you ever seen the Milky Way's glow create shadows? To do so, conditions need to be just right. First and foremost, the sky must be relatively clear of clouds so that the long band of the Milky Way's central disk can be seen. The surroundings must be very near to completely dark, with no bright artificial lights visible anywhere. Next, the Moon cannot be anywhere above the horizon, or its glow will dominate the landscape. Last, the shadows can best be caught on long camera exposures. In the above image taken in Port Campbell National Park, Victoria, Australia, seven 15-second images of the ground and de-rotated sky were digitally added to bring up the needed light and detail. In the foreground lies Loch Ard Gorge, named after a ship that tragically ran aground in 1878. The two rocks pictured are the remnants of a collapsed arch and are named Tom and Eva after the only two people who survived that Loch Ard ship wreck. A close inspection of the water just before the rocks will show reflections and shadows in light thrown by our Milky Way galaxy. Low clouds are visible moving through the serene scene in this movie.
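The “digitally added” step is ordinary exposure stacking; the sketch below is a generic illustration with synthetic frames, not the photographer's actual workflow (which also involved de-rotating the sky between exposures).

```python
import numpy as np

# Generic illustration of exposure stacking: summing several short exposures
# approximates one long exposure, lifting faint detail (such as Milky Way
# shadows) above the noise. Frames here are synthetic Poisson "photon counts".
rng = np.random.default_rng(0)
frames = [rng.poisson(lam=5, size=(100, 100)).astype(float) for _ in range(7)]  # seven 15-second frames

stacked = np.sum(frames, axis=0)          # the "digitally added" step
print(frames[0].mean(), stacked.mean())   # ~5 vs ~35: seven times the collected signal
```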
Authors & editors:
Jerry Bonnell (UMCP)
NASA Official: Phillip Newman Specific rights apply.
A service of: ASD at NASA / GSFC
& Michigan Tech. U. |
Researchers at the University of California, Riverside have made a notable advance in renewable energy technology, using waste glass bottles and an inexpensive chemical process to create the next generation of lithium-ion batteries. The glass is used to create the nanosilicon anodes needed in lithium-ion batteries. These new batteries will provide more power and longer battery life to electric and plug-in hybrid vehicles and personal electronics. Cengiz Ozkan and Mihiri Ozkan, two professors from the university, are leading the project.
Every year, billions of glass bottles end up in landfills. With this in mind, the researchers wondered whether the silicon dioxide in waste beverage bottles could provide high-purity silicon nanoparticles for lithium-ion batteries.
Silicon anodes have their positives and negatives. They can store up to 10 times more energy than conventional graphite anodes, but they expand and shrink during charging and discharging, which makes them unstable. It has been found that downsizing the silicon to the nanoscale reduces this instability. By taking a nearly pure form of silicon dioxide, like that found in waste glass bottles, and applying a chemical reaction, the researchers created anodes for lithium-ion batteries. These store four times more energy than conventional graphite anodes.
How they did it
To create the batteries, the glass bottles were crushed and ground into a fine white powder, which then underwent a magnesiothermic reduction to transform the silicon dioxide into nanostructured silicon. The researchers then coated the silicon nanoparticles with carbon, improving their stability and energy storage capabilities.
Coin cell batteries using the glass-bottle-derived silicon anodes outperformed traditional batteries in lab tests. The glass-derived silicon electrodes demonstrated excellent electrochemical performance, with a capacity of ~1420 mAh/g at a C/2 rate after 400 cycles. The researchers say that one glass bottle provides enough nanosilicon for hundreds of coin cell batteries.
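For context on the “four times” comparison above, graphite's theoretical capacity is about 372 mAh/g; the quick check below is a back-of-the-envelope sketch, not a calculation from the paper.

```python
# Compare the reported glass-derived silicon anode capacity with graphite's
# theoretical capacity of ~372 mAh/g.
graphite_capacity_mah_per_g = 372        # conventional graphite anode (theoretical)
glass_silicon_capacity_mah_per_g = 1420  # reported at a C/2 rate after 400 cycles

print(glass_silicon_capacity_mah_per_g / graphite_capacity_mah_per_g)  # ~3.8, roughly four times
# At a C/2 rate, a full charge or discharge takes about two hours by definition.
```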
The paper titled, “Silicon Derived from Glass Bottles as Anode Materials for Lithium Ion Full Cell Batteries” was published in Scientific Reports. |
Connect: I will say, “Yesterday we read the part of Mason-Dixon Memory that had a lot of evidence for our theme, and then we upgraded our theme. Today we are going to look at the text again to determine how the author crafted the text to show the theme.”
Teach: I will say, “In order to show a deeper understanding of the text, I am going to practice the skill of thinking about author’s craft and the strategy of using sentence stems. The process I will use is as follows:
1) Re-read the part of the text that I read yesterday
2) Think to myself, “Why did the author craft the text this way?”
3) Write long about the author’s craft
4) Add whether I agree with how the author crafted this theme.”
I will show students a “how authors craft theme” chart with sentence stems and give them an Authors Craft Resource Sheet.docx.
Then I will show the students, using Song of the Trees, how I identify an author’s craft moves. I read the part where Papa stands up to Mr. Anderson and say to myself, “Hmm…Mildred Taylor seems to be using a lot of dialogue and then describes the facial expressions of the characters.” I am going to jot down, “The author uses a lot of dialogue between Papa and Mr. Anderson, which shows the theme of ‘It takes an act of bravery to stand up to cowardice.’ For example……(then I will include text evidence).”
After I show at least three examples of craft (I will have the other two already written), I will show the students how I think through, “Do I agree with how the author crafted this theme?” I will show them an example of my writing.
Active Engagement: I will say, “I want you to re-read the part of Mason-Dixon Memory you read yesterday and, using the chart, turn and tell your partner what kind of craft moves you see Clifton Davis making. Tell your partner the piece of evidence that makes you think this.” I will check for understanding by listening to every level of learner (at least 3 students: one who is at standard, one who is approaching standard, and one who is above standard).
Closing of Active Engagement: I will say, “Remember, in order to show a deeper understanding of the text, great readers practice the skill of thinking about an author’s craft moves and the strategy of using sentence stems and their resources. They re-read the part of the text which shows their theme and think to themselves, ‘Why did the author craft the text this way?’ Then they write long about the author’s craft and add whether they agree with how the author crafted this theme.”
Independent Practice: I will say, “Now you are going to re-read the part of the story you read yesterday, annotate for author’s craft, and then write long about it using the sentence stems.” Students will organize and write quietly while I confer. I will put on the writing music.
Partner Work: Students will be directed to share their writing about author’s craft. I will say, “Decide who will be partner A and who will be partner B. Partner A, you will share your thoughts about how Clifton Davis crafted the theme(s) of Mason-Dixon Memory. Partner B, I want you to listen for whether partner A has a logical argument for their claim about how Clifton Davis developed this theme through the characters. Give your partner feedback as to whether they missed anything. I should hear you say, ‘I agree because….’ OR ‘I disagree because…’ Then switch.”
Closing: For today students will turn in their “write longs” in order for me to assess their understanding of author’s craft. |