A wayside shrine is a religious image, usually in some sort of small shelter, placed by a road or pathway, sometimes in a settlement or at a crossroads, but often in the middle of an empty stretch of country road, or at the top of a hill or mountain. They have been a feature of many cultures, including Catholic and Orthodox Europe and Shinto Japan.
The origins of wayside shrines
Wayside shrines were often erected to honor the memory of the victim of an accident, which explains their prevalence near roads and paths; in Carinthia, for example, they often stand at crossroads. Some commemorate a specific incident near the place, either a death in an accident or an escape from harm. Others commemorate the victims of the plague. The very grand medieval English Eleanor crosses were erected by King Edward I to commemorate the nightly resting places of the body of his wife, Queen Eleanor of Castile, as it was carried back to London in the 1290s. Some make it clear by an inscription or notice that a specific dead person is commemorated, but most do not.
Wayside shrines were also erected along old pilgrim routes, such as the Via Sacra that leads from Vienna to Mariazell. Some mark parish or other boundaries, such as the edge of a landholding, or function as convenient markers for travelers to find their way. Shrines and calvaries are also frequently marked on maps and therefore serve as important aids to orientation.
The pre-Christian cultures of Europe had similar shrines of various types; many runestones may have fallen into this category, though they are often in the nature of a memorial to a dead person. Few Christian shrines survive in predominantly Protestant countries, but they remain common in many parts of Catholic and Orthodox Europe, often being repaired or replaced as they fall into disrepair, and relocated as roads are moved or widened. The most common subjects are a plain cross or a crucifix, or an image of the Virgin Mary, but saints or other scenes may also be shown. The surviving large stone high crosses of Celtic Christianity, and the related stone Anglo-Saxon crosses (mostly damaged or destroyed after the Protestant Reformation), are sometimes outside churches, but often not, and these may have functioned as preaching crosses, or in some cases simply as wayside shrines. The calvaires of Brittany in France are especially large stone shrines showing the Crucifixion, but these are typically in villages.
Types of shrines
Wayside shrines are found in a variety of styles, ranging from simpler column shrines and Schöpflöffel shrines to more elaborate chapel-shrines. Some have only flat painted surfaces, while other shrines are decorated with reliefs or with religious statues. Some feature a small kneeling platform, so that the faithful may pray in front of the image. A common wayside shrine seen throughout the Alpine regions of Europe, especially Germany, Austria and northern Italy, is the Alpine style crucifix wayside shrine. This style often has elaborate wood carvings and usually consists of a crucifix surrounded by a roof and shelter.
A column shrine (German: Bildstock, also Marterl, Helgenstöckli, or Wegstock; Slovene: slopno znamenje; Lithuanian: koplytstulpis) normally resembles a pole or a pillar, made either of wood or of masonry, and is sometimes capped with a roof. The Austrian/south German designation Marterl harks back to the Greek martyros 'martyr'. In a setting resembling a tabernacle, there is usually a picture or a figure of Christ or a saint, and flowers or prayer candles are often placed on or at the foot of the shrine.
In Germany, they are most common in Franconia, in the Catholic parts of Baden, Swabia, in the Alpine regions and Catholic areas of the historical region of Eichsfeld and in Upper Lusatia. In Austria, they are to be found in the Alpine regions, as well as in great numbers in the Weinviertel, the Mühlviertel and in the Waldviertel. There are also similar structures in the South Bohemian Region and the South Moravian Region. In Czech, column shrines are traditionally called "boží muka" (= divine sufferings).
In the Eifel in particular, shrines that consist of a pillar with a niche for a depiction of a saint are known as Schöpflöffel (German for 'ladle' or 'serving spoon'). Some of these shrines date from the Late Middle Ages, but most were put up in the 16th century.
Near Arnstadt in Thuringia, there is a medieval shrine that is over two metres tall and that has two niches. According to a legend recorded by Ludwig Bechstein, this shrine was once a giant’s spoon, and it is therefore known as the Riesenlöffel.
Chapel-shrines, built to resemble a small building, are common in Slovenia. They are generally too small to accommodate people and often have only a niche (occasionally, a small altar) to display a depiction of a saint. The main two varieties generally distinguished in Slovenia are the open chapel-shrine (Slovene: kapelica odprtega tipa, odprti tip kapelice), which has no doors, and the closed chapel-shrine (kapelica zaprtega tipa, zaprti tip kapelice), which has a door. The closed chapel-shrine is the older form, with examples known from the 17th century onward. The earliest open chapel-shrines date from the 19th century. Also known in Slovenia are the belfry chapel-shrine (kapelica - zvonik) and the polygonal chapel-shrine (poligonalna kapelica).
Chapel-shrines, known as kapliczka, are also often found in Poland.
In the Czech Republic, chapel-shrines are called výklenková kaple 'niche chapels' and are characterized as a type of chapel (kaple) in Czech. In Moravia, they are also called poklona 'bow, tribute'.
A shrine in Hesselbach, Germany
---
Children should be exposed to all segments of society. Youngsters learning with children of different races, nationalities, and religions tend to be more tolerant and accepting of individual differences. They learn customs, beliefs and rituals of classmates that may be quite different from what they have been taught. Youngsters learning in an environment of diversity are well prepared to deal more effectively with society after they complete their education. A mutual respect and understanding of other cultures removes barriers and stereotypes. Individual differences need not be threatening. In fact, knowledge of other cultures helps a person realize and appreciate the similarities more than the differences. It is most important that the teacher is trained to teach about and respect individual differences. A diverse group of youngsters can add a great deal to the classroom environment. Interaction between children, handled effectively, can promote a climate of curiosity, mutual respect and acceptance.

Nina Rees addressed the topic of teaching styles in both public and private school systems. She suggested students achieve greater results in an environment in which competition and different religious and cultural backgrounds exist (Rees 93). Although students may have a different religion, culture, race and socio-economic level, they all deserve an equally outstanding education. There is a national attempt to give parents the option of a public or a private education for their youngsters.

There is also such a thing as a voucher system. In "Public Schools, Private Schools, Special Needs, and Voucher Systems - A General Review of Basic Principles," the author writes, "the idea of the voucher system is that parents would be given a voucher representing the normal per pupil expense for a public school student, and parents could then choose the school to which to deliver their child and their voucher. Parents could choose a private school or public one; and in either case the voucher money would go to the school. The voucher system blurs the distinction between public and private schools. The downside is that the voucher system might produce a lowering of the standards of the public school system and money would be drained as well. It also might produce a racial or ethnic balkanization of society." (www.angelfire.com/hi2/hawaiiansovereighty/publicprivatespecialbasic.html). A voucher system would be a moot point if the public and private sectors were comparable.
Equality for all socio-economic classes
An educational system should allow every student to reach his or her potential. The concept of equality is clearly stated in the Encyclopedia Britannica, "The constitution guarantees...freedom of opinion, expression, press, publication, assembly and association... any political party based on race, religion, region, or language is forbidden."(Encyclopedia Britannica). It would appear that an educational system that ignores this...
---
Finding Refractive Indexes
One of the most common uses of the refractive index is to compare the value you obtain with values listed in the literature. This comparison is used to help confirm the identity of the compound and/or assess its purity. The following sources list refractive indexes for a wide variety of substances:
- Chemical catalogs (e.g., the one from Aldrich Chemical Co.)
- MSDS datasheets (many are available on the web)
There are also many computer-based chemical databases that contain refractive indexes. For example, both the CRC Handbook of Chemistry and Physics and The Merck Index have computer-based versions. These can be particularly useful if your sample is an unknown and you want to search for compounds with similar indexes of refraction. One of the most comprehensive databases for organic compounds is MDL's Beilstein Crossfire database. (Last time I checked, it contained 96 reported values for the index of refraction of isopropanol alone!) If you don't have access to one of the commercial chemical databases, I recommend The Organic Compounds Database at Colby College, which can be used on the web for no charge.
Comparing Refractive Indexes
Since the refractive index of a substance depends on the wavelength of the light used, it is important that the refractive index you are comparing to was obtained at the same wavelength as the one you determined. This is usually not an issue, since the vast majority of refractive indexes are obtained using the sodium D line at 589.3 nm. (Even refractometers that use white light are normally constructed so that the refractive index obtained corresponds to that for light at 589.3 nm.)
The refractive index also depends on the temperature. Thus, it is best to obtain the refractive index of your sample at the same temperature as the value you plan to compare with; in most cases this will be 20 °C. However, if your refractometer is not equipped with a temperature regulating system, you may simply be stuck with room temperature, whatever that happens to be.
For most organic liquids the index of refraction decreases by approximately 0.00045 ± 0.0001 for every 1 °C increase in temperature. See Table 1 for a few examples. Note that the index of refraction for water is much less dependent on temperature than most organic liquids, decreasing by about 0.0001 for every 1 °C increase in temperature.
If you determined your index of refraction at a different temperature than that reported in the literature, you will need to correct your value for the temperature variation before comparing it to the literature value. For example, if you determined the index of refraction of an organic liquid at 24 °C and want to compare it to a literature value determined at 20 °C, you should add 4(0.00045) = 0.0018 to the index of refraction you obtained (or, equivalently, subtract 0.0018 from the literature value), since the index rises as the temperature falls.
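If you make this correction often, it is easy to script. Here is a minimal sketch in Python, assuming the generic organic-liquid coefficient above; use a compound-specific value when you have one:

```python
def correct_refractive_index(n_measured, t_measured, t_reference=20.0,
                             dn_dt=-0.00045):
    """Convert an index measured at t_measured (deg C) to its equivalent
    at t_reference, using dn_dt = change in n per +1 deg C (about
    -0.00045 for typical organic liquids)."""
    return n_measured + (t_reference - t_measured) * dn_dt

# Example from the text: a value measured at 24 deg C, literature at 20 deg C.
n_corrected = correct_refractive_index(1.3900, 24.0)
print(round(n_corrected, 4))  # 1.3918, i.e. 4 * 0.00045 added
```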
A typical laboratory refractometer can determine the refractive index of a sample to a precision of ± 0.0002. However, small amounts of impurities can cause significant changes in the refractive index of a substance. Thus, unless you have rigorously purified your compound, a good rule of thumb is that anything within ± 0.002 of the literature value is a satisfactory match.
Another possible source of error is miscalibration of the refractometer. This is readily checked by using a sample of known refractive index. Distilled water is a particularly convenient standard since it is nontoxic, readily available in pure form, and its refractive index varies only slightly with temperature (Table 1). If you find that the index of refraction of the standard is consistently off by more than 0.0005 from the expected value, report this to your instructor or the person in charge of calibrating the refractometer.
Probably the most common source of error in analog refractometers is misreading of the scale. If the index of refraction you determined seems inconsistent with other data, try repeating the measurement.
Determining Concentrations of Solutions
Determining the concentration of a solute in a solution is probably the most popular use of refractometry. For example, refractometer-based methods have been developed for determining the percentage of sugar in fruits, juices, and syrups, the percentage of alcohol in beer or wine, the salinity of water, and the concentration of antifreeze in radiator fluid. Many industries use refractometer-based methods in quality control applications.
In most cases the refractive index is linearly (or nearly linearly) related to the percentage of dissolved solids in a solution (Figure 2). By comparing the value of the refractive index of a solution to that of a standard curve the concentration of solute can be determined with good accuracy. Many refractometers contain a "Brix" scale that is calibrated to give the percentage (w/w) of sucrose dissolved in water.
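As an illustration of the standard-curve idea, here is a short sketch; the calibration points below are approximate sucrose-in-water values used only as placeholders, and you would build your own curve from measured standards:

```python
# Approximate (refractive index, % w/w sucrose) calibration points at 20 deg C.
calibration = [(1.3330, 0.0), (1.3478, 10.0), (1.3639, 20.0), (1.3812, 30.0)]

def percent_solids(n):
    """Linearly interpolate percent dissolved solids from the standard curve."""
    for (n_lo, c_lo), (n_hi, c_hi) in zip(calibration, calibration[1:]):
        if n_lo <= n <= n_hi:
            return c_lo + (n - n_lo) / (n_hi - n_lo) * (c_hi - c_lo)
    raise ValueError("refractive index outside the calibration range")

print(round(percent_solids(1.3550), 1))  # about 14.5 % (w/w)
```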
The refractive index does not provide detailed information about a molecule's structure, and it is not usually used for this purpose since spectroscopic techniques are much more powerful at revealing details of molecular structure. One structural factor that influences the refractive index of a sample is its polarizability. Substances containing more polarizable ("soft") groups (e.g., iodine atoms or aromatic rings) will normally have higher refractive indexes than substances containing less polarizable ("hard") groups (e.g., oxygen atoms or alkyl groups). See Table 2 below.
---
Prevention of blood clots in stroke patients
Stroke is the third leading cause of death in the United States. Strokes occur when the brain doesn’t get the oxygen it needs. There are two classifications of strokes and the treatment may be different depending on the underlying cause of the stroke. A stroke can be hemorrhagic or ischemic. An ischemic stroke occurs when blood flow to the brain is blocked. A hemorrhagic stroke occurs when a blood vessel in the brain bursts. Strokes require immediate medical attention. The sooner patients are treated for a stroke, the more likely they are to survive and have a better quality of life after the stroke.
Stroke patients are at a higher risk for venous thromboembolism (VTE). VTE refers to blood clots lodged in the deep veins of the leg. VTE can lead to serious conditions, including a clot traveling to the lungs, or even death. It is important that medical staff give prophylactic (preventive) treatment to stroke patients in order to avoid VTE. Preventive treatment can include intermittent compression to the lower legs and/or medication to thin the blood. A higher percentage may indicate that a hospital provides a higher level of patient care.
About this measure
This measure tracks the percentage of ischemic and hemorrhagic stroke patients who receive VTE prophylaxis by the end of their second day in the hospital.
In this case, a higher number is better.
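In software, the measure reduces to a simple numerator-over-denominator percentage. Here is a minimal sketch; the record layout is hypothetical, not a real reporting schema:

```python
def vte_prophylaxis_rate(stroke_patients):
    """Percent of eligible stroke patients who received VTE prophylaxis
    by the end of hospital day 2. Each record is a dict with the
    hypothetical boolean key 'prophylaxis_by_day_2'."""
    if not stroke_patients:
        return 0.0
    treated = sum(1 for p in stroke_patients if p["prophylaxis_by_day_2"])
    return 100.0 * treated / len(stroke_patients)

records = [{"prophylaxis_by_day_2": True}, {"prophylaxis_by_day_2": False}]
print(vte_prophylaxis_rate(records))  # 50.0
```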
---
"History is written by the victors," as a saying often attributed to Winston Churchill goes. Another way to understand this power to define reality is through the construction of master narratives. A master narrative is a majority-constructed script that specifies and controls how social processes are contextualized. An example of a master narrative perpetuated by our education system is the one about the "discovery" of America by Christopher Columbus.
When the Nina, the Pinta, and the Santa Maria made landfall in the Americas in 1492, the land was already settled by indigenous tribes. These tribes had a different worldview than the Europeans who came to their land. Journal entries and letters from Columbus himself observe the pacifistic behavior of the natives: "they are artless and generous with what they have, to such a degree as no one would believe but him who had seen it. Of anything they have, if it be asked for, they never say no, but do rather invite the person to accept it, and show as much lovingness as though they would give their hearts." While these qualities seem like they would pave the way for a welcoming banquet, Columbus instead chose to capitalize on their selfless nature. He enslaved the native peoples and shipped many back to Spain as slaves. Many who remained in the Americas were forced with violence to convert to Christianity throughout the European colonization of America.
"I ought to be judged as a captain who for such a long time up to this day has borne arms without laying them aside for an hour," Columbus wrote, proud of his violence and weaponry. My history teachers did not teach me about the cruelty and violence that Columbus prided himself on. My history teachers did not teach me about the speedy subjugation of the native population. Columbus Day became a federal holiday in 1937 and, in 1971, was fixed to the second Monday of October, a day to celebrate the ideals of patriotism. If days such as Columbus Day are celebrations of our patriotism, what is our patriotism truly representative of?
---
This book distills the information from the K-8 literacy text Literacy for the 21st Century, focusing specifically on literacy learners from pre-kindergarten through grade 4. What are the specific needs of these students? How can you predict early literacy difficulties, and how best can you scaffold your instruction to prevent reading difficulties in the future? How are the pre-K student's needs different from those of the older, primary-grades student?
Chapter-opening vignettes: Contextualize chapter concepts in an authentic classroom, complete with photos, dialog, and samples of student work to model excellent classroom teaching and prepare readers for the classrooms in their future.
Spotlights: An in-depth look at a single student, peppered through the chapters. Helps to detail literacy development and teacher decision making, one student at a time.
Guideline features: Offer specific guidelines for implementing chapter concepts in the PK-4 classroom.
Minilessons: Offer ready-to-use skill and strategy instruction presented specifically for use in PreK-4 reading and writing classrooms. Find how the minilessons correlate to state and national standards on the text's Companion Website.
The Compendium of Instructional Procedures: A robust resource of instructional methods designed to get teacher candidates up and running quickly in their first literacy classroom. The easily accessible Compendium at the back of the book offers clearly articulated instructional methods, an invaluable resource and quick reference.
Assessment Tools: Highlight the complete chapter on assessment and provide future teacher candidates with the means to evaluate their students' progress in early literacy. They'll also find ideas for alternative assessment.
---
Sentences are linked to form paragraphs, which are linked to form the essay that is the introduction or discussion section. Cohesive writing contains sentences logically arranged, allowing the reader to immediately grasp the intended meaning. The writer guides the reader using key words at the beginning of each sentence, which often influence the reader's thinking. Certain words alert the reader to how a new sentence relates to the preceding one, helping the reader quickly grasp the exact meaning of each sentence. If the reader cannot, he or she must pause and think about the sentence. In that time, the reader may lose interest in the work, or may interpret the writing differently than the author intended and ultimately arrive at a different conclusion. To avoid these possibilities, link sentences using precise transitional words and phrases.
The next four sections list words and techniques to link sentences.
---
In the news this month: neutron stars, the cosmic web and a black hole.
Neutron stars are without a doubt some of the most extreme objects known to astronomers. They are formed when a massive star runs out of fuel to sustain the nuclear fusion processes that push back against gravity. The star effectively collapses under its own weight, and if its mass is in the right region (roughly 8-30 solar masses), it will compress down to a neutron star. The density in a neutron star is around 10^12 kg/cm^3, meaning that a teaspoonful of the stuff would weigh roughly as much as every human being alive (around 400 million tons). They also rotate extremely quickly, tens or even hundreds of times a second, and possess a magnetic field up to a trillion times that of the Earth. Magnetars are a subset of neutron stars which rotate at a slower pace (about once every few seconds) and with a magnetic field thousands of times stronger than that of a regular neutron star - these objects have the strongest fields in the known universe!

So far, only 20-odd magnetars have been detected. One of them has recently been behaving unexpectedly, though: astronomers using the X-ray telescope Swift have found that its rate of rotation abruptly slowed down. While neutron stars have been observed to sporadically speed up their rotation for short periods of time (known as a 'glitch'), this is the first time a star has slowed down like this, a phenomenon that has been dubbed an 'anti-glitch'. While the mechanisms involved are not fully understood, it is thought that interactions between the star's fluid inner components and its outer crust made of iron are responsible. These may rotate at different rates: if the crust slows down slightly, the superfluid neutron interior will remain spinning at the same speed, but from time to time it, too, will slow down, and by conservation of momentum the surface will speed up, thus explaining a glitch. Complex magnetic effects, moving material around the crust and the inner fluid, may be responsible for the opposite effect, with the inner fluid slowdown and subsequent speedup causing the anti-glitch. The observation of this anti-glitch is in any case going to help increase our understanding of the truly exotic objects that are neutron stars.
Since the coining of the term 'island universe' by Immanuel Kant to describe the nebulae people suspected were distinct entities, the study of galaxies has come a long way. While the image is poetic, we now know that galaxies are not isolated in space; the predominant theory behind galaxy formation is that of hierarchical growth: small clumps of matter interact gravitationally, colliding and merging with each other to form bigger and bigger objects, and eventually galaxies. Simulations of this mechanism have been quite successful in describing the formation of elliptical galaxies. However, observational evidence has shown that big, bright galaxies are more commonly found when looking back to earlier times in the universe - this goes against the model of large galaxies building up over time. Furthermore, it has been calculated that at the rate galaxies are using up their hydrogen to form new stars, they should run out in a few billion years - but galaxies such as our own Milky Way have been steadily consuming hydrogen for more than 13 billion years.

It is in fact possible that galaxies are even more linked than we thought: numerical simulations have suggested that galaxies contain only around a third of the so-called baryonic, or ordinary, matter: protons, electrons and neutrons. The other two thirds are thought to be found in intergalactic filaments of gas, forming a sort of cosmic web. This would be where all the hydrogen the galaxies are consuming comes from. Furthermore, it seems that the gradual condensation of these giant clouds is more responsible for the formation of galaxies than the hierarchical interaction model. Why hasn't this model been suggested sooner? Until recently, the huge intergalactic filaments were incredibly difficult to spot - they are mainly formed of ionized hydrogen, hydrogen atoms that have lost their electrons. Direct detections are extremely fiddly, but since intergalactic hydrogen is never 100% ionized, it is possible to pick up some neutral hydrogen and infer the presence of its ionized counterpart. This is exactly what a team of astronomers has done, mapping the so-called 'fractional neutral hydrogen' located between the Andromeda and Triangulum galaxies. This was a very challenging task: the fractional hydrogen is 10,000 times less dense than the hydrogen commonly observed in galaxies, and the detection was made at the limits of current technology. Nevertheless, this opens the door to more direct studies of the intergalactic gas, which will greatly help our understanding of galaxy formation and evolution processes, and of how galaxies interact with their surrounding medium.
Finally, astronomers have observed some surprisingly hot gas in the neighbourhood of Sagittarius A*, the bright radio source that is theorized to be the location of the Milky Way's central, supermassive black hole. Like most other spiral and elliptical galaxies, our own galaxy contains a huge black hole at its centre. The origin of these central singularities is still the subject of much research and debate, although observations at very high redshift (thus looking back towards the early universe) show that black holes could be in place at the centres of galaxies as early as 1 billion years after the Big Bang. Our own black hole has a mass of about 4 million times that of the Sun and is located around 26,000 light-years away from the Solar System. A team of astronomers using the Herschel space observatory found clouds of extremely hot gas (around 1,000 degrees Celsius, much hotter than the usual handful of degrees above absolute zero) less than a light-year away from the black hole. Herschel is an infrared telescope, which can see through the dust obscuring the centre of the galaxy and reveal the environment around the black hole. It is suspected that the high temperatures of the gas are due to shocks and collisions within the gas caused by strong magnetic fields.
Interview with Prof. David Neufeld
Prof. David Neufeld from Johns Hopkins University, USA, talks to us about hydrides, discussing what they are and how we can observe them using both Herschel and SOFIA, an airborne infrared observatory. He discusses hydrogen fluoride (HF) specifically and talks about its uses in deriving abundances of molecular hydrogen. He also goes on to tell us about SH, also known as a mercapto radical, why it was absent from previous interstellar observations and what it can tell us.
The Night Sky
Ian Morison tells us what we can see in the northern hemisphere night sky during June 2013.
Leo the Lion is in the west after sunset. Between Leo's hindmost star, Denebola, and the bright star Arcturus, in Boötes, is the constellation of Coma Berenices, which hosts part of the Virgo Galaxy Cluster. Corona Borealis, the Northern Crown, is an arclet of stars between Boötes and Hercules. The four brightest stars in Hercules make a trapezium shape called the Keystone, and the globular cluster M13 can be found two thirds of the way up one side of it. The bright star Vega, in Lyra, is towards the east, and near to it is the Double Double - Epsilon Lyrae - which appears as a double star in binoculars but as a pair of double stars through a telescope. Cygnus the swan rises high into the sky later in the night, with its bright star Deneb. Altair, in Aquila, is lower to the south-east and completes the Summer Triangle of Vega, Deneb and Altair. About a third of the way from Altair to Vega is the dark region of the Milky Way called the Cygnus Rift, as well as the asterism called Brocchi's Cluster or the Coathanger.
- Jupiter is still just about visible at twilight at the beginning of the month. It shines at magnitude -1.8, but is lost against the setting Sun by mid-month, after which it will re-emerge into the pre-dawn sky towards the end of July.
- Saturn is in Virgo and crosses the south as darkness falls. It is near the first-magnitude star Spica, but appears more yellow in colour. Its angular diameter decreases from 18.5 to 17.8" over the month as it moves away from us. It also approaches the star Kappa Virginis, which has a magnitude of +4.2, and is 0.5° away from it at month's end. Saturn's rings are now at 17° to the line of sight, allowing the largest gap between the rings, Cassini's Division, and the planet's largest moon, Titan, to be seen using a small telescope. Saturn's maximum elevation each night is now quite low, and will continue to decrease over the coming years.
- Mercury forms the top of a line with Venus and Jupiter on the 1st. It has a magnitude of -0.4, and reaches greatest eastern elongation (its furthest easterly point from the Sun in the sky) on the 12th. It is best seen at that time, being 24° from the Sun, and can be most easily viewed around 30 minutes after sunset. A telescope will show its slightly gibbous disc, 8" across. Mercury is 2.1° from Venus on the 18th, moving below it to 1.9° separation the following night. You may need binoculars to locate Mercury at this time, so be sure to use them only after the Sun has gone down.
- Mars reached superior conjunction (passing behind the Sun) on the 18th of April, and this month appears in the eastern sky before dawn. It rises about 30 minutes before the Sun on the 1st. It is difficult to spot at magnitude +1.4, but this becomes easier by the end of the month, when it is 7° above the horizon shortly before dawn. You may still need binoculars to find it, so put them away before the Sun comes up.
- Venus is about 8° above the western horizon 30 minutes after sunset at the beginning of the month. It does not get very high in the sky, reaching 10° elevation around the 20th-25th. Its disc, 10" across, is 96% illuminated at the start of June as it is on the far side of the Sun, shining at magnitude -3.8. By the end of the month, it is still 91% illuminated.
- The asteroid Ceres can be found between the 5th and 7th, when it passes within 1° of the star Pollux, in Gemini. Look towards the west about an hour after sunset using binoculars to spot the asteroid at magnitude +8.8, but don't mistake it for a star of magnitude +8.4 nearby!
- Mercury, Venus and a thin crescent Moon congregate on the 10th, visible shortly after sunset if you have a low western horizon. You may also spot earthshine - sunlight reflected from the Earth and reflected again from the dark part of the Moon.
- A gibbous Moon appears very close to Spica, in Virgo, on the 18th, with Saturn not far away.
John Field from the Carter Observatory in New Zealand speaks about the southern hemisphere night sky during June 2013.
The south-eastern evening sky is dominated by the zodiacal constellations of Scorpius the Scorpion and Sagittarius the Archer. The red star Antares marks the Heart of the Scorpion, and its name means 'The Rival of Mars'. To Māori, and some Polynesians, Scorpius is seen as a fishing hook. Rehua is one Māori name for Antares, showing the blood of Māui staining the eye of the Hook. Straddling the Milky Way, the region around Scorpius is home to a number of nebulae and star clusters. The globular clusters M4 and NGC 6144 are near to Antares and can be observed using binoculars, while a number of double stars can be found along the body of the Scorpion. The open star cluster NGC 6231 appears rather like a comet to the naked eye and is near to the Scorpion's stinger, as is the hazier-looking open cluster M7. M6, the Butterfly Cluster, is in the same region but is fainter. Sagittarius also contains a wealth of nebulae and star clusters, while its brightest stars form the asterism known as the Teapot. Using binoculars, the globular cluster M22 can be found near to Lambda Sagittarii, which marks the top of the Teapot. M8 and M20 - otherwise known as the Lagoon Nebula and the Trifid Nebula - make spectacular sights in Sagittarius. M8 is a compact open cluster surrounded by a circle of nebulosity containing a dark rift. M20 is similar, but is distinguished by dark lanes that split the nebula into three segments. The constellation of the Archer also hosts M23, an open cluster forming arcs of stars, M24, a looser cloud of stars, M25, an open cluster containing several deep yellow stars, and M55, a globular cluster. The Milky Way is at its brightest, widest and densest around Scorpius and Sagittarius because we are looking towards the centre of our Galaxy, some 30,000 light-years away. In Arabic it is Al Nahr, the river; to the Chinese it is the River of Heaven; and to Māori it is Te Ika Roa, the Long Fish. It contains dark bands consisting of gas and dust which may eventually form new clusters of stars.
The planet Saturn is easily spotted in the northern sky after sunset, while Venus appears with Mercury in the west. The Moon will also be in the west as the Sun sets on the 10th, while Venus and Mercury will be only 2° apart on the 20th. The 21st marks the winter solstice, when the Sun rises and sets at its most northerly points and the night hours are at their longest. This date was celebrated in many cultures. In Aotearoa (New Zealand), the dawn rising of Matariki (the Pleiades Cluster) and Puanga (the star Rigel) coincide with the winter solstice, and mark the beginning of the new calendar year in the Māori system known as Te Maramataka.
Odds and Ends
The International Space Station welcomed three new crewmembers on the 29th of May. Russian Fyodor Yurchikhin, American Karen Nyberg and Italian Luca Parmitano formed Expedition 36 and joined the three astronauts already on board to complete the crew. Parmitano is the first of ESA's new batch of astronauts to go up to the ISS. The spacefarers have a busy schedule ahead, with over 70 hours of experiments a week to conduct.
Seats on a Virgin Galactic space flight with a "mystery guest" were auctioned at the Cannes film festival auction for the amfAR Cinema Against AIDS charity. The mystery guest is reported to be Leonardo DiCaprio, and the seat next to him sold for 1.2 million euros. A second pair of seats on the same flight sold for 1.8 million euros. The 'normal' price for a seat on a Virgin Galactic space flight is $250,000, and around 550 people have already paid (either partially or fully) for tickets. Commercial flights could happen as early as 2014.
Late in May, the European Space Agency (ESA) collected ideas from astronomers for the next two large (or L-class) ESA missions. ESA only plans to fund three of these L-class missions in the next two decades. Last year, ESA selected the JUpiter ICy moons Explorer (JUICE) as its first L-class mission in this time period; the spacecraft is scheduled to be launched in 2022. The next two missions, which will be selected over the course of the next few years, would be scheduled for launch in 2028 and 2032. ESA could select from a variety of mission concepts, including a mission to other planets, asteroids, or comets; a spacecraft that can be used to measure gravitational waves from binary pulsars and merging black holes; a new X-ray space telescope; a new infrared/millimetre all-sky survey telescope; or a high-resolution infrared telescope. More information on the selection process, as well as ESA's Cosmic Vision programme, can be found on ESA's website.
Interview: Christina Smith and David Neufeld
Night sky: Ian Morison and John Field
Presenters: George Bendo, Indy Leclercq and Christina Smith
Editors: Adam Avison, George Bendo, Claire Bretherton, Indy Leclercq and Mark Purver
Segment voice: Mike Peel
Website: Indy Leclercq and Stuart Lowe
Cover art: The Galactic centre, seen here in infrared from the 2MASS project. Obscured by dust clouds in visible light, the Galactic centre is home to a plethora of stars and a supermassive black hole around 4 million times as massive as the Sun. Atlas image obtained as part of the Two Micron All Sky Survey (2MASS), a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. Credit: 2MASS/G. Kopan and R. Hurt
---
Gum disease is commonly caused by poor oral hygiene. When teeth are not regularly brushed and flossed, plaque builds up on their surface. Plaque left on the teeth hardens, forming tartar, which cannot be removed by brushing and flossing. This is why it is important to have your teeth professionally cleaned at least once a year. Only a trained dental professional can remove tartar.
When tartar is left on the teeth for a long time, it becomes very harmful to the gums. Your mouth always contains bacteria, but they can be controlled by good oral hygiene. When the bacteria are not controlled, they cause an inflammation of the gums called gingivitis. The symptoms of gingivitis are red and swollen gums that bleed easily, but it does not result in loss of bone or connective tissue. Gingivitis is a mild form of gum disease and may be reversed by our specialist.
Periodontitis is an advanced form of gum disease that may develop if gingivitis is not treated. Periodontitis means inflammation around the tooth; it makes the gums pull away from the teeth and form pockets. These pockets are hiding places for bacteria and may become infected. As bacterial toxins spread and the body tries to fight the disease, the bone and connective tissue that hold the teeth in place break down. If periodontitis is not treated, the result may be disastrous for your health: the teeth can become loose and may need to be removed.
Fortunately, gum disease is easy to avoid with a little oral care and attention to some risk factors. Smoking is one of the most significant risk factors for developing gum disease and can reduce the chances of a successful treatment.
Women should be more attentive at times when they experience hormonal changes, such as pregnancy, menstruation and menopause, because the gums are more sensitive at these times. People who have diabetes are at higher risk of developing any infection, including gum disease. Some treatments for cancer may also put you at higher risk of developing gum disease.
Certain medications can put you at higher risk because they inhibit the flow of saliva that protects the teeth and gums from bacteria. Without enough saliva, the gums may be vulnerable to infection. Genetics is another risk factor because if your parents or grandparents had gum disease, there is a good chance you will too.
The main symptoms of gum disease are:
• Red and swollen gums
• Bad breath that cannot be removed with brushing and mouthwash
• Pain while chewing
• Bleeding when flossing
• Sensitive or loose teeth
• Receding gums that make the teeth look longer
If you have any of the above symptoms, it is important to visit the office of Dr. Latha Subramanian, DDS to see if you have gum disease. It is much easier to cure if it is diagnosed early. Contact us today to schedule an appointment at our office in Mountain View.
---
An engine is a machine used to transform one kind of energy into another to produce work. In this project, we'll learn how to make a rubber band heat engine, a type of engine that converts thermal energy, or heat, into mechanical energy, or movement.
Typically, things expand (get bigger) when heated, and contract (get smaller) when cooled. Have you ever heard creaking sounds in your room at night? These come from the floorboards contracting due to the drop in temperature that usually accompanies the sun going down.
However, this rule doesn’t apply to rubber! The molecular structure of rubber is very complex: imagine a bunch of molecules linked together in a “chain” that resists being stretched when pulled on. This “stretchiness” enables rubber to be used in erasers, bicycle tires, and bungee cords. However, this chain-like structure also causes rubber to behave rather unusually during temperature changes.
What kind of work will a rubber band perform when it's heated and cooled?
- Rubber bands of different thicknesses
- Scissors
- Coin (to use as a weight)
- Ruler
- Heat lamp
- Ice cube
- Hammer and nail
- Push pin
- Duct tape
- Plastic spool from a roll of tape (a roll with a gutter works really well; these can be acquired from certain rolls of electrical tape or PTFE tape)
- Hammer a nail into the wall about a foot above a desk or countertop that is close to an electrical outlet (so that you’ll be able to plug in your heat lamp later). Ask an adult before making holes in the wall! Leave an inch or two of the nail sticking out of the wall.
- Take the tape roll and hang it on the nail through its center so it can rotate freely.
- Use the scissors to cut a rubber band open so you have one long strip.
- Use a thin strip of duct tape to tape the coin to one end of the cut rubber band.
- Use the push pin to secure the end of the rubber band without the weight to the wall on one side of the tape roll.
- Drape the other side of the rubber band with the weight over the tape roll and let it hang freely.
- Set up the heat lamp to point at the rubber band. Don’t turn it on yet!
- Using tape, mark the starting height of the weight (coin) on the wall.
- Rub an ice cube along the part of the rubber band between the tape roll and the push pin. Wait about a minute and record your observations.
- Mark the distance (also called displacement) of the coin from its original position on the wall with tape. Use your ruler to measure and record the distance.
- Set up your heat lamp so that the bulb is about 3 inches away from your rubber band. Turn your heat lamp on. Wait about two minutes and record your observations.
- Mark the displacement of the coin from its original position on the wall with tape. Use your ruler to measure and record the distance.
- Repeat the experiment with rubber bands of different thicknesses. Does rubber band thickness affect how much the band will expand or contract?
The rubber band will expand (lengthen) when rubbed with ice, and will contract (shorten) when heated by the lamp.
The work performed by your heat engine in this experiment is the movement of the weight by the expansion and contraction of the rubber band.
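To attach a number to that work, multiply the weight of the coin by the distance it moves. The mass and displacement below are made-up example values, not measured results:

```python
g = 9.8  # gravitational acceleration in m/s^2

mass_kg = 0.0025        # hypothetical 2.5 g coin
displacement_m = 0.004  # hypothetical 4 mm rise of the coin

work_joules = mass_kg * g * displacement_m  # W = m * g * d
print(f"work done on the coin: {work_joules:.1e} J")  # about 9.8e-05 J
```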
When the rubber band is heated, heat energy from the surrounding environment goes into the molecules of the rubber band and causes them to vibrate. The more the molecules vibrate, the more they collide with their neighbors, putting tiny kinks and bends into each chain. This causes the rubber band to contract.
When the rubber band is cooled with the ice, its chain-like molecular structure becomes more rigid, or stiff, because the molecules vibrate and collide with each other less. With fewer kinks being put into the chains, the chains are allowed to loosen up and straighten, so the rubber band stretches out under the weight.
Get another rubber band. Without stretching it, touch it to your lip. How does it feel? What do you notice about its temperature? Now, stretch it out quickly and bring it to your lip while it’s still stretched. You should notice that it’s warmer, because you performed work on the rubber band. The energy you gave the rubber band got converted to heat.
Wait a few seconds with the rubber band still stretched out. Quickly release the tension and let it assume its normal shape. Bring it to your lips now. It should feel cold, since it absorbed thermal energy!
---
STAGE: 3rd and 4th ESO students
TITLE OF THE WEBQUEST: Discover Ireland
To create a project-work based on searching for specific information on the Internet.
To select and process information from the net.
To develop students’ thinking skills.
To encourage students to work through cooperative groups.
To promote the use of new technologies in the language class.
- Writing a project-work to be presented to the classmates.
- Presenting orally all the information gathered.
- Learning some cultural, social and economic aspects of Ireland.
- Integrating the four skills (listening, reading, speaking and writing).
- Searching, analysing and comparing different pieces of information.
- Identifying and selecting the most useful information.
- Organising different pieces of information in order to create a coherent whole.
- Designing a PowerPoint presentation.
- Taking part in an oral presentation of their work.
- Showing a positive attitude towards learning English.
- Showing a positive attitude towards working in groups.
- Showing creativity both in written and oral communication activities.
ASSESSMENT: Students will be assessed according to a rubric which will be given to them at the beginning of their project.
TIME: This activity will last three or four sessions, depending on the number of students in class. During the first and second days, students will be divided into groups of three or four and the task will be explained to them. Then they will start their research on the net. On the third or fourth day they will present their work orally.
RESOURCES: In order to do this activity, students must have access to the Internet and to word-processing and presentation software such as Microsoft Word and PowerPoint.
---
When we talk about denaturing a protein, we mean that the natural structure of the protein is altered, and its biological activity may be changed or destroyed. This does NOT, however, disrupt the primary structure.
Some denatured proteins are able to return to their native structure under the proper conditions, but extreme conditions such as strong heating usually cause irreversible changes.
Different denaturing processes include:
Heat – increased translational and vibrational energy can break hydrogen bonds (coagulation of egg-white albumin in frying).
Ultraviolet radiation – similar in effect to heat (sunburn).
Strong acids or bases – salt formation, disruption of hydrogen bonds.
Urea – competition for hydrogen bonds (precipitation of soluble proteins).
Some organic solvents – change in dielectric constant and hydration of ionic groups.
Agitation – shearing of hydrogen bonds (beating egg-white albumin into a meringue).
- Denaturing of proteins usually occurs by heat, which affects the interactions within a protein molecule.
- As temperature increases slowly, the protein’s conformation generally remains intact until an abrupt loss of structure occurs over a narrow temperature range.
- The abruptness of this change suggests that unfolding is a cooperative process: loss of structure in one part of the protein destabilizes the remaining parts.
- Solubility drastically decreases, as in heating egg white, where the albumins unfold and coagulate.
- Because most enzymes are proteins, they naturally lose their catalytic power when they denature.
Proteins can also be denatured by chemicals, two classes of which are:
- Chaotropes, ions that enhance the solubility of nonpolar compounds in water by disordering the water molecules. Examples of chaotropes are SCN- (thiocyanate), ClO4- (perchlorate), the guanidinium ion and the nonionic compound urea. The disordered water molecules disrupt the hydrophobic interactions that normally stabilize the native conformation.
- Detergents: the hydrophobic tails of detergents such as sodium dodecyl sulfate (SDS) also denature proteins by penetrating the protein interior and disrupting hydrophobic interactions.
---
Bacteria grow FAST! In ideal circumstances, some common bacterial cells can divide and double every 20 minutes.
In the following experiment, you are going to use different methods for counting bacteria - LOTS of bacteria!
Microbiologists use many different ways to "count" bacteria. Some are direct methods, such as counting cells under a microscope ("total count"). Others are indirect methods (e.g., electrical resistance or ATP production).
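Before you start counting, it helps to appreciate what 20-minute doubling implies. A quick sketch, assuming idealized, unlimited growth:

```python
def bacteria_count(hours, start=1, doubling_minutes=20):
    """Idealized exponential growth: one doubling per doubling_minutes."""
    doublings = (hours * 60) // doubling_minutes
    return start * 2 ** doublings

print(bacteria_count(8))  # 2**24 = 16,777,216 cells from a single cell
```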
---
Title: Major theories in the organization of Families
Description: In this lesson plan the teacher explores the major theoretical frameworks, and how each framework views the organization of families.
Expectations: It is expected that students learn the differing viewpoints about families.
Student Groupings: Students work in groups of 3-4 to research information about a theoretical framework, and each member of the group is expected to participate in the presentation.
The teacher provides the following major theoretical frameworks concerned with families, with a brief introduction to each:
*Conflict/Political Economy Approach
*Social Exchange Theory
After the introduction, students form groups of three or four and must research the following information for the presentation:
*Basic overview of the theoretical framework
*Underlying philosophical assumption.
*Strengths/Weaknesses of theoretical framework.
*Common issues investigated by the theoretical framework.
*Terms/Key words of the theoretical framework.
*Phrase to remember theoretical framework by.
Assessment: Each student is assessed on their participation in the group presentation. The group can be marked on the following criteria: content of information presented, use of resources, organization of ideas, etc.
Resource: Baker, Maureen. Families: Changing Trends in Canada. Toronto: McGraw-Hill Ryerson Limited, 1990.
Submitted by: Natalia Charles
---
Students will explore different rates of change. Using the TI-Nspire, students will be expected to make predictions based upon information that a Pharaoh has given. Students will explore points in a scatter plot of time and height from the building of a pyramid in ancient times. They will calculate the rates of change and answer questions based upon information given by the Pharaoh. This is an investigative activity that relates rate of change to the slope of a line.
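The computation at the heart of the activity is an average rate of change between data points. A minimal sketch follows; the time/height pairs are invented placeholders, not the values in the TNS file:

```python
# Hypothetical (time in years, pyramid height in cubits) data points.
points = [(0, 0), (4, 60), (8, 100), (12, 120)]

# Average rate of change (slope) between consecutive points: rise over run.
for (t1, h1), (t2, h2) in zip(points, points[1:]):
    slope = (h2 - h1) / (t2 - t1)
    print(f"years {t1}-{t2}: {slope:.1f} cubits per year")
```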
Before the Activity
Download the activity and assessment. Teachers will want to transfer the TNS file to the students' handhelds before starting the activity. Use the assessment during the activity so that the teacher is able to gauge student learning.
During the Activity
Teachers may want to make copies of the assessment and hand them out before the activity begins.
---
William Henry Harrison is widely accepted as the ninth President of the United States. In fact, he was a fictitious character, invented because the United States did not have a satisfactory presidential candidate by the planned inauguration date, March 4, 1841. This situation has never happened again, although the United States came close when the American people had to choose among Senators Obama, Clinton, and McCain before the inauguration date of January 20, 2009.
In this particular election, the leaders of America realized throughout the campaigns that the American people did not have a clear-cut candidate, so they invented William Henry Harrison. The government went so far as to give this fictitious character a stellar military record stretching back to the War of 1812. They also commissioned an artist to paint a portrait of him. When the votes were tallied, they gave the office to Harrison.
A portrait of the first fictitious President of the United States.
For 30 days, 12 hours, and 30 minutes, the government continued the hoax until it settled upon John Tyler as the tenth president of the United States (technically, then, the ninth). Tyler spent four turbulent years in office, at one point losing all but one cabinet member.
---
Geography Intent at St Francis Xavier
Geography is essentially about understanding the world we live in. It helps to provoke and provide answers to questions about the natural and human aspects of the world. At St Francis Xavier, children are encouraged to develop a greater understanding and knowledge of the world, as well as their place in it, to enhance their cultural capital and to know about life beyond Balby. The geography curriculum enables children to develop knowledge and skills that are transferable to other curriculum areas. Geography is an investigative subject, which develops an understanding of concepts, knowledge and skills. At St Francis Xavier, our intent when teaching geography is to inspire in children a curiosity and fascination about the world and the people within it; to promote the children's interest in and understanding of diverse places, people, resources and natural and human environments; and to foster a deep understanding of the Earth's key physical and human processes.
In Year One, for our first geography lesson, we explored our school by following a map. We looked at what our school has and how it could be improved. We also explored how keys are used in maps to help us follow them.
For the beginning of the second part of our Europe journey, we are going to be focusing on Italy and Rome. Therefore, we immersed ourselves in Italian cuisine. We tried different types of pizzas, pastas and macarons. Once we had tried them, we described the appearance, the smell and the taste of the food to help us get a feel for Italian food.
How would you care for a dragon? Well Nursery know how to! The children talked about how they would care for Madam Dragon. They mentioned how they would hug her, feed her, give her water and also play with her!
Year 2 have loved learning all about Planet Earth in Geography. They have learnt about the 7 continents and 5 oceans and are learning lots of interesting facts about the different continents.
As part of our Spring term topic, Year 4 are looking at the Amazon. We started by looking at the human and physical geography of South America. We have then looked at where rainforests are located in the world, before investigating the different layers within the rainforest.
---
FORMATIVE-RICH™ Elementary Math Lessons – a Learning Gap Closer and Beyond!
By Darrell Ward, PhD, and Holly Sutton
This article describes a set of mathematics lessons for elementary students, pre-K through 5th grade. These lessons contain unique and effortless teacher activities and support that we believe can rapidly close the pandemic-created learning gap in young mathematics learners. Additionally, they provide the basis for students to grow continuously throughout their careers as owners of their learning. These lessons are unique not only in that they deliver graphically designed situational engagement with many opportunities for students to respond; they also provide continuous assessment opportunities with exceptional teacher feedback scenarios and a variety of timely reports. They can be used in a variety of ways: as collaborative learning, as self-paced instruction, and with parental involvement via a student portal. It is vitally important that instruction, engagement, assessment, student effort and feedback rise again in teacher-centered classrooms. It is also important that schools provide a solution for the learning gap created by the pandemic. We seek to transform this tragedy into an opportunity to renew our commitment to solid in-class FORMATIVE-RICH™ activities proven to raise student achievement levels.
Learning Intentions and Success Criteria
Learning intentions describe what it is that we want students to learn, and their clarity is at the heart of formative assessment (see Hattie 2012). As John Hattie points out in his research, teacher clarity (a deep understanding of what a teacher is teaching) is a real winner, as it sits well to the right of the 0.4 effect-size "hinge point". An effect size above 0.4 indicates a significant positive impact on student achievement.
The work of John Almarode and Kara Vandas (2019) greatly expands on teacher clarity and its impact on student learners. They define five essential components of clarity for both teachers and students:
- Crafting learning intentions and success criteria
- Co-constructing learning intentions and success criteria with learners
- Creating opportunities for students to respond (formative assessment)
- Providing effective feedback on and for learning
- Sharing learning and progress between students and teachers
Additionally, they define assessment-capable learners. These are learners who:
- Know what is to be learned
- Know how they are progressing
- Know what they need to learn next
Notice the increase in Hattie's effect size as learners take control of their learning. The impact is quite impressive; thus, we will focus on creating assessment-capable learners by supporting the major clarity points and partnering those concepts with technology that further expands the capabilities of teacher clarity.
Every lesson in our elementary mathematics suite opens with the lesson title and a descriptive graphic, followed by a slide stating the Learning Intentions and Success Criteria. All our Pre-K through 8th grade math lessons are constructed in PowerPoint; thus, teachers may modify lessons for their specific classroom needs. Here is an example introduction slide and the subsequent Learning Intention and Success Criteria introduction for a lesson on Symmetry.
Each lesson ends with a set of slides that reflect the results of the lesson with a "what we learned" summary. Here is the "what we learned" from the Symmetry lesson.
Opportunities to Informally Respond
The instructional component of the lessons is designed to be rich in informal student engagement opportunities (student Opportunities To Respond, or OTRs). These are described in detail by Leahy, Lyon, Thompson, and Wiliam (2005). Our lessons integrate a variety of their techniques to engage students during an instructional lesson.
Our lessons and technology are sequenced to provide teachers with optional OTR points throughout. Additionally, we provide the teacher with a tool to randomly select a student or student team to engage. This tool reduces the chance that one or more students dominate classroom interactions. All these OTR points are optional, and the teacher can skip any of them after weighing their value for the class.
The student or student team selection is controlled by a single click of the button shown below.
Consider this slide and suppose you wish to ask a student or student team the question:
How many lines of symmetry do you see in this object?
You can click the button (circled in red) to randomly select a student or student team in your class. In this case, Francis Chung or Francis's team is chosen (shown in the box outlined in red).
The teacher can provide immediate feedback to Francis or her team and the class by revealing the lines of symmetry one by one until all four lines are shown.
The FORMATIVE-RICH™ instruction is sequenced throughout to provide opportunities for students to informally respond. The randomness provides an equitable allotment of questions across the class. The objective is to generate student engagement and encourage involvement. If clarity of instruction is lacking, these informal engagements will reveal it.
Educators agree that engaging all students during a lesson is essential to learning. Leahy, Lyon, Thompson, and Wiliam (2005) suggest using popsicle sticks with names on them to randomly identify students or teams to engage. With technology, it's as simple as clicking a button to identify individual students or teams.
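For readers curious how such a digital picker might work, here is a minimal sketch in Python. It is purely illustrative (the roster names and class structure are invented, and this is not the actual ALL In Learning implementation); it simply reproduces the popsicle-stick idea: draw names at random, without repeats, until the roster is exhausted.

```python
import random

class StudentPicker:
    """Digital popsicle sticks: pick students at random without repeats."""

    def __init__(self, roster):
        self.roster = list(roster)
        self.remaining = []

    def pick(self):
        # Once everyone has been called on, refill and reshuffle the "cup".
        if not self.remaining:
            self.remaining = self.roster.copy()
            random.shuffle(self.remaining)
        return self.remaining.pop()

picker = StudentPicker(["Francis Chung", "Ava Reyes", "Liam Ortiz", "Mia Chen"])
for _ in range(6):          # six informal OTRs; names recycle after four draws
    print(picker.pick())
```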
Opportunities to Formally Respond
Informal OTR’s do not result in the recording of the outcomes. Formal opportunities in our FORMATIVE-RICH™ lessons do record each of the student’s responses. This provides a data rich trail of all students that is captured effortlessly during teacher-led instruction within the lessons. We provide assessment points where teachers can optionally provide questions to detect student progress up to that point.
The example below shows the entry to each set of assessments. The entry question is always a simple Yes/No question which should provide all students the opportunity to show understanding. Then, based on the teacher’s evaluation of the instruction and student outcomes, additional whole class questions are available to further strengthen or assess the initial student responses.
Below is the initial Yes/No question in the Symmetry lesson.
We can see that 14 of the 15 students responded (student 10 was absent or chose not to respond).
With our student response technology (student response pads and/or student devices), all students can respond free of peer pressure and potential embarrassment. Leahy, Lyon, Thompson, and Wiliam (2005) suggest dry-erase boards or A, B, C, D cards, both of which have some major drawbacks:
- First and foremost, data cannot be recorded for subsequent sharing of student progress.
- Student embarrassment is a major concern, as students can see other students' answers, which produces undue pressure in the classroom.
- Processing the data from 25 students holding up handwritten results or cards can be difficult for the teacher and yields only marginal outcomes.
The use of student response pads to enhance the learning process and specifically to foster engagement during whole-class instruction has been well documented by Radosevich, Salomon, Radosevich, and Kahn (2008). Either student response pads or student devices will eliminate the above concerns, making the OTR rather effortless for students and teachers.
Upon termination of the time allocated for the students to respond, the teacher is provided results of the question as shown below:
Notice that each lesson provides a sidebar of assessments that are optionally available. These are denoted below and are clickable.
At each assessment point, then, the teacher has four questions available to provide opportunities for students to respond and to give effective feedback. Our FORMATIVE-RICH™ lessons typically provide two or three of these assessment opportunities per lesson, and a lesson covers a specific standard, like symmetry.
We deliver private data to the teacher via their smartphone, with the specifics for each student participating in each of the assessments activated in the class session.
Consider the MC4 question below:
We can see how the class performed via the bar chart in the lower left corner of the slide, but we also have private data on our smartphone as shown below.
We see the 2 students that responded A, the student that responded B and the student who responded C. Also, we see the student that didn’t respond.
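To make the bookkeeping concrete, here is a small hypothetical sketch of the kind of tally that sits behind the bar chart and the private smartphone view. The roster IDs, answer log, and field names are invented for illustration and are not the actual cloud schema.

```python
from collections import Counter

# Hypothetical class roster and answer log for one MC4 question.
roster = [f"S{i:02d}" for i in range(1, 16)]           # 15 students
responses = {s: "D" for s in roster}                   # most chose D...
responses.update({"S03": "A", "S07": "A", "S09": "B", "S12": "C"})
del responses["S10"]                                   # student 10 did not respond

tally = Counter(responses.values())                    # data behind the bar chart
non_responders = [s for s in roster if s not in responses]

print("Tally:", dict(tally))            # e.g. {'D': 10, 'A': 2, 'B': 1, 'C': 1}
print("No response:", non_responders)   # ['S10']
```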
The more time devoted to opportunities to respond, the higher student achievement rises (Black and Wiliam, 1998; Hattie, 2009, 2012; Marzano, 2007). Here is how we address this crucial instructional initiative: each set of three dots indicates continued instruction with informal opportunities to respond.
The whole-class questions utilize student response pads and/or student devices and can be deployed in a hybrid mode, with students at home using student devices. In class, students can use a combination of student devices and student response pads.
FORMATIVE-RICH™ lessons live in the ALL In Learning cloud but have attached PowerPoint copies that can be downloaded to a teacher's computer if needed. Of course, the lessons can then be customized if the teacher or district so chooses.
The Payoff: Teacher Feedback
Evidence of learning is present throughout the FORMATIVE-RICH™ lesson structure described above, and it should serve as the basis for teacher feedback. Teacher feedback is an ongoing activity based on whole-class outcomes as well as individual student outcomes. How important is teacher feedback? Hattie's research pegs teacher feedback at an effect size of .7, which is approximately equal to two years of growth in one academic year (Hattie, 2009, 2012; Almarode and Vandas, 2019).
Timing is a key factor in providing effective feedback. We will connect feedback here with whole class instruction; thus, the timing can be almost immediate, and the delivery of the feedback will be universal to all students in the class.
Immediate feedback produces significant gains as learners obtain corrective information in almost real time (Eggen & Kauchak, 2004). It is designed to nurture learning by helping learners close the learning gap (Almarode & Vandas, 2019). Feedback is always focused on the topic at hand, not the learners who are the recipients of the feedback.
In the example below, a portion of the class did not see the viability of more than two lines of symmetry, so the corrective approach is to show the additional lines of symmetry and to emphasize the non-vertical and non-horizontal lines of symmetry that may exist in some objects.
Feedback is timely and directed to the task at hand, not the learners who missed the additional lines of symmetry. Teacher feedback should frequently include focus on effort. There is nothing wrong with not seeing the additional lines of symmetry, but now learners can, with effort and knowledge from the feedback, proceed successfully going forward.
The growth mindset researched and supported by Dweck (2006) indicates that one can raise one's level of achievement through persistence, resilience, and effort. Feedback that includes directives toward those concepts provides a path for learners to follow throughout their learning for life.
Teachers should always search for opportunities, both with the whole class and with individual learners, to add these characteristics to the learners' toolkit. With the rich opportunities to respond provided in the math lessons, and effective feedback, the growth mindset of learners and their achievement progress will flourish.
Hattie’s research (2012) supports that of Dweck, finding that the effect size for effort is .77. Our experiences as teachers also anecdotally support that the students delivering the effort seem to be more successful than those that just “glide along” without the dedication to learning WITH effort.
Small Group Formative Assessment Opportunities
Johnson and Johnson (1975, 1978) have been major proponents of small group and cooperative learning. As the co-directors of the Cooperative Learning Institute at the University of Minnesota, they have been dedicated to raising the level of student achievement through these methods.
Our recommendation is to form small groups of 3-5 students. The learning compatibility of the group must be developed over time by teacher observation. However, it is also important to keep in mind that the groups must be able to cooperate to have a positive learning experience.
Each group is provided with one or more worksheets and one response pad. The screen at the front of the room reflects the teams by response pad number. It also shows the question that each team is currently working on. See the picture below:
As you can see in the picture, Team 3 is currently working on question 7 from the worksheet. Once Team 3 has decided on an answer for question 7 they will enter the answer on the response pad, and the answer grid on the screen will update to question 8. The response pad slot will blink blue to indicate that the response has been received. See the picture below.
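The grid behavior just described (each team advances one question as soon as its response is received) can be sketched as a small state machine. The class below is an illustrative reconstruction, not the actual product code; the team numbers and questions are invented.

```python
class AnswerGrid:
    """Sketch of the on-screen answer grid for team worksheets."""

    def __init__(self, teams, num_questions):
        self.progress = {t: 1 for t in teams}    # question each team is on
        self.answers = {t: {} for t in teams}
        self.num_questions = num_questions

    def receive(self, team, answer):
        q = self.progress[team]
        if q > self.num_questions:
            return                               # team already finished
        self.answers[team][q] = answer           # record the team's response
        self.progress[team] = q + 1              # grid advances to next question

grid = AnswerGrid(teams=[1, 2, 3, 4], num_questions=10)
grid.receive(3, "B")     # Team 3 answers its current question...
print(grid.progress)     # ...and moves on: {1: 1, 2: 1, 3: 2, 4: 1}
```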
These small group activities can deliver formative assessment moments, as the teacher is immediately provided the outcomes of each group or team upon completion of the activity. Thus, class discussion can proceed based on those small group outcomes, with each group providing input on its approach to completing the assessment activity. Classroom discussion sits at an effect size of .82 on Hattie's scale (2012).
Each of the FORMATIVE-RICH™ math lessons is accompanied by a PDF file of the questions associated with that specific subject or standard. The PDF can be easily printed, and an answer key created. Once the answer key is created, the small group activity can proceed as the teacher can quickly start the answer grid with the team rosters.
After recording the results within the cloud, the standard can be attached, and this can be added to the progress tracking history of that standard. One is not restricted to the assessment options in our FORMATIVE-RICH™ math lessons: any set of appropriate math assessment items that can be provided to the teams can be used with the response pads (one per team) to generate response opportunities. We are believers in providing OTRs in a variety of ways as well as providing instant feedback. These feedback opportunities promote new learning and allow teachers to praise the effort of their students.
Willingham (2009) shares a variety of research on cognition and its application to learning and classrooms. Much of his work is quite applicable to mathematics and the application of a variety of cognitive tools, especially as young learners are progressing.
One of his themes is
“Memory is the residue of thought”
The ability to extract facts from long term memory and integrate them into thinking provides a pathway to learning progress. Memorization of specific facts, practice, and integration into problem solving are clear advantages that can be accrued in young learners.
Willingham clearly states that thinking is slow, effortful, and uncertain. Thus, facts from stored memory and the environment can facilitate the utilization of working memory to deliver positive outcomes. Baddeley (2007) is the originator of the idea of working memory and of how facts, in addition to outside information, are utilized in working memory to impact thinking and problem solving.
FORMATIVE-RICH™ lessons are provided as a primary tool for teachers to provide solid instruction, elicit student responses, and provide feedback. Thus, the fundamental instruction is provided, with efficacy, to the whole classroom. However, a student portal is also available which contains the lesson intact with all the assessment items. Thus, in addition to drill and practice systems typically available in schools, the entire instruction lesson, complete with assessments, is available for the student to utilize in a remediation or practice mode. The student outcomes in this mode are also recorded, providing the teacher with additional data points to validate student progress.
The student portal can be configured via team rosters to serve as roundtable learning groups of students with student discussion as the lesson is rehashed, again providing recording of outcomes.
Progress, effort, and resiliency are all important aspects of this FORMATIVE-RICH™ approach for raising student achievement. Outcomes are collected throughout this process, and below is an example of how the outcomes play into the total picture.
Tom shows progress throughout the various assessment opportunities. Progress from all the various formal opportunities is easily captured and saved within the ALL In Learning cloud. Teaching can then proceed with data driving the direction, using class and student outcomes; see Bambrick-Santoyo (2010).
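As a rough illustration of how outcomes might roll up into a progress history for a standard, here is a hypothetical sketch. The record format, standard code, and scores are invented for illustration and do not represent the actual ALL In Learning schema.

```python
# Hypothetical roll-up of one student's outcomes across formal OTRs
# tagged to a standard.
history = [
    {"student": "Tom", "standard": "4.G.A.3", "activity": "Y/N check", "score": 0.0},
    {"student": "Tom", "standard": "4.G.A.3", "activity": "MC4",       "score": 1.0},
    {"student": "Tom", "standard": "4.G.A.3", "activity": "team quiz", "score": 0.8},
]

def progress(records, student, standard):
    scores = [r["score"] for r in records
              if r["student"] == student and r["standard"] == standard]
    return scores, sum(scores) / len(scores)

trend, mean = progress(history, "Tom", "4.G.A.3")
print(trend, round(mean, 2))   # [0.0, 1.0, 0.8] 0.6 -- improving across attempts
```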
All of the above applications of technology support a variety of formative assessment activities and deliver evidence of learning. Evaluating that evidence for each student identifies the progress that we, like many researchers, see as essential to documenting and driving student academic growth.
Mathematics skills need a foundation to grow upon, and a variety of facts must be mastered to bring substantial progress to our young learners. The technology, lesson structure, and formative assessment opportunities described above provide teachers with ample tools to deliver on academic progress while minimizing wasted effort on less important administrative duties.
Almarode, J. & Vandas, K. (2019). Clarity for Learning. Thousand Oaks: Corwin Press.
Baddeley, A. (2007). Working Memory, Thought and Action. London: Oxford University Press.
Bambrick-Santoyo, P. (2010). Driven by Data. John Wiley and Sons.
Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139-148.
Dweck, C. (2006). Mindset: The New Psychology of Success. New York: Random House.
Eggen, P., & Kauchak, D. (2004). Educational Psychology: Windows on Classrooms (6th ed.). Columbus, OH: Prentice Hall.
Gassenheimer, C. (2019). Hattie Says Teacher Clarity Is One of the Top Learning Interventions. Here’s How It Works. Alabama Best Practices Center.
Hattie, J. (2012). Visible Learning for Teachers. New York: Routledge.
Hattie, J. (2009). Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. New York: Routledge.
Johnson, D.W. & Johnson R.T. (1975) Learning Together and Alone. Englewood Cliffs, N.J.: Prentice-Hall.
Johnson, D.W., Johnson R.T., & Scott, L. (1978). The Effects of Cooperative and Individualized Instruction on Student Attitudes and Achievement. Journal of Social Psychology 104, 207-216.
Leahy, S., Lyon, C., Thompson, M., & Wiliam, D. (2005). Classroom assessment: Minute by minute, day by day. Educational Leadership, 63(3), 19-24.
Radosevich, D., Salomon, R., Radosevich, D. M., & Kahn, P. (2008). Using student response systems to increase motivation, learning, and knowledge retention. Journal of Online Education, 5(1), October/November.
Ward, D. (1977). A Computerized Lecture Preparation and Delivery System. Journal of Educational Technology, 6, 1, 21-32.
Willingham, D. T. (2009). Why Students Don’t Like School. San Francisco: Jossey-Bass.
About the authors:
Dr. Darrell Ward pioneered student response pads in both the K-12 and higher education markets beginning in the late 1990s as CEO of eInstruction. Prior to that he taught at the university level at Texas A&M, the University of Mississippi (where he initiated the Computer Science program at Ole Miss in 1973), and the University of North Texas. His 1977 paper (Ward, 1977) was his first foray into utilizing technology as a classroom teaching tool, and with the later introduction of personal computers and projectors it led to the use of student response pads as an "in the moment" teaching tool. He is currently CEO of ALL In Learning, which supports a cloud-based platform for daily student engagement, assessment, feedback and standards-based progress tracking as a student achievement and growth tool. Darrell currently resides in Denton, Texas, enjoying daily walking and weekly golfing.
Holly Sutton is an experienced teacher of 22 years. She teaches at Model Middle School in Floyd County, Georgia. Driven by her desire to impact the lives of her students, she takes pride in providing the best learning environment possible. As a fifth-grade math teacher, her goals are to spark an interest in learning and to help students develop a positive attitude towards the learning process. In addition to her career as a teacher, Holly has been recognized for her success in creating and selling resources on Teachers Pay Teachers, an online marketplace for teachers to exchange instructional materials and access digital teaching tools. Holly currently resides in Rome, Georgia with her husband, two sons, and three dogs.
Plastic pollution is believed to pose serious health risks to wildlife, and a new report presented to governments at the 7th Session of the Meeting of the Parties (MOP7) to the African-Eurasian Migratory Waterbird Agreement (AEWA), held from December 4 to 8 in Durban, South Africa, has shown how migratory waterbirds are affected. However, increasing public awareness and changing habits have the potential to turn the tide.
According to the report, of the 254 species covered by the AEWA, more than 40 per cent have been shown to interact with plastics: 22 per cent contain ingested plastic, 31 per cent were entangled in plastic debris, and 8 per cent use plastic items in their nests.
“The growing scourge of plastic pollution across our planet is affecting waterbirds in many ways. When ingested, it can lead to malnutrition and even starvation. Plastic floating in the oceans, along rivers or stranded along our shorelines and in wetlands can cause injuries, impede mobility and cause birds to drown,” said Jacques Trouvilliez, Executive Secretary of AEWA.
One way in which plastic affects waterbirds is through ingestion. Birds often mistake plastic for food but cannot digest it. Plastic items can become lodged in the digestive tract, either blocking the throat and causing choking, or accumulating and filling the stomach, which can lead to malnutrition and starvation. As an example of this, approximately half of all phalaropes, a small migratory shorebird species, have been found with plastic in their digestive systems. In many cases these birds live in remote locations far from humans, but the accumulation of plastics in the environment means that even they are susceptible.
Another way in which plastics pose a danger to waterbirds is through entanglement. Because plastic does not decompose, floating items such as fishing gear, long filaments and ring-shaped items threaten waterbirds with injuries, impeded mobility and drowning. The number of seabird species affected in this manner has tripled since the mid-1990s. Northern Gannets, an AEWA-listed seabird species, are particularly prone to entanglement with old fishing gear as they follow fishing vessels at sea.
Lastly, microplastics are an increasing contributor to the scourge of plastic pollution. These originate either from small plastics such as microbeads or from degradation of larger items. Microplastics are commonly ingested by prey species which are then consumed in turn by predatory birds.
The report "Waterbirds and Plastics", the first of its kind presented to governments at AEWA MOP7, notes a geographical bias towards Europe and South Africa in studies concerned with waterbirds and plastic pollution.
"It is important that studies elsewhere in the African-Eurasian Flyway fill the knowledge gaps, so we better understand the full impact of plastics on all waterbirds within the geographic range of AEWA. At the same time, we cannot wait for these studies to fill knowledge gaps but must act now to address the problem globally and collectively across all the world's flyways," said Jacques Trouvilliez, Executive Secretary of AEWA.
AEWA MOP7 has taken the actions recommended in the report and included them in a resolution on seabird conservation for Parties to consider for adoption on the final day of the meeting. This will hopefully generate action across much of the African-Eurasian flyway in addressing the issue of plastic pollution. The announcement was also made during plenary that the theme for World Migratory Bird Day 2019 will be "Protecting birds from plastic pollution".
Andrew de Blocq, a penguinologist from the conservation NGO BirdLife South Africa who attended AEWA MOP7, said: “Plastic pollution is a growing threat to waterbirds and seabirds, and we as conservationists are extremely concerned about it. However, the silver lining is that people around the world are fast becoming aware of the consequences for wildlife, and we are seeing a culture change associated with a movement away from single-use plastic items and toward a more conscious, eco-friendly lifestyle.”
The Doctor Who Protected Jews from Disease and Destruction
The Fellowship | January 29, 2020
During the Holocaust, Dr. Rudolf Weigl helped save Jewish lives in multiple ways — ways both active and proactive — through his medical work fighting disease, as well as his covert work sheltering Jews.
While gas chambers and firing squads were the Nazis’ preferred methods of murder, disease also claimed millions of lives during World War II. Chief among these epidemics was typhus, an especially nasty bacterial infection that claimed far too many Holocaust victims, including Anne Frank.
And so we come to the story of Dr. Weigl, a biologist who came up with a vaccination against typhus and a plan to save the Polish Jews he had long befriended and defended.
Combating — and Catching — an Epidemic
A well-known and well-liked professor of biology in Lwow, Poland, Dr. Rudolf Weigl focused on developing a vaccine for typhus, as none had yet been created. At last, in the late 1930s, after he had contracted typhus himself (the very disease he was trying to eradicate), Weigl's vaccine worked.
The first beneficiaries of the new vaccine were actually Christian missionaries. Belgian missionaries stationed in China were given the vaccine from 1936 to 1943. While it proved effective in this case and in others, Weigl’s medicine was dangerous to make, as seen by his own illness. In later years, other, safer vaccines would be developed. But Dr. Weigl’s was the first.
Standing for — and Saving — Jewish People
But the fight against disease was not the only war Dr. Weigl waged during his long career. He also resisted the anti-Semitism that was common in Poland, even before the Nazis invaded. Many of his friends and peers were Jewish, as were many of his students. When Poles acted out in hateful ways against their Jewish countrymen, Dr. Weigl protested loudly, branding the anti-Semites "barbarians."
When Hitler did overtake Poland, the doctor, who was of German descent, refused to cooperate with the Nazis' insistence that he embrace his inner Aryan. And then his words became actions.
The Nazis, wanting a typhus vaccine for their own, forced Dr. Weigl to set up a production plant. That very Nazi-run facility became a shelter for hundreds of Jews. You see, Dr. Weigl hired his Jewish friends and colleagues to work for the very Germans who wanted them dead! Working in the plant meant these Jewish Poles were not deported to death camps as they waited for liberation.
But even more lives were saved because of Dr. Weigl’s work. Throughout the war, as the Jews of Lwow and Warsaw were held to await deportation and extermination, thousands of doses of vaccine were smuggled into the ghettos. Each of these vaccinations saved a Jewish person from succumbing to the incredibly infectious disease that often strikes during times of war and starvation — both of which were happening during the Holocaust. And that is how a Christian doctor saved Jewish lives in a multitude of ways, acts for which he was named Righteous Among the Nations by Yad Vashem.
Asthma is a chronic lung disease that inflames and narrows the airways. This makes the airways swollen and very sensitive. They tend to react strongly to certain substances that are breathed in.
The exact cause of asthma isn’t known. Researchers think a combination of factors (family genes and certain environmental exposures) interact to cause asthma. Different factors may be more likely to cause asthma in some people than in others.
Asthma affects people of all ages, but it most often starts in childhood. In the United States, more than 22 million people are known to have asthma. Nearly 6 million of these people are children. Among children, more boys have asthma than girls. But among adults, more women have the disease than men. It's not clear whether or how gender and sex hormones play a role in causing asthma. Most, but not all, people who have asthma have allergies.
— Source: National Heart, Lung and Blood Institute
In the upcoming film Interstellar, Earth’s soil has become so degraded that only corn will grow, driving humans to travel through a wormhole in search of a planet with land fertile enough for other crops. In the real world things aren’t quite so dire, but degraded soil is a big problem—and one that could be getting worse. According to a new estimate, one factor, the buildup of salt in soil, causes some $27.3 billion annually in lost crop production.
“This trend is expected to continue unless concrete measures are planned and implemented to reverse such land degradation,” says lead author Manzoor Qadir, assistant director of water and human development at the United Nations University Institute for Water, Environment and Health. Qadir and his colleagues published their findings October 28 in Natural Resources Forum.
Irrigation makes it possible to grow crops in regions where there is too little rainfall to meet the plants’ water needs. But applying too much water can lead to salinization. That’s because irrigated water contains dissolved salts that are left behind when water evaporates. Over time, concentrations of those salts can reach levels that make it more difficult for plants to take up water from the soil. Higher concentrations may become toxic, killing the crops.
Qadir and his colleagues estimated the cost of crop losses from salinization by reviewing more than 20 studies from Australia, India, Pakistan, Spain, Central Asia and the United States, published over the last two decades. They found that about 7.7 square miles of land in arid and semi-arid parts of the world is lost to salinization every day. Today some 240,000 square miles—an area about the size of France—have become degraded by salt. In some areas, salinization can affect half or more of irrigated farm fields.
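A quick back-of-envelope check, using only the figures quoted above, shows the scale involved. This is a rough sketch: the inputs are the article's own numbers, and the derived values are approximate.

```python
# Back-of-envelope check using the article's figures (illustrative only).
daily_loss_sq_mi = 7.7          # land newly degraded per day
total_degraded_sq_mi = 240_000  # total salt-degraded area to date
annual_cost_usd = 27.3e9        # estimated annual cost of lost production

annual_loss = daily_loss_sq_mi * 365                     # ~2,800 sq mi/year
cost_per_sq_mi = annual_cost_usd / total_degraded_sq_mi  # ~$114,000/sq mi/year

print(f"{annual_loss:,.0f} sq mi newly degraded per year; "
      f"${cost_per_sq_mi:,.0f} lost per sq mi of degraded land per year")
```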
Crop production is hit hard on these lands. In the Indus Valley of Pakistan, for instance, salinization causes an average decline in rice production of 48 percent, compared to normal soils in the same region. For wheat, that figure is 32 percent. Salty soils also cause losses of around $750 million annually in the Colorado River basin, an arid region of the U.S. Southwest.
“In addition to economic cost from crop yield losses, there are other cost implications,” Qadir says. These include employment losses, increases in human and animal health problems and losses in property values of farms with degraded land. There could be associated environmental costs as well, because degraded soils don’t store as much atmospheric carbon dioxide, leaving more of the greenhouse gas to contribute to climate change. The total cost of salt degradation, therefore, could be quite a bit higher than the most recent estimate.
Salt damage can be reversed through measures such as tree planting, crop rotation using salt-tolerant plants and implementing drainage around fields. Such activities can be expensive and take years, but the cost of doing nothing and letting lands continue to degrade is worse, the researchers argue. “With the need to provide more food, feed, and fiber to an expanding population, and little new productive land available, there will be a need for productivity enhancement of salt-affected lands in irrigated areas,” they write.
On a cautiously hopeful note, Qadir adds that the issue is reaching the ears of policy makers: “Amid food security concerns, scarcity of new productive land close to irrigated areas and continued salt-induced land degradation have put productivity enhancement of salt-affected lands back on the political agenda,” he says. “These lands are a valuable resource that cannot be neglected.”
Now that we've covered the basic theory of pumps and turbines, we'll do some examples. Next we have to consider: if we have a pump and we know its characteristics, and we have a particular system, how do we match the system to the characteristics of the pump? For example, in this case we have a pump pumping water from a lower reservoir to an upper reservoir, and in this system we have friction losses and minor losses for a valve here.

I apply the Bernoulli equation between the two free surfaces: surface one, the lower reservoir, and surface two, the upper reservoir. Here is our full Bernoulli equation, which looks like this. But as usual, we can make a lot of simplifications. The pressure at the surface of the lower reservoir is atmospheric, which is zero, so that goes out. Similarly, the pressure at the upper reservoir is zero, so that goes out. If the reservoirs are large, the velocities are very small, so V one and V two go out. We don't have a turbine in the system, so that goes out. Rearranging that equation, I can write it like this: hP, the head added by the pump, is equal to the elevation difference between the two free surfaces, z two minus z one, plus the summation of all of the head losses in the system.

I can rewrite this equation in this form: hf is f L over D, V squared over two g. The minor losses are terms which look like this: loss coefficient times V squared over two g. Combining those two equations, I get hP is z two minus z one plus K times Q squared, because each of these terms here is proportional to velocity squared. However, the velocity is proportional to the flow rate, so this is proportional to Q squared. So all of those last terms I can replace by this single term here, K Q squared, where K accounts for the friction loss in the pipe and the minor losses.

If I plot that equation out, I get a curve which looks like this. As the flow rate through the system goes to zero, the K Q squared term goes to zero, and I get to this point here, which is the static elevation head z two minus z one. Then, as the flow rate increases, the head loss increases by Q squared, and I get a graph which looks like this. If the pump characteristic curve of flow versus head looks like this, then my operating point occurs where these two graphs intersect, right here. And that is the operating point for this system. If something changes, for example the head characteristics change, then I might get a second curve here, and my operating point could move to here.

And in this case, the energy grade lines for the system look like this. The energy grade line starts off coincident with the free surface here. If there's no energy loss up to the pump, we get to this point. The pump increases the head and the flow, so it kicks up the head by an amount equal to hP. Then we have some friction loss here, a sudden drop across the valve, and then another loss back to the free surface here. And the geometry of this energy grade line is exactly the same as these equations here, because the energy grade line is simply a graphical representation of the Bernoulli equation.
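(Editor's sketch.) The operating point just described, where the pump curve crosses the system curve hP = Δz + KQ², can be found numerically. The pump-curve and system-curve coefficients below are invented for illustration; only the method, a simple bisection on the head difference, is the point.

```python
# Illustrative operating-point solver: intersect a made-up pump curve
# h_pump(Q) = a - b*Q^2 with the system curve h_sys(Q) = dz + K*Q^2.
a, b = 60.0, 0.5      # assumed pump-curve coefficients (head in ft, Q in cfs)
dz, K = 20.0, 0.3     # assumed static lift and lumped loss coefficient

def residual(Q):
    return (a - b * Q**2) - (dz + K * Q**2)   # zero at the operating point

lo, hi = 0.0, 20.0
for _ in range(60):                           # bisection on [lo, hi]
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid

Q_op = 0.5 * (lo + hi)
print(f"Q = {Q_op:.2f} cfs, head = {dz + K * Q_op**2:.1f} ft")  # ~7.07 cfs, 35 ft
```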
Power is gamma Q hP divided by 550, in horsepower. And in this case we're given that the flow is 600 cubic feet per minute, or ten cubic feet per second. For pump A operating at 600 cubic feet per minute (here's 600 cubic feet per minute), the head is 30 feet, so the power, substituting in 62.4 for the specific weight of water, etcetera, is 34 horsepower. Similarly, pump B is at this point here; the head is approximately 35 feet. So, substituting in, the power for pump B is 40 horsepower. So, out of those solutions, the correct one is A.

Another one. Characteristic curves of head and efficiency for a centrifugal pump are shown. At a water flow rate of 700 cubic feet per minute, the power required by the pump is most nearly which of these? So in this case we get, from the graph at 700 cubic feet per minute, which converts to 11.7 cubic feet per second (here's 700 cubic feet per minute right here), the head hP is 20 feet, and the efficiency, which is this curve right over here, is approximately 85%, or 0.85. For a pump, efficiency is defined this way: eta is gamma Q hP divided by W dot. And in this case we're asked for the power required by the pump, in other words, the power required by the shaft to drive it. So that is given by W dot is gamma Q hP divided by eta, and now I can substitute in the numbers: 62.4 times 11.7 times 20, divided by the efficiency, which is 0.85, is equal to 17,180 foot-pounds per second. Divide by 550 and we get 31.2 horsepower. So the closest answer is D.

Another one. In this case we're pumping water from a canal at the bottom here to a reservoir at a higher elevation. The pipe is 0.1 meters in diameter and has a total length of 60 meters, and the friction factor is given as 0.02. If the flow velocity is three meters per second, and minor losses are negligible, the head added by the pump is most nearly which of these? So the starting point is the usual Bernoulli equation, from the lower reservoir at one to the upper reservoir at two, given here. And with the usual approximations, the pressures in the reservoirs are zero, the velocities are negligible, so those terms go out. We don't have a turbine, so that goes out. We're told to neglect minor losses, so that term goes out. So we're left with hP, the head added by the pump, equal to z two minus z one plus hf. Which physically states that the head required by the pump is the head required to lift water from this elevation to this elevation, in other words, z two minus z one, plus overcome the energy losses due to friction between the two points, which is hf; that is the physical meaning of that equation. So hf is f L over D V squared over two g. Now we can plug in the numbers: z two minus z one is 140 minus 100, the friction factor we're given is 0.02, and the other variables are given. And the answer is 40 plus 5.5, or 45.5 meters. The closest answer is B.
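(Editor's sketch.) The arithmetic in these worked examples can be checked in a few lines, using exactly the numbers and formulas quoted in the transcript.

```python
gamma = 62.4                           # lb/ft^3, specific weight of water

# Pump A: Q = 10 cfs at a head of 30 ft
print(gamma * 10 * 30 / 550)           # ~34 hp

# Power required at 700 cfm, head 20 ft, efficiency 0.85
Q = 700 / 60                           # ~11.7 cfs
print(gamma * Q * 20 / 0.85 / 550)     # ~31 hp, closest answer D

# SI head example: hP = (z2 - z1) + f*(L/D)*V^2/(2g)
f, L, D, V, g = 0.02, 60.0, 0.1, 3.0, 9.81
print((140 - 100) + f * (L / D) * V**2 / (2 * g))   # ~45.5 m, answer B
```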
So here's our Bernoulli equation. Starting point is the same as usual. So firstly, when this is operating as a turbine, in other words, the flow is from the upper reservoir from the lower reservoir, the equation from the same approximations that we've made previously, sums to that. ht, the head extracted, is z one minus z two minus the summation of all the head losses in the system. So, the energy grade line then looks like this. It starts off here, and we have a head loss here in the pipeline, let's say to the turbine, then a sudden drop across the turbine and we move to the lower reservoir. So, the head drop here, the head extracted by the turbine is that. And this we'll say is the summation of the head losses in the system. So, in this case we have the summation of the head losses is equal to the elevation difference minus the head extracted by the turbine. And the elevations are 220 and 140 meters. We've given that the head removed is 70 meters so therefore the summation of the head losses due to friction etcetera is ten meters. Now, if we reverse this, so that the flow is from the lower reservoir to the upper reservoir. In other words, this is acting as a pump. Then our equation becomes this. hP is equal to z one minus z two plus hL. And the grade line in this case starts off here, but now the head is added here. The head goes up and then drops down by that amount. It looks something like that. So here is the head added by the pump and here are the the energy losses summation hL. So it looks like the same equation except now we have plus hL here. So that is equal to z one minus z two is 220 minus 40 and the head losses we're assuming is, are the same as when it acts as a turbine because the flow and everything is the same. So therefore that is equal to 90 meters and the closest answer is D. So this finishes our discussion of pumps and turbines.
Periodontal disease is a serious condition where the bone and supporting structure around the tooth is damaged and lost over time.
This is due to a long-standing bacterial infection. If left untreated, teeth can become loose, painful and abscessed, and usually need extraction.
Periodontal disease is the term given to a range of teeth and gum conditions from Gingivitis to more serious Advanced Periodontitis.
Periodontal disease increases risk factors for systemic medical conditions like diabetes, heart attacks and strokes and is one of the more common diseases of Australian adults.
Unfortunately, it is painless in the early stages. Unless you are diagnosed early and start taking steps to manage the disease, it can be hard and expensive to control. Once diagnosed, you should not ignore periodontal disease, as it can lead to more serious problems.
Who is at risk of periodontal disease?
The following risk factors have been identified:
- high blood pressure
- family history of periodontal disease
- poor general health
How can I tell if I have periodontal disease?
Common Signs and Symptoms include:
- Bleeding, inflamed, spongy gums
- Wobbly teeth
- Teeth changing shape, moving, or gaps appearing between teeth
- Bad breath
- Bad taste in mouth
- Fit of the denture has changed
- Bright red gums
- Tender gums
- Bleeding gums while brushing, flossing or eating
- Receding gums
We test every single patient for periodontal disease as standard protocol. We use a periodontal probe, which measures the depth between the tooth and the gum.
How do you get periodontal disease?
Usually, periodontal disease begins because a person hasn’t managed to remove enough of the bacteria (we call it plaque or a biofilm) to stop their immune system overreacting.
If your immune system flares up, inflammation sets in and the build-up hardens around your teeth. This build-up of plaque, calculus and bacteria can become more aggressive and do more damage, attacking bone and gums unless it’s reversed.
Treatment depends on how severe your case of periodontal disease is. For the mildest form, you might just need to change the way you clean your teeth, have a thorough check up and clean, or perhaps use different toothpaste.
For long-standing or more severe disease, our dental hygienist may need to perform an especially thorough type of cleaning (not just a scale and clean).
There are particular ways to brush, floss or "pickster" your teeth to reverse gum disease. Your dental hygienist or oral therapist will show you how to care for your teeth and gums (including brushing and flossing), and schedule a follow-up appointment to make sure the disease has not progressed.
If you have advanced periodontal disease, you may need a referral to a specialist, or surgery to halt its spread.
If Shakespeare is to be believed, what a rose is called is not so important, and while it makes for a poetic line it cannot be said to be true of biology (or really any scientific discipline). Names are so important there is a whole discipline dedicated to the identification and classification of all biologic organisms. This is the system of taxonomy.
Historically, identification of species was based on observation of physical attributes and/or demonstrable behaviour and infectious patterns. But with the onset of molecular techniques, we are able to re-evaluate previous conclusions on taxonomic ranks. This is particularly relevant for identifying species within a single genus.
One such case is the parasitic roundworms, the Ascaris species – A. suum and A. lumbricoides. There is a growing debate over whether they are one species or two.
Both are intestinal worms which are morphologically very similar. Usually, the host that the worm is isolated from is the species identifier: A. suum infecting pigs and A. lumbricoides infecting humans. A. suum has been noted for zoonotic potential but is still considered primarily a pig worm. But with genetic analyses now being carried out on partial and whole genomes of these worms, a question is raised: is there enough distinction to warrant separating A. suum and A. lumbricoides as species?
Based on the original observed data and on emerging genetic data, there were four theories proposed in a paper published by Daniela Leles and colleagues.
- They are distinct, valid species
- A. suum is an ancestor of A. lumbricoides originated by an allopatric event of host switching
- A. lumbricoides is an ancestor of A. suum
- A. suum and A. lumbricoides are conspecific (belong to the same species) and therefore occur as variants of a single polytypic species
There is no obvious answer
For them to be distinct species there would need to be a genetic barrier where no genetic crossover would occur. If they’re the same species, there would be full and free genetic flow. However, the current data shows a much more complex situation with some gene flow and some gene independence.
Serena Cavallero and colleagues performed a study looking at Ascaris worms from endemic and non-endemic areas and found three genotypes, which they called 'As' (the A. suum banding pattern), 'Al' (A. lumbricoides) and 'Ah' (a hybrid genotype). While most 'Al' worms originated in human hosts, and 'As' in pigs, there were cases where the genotyped worm did not fit the expected host. The hybrid genotype was found in both hosts and contained banding patterns from both A. lumbricoides and A. suum. This does suggest some sort of gene flow occurring. The genotype distribution varied depending on whether worms came from endemic or non-endemic areas, with endemic areas showing more species blurring.
The Cavallero group concluded that the presence of gene flow and the apparent absence of a genetic barrier imply that the fourth theory, as stated by Leles, is the most probable scenario in this debate. These conclusions are also drawn by Leles and colleagues in another paper.
Others conclude that there are two distinct species. Anderson's 1997 paper accounts for the genetic variation based on geography and intraspecies variation, and cautions against over-interpretation when using single genetic markers in closely related taxa. In a previous study, this team proposed that populations of Ascaris with the same mitochondrial genotype were linked to the individual host far more than chance can account for.
Does it Matter?
So while the evidence is piling up, the answer is not (yet) proven one way or the other. Since the definition of a species is a human-created tool for convenience, it may well be that we never manage to make these two Ascaris worms fit into our neat little species box.
However, the attempt should still be made to settle this debate, because Shakespeare's line about the rose, while poetic, is not realistic. Aside from the importance of a clear and accurate taxonomic system, the answer has real-life consequences for humans and for pigs, whether the focus is health or economics.
A. lumbricoides is on the WHO Neglected Tropical Diseases list and is considered a major health concern, infecting approximately 819 million people, while A. suum is a burden on agriculture and the economy. We have two major problems being dealt with individually when perhaps what we have is one problem that requires a joint solution. We can treat humans for ascariasis till the cows come home, or in this case, the pigs, but when they do come home, then potentially, we have re-infected humans.
We currently assume A. suum just has zoonotic capabilities, but perhaps this parasite is a bigger threat than previously thought; or perhaps not. Have we been wrongly identifying A. lumbricoides as A. suum? Knowing if it is one species that infects all, or two species that can infect humans and pigs, or two species only infecting their respective host is important. The answer will affect how we tackle the human and agricultural aspects of ascariasis.
The case of "ascariasis: one species or two" is a wonderful jumping-off point for a broader question: what other species are we missing or overestimating due to incorrect taxonomy? Hopefully, the answer lies in interrogating the genomes, because the answers do matter for diagnostics, epidemiology, health interventions and policies.
Music is an essential part of the culture, and each country has unique traditional instruments representing its identity. One such instrument is the Pipa, widely regarded as one of China’s most important stringed instruments. It is a pear-shaped, four-stringed instrument that has been around for over 2000 years. Let’s explore the history and evolution of the Pipa, its role in ancient Chinese artistry, and its contribution to classical and contemporary Chinese music.
A Part of the traditional Chinese instruments
The Pipa (Chinese: 琵琶 |pípá| – "pee-paa") is a traditional Chinese musical instrument, also known as the Chinese lute or Chinese guitar. Its distinctive onomatopoeic name derives from two basic plucking techniques: 'pí' (琵), meaning strike outwards, and 'pá' (琶), meaning strum inwards. The Pipa has over 2,000 years of history, developing from pentatonic to full scales, and has been an integral part of Chinese music. It has four strings and a pear-shaped body, and the frets on this Chinese instrument, similar to those on a guitar, each mark a different pitch.
The Pipa first appeared in China during the Qin Dynasty (221-206 BC) and peaked in popularity during the Tang Dynasty (618-907). The Pipa is an incredibly versatile instrument: it can be used as a solo instrument or performed with a small ensemble or large orchestra. As a result, it is one of the most frequently used instruments in traditional Chinese folk music.
“King” of The Chinese Musical Instruments
At its height of favor some thirteen hundred years ago, the Pipa was a regular at the Tang Dynasty court. Among the most difficult instruments to master, the Pipa has many playing techniques and uses all ten fingers. With its unique, soul-stirring tone and sound effects, the Pipa can express incredible depth in music, from delicate melodies to exciting epic songs of the battlefield. It is thus considered the king among ancient Chinese string instruments.
History of the Pipa
The Pipa is a beautiful string instrument of Chinese culture. In the Tang Dynasty, its popularity in Chinese culture surged, and it became an important part of folk music. Although its origins appear very early in history (in the second century BC), the Tang Dynasty saw it at the height of its influence, when it was used for everything from imperial performances to folk operas.
The Pipa is mentioned frequently throughout Chinese history. Other pear-shaped instruments in the same family besides the four-stringed Pipa include the Ruan, Qin Pipa, Hu Pipa, and Quxiang Pipa.
Origin of the name
The name of the Pipa, consisting of two syllables, ‘pí’ (琵) and ‘pá’ (琶), is an onomatopoeic representation of two plucking techniques used to play this instrument.
Though some recent Pipa documents have speculated that the name might have derived from the Persian lute, the barbat, there are several indications that this may be wrong, not least the difficulty of recognizing any similarity between the pronunciations of barbat and pipa.
For Chinese people, it is natural to use everyday language to describe similar sounds made by nature or created by humanity. Characters were created intelligently, following certain principles; for example, the characters chosen for an object should accurately represent its image, with pronunciation far from randomly given. The Chinese characters used to name an object usually depict the image of the thing as well as convey its meaning, in a self-explanatory manner. So the name Pipa (琵琶) comes from the ancient way of saying "forward and backward plucking".
Some documents suggest that the term "Pipa" was commonly used for plucked stringed instruments in ancient times. However, that is not the case. China also has several other plucked stringed instruments with their own names, such as the Ruan, Yueqin, Liuqin, and Sanxian. These instruments, like the Pipa, are part of the lute family.
Several Versions in The Same Family of Pipa
Qinhanzi (Qin Pipa – 秦琵琶): a four-stringed lute with a skin-covered circular body, a straight neck, and 12 frets. It originated from a rattle drum during Shihuangdi's reign (238–210 BC). By the time of Han Wudi (141–87 BC), the body was composed of wood and had 12 frets.
Ruan (Ruanxian – 阮咸): named after the musician Ruan Xian, one of the Seven Sages of the Bamboo Grove. It has a long neck and thirteen frets. The Ruan is held vertically and plucked with the fingers during performance. A version of this Chinese lute is the famous Yueqin, a short-necked lute.
Quxiang Pipa (曲項琵琶): the direct predecessor of the modern Pipa. This musical instrument traveled from Persia along the Silk Road to western China in the fourth century AD. It had a curved neck, four strings, four frets, and a pear-shaped wooden body with two crescent-shaped sound holes. The Pipa player usually held it sideways and played it with a plectrum during the performance.
Throughout the Sui and Tang dynasties, the Quxiang Pipa became the king of Chinese musical instruments and has remained the dominant kind ever since. It was used as a solo instrument by virtuosos and in folk bands and orchestras for royal entertainment. During this time, the instrument's holding position changed from horizontal to upright, the plectrum was replaced with fingernails, and the number of frets on the body was increased.
Structure of Pipa
The Chinese Pipa (the modern Pipa) has a pear-shaped wooden body and four strings. It is also deeply symbolic of ancient Chinese beliefs. Its size of three feet and five inches reflects the three realms (Heaven, earth, and man) and the five elements (metal, wood, water, fire, and earth), while its four strings correspond to the four seasons.
The Pipa is constructed entirely out of wood; the front is made from the wood of the Chinese parasol tree, while the back is usually made from mahogany or sandalwood. The Pipa's modern strings are made of nylon or steel, but in ancient times beef tendon or silk strings were used.
The Chinese lute is an incredibly versatile instrument. It has a highly expressive sound that can range from vibrant and lively to quiet and tranquil, and its tones are also incredibly diverse. High notes are bright, middle notes are gentle, and low tones are thick. All these notes together create a stunningly wide variety of sounds that can express everything from intense battle cries to the peaceful sound of nature on a moonlit night. Hence, the Pipa lends itself to conveying both epic grandeur and deep serenity.
Chinese Pipa Playing Technique
The Pipa is a Chinese string instrument with a unique range of expression, making it capable of creating various musical styles. It can utilize playing techniques similar to those of Western string instruments, and its wooden body allows for drumming and twisting of the strings to create a cymbal-like sound. This fusion of techniques creates memorable, enchanting melodies and vividly conveys thrilling emotions and narratives.
One of the most versatile instruments, the Pipa can produce a wide variety of sounds, and over the centuries more than 60 different techniques have been developed. Pipa playing is distinguished by exceptional finger dexterity and virtuosic programmatic effects. Techniques such as rolls, slaps, pizzicato, tremolo and strumming allow Pipa players to create unique and exciting sound effects: for example, striking the Pipa's wooden body produces a sound similar to percussion instruments, and twisting the strings creates a cymbal-like effect. That makes it one of the most special and challenging of the antique instruments.
Pipa playing technique has changed over time. The instrument was initially held horizontally, like a guitar, and players used a sizeable triangular plectrum to pluck the twisted silk strings. As playing developed, musicians began using their fingertips instead of the plectrum, which allowed them to reach higher levels of excellence on the Pipa.
Pipa in Ancient Chinese Artistry
The Pipa is an important Chinese musical instrument. It has existed for over 2,000 years. Stories about the Pipa and its sound have made their way into many of China’s famous literary works.
The story of Wang Zhaojun
Wang Zhaojun's journey to the Xiongnu Empire has been memorialized in Chinese history. Wang Zhaojun (王昭君) is one of the "Four Great Beauties" of ancient China. Emperor Yuan sent her to marry Huhanye, Chanyu of the Xiongnu Empire, to improve relations with the Han dynasty through marriage.
On her journey, she took only her pipa (Chinese lute) and rode a yellow horse to symbolize her noble identity. As she rode northward, sorrow overwhelmed the scene, and Wang Zhaojun played a sorrowful melody on her Pipa. Her music moved even the wild geese, which stopped flapping their wings to listen to the beautiful Chinese lute music and fell to the ground.
The poem “Pipa Xing”
Bai Juyi was a famous Chinese poet of the Tang Dynasty. Among his many renowned works is "Pipa Xing" (琵琶行: "Song of the Pipa" or "Ballad of the Lute"), a poem of 616 Chinese characters that captures the imagery of a pipa performance and its music.
This poem tells the story of a famous pipa player and the writer’s sympathy with her life. The poem is an exploration of life’s sorrows, as well as the beauty of music and art. It captures the sadness of a departed love through vivid descriptions and metaphors while expressing admiration for the musician’s skill.
大絃嘈嘈如急雨 Thick strings clatter like splattering rain,
小絃切切如私語 Fine strings murmur like whispered words,
嘈嘈切切錯雜彈 Clattering and murmuring, meshing jumbled sounds,
大珠小珠落玉盤 Like pearls, big and small, falling on a platter of jade.
The Pipa can be found in more than just literature. It is also featured in many Chinese paintings and sculptures, providing insight into how it was used in the past. For example, the Mogao Caves and Yulin Caves in western China hold an ancient collection of mural artwork, dubbed a cultural treasure of the Silk Road, in which the Pipa appears again and again.
The Pipa’s Contribution to Classical Chinese Music
The Pipa is traditionally referred to as the "king" of Chinese instruments. It has long been used to accompany folk songs, opera, and recitations of poetry and literature. Over time, playing the Pipa has evolved, with new techniques developed by generations of pipa players.
Different regions of China have their own style of playing the Pipa, and numerous schools have developed. During the Qing dynasty, two major schools of playing emerged—the Northern and Southern schools. These two schools gave rise to the five main schools associated with the solo tradition. Each of the five distinct playing styles is named after its place of origin: the Wuxi school (無錫派), Pudong school (浦東派), Pinghu school (平湖派), Chongming school (崇明派) and Shanghai (Wang) school (汪派). This range of styles has created a library of compositions that shows both versatility and emotion, and playing this complex instrument brings those pieces to life with skill and feeling.
Some famous historical compositions for the Pipa
- 十面埋伏 – Ambush on Ten Sides
- 夕陽簫鼓/春江花月夜 – Flute and Drum at Sunset / Flowery Moonlit River in Spring
- 陽春白雪 – White Snow in Spring Sunlight
- 龍船 – Dragon Boat
- 大浪淘沙 – Big Waves Crashing on Sand
- 昭君出塞 – Zhaojun Outside the Frontier
- 霸王卸甲 – The Warlord Takes Off His Armour
- 高山流水 – High Mountains Flowing Water
- 月兒高 – Moon on High
- 龜茲舞曲 – Dance along the old Silk Road
- 九連鈺 – Nine Jade Chains
- 彝族舞曲 – Dance of the Yi People
One piece of classical Chinese pipa music, "Ambush on Ten Sides," deserves a closer look:
This piece portrays the decisive battle between the forces of Chu and Han in 202 B.C. at Gaixia (southeast of today's Lingbi County, Anhui Province), and provides a general overview of the combat. The Han army set up a ten-sided ambush formation, destroying the Chu army and compelling Xiang Yu to commit suicide on the bank of the Wujiang River. In the style of a musical narrative, the piece depicts the furious, dramatic battle scenes and the dismal, sorrowful plight of the vanquished Xiang Yu, and concludes with the victor's triumph. A wide range of Pipa performance techniques is used to create a majestic and impassioned tale that is vivid in imagery, stirring in melody, and ultimately exhilarating.
Pipa in Contemporary Music
The Pipa is a very versatile instrument. First used in traditional Chinese music, it has since become part of other genres. It can be performed solo or in orchestras, and it is known for a unique sound that complements other instruments and creates new experiences for listeners. As the world becomes more open to cultural integration, the Pipa has found a place in styles ranging from classical music to modern genres like pop, rock, punk, and even EDM.
The Pipa is still used in contemporary music, and its popularity has spread beyond China’s borders. In recent years, many musicians have started experimenting with combining traditional Chinese instruments like the Pipa with Western classical symphony instruments to create a unique sound. Shen Yun is one of the most notable examples of this fusion.
A Unique Combination Of Ancient Chinese Musical Instruments & Western Classical Symphony
The Shen Yun Symphony Orchestra is a group of musicians specializing in playing traditional Chinese instruments alongside Western classical symphony instruments. They have performed in many countries worldwide, including the United States and Canada. The Pipa is one of the key instruments in their performances, and its unique sound helps create a beautiful, harmonious blend of Eastern and Western music.
Shen Yun's music is an impressive blend of East and West: the Eastern tradition focuses on expressing inner feeling, while the Western emphasizes arrangement and harmony. This musical fusion transports audiences back in time to explore China's rich culture and history. Their performances often feature elaborate costumes, stunning visuals, and a combination of music and dance. Bringing in traditional Chinese instruments like the Pipa has infused its own subtle flavors into the symphony, creating a truly unparalleled experience for the audience.
The Pipa is a beautiful and unique instrument that has been part of Chinese culture for centuries. Its history and contribution to ancient Chinese artistry and classical music are significant, and its role in contemporary Chinese music is just as important. Through the Shen Yun Symphony Orchestra and other contemporary musicians, the Pipa has gained favor worldwide, and its unique sound continues to capture people’s hearts everywhere. Moving forward, it’s clear that the Pipa will remain an essential part of Chinese culture and a vital instrument in the world of music.
|
News stories are typically written in a straightforward, objective style. However, the tone of your story will depend on the publication you're writing for. For example, a news story for a trade publication will likely differ from one for a local newspaper.
When writing a news story, you’ll want to start with the most important information first – this is known as the inverted pyramid. The lead, or opening paragraph, should give the reader the who, what, where, when and why of the story.
The five Ws rule is a basic guideline for anyone writing a news story. It helps to ensure that all the important information is included in the story. The rule goes like this: every news story should answer the questions Who, What, When, Where, and Why.
The five Ws rule is a helpful guideline for writing a news story, but it is not set in stone. There may be times when one or more of the questions cannot be answered, or when the answer is not relevant to the story. In such cases, the question can be omitted.
The lead paragraph of a news story is the most important part of the article. It must provide the reader with a clear idea of what the story is about and why it is important. The lead paragraph should be concise and to the point, offering only the essential facts. It should also be interesting enough to encourage the reader to continue reading.
Once you have the basic structure of your news story down, you’ll need to add additional paragraphs as needed to flesh it out. Remember to keep your writing tight and focused – each paragraph should serve a specific purpose in advancing the story.
If you're covering a breaking news event, be sure to include the latest developments as they happen. And if you're writing a feature story, be sure to include quotes from experts or other relevant sources to add depth and context to your piece. As you write, keep in mind the basic principles of news writing: who, what, when, where, why and how. Answering these questions will help ensure that your story is complete and informative.
When writing a news story, it is often helpful to include a quote from a source. This helps to add credibility to the story and allows readers to hear directly from the people involved.
Including a quote can also help to add personal touches to a story, making it more relatable for readers. When choosing a quote, make sure it is relevant to the story and helps to illustrate the main points. If you are having trouble finding a quote that works, try reaching out to the source directly. They may be able to provide you with a statement that can be used in the article.
After you have written your news story, it is important to check for any grammar or spelling mistakes. This will help to ensure that your article is accurate and free of errors. If you are unsure about anything, consider asking a colleague or friend to proofread your work for you. Once you have made any necessary corrections, your news story should be ready to publish!
At the end of the story, summarize what happened in a few sentences. This will help readers remember the main points of the story and give them a quick overview of what happened. Include key quotes from sources to help illustrate the main points of the story.
When writing your news story, be sure to: lead with the most important information, answer the five Ws, keep each paragraph tight and focused, include relevant quotes, and proofread before publishing. If you follow these tips, you'll be well on your way to writing a great news story!
|
Chemical processes to test wastewater are usually carried out in laboratories on samples gathered from the field as well as from wastewater treatment plants. Such chemical analyses often involve the use of chemicals such as potassium and iodine. In this article I discuss a do-it-yourself Tidy's test, involving both these chemicals.
One of the tests that can be easily employed to test wastewater is Tidy's test. Potassium iodide is used in this test, which requires an acid solution. In the laboratory test, the oxidising agent in the solution undergoes a reduction reaction, being reduced by the potassium iodide. This reaction liberates iodine, and the amount of iodine released can then be measured by titration, as the reactions below show.
Below are some of the reactions used to understand the chemical composition of wastewater and to test it.
2KMnO4 + 10KI + 8H2SO4 -> 6K2SO4 + 2MnSO4 + 5I2 + 8H2O
In this reaction, potassium permanganate (KMnO4) reacts with potassium iodide and sulfuric acid to produce two sulfates, namely potassium sulfate (K2SO4) and manganese(II) sulfate (MnSO4), along with iodine in large quantities.
2Na2S2O3 + I2 -> Na2S4O6 + 2NaI
This reaction follows the previous one. Here, sodium thiosulfate (Na2S2O3) reacts with the iodine liberated earlier to form sodium tetrathionate (Na2S4O6) and sodium iodide (NaI).
I2 + starch -> starch-iodine complex (blue)

This last reaction depicts the action of iodine on starch. If iodine is present, it reacts with the starch to form a deep blue starch-iodine complex; when all the iodine has been consumed, the blue colour disappears, marking the endpoint of the titration.
When analysts test wastewater in laboratories, they conduct experiments in clean environments without having to worry about contamination of the effluent. But at wastewater treatment plants, the water is a mixture of industrial effluents and sewage. This is a potentially hazardous mix, as it contains both biological pathogens and chemical and metallic waste from industry. Samples taken to laboratories have to be filtered to remove the contaminants that can interfere with these reactions.
It is important to identify the chemicals in wastewater and to then plan a proper treatment considering the combined effect of the effluents.
|
When chemicals react, they can either lose or gain electrons. Oxidation states are numbers that show us whether an atom has lost or gained electrons in a reaction. If an atom loses electrons, its oxidation number goes up, but if it gains electrons, its oxidation number goes down. We use these numbers to see how many electrons an ion has lost or gained compared to the uncombined element. If the element lost electrons, it has a positive oxidation state, and if it gained electrons, it has a negative oxidation state.
Let's take magnesium oxide (MgO) as an example. When magnesium and oxygen react, two electrons from magnesium transfer to oxygen. This means that magnesium has lost two electrons, so it has an oxidation state of +2. Conversely, oxygen has gained two electrons, so it has an oxidation state of -2. When we write oxidation states, we put the sign before the number. So we would write +2 or -2, for instance. Understanding these oxidation states can help us better understand chemical reactions and the behavior of different elements. This is especially important for transition elements, which can have varying oxidation states.
When atoms bond with other atoms, the number of electrons available for bonding determines its oxidation state. In transition metals, both the 4s and 3d electrons are available for bonding. This unique feature gives transition metals something called variable oxidation states.
Variable oxidation states are important because they determine the charge on an atom under certain conditions. To better understand them, let's imagine that only ionic bonding is possible. If an atom can lose up to three electrons when bonding with another atom, it has three possible oxidation states, depending on the atom it bonds to. Now, let's dive deeper into why transition metals have variable oxidation states. We will explore the redox potential of a transition metal going from a high to a lower oxidation state, which depends on the pH and the ligand. We will also trace the oxidation states of vanadium through a reaction between vanadate(V) and zinc. Finally, we will carry out the 'silver mirror' test, or Tollens' test. Understanding variable oxidation states is important for understanding how transition metals behave and interact with other elements.
Transition metals have the unique ability to use both the 4s and 3d electrons for bonding. This is because there is not a significant energy gap between the 3d orbital and the 4s orbital. In fact, the 4s electrons are lost first. In transition metals, the ionisation energy required to remove the third electron (on the 3d sub-shell) is not much larger than the ionisation energies required to remove the first two (on the 4s sub-shell).
To better understand this concept, compare the successive ionisation energies of vanadium (a transition metal) with those of calcium and scandium (non-transition metals). The successive ionisation energies of vanadium are closer together than those of scandium and calcium. This small difference between the ionisation energies makes it easy for transition metals like vanadium to have variable oxidation states: not much energy is required to remove a small number of electrons.
Common oxidation states of transition metals include, for example, vanadium (+2 to +5), chromium (+2, +3 and +6), manganese (+2, +4, +6 and +7), iron (+2 and +3) and copper (+1 and +2).
The atoms with lower oxidation states can exist as simple ions, like Mn2+ or Mn3+. The higher oxidation states can only exist as complex ions covalently bonded to significantly electronegative elements like oxygen, for example MnO4-. The higher oxidation states are oxidising, while the lower ones are reducing. Examples include the MnO4- and Cr2O72- ions, which make excellent oxidising agents; on the other hand, Cr2+ and Fe2+ are reducing.
Transition metals are not the only elements on the periodic table with variable oxidation states. Most elements have variable oxidation states! Oxidation states can be crucial in helping a chemist balance an equation, but it can be tricky to figure out the oxidation state of an element with variable oxidation states, so chemists follow several rules to make it easier. The most straightforward rule is that the oxidation states of all the elements in a neutral compound must sum to zero, and must sum to the overall charge in a polyatomic ion (a short sketch of this sum rule follows the list below). Some other oxidation state rules that chemists follow are listed here; you may already be familiar with some of them.
- In their compounds, metals have positive oxidation states; a metal cannot have an oxidation state below zero.
- The oxidation state of a simple ion is the same as its ionic charge.
- The lowest oxidation state for a non-metal is its group number minus 8 (for example, -2 for oxygen in group 6).
The redox potential of a substance is a measure of its tendency to undergo reduction or oxidation reactions. It tells us how easily a species accepts or donates electrons. In the context of redox reactions, we typically only consider the transfer of electrons and write what is called a half-reaction. For example, the half-reaction below shows the reduction of zinc 2+ ion (Zn2+) to zinc (Zn) with an oxidation state of zero by accepting two electrons:
Zn2+ (aq) + 2e- ⇌ Zn (s) Eº = -0.76 V
The Eº value of -0.76 V represents the electrical potential of this half-reaction. It is the difference between the demand for electrons in the reduction half-reaction and the tendency to lose electrons in the oxidation half-reaction. This value is also referred to as the electrode potential or reduction potential, and it shows how easily a substance can be reduced.
Half-reactions with positive Eº values go towards the right, indicating a greater tendency to undergo reduction, while half-reactions with negative Eº values move to the left, indicating a greater tendency to undergo oxidation.
The redox potential of transition metal ions when going from a high oxidation state to a lower one is influenced by two factors: the pH and the ligand. The pH can affect the redox potential by altering the concentration of H+ ions, which can interact with the metal ions and affect their oxidation state. The ligand, which is a molecule or ion that binds to the metal ion, can also affect the redox potential by affecting the stability of the metal-ligand complex and the ease with which the metal ion can be reduced or oxidized.
pH is a measure of the acidity or basicity of a solution. It is a logarithmic scale that ranges from 0 (very acidic) to 14 (very basic). A neutral solution has a pH of 7. When transition metal ions in an aqueous solution go through a redox reaction, it usually involves hydrogen ions. In other words, it requires acidic conditions.
For example, the half-reaction for the reduction of the manganate(VII) ion in an acidic solution is:

MnO4- (aq) + 8H+ (aq) + 5e- ⇌ Mn2+ (aq) + 4H2O (l) Eº = +1.51 V

The positive Eº value indicates that the process moves towards the right, and manganese gets reduced from the +7 to the +2 oxidation state. An excess of acid is used to ensure this reaction goes to completion.
On the other hand, if a neutral solution like water is used, the manganese is only reduced to the +4 oxidation state. The Eº value of this reaction is much lower, as the manganese ions are less willing to accept electrons in a neutral solution. The half-reaction for this is:

MnO4- (aq) + 2H2O (l) + 3e- ⇌ MnO2 (s) + 4OH- (aq) Eº = +0.59 V
How do ligands affect the redox potential of transition ions? Notice the Eº values of the half-reactions below. They separately show the reduction of the nickel(II) ion and the hexaaminenickel(II) ion. What can you conclude about the ligands by looking at the Eº values?
[Ni(H2O)6]2+ (aq) + 2e- ⇌ Ni (s) + 6H2O (l) Eº = -0.26 V
[Ni(NH3)6]2+ (aq) + 2e- ⇌ Ni (s) + 6NH3 (aq) Eº = -0.49 V
By comparing the above two equations, we can state the following:
Eº becomes more negative when ammonia replaces the water ligands, because the ammonia ligands are more firmly attached to the nickel ion than the water ligands are. The Eº of the hexaaqua nickel(II) complex is the more positive of the two, so its equilibrium lies slightly further to the right: the aqua complex is the more easily reduced.
Vanadium exhibits variable oxidation states, including vanadium (II), (III), (IV), and (V). Among these, vanadium(IV) is the most stable oxidation state. By carrying out a redox reaction between vanadate(V) ions and zinc in an acidic solution, we can form vanadium species in different oxidation states.
To carry out this reaction, we start with ammonium vanadate, which dissolves in hydrochloric acid to produce a yellow-coloured solution. When we add zinc to the solution, vanadium gets reduced from +5 to +2. This reduction is evident from the colour change of the solution, which goes from yellow to blue to green to violet, indicating the presence of different vanadium species in solution. It is worth noting that the higher oxidation states of vanadium do not exist as simple ions such as V5+(aq) or V4+(aq). Instead, they exist as complex species with ligands that stabilize the high oxidation states. The colour changes observed in the solution reflect the different vanadium species formed during the reduction of vanadate(V) ions by zinc.
The vanadium(II) ions quickly oxidise in the air when you remove the zinc, because they are unstable. If you experiment, you may briefly observe a pale green colour as the solution goes from yellow (+5) to blue (+4). What you see is not a new oxidation state, but a mixture of the two colours as the vanadium(V) ions get reduced to vanadium(IV).
Now that you know what is happening between the molecules, consider the half-reactions that show the stages of the reaction between vanadate(V) and zinc (writing VO2^+ for the vanadium(V) species and VO^2+ for the vanadium(IV) species, to keep the charges distinct).

Stage 1

2VO2^+ (aq) + 4H+ (aq) + Zn (s) ⟶ 2VO^2+ (aq) + 2H2O (l) + Zn2+ (aq)

You get this ionic equation by putting the two half-reactions for vanadate(V) and zinc together and balancing them. Notice how the half-equation for zinc moves towards the left because of its negative Eº value.

VO2^+ (aq) + 2H+ (aq) + e- ⇌ VO^2+ (aq) + H2O (l) Eº = +1.00 V

Zn (s) ⇌ Zn2+ (aq) + 2e- Eº = -0.76 V

Find out how to write an ionic equation in Balancing Equations!

Stage 2

VO^2+ (aq) + 2H+ (aq) + e- ⇌ V3+ (aq) + H2O (l) Eº = +0.34 V

Stage 3

V3+ (aq) + e- ⇌ V2+ (aq) Eº = -0.26 V

We can also use tin instead of zinc as the reducing agent, although tin (Sn2+/Sn, Eº = -0.14 V) is a weaker reducing agent and cannot drive the final step from V3+ to V2+, so it only takes vanadium down to the +3 state. A step proceeds whenever the vanadium couple has the more positive Eº value, and the reaction then runs in the direction where the vanadium species is reduced.
The 'silver mirror' test is another reaction that uses the variable oxidation states of a complex ion. Read on to discover why we call it that!
Variable oxidation state of dichromate ions
As mentioned previously, transition metal ions can have a variety of oxidation states. We have gone through this for vanadium, so now we will be exploring dichromate ions.
Firstly, let's explore how the dichromate(VI) ion, Cr2O72− can be reduced to Cr3+ and Cr2+ ions. This can be done using zinc and a dilute acid such as sulphuric acid or hydrochloric acid. Cr2O72− is orange and when it is reduced using zinc and a dilute acid, it can form Cr3+ (green), which can be further reduced to Cr2+ (blue). This is represented with the following equations.
This equation shows the reduction from +6 to +3.
Cr2O72- + 14H+ + 3Zn → 2Cr3+ + 7H2O + 3 Zn2+
This shows the reduction from +3 to +2.
2Cr3+ + Zn → 2Cr2+ + Zn2+
Now we shall explore how dichromate ions can be produced from the oxidation of Cr3+. This is done using hydrogen peroxide in alkaline conditions which is then followed by acidification. When a transition metal in a low oxidation state is in an alkaline solution, it is more easily oxidised than when it is in an acidic solution. This can be seen in the following equation:
[Cr(H2O)6]3+ (aq) → [Cr(OH)6]3- (aq) in excess sodium hydroxide (NaOH)
The reduction half-equation:

H2O2 + 2e- → 2OH-

The oxidation half-equation:

[Cr(OH)6]3- + 2OH- → CrO42- + 3e- + 4H2O
This then leads to the following equation:
2[Cr(OH)6]3- + 3H2O2 → 2CrO42- + 2OH- + 8H2O
Finally, let us explore how the dichromate(VI) ion, Cr2O72−, is related to the chromate(VI) ion. Chromate can be converted to dichromate using this equilibrium:
2CrO42- + 2H+ ⇌ Cr2O72- + H2O
It is important to note that this reaction is not a redox reaction, because the oxidation number of chromium remains +6 throughout; it is instead an acid-base reaction.
CrO42- is a yellow solution and can be turned into Cr2O72-, an orange solution by adding dilute sulphuric acid. To change from the orange solution to the yellow solution, we need the addition of sodium hydroxide.
Reduction of Tollens' reagent
We carry out a test to distinguish between an aldehyde and a ketone by using the complex ion diamminesilver(I). This test, also called Tollens' test, is one of the ways we identify the functional group in an unknown organic compound. Diamminesilver(I), [Ag(NH3)2]+, is known as Tollens' reagent after the German chemist Bernhard Tollens.
Learn more about ketones and aldehydes in Aldehydes and Ketones.
The test involves reducing Tollens' reagent, which contains silver(I) nitrate, to metallic silver. In order to carry out the test, you must first prepare Tollens' reagent. We prepare it for each test since Tollens' reagent is unstable in solution.
To prepare Tollens' reagent:
- Add some sodium hydroxide to silver nitrate to produce silver(I) oxide, a brown precipitate.
- Add concentrated ammonia solution to redissolve the silver(I) oxide as diamminesilver(I).
Now the Tollens' reagent is ready for Tollens' test. To carry out the test:
- Add a few drops of the unknown organic compound to Tollens' reagent, then gently warm in a water bath.
- If the unknown compound is a ketone, you will observe no change in the colourless solution.
- If the unknown compound is an aldehyde, you will get a grey silver precipitate: the Ag+ ions have been reduced to Ag, while the aldehyde has been oxidised to a carboxylate ion.
We can observe the silver precipitate when the substance contains an aldehyde because aldehydes are reducing agents and reduce the silver(I) nitrate to metallic silver. The Tollens' test is also called the 'silver mirror' test because of the silver coating formed inside the test tube.
Tollens' test can also be used to detect sugars like glucose. Historically, this reaction was used to coat mirrors with silver.
You can find the half-equations for the reduction of Tollens' reagent by an aldehyde below, as well as the net ionic equation.
Reduction of Tollens' reagent
Ag(NH3)2+ + e- ⟶ Ag + 2NH3
Oxidation of aldehyde
RCHO + 3OH- ⟶ RCOO- + 2H2O + 2e-
Net ionic equation
2Ag(NH3)2+ + RCHO + 3OH- ⟶ 2Ag + 4NH3 + RCOO- + 2H2O
Redox titrations with potassium permanganate
To carry out a redox titration using potassium permanganate, we start by preparing a solution of the reducing agent we want to analyze in a flask. We then add a few drops of dilute sulfuric acid to the flask to make the solution acidic. This is important because potassium permanganate is only a strong oxidizing agent in acidic conditions.
Next, we fill a burette with a standard solution of potassium permanganate. We slowly add the potassium permanganate solution to the flask containing the reducing agent solution, swirling the flask gently to ensure complete mixing.

As we add the potassium permanganate solution, we observe the colour change of the solution. At the beginning of the titration, the solution will be the colour of the reducing agent. As we add the potassium permanganate, the solution will gradually change colour until it reaches a pale pink colour. This pale pink colour indicates that all the reducing agent has been oxidized by the potassium permanganate. We stop adding the potassium permanganate solution at the point where the pale pink colour persists for at least 30 seconds. This point is known as the endpoint of the titration.

To calculate the concentration of the reducing agent, we use the balanced chemical equation for the reaction between the reducing agent and potassium permanganate, and the volume and concentration of the potassium permanganate solution used in the titration.
Additionally, the redox potential of a substance can be used to predict spontaneous reactions. A spontaneous reaction occurs when the redox potential of the oxidizing agent is greater than the redox potential of the reducing agent. Transition metal ions can also form complex ions with ligands, which can affect their oxidation state and reactivity. The coordination number of a complex ion refers to the number of ligands attached to the central metal ion. The colour of a complex ion is determined by the energy difference between the d-orbitals of the metal ion when different ligands are attached. This is known as crystal field theory. Overall, understanding the variable oxidation states of transition elements is important in various fields such as chemistry, biochemistry, and materials science.
Why do transition metals have variable oxidation states?
Transition metals have variable oxidation states, because their 3d and 4s electrons are available for bonding. The small difference between the ionisation energies makes it easy for transition metals like manganese to have variable oxidation states. Not a lot of energy is required to remove a small number of electrons from transition metals.
Which element has variable oxidation states?
Transition metals show variable oxidation states. However, transition elements are not the only elements in the periodic table that have variable oxidation states. In fact, most elements have variable oxidation states.
What is a variable oxidation state?
A variable oxidation state is a number that determines the charge on an atom depending on certain conditions.
What are the oxidation states of transition metals?
The oxidation states of transition metals vary. Vanadium, for example, has four common oxidation states: vanadium (II), (III), (IV) and (V). Of these, the +4 oxidation state is generally the most stable.
What are the 7 oxidation states?
The 7 oxidation states are +1, +2, +3, +4, +5, +6, +7.
|
There are many birth injuries that can lead to impaired motor function, including dystonia disorder.
Dystonia disorder is neurological in nature: it is believed to stem from faulty communication between the brain and the nerves, most often resulting from a lack of oxygen to the part of the brain that controls movement.
It manifests itself with slow, writhing, involuntary movements or twisting as muscles move against each other. It can cause distorted postures, and symptoms can range from mild to severe enough to interfere with the performance of tasks including tying shoes, writing and getting dressed, among other things. Some symptoms can be controlled by medications, but in severe cases, lifelong complications are possible.
Dystonia Disorder is Complex, Varied
There are several different types of dystonia disorder, which causes the muscles in body parts impacted by the disorder to twist or move in a painful manner.
The different types include:
- Generalized dystonia disorder, which impacts the motor function of all or most of the body, causing simultaneous movements.
- Focal dystonia, which impacts only a particular part of the body.
- Multi-focal dystonia, which causes uncontrolled movements of at least two areas of the body.
- Segmental dystonia, which affects at least two connected parts of the body.
- Hemi-dystonia, which is dystonia of muscles on the same side of the body, such as the face, arm and leg.
- Cervical dystonia, which impacts the muscles of the neck, can cause the head and neck to snap back and forth painfully and can pull the chin toward the shoulder, making movement of the head more difficult.
- Blepharospasm, which is a type of focal dystonia, causes uncontrollable blinking or spasms of the muscles around the eye. Both symptoms can cause functional blindness despite a healthy eye.
- Cranio-facial dystonia, a type of dystonia that impacts the muscles of the face, head and neck. Cranio-facial dystonia can cause problems with speech and controlling one’s facial expression as well as difficulty chewing and swallowing.
- Task-specific dystonia, another type of focal dystonia that is triggered by an activity that is repeated regularly, such as spasms that are triggered when a child is writing a paper for school, resulting in writhing of the hand and forearm.
Symptoms of Dystonia Disorder
While distorted posture or repetitive movements triggered by involuntary muscle contractions are the most common symptoms associated with dystonia disorder, they are not the only ones.
Dystonia can also lead to difficulty using the hands or maintaining a grip, which makes eating, brushing one’s teeth, writing and other tasks difficult, and difficulty controlling the muscles of the mouth and tongue, which can also impact eating as well as communication skills.
During times of stress, symptoms generally tend to worsen, and contractions become more obvious and occur at more regular intervals.
What Causes Dystonia Disorder?
In most cases, dystonia disorder is linked to birth injuries that cause a lack of oxygen to the brain, killing off brain cells in the region of the brain that controls motor function, or an injury that causes hemorrhaging of the brain.
Hemorrhaging, which can be the result of the improper use of assisted birthing devices including forceps or vacuum extractors, can cause pressure to the brain when blood becomes trapped between the skull and brain, leading to brain damage.
Injuries that alter the communication pathways of nerve cells, infant stroke, brain tumors, a traumatic brain injury, oxygen deprivation caused by problems with the placenta or umbilical cord or a reaction to medication can also cause dystonia. In some cases, the disorder is genetically linked, and can be inherited.
How is Dystonia Disorder Treated?
While there is no cure for dystonia, there are some treatment options that may relieve symptoms enough so that symptoms are controlled.
Physical therapy can control some spasms, especially those related to task-specific dystonia, and speech therapy can help improve communication skills. Also, because some types of dystonia tend to worsen later in the day, when levels of dopamine are depleted, medications that replenish lost stores of dopamine can ease symptoms, in some cases completely. (Ref. 1)
Injections of botulinum toxin (the toxin responsible for botulism) can prevent the release of the neurotransmitter acetylcholine, causing a flaccid paralysis that can control spasms.
In some cases, deep brain stimulation – the implantation of a device that acts similarly to a pacemaker, and sends messages to the areas of the brain that control movement, potentially easing spasms – or surgery are suggested.
Dystonia Disorder Legal Options
While some cases of dystonia disorder are genetic in nature, others are the result of birth injuries that are caused by negligence on the part of medical professionals who failed to prevent brain hemorrhaging or a lack of oxygen to the brain at some point during delivery.
Dystonia can be debilitating and can impact a child's life significantly, making communication, learning and other activities difficult. Therapy and other treatments can also be costly, even with a good insurance plan.
If your child developed a form of dystonia disorder due to a birth injury, an injury attorney can help you recover damages resulting from the injury.
While financial compensation will not restore your child’s physical health, it will help you provide the necessary medical care to help improve his or her quality of life.
“David delivered more than expected for me in every way”
“Working with David was a pleasure. From the first time I spoke to him I felt at ease with him as he seemed more concerned with my well-being before all. He was always keeping me updated on everything every step through the process and was always available for me if I had a question. David delivered more than expected for me in every way and I would recommend him to anyone. A real class act with your best interest at heart!” – Frank T.
|
Amblyopia, or "lazy eye," is the most common cause of visual impairment in children. It happens when an eye fails to work properly with the brain. The eye may look normal, but the brain favors the other eye. In some cases, it can affect both eyes.

Causes include:
- Strabismus, a disorder in which the two eyes don't line up in the same direction
- Refractive error in an eye, when one eye cannot focus as well as the other because of a problem with its shape; this includes nearsightedness, farsightedness, and astigmatism
- Cataract, a clouding in the lens of the eye

It can be hard to diagnose amblyopia; it is often found during a routine vision exam. Treatment for amblyopia forces the child to use the eye with weaker vision. There are two common ways to do this. One is to have the child wear a patch over the good eye for several hours each day, over a number of weeks to months. The other is with eye drops that temporarily blur vision: each day, the child gets a drop of a drug called atropine in the stronger eye. It is also sometimes necessary to treat the underlying cause, which could include glasses or surgery. (NIH: National Eye Institute)
|
The Native Americans are the indigenous peoples of the United States. It is believed that they travelled to Alaska during the last ice age and gradually migrated across the land to Mexico and other areas. They were long referred to as Indians. The migration of people from Eurasia to America happened through Beringia, the land bridge that connected the two continents.
Some indigenous peoples were hunter-gatherers, while others practiced agriculture and aquaculture. Many societies relied on agriculture, while others mixed farming with hunting and gathering. Indigenous Americans still occupy many parts of the Americas, in countries such as Colombia, Peru, Mexico, Bolivia, Ecuador, and Guatemala. Many have maintained elements of indigenous cultural practice to varying degrees, including subsistence practices, religion, and social organization.
There has been a rise of indigenous movements in recent years. These are groups that have organized themselves to preserve their culture, for example the Coordination of Indigenous Organizations of the Amazon River Basin and the Indian Council of South America. Similar movements have been formed in the United States and Canada, for example the International Indian Treaty Council.
Indigenous movements have been recognized internationally, with the United Nations adopting the Declaration on the Rights of Indigenous Peoples. The rise to power of leftist governments in Ecuador, Paraguay, Venezuela and Bolivia, where Evo Morales became the first person of indigenous descent elected president, has made indigenous movements more powerful.
In conclusion, the history of indigenous people shows their increasing awareness and drive to seek justice, both as groups and as influential individuals. As a result, it has played a crucial role in settling contemporary lawsuits regarding Native Americans.
|
A hemoglobin abnormality is a variant form of hemoglobin that is often inherited and may cause a blood disorder (hemoglobinopathy).
Hemoglobin is the iron-containing protein compound within red blood cells that carries oxygen throughout the body. It is made up of heme, which is the iron-containing portion, and globin chains, which are proteins. The globin protein consists of chains of amino acids, the "building blocks" of proteins. There are several different types of globin chains, named alpha, beta, delta, and gamma. Normal hemoglobin types include:
- Hemoglobin A (Hb A): makes up about 95%-98% of hemoglobin found in adults; it contains two alpha (α) chains and two beta (β) protein chains.
- Hemoglobin A2 (Hb A2 ): makes up about 2%-3% of hemoglobin found in adults; it has two alpha (α) and two delta (δ) protein chains.
- Hemoglobin F (Hb F, fetal hemoglobin): makes up 1%-2% of hemoglobin found in adults; it has two alpha (α) and two gamma (γ) protein chains. It is the primary hemoglobin produced by the fetus during pregnancy; its production usually falls shortly after birth and reaches the adult level within 1-2 years.
Genetic changes (mutations) in the globin genes cause alterations in the globin protein, resulting in structurally altered hemoglobin, such as hemoglobin S, which causes sickle cell, or a decrease in globin chain production (thalassemia). In thalassemia, the reduced production of one of the globin chains upsets the balance of alpha to beta chains and causes abnormal hemoglobin to form (alpha thalassemia) or causes an increase of minor hemoglobin components, such as Hb A2 or Hb F (beta thalassemia).
Four genes code for the alpha globin chains, and two genes (each) code for the beta, delta, and gamma globin chains. (For general information on genetic testing, see The Universe of Genetic Testing.) Mutations may occur in either the alpha or beta globin genes. The most common alpha-chain-related condition is alpha thalassemia. The severity of this condition depends on the number of genes affected. (See Thalassemia for more information.)
Mutations in the beta gene are mostly inherited in an autosomal recessive fashion. This means that the person must have two altered gene copies, one from each parent, to have a hemoglobin variant-related disease. If one normal beta gene and one abnormal beta gene are inherited, the person is heterozygous for the abnormal hemoglobin, known as a carrier. The abnormal gene can be passed on to any children, but it generally does not cause symptoms or significant health concerns in the carrier.
If two abnormal beta genes of the same type are inherited, the person is homozygous. The person would produce the associated hemoglobin variant and may have some associated symptoms and potential for complications. The severity of the condition depends on the genetic mutation and varies from person to person. A copy of the abnormal beta gene would be passed on to any children.
If two abnormal beta genes of different types are inherited, the person is "doubly heterozygous" or "compound heterozygous." The affected person would typically have symptoms related to one or both of the hemoglobin variants that he or she produces. One of the abnormal beta genes would be passed on to children.
Red blood cells containing abnormal hemoglobin may not carry oxygen efficiently and may be broken down by the body sooner than usual (a shortened survival), resulting in hemolytic anemia. Several hundred hemoglobin variants have been documented, but only a few are common and clinically significant. Some of the most common hemoglobin variants include hemoglobin S, the primary hemoglobin in people with sickle cell disease that causes the red blood cell to become misshapen (sickle), decreasing the cell's survival; hemoglobin C, which can cause a minor amount of hemolytic anemia; and hemoglobin E, which may cause no symptoms or generally mild symptoms.
|
26 September 2019 | Lab Chat
We might associate turbulence almost exclusively with air travel and bumpy flights, but the phenomenon is visible as a daily occurrence in many facets of our lives. Formula One racing car drivers hate it, since the aerodynamic drag it incurs can mean the difference between winning and losing. White-water rapids kayakers and rafters love it, since it constitutes the main thrill of the experience.
Meanwhile, most people barely notice it when they turn on the tap to brush their teeth in the morning, when the direction and regularity of the flow becomes unstable as the pressure increases. But despite the fact turbulence is such an everyday event, we still know remarkably little about how this enigmatic process works. Determined to right that wrong, Professor Nader Masmoudi at New York University Abu Dhabi is seeking to investigate the heart of this commonplace conundrum.
The mystery of turbulence is inextricably tied to the Navier-Stokes Equation, which is often dubbed the most difficult and complex formula in the world of science. Named for the French and Irish scientists who developed it in the first half of the 19th century, the formula is supposed to explain how fluids will react according to Isaac Newton's laws of motion.
However, things are not nearly so simple as all that. In the first place, the Navier-Stokes Equation has many different forms, including a conservation form, a convective form and a constitutive form, among others. Moreover, the equation is non-linear, meaning that tiny changes to its inputs can have massive ramifications for the end product. This makes the formula highly unreliable: putting in slightly different properties can yield wildly inconsistent results.
Scientists have been wrestling with how to achieve more dependable results from the equation ever since it was first written, largely without success. In the 1990s, a pair of scientists tried to dispense with the unpredictable variables inherent in the equation and use a more stripped-down version of it to determine the velocity of the River Nile, but only came out with the impossible answer of 330,000 km per hour. Clearly, something had gone wrong.
Now, Professor Masmoudi is hoping to find out exactly what that something was. By using technology in the laboratory alongside cutting-edge techniques in computing and mathematics, he and his team hope to shed some fresh light on this centuries-old problem. In this way, they aim to discover how and why fluids stop flowing smoothly when turbulence arises.
Most crucially, the team are also hoping to investigate how this phenomenon can impact real-world situations, such as traffic flow at rush hour, coastal erosion from waves or even human relationships. “Stability is a word we use in our daily life, not only in math and physics”, he explained. “What’s interesting is the kind of questions we ask in math or physics, you can ask in social sciences, political sciences, or even relationships.”
While the challenge is certainly a difficult one, Professor Masmoudi is perhaps the most qualified man on the planet to undertake it. In 1992, he became the first ever Arab teenager to win a gold medal at the International Mathematical Olympiad, before going on to study turbulence and winning various prizes for his work in doing so. Next step: putting Navier-Stokes to bed, once and for all.
|
Surface Hill Hydraulic Gold Sluicing Pit, Smythesdale
Statement of Significance
Last updated on - May 11, 1999
The Surface Hill Hydraulic Gold Sluicing Pit consists of a large excavation containing a network of pebble dumps, tail races and drainage adits. Water for sluicing would have been delivered to the site by water races and then directed at the gold bearing deposits. The technology was introduced into Victoria in about 1855. The main period for hydraulic sluicing at Surface Hill was the 1870s.
The Surface Hill Hydraulic Gold Sluicing Pit is of historical, archaeological and scientific importance to the State of Victoria.
The Surface Hill Hydraulic Gold Sluicing Pit is historically and scientifically important as a characteristic and well preserved example of an early form of gold mining. Gold mining sites are of crucial importance for the pivotal role they have played since 1851 in the development of Victoria.
Hydraulic sluicing of alluvial gold deposits is a key ingredient in understanding gold mining technology as it was employed in country where water was plentiful and perennial.
The Surface Hill Hydraulic Gold Sluicing Pit is archaeologically important for its potential to yield artefacts and evidence which will be able to provide significant information about the cultural history of gold mining and the gold seekers themselves.
|
Scientists continue to peer deeper into the galaxy in the search for exoplanets that may host alien life, but our best chance of finding extraterrestrial life might actually be right here in our own Solar System. A new study based on data collected by NASA’s Cassini orbiter is providing some tantalizing clues as to what is hiding beneath the thick ice sheets on Saturn’s moon Enceladus, and it’s incredibly exciting.
The study, which was published in Nature, reveals the presence of complex organic compounds within the moon's vast ocean, and while it's not definitive proof that life exists deep within the moon, it's a massive step towards that potential discovery.
Enceladus is incredibly special. It's a tiny orb, much smaller than the Earth, but it's covered in a thick sheet of ice that encases a massive ocean of liquid water. We know this because of the large fissures near its poles, particularly near the moon's southern end, where water sprays out into space from between the cracks. Deep within the moon, the water is warm, and that's a pretty big deal when it comes to searching for life.
NASA's Cassini spacecraft snatched a sample of those particles during its mission, and this new research is based upon the data it sent back. Scientists now say that the water contained carbon-rich material, suggesting some pretty complex organic processes happening near the center of the moon. This makes Enceladus the only other body in the Solar System known to have all the prerequisites for life, as far as we understand them.
Researchers have long hypothesized that superheated hydrothermal vents exist near a rocky core at the center of Enceladus, creating the pressure that ultimately drives the massive plumes of water that spew out into space. These new findings support that theory, and since we already know that organisms can survive on the energy of such vents in Earth's oceans, in the absence of sunlight, it's entirely possible that the same may be happening inside Saturn's icy moon.
In order to actually detect the presence of life deep within the moon, we’re going to need to make a trip to Enceladus. At the moment, no missions have been greenlit, but several scientific bodies are working towards that goal. Late last year, a Russian billionaire decided he wanted to fund a trip to Saturn’s moon in order to take additional samples of the water being shot out into space, but confirming the presence of life — and explaining what it looks like and how it functions — is going to require a more sophisticated approach that, for the moment, is still just a dream.
|
Weekly Influenza Reports
Reports are archived by week ending date
April: 6 | 13 | 20 | 27
March: 2 | 9 | 17 | 23 | 30
February: 2 | 9 | 16 | 23
Flu is an upper respiratory illness caused by a virus. Symptoms of flu can include fever, coughing, sore throat, runny or stuffy nose, headaches, body aches, chills and fatigue. Flu is not the same as a bad cold. It can be dangerous. Flu can cause high fever and pneumonia, and make medical conditions worse.
In the United States, about 36,000 people (mostly over the age of 65) die each year from the flu.
The flu is spread from person to person through coughs and sneezes. Sometimes people get the flu by touching something with the flu virus on it and then touching their mouth, nose or eyes. This can happen at home, work, church or school -- anywhere that we share close space or touch the same things, like chairs and tables, doors, and shopping carts.
- Stay home.
- Avoid contact with others.
- Wait 24 hours after your fever has gone away before going out.
- Get lots of rest.
- Drink plenty of fluids, especially water.
- When you cough or sneeze, cover your nose and mouth with a tissue, or with your upper sleeve or the inside of your elbow.
- Avoid smoking and drinking alcohol.
- Wash your hands often.
If you get sick with flu, antiviral drugs may be a treatment option. Check with your doctor promptly if you are at high risk of serious flu complications and you get flu symptoms. People at high risk of flu complications include young children, adults 65 years of age and older, pregnant women, and people with certain medical conditions such as asthma, diabetes and heart disease.
When used for treatment, antiviral drugs can lessen symptoms and shorten the time you are sick by 1 or 2 days. They also can prevent serious flu complications, like pneumonia. For people at high risk of serious flu complications, treatment with antiviral drugs can mean the difference between milder or more serious illness possibly resulting in a hospital stay.
The best way to prevent flu is to get vaccinated. Everyone 6 months of age and older should get a flu vaccine every season. Flu vaccination has important benefits – it can reduce flu illnesses, doctors’ visits, and missed work and school due to flu, as well as prevent flu-related hospitalizations.
Different flu vaccines are approved for use in different groups of people. Factors that can determine a person's suitability for vaccination, or vaccination with a particular vaccine, include a person's age, health (current and past) and any relevant allergies. Flu shots are approved for use in pregnant women and people with chronic health conditions, and some are approved for people as young as 6 months of age.
For common flu myths, visit CDC’s flu misconceptions - https://www.cdc.gov/flu/about/qa/misconceptions.htm
Most flu vaccines contain flu viruses that are grown in a laboratory and then killed (also called "inactivated"); the version sprayed in the nose instead uses weakened live viruses. Either way, the vaccine is not a treatment for people who already have the flu. Instead, it helps prevent people from getting the flu in the first place by building the body's ability to fight it.
Everyone over 6 months old should get the flu vaccine each year. It is especially important for people who are more likely to get sick, and those who can spread the virus to others. This includes children between 6 months and 5 years old (especially children younger than 2); adults over 65; pregnant women; people with chronic medical conditions including diabetes, asthma, heart disease, cancer, and HIV; people who live in nursing homes; and health care workers. Those who live with or care for children less than 6 months of age should also get the vaccine.
Certain people should talk with a doctor before getting a flu shot. This includes people who have had a severe allergic reaction to eggs or to a previous flu shot; people who have had Guillain-Barre Syndrome; or anyone who has a fever.
Yes, it is okay to get the vaccine if you have a mild illness -- as long as you do not have a fever.
Yes. The flu vaccine changes every year, to protect against new flu viruses that are expected. Last year's vaccine may not protect against this year's viruses.
There are many different flu viruses. Each year, a new flu vaccine is developed. It is designed to fight 4 flu viruses that scientists expect to be most common that year. This yearly vaccine is also called the "seasonal flu vaccine" or the "annual flu vaccine." "Seasonal" doesn't mean you need to get a flu vaccine every spring, summer, fall, and winter. You only need it once a year.
There is a nasal flu spray vaccine available in the 2018-2019 flu season. Talk to your doctor to discuss whether this is the best vaccine for you.
The flu vaccine works most of the time. Each year's flu vaccine fights the 4 flu viruses expected to be most common that year; if you come in contact with a different flu virus, you could still get the flu. One benefit of the flu vaccine is that even if you do become infected, symptoms are often milder than in people who are unvaccinated. Getting the flu vaccine is always better than not getting it.
The inactivated flu viruses in the vaccine trick the body into thinking it is being infected, so the body builds immunity against the flu. Then, if a real flu virus tries to infect that person, their body is ready to fight it off.
Can you still get the flu after being vaccinated? Yes, but even if you get the flu, the vaccine can help lessen the symptoms. Keep in mind that the flu vaccine takes about two weeks to start working.
Can a flu shot give you the flu? No, a flu shot cannot cause flu illness.
Flu vaccines have been given since the 1940s, hundreds of millions of times. Almost all people who get one have no serious problems. Sometimes people get sore at the spot where they get the vaccine. Very rarely, some people get a fever, pain or weakness after getting the flu shot. These side effects usually go away in a day or two.
A vaccine, like any medicine, may cause serious allergic reactions in very rare cases. Get medical help right away if hoarseness or wheezing, hives, paleness, weakness, a fast heartbeat, or dizziness occur after getting the shot. Also, about 1 person in a million can get an illness called Guillain-Barre Syndrome (GBS) following the flu vaccine.
Last updated: 2/25/20
|
Integrity is defined as “the quality of being honest and having strong moral principles.” Integrity is about making good choices. It is doing the right things for the right reason. It is about being honest with yourself and honest with others.
Integrity is the foundation for a community where people live and work, study and play as brothers and sisters. Rooted in sound ethical and moral principles and values, integrity requires that we act fairly, honestly, and ethically at all times.
As a saying often attributed to C.S. Lewis puts it, “Integrity is doing the right thing, even when no one is watching.”
At Saint Patrick Catholic School, we encourage our students to display respect for self, others and property, and to assume responsibility for behavioral choices.
Some of the ways in which we do that include:
- Encourage truthfulness
- Set a good example
- Show love for others
- Teach tolerance
- Encourage empathy
- Demonstrate patience
|
What is Alzheimer’s?
Alzheimer’s Disease (AD) is the most common cause of dementia in older people. Dementia is a medical condition that disrupts the way the brain works. AD affects the parts of the brain that control thought, memory, and language. Although the risk of getting the disease increases with age, it is not a normal part of aging. At present the cause of the disease is unknown and there is no cure.
AD is named after Dr. Alois Alzheimer, a German psychiatrist. In 1906, Dr. Alzheimer described changes in the brain tissue of a woman who had died of an unusual mental illness. He found abnormal deposits (now called senile or neuritic plaques) and tangled bundles of nerve fibers (now called neurofibrillary tangles). These plaques and tangles have since come to be recognized as the characteristic brain changes of AD.
Common symptoms include:
- Initial mild forgetfulness
- Confusion with names and simple mathematical problems
- Forgetting to do simple everyday tasks, e.g., brushing one's teeth
- Problems speaking, understanding, reading and writing
- Behavioral and personality changes
- Aggressive, anxious, or aimless behavior
It is estimated that currently 4 million people in the United States may have Alzheimer’s disease. The disease usually begins after age 65 and risk of AD goes up with age. While younger people may have AD, it is much less common. About 3% of men and women ages 65-74 have AD and nearly half of those over age 85 could have the disease.
No definitive test exists to diagnose Alzheimer’s disease in living patients. However, in specialized research facilities, neurologists can now diagnose AD with up to 90% accuracy. The diagnosis draws on information such as:
- A complete medical history
- Basic medical tests (e.g., blood and urine tests)
- Neuropsychological tests (e.g., memory, problem-solving and language tests)
- Brain scans (e.g., MRI, CT or PET scans)
Research for Possible Risk Factors
Scientists are trying to learn what causes AD and how to prevent it. The list below may not be exhaustive or definitive, but research has led scientists to consider the following as possible risk factors:
Environmental factors - aluminum, zinc, and other metals have been detected in the brain tissue of those with AD. However, it isn’t known whether they cause AD, or build up in the brain as a result of AD.
Viruses - Viruses that might cause the changes seen in the brain tissue of AD patients are being studied.
The only known risk factors are age and family history. Serious head injury and lower levels of education may also be risk factors. AD is probably not caused by any one factor. Most likely, it is several factors together that react differently in each person. Unfortunately, no blood or urine test currently exists that can detect or predict AD.
Alzheimer’s disease advances in stages, ranging from mild forgetfulness to severe dementia. The course of the disease and the rate of decline vary from person to person. The duration from onset of symptoms to death can range from 5 to 20 years.
Currently, there is no effective treatment for AD that can halt its progression. However, some experimental drugs have shown promise in easing symptoms in some patients. Medications can help control behavioral symptoms, making patients more comfortable and easier for caregivers to manage. Still other research efforts focus on alternative care programs that provide relief to the caregiver and support for the patient.
225 N. Michigan Ave., Fl. 17
Chicago, IL 60601-7633
Phone Number: (312) 335-8700
Toll-Free Number: (800) 272-3900
Fax Number: (866) 699-1246
Email Address: [email protected]
Website URL: www.alz.org
Alzheimer’s Disease Education and Referral Center
PO Box 8250
Silver Spring, MD 20907-8250
Phone Number: (800) 438-4380
Fax Number: (301) 495-3334
Website URL: http://www.alzheimers.org or http://www.nia.nih.gov/alzheimers
|
What is MS?
Multiple sclerosis (MS) is a disease in which a person’s immune system attacks the cells of their brain and spinal cord. The exact cause of MS is unknown. It is an autoimmune disease -- a condition in which the body’s immune system attacks its own tissues.
MS damages the nervous system to the extent that most patients become physically disabled over a span of 20 to 25 years. There can be symptom-free periods between symptomatic episodes, occurring months to years apart, that damage different parts of the body. Some patients may have no symptom-free intervals and may instead experience steadily progressive worsening of symptoms.
Possible causative factors for MS, according to researchers, include:
- Role of genes
- Viral infections
- Low vitamin D levels
What are the first signs of MS?
MS presents differently in different individuals affected. Some people are affected mildly, whereas others lose their ability to read, write and speak. The early symptoms include:
- Numbness, and pins and needles sensation
- Muscle cramps and stiffness
- Bladder problems: frequent urination, urgency and inability to hold urine
- Bowel problems: diarrhea, constipation and loss of bowel control
- Sexual dysfunction: lack of arousal
- Slurred speech
- Uncontrollable shaking or tremors
- Vision problems
- Eye pain
- Double vision, especially when looking sideways
- Facial pain
- Intolerance to heat: People with MS often report an increase in fatigue or weakness when exposed to high temperatures (especially hot, humid weather), exercise, hot showers or baths, or with a fever. They may also complain of blurred vision when exposed to heat.
- Fatigue is seen in more than two-thirds of the patients.
- Generalized body and joint aches
- Reduced attention span, concentration, memory, and judgment
- Personality changes
- Difficulty walking
- Excessive itching
- Difficulty maintaining balance
What are the symptoms of MS in a woman?
MS is two times more common in women than in men. The symptoms of MS are largely similar in men and women. However, female hormones may make MS manifest differently in women. Common symptoms reported by women with MS are:
- Mood swings
- Loss of bowel and bladder control
- Nausea and vomiting
- Worsening of premenstrual symptoms
- Missing periods
- Trouble swallowing
- Difficulty speaking
- Numbness, and pins and needles sensation
- Difficulty in distinguishing between colors
- Trouble doing daily chores
- Lack of sexual arousal
- Inability to sense whether an object is hot or cold
- Muscle spasms
- Hearing loss
- Uncontrollable shaking or tremors
- Worsening of MS symptoms after menopause
What happens if a woman with MS becomes pregnant?
Fortunately, MS has not been shown to cause infertility. The physical symptoms, however, can lead to a difficult pregnancy. Many women report their symptoms become mild during pregnancy. Reduced flare-ups have also occurred during pregnancy. The beneficial effect of pregnancy on MS is transient. Women often experience a flare-up of MS symptoms postpartum. Some even experience worsening of symptoms due to the stress of pregnancy on the body.
Pregnant women and women planning pregnancy should discuss with their physicians optimal care for the mother and baby.
What age does MS start?
Symptoms of MS usually begin between the ages of 20 and 40 years. However, MS can occur at any age.
Related Articles
MS (Multiple Sclerosis) vs. ALS (Amyotrophic Lateral Sclerosis) Differences and Similarities: ALS (amyotrophic lateral sclerosis, Lou Gehrig's disease) and MS (multiple sclerosis) are both neurodegenerative diseases of the nervous system. ALS is a disease in which the nerve cells in the body are attacked by the immune system, although some scientists do not consider it an autoimmune disease. MS is an autoimmune disease in which the insulating covering of the nerves (myelin sheath) in the CNS (central nervous system) degenerates, or deteriorates.
Scientists don't know the exact cause of either disease. However, they have discovered that mutations in the gene that produces the SOD1 enzyme are associated with some cases of familial ALS. Scientists also theorize that multiple sclerosis may be caused by infection or vitamin D deficiency. ALS occurs between 50-70 years of age (the average age of onset is 55) and mostly affects men, while MS occurs between 20-60 years of age and mostly affects women. About 30,000 people in the US have ALS, with an average of 5,000 new diagnoses per year (about 15 new cases per day). Worldwide, MS affects more than 2.3 million people, with about 10,000 new cases diagnosed each year (that's about 200 new diagnoses per week).
Some of the signs and symptoms of both diseases include muscle weakness, muscle spasms, problems walking, fatigue, slurred speech, and problems swallowing. ALS signs and symptoms that are different from MS include problems holding the head upright, clumsiness, muscle cramps and twitches, problems holding objects, and uncontrollable periods of laughing or crying. MS signs and symptoms that are different from ALS include vision problems, vertigo and balance problems, sexual problems, memory problems, depression, mood swings, and digestive problems.
There is no cure for either disease; however, the prognosis and life expectancy are different. Multiple sclerosis is not a fatal condition, while ALS progresses rapidly and leads to death.
Botox to Treat Multiple Sclerosis (MS): Botulinum toxin is a muscle-relaxing medication used to decrease spasticity related to multiple sclerosis and other neurological conditions. Botulinum toxin is derived from the bacterium Clostridium botulinum. There are three types of botulinum toxin available for therapeutic use.
Multiple Sclerosis (MS): Multiple sclerosis (MS) symptoms vary from person to person, and can last for days to months without periods of remission. Symptoms of MS include sexual problems and problems with the bowel, bladder, eyes, muscles, speech, swallowing, brain, and nervous system. The early symptoms and signs of multiple sclerosis usually start between age 20 and 40. MS in children, teens, and those over age 40 is rare. Treatment options for multiple sclerosis vary depending on the type and severity of symptoms. Medications may be prescribed to manage MS symptoms.
Multiple Sclerosis (MS) Symptoms, Causes, Treatment, Life Expectancy: Multiple sclerosis or MS is an autoimmune disorder in which brain and spinal cord nerve cells become demyelinated. This damage results in symptoms that may include numbness, weakness, vertigo, paralysis, and involuntary muscle contractions. Different forms of MS can follow variable courses from relatively benign to life-threatening. MS is treated with disease-modifying therapies. Some MS symptoms can be treated with medications.
Making an MS Friendly Home: Adults with multiple sclerosis may be at risk for injuries, hazards, and falling at home. Some simple home modifications can protect your health and safety and facilitate fall prevention. Reduce your risk of accidents and prevent hazards with these tips.
Multiple Sclerosis (MS) and Pregnancy: Multiple sclerosis or MS is a central nervous system disease in which the immune system attacks the myelin sheath (the protective coating around nerves). Symptoms of MS include pain, sexual problems, fatigue, numbness and tingling, emotional changes, and depression.
Women who are pregnant and have multiple sclerosis may have more difficulty carrying a pregnancy. Multiple sclerosis does not affect the ability to conceive, and does not seem to affect fertility. MS symptoms during pregnancy may stay the same or get better; however, they may worsen after giving birth. Pregnancy decreases the number of relapses, but flares increase in the first 3-6 months after delivery. Pregnant women with MS may find it more difficult to tell when labor starts, there is an increased need for forceps or vacuum assistance during delivery, and the rate of C-section (Cesarean birth) increases.
Some MS drugs may be safe to use during pregnancy; however, others should not be taken, for example, baclofen (Gablofen, Lioresal), fluoxetine (Prozac, Sarafem), solifenacin succinate (VESIcare), and most disease-modifying therapies (DMTs).
Talk with your healthcare team about vitamins, supplements, and medications that you are taking if you are pregnant and have MS.
Multiple Sclerosis (MS) Early Warning Signs and Types: Multiple sclerosis (MS) can be thought of as an immune-mediated inflammatory process involving different areas of the central nervous system (CNS) at various points in time. Early warning signs and symptoms of MS in children, teens, and adults are similar; however, children and teens with pediatric MS also may have seizures and a complete lack of energy, which adults with MS do not. Other signs and symptoms of MS include inflammation of the optic nerve (optic neuritis), changes in vision, pain around the eye or when moving the eye, and double vision. There are four types of MS: relapsing remitting MS (RRMS), secondary progressive MS (SPMS), primary progressive MS (PPMS), and progressive relapsing MS (PRMS).
MS Quiz: Multiple Sclerosis is a debilitating neurological condition. Take the MS Quiz to test your knowledge of the causes, symptoms, risks and treatments.
|
Our math circle will explore the storied history of Fermat’s Last Theorem and some of the underlying mathematics, such as Pell’s and other Diophantine Equations, and Fermat Proofs for Specific Exponents. We will discuss specific work by mathematician Sophie Germain, as well as the drama involved in Andrew Wiles’ Fermat proof.
According to Wikipedia, “Fermat’s Last Theorem states that no three positive integers a, b, and c satisfy the equation a^n + b^n = c^n for any integer value of n strictly greater than two. This theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of Arithmetica, where he claimed he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles, and formally published in 1995, after 358 years of effort by mathematicians. The unsolved problem stimulated the development of algebraic number theory in the 19th century and the proof of the modularity theorem in the 20th century. It is among the most notable theorems in the history of mathematics. Prior to its proof, it was in the Guinness Book of World Records as the “most difficult mathematical problem”, one of the reasons being that it has the largest number of unsuccessful proofs.”
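To give a flavor of the computations the circle can play with, here is a minimal sketch (our own illustration, not part of the session materials) that brute-forces small solutions of the Pell equation x^2 - 2y^2 = 1 mentioned above; the function name and search bound are arbitrary.

```python
from math import isqrt

# Minimal sketch: enumerate small solutions of the Pell equation
# x^2 - 2*y^2 = 1 by brute force over y.
def pell_solutions(limit):
    """Yield (x, y) with x^2 - 2*y^2 == 1 for 1 <= y <= limit."""
    for y in range(1, limit + 1):
        x_squared = 1 + 2 * y * y
        x = isqrt(x_squared)
        if x * x == x_squared:
            yield (x, y)

print(list(pell_solutions(100)))  # [(3, 2), (17, 12), (99, 70)]
```

Each solution (x, y) gives a fraction x/y that approximates the square root of 2, one small doorway into the number theory behind the theorem's history.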
|
Creating Standards-Based Guiding Questions
In addition to helping students ask better questions, you can create better guiding questions by aligning them with standards. Begin at the end. What do you want your students to learn? What standards are you trying to cover? Form questions based on the standard.
Scenario: Imagine that you need to teach the following social studies standards. The first is a performance standard, and the second is a content standard:
B.8.8 Identify major scientific discoveries and technological innovations and describe their social and economic effects on society.
8.5 Learn about the Civil War and Reconstruction, 1861-1877.
Building Guiding Questions
Now frame a guiding question that addresses both the performance and content standards. Also, make sure you ask questions that have many possible answers and require research and analysis.
- "What invention had the biggest effect in the Civil War?"
- "What kind of weapons did soldiers use in the Civil War?"
- "How did non-military technology shape the Civil War?"
- "What medical practices were used in the Civil War?"
- "How did medicine improve during the Civil War?"
- "What one technology from today would a Civil War soldier most want?"
Using Guiding Questions
By framing a guiding question that combines a content standard with a performance standard, you can make sure that the inquiry that students do stays on topic. You can also provide students the chance to choose which of the possible guiding questions they want to pursue. If you provide five questions, you can end up with five groups, each working to answer one of the questions and report what they find to the class.
|
What we now call autism has long existed among humans. But medicine only began noticing the particular set of physical and mental traits associated with autism within the last 100 years.
This issue has special relevance to me. I have an adult child diagnosed with Autism Spectrum Disorder (the diagnosis has shifted over the years from Pervasive Developmental Disorder, to Asperger Syndrome, to others I have forgotten). Trying to get help in the mid-1980s was a struggle—very few doctors and no schools I dealt with had any experience with a child that we now say is “on the spectrum.” Gradually I learned more, and found that the condition was not as unique as it first seemed.
This got me wondering—how would a person on the autism spectrum have fared before there was any awareness of the condition? I channeled my interest into a novel set in 1909, Into the Suffering City: A Novel of Baltimore, with a protagonist I imagined as autistic. My character, Sarah Kennecott, could not have been diagnosed as such in 1909 because the concept had not yet been invented. But I am confident that people such as her existed at the time.
The American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders identified Autism Spectrum Disorder in 2013. This is the latest of many (often, in hindsight, ludicrously inept) attempts by the medical profession to categorize a particular set of physical and mental traits. Children and adults with autism have difficulty with verbal and non-verbal communication. What follows is a non-exhaustive list of such traits as listed in the DSM:
Challenges in understanding or appropriately using:
- Eye contact
- Facial expressions
- Tone of voice
- Expressions not meant to be taken literally
Additional social challenges can include difficulty with:
- Recognizing emotions in others
- Expressing one’s own emotions
- Seeking emotional comfort
- Understanding social cues
- Feeling overwhelmed in social situations
- Gauging personal space (appropriate distance between people)
- Repetitive body movements (e.g. rocking, flapping, spinning, running back and forth)
Many clinicians and advocates hail use of the term “spectrum” because an autistic person is like any other—they are a unique individual with their own way of being in the world. As Dr. Stephen Shore has noted, “If you’ve met one person with autism, you’ve met one person with autism.” Those with autism range from people who are fully disabled to those who are highly functional and have great success in life.
I represent Sarah as gifted academically. While many, perhaps most, people diagnosed with ASD have lower than average intelligence (as measured by tests), there is evidence that some with autism have exceptional intellects, including “increased sensory and visual-spatial abilities, enhanced synaptic functions, increased attentional focus, high socioeconomic status, more deliberative decision-making, [and] professional and occupational interests in engineering and physical sciences.”
Broad public awareness of autism dates to the 1988 film Rain Man, which starred Dustin Hoffman as an intensely awkward savant who could perform amazing, but highly selective, mental tasks. The film was useful for educating the public about autism, but it also led to a general assumption that every autistic person was just like the Hoffman character. What we now call autism was largely unknown among the public prior to 1988. I know that from personal experience.
The first mention of autism in the DSM appeared in the 1980 edition. During the 1960s and 1970s autism was cruelly blamed on “refrigerator mothers” who failed to love their kids enough. Autism was also linked to schizophrenia as late as the 1970s. Leo Kanner in 1943 described a group of largely intelligent children who craved aloneness and “persistent sameness”; he called this “infantile autism.”
During the late 1930s and 1940s Hans Asperger used autism in reference to people with a perceived milder form of the condition that came to be known as Asperger’s syndrome. Eugen Bleuler coined the term autism sometime between 1908 and 1911 (there is disagreement as to exactly when) as a symptom of schizophrenia, another term that Bleuler invented. Bleuler derived autism from the Greek word meaning self, and used it in reference to people who lived in a world that was not accessible to others.
But autistic-like behavior was noted long before the term itself came into use. As Kanner noted, “I never discovered autism—it was there before.” Samuel Gridley Howe gets credit for first noticing, prior to the American Civil War, that some people considered “idiots” had a combination of skills and strengths that set them apart from others with intellectual disabilities. Looking back into history, it is arguable that many people, including Michelangelo, Emily Dickinson, Leonardo da Vinci, Isaac Newton, and Thomas Jefferson, were autistic. They and other, less famous, people with autism were different from ordinary people. This difference often led to cruel treatment; my character Sarah is variously called odd, strange, peculiar, and even “a spastic little freak.”
The modern neurodiversity movement urges replacement of the term “disorder” with “diversity” to account for neurological strengths and weaknesses and to suggest that variations in brain wiring—such as autism—can be a net positive for individuals and for society as a whole. Neurodiversity and autism advocacy groups share an even more important goal: insisting that people whose minds work differently are treated with respect and compassion.
|
For most of us, our lives involve a series of patterns—routines we perform almost every day, like stopping at the same place each day for coffee on the way to work. This is also very true for babies and toddlers. While we play a part in creating routines in our children’s lives, we may not fully realize the role they play in young children’s development.
Routines help babies and toddlers learn self-control.
Consistent routines, activities that happen at about the same time and in about the same way each day, provide comfort and a sense of safety to young children. Whether it is time to play, time for a snack, a nap, or a loved one to return, knowing what will happen next gives babies and toddlers security and emotional stability. It helps them learn to trust that caring adults will provide what they need. When children feel this sense of trust and safety, they are free to do their “work,” which is to play, explore, and learn.
Routines can bring you and your child closer together and reduce power struggles.
Stable routines allow babies and toddlers to anticipate what will happen next. This gives young children confidence, and also a sense of control, such as when parents say: “It is bedtime. Would you like to brush teeth now or after we get your pajamas on?” Routines can also limit the amount of “no’s” and behavior corrections you need to give a toddler throughout the day, since your child can better predict what should happen next: “I know you want a cracker. But it is clean-up time now. Remember, after clean-up, it is snack-time.”
Routines guide positive behavior and safety.
Routines are like instructions—they guide children’s actions toward a specific goal. Routines can be used for many reasons, but two of the most important are ensuring children’s health and safety, and helping children learn positive, responsible behavior. For example, children wash hands before they have snack, or must hold an adult’s hand when crossing the street. Here is another example:
Two-year-old George loves to play with his trucks in the afternoon as mom feeds baby Kira. When mom is done, it is time for them to pick up Dad at the bus stop. All the trucks have to be back in the bucket before they go. Mom lets George know when it’s clean-up time by ringing a special bell she has and saying, “Okay, driver, it’s time for the trucks to park in the garage.” One by one, George wheels each truck up a block plank and into the bucket. Each day they do this, and each day George knows he’ll find his trucks where he put them—back in the bucket. He also knows that after he puts away his trucks, he’ll get to see his dad which always makes him happy.
Routines support children’s social skills.
As babies grow, they come into contact with more people and begin to learn patterns and routines for social interaction. Greetings, good-byes, and chatting with others are examples of routine interactions that teach social skills. These interactions are also opportunities to help our children develop language skills.
Play-time and mealtime are two routines that are very social times for children and parents alike. Through talking, taking turns, sharing toys, learning to wait, and helping others during these activities, young children learn important social skills that will help them later on in school.
Routines help children cope with transitions.
Depending on your child’s temperament, transitions between activities may be easy or more difficult. Going from play to lunch, lunch to the store, the store to home…and especially transitioning to bed time, can be challenging. Routines (like bedtime routines) can help make transitions easier. Some parents use a timer or a “5-minute warning” to prepare their toddlers for a change in activity. Others use a book, song, or special game. Special rituals can also help transition a child from one caregiver to the next, such as this routine:
Each day, Leke and his mother count the steps as they walk up to the child care center. They leave his coat and lunch in his cubby. Then they go to the toy area where the other children are playing. Leke picks out a toy. He and his mother exchange “butterfly kisses” and mom waves good-bye.
Routines are satisfying for parents, too.
Not only do routines and rituals make transitions easier for children—they also help ease adults into parenthood. The early stages of becoming a parent can be overwhelming and sometimes put a strain on marriage. Continuing a ritual from your early marriage years (like an evening out or a special vacation spot) can help. In addition, taking a special ritual from your own childhood (such as a book that was read to you, a special breakfast made for you on Saturdays) can bridge your transition from a couple to a family.
Routines are an important opportunity for learning.
Daily routines are often thought of as just “maintenance” activities: meal time, running errands, getting ready for bed, taking baths. But these everyday actions are rich opportunities to support your child’s learning and development, while having fun. Routines offer the chance to build self-confidence, curiosity, social skills, self-control, communication skills, and more. Take grocery shopping:
Midori (aged 2) and her mom wheeled through the supermarket. Midori pointed at the apples and her mom said, “Look at the red apples and the green apples. Don’t they look yummy?” She held one out for Midori to touch: “Feel how smooth they are.” Then she picked up a plastic bag and turned back to Midori: “Why don’t you help me choose some to bring home?” Together, they counted out five apples and put them in the bag. Midori tried her best to help, but those apples were hard to hold! It took two hands to get one in the bag. “Nice work!” said her mother, “Thanks for helping.”
Here, a simple interaction in the produce section opened the doors for practicing language skills, taking turns, talking, using one’s senses, and learning about numbers. It also provided a chance to nurture Midori’s self-confidence and self-esteem as her mother let her know that her thoughts and interests were important. Midori’s mom also let her know that she was capable of doing important things, like choosing and bagging the apples.
Routines provide the two key ingredients for learning: relationships and repetition. So enjoy these “ordinary” moments with your child. If she’s having fun with you, she’s learning, too!
This article was reprinted from pbsparents.org. It was sponsored and written by staff at Zero to Three.
About Zero to Three
ZERO TO THREE is a national nonprofit that provides parents, professionals and policymakers the knowledge and know-how to nurture early development. ZERO TO THREE's mission is to ensure that all babies and toddlers have a strong start in life.
|
Scientists often search for oil and gas reserves by transmitting waves of energy through the earth’s crust, and recording how they are reflected back. Called seismic exploration, this process often involves using explosives, vibrating trucks, or underwater air guns. The returning energy is typically detected by acoustical instruments, and then analyzed with powerful computers. Different layers in the ground can reflect the energy differently, so scientists often use seismic exploration to find areas that might have oil, gas, or valuable minerals.
Seismology is generally based on how the composition of rock layers in the earth’s crust affects the way energy interacts with underground materials. Energy waves usually move through the rock and then reflect back toward where they came from. The way in which they return can give an idea of what properties the rocks have. Data on the returning seismic waves are typically analyzed by supercomputers and three-dimensional imaging software. Engineers can use this information to locate the best sites to start drilling.
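As a rough illustration of the arithmetic involved (our own sketch, with a made-up constant velocity; real surveys use layered velocity models), a reflection's two-way travel time converts to an approximate reflector depth like this:

```python
# Estimate reflector depth from a reflection's two-way travel time,
# assuming a single layer with a constant seismic velocity.
def reflector_depth_m(two_way_time_s, velocity_m_per_s=2500.0):
    """Depth = velocity * one-way travel time."""
    return velocity_m_per_s * (two_way_time_s / 2.0)

# A reflection arriving 1.2 s after the shot, through ~2500 m/s rock:
print(reflector_depth_m(1.2))  # 1500.0 (metres)
```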
On land, holes are sometimes drilled at various places, loaded with dynamite or other explosives, and detonated. The explosions typically generate seismic waves, smaller than but similar to those in an earthquake, which hit rocks below the surface and bounce off of them. Devices called geophones can be placed throughout an area to detect the returning energy. Vibrating trucks, which lift up on a pole and shake the ground, are sometimes used during seismic exploration instead. These generally cause fewer disruptions than explosives and are more often used in populated areas.
With underwater seismic exploration, bursts of compressed air can be released, producing energy waves that travel down to the rocks at the bottom. Energy is reflected by rock layers below the ocean floor and is often picked up by instruments called hydrophones. These are usually attached to ships. In choosing a drilling site based on the data received, engineers can determine whether fluids, faults, or other formations underground could interfere with the project.
Seismic exploration is generally more useful in finding evidence of gas. It often helps in determining the shape and size of an underground reservoir, while measurements of electrical resistance are usually better for oil exploration. Seismic methods can be used for both, and can help engineers decide the best way to get to the reserve. This technique is often part of geophysical testing to find hydrocarbons. In many places, it is regulated by local and regional agencies where the environment is of concern, such as near the Arctic ice caps and many offshore locations.
|
Look at the pictures. Read the words. Now who can match them all up?
Sounds in this pack include: ng, nk, th, ay, ee, igh, ow (as in 'snow'), ir, ou, oi, ar, or, a-e, oa, ur and ear.
You are purchasing 16 PDF files, one for each sound. Within each PDF, there are 2 activities. Children have to read the words and match them to the correct picture. The first part of the PDF is for you to laminate and cut out to use in your whole-class lesson, small-group work or as a free-flow carpet activity. The second is a worksheet. You can differentiate the worksheet by: 1. Writing in sound buttons with your less confident children to support them in their reading of the words, 2. Using the worksheet as it is for your on-target children, or 3. Getting your more confident children to write the words next to each picture, as opposed to sticking words. You can also use these as part of your assessment.
Some sounds have more than one worksheet. A PPT game is also included for some sounds.
Resources are in PDF.
Phonics digraphs and trigraphs (ng, ou, ar, a-e, etc) activity / games / lesson
As soon as we receive an online payment (e.g. credit card or PayPal payment) we will send you a confirmation email with a download link. The link will appear on the Thank You page. You will have 30 days to download the digital file. If there are any problems, please let us know and will re-send the link.
|
Encryption refers to any process that's used to make sensitive data more secure and less likely to be intercepted by those unauthorized to view it.
There are several modern types of encryption used to protect sensitive electronic data, such as email messages, files, folders and entire drives.
Both Android and iOS smartphones now encrypt their stored data by default if the user creates a screen-lock passcode (sometimes to the chagrin of law enforcement), and Windows and macOS offer optional full-disk encryption. Many brands of the best antivirus software can encrypt individual files and folders.
Still, it's very important to understand what kinds of encryption are most important for a particular need, and to not be lulled into a false sense of security by fancy-sounding names.
Many encryption programs provide excellent security for very little money — sometimes even for free.
For example, consider the folder-encryption options available to users of the Microsoft Windows operating system. Microsoft's own encryption software is generally strong, meaning that most users won't have to seek out additional methods of protecting their financial data, medical records and other sensitive files.
Or, if you're worried about Microsoft's alleged relationship with the U.S. National Security Agency, try VeraCrypt, an open-source, free-to-use software solution. (VeraCrypt is a fork of TrueCrypt, which is no longer developed.)
The most dangerous pitfall of folder encryption is that there may be temporary versions of the sensitive files that are not encrypted.
Consider this: Most computer users regularly save their work to avoid catastrophic data loss due to a power outage, electrical storm or other unexpected event. Each time the user saves a file in progress, a temporary version of that file is created and stored in the aptly named "temp" folder, where it remains unencrypted.
Simply deleting temp files isn't enough protection, either. Someone who wants to access your data badly enough will likely be able to access those files using free or cheap data-recovery software.
Weaknesses in encryption
All encryption techniques have weak spots. As these weaknesses are revealed and exploited, new methods of encrypting data are developed to provide additional layers of security for users.
One of the most common and bothersome weaknesses occurs when an encryption method, also called a cipher or an algorithm, that's supposed to generate seemingly random strings of gibberish instead produces outputs that have a discernible pattern. If the pattern gets noticed by interlopers, it may help them crack the encrypted data.
A similar issue involves encryption algorithms that generate predictable patterns of characters in response to repetitious, predictable input.
If this problem is extensive enough, it can help digital intruders decipher at least part of the encrypted data, which may include financial information, government documents or other sensitive information. In many cases, even a partial data breach can be devastating.
Defenses against hackers and file corruption
Individuals and organizations that want to add protection to their encryption algorithms often add extra secret characters to each input to alter the outputs, a practice known as "salting."
For example, one of the most common passwords used is simply "password." Malicious hackers know what "password" and other common passwords look like after they're run though common encryption algorithms.
But if an organization adds extra characters to each password during the encryption process, such as "password" plus "safe," the output will be something malicious hackers won't recognize — as long as the extra characters are kept secret.
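A minimal sketch of the idea in Python (the iteration count and salt length here are illustrative choices, not a security recommendation) might look like this:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Hash a password with a per-user random salt (PBKDF2-HMAC-SHA256)."""
    salt = salt if salt is not None else os.urandom(16)  # unique random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# The same common password hashed with two different salts yields two
# completely different digests, so precomputed tables no longer match.
salt1, digest1 = hash_password("password")
salt2, digest2 = hash_password("password")
print(digest1 != digest2)  # True
```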
Encryption can also be used to verify the integrity of a file or piece of software. The raw binary data of a file or application is run through a special encryption algorithm to produce a "hash," a long number unique to that file.
Any alteration to the file, such as by a hacker inserting malicious code or by random data corruption, will produce a different hash. Computers and mobile devices compare a new piece of software's stated hash to its actual one before installing the software.
A similar process involves running a piece of software through a simple algorithm that produces a single short number, a "checksum." Altering the software in any way will likely produce a different checksum.
To guard against random, accidental corruption, many pieces of software include protection in the form of self-diagnostic checksum matches that the software performs each time it's launched.
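A minimal sketch of such an integrity check (the file name and the published hash value are hypothetical):

```python
import hashlib

def file_sha256(path):
    """Compute a file's SHA-256 hash, reading in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the hash the publisher states for the download:
# published = "9f86d081..."  # hypothetical value from the vendor's site
# assert file_sha256("installer.bin") == published, "file was altered!"
```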
Data encryption is important for everyone, not just big corporations and government officials. The topic can be intimidating for those without extensive computer experience, but thankfully, for most users, keeping sensitive data safe is a relatively straightforward process.
The key is to start early and regularly verify the effectiveness of the chosen security measures.
|
A satellite is composed of modular units, each of which is equipped with a set of sensors.
Earth observation satellites can also be classified according to the sensors used:
- passive sensors, which measure reflected sunlight or thermal radiation (optical)
- active sensors, which make use of their own source of radiation (radar)
Most of the remote sensing satellite platforms today are in near-polar orbits. The satellite travels northwards on one side of the Earth and then toward the southern pole on the second half of its orbit.
These are called ascending (ANX) and descending (DNX) passes, respectively.
Passive sensors recording reflected solar energy only image the surface on a descending pass, when solar illumination is available. Active sensors which provide their own illumination can also image the surface on ascending passes.
The swath of a satellite is the width of the area on the surface of the Earth that is imaged by the sensor during a single pass.
An orbital cycle is completed when the satellite retraces its path, i.e., when the nadir point (point on the Earth’s surface directly below the satellite) of the satellite passes over the same point on the Earth’s surface for a second time. Orbital cycle is also known as repeat cycle of the satellite.
- The field of regard (FOR) is the total area that can be captured by a movable sensor.
- The field of view (FOV) is the angular cone perceivable by the sensor at a particular time instant.
The field of regard is the total area that a sensing system can perceive by pointing the sensor, which is typically much larger than the sensor’s FOV. For a stationary sensor, the FOR and FOV coincide.
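As a back-of-the-envelope sketch (our own, assuming a nadir-pointing sensor and ignoring Earth curvature, which is reasonable for narrow fields of view), swath width follows from altitude and the FOV angle:

```python
import math

def swath_width_km(altitude_km, fov_deg):
    """Ground swath of a nadir-pointing sensor, flat-Earth approximation."""
    half_angle = math.radians(fov_deg / 2.0)
    return 2.0 * altitude_km * math.tan(half_angle)

# e.g. a sensor with a 15-degree FOV at 700 km altitude:
print(round(swath_width_km(700.0, 15.0), 1))  # ~184.3 km
```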
Cross-track scanner:
- “back and forth” motion of the fore-optics scans each ground resolution cell one-by-one
- Instantaneous Field of View (IFOV) of instrument determines pixel size
- Image is built up by movement of the satellite along the orbital track and scanning across-track
Along-track scanner (“pushbroom”):
- Linear array of detectors (aligned cross-track) – reflected radiance passes through a lens and onto a line of detectors
- Image is built up by movement of the satellite along its orbital track.
- Area array can also be used for multi-spectral remote sensing.
Sensors can be further divided in:
- Fixed sensors- instruments with a fixed field of view and orientation which provides always the same footprint on ground
- Steerable sensors- instruments with a field-of-view geometry that can be steered within certain limits, either mechanically, by steering the sensor or the full satellite body, or electronically, as with synthetic-aperture radar (SAR) instruments. The sensor geometry is characterized by a field-of-regard virtual swath that represents the access boundaries within which the sensor can operate.
Getting the Data to the Ground
- On-board recording and pre-processing
- Direct telemetry to ground stations – receive data transmissions from satellites – transmit commands to satellites (pointing, turning maneuvers, software updating)
- Indirect transmission through Tracking and Data Relay Satellites (TDRS).
|
There is a common misconception that if you were to walk in one direction without wavering and then drew a line through all the places you stopped, it would appear as a curve on a map. While it is true that you would actually walk in a curve on the globe, on a Google Map, which uses what is called a Mercator projection, the line would actually be straight. This is because the projection Google uses stretches the latitude (y-axis) so that travelling at a constant bearing appears as a straight line on the map.
[Image: travelling in one direction on a globe]
[Image: the same path on a Google Map]
If you're unfamiliar with this, it might be a little mind-bending. The important thing to remember is that when working with a map like Google's, a straight line on the map corresponds to moving at a constant bearing on the ground.
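A minimal sketch of that latitude stretch, using the standard Mercator formula (printed values rounded):

```python
import math

def mercator_y(lat_deg):
    """Mercator vertical coordinate: y = ln(tan(pi/4 + lat/2))."""
    return math.log(math.tan(math.pi / 4.0 + math.radians(lat_deg) / 2.0))

# Equal 20-degree steps of latitude take up more and more vertical
# space on the map -- that stretch is what keeps constant-bearing
# paths looking straight.
for lat in (0, 20, 40, 60, 80):
    print(lat, round(mercator_y(lat), 3))  # 0.0, 0.356, 0.763, 1.317, 2.436
```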
So now, to make everything even more confusing, let's talk about flight lines, also known as great circle lines.
At first glance, one might assume something that is incorrect: that if you were to walk in a straight line, you would actually follow this curve. However, this is wrong. As we established before, walking in a straight line actually appears as a straight line on a Google Map. So what is this curve?
The quickest route between two points on a globe, for something like an airplane, is not a "straight" line, by which I mean a path that keeps the same bearing. To save time and distance (which matters for jet fuel), planes travel on a curve that actually takes less time. I will not go into the details in this post, but if you're interested in why, you can look up information on the "great circle."
So this brings us to the rub: when calculating the distance between two points, what should you use? The answer is usually a straight line, unless you want the shortest distance between two points, as in the case of flight.
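For the curious, here is a minimal sketch (rounded coordinates, spherical Earth) comparing the two: the haversine formula gives the great-circle (flight) distance, and the rhumb-line formula gives the constant-bearing distance:

```python
import math

R = 6371.0  # mean Earth radius, km (spherical approximation)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance: the shortest path, what flights follow."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def rhumb_km(lat1, lon1, lat2, lon2):
    """Constant-bearing distance: a 'straight line' on a Mercator map."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    dpsi = math.log(math.tan(math.pi / 4 + p2 / 2) / math.tan(math.pi / 4 + p1 / 2))
    q = dp / dpsi if abs(dpsi) > 1e-12 else math.cos(p1)
    return R * math.hypot(dp, q * dl)

# Roughly New York to Madrid: the great circle is ~170 km shorter.
print(round(haversine_km(40.7, -74.0, 40.4, -3.7)))  # ~5770
print(round(rhumb_km(40.7, -74.0, 40.4, -3.7)))      # ~5941
```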
This is important because there are tools out there that use great circle curves to estimate the distance between two points, and this is not accurate if you're planning to walk or otherwise move while maintaining the same bearing.
|
Published Thursday, November 26th, 2020, by Madeleina Gauthier in Worksheets.
Patterns and sequencing and basic addition and subtraction should follow on from counting and number recognition. By the time your child is starting kindergarten or school, they should be able to count to 20 with ease, write numbers, do simple addition sums, and have some understanding of patterns and sequences. Even if they are attending preschool, extra practice at home will help them improve their math. A systematic set of mathematics worksheets will help you teach your child the basic principles of math and help them prepare for school. Worksheets can be used as the basis for counting and adding games and other activities. Teaching your child with worksheets also makes them more comfortable with doing worksheets - which will help them when they get to kindergarten and school, where worksheets are used every day.
If you are unable to find black and white preschool worksheets that you like, you can still go for the colored ones; however, you may want to consider adjusting your printer settings. Instead of having them print off in colored ink, you may want to adjust your settings to gray scale. This will save you a considerable amount of money and printer ink, especially in the long run. Doing this can also create an additional activity for your child, as you can have them color all the pictures themselves.
Fortunately, this is not the case with home schooling. When your child has finished his work, reward him by letting him do something he enjoys. If you need to keep your child occupied while you are working with one of your other children, have certain educational things your child can be doing such as building with Legos, educational computer games, reading a book, or puzzles -- whatever your child enjoys.
What are math worksheets and what are they used for? These are math forms that are used by parents and teachers alike to help young kids learn basic math such as subtraction, addition, multiplication and division. This tool is very important, and if you have a small kid and do not have a worksheet, then it's time you got one or created one for your kid. There are a number of sites on the internet that offer free worksheets that are downloadable and printable for use by parents and teachers at home or at school. If you cannot purchase a math worksheet because you think you may not have time to, you can create one using your home computer and customize it for your kid. Doing this is easy. All you need is the Microsoft Word application on your computer. Just open Word, start a new document, and ensure that the new document is based on a template. Then, with your internet connection on, search the term "math worksheet". You will get templates of all kinds for your worksheet. Choose the one you want and then download it.
Teaching children to learn to read at home can be much like teaching them to learn to read in the classroom. The key to success is developing a reading program that takes advantage of all available resources and one that is geared toward the learning style of the child. The good news is that there are many reading resources available to help homeschooling educators present fresh and interesting material to new students while they learn to read.
Silly games like spotting the number of red cars while out on a shopping trip or playing about with words by making up silly rhymes all contribute to your child has education. The point is that you can still carry on with this type of learning activity and it will be a lot easier to incorporate printable worksheets into the fun and get your child working on them. Children love to draw and color and cut and paste so you can use this pleasure in a number of ways to make working on printable worksheets more enjoyable.
|
Definition - What does Landsat mean?
Landsat satellites are used for gathering remote sensing data from various locations on land. During exploration and production activities, continual images of the earth's surface are required to help geologists understand the topography of regions that are impossible to reach for oil and gas exploration activities. These satellites provide the data that big energy companies use for decision making. They are useful not only in the oil and gas industry, but in other sectors as well.
Petropedia explains Landsat
In the oil and gas sector, Landsat satellites have helped big oil producers make tactical decisions about their various project sites. Landsat satellite imagery provides cost-effective solutions to petroleum experts by acquiring and processing images of various geographical locations, which helps reduce the exploration risk of a petroleum E&P organization and thus lowers the overall project cost.
Landsat satellites are used in different industries for:
- Environmental impact assessment
- Detection of coal mine fire
- Biofuel crop monitoring
- Surface mine and reclamation efforts
|
What to Expect in Middle School (grades 6th-8th)
Mystery of History - Middle school students study history spanning from the middle ages in 6th grade to the birth of America in 7th and to modern day events in 8th grade. Teachers challenge students intellectually while fostering independence and cultivating critical thinking. An emphasis is placed on debate and logical discussions driven by events in history and how they align with Biblical truth.
Literature and Vocabulary - Students are immersed in rich, classic literature to practice fluency and comprehension skills. Selections include Beowulf, Shakespeare, and The Screwtape Letters. Additionally, students study Latin vocabulary including stems from Michael Clay Thompson’s curriculum.
Composition and Grammar - Using IEW as our guide, students receive explicit instruction in correct paragraph and essay construction, grammar, and poetry. Students practice four-level sentence analysis using Michael Clay Thompson’s collection.
Science (God’s Design and Apologia) - Students learn about our Creator and His creation through the lens of the Bible. At least once a week, the teacher plans a hands-on investigation or experiment. Students also study Christian scientists who have made significant contributions to today’s world.
Saxon Math - This integrated, cyclical, and connected approach provides deep, long-term mastery of the content and skills.
DISCIPLESHIP & LEADERSHIP - Students participate in whole group Bible instruction which emphasizes steps to be effective Christian leaders. Students are required to commit the Bible to memory. The instructor engages the students in programs such as money management and creating a business model.
Classroom and family come together for "Grandparents Day", "Muffins with Mom" and "Donuts with Dad." At Christmas time, each class performs a number while the end of the year is reserved for "Night of the Stars", a fine arts talent and arts celebration.
Middle school students celebrate “history days” with their elementary buddies!
School of Logic students enjoy lunch and recess together daily.
|
The 17th century dawned in Ireland during the Nine Years’ War of the northern Chieftains against the Crown. By 1603 that conflict was over: Red Hugh O’Donnell had been poisoned, the Irish had capitulated, and Queen Elizabeth was dead. Against the treachery that threatened their heirs and families, the noblest Chieftains of the north, The O’Neill, The O’Donnell, and The Maguire, left Ireland forever in what became known as the Flight of the Earls.
The Irish were leaderless, the Clan system had been broken, the great Gaelic Houses destroyed, and a foreign power had been established in possession of the land. The conquest of Ireland was finally complete; or so it appeared. Beneath it all, the Bards kept the heritage alive. Outlawed poets started hedge schools; priests said Mass at stone altars in the hills and glens; the music, the language, and the learning survived – but the British were determined to stop even that limited bit of Celtic culture. After the Flight of the Earls, James I of England declared that the recently departed northern Chieftains had been conspiring to rebel, and their estates were forfeit to the Crown.
Four million acres of Ulster were given to men called Undertakers – loyal Englishmen who agreed to undertake the dispossession of the Irish. Soldiers, drapers, fishmongers, vintners, haberdashers and anyone seeking free land became the new owners of Ulster. A contemporary writer named Stewart, son of a Presbyterian minister, wrote that they were “for the most part the scum of both nations, who from debt or fleeing justice came hither hoping to be without fear of man’s laws.” They hunted the Irish like animals, drove them into the woods, mountains, and moors where thousands perished of starvation within sight of lands that their clans had owned from time immemorial. Before their eyes, an alien nation was planted on the fair face of Ireland’s proudest province.
But the Irish would not starve and die in their own fertile land. Their rage grew daily until an uprising was planned by Rory O’Moore, Phelim O’Neill and his brother Turlough, the Maguires of Fermanagh, the Magennis, O’Reilly and the MacMahons. O’Moore had patiently worked for years among the leading Irish families, Irish generals in the Continental armies, and other Irish exiles to oust the British. Then, on the night of October 21, 1641, the remnants of the northern clans burst forth, sweeping the terrified Undertakers before them. Descendants of the old Clans O’Neill, Magennis, O’Hanlon, O’Hagan, MacMahon, Maguire, O’Quinn, O’Farrell, and O’Reilly swept down from the hills and, in a few hours, made Ulster their own again. A few days later, Phelim O’Neill was proclaimed head of an Ulster army, and by early 1642, Leinster and Munster joined the fight for freedom; still later, Connaught joined. The Crown poured men and arms into Ireland to fight the rebels. The Irish gentry formed the Confederation of Kilkenny to direct the resistance, and, believing that the new King, Charles I, was a friend of Ireland, they confirmed their stand for ‘faith, country, and King’. The Irish Chieftains yielded for the sake of unity.
In England, a struggle between King Charles and his Puritan Parliament developed into a civil war. As his situation grew worse, King Charles began to court the Confederation. Futile negotiations frustrated the fighting spirit of the Irish, and they began to suffer defeat after defeat until, in despair, they considered coming to terms with the English. Suddenly, from the Boyne to the sea, Ulster shook with the news: Owen Roe has come!
On July 6, 1642, with 100 officers in his company, Owen Roe O'Neill landed in Donegal. A mere boy when he had left Ireland with his uncle, Hugh O'Neill, during the Flight of the Earls, he had won distinction as a military commander in the Irish Brigade of the Spanish Army. A trained soldier and military leader, he had returned to lead the fight for Ireland's freedom. He was given command of the northern army, which he rebuilt and began to challenge the English on the field of battle. In short order, he regained all that had been lost through the procrastination of the Confederation, but, jealous of his growing power, the Confederation hampered his efforts at every turn.
Then, on June 5, 1646, England sent their best field commander, General Monroe, against Owen Roe. This would silence the young upstart forever. Monroe had 6,000 men and a full complement of field artillery. O'Neill had only 5,000 men and no artillery. The two armies met at the junction of the river Oonah and the Blackwater adjacent to the village of Benburb – a place that would live forever on the lips of the storytellers, for it was here, in one masterful battle, that Owen Roe proved his superiority and that of his army. Monroe's men were fresh, and he set them up so that he would have the advantage of the sun at his back. O'Neill kept Monroe's nerves and the nerves of his men on edge for several hours in that hot sun while his men harassed them with hit-and-run skirmishing raids. Finally, when the sun had shifted to behind his own back, O'Neill gave the word “Sancta Maria” and launched a whirlwind attack. His cavalry captured Monroe's guns, and his infantry overwhelmed the English legions, driving them into the river. In one short hour, O'Neill had wiped out the pride of the British army. 32 standards were taken; Lord Ardes and 32 officers were captured; cannon, baggage, and two months' provisions were taken; and 1,500 horses were now in Irish possession. 3,300 of Monroe's army lay dead on the field, while Owen Roe lost but 70. Ulster had been won by Owen Roe O'Neill. The Confederation, fearing his growing power, would eventually turn on O'Neill, and everything would be lost in the end. But for a brief while, all of Ireland was talking about Owen Roe O'Neill and the Battle of Benburb on June 5, 1646.
|
Although the word convection is usually used to describe the natural circulation of gas or liquid caused by temperature differences, the convection in "convection oven" has a more general definition: the transfer of heat via movement of gas or liquid. In a regular oven, convection occurs due to the temperature difference between air near the heating element and the cooler air near the food being warmed. A regular oven relies on a combination of radiation from the walls and, to a lesser extent, air convection to heat the food. Convection ovens impart more convective heat than regular ovens by using fans to force air movement.
By moving hot air rapidly past the food, convection ovens can operate at a lower temperature than a conventional oven and yet cook food more quickly. The air circulation, or convection, tends to eliminate "hot spots", so food may bake more evenly.
An impingement pizza oven at a Hungry Howie's store in Auburn, Alabama.
A convection oven allows a reduction in cooking temperature compared with a conventional oven. The size of the reduction varies, depending on factors such as how much food is being cooked at once or whether airflow is restricted by an oversized baking tray. The lower setpoint is possible because circulating air transfers heat more quickly than still air of the same temperature; the temperature can therefore be lowered while the food still receives heat at the same rate.
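As a rough illustration in code (the 25 °F reduction is a commonly cited rule of thumb assumed for this example, not a figure from this article; actual adjustments vary by oven and recipe):

#include <iostream>

// Rule-of-thumb adjustment when adapting a conventional-oven recipe
// to a convection oven: lower the setpoint by about 25 degrees F
// (roughly 14 degrees C). This figure is an assumption for illustration.
double convectionSetpointF(double conventionalF) {
    return conventionalF - 25.0;
}

int main() {
    // e.g. a recipe written for 350 F in a conventional oven
    std::cout << convectionSetpointF(350.0)
              << " F in a convection oven\n";   // prints 325 F
    return 0;
}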
Many convection ovens also include a proofing capability using the same fan but at a much lower temperature. A residential double oven will often include the fan capability in only one of the two ovens.
Convection microwave ovens combine a convection oven with a microwave oven to cook food with the speed of a microwave oven and the browning ability of a convection oven.
Another form of a convection oven is the commercial impingement oven. This type of oven is often used to cook pizzas in restaurants. Impingement ovens have a high flow rate of hot air from both above and below the food. The air flow is directed onto food that usually passes through the oven on a conveyor belt. Air flow rates can range between 1 and 5 m³/s. Impingement ovens can achieve a much higher heat transfer than a conventional oven.
Like the "impingement oven", a convection oven usually has the radiant elements in view of the food, which improves heat transfer and speeds cooking from initial cold start. Some ovens have the heating elements placed in an outside enclosure and hidden from the food. This eliminates radiant heat from direct contact with the food.
|
What is it?
Development experts looked at different problems that make and keep people poor. They came up with eight targets that would help most people meet basic needs. If met, these targets would move poor people out of poverty and into a better life as well as enable people to contribute to their society in a more productive way. These targets are today known as the Millennium Development Goals (MDGs).
The goals also help development experts measure how much progress has been made in reducing poverty over the years.
The Millennium Development Goals are:
1. Eradicate extreme poverty and hunger
2. Achieve universal primary education
3. Promote gender equality and empower women
4. Reduce child mortality
5. Improve maternal health
6. Combat HIV/AIDS, malaria, and other diseases
7. Ensure environmental sustainability
8. Develop a global partnership for development
Why should I care?
As a result of the recent global economic crisis, 53 million more people will remain in extreme poverty by 2015 than otherwise would have. Even so, by 2015 the number of extreme poor could total some 920 million, marking a significant decline from the 1.8 billion people who lived in extreme poverty in 1990.
But many countries are off-track on other goals, especially the health-related goals of reducing child and maternal mortality and increasing access to basic sanitation, according to the Global Monitoring Report 2012.
What is the international community doing?
Achieving the MDGs is possible -- if everyone does their share: Developing countries must be firm on their commitment to governance reform. And their partners -- the developed countries and international organizations -- need to support them.
Development aid and private charitable donations from developed countries are the main source of external financing for the poorest countries.
This money, of course, has to be well spent and managed in an accountable and transparent manner. Also, in addition to more aid, countries of the world have to reform global trade and make it more equitable for all countries.
What can I do?
If you live in a developed country:
- Find out what your country is doing to make the Millennium Development Goals happen.
- Find volunteer opportunities worldwide to encourage sustainable development.
- Learn how much money your government gives through bilateral and multilateral assistance, and lobby your government to give more.
If you live in a developing country:
- Stay in school -- study and learn.
- Volunteer to help those in need.
- Encourage other young people to stay in school and to volunteer.
- Learn how much money your government receives in development assistance and take action to ensure government funds are properly spent.
For more information: Millennium Development Goals
|
These three basic electrical quantities—energy, charge, and voltage—are closely related. It is difficult to visualize or measure energy directly because it is an abstract quantity and represents the ability to do work. The electrical charge can be positive or negative, and it can do work when it moves from a point of higher potential to one of lower potential. Voltage is a measure of the energy per unit of charge and can be measured easily with common instruments. Voltage is one of the electrical quantities that you will work with in most renewable energy systems.
Work is done whenever an object is moved by applying a force over some distance. To do work, you must supply energy.
Energy is the ability or capacity for doing work; it comes in three major forms: potential, kinetic, and rest.
Stored energy is called potential energy and is the result of work having been done to put the object in that position or in a configuration such as a compressed gas. For example, the water stored behind a dam has stored (potential) energy because of its position in a gravitational field.
Kinetic energy is the ability to do work because of motion. The moving matter can be a gas, a liquid, or a solid. For example, wind is a gas in motion, falling water is a liquid in motion, and a moving turbine is a solid in motion. Each of these is a form of kinetic energy because of the motion.
Rest energy is the equivalent energy of matter because it has mass. Einstein, in his famous equation E = mc², showed that mass and energy are equivalent.
Unit of Energy
Because energy is the ability to do work, energy and work are measured in the same units. In all scientific work, the International System of Units (SI) is used. SI stands for Système International, from French. These units are the basic units of the metric system.
Energy, force, and many other units are derived units in the SI standard, which means they can be expressed as a combination of the seven fundamental units. The most common derived units are built from three of these: the meter, kilogram, and second, which form the basis of the mks system of units. Another derived set is based on the centimeter, gram, and second; these smaller units are referred to as the cgs system.
The SI unit for energy is the joule (J), which is defined to be the work done when 1 newton of mechanical force is applied over a distance of 1 meter. A newton is a small unit of force, equivalent to only 0.225 pounds. The symbol W is used for energy, and we will use WPE or WKE to specify potential energy and kinetic energy, respectively, to be consistent with W. (You may see E for energy in some cases, such as Einstein's E = mc², or PE and KE for potential energy and kinetic energy, respectively.) The equation for gravitational potential energy is

WPE = mgh

where
WPE = potential energy in J
m = mass in kg
g = acceleration due to gravity (9.81 m/s²)
h = height in m
The equation for kinetic energy is

WKE = ½mv²

where
WKE = kinetic energy in J
m = mass in kg
v = velocity in m/s
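As a quick illustration (the numbers are chosen for convenience, not taken from the text): 1,000 kg of water held 50 m above a turbine has WPE = mgh = 1,000 kg × 9.81 m/s² × 50 m ≈ 4.9 × 10⁵ J, while the same 1,000 kg moving at 10 m/s has WKE = ½mv² = ½ × 1,000 kg × (10 m/s)² = 5 × 10⁴ J.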
Charles-Augustin de Coulomb (1736–1806) was the first to measure the electrical forces of attraction and repulsion of static electricity. Coulomb formulated the basic law that bears his name and states that the force between two point charges is proportional to the product of the charges and inversely proportional to the square of the distance between them. His name was also given to the unit of charge, the coulomb (C).
Coulomb’s law works for like charges or unlike charges. If the signs (+ or −) of both charges are the same, the force is repulsive; if the signs are different, the force is attractive. Long after Coulomb’s work with static electricity, J. J. Thomson, an English physicist, discovered the electron and found that it carried a negative charge.
The electron is the basic atomic particle that accounts for the flow of charge in solid conductors. The charge on the electron is very, very tiny, so literally many trillions of electrons are involved in practical electrical circuits. The charge on an electron was first measured by Robert Millikan, an American physicist, and found to be only 1.60 × 10⁻¹⁹ C. The power of ten, 10⁻¹⁹, means that the decimal point is moved back 19 decimal places.
Voltage (V) is defined as energy (W) per unit charge (Q). The volt is the unit of voltage, symbolized by V. For example, a battery may produce twelve volts, expressed as 12 V. The basic formula for voltage is

V = W/Q
One volt is the potential difference between two points when one joule of energy is required to move one coulomb of charge from one point to another.
Sources of Voltage
Various sources supply voltage, such as a photovoltaic (solar) cell, a battery, a generator, and a fuel cell, as shown in Figure 1. Huge arrays of solar modules can provide significant power for supplying electricity to the grid.
Figure 1 Sources of Voltage
Figure 2 DC Voltage Source
Voltage is always measured between two points in an electrical circuit. Many types of voltage sources produce a steady voltage, called dc or direct current voltage, which has a fixed polarity and a constant value. One point always has positive polarity and the other always has negative polarity. For example, a battery produces a dc voltage between two terminals, with one terminal positive and the other negative, as shown in Figure 2(a). Figure 2(b) shows a graph of the ideal voltage over time. Figure 2(c) shows a battery symbol. In practice, the battery voltage decreases somewhat over time. Solar cells and fuel cells also produce dc voltage.
Electric utility companies provide a voltage that changes direction, or alternates, between positive and negative polarities in a regular pattern. AC generators produce this alternating voltage, called alternating current (ac) voltage. In one cycle of the voltage pattern, the voltage goes from zero to a positive peak, back to zero, to a negative peak, and back to zero. One cycle consists of a positive and a negative alternation (half-cycle). The cyclic pattern of ac voltage is called a sinusoidal wave (or sine wave) because it has the same shape as the trigonometric sine function in mathematics.
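In equation form, such a sinusoidal voltage is commonly written as

v(t) = Vp sin(2πft)

where Vp is the peak voltage, f is the frequency, and t is time in seconds.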
In North America, ac voltage alternates one complete cycle 60 times per second; in most other parts of the world, it is 50 times per second. The number of complete cycles that occur in one second is known as the frequency (f). Frequency is measured in units of hertz (Hz), named for Heinrich Hertz, a German physicist. Figure 3 illustrates the definition of frequency for the case of three cycles in one second, or 3 Hz.
Figure 3 Example of an AC Sinusoidal Voltage. The frequency is 3 Hz, and the period (T) is ⅓ s.
The period (T) of a sine wave is the time required for 1 cycle. For example, if there are 3 cycles in one second, each cycle takes one-third second. This is illustrated in Figure 3, where one cycle is shown with a heavier curve. From this definition, you can see that there is a simple relationship between frequency and period, which is expressed by the following formulas:

f = 1/T
T = 1/f
- What is the voltage if 100 J of energy is available to move a total charge of 5 C?
- If the period of an ac voltage is 0.01 s, determine the frequency.
- If the frequency of an ac voltage is 60 Hz, determine the period.
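For checking your work, the numerical answers follow directly from V = W/Q, f = 1/T, and T = 1/f:
- V = 100 J / 5 C = 20 V
- f = 1/0.01 s = 100 Hz
- T = 1/60 Hz ≈ 0.0167 s (about 16.7 ms)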
- What is energy, and what is its unit?
- What is the smallest particle of negative electrical charge?
- What is the unit of electrical charge?
- What is voltage, and what is its unit?
- Name two types of voltage.
- Define frequency and period.
- Energy is the ability or capacity for doing work; it is measured in joules in the SI system.
- The electron
- The coulomb
- Energy per charge; the unit is the volt, symbolized by V.
- DC voltage and AC voltage
- Frequency is the number of cycles per second, measured in hertz (Hz). The period is the time for one cycle, measured in seconds.
|
- Close your eyes and picture a balloon in your lap slowly growing bigger, as you breathe in through your nose.
- Picture it getting smaller as you breathe out through your nose and release all your tummy air. Breathe in, fill your balloon, and let your arms rise away from you to encircle the balloon.
- Now gently press your balloon flat, letting your arms come back in towards your belly.
- Repeat several times.
Note for Parents
This pose can be done at a desk or sitting on the floor.
Activity Ideas for Home or Classroom
Give each student a balloon to inflate. Let them blow it up and then let the air flow back out, observing the shape of the balloon. The balloon shrinks because of the elasticity of the material. Students can learn that their lungs are also elastic: they inflate with an inhalation and will deflate on their own, with no muscular effort. Students will also be interested to learn that the lungs do not have muscles at all; we inhale by expanding the ribs and/or flattening the diaphragm muscle. The lungs are held against the inside of the chest wall by a vacuum.
Breathing deeply and fully with Balloon Breath brings more oxygen into our blood stream to make us more alert and focused.
Play with a 2 to 1 ratio as a pattern to increase your breathing capacity. For example, breathe in for 2 counts, and breathe out for 4 counts. Gradually increase the counts as your breathing capacity expands.
|
Scientists have developed several clean energy technologies that can help the world reduce its dependence on fossil fuels. Making these technologies successful, however, will require investments from entrepreneurs willing to look toward the future of energy production.
Understanding what each type of clean energy technology offers could help you find opportunities to support companies exploring this exciting new area of the energy industry.
Organic materials produce a wide variety of gases as they break down. Landfills can harvest gases like methane and burn them to create electricity. Burning these gases may raise some concerns among environmentalists, but it is certainly a safer approach than burning fossil fuels taken from the ground.
Farmers can also harness biogas energy by collecting methane and carbon dioxide from manure and compost.
Biogas is a renewable resource that makes up a small percentage of the world’s energy production. It may develop into a useful way for communities to produce more of their electricity locally, which would help reduce the need for large infrastructure that can cost a lot of money and use significant resources.
Biomass energy, also called biofuel, is a renewable type of energy generated from living organisms instead of fossil fuels. This has become an increasingly popular option in the United States, where farmers now raise soybeans, corn, sugarcane, and other crops specifically for energy purposes.
Bioethanol is probably the most widely used type of biofuel and is most commonly used as a fuel additive. In its pure form, however, it can power combustion-engine vehicles on its own.
Biomass energy is a renewable energy, but it raises some concerns among critics who want to use farmland for raising food.
Geothermal energy uses heat from the earth to generate electricity. There are currently about 24 countries using geothermal energy. The United States produces nearly 30% of the world’s geothermal power.
The availability of geothermal energy depends greatly on location. Harnessing the power requires access to the earth’s heat and protection from seismic activity that can damage equipment. Many geothermal plants are located along fault lines. The world’s largest geothermal energy producer sits about 72 miles north of San Francisco, where a system of natural geysers makes it relatively easy to access heat.
Hydropower can include any technology that uses water movement to generate electricity. In most cases, the term refers to hydroelectric dams. Hydropower is also generated by using the kinetic energy of rivers and ocean waves. Other hydropower technologies include:
- Pumped Storage – An approach that involves using electricity to pump water to higher elevations. When power demands increase, the water is allowed to flow downhill. This moves turbines to generate more electricity. Some researchers describe this as a natural battery that stores electricity for future use; a rough sense of the scale involved is sketched just after this list.
- Conduit – Conduit hydropower relies on turbines placed within irrigation canals and other existing water infrastructures. There are already numerous natural and manmade canals in the world that offer opportunities for generating electricity without damaging the environment.
- Small Hydro – Instead of relying on large dams that can pose some environmental hazards, small hydro technology relies on minimal infrastructure and redesigning existing dams so that they work in coordination with local ecosystems. Many experts expect small hydro to become one of the fastest growing sectors within the energy industry.
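To get a rough sense of pumped-storage scale (an illustrative calculation, not a figure from any particular facility): pumping 1,000,000 kg of water, about 1,000 cubic meters, up a 100 m rise stores potential energy of roughly mgh = 1,000,000 × 9.8 × 100 ≈ 9.8 × 10⁸ J, or about 270 kWh, which can be recovered later by letting the water fall back through the turbines, minus conversion losses.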
Environmentalists often support hydropower because it uses naturally occurring kinetic energy to generate electricity. The energy is already there in the flow of rivers, streams, and ocean waves. Engineers just have to build efficient turbines to capture that energy and turn it into electricity.
Solar energy will almost certainly become a significant source of municipal and privately generated energy over the next few decades. The technology has evolved quickly, making it an affordable energy source for people living in places that receive abundant sunlight throughout the year. Improved battery technology could also make solar energy a useful option for communities in a wider variety of climates.
Innovative companies have developed numerous ways to harness the sun's power. Some of the most promising types of solar panels include single silicon, multi-silicon, building integrated photovoltaic, and thin film.
- Single Silicon – Single silicon panels are extremely efficient, but fairly expensive because they contain such high amounts of silicon. Many of the solar panels installed on building roofs are single silicon designs.
- Multi-silicon – Multi-silicon panels are slightly less efficient than single silicon panels, but they are less expensive. This makes them a good option for homeowners who value energy independence but do not want to spend too much money on equipment and installation.
- Building Integrated Photovoltaic – One of the most exciting developments in clean energy technology. Building integrated photovoltaic panels look like regular roof tiles or shingles. This makes them an attractive option. They currently cost more and are less efficient than other solar panels, but that could change soon as more companies invest in research and development.
- Thin Film – Another exciting option that could benefit from more research. Thin film solar panels are extremely affordable. They are not very efficient, though. Solar farms often use thin film. Some companies have also used this technology to create portable solar devices for powering smartphones and laptops.
The world will almost certainly depend on more solar energy in the upcoming decade. As fossil fuels become more problematic, solar looks like an impressive option that could meet the world’s energy needs without causing much pollution.
Wind energy converts the kinetic energy of flowing air into electricity. Wind farms have become a popular way to generate low-cost electricity, especially in open spaces where wind can blow freely. The most efficient wind farms are actually located off shore, where the wind can reach high speeds without interference from buildings, forests, or mountains.
Denmark currently leads the world in wind energy production. The country relies on wind turbines to generate about a third of its electricity.
No one knows what types of clean energy technologies scientists will develop in the near future, but they will add to the current arsenal that helps people wean themselves off fossil fuels.
|
Scientists have long believed that the power of the sun comes largely from the fusion of protons into helium, but now they can finally prove it. An international team of researchers using a detector buried deep below the mountains of central Italy has detected neutrinos—ghostly particles that interact only very reluctantly with matter—streaming from the heart of the sun. Other solar neutrinos have been detected before, but these particular ones come from the key proton-proton fusion reaction that is the first part of a chain of reactions that provides 99% of the sun’s power.
The results also show that the sun is a remarkably steady power source. Neutrinos take only 8 minutes to get from the sun’s core to Earth, so the rate of neutrino production that the team detected reflects the amount of heat the sun is producing today. It just so happens that this is the same as the amount of energy now being radiated from the sun’s surface, even though those photons have taken 100,000 years to work their way from the core to the surface. Hence, the sun’s energy production hasn’t changed in 100 millennia. “This is direct proof of the stability of the sun over the past 100,000 years or so,” says team member Andrea Pocar of the University of Massachusetts, Amherst.
The core of the sun is a fiery furnace so hot and dense that protons—nuclei of hydrogen, the sun’s main constituent—slam together with such force that they fuse, producing a deuterium nucleus (heavy hydrogen, made of a proton and a neutron) plus an antielectron and a neutrino. This is the start of a whole sequence of reactions: Protons collide with deuterium to produce helium-3; helium-3s combine to give helium-4 plus protons; other reactions produce lithium, beryllium, and boron. Many of these reactions produce neutrinos, but the vast majority of the neutrino flux from the sun is produced by the original proton-proton, or pp, reaction. “The pp reaction is the most basic process. Everything that goes on in the sun stems from it,” says Steve Biller of the University of Oxford and U.K. spokesperson for the SNO+ neutrino detector under construction in Canada, who was not involved in the new work.
Researchers have been detecting neutrinos since the 1960s. Initially, a two-thirds deficit in the detection rate confused the results. It turned out that neutrinos could transform from one type to another as they fly through space, but detectors were sensitive to only one of the three types. Once this “neutrino problem” was resolved, neutrino observatories went on to detect neutrinos from almost all the predicted reactions in the sun—but not the pp reaction. What makes the pp reaction hard is that the neutrinos have very low energy that is about the same as the energy of various radioactive decays that happen on Earth, making it easy for an earthbound detector to confuse a decay with a neutrino event. “Detecting neutrinos of this kind is an almost impossible thing to do. You need very low background levels and a lot of patience,” Biller says.
The Borexino detector at the Gran Sasso National Laboratory, 1400 meters below the Italian Apennines, is made up of a spherical transparent vessel filled with 300 tonnes of highly pure pseudocumene, a benzene-like liquid. Neutrinos pass easily through the overlying rock, but occasionally one will hit a nucleus in this “scintillator” liquid, producing a flash of light that is detected by an array of detectors positioned all around the sphere. Such detectors are always situated deep underground, to protect them from cosmic rays, and are surrounded by buffer layers of liquids to fend off radioactive decays in the rocks.
Despite these efforts, to detect pp neutrinos the Borexino collaboration had to go through an especially lengthy purification campaign to reduce the levels of radioactive contaminants in the scintillator liquid—particularly krypton-85, a byproduct of nuclear testing and reprocessing that now pervades the atmosphere and produces a decay signal very similar to that from the arrival of a pp neutrino. “Any tiny air leak and krypton-85 will get inside,” Pocar says. The researchers, Biller says, “really pushed the cutting edge, achieving ridiculously low levels of radioactive contamination.”
There followed a year and a half of data collection and a year of analysis “to show it was not background or a detector effect,” Pocar says. After painstakingly removing multiple sources of background signals, the team was left with a neutrino flux of 66 billion per square centimeter per second, close to the standard solar model prediction of 60 billion, they report online today in Nature.
“They did a stellar job in doing this—incredibly impressive,” Biller says. “They’re peeling back the branches to get to the trunk of the main process.”
|
Given an array, the task is to reverse the array without using the subtract sign ‘-’ anywhere in your code. Reversing an array is not difficult in itself; the main challenge is to avoid the ‘-’ operator.
Asked in: Moonfrog Interview
Below are different approaches:
Method 1 (Using a vector):
1- Store the array elements into a vector in C++.
2- Reverse the vector using the predefined reverse function.
3- Store the reversed elements back into the array.
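A minimal C++ sketch of this approach (the sample input {5, 3, 7, 2, 1, 6} is assumed here so that it reproduces the output shown for the later methods):

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int arr[] = { 5, 3, 7, 2, 1, 6 };
    int n = sizeof(arr) / sizeof(arr[0]);

    // 1) Copy the array into a vector.
    vector<int> v(arr, arr + n);

    // 2) Reverse the vector with the predefined function.
    reverse(v.begin(), v.end());

    // 3) Copy the reversed elements back into the array.
    for (int i = 0; i < n; i++)
        arr[i] = v[i];

    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";   // prints: 6 1 2 7 3 5
    return 0;
}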
Method 2 (Using a stack):
1- Push the array elements onto a stack.
2- As the stack follows Last In First Out order, popping the elements from the top of the stack back into the array stores them in reverse order.
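A sketch of the stack approach under the same assumed input:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int arr[] = { 5, 3, 7, 2, 1, 6 };
    int n = sizeof(arr) / sizeof(arr[0]);

    // Push every element onto a stack...
    stack<int> s;
    for (int i = 0; i < n; i++)
        s.push(arr[i]);

    // ...then pop them back: Last In First Out yields reverse order.
    for (int i = 0; i < n; i++) {
        arr[i] = s.top();
        s.pop();
    }

    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";   // prints: 6 1 2 7 3 5
    return 0;
}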
Method 3 (Using INT_MIN/INT_MAX):
1- In this method, the idea is to obtain a negative step without writing a minus sign, by storing it in a variable.
2- The statement x = (INT_MIN / INT_MAX) places -1 in the variable x.
3- INT_MIN and INT_MAX have nearly equal magnitudes but opposite signs, so integer division truncates their quotient to -1.
4- ‘x’ can then be used to decrement the index from the last element.
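A sketch of this trick; note that no ‘-’ character appears in the code itself:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int arr[] = { 5, 3, 7, 2, 1, 6 };
    int n = sizeof(arr) / sizeof(arr[0]);

    // Integer division truncates INT_MIN / INT_MAX to a quotient of
    // negative one, giving a backward step without a subtract sign.
    int x = INT_MIN / INT_MAX;

    // j starts at the last index (n + x) and walks backwards
    // by repeatedly adding x.
    for (int i = 0, j = n + x; i < j; i++, j = j + x)
        swap(arr[i], arr[j]);

    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    return 0;
}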
Output: 6 1 2 7 3 5
Method 4 (Using bitwise operators): the idea is to use a bitwise operator to implement subtraction, i.e.
A – B = A + ~B + 1
so i-- can be written as i = i + ~1 + 1.
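A sketch of the bitwise approach, again with no ‘-’ in the code:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int arr[] = { 5, 3, 7, 2, 1, 6 };
    int n = sizeof(arr) / sizeof(arr[0]);

    // Twos complement identity: subtracting B equals adding ~B + 1,
    // so stepping an index down by one becomes adding (~1 + 1).
    for (int i = 0, j = n + ~1 + 1; i < j; i++, j = j + ~1 + 1)
        swap(arr[i], arr[j]);

    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    return 0;
}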
Output: 6 1 2 7 3 5
|
The explanation for the Spanish conquest of the Aztec Empire involves several interconnected factors. In this essay we will examine the most important of them. Technological superiority, of course, but also the Aztec domination system that led many Indians to see the Spaniards as their liberators, the spread of European diseases, the translation system that enabled Cortes to establish relations with local peoples, the belief of the Aztec emperor Moctezuma that the Spanish were gods, and the alliances that Cortes forged with Indian peoples such as the Tlaxcalans. The latter is probably the most important reason, as it provided Cortes with thousands of soldiers, creating an army capable of defeating the Aztecs.
Spanish military superiority obviously helped them in the conquest of the Aztec Empire. They had huge advantages, as their cavalry, firepower and steel outdid the Aztecs' military technology in every respect. Although Cortes arrived in continental America with only 508 men, the Aztecs and local tribes were frightened by European weapons, as they had never seen cannons or horses before. Spanish weapons consisted of pikes and swords made of hard Toledo steel, much stronger than any of the locals' weapons. Another key advantage Cortes' men had over the Aztecs was armour. Indian projectiles could do nothing against Spanish steel armour, which even blunted blows from the Aztecs' obsidian swords. With that armour, the Spanish faced much less risk of death, while the Indians, lacking any armour, were extremely vulnerable. European crossbows had around double the range of the Indian ones, as well as needing less training and being more powerful. The firearms of the day, the harquebuses, did not have a very effective range, but they delivered tremendous power. The combination of firepower and crossbows allowed the Spanish to fire on the unarmoured Indians with deadly effect. Aboard the Spanish ships were falconets able to reach a maximum range of 2 km. But the strong advantage held by the Spanish was not just the physical force of their weapons but also the psychological effect that European weapons had on the Amerindians. The gunpowder of the falconets and harquebuses terrified the Indians, who had never seen anything of the kind before. The cavalry also contributed to this psychological disturbance; for example, at the battle of Otumba a surprise cavalry charge sent the frightened Indians running in terror.
But weaponry superiority, though it was a key factor, cannot by itself explain the whole process of conquest. Other factors that may have played a role as significant as, or even more important than, technology should be explored.
To begin with, the domination system of the Aztec empire was based mainly on a cruel tribute system, which allowed Cortes to be seen as a liberator by some of those under Aztec rule. Human sacrifices were fairly common, and they were feared by many of the people. The Aztecs sacrificed between 10,000 and 50,000 people every year in an effort to nourish the sun and their gods. Most of those sacrificed were war prisoners who had fought against the Aztecs, but they were not the only ones: common people, both adults and children, were also sacrificed when needed. Cortes, as a Catholic, was disgusted by human sacrifice, which won him Indian allies and respect among local tribes that opposed and feared the Aztecs. Many decided to follow Cortes because they shared his views on human sacrifice. To them, Cortes appeared as a liberator from the tyranny of Aztec rule.
Secondly, another important cause of Cortes' success was the rapid spread of European diseases among the Aztecs. Although this may not seem a very significant reason, it actually benefited the Spanish through the population reduction it entailed. The Europeans were already resistant to those diseases, so they suffered far less. The Aztecs, on the contrary, were increasingly demoralized by a mysterious illness that killed them but not the Spaniards, as if their enemy were endowed with some kind of magical invincibility. In fact, smallpox was the main reason the Aztecs had to stop pursuing the Spanish around Lake Texcoco after the ‘Noche Triste’. However, Cortes' allied Indian forces also suffered from the disease, which meant important losses on his side too.
Thirdly, another reason the Spanish were able to defeat the Aztecs was surely their translators, Doña Marina, also known as ‘La Malinche’, and Geronimo de Aguilar. After a short period fighting the Tabasco people, the Spanish exchanged gifts with them. One of the presents for the Spanish was a Tabasco girl, who would be baptized and named Doña Marina. She was able to understand and speak some of the local Indian languages, including Nahuatl, the language of the Aztecs. Geronimo de Aguilar also played an important role. He had been shipwrecked on the coast of Mexico in 1511 and, during his time in the region, had learned some local dialects. Cortes found the combination of Marina and Aguilar extremely useful. The translation process consisted of two phases: first, Marina translated the local language into a dialect that Aguilar could understand, and then Aguilar translated the message into Spanish. Marina later learned Spanish herself. She also played a very important role when Cortes wanted to gain local allies, helping to explain what the Spanish could do for them. Doña Marina was certainly a key aid to Cortes in the defeat of the Aztec Empire.
Fourthly, Moctezuma believed that Cortes was the great god Quetzalcoatl, finally returning to the Aztec land. This greatly benefited Cortes, as Moctezuma was overwhelmed by confusion in trying to work out whether Cortes was actually Quetzalcoatl or not. This state of perplexity gave Cortes extra time to act while Moctezuma was still deliberating. Moctezuma considered that many signs proved Cortes to be the god Quetzalcoatl. Moreover, the Spanish artillery may have represented, to the eyes of the Aztecs, the god's attributes of thunder and lightning.
On the other hand, the Aztecs also began to find several signs showing that Cortes was not the god prophesied for so long. Cortes deferred to a superior, King Carlos I, while Quetzalcoatl would have had none. Furthermore, Cortes was unable to speak Nahuatl, the Aztec language, which Moctezuma found strange, as it was not plausible for a god to forget his own tongue. Moctezuma's mental state must have been one of total confusion, and he probably had no idea what he was supposed to do. He was impressed by the Spanish dominance over the Tlaxcalans, a people the Aztecs had never managed to defeat. He felt all the cosmological foundations of his civilization collapsing as the returned gods destroyed his people. Moctezuma's confusion made it easy for the Spanish to capture him and destroy his empire.
Finally, and probably most importantly, Cortes was able to forge alliances with local Indian peoples such as the Tlaxcalans, who actually made up much of Cortes' forces. By pursuing shrewd anti-Aztec policies, Cortes won many valuable local allies to his side. The Tlaxcalans, for example, had spent almost a century fighting the Aztecs; the Spanish thus represented the long-awaited chance to break the political status quo of the region and overthrow Aztec power. The Tlaxcalans provided Cortes with valuable, detailed information about Tenochtitlan, especially regarding the causeways over the lake that led to the city. Cortes obtained the support of the whole Tlaxcalan state, an extremely vital ally. Around 50,000 Tlaxcalans, along with 25,000 Indian allies from other tribes, supported the Spanish in the recapture of Tenochtitlan. These allies contributed in other ways than fighting: at the ‘Noche Triste’, they were responsible for carrying bridge-building equipment and artillery. As we can see, if Cortes had not profited from rivalries among the locals and incorporated many of them into his army, he would never have had a chance of defeating an empire like the Aztec, with a population of millions.
As we have seen above, the reasons for the defeat of the Aztecs are multiple and complex, and we cannot assume that technological superiority played the key role above all the others. In fact, other Indian peoples adopted cavalry and firearms in the centuries that followed, and they were defeated as well. The shock these new weapons produced on the Aztecs should certainly be taken into account, but the idea that a few hundred Spaniards defeated an entire civilization of millions for that reason alone is naive. The main reasons should instead be sought in the alliance with the Tlaxcalans and in Moctezuma's inability to organize a resistance. Without the technological factor, it still seems plausible that the Spanish could have conquered the Aztecs with the support of other Indians and the helplessness of Moctezuma. With only the technological factor, and without these others, Cortes' expedition would never have been successful.
|
Many prey species exhibit defensive traits to decrease their chances of predation. Conspicuous eye-spots, concentric rings of contrasting colours, are one type of defensive trait that some species exhibit to deter predators. We examined the function of eye-spots in Lepidoptera to determine whether they are effective at deterring predators because they resemble eyes (‘eye mimicry hypothesis’) or are highly salient (‘conspicuous signal hypothesis’). We recorded the gaze behaviour of men and women as they viewed natural images of butterflies and moths as well as images in which the eye-spots of these insects were modified. The eye-spots were modified by removing them, scrambling their colours, or replacing them with elliptical or triangular shapes that had either dark or light centres. Participants were generally more likely to look at, spend more time looking at and be faster to first fixate the eye-spots of butterflies and moths that were natural compared with ones that were modified, including the elliptical eye-spots with dark centres that most resembled eyes as well as the scrambled eye-spots that had the same contrast as the natural eye-spots. Participants were most likely to look at eye-spots that were numerous, had a large surface area and were located close to the insects' heads. Participants' pupils were larger when viewing eye-spots compared with the rest of the insects' body, suggesting a greater arousal when viewing eye-spots. Our results provide some support for the conspicuous signal hypothesis (and minimal support for the eye mimicry hypothesis) and suggest that eye-spots may be effective at deterring predators because they are highly conspicuous signals that draw attention.
The coevolution of predators and prey can lead to dramatic changes in phenotypic traits [1,2]. Selection favours predators that efficiently capture prey and prey that successfully avoid predators. These opposing forces can lead to an arms race in which traits of predators and prey change to counteract decreased fitness levels. However, costs may limit the evolution of these modified traits. Many prey species currently exhibit defensive traits that reduce their chances of predation.
Morphological adaptations are one defensive trait that prey use to minimize predation risk. Prey often possess specific traits, like quills or spines, which inflict injury upon potential predators. Their skin can be so thick that it is difficult for predators to puncture and it can be thicker in especially vulnerable areas of the body. Overall body size influences predation rates as well since predators may be unable to consume relatively large prey. Prey coloration can also reduce predation. Some animals have cryptic coloration that closely matches the environments in which they live, making detection by predators less likely. Alternatively, animals may exhibit bright coloration to advertise their unpalatability. Some species, such as butterflies, moths and fish, display conspicuous eye-spots, concentric rings of contrasting colours, that confer survival advantages.
Eye-spots can be effective at reducing predation through intimidation or deflection (reviewed in ). The intimidation hypothesis proposes that eye-spots, generally those that are large and central, are effective because they frighten the predator. The eye-spots can frighten the predator because they resemble the eyes of vertebrate predators (‘eye mimicry hypothesis’). When predators locate a prey item with eye-spots, the predators may be deterred or startled because the eye-spots resemble the eyes of their own predators, thus giving prey time to escape. The eye-spots can also frighten the predator because they are high-contrast and colourful markings (‘conspicuous signal hypothesis’) [15,16]. Low-level visual features, such as colour, form and luminance, exogenously capture attention [18,19]. Eye-spots may therefore be effective because they have low-level features that automatically draw the attention of predators and potentially decrease predation risk. Some evidence supports the conspicuous signal hypothesis. Avian predators are less likely to eat artificial moths with markings that have high contrast, regardless of whether these markings resemble eyes [15,20,21]. However, other studies find no support for the conspicuous signal hypothesis but support the eye mimicry hypothesis. De Bona et al. found that natural eye-spots were equally effective compared with real predator eyes at eliciting aversive reactions in great tits. By contrast, other studies find mixed support for both hypotheses. The deflection hypothesis proposes that eye-spots, generally those that are small and marginal, are effective because they attract attention away from the preys' bodies towards the non-vital wings to manipulate where attacking predators direct their strikes, increasing the probability of prey escaping predation. While some studies have found that marginal eye-spots do not affect where predators direct their attacks, other studies have found that predators direct their attacks towards marginal eye-spots and away from the insects' bodies.
We examined the function of eye-spots in moths and butterflies to evaluate how a potential predator directs its visual attention when encountering prey with eye-spots. Visual search behaviour can be readily investigated with the use of eye-tracking, providing a powerful method for investigating the relationship between prey markings and predator visual attention. While eye-tracking can be performed in non-human animals (e.g. [26,27]), such studies in humans have fewer technical barriers and we therefore used humans as our ‘predators’. Furthermore, there are similarities between humans and other animals in their perceptual systems, such as receptive fields, that can result in similar strategies for prey selection. Human subjects have been used as ‘predators’ in previous studies exploring protective coloration [29–32] and have been shown to share some perceptual abilities with other predators.
The gaze behaviour of men and women was recorded as they viewed images of butterflies and moths. The images were displayed in their natural form (with the eye-spots intact) or were artificially manipulated (eye-spots were removed, scrambled, or replaced with elliptical or triangular eye-spots with dark or light centres). If eye-spots are effective because they mimic the eyes of predators, we predict predators will direct more attention to the eye-spot regions of moths and butterflies that have their eye-spots intact than to moths and butterflies that have their eye-spots removed or have scrambled/triangular versions of eye-spots. We predict participants will spend similar amounts of time looking at the eye-spot regions of moths and butterflies that had their eye-spots intact and that had elliptical versions of eye-spots with a dark centre because both of these eye-spots resemble eyes. If eye-spots are effective because they are highly salient, we predict predators will direct similar amounts of attention towards the eye-spot regions of moths and butterflies that exhibited eye-spots and any type of modified eye-spot (scrambled, elliptical or triangular) with high contrast (contrast between the eye-spot and the rest of the insect's body).
2. Material and methods
2.1 Participants
Twenty-three men and 16 women participated in this study at Duke University from November 2012 to March 2013. They were all of European heritage and between the ages of 18 and 30 years old (mean±s.e.: 21.9±0.41 years). We used flyers and e-mails to recruit participants. They earned $15 for their participation.
2.2 Butterfly and moth images
We obtained photographs of 70 species of butterflies and moths (table 1) that exhibit eye-spots on their wings from books and online sources. The 70 species belong to seven different families (Carthaeidae (1), Noctuidae (3), Nymphalidae (38), Papilionidae (1), Riodinidae (6), Saturniidae (20) and Sphingidae (1)). The eye-spots were markings on the insect's body parts that were circular and had concentric rings of contrasting colours. We isolated each ‘original’ image of a butterfly or moth and then centred it within a white image (1280×1024 pixels) such that the width of the butterfly or moth extended to the width of the white image (33.72 cm wide; Adobe Photoshop v. 7.0; figure 1a). We then created a ‘no eye-spot’ image in which we covered all eye-spots with colours that were immediately surrounding the eye-spots (using the clone stamp tool in Photoshop) so that eye-spots were no longer present (figure 1b).
Finally, we created five modified versions of each image by overlaying shapes atop the ‘no eye-spot’ image in the exact region where the eye-spots had been located by using custom Matlab (The Mathworks, Natick, MA, USA) scripts. (i) The ‘scrambled’ image was created by randomly repositioning every pixel within ellipses that covered each eye-spot (figure 1c). (ii) The ‘ellipse light’ image had two concentric ellipses atop the eye-spot regions, using a dark colour for the outer ellipse and a light colour for the inner ellipse (figure 1d). (iii) The ‘ellipse dark’ image was identical to the ‘ellipse light’ image except the dark ellipse was inside the light ellipse (figure 1e). (iv) The ‘triangle light’ image was similar to the ‘ellipse light’ image except that triangle shapes were used instead of ellipses (figure 1f). (v) The ‘triangle dark’ image was similar to the ‘triangle light’ image except the dark triangle was inside the light triangle (figure 1g). For the ‘ellipse’ and ‘triangle’ images, the light and dark colours were chosen for each image as the colours at the 90th and 10th percentile in overall brightness within the image in order to better match the coloration of the insect. The inner shape was 50% the size (in linear dimensions) of the outer shape. For the ‘triangle’ images, the triangles were equal in surface area to the ellipses from the corresponding ‘ellipse’ images and were formed from equilateral triangles stretched to match the aspect ratio of the ellipses.
2.3 Experimental procedure
The experimenter (J.L.Y.) told participants that they would be seeing a series of images of butterflies and moths. They were instructed to imagine that they were outside searching for food and specifically looking to find butterflies and moths to eat. They saw seven blocks of images and each block included 70 images. The 70 images within a block included 10 images of each of the seven image versions (‘original’, ‘no eye-spot’, ‘scrambled’, ‘ellipse light’, ‘ellipse dark’, ‘triangle light’ and ‘triangle dark’). Within an image block, a given species of butterfly or moth only appeared one time and was presented in a randomized order. Block order was randomized across participants.
For each image, participants initially saw a white screen with a black dot that was located at the bottom. They used the mouse to click atop the dot so that they were initially fixating on an area where the butterfly or moth stimulus was not present. After clicking the mouse, an image of a butterfly or moth appeared for 3 s. To ensure that participants were actively engaged in the task, after the image disappeared, they had to indicate how likely they would be to select that butterfly or moth as a food source on a scale from 1 (very unlikely) to 10 (very likely). Given that primates, including humans, selectively forage for food, we would expect them to exhibit preferences for different food items.
2.4 Eye-tracking
We used a Tobii T60 eye-tracker along with Tobii Studio 3.1 and 3.2 (Tobii Technology, Inc., Sweden) to present our images and record the gaze of participants (accuracy: 0.5°; data rate: 60 Hz; binocular tracking). We told participants that we were measuring their pupil size but did not tell them that their eye movements were being monitored until after they finished their trial. The images were displayed using Tobii Studio software (v. 3.1 or 3.2) on a 1280×1024 pixel monitor (43.18 cm diagonal). Participants were positioned approximately 60 cm from the screen and a chin rest was used to stabilize their heads. The equipment was calibrated before each trial began with nine points. We used the Tobii Velocity-Threshold Identification filter (I-VT filter; gap fill-in: 75 ms; eye selection: average; noise reduction: median; noise reduction samples: 7; velocity calculator window: 20 ms; I-VT classifier threshold: 30° s⁻¹; merge adjacent time: 75 ms; merge adjacent angle: 0.5°; discard short fixations: 60 ms) to classify fixations and saccades. This filter classifies eye movements as fixations or saccades based upon the velocity of eye movements; eye movements below and above the velocity threshold (30° s⁻¹ in this study) are classified as fixations and saccades, respectively. Eye-tracking data consisted of coordinates of where participants were known to be looking during each sampling point.
2.5 Measurements and statistical analysis
Using a customized Matlab program, we drew regions of interest (ROI) around each eye-spot region. In the ‘original’, ‘scrambled’, ‘ellipse light’ and ‘ellipse dark’ images, the ROIs were ellipses that encompassed each eye-spot or modified eye-spot. In the ‘no eye-spot’ image, the ROIs were the same as the ROIs in the ‘original’ image of a given species even though the eye-spots were not visible. In the ‘triangle light’ and ‘triangle dark’ images, the ROIs were triangles that encompassed each modified eye-spot. For each fixation coordinate, we determined which ROI it fell within to determine whether the participant was looking at an eye-spot or modified eye-spot region. We calculated two metrics: the amount of time that elapsed before participants first fixated on the eye-spot or modified eye-spot region and the percentage of time (out of the entire time that the subject was viewing the image) that the subject was fixating an eye-spot or modified eye-spot region.
We calculated Weber contrast using custom Matlab scripts to determine the contrast between the eye-spot regions and the surrounding bodies of the insects. It was calculated as the difference between the mean pixel intensity of the eye-spot region and the mean pixel intensity of the surrounding body divided by the mean pixel intensity of the surrounding body . We measured Weber contrast between each eye-spot and the surrounding body; then we took the mean of the contrasts for each butterfly or moth image. We classified the images as having eye-spots that were darker than the surrounding body (negative Weber contrast) or lighter than the surrounding body (positive Weber contrast); we then calculated the absolute value of the Weber contrast to determine the magnitude of the contrast.
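As a minimal sketch of this computation (an illustration only; the authors used custom Matlab scripts, and the step of extracting pixel intensities from the images is omitted here):

#include <vector>
#include <numeric>

// Weber contrast between an eye-spot region and the surrounding body,
// given grayscale pixel intensities for each region. Negative values
// mean the eye-spot is darker than the body; positive, lighter.
double weberContrast(const std::vector<double>& eyespotPixels,
                     const std::vector<double>& bodyPixels)
{
    double eyespotMean = std::accumulate(eyespotPixels.begin(),
                                         eyespotPixels.end(), 0.0)
                         / eyespotPixels.size();
    double bodyMean = std::accumulate(bodyPixels.begin(),
                                      bodyPixels.end(), 0.0)
                      / bodyPixels.size();
    return (eyespotMean - bodyMean) / bodyMean;
}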
We analysed our data in two steps using SAS (v. 9.3; SAS Institute Inc., Cary, NC, USA). First, we used a generalized linear mixed model (PROC GLIMMIX) to assess whether our independent variables influenced whether or not participants directed their gaze towards eye-spot regions. The independent variables included the treatment (‘original’, ‘no eye-spot’, ‘scrambled’, ‘ellipse light’, ‘ellipse dark’, ‘triangle light’ and ‘triangle dark’), gender of the participant (male or female), the interaction between the treatment and gender of the participant, whether the image was a butterfly or moth, the phylogenetic family of the insect, the total number of eye-spots/modified eye-spots on an image, the mean percentage of surface area each eye-spot/modified eye-spot occupied out of the entire surface area of the insect (‘relative surface area of eye-spots’), the mean distance between the head of the insect and the eye-spots (‘marginality of eye-spots’), contrast type (whether the eye-spots were darker or lighter than the surrounding body), the absolute value of Weber contrast, the interaction between contrast type and the absolute value of Weber contrast, and the edibility rating. Treatment and species name (table 1) were nested within subject identity, which was nested within gender, and were included as random effects.
Second, we used linear mixed-effects models with repeated measures (PROC MIXED) using only the portion of data in which participants directed gaze towards the eye-spot regions (images in which the participants never looked at any of the eye-spot regions were removed so that underlying model assumptions were met). We examined whether the time viewing eye-spots/modified eye-spots and the latency to fixate an eye-spot/modified eye-spot were influenced by the same independent variables as used in the first step. Treatment and species name (table 1) were nested within subject identity, which was nested within gender, and were included as random effects. We examined all pairwise comparisons among treatments and created contrasts to evaluate these differences. Pairwise comparisons were considered significant if the p-value was less than the Bonferroni-corrected values.
We also examined the relationship between pupil size (average of the left and right pupil) and the part of the butterfly or moth that the participant was gazing at (body versus eye-spot), treatment, luminance of the body of the butterfly or moth (mean pixel intensity of the body), luminance of the eye-spot regions (mean pixel intensity of the eye-spot region), gender of the participant, interaction between the gender of the participant and treatment, and the edibility rating. The perceived luminance may be slightly different from the pixel luminance because of properties associated with the computer screen.
The number of eye-spots on the butterfly and moth images ranged from two to 24 eye-spots (mean number of eye-spots±s.e.: 5.8±0.59). They covered approximately a tenth of the insect's body (mean±s.e.: 8.8±0.7%; range: 0.03–24.9%). Overall, eye-spots were fixated for approximately a tenth of the viewing time (mean percentage of time looking at eye-spot regions: 6.9±0.12%) but the range was large (0–89.4%). The natural eye-spots on butterflies and moths were usually darker than the surrounding body and had a high contrast (mean Weber contrast±s.e.: −0.12±0.02; range: −0.45 to 0.26). The elliptical and triangular eye-spots with dark centres had the same contrast (mean Weber contrasts±s.e.: 0.18±0.03) as did the elliptical and triangular eye-spots with light centres (−0.42±0.02). The ‘scrambled’ eye-spots and ‘no eye-spots’ had similar magnitudes of contrast but in opposite directions (‘scrambled’: −0.12±0.02; ‘no eye-spot’: 0.12±0.02).
3.1 Probability of looking
The type of treatment (‘original’, ‘no eye-spot’, ‘scrambled’, ‘ellipse light’, ‘ellipse dark’, ‘triangle light’ and ‘triangle dark’) influenced whether participants looked at eye-spot regions or not (figure 2a and table 2). Participants were more likely to look at the eye-spot regions in moths compared with butterflies. They were also more likely to look at the eye-spot regions when there were more eye-spots, the mean surface area of the eye-spots was greater, and the eye-spots were closer to the head of the insect. Participants were more likely to look at eye-spot regions that were darker than the surrounding body but the magnitude of contrast did not affect whether participants looked at eye-spot regions or not. Participants indicated that they were more likely to select the butterfly or moth as a food source when they looked at the eye-spot regions.
Participants were more likely to look at the eye-spot regions in images that exhibited natural eye-spots compared with images with no eye-spots or modified eye-spots (‘scrambled’, ‘ellipse light’, ‘ellipse dark’, ‘triangle light’ and ‘triangle dark’; table 3 and figure 2a). They were statistically less likely to look at eye-spot regions in the images lacking eye-spots compared with all other images. The scrambled eye-spots were more likely to draw attention than the elliptical or triangular eye-spots. Participants were not more likely to look at the elliptical and triangular eye-spots with dark centres compared with light centres.
3.2 Latency to fixate
The latency to initially fixate an eye-spot region depended on the treatment (table 2 and figure 2b). Participants were quicker to look at the eye-spot regions of moths than those of butterflies. They were faster to fixate eye-spots when there were more eye-spots, the mean surface area of the eye-spots was greater, and the eye-spots were closer to the head of the insect. The contrast of the eye-spots did not influence latency to fixate the eye-spot regions. The latency to fixate eye-spot regions was unrelated to participants' indication of whether they would select the butterfly or moth as a food source.
Participants were faster to detect the eye-spot regions in the images that exhibited natural eye-spots compared with images with no eye-spots or modified eye-spots (with the exception of ‘ellipse light’; table 3 and figure 2b). The eye-spot regions in the ‘scrambled’ images were also more quickly detected than those in the ‘triangle light’ images. Similar amounts of time were spent looking at elliptical and triangular eye-spots regardless of whether the inside of the respective shapes was light or dark coloured, with the exception of ‘ellipse light’ and ‘triangle light’.
3.3 Time looking
The amount of time that participants fixated eye-spot regions varied depending on the treatment (table 2 and figure 2c). They directed more attention towards the eye-spot regions when there were more eye-spots, the mean surface area of the eye-spots was greater, and the eye-spots were closer to the head of the insect. Neither the type of contrast nor the magnitude of contrast was a significant predictor of the amount of time participants fixated eye-spots. Participants indicated that they were more likely to select the butterfly or moth as a food source when they spent more time looking at the eye-spot regions.
Participants spent more time looking at the eye-spot regions in the images that exhibited natural eye-spots compared with images with no eye-spots or modified eye-spots (with the exception of scrambled eye-spots). They spent less time looking at eye-spot regions in images that lacked eye-spots compared with ‘scrambled’, ‘ellipse dark’ and ‘ellipse light’ images. They directed more attention towards ‘scrambled’ eye-spots than the other modified eye-spots.
3.4 Pupil size
Pupil size was related to treatment (F6,256=5.12, p<0.0001), luminance of the body of the butterfly or moth (F1,4829=454.9, p<0.0001) and the part of the butterfly or moth that the participant was gazing at (F1,17000=13.42, p=0.0003), but unrelated to the gender of the participant (F1,37=0.50, p=0.49), the interaction between the gender of the participant and treatment (F6,211=1.04, p=0.40), the luminance of the eye-spot regions (F1,10000=3.35, p=0.067) and the edibility rating (F1,11000=1.11, p=0.29). Pupil size was larger when participants were gazing at the eye-spot regions compared with the surrounding body (LSMean±s.e.: eye-spots: 2.68±0.046 mm; body: 2.67±0.046 mm) but was smaller when the luminance of the body was bright.
Even though the eye-spot regions only occupied a fraction of the butterflies' and moths' bodies, they still attracted large amounts of attention. Humans were more likely to look at the natural eye-spots of butterflies and moths than the eye-spots of butterflies and moths that had modified eye-spots (scrambled, elliptical or triangular). They were faster to initially fixate the eye-spots of butterflies and moths with their natural eye-spots compared with butterflies and moths that had modified versions of eye-spots, except for the elliptical eye-spots with light centres. They also spent the most time looking at eye-spots of butterflies and moths with their natural eye-spots compared with butterflies and moths that had modified versions of eye-spots (with the exception of the scrambled eye-spots). They were not simply looking at eye-spots because they were located in salient positions on the insects. When the eye-spots were removed, participants had a lower probability of looking at the eye-spot regions compared with when the natural eye-spots were still intact. Interestingly, participants' pupil sizes were larger when they viewed the eye-spot regions compared with the rest of the butterfly or moth body, indicating a greater arousal level (potentially due to intimidation) when viewing eye-spots.
We found mixed support for the conspicuous signal hypothesis, which posits that eye-spots are effective at deterring predators through intimidation because they are highly salient signals [15,16,20]. The natural eye-spots of the butterflies and moths did not have the highest contrast levels compared with the other modified eye-spots. Despite this, they still attracted the most attention and humans were generally quickest to fixate them compared with the eye-spot regions of butterflies and moths lacking eye-spots or exhibiting modified eye-spots. Furthermore, eye-spots with scrambled pixels had the same contrast as natural eye-spots but humans directed less attention towards the scrambled eye-spots compared with the natural eye-spots. Interestingly, the natural eye-spots of butterflies and moths were usually darker than the surrounding body of the insect and had a high contrast (large and negative Weber contrast). Previous research in humans shows that luminance decrements (i.e. negative Weber contrast) are more visually salient than luminance increments (i.e. positive Weber contrast) [36,37], and contrast may therefore be contributing to eye-spot salience but not driving its effect.
The number, size and marginality of eye-spots affected gaze behaviours. Humans were more likely to look at eye-spot regions, be faster to look at eye-spot regions and spend more time looking at eye-spot regions when the butterfly or moth had more eye-spots and those eye-spots were larger. Consistent with this gaze behaviour, Stevens et al. found that artificial prey exhibiting more eye-spots and larger eye-spots were more likely to survive than prey exhibiting eye-spots with opposite properties. This suggests that predators' visual attention towards numerous and large eye-spots impacts predator hunting success. Numerous and large eye-spots may overload the sensory system of predators, supporting the conspicuous signal hypothesis. In addition, participants spent more time looking at eye-spot regions and were faster to first fixate eye-spot regions when the eye-spots were located closest to the heads of the insects, suggesting that marginal eye-spots are not the most effective at drawing predator attention and may therefore have limited effects on deflecting attack.
We found minimal support for the eye mimicry hypothesis, which states that eye-spots are effective at reducing predation through intimidation because they resemble the eyes of real predators. Participants had a higher probability of looking at natural eye-spots and spent more time looking at natural eye-spots compared with elliptical eye-spots (with dark centres) that also resembled eyes. In addition, participants spent similar amounts of time looking at and initially fixating elliptical eye-spots regardless of whether the inner ellipse was dark (and thus more closely mimicked a real eye) or light. Eye-spots with scrambled pixels, which did not resemble eyes except in their elliptical shape, attracted more attention than elliptical eye-spots with dark centres. Furthermore, elliptical eye-spots with dark centres did not have higher probabilities of being fixated and were not fixated for longer amounts of time than triangular eye-spots with dark centres, the latter of which did not resemble eyes in shape but exhibited similar contrast levels to the elliptical eye-spots. Therefore, the resemblance of eye-spots to real eyes seems unlikely to be the driving factor for attracting or maintaining attention because more eye-like stimuli (elliptical eye-spots with dark centres) did not attract more attention than less eye-like stimuli (elliptical eye-spots with light centres, triangular eye-spots and scrambled eye-spots; albeit natural eye-spots did attract more attention than any of these modified eye-spots). Similarly, avian predators were deterred by eye-spots regardless of whether the eye-spots were more or less eye-like. An experiment probing the cognitive biases of starlings also failed to find support for the eye mimicry hypothesis: starlings did not show an increased aversion to eye-spots after being exposed to alarm calls, suggesting that they did not categorize the eye-spots as a threat. Future experiments using modified eye-spots that more closely resemble the eyes of real predators (rather than using simplified ellipses) would be informative to further probe this hypothesis.
The visual systems of predators can significantly impact their abilities to detect and capture prey. Predators may be unable to see certain wavelengths that prey use when signalling with each other [40,41] and this may decrease the predators' abilities to detect that prey. Predators may also fail to note prey that mimic environmental features or blend in with their environments. Alternatively, predator attention may be drawn to conspicuous prey markings that can lead to predator startle responses. Rather than attacking the vital body region of prey, predators may even mistakenly target the conspicuous wing markings of prey and allow the prey to successfully escape. Our results demonstrate that the visual system of human predators can also be influenced by the appearance of prey. In particular, human attention was drawn towards eye-spot markings on butterflies and moths and this attention could affect their abilities to successfully capture prey in natural environments.
The Institutional Review Board of Duke University (no. 7646) approved this study. Written consent was obtained for all participants.
The data supporting this article are available in the Dryad Digital Repository: http://dx.doi.org/10.5061/dryad.r8dv2.
J.L.Y. conceived the project, collected the data and analysed the results. J.L.Y., G.K.A. and M.L.P. developed the methods and wrote the manuscript. All authors give their final approval for this version to be published.
We have no competing interests.
This project was not funded by any grant.
We thank the Department of Psychology and Neuroscience at Duke University for letting us use the eye-tracker and Matt Mielke for technical assistance. Nils Olav Handegard assisted us in calculating Weber contrast.
- Received April 21, 2015.
- Accepted May 22, 2015.
© 2015 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.
Planet Mars is also called the Red Planet or Red World.
Mars is reddish in colour and was named after the god of war of the ancient Romans. Mars is the only planet whose surface can be seen in detail from the Earth. Mars is the fourth closest planet to the Sun and the next planet beyond the Earth.
Planet Mars Facts:
Number of Satellites: 2 (Phobos and Deimos)
Rotation Period: 24 hours and 37 minutes
Temperature: -140 to 20 degrees Celsius (-220 to 68 Fahrenheit)
Length of Year: About 1 Earth-year and ten and a half months (687 days)
Diameter: 6796 km (4223 miles)
Axial Tilt: 25.19 degrees
Atmosphere: Mainly Carbon Dioxide
Escape Velocity: 5.027 km/s
Mass: 6.4185 × 10²³ kg (0.107 Earths – about 11% of Earth’s mass)
Volume: 1.6318 × 10¹¹ km³ (0.151 Earths – about 15% of Earth’s volume)
Mars is a terrestrial planet and is the second smallest planet in the Solar System. It is about half the size of Earth. It has a hard rocky surface that you could walk on. Mars’ surface is dry and much of it is covered with a reddish dust and rocks. Mars has two permanent polar ice caps. Like Earth it has seasons. It has the largest dust storms in the solar system.
Borealis Basin (North Polar Basin) in the northern hemisphere covers 40% of the planet and may be a giant impact crater.
The Martian atmosphere consists of carbon dioxide (95 percent), nitrogen (2.7 percent), argon (1.6 percent), oxygen (0.2 percent) and trace amounts of water vapor and carbon monoxide.
The surface gravity on Mars is only about 38% of the surface gravity on Earth. If you weigh 100 pounds on Earth then you would weigh only 38 pounds on Mars.
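As a quick check of that arithmetic, a tiny Python sketch (the 0.38 ratio is the figure quoted above; everything else is illustrative):

```python
MARS_GRAVITY_RATIO = 0.38  # Mars surface gravity as a fraction of Earth's

def weight_on_mars(earth_weight):
    # Weight scales in direct proportion to surface gravity.
    return earth_weight * MARS_GRAVITY_RATIO

print(weight_on_mars(100))  # -> 38.0 (pounds in, pounds out)
```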
Mars has two moons and their names are Deimos and Phobos.
NASA’s Mars Robotic Exploration:
Mars was explored in flybys by Mariner 4, 6 and 7 in the 1960s and by the orbiting Mariner 9 in 1971 before NASA mounted the ambitious Viking mission, which launched two orbiters and two landers to the planet in 1975. The landers found no chemical evidence of life. Mars Pathfinder landed on the planet on July 4, 1997, delivering a mobile robot rover that explored the immediate vicinity. Mars Global Surveyor is creating the highest-resolution map of the planet's surface.
Russia/Soviets Mars Robotic Exploration:
Started in the 1960s with the Mars Program. The last endeavour by Russia was Mars 96.
Planet Mars Spacecraft:
Viking 1 & 2: Successful Missions!
Phobos 1 and 2: Failed Phobos Probes
Mars Observer: Failed Mars Probe
Mars 96: Failed to leave Earth's orbit
Mars Global Surveyor:
Mars Express: Currently in orbit
Spirit and Opportunity rovers
Mars Reconnaissance Orbiter – launched Aug 2005
Mars Phoenix Lander – launched in 2007
Mars Science Laboratory – Curiosity Rover. Launched November 2011
Phobos-Grunt – Russian, 2011
MAVEN – 2013
Mangalyaan: first Indian mission – November 2013
InSight – (new) USA lander, 2018
More Facts on Planet Mars – Did you know?
Olympus Mons is the largest volcano in the solar system (550 km wide). It is also the tallest mountain on any planet in the Solar System.
The average distance from Mars to the Sun is 228 million km or 1.52 AU.
Martian Day (Sol) is 24 hours, 39 minutes and 35 seconds. It is 39 minutes and 35 seconds longer than a day on Earth. Sol refers to Solar Day.
Mars 3 lander (Soviet/Russian) was the first spacecraft to achieve a soft landing on Mars, on December 2, 1971.
How big would the sun look from Mars? The Sun appears about half the size it does from Earth.
Mariner 4 (USA) was the first successful flyby of planet Mars. It returned the first pictures of the Martian surface. It also captured the first images of another planet ever returned from deep space.
Mars: The Inside Story of the Red Planet by Heather Couper and Nigel Henbest
The Planet Mars Links
- Mars – Educational facts and history of the Red Planet.
- Mars News: by NASA
- Planet Mars, Mars Exploration and Mars Missions:
- NASA’s Journey to Mars: Plan and getting there.
- How to Get to Mars: Video
- Mars Society Australia:
- Google Mars:
- Nasa’s Mars Exploration Home Page:
Since 1957, the U.S. military has used the International Radiotelephony Spelling Alphabet, more commonly known as the NATO phonetic alphabet. The code words of this phonetic alphabet are as follows: alpha, bravo, Charlie, delta, echo, foxtrot, golf, hotel, India, Juliett, kilo, Lima, Mike, November, Oscar, Papa, Quebec, Romeo, Sierra, tango, uniform, Victor, whiskey, x-ray, Yankee and Zulu.
The NATO phonetic alphabet also prescribes different pronunciations for the numbers three, four, five and nine, which are "tree," "fow-er," "fife" and "niner," respectively. These alterations reduce the risk of confusing numerals with words; nine is difficult to distinguish from the German word "nein," for example.
Spelling alphabets allow for clearer communication in noisy environments or when speaking over the radio or telephone. While "b" and "d" may be difficult to distinguish, especially in the absence of visual cues and body language, "bravo" and "delta" are not. In general, the creators of spelling alphabets choose code words that are as distinct from each other as possible. This allows listeners to understand a message even when static or other interference cuts off part of a code word.
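As an illustration of how mechanical the encoding is, here is a small Python sketch that spells a word with the code words listed above (the dictionary is transcribed from this article, with uniform capitalization; everything else is illustrative):

```python
NATO = {
    "A": "Alpha", "B": "Bravo", "C": "Charlie", "D": "Delta",
    "E": "Echo", "F": "Foxtrot", "G": "Golf", "H": "Hotel",
    "I": "India", "J": "Juliett", "K": "Kilo", "L": "Lima",
    "M": "Mike", "N": "November", "O": "Oscar", "P": "Papa",
    "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango",
    "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray",
    "Y": "Yankee", "Z": "Zulu",
}

def spell(word):
    """Spell a word letter by letter; non-letters pass through as-is."""
    return " ".join(NATO.get(ch.upper(), ch) for ch in word)

print(spell("bd"))  # -> "Bravo Delta", unmistakable even over static
```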
The British Army created the first spelling alphabet in 1898. This alphabet only had code words for the most commonly misunderstood letters, such as "Emma" for "m" and "Esses" for "s."
Shapes and Colors: Rectangles, Squares, Triangles, Ovals and Circles
In this shape and color worksheet, students trace rectangles and color them brown, squares grey, triangles green, ovals neon green, and circles red.
2-Dimensional Shapes: Coloring Shapes
For beginners learning color, number, and shape concepts, this is a great way to solidify skills. They use a key at the top to determine which color (yellow, green, purple, or blue) to shade each shape (square, triangle, circle, and rectangle,...
Education for girls probably gives the best returns on investment in terms of development, having a positive impact on a number of areas.
It promotes health and welfare for the next generation, and can help to reduce poverty and slow down population growth.
Here are some of the reasons why some girls do not start school at all while yet others do not complete their schooling:
- Poverty: Poor families often decide their priorities at the expense of their daughters’ schooling.
- Child marriage: It is estimated that every day approximately 40,000 girls under the age of 18 are married off.
- Early pregnancy: Girls who become pregnant and have children often do not return to school.
- Gender-based violence: Girls are exposed to sexual harassment and violence on the way to school or at the school. Teachers and other school staff are often involved.
- Gender stereotypes and gendered attitudes: Traditional perceptions of gender roles that characterize society often mean that educating girls is not regarded as being equally relevant and valuable as educating boys.
- Lack of female teachers: Some parents in a number of countries or regions do not want to send their daughters to school, or remove them from school when they reach puberty, unless the school has one or more female teachers.
- Sanitary facilities: Many girls who begin at school leave when they reach puberty due to a lack of sanitary facilities.
Girls’ education promotes sustainable development
When girls gain access to education they acquire important knowledge that gives them greater potential to get a job and an income when they are adults. Even with limited schooling the impact of education can be observed.
Calculations show that for each additional year of schooling, a girl in a low-income country will increase her future income by 10−20 per cent (Hanushek, EA et al., 2011). As a result girls can also play a more active role in the political and social debate and in the development of their own society.
Mothers who have attended school themselves make greater efforts to ensure that their own children attend school. Education for girls can be the start of an upward spiral and lead women and their families out of poverty.
Countries with greater gender equality and fewer gender differences in the primary and secondary schools are more likely to have higher economic growth. An educated female population increases a country’s productivity and contributes to economic growth.
There is a clear association between education and improved health. Girls’ education has a positive effect on the level of health in society. Being able to read and acquire knowledge will enable mothers to better look after their own and their children’s health.
This has a positive impact on maternal and child health. Knowledge influences women’s choices when it comes to pregnancy check-ups, childbirth and nutrition. Educated girls and women turn to the health services to a greater degree.
If all the girls in low and middle-income countries completed primary school, this would reduce child mortality for those under five by 15 per cent. When girls complete lower and upper secondary schooling the positive effect is dramatic. According to figures from UNESCO, child mortality then drops by a massive 49 per cent for those under five years of age.
Education is also effective when it comes to combating child marriage, teenage pregnancy and HIV/AIDS.
Equal educational opportunities for girls and boys are a fundamental human right and the basis of equal opportunities later in life.
Equality in education is about more than equal access for girls and boys. It also includes aspects linked to teaching practice, curricula, textbooks and teachers. A lack of equality in education often reflects the prevailing gender norms and discrimination in society.
The school and the learning it provides can play an important role in changing gender stereotypes and attitudes and in promoting gender equality. It is then vital to include knowledge and understanding of gender equality and gender sensitivity in the development of the curricula and to include knowledge of human rights and sexual and reproductive health rights.
In many places, gender equality in education is still linked to an increase in girls’ access to education, particularly in sub-Saharan Africa. At the same time in some countries and regions there is a need for a stronger focus on boys’ education. In Latin America and the Caribbean there are fewer boys than girls at the lower and upper secondary level, with 93 boys per 100 girls.
Global statistics reveal gender differences when it comes to learning outcomes. Girls generally do better than boys in reading and writing. In contrast boys generally achieve better results than girls in mathematics. It is important to be aware of such gender differences when facilitating a good learning process for both girls and boys.
What is Norway doing?
Girls’ education is a key priority area for Norway and it is targeted through a variety of channels.
UNICEF is the most important multilateral channel for Norwegian support to girls’ education. The organization is working on the introduction of national and local guidelines for gender equality in schools.
UNICEF hosts the secretariat for the UN Girls’ Education Initiative (UNGEI), which is a global partnership dedicated to acquiring knowledge about girls’ education and gender equality globally, regionally and nationally, and being a driving force in this connection.
Norway is an important donor to the Global Partnership for Education (GPE). One of GPE’s five strategic objectives is that all girls in countries receiving GPE funding complete primary and lower secondary school and start at upper secondary school in a secure, supportive learning environment.
Girls’ education and gender equality is integrated in various ways in Norwegian bilateral aid to education. Girls’ education and gender equality must also be an integral part of the programmes of civil society organizations that receive financial support from Norad.
The Norwegian government will make special efforts to ensure that girls start and complete secondary education.
For example, Norway is supporting a Save the Children project in Malawi that integrates health and education in order to prevent teenage pregnancies. The programme aims at postponing young girls’ first pregnancy.
Another important element of the programme is that girls who are pregnant or who have given birth should be able to continue with their schooling. Norway also supports a joint UN project in Malawi to improve access to education for girls and enhance its quality. This project is a collaboration between UNICEF, the World Food Programme (WFP) and the United Nations Population Fund (UNFPA).
Project helps kids see reality in scientific concepts
Ask an average American what a force field is and you’ll probably get a loose description of an invisible protective shield affecting some mission of the Millennium Falcon or the starship Enterprise.
Ask a physicist — and in the United States today about 90 percent of physicists are men — what a force field is and you’ll learn it’s a vector field indicating the forces exerted by one object on another.
Ask a sixth grader, and if he or she can produce any definition, it’s likely a memorized assemblage of words lacking context or rich meaning.
An international team of educators and professionals based at Virginia Tech is working on giving middle-schoolers new and innovative ways to learn scientific concepts, such as force fields. Their goal is to encourage a lifelong interest in science among children, especially girls. This effort, which involves a planned children’s book and a traveling exhibition, is called Phoebe’s Field.
The Phoebe’s Field exhibition will help enable children to identify, redefine, and make science part of their lives, ultimately facilitating a larger, more diverse scientific community. Research shows that in sixth grade, a significant shift occurs for boys and girls in relation to computing, math, and science.
In fact, only 44 percent of sixth-grade boys go online, while 79 percent of their female classmates do. Girls continue to be more engaged in communication technologies the older they get. At the same time, sixth-grade girls are found to become less interested in math and science.
Phoebe’s Field focuses on electromagnetic fields because of their intrinsic link to communication technologies favored by girls in this age group.
The exhibition will use metaphors in nature to explain complex concepts, such as electromagnetism. As they tour the exhibition, students carry out physical activities that make the concepts more concrete. Phoebe’s Field enables children to see, hear, and touch fields that are ordinarily beyond their perception. The children will step inside fields, using communication technologies they know to make this most abstract of sciences real.
One way the exhibition will make invisible fields visible is by using architectural tactics. The tactics were developed as a menu for transforming existing science museum galleries into a spatial field condition. To achieve the transformation, the creators tried to perceptually dissolve the boundaries of the museum, distort the threshold, and displace the ordinary galleries.
The team also selected exhibit concepts to help communicate abstract phenomena to students. The concepts were selected based on an alliance between the science of fields, the storyline, and relevance to the students’ lives. Taking this approach, the Phoebe’s Field team created an architectural grid derived from the metaphor of an agricultural field to help engage the students and make the concepts easier to understand.
The storyline, for instance, uses narrative to capture the student’s attention. The narrative is designed to help the student associate Phoebe’s Field concepts with a quest and a story.
Funding and expansion
The team is currently being considered for a third grant to fund the project by the National Science Foundation. The first two grants funded the planning of the book and exhibition. The third grant would make possible a four-year phase of the project that includes the creation of a 5,000-square-foot, mobile, traveling exhibition; technical manuals and training for installation; outreach components developed with the Girl Scouts; a website serving as an information hub for the exhibition; and a book documentary of the Phoebe’s Field creation process.
The team is led by Principal Investigator Mitzi Vernon, associate professor of industrial design in the College of Architecture and Urban Studies.
At Virginia Tech, Vernon is joined by:
- Katherine Cennamo, associate professor of instructional design and technology;
- Margarita McGrath, assistant professor of architecture;
- Michael Ermann, associate professor of architecture;
- John Simonetti, associate professor of physics;
- Tatsu Takeuchi, associate professor of physics;
- Marty Johnson, associate professor of engineering;
- Steve Ellingson, associate professor of electric and computing engineering;
- Richard Goff, associate professor of engineering education; and
- Janis Terpenny, associate professor of engineering education.
Key partners on the Phoebe’s Field team include the Paul Orselli Workshop and the Science Museum of Virginia.
All advisors have served the Phoebe’s Field project for several years and bring essential expertise in physics, informal learning, engineering, gender studies, and children’s literature:
- Ilan Chabay, Erna and Victor Hasselblad Professor of public learning and understanding of science at Chalmers University of Technology, Göteborg, Sweden;
- Dale McCreedy, director of gender and family learning programs, The Franklin Institute;
- Bruce Schena, engineering fellow at Intuitive Surgical;
- Steven Snyder, vice president of exhibit and program development, The Franklin Institute;
- Lynn Yanyo, engineer and director of global marketing, Lord Corp.; and
- J. D. Stahl, children’s literary critic, professor, and author.
A collaborative effort
A project of this magnitude that has the potential to touch such a large number of lives has drawn a wide array of collaborators that are helping to ensure success.
Other collaborators include:
- the Girl Scouts of the USA (outreach);
- Resolution: 4 Architecture (project architecture);
- Center for Children and Technology (project evaluation);
- Gyroscope Inc. (exhibit planning);
- the Exhibit Center at California Polytechnic State University (technology components);
- the Association of Science-Technology Centers (tour management); and
- the New School Media Studies Program (project documentation).
Look through previous Spotlight stories
|
Delano High School
Each week, every teacher defines two power words in class. These are high frequency words used across the curriculum and are included as the stem to many test questions. Understanding the meaning of these words and being able to apply that knowledge across the curriculum helps provide a foundation for learning.
Reading and Writing Across the Curriculum:
Students read and write in each classroom every day. Teachers instruct students to read and speak in complete sentences at all times. Appropriate use of content area vocabulary is also emphasized.
DHS's MVP program has been running strong over the past four years. This program is a required seventh-period class for all sophomores intended to provide extra support in English and Math before taking the CAHSEE. Curriculum consists of test-taking strategies, reading comprehension exercises, and writing a variety of types of essays. The Jane Schaffer method of writing is emphasized.
This test-taking strategy is a mnemonic device for students to use when answering multiple-choice reading comprehension questions. By following the steps of the acronym, students are led through the process of close reading.
This AVID-inspired note-taking system is utilized across the disciplines at DHS. While the note-taking style is organized and straightforward, it also doubles as a study tool.
Standards Plus English Warm-ups:
A warm-up system recently implemented in the English Department, Standards Plus provides well-structured and scaffolded mini-lessons. The topics of these lessons range from grammar and mechanics to literary devices. Each mini lesson contains direct instruction, modeling opportunities and individual practice.
A campaign to alert students to the benefits of a college education is underway at DHS. This includes promotion of the EAP to all Juniors. The goal is to make students aware that education equals power of choice and opportunity.
|
The effects of climate change on humans will not arise as former Vice President Al Gore explains in his Inconvenient Truth nor will it be the cataclysmic Hollywood summer blockbuster brought to you by Jerry Bruckheimer. Instead, it is a slow change that still has a severe impact on the human population of the planet.
Lack of access to fresh water, diminished capacity to produce food, effects on human health and the loss of land are the larger impacts of climate change on humans. These factors have an effect on the national security policies of not only the United States, but also all of the other developed nations in the world.
Studies have shown that the increased ferocity of storm systems around the planet, ranging from Katrina in 2005 to the cyclone that devastated Myanmar in 2008, is affected by the warming of the planet. Models have shown that the planet may see a rise in sea levels of 3 feet (1 meter) by the end of the current century. There is also a possibility that this rise could increase further if receding ice uncovers permafrost that expels great amounts of methane, adding to the warming of the planet.
Severe storms and rising sea levels affect coastal nations, none more than Bangladesh.
Bangladesh sits about thirty feet above sea level and is protected from the rising ocean by a series of dikes. The nation is at great risk from severe cyclones and the rising seas. Estimates of a three-foot rise or greater in sea levels threaten Bangladesh: sea water would affect local water tables and invade crop lands, making it difficult to raise crops. Powerful cyclones rampaging across Bangladesh raise concerns of creating great numbers of refugees in the wake of these storms. The very worst estimates show that Bangladesh will be mostly seawater or devastated by constant storms, leaving approximately 20 million refugees without homes.
20 million refugees without homes streaming into India or Southeast Asia is a nightmare for those dealing with national security. What will be done with these refugees who no longer have a home to return to? Where will they be relocated? Will the stress of the influx of refugees have an adverse impact on the infrastructure of the neighboring nations, leading to instability in the region? These questions weigh heavily on the minds of security think tanks now studying the effects of climate change on security doctrines.
Even in the United States, the dangers of rising sea levels are relevant. Consider Norfolk, Virginia, the home of the Atlantic fleet and thirty percent of the US Navy's assets. Norfolk is built on a filled-in marsh and is currently feeling the effects of natural sinking matched with rising tides. If Norfolk is no longer a suitable location for a base, six Nimitz-class carriers and their escorts will have to find a new home that can handle the immense draft of the nuclear-powered carriers.
Norfolk and Bangladesh are not the only areas affected; a majority of the world's population is located in close proximity to the oceans of the world. Rising seas not only consume land but also taint local freshwater reservoirs. Massive numbers of people will be forced to move away from the coast in developed and lesser-developed nations alike. These migrations will create stresses on the infrastructure of other nations, some greater than others.
The concern will be those stresses on less developed nations and the potential of extremist groups taking advantage of the unfolding situation.
|
In Excel (including Excel 2007), the contents of cells can be split and displayed across other cells, based on a delimiter. The following is a tutorial on splitting the contents of cells that are not merged, in Excel worksheets, across multiple columns.
How to split the contents of cells in Excel worksheets?
- Select the cell, the range of cells, or the entire column that contains the text values, that you want to divide across other cells, based on a delimiter. A range can be any number of rows tall, but no more than one column wide.
- On the Data menu, click Text to Columns
- In step 1 of the “Convert Text to Columns wizard”, choose “Delimited” to indicate that a delimiter should be used to split the contents of the cells. Click “Next”.
- In step 2 of the “Convert Text to Columns wizard”, choose the actual delimiter. The values in the cells can be delimited by a comma, tab, semicolon, space or any other character. In this example, the delimiter is a comma. You can also specify the text qualifier and the way to treat consecutive delimiters. Click “Next”.
- In step 3 of the “Convert Text to Columns wizard”, you can choose the column data format. In this example we have chosen “General”. “General” converts numeric values to numbers, date values to dates and all remaining values to text.
- Choose the destination cell (starting cell) where the split values should be displayed. However, note that unless there are one or more blank columns to the right of the selected column, the data to the right of the selected column will be overwritten. Hence choose the start destination cell for the resultant output carefully. In this example, I had chosen B2 as the start cell for the resultant output.
- Finally, click “Finish”. The resultant output will look as shown in the figure below. A programmatic equivalent is sketched after these steps.
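If you would rather perform the same split programmatically, a small pandas sketch produces an equivalent result (the sample data and column names are made up for illustration):

```python
import pandas as pd

# One comma-delimited column, standing in for column A of the worksheet.
df = pd.DataFrame({"A": ["Smith,John,42", "Doe,Jane,35"]})

# expand=True spreads the pieces across new columns, much as the wizard
# writes them into the destination cells (B2 onwards in the example).
parts = df["A"].str.split(",", expand=True)
parts.columns = ["last", "first", "age"]  # illustrative header names
print(parts)
```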
You can also join or merge Excel column contents. Did you like this tutorial on how to split the contents of cells in an Excel 2007 worksheet into multiple cells, based on a delimiter?
Watch the coin orbit!
The Gravity Well here at The Austin Children’s Museum teaches us about energy. When the coin drops lower into the well, some of its gravitational potential energy is converted into kinetic energy. As the coin drops down, it moves at a higher velocity. Also, the coin goes around in smaller circles the lower it gets. So you can see how the coin completes orbits much faster near the center of the well, just like a planet orbiting close to the sun!
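The same energy bookkeeping can be written down directly. Here is a short Python sketch for an idealized, frictionless well (the numbers are made up for illustration):

```python
import math

G = 9.81  # m/s^2, gravitational acceleration at Earth's surface

def speed_after_drop(v0, drop_height):
    """Energy conservation: 1/2*v^2 = 1/2*v0^2 + g*h, so the coin
    trades lost height for extra speed."""
    return math.sqrt(v0 ** 2 + 2 * G * drop_height)

# A coin that entered at 1 m/s and has spiralled 0.2 m down the well:
print(round(speed_after_drop(1.0, 0.2), 2))  # -> 2.22 m/s
```

A real coin also loses energy to rolling friction and air drag, which is why it eventually spirals all the way down to the hole at the centre.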
Make your own gravity well:
What you need:
- large piece of paper
- various balls
- paper tube
Experiment with balls or marbles of all shapes and sizes and send us your results. Do the heavier ones travel faster? What about the smaller ones?
Send us your pics of your homemade gravity wells!
Dance notation, the recording of dance movement through the use of written symbols.
Dance notation is to dance what musical notation is to music and what the written word is to drama. In dance, notation is the translation of four-dimensional movement (time being the fourth dimension) into signs written on two-dimensional paper. A fifth “dimension”—dynamics, or the quality, texture, and phrasing of movement—should also be considered an integral part of notation, although in most systems it is not.
Dance poses recorded through pictures date to early dynastic Egyptian wall paintings, ancient Greek vases that depict dancing figures, and iconographic examples from many other early cultures. Verbal descriptions of dances have been found in India, notably in a book dating to approximately the 2nd century bc. In Europe during the 15th to 17th centuries, many treatises on dance were written in the form of descriptions often accompanied by illustrations. However, none of these can be clearly defined as a system through which actual dance movements (as opposed to positions) could be captured and subsequently faithfully reconstructed.
The Renaissance (c. early 15th–early 17th century)
The first device to be considered a true notation system was found in Cervera, Catalonia (now part of Spain): two manuscript pages, dated from the 15th century, revealed the first use of signs to represent the letter abbreviations used in Renaissance Italy, France, and Spain to record the popular basse danses (“low dances”). These were letter abbreviations for the five well-known steps: R for révérence; s for simple; d for double; b for branle; and r for reprise. Dances were composed of a sequence of these steps in different arrangements.
In his book Orchesographie (1588), the Frenchman Thoinot Arbeau provided valuable descriptions of the dances of that period, placing the names of the dancer’s movements next to the vertically arranged music. His system, however, cannot be called a notation system as such, because no symbols were used.
The Baroque period (c. 17th–18th century)
At the French court of Louis XIV, patterns traced on the floor were an important part of formal dances; drawings of these pathways, with signs added to indicate the steps used, were the basis of the first important, widely used dance notation system. Originated by the ballet teacher Pierre Beauchamp, it was first published by his student Raoul-Auger Feuillet in 1700 as Chorégraphie; ou, l’art de décrire la danse (“Choreography; or, The Art of Describing the Dance”). The system spread rapidly throughout Europe, with English, German, and Spanish versions soon appearing. Well suited to the dance of that era, which featured intricate footwork, this notation became so popular at court and among the educated classes that, for a while, books of collected dances were published annually. Indications for the appropriate arm gestures were later developed to accompany the intricacies of the footwork. However, at the watershed of the French Revolution, when dance for the educated classes at the royal courts declined, the Feuillet system—which was unsuited to theatre dance with its greater range of movement—fell into disuse.
The Romantic period (late 18th–late 19th century)
In the mid-19th century two important systems were published, both based on the idea of “stick figure” representation. That of the renowned French dancer and choreographer Arthur Saint-Léon, illustrated in his book Sténochorégraphie, was published in 1852. It combined slightly abstracted figure drawings with musical note indications for specific timing—not a surprising addition considering Saint-Léon’s musical background (he had been a child prodigy on the violin). His inclusion in his book of the pas de six from his ballet La Vivandière provided a valuable example of a Romantic ballet, and it has been studied and performed into the 21st century. The second of the two major mid-19th-century notation systems was that of the German dance teacher Friedrich Albert Zorn, whose book Grammatik der Tanzkunst (1887; Grammar of the Art of Dancing) employed a more directly pictorial stick figure, placed under the accompanying music to indicate timing. A highly respected dancing master, Zorn focused on detailed descriptions of the exercises and steps required in dance training. He included a selection of dances, notably the cachucha solo made famous in 1836 by the Austrian ballerina Fanny Elssler.
The close affinity between music and dance made inevitable the idea of using musical notes to record movement. The first such system was developed by Vladimir Ivanovich Stepanov, a dancer of the Mariinsky Ballet in St. Petersburg; it was published in Paris with the title Alphabet des mouvements du corps humain (1892; Alphabet of Movements of the Human Body). Stepanov’s method was based on an anatomical analysis of movement and thus was applicable to the recording of any type of movement. Stepanov’s method was adopted by the Mariinsky, where it was used to record the repertory. Of the scores notated during that period, many were incomplete, rapidly written notes intended as memory aids. The dancer and choreographer Léonide Massine learned Stepanov notation as a student at the Imperial School of Ballet and made use of it in developing his own choreographic theories. His Massine on Choreography was published in 1976.
Another student who learned Stepanov notation at the Imperial School was the legendary Russian dancer Vaslav Nijinsky, whose interest in it led to his own modification of the system, one that improved significantly on Stepanov’s ideas, especially in the indication of directions and levels. During a period of inactivity when he was separated from the Ballets Russes, Nijinsky worked on his notation ideas and recorded every movement of his first ballet, L’Après-midi d’un faune (1912; Afternoon of a Faun). When in 1988 the code to his system was broken, this ballet could be revived in its authentic version—that notated by Nijinsky himself.
The 20th century was marked by the advent of abstract symbol systems, notably those of Margaret Morris and Rudolf Laban. Morris, a British dancer, teacher, and choreographer, was also a movement therapist, which led to her anatomical approach to recording movement. She outlined her system in The Notation of Movement (1928); in addition to direction symbols, she provided separate signs for each movement of each part of the body. This was not an advantage in comparison with “alphabet” systems, in which the same basic type of movement is written with the same symbol for each part of the body.
Schrifttanz (1928; “Written Dance”), by the Hungarian-born dance theorist Rudolf Laban, provided the basis for the notation system that bears his name: labanotation (also called Kinetography Laban). Laban had an eclectic interest in movement but found himself architecturally fascinated by its spatial aspects. Thus, his system initially depicted movement from a spatial perspective; an anatomical description was added later by others. It is an “alphabet” system in that each movement is “spelled out” according to the sequence of its basic components. A vertical three-line staff represents the body, the centre line dividing right and left. The shape and shading of the movement symbols indicate direction and level; their length indicates timing (duration); and their placement in the appropriate column on the staff indicates the part of the body that is moving. A particular strength of labanotation is its ability to show precise gradations in the timing of movements. The system became widely used because it is applicable to all forms of movement. For decades labanotation was refined by research practitioners working in a variety of different movement disciplines, not only ballet and contemporary choreographies but dance of different styles and cultures as well as gymnastics and other sports, remedial exercises, and even zoological studies.
A number of other notation systems were invented in the 20th century. Pierre Conté, a French musician, wrote Écriture de la danse théatrale et de la danse en général (1931; “Writing of Theatrical Dance and Dance in General”); his system combined musical notes with simple signs placed on an expanded music staff. In Choroscript (written in 1945 and unpublished), the American musician, dance teacher, and choreographer Alwin Nikolais indicated movement with modified musical notation symbols. Nikolais’s movement analysis was based on labanotation, and he used a Laban-style vertical staff but in two parts, with torso and head indications placed separately on the right. Kineseography (1955), created by the dancer and choreographer Eugene Loring with D.J. Canna, incorporated an unusual movement analysis. This system used a vertical staff and simple signs to record four categories of movement: Emotion, Direction, Degree, and Special. It was used to record Loring’s signature ballet, Billy the Kid (1938).
The system developed by the Israeli dance theorist Noa Eshkol and the architect Abraham Wachmann was first published in English as Movement Notation in 1958. It took an anatomical and mathematical view of movement and initially had the aim of exploring the abstract shapes and designs of movement rather than recording existing dance patterns, which had been the primary goal of all previous systems. Numbers and a small selection of symbols are used to represent each possible physical motion. The full horizontal staff provides a space for each body segment. Eshkol’s original aim was to create a method of recording her own choreography; however, the Movement Notation Society in Israel (the centre for this system) subsequently published books on folk dance, ballet, and other art forms and also illustrated the uses of the system in recording the movements of animals.
Despite the introduction of abstract symbol systems, notation methods making use of stick figures continued to appear during the 20th century. The most successful of these was a visual representation system devised in the 1950s by the English artist Rudolf Benesh and his wife, Joan Benesh, a dancer with the Sadler’s Wells Ballet (now the Royal Ballet). A matrix on a five-line horizontal staff represents the dancer from head to foot, seen from the back. Positions and movement lines are plotted on this matrix. Timing indications are placed above the staff. More complex movements that cannot be indicated visually on the staff are written with additional signs and numbers above the staff. Initially developed as a ballet shorthand, it has proved very useful in recording the repertoires of ballet companies. The Benesh Institute was established in London in 1962. Following the example set in the 1960s by the Royal Ballet, many companies have hired trained Benesh notators. Computer programs for writing Benesh scores have been developed, and a centre for training notators and producing publications was established in the 1990s at the Centre National de la Danse in Paris.
In the United States a labanotation score was first accepted for copyright protection in 1952. This event was a major breakthrough that afforded dance a protection it had not experienced until then. Subsequently, many notated scores were submitted for copyright registration.
Notation systems were developed in other countries, such as Russia and Romania, where they were used to record traditional folk dances. One of the more significant of these was Romanotation, first published by the ethnochoreologist Vera Proca-Ciortea in 1956. A decade later the dance teacher and choreographer Theodor Vasilescu developed a different system to describe Romanian folk dances, with a view toward teaching them.
Efforts to document Korean dance movements include the method developed in North Korea about 1970 by the Korean choreographer U Chang Sop. His book, published in English as Chamo System of Dance Notation (1988), uses pictorially based symbols and additional abstract signs.
In India during the 1970s, the classical-dance performer Gopal Venu devised a notation system with the aim of providing a shorthand for the many mudras (hand positions) needed in kathakali dance-drama. These are combined with pictographs of body positions.
Manuscripts from the Han dynasty in China record dance through pictures with a few symbols, none of which has been clearly deciphered. The first true notation system in China was Coordination Method Dance Notation (Eng. trans. 1987), created by Wu Ji Mei and her husband, Gao Chun Lin, in the early 1980s. Based on a logical movement analysis and placed under the (Western) music staff to show timing, it uses letters of the alphabet, numbers, and a few abstract signs and thus is typeable on a computer keyboard. It has been applied to dance forms, movement in sports, and physical education.
Although good, workable systems have been around for centuries, the use of dance notation has never been an integral part of dance study and practice, as musical notation is in the study of music. Writing a dance score inevitably takes time, as does writing a book or a symphony; the time spent learning to read and write notation was often seen as wasted. Until the development in the 1960s of the simple Motif Notation (related to labanotation) and its application in the Language of Dance teaching method, in which the basic movement “building blocks” are explored, notation was not included in a child’s introduction to movement; dance was still taught solely by imitation. The study of labanotation includes an investigation into movement that not only enriches a dancer’s understanding of movement but also provides dance students with access to the scores of significant works of the past.
The advent of film and particularly in the 1970s of video recording overshadowed notation. Video has immediate appeal to dancers because it requires no lengthy study. Through experience, however, members of the dance world have come to recognize its limitations. The image is often not clear, and costumes or other dancers often block the view. A video is a record of a particular performance in which the dancers may have made mistakes (which is frequently the case); the viewer is unable to recognize the difference between the dance performance and the dance itself. Learning from video too often results in the personal mannerisms or mistakes of one dancer being picked up and exaggerated by another, which thus distorts the choreography. In contrast, the notated score is a record of the work itself in the same way that a musical score represents the work, not an individual’s performance of that work. In a score all aspects of choreographic detail—use of stage space, the relationship of performers to each other and to the music, and the choreographer’s development of movement themes—can be easily studied.
Although its use in the dance field has spread more slowly than many dance historians anticipated, notation remains an essential tool. In addition to accurately recording a working choreographer’s movement concepts, it uniquely enables the faithful preservation of past works—a major concern in the dance world. In music, scores from centuries past have preserved classic works, which modern composers may wish to modify, rearrange, or parody without loss of the originals. In dance, reliance on memory has resulted in an accumulative distortion of the originals. Without notation, unintentional changes soon become the known version, and the viewing public has no idea of the loss of authenticity.
|
Scientific Evidence for a Young Earth
It is very difficult to find any common ground between evolutionists and creationists. In fact, the two groups disagree on just about every subject in the Universe. But there is one area where they see eye to eye: the age of the Earth. No, of course they do not agree about how old the Earth is, but they both agree that if the Earth is young (with an age measured in thousands of years instead of billions), then evolution could not have happened.
Even though most science textbooks and journals teach that the Earth is billions of years old, many scientific findings do not agree. In fact, there are over 75 different scientific methods for calculating the age of the Earth, and most of those give an age of the Earth measured in thousands of years, not billions.
Fighting the Crowd
One of the strongest arguments for a young Earth comes from the field of human population statistics. According to the records that are available, the human population on Earth doubles approximately every 35 years. Two scientists named Henry Morris and John Morris have written about this in their book titled Science and Creation. Let’s suppose that humankind started with just two people who lived on the Earth one million years ago. Also, let’s say that a generation was 42 years, and that each family had an average of 2.4 children (they probably had many more than that). Even if we allow for wars, epidemic diseases, and other things that would have killed lots of people, there would still be approximately 1 x 10^5000 people on the Earth today! That number is a 1 followed by 5,000 zeroes. But the entire Universe (at an estimated size of 20 billion light-years in diameter) would hold only 1 x 10^100 people. Using young Earth figures, however, the current world population would be approximately five billion people. Evolutionary dates would mean that the Earth’s population would be 10^4900 times greater than would fit into the entire Universe! The question is—which of the two figures is very close to the current population of six and a half billion people, and which could not possibly be correct?
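The doubling arithmetic itself is easy to check. Here is a quick back-of-the-envelope sketch (an illustration only, not from Science and Creation; the book’s published 1 x 10^5000 figure reflects its further adjustments for wars, disease, and family size):

from math import log10, log2

DOUBLING_PERIOD_YEARS = 35
start_population = 2

# Unadjusted doubling over one million years gives an absurdly large number:
doublings = 1_000_000 / DOUBLING_PERIOD_YEARS         # ~28,571 doublings
print(f"about 10^{doublings * log10(2):.0f} people")  # ~10^8600 before attrition adjustments

# Working backward instead: how long would it take 2 people, doubling every
# 35 years, to reach today's population of roughly 6.5 billion?
doublings_needed = log2(6.5e9 / start_population)     # ~31.6 doublings
print(f"about {doublings_needed * DOUBLING_PERIOD_YEARS:.0f} years")  # ~1,100 years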
|
Sequencing genomes, from those of simple organisms to those of creatures as complex as humans, produces torrents of information that grow as technical advances push down the cost of generating genetic data. But researchers’ ability to study the chemical nature of DNA has outstripped their ability to actually “see” chromosomes and their position in the nucleus. Yet knowing how chromosomes fold or stretch is critical to understanding gene expression and also has implications for understanding congenital abnormalities as well as cancer.
A new tool, called oligopaints, may change the imbalance between what can be sequenced and what can be seen. By developing renewable, highly specific fluorescent probes that can “paint” the genome, a research team led by Ting Wu, an HMS professor of genetics, has produced a low-cost, high-resolution method for bringing chromosomes to light. The team reported its findings in the December 26, 2012, issue of Proceedings of the National Academy of Sciences.
“There have been some fantastic technologies that have given people a molecular handle on how chromosomes are folded—these involve looking at millions of cells at once,” Wu says. “What people are also hankering for is the ability to see every nucleus for itself.”
Scientists have long used chemical stains to view chromosomes in the nucleus, but such methods did not provide the precision needed to detect the nuclear arrangement and integrity of individual chromosomes. To light up chromosomes, a paint technique called fluorescent in situ hybridization was developed, but it has remained both laborious and expensive. Wu’s lab focused on lowering the cost of painting by employing easily made oligonucleotides, which are short, single-stranded DNA sequences. The probes they developed contain as few as 32 bases, compared to the 100 bases or more of other methods, and can target any sequenced region of the genome along a chromosome. Each oligopaint probe carries single-fluorophore primers, so it lights up at only one point, allowing for greater precision in super-resolution microscopy and image interpretation.
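The tiling idea behind such probes can be sketched in a few lines (hypothetical code for illustration, not the Wu lab’s actual probe-design pipeline): short oligos are chosen to cover a sequenced target region in fixed-width windows.

def tile_probes(sequence, probe_len=32, step=32):
    # Slice the target sequence into non-overlapping windows of probe_len bases.
    return [sequence[i:i + probe_len]
            for i in range(0, len(sequence) - probe_len + 1, step)]

region = "ACGT" * 40            # stand-in for a 160-base sequenced target region
probes = tile_probes(region)
print(len(probes), probes[0])   # 5 probes, each 32 bases long

In practice, probe selection would also need to screen candidates for uniqueness in the genome and for hybridization properties, which is where the real design effort lies.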
One of the goals of Wu’s lab is to make chromosomal analysis as inexpensive as a blood test. Such a test could potentially be used to screen newborns for congenital abnormalities or to guide treatment for cancer patients. The lab has thus far been working in fruit flies and human cell lines, but the principle could apply to any organism, including humans.
|
Young Earth Creationism is the belief that the Earth was created between 6,000 and 10,000 years ago by the hand of God, as described in the Biblical book of Genesis, considered canonical in Christianity and Judaism. This number is arrived at by examining the family lineages described in the Old Testament -- the book begins with Adam and Eve and then traces a line of descent all the way down to more recent events whose dates are known, such as the Siege of Jerusalem by Babylon in 597 BC. By this method, adherents of Young Earth Creationism determine the Earth's age to be relatively young. One of the first church figures to use the Old Testament as a guide to the Earth's age was James Ussher (1581–1656), Church of Ireland Archbishop of Armagh and Primate of All Ireland, who argued that the Earth was created in 4004 BC.
The perspective of Young Earth Creationism is closely linked to the idea of Biblical literalism, which views the Bible as the inerrant word of God rather than the work of unaided human beings. In fact, Biblical literalism pretty much demands Young Earth Creationism, as the Bible is quite clear that the Earth hasn't been around for the billions of years suggested by radiometric dating. However, very few Christians, and even fewer scientists, accept Young Earth Creationism, arguing that there is ample evidence (radiometrics, geology, plate tectonics, etc.) that the Earth is billions of years old. Before the rise of relevant sciences in the 17th and 18th centuries, Young Earth Creationism was much more common, but today it is a minority position.
Since the modern-day revival of Christian fundamentalism in the early 20th century, especially in the United States, Young Earth Creationism has seen a comeback. Various authors and organizations have tried to use scientific evidence to support their religious idea. The New Geology, published by George McCready Price in 1923, is considered one of the founding books of modern Young Earth Creationism, though many of its ideas have been extensively criticized by other creationists. More recently, in 1961, Henry M. Morris and John C. Whitcomb Jr. published their book The Genesis Flood, which presents evidence for a Great Flood as well as a young Earth. In 1972, Morris founded the Institute for Creation Research, which continues to be a leading organization in the area of Young Earth Creationism.
Young Earth Creationists have used various arguments to boost their position. For instance, they argue that dinosaurs are mentioned in the Bible and still exist in places like Central Africa or the deep seas. Young Earth Creationists acknowledge some form of evolution and natural selection, but only within the boundaries of a God-created kind of animal. To explain peoples distributed all over the planet, such as Native Americans and Australian Aborigines, Young Earth Creationists believe that these peoples migrated to their respective locations after the destruction of the Tower of Babel sometime in the 3rd millennium BC. There are many other beliefs common among Young Earth Creationists, far too many to list here, which can be found on websites like those of the Institute for Creation Research.
|
The chronology of the Crystal Lake Watershed extends from the present day (Holocene epoch) back to the extensive glaciations of several million years ago (Pleistocene epoch). Many advances and retreats of the glaciers across Michigan formed and reformed the Great Lakes over geologic time (Quaternary period). Levels of the large glacial lakes rose and fell by several hundreds of feet. Crystal Lake was a bay of Lake Michigan until about 2,000 years ago, when it finally became separated as the prevailing westerly winds created sand dunes that completed the embayment.
The Crystal Lake Watershed has always captivated the imagination of all who walk about it. Beginning with the early explorations of Frs. Marquette and Charlevoix, the land survey of the Burt brothers, Alvin and Austin, the geological surveys of Douglass Houghton and Henry Schoolcraft, the environmental studies of Henry Chandler Cowles, William James Beal, Warren Gooklin Waterman, Irving D. Scott, and James Lewis Calver, and the prose of Walter B. Case and Bruce Catton, it has continued to the present day. The Crystal Lake Watershed contains many diverse, but hydrologically intertwined ecologies and unique environmental niches, including active sand dunes, forested heights, wetlands, tributaries, and a large deep inland lake connected to Lake Michigan. Crystal Lake, with its immense body of pristine water of exceptional clarity, mixed sandy and rocky nearshore perimeter, sandy shoreline, deep marl bottom, and high-ridged vistas, captivates all who view it.
|
Inner Ear
inner ear [¦in·ər ′ir]
a membranous labyrinth; main part of the organs of hearing and equilibrium in vertebrates and man. The inner ear is filled with a fluid—endolymph—and embedded in the cartilaginous or bony skeletal labyrinth. The slitlike cavity between the inner ear and skeletal labyrinth is filled with perilymph; in terrestrial vertebrates this cavity is connected with the lymphatic cavities of the head through the perilymphatic duct. Two openings, or windows, are formed in the skeletal labyrinth of terrestrial vertebrates. The base of an auditory ossicle (stapes) enters the oval window from the middle ear. Below it is the round window, which is covered with an elastic membrane to permit the fluid in the inner ear to shift when the stapes moves.
The inner ear originates as a depression in the ectoderm in the posterior part of the head. As the embryo develops, the rudiment of the inner ear assumes the form of a vesicle connected with the external environment by a thin endolymphatic duct and later completely separated from the ectoderm. The rudiment of the inner ear is subsequently differentiated into upper and lower portions that are joined together. Three semicircular canals appear in the upper portion in all vertebrates (in Cyclostomata, one or two canals). A swelling—the ampulla—is formed at one end of each of the canals. The remaining part of the upper portion of the inner ear, which connects the semicircular canals to each other, is called the oval saccule (utricle). A round saccule (sacculus) is formed in the lower portion of the inner ear; it has a peculiar swelling called the lagena, or cochlea.
The sensory (receptor) epithelium of the inner ear is distributed unevenly. In the oval and round saccules it forms so-called acoustic spots (maculae)—sensory cells with short hairs and acoustic (ampullar) crests that protrude in the form of plates into the inner cavity of the ampullae of the semicircular canals; the sensory cells of the crests have long hairs. In most vertebrates, the cochlea has a receptor apparatus in the form of a primary acoustic papilla that is formed when the round saccule separates from the acoustic spot. In fish, amphibians, and some other vertebrates, there is one small acoustic spot near the junction of the oval and round saccules. In amphibians, the main acoustic papilla separates from the primary acoustic papilla, and the corresponding part of its wall forms the so-called main (basal) membrane. In reptiles, the prominence of the saccule is more strongly developed. In crocodiles, it becomes a long, somewhat curved cochlear canal; the development of the main membrane with sensory hair cells on it causes the cochlear canal to separate into upper (scala vestibuli) and lower (scala tympani) portions. A cover plate develops over the main membrane and hair cells as the receptor acoustic apparatus becomes more complex. Birds and monotrematous mammals have a curved cochlear canal separated from the round saccule by a narrow canal. The organ of hearing is most highly developed in viviparous mammals and man. The cochlear canal becomes even more elongated and is twisted in a spiral with one and a half to five turns. The primary acoustic papilla disappears, and the main acoustic papilla becomes the organ of Corti.
The bases of the receptor cells in all the structures of the inner ear come into contact with the short processes (dendrites) of the nerve cells whose bodies are grouped together in the so-called cochlear ganglion, while the long processes (axons) of the nerve cells form the acoustic nerve, which transmits excitation to the vestibular and acoustic centers of the brain. The endolymph of the inner ear contains calcareous deposits characteristic of the organs of equilibrium— otoliths (statoliths) of different sizes that are often replaced by a mass of tiny granules, or otoconia. In Cyclostomata, the calcareous deposits of the inner ear appear in the protoplasmatic reticulum in the form of otoconia, which may coalesce into an otolith. In most fish and all terrestrial vertebrates, the large otoliths are contained in sacs, while the small calcareous inclusions are frequently found in other parts of the inner ear as well (for example, in the endolymphatic duct). The calcareous inclusions and cupulae in the ampullae of the semicircular canals and the accumulations of ciliated cells and endolymph on which they act make up the structural and functional foundation of the vestibular apparatus.
G. N. SIMKIN
|
Wood Does Not Melt
Date: June 2004
Why does wood NOT melt?
To melt something would imply that it can be taken between a liquid and solid state by heating and cooling. Wood, like other plant material, is very complex and takes its form from its cellular structure. In its natural state, wood is roughly 1/4 to 2/3 water by weight, so it consists of large amounts of liquid at room temperature to begin with. Wood is the source of many liquid products, such as latex rubber, turpentine, and maple syrup, to name a very few. It is possible to burn wood and condense the smoke into a liquid (which is actually how "liquid smoke" food seasoning is made), but the physical structure of the wood is destroyed in the process and the resulting material cannot be reconstituted back into the original source.

Wood does not melt when the temperature is raised because it decomposes chemically first. That is, the chemical bonds that hold it together come apart first. Also, when wood is heated in air, it (or its components) starts to burn.
Materials like water, metal, or rock are simple structures that do not go through any large changes when they are heated. These materials usually melt. When metals are heated, the atoms usually reorder themselves into a new arrangement at higher temperatures, but then the new arrangement melts. Materials like wood, paper, and concrete are not simple, and some of the chemical bonds essentially fall apart or reorganize. In concrete, the calcium hydroxide decomposes, and the concrete loses strength.

Many plastic or polymer materials will melt before they decompose. Some decompose before they melt. Rocks, on the other hand, will often melt.
Wood is largely cellulose, a much larger, longer molecule in the same chemical family as the alcohols and sugars listed below. The OH's in these molecules link to each other (bridging from molecule to molecule) more strongly than almost any other "functional group". These "hydrogen bonds" are weaker than the covalent bonds within each molecule, but not by a huge factor. When there are more hydrogen bonds in each molecule, it gets difficult to ever free each molecule enough to move around as they do in liquids.
Listing members of this family from smallest to largest:

n=1: methanol, melting point = -94 degrees C
n=2: ethylene glycol (anti-freeze), mp = -12 C
n=3: glycerin, mp = 20 degrees C
n=6: glucose sugar, mp = 90-150 C
n=12: sucrose sugar, mp = ~185 C (and it tries to turn brown while you're melting it)
n=18+: starch, mp > 200 C, decomposes
n>20: cellulose (cotton, wood, paper), mp > 250 C, blackens, chars, and/or burns
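The trend in that list can be made concrete with a few lines of code (an illustration only; the charring threshold is a rough assumption, glucose's range is collapsed to its midpoint, and cellulose's "melting point" is a nominal >250 C value that is never actually observed):

family = [
    (1, "methanol", -94),
    (2, "ethylene glycol", -12),
    (3, "glycerin", 20),
    (6, "glucose", 120),              # midpoint of the 90-150 C range above
    (12, "sucrose", 185),
    (25, "cellulose (wood)", 260),    # nominal; decomposes before it gets there
]
CHARRING_THRESHOLD_C = 250            # rough onset of decomposition/burning

for n, name, mp in family:
    behavior = "melts before charring" if mp < CHARRING_THRESHOLD_C else "chars/burns first"
    print(f"n={n:>2}  {name:<17} mp ~ {mp:>4} C -> {behavior}")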
You can see that as the molecules get larger, the melting points keep getting higher. But their threshold temperatures for reaction with oxygen in air are all pretty similar, as is the temperature at which they suffer molecular breakdown in airless places (i.e., wood -> carbon (charcoal) + water (steam)). So you can see that at some point in the sequence, the melting point will be higher than the "burning point", and you will never get to see melting behavior. The material just burns or chars first.
This is also related to the term "cross-linking". Suppose you have a pile of slimy worms. Like long-chain molecules, the pile behaves somewhat like a liquid. If you glue each worm to two neighbors, what you have is longer worms: a thicker liquid, but still a liquid. But if you glue each worm to three different neighbors, then it is all one big knot or web or lattice, and the permanent sense of shape inherent to a solid is born.

In any "cross-linked" substance, these glue-spots are made of molecular bonds, just like the rest of the molecule. So it is no longer possible to separate the solid bulk cleanly into separate molecules; some irreversible chemical breakdown would have to occur first. Several un-meltable "thermosetting" plastics are in this category. Cellulose can be a cross-linked substance, so one might think of it as the carbon polymer (plastic) most closely related to these thermosetting materials.
Update: June 2012
|
The following map shows the current position of the Sun and the Moon, and which areas of the Earth are in daylight and which are in night.

The map shows the position of the Moon on the selected date and time, but the Moon phase corresponding to that date is not shown. If you want to know the Moon phase, you can use our lunar phase calendar.

The map also shows the position of the Sun and the parts of the Earth where it is currently day and where it is night. If you want to know the exact time of dawn or dusk in a specific place, you can use our solar calendar.
Coordinated Universal Time (UTC) is the main standard of time by which the world regulates clocks and time. It is one of several closely related successors to Greenwich Mean Time (GMT). For most common purposes, UTC is synonymous with GMT, but GMT is no longer the most precisely defined standard for the scientific community.
Twilight is the interval before sunrise or after sunset during which the sky is still somewhat illuminated. It occurs because sunlight illuminates the upper layers of the atmosphere; the light is scattered in all directions by the molecules of the air, reaches the observer, and still illuminates the environment.
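As a sketch of how such a map decides which places are sunlit, the following uses standard approximate solar-position formulas (an illustration only, not the site's actual code; accuracy of about a degree is plenty for shading a world map):

import math
from datetime import datetime, timezone

def sun_is_up(lat_deg, lon_deg, when_utc):
    day_of_year = when_utc.timetuple().tm_yday
    # Approximate solar declination in degrees for that day of the year.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: zero at local solar noon, 15 degrees per hour (this ignores
    # the equation of time, which shifts solar noon by up to ~16 minutes).
    solar_time = when_utc.hour + when_utc.minute / 60.0 + lon_deg / 15.0
    hour_angle = math.radians(15.0 * (solar_time - 12.0))
    lat, dec = math.radians(lat_deg), math.radians(decl)
    # Standard spherical-astronomy formula for the Sun's altitude.
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(hour_angle))
    return sin_alt > 0   # above the horizon means daylight

# Paris at noon UTC on the June solstice -> True (daylight)
print(sun_is_up(48.85, 2.35, datetime(2024, 6, 21, 12, 0, tzinfo=timezone.utc)))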
|
The history of solar power, its discovery, and ultimate rise as a powerhouse in the energy industry can be traced back to the 1800s, when scientists Willoughby Smith, William Grylls Adams, and Richard Evans Day discovered the potential selenium had in generating energy when exposed to sunlight.
While this was a monumental leap forward for the technology, it wasn’t until 1954 that the first solar panels — as we recognize them today — began to take shape. Replacing selenium with silicon, Daryl Chapin, Calvin Fuller, and Gerald Pearson created the first silicon photovoltaic (PV) cell, which was the first time that a solar technology could power an electrical device for multiple hours.
What are solar panels made of?
Today, there are three main types of solar panels on the market: monocrystalline solar panels, polycrystalline solar panels, and thin film (or amorphous) solar panels.
Monocrystalline and polycrystalline panels produce more energy than amorphous panels and can last up to 30 or 40 years. These crystalline panels are more expensive, as they require extremely high-grade silicon (99.99999% pure), which takes a resource-intensive process to produce from sand. The silicon is then used to create cylindrical ingots, which, when later combined with boron, gain positive electrical polarity.
Once the ingots have been created, they are sliced into thin wafers and then treated and combined with metal conductors that allow them to generate power as solar cells. These conductors give the grid-like appearance you see on solar panels. An anti-reflective coating is then added to reduce the amount of sunlight reflected by the silicon.
To offset the positive polarity from the boron, the solar cells are combined with phosphorus in an industrial oven to add a negative orientation.
Finally, the solar cells are soldered together to create the solar panel. A layer of glass is placed over the cells to protect them from weather, and a highly durable, polymer-based backsheet and side frame are added to seal the panel and make it easier to secure to its final structure.
Each solar panel receives rigorous cleaning and quality inspections before it leaves the manufacturing facility to ensure that it stands up to the elements, delivering power to your home or business for many years.
To learn more about solar energy, or to find the right residential or commercial solar system for you, contact TerraSol Energies at 888.873.9995 or visit us online at https://www.terrasolenergies.com/
|
The Essence of Mathematics Through Elementary Problems
by Alexandre Borovik, Tony Gardiner
Publisher: Open Book Publishers 2019
Number of pages: 400
The authors of this book explore the extent to which elementary mathematics allows us all to understand something of the nature of mathematics from the inside. The Essence of Mathematics consists of a sequence of 270 problems with commentary and full solutions. The reader is assumed to have a reasonable grasp of school mathematics.
Download or read it online for free here:
- Lumen Learning
Contents: Numbers and Operations; Equations, Inequalities and Graphing; Systems of Equations; Functions; Linear Functions; Quadratic Functions and Factoring; Polynomials and Rational Functions; Exponents, Logarithms, and Inverse Functions; etc.
- W.W. Shannon
Designed to prepare the pupils for the intelligent mastery of the fundamental operations. Through the application of number to objects, an insight into common operations is gained. The memorizing of facts is subordinate to the getting of ideas ...
by John Radford Young - Wm. H. Allen
The preparation necessary for the profitable study of the following course of Mathematics is a knowledge of common Arithmetic, and some acquaintance with Geometry, as taught in Euclid's Elements. We shall commence with a treatise on Algebra.
by Zhuo Jia Dai, Martin Warmer, Tom Lam - Wikibooks
This is a high school textbook for 14 to 18 year olds who are interested in mathematics. Some of the materials presented here can be challenging, several topics not covered in the standard curriculum are introduced in this text.
|
Mesothelioma is a form of cancer characterized by the growth of tumors on the mesothelium, a tissue that lines the body’s organs. Mesothelioma can be classified in two ways, either by the region of the body that it affects or by the type of cells that make up the tumor. Mesothelioma usually affects one of three different areas of the body: the chest (pleural mesothelioma), the abdomen (peritoneal mesothelioma), or the heart (pericardial mesothelioma). As for cell types, doctors will classify the cells as either epithelioid, sarcomatoid, or biphasic.
Pleural mesothelioma affects the membrane that surrounds and protects a person’s lungs and chest cavity. This type of mesothelioma is the most common and represents approximately three out of every four cases of the disease. It occurs when a person breathes in asbestos fibers, which lodge in the lungs and cause irritation and scarring. This irritation, known as asbestosis, can eventually lead to the development of tumors. However, this whole process can take 20 to 50 years after the person’s first asbestos exposure. Common symptoms of this type of mesothelioma include persistent cough, difficulty swallowing, shortness of breath, chest pain, fatigue, and coughing up blood.
Peritoneal mesothelioma affects the tissue that lines someone’s abdominal organs, like the liver or stomach. Unlike pleural mesothelioma, which occurs when a person inhales asbestos, peritoneal mesothelioma is caused by swallowing the fibers. Once again, these fibers can lodge in the peritoneum and lead to scarring, irritation, and, possibly, tumors. This form of the cancer is rarer and only accounts for approximately 10 to 20 percent of the total number of mesothelioma cases. The symptoms of peritoneal mesothelioma include abdominal pain, swelling, nausea, diarrhea, constipation, and fatigue. This form of the cancer can also take up to 50 years to develop, making it difficult to properly diagnose.
Pericardial mesothelioma is considerably rarer than the other two forms of the disease, making up only 1 to 6 percent of the total number of mesothelioma cases. This type of mesothelioma affects the tissue that surrounds a person’s heart, known as the pericardium. Doctors believe that asbestos causes this form of mesothelioma when it enters a person’s bloodstream and makes its way to the pericardium. This form of the cancer is particularly deadly: only about half of patients survive longer than six months after diagnosis. Common symptoms of this type of mesothelioma can include irregular heartbeat, cough, heart murmurs, and chest pain.
Doctors also classify mesothelioma based on its histologic cell type, which means the specific makeup of the tumor’s cells. The most common cell type for mesothelioma is epithelioid, which accounts for 50 to 70 percent of all mesothelioma cases. Cells in this type appear generally healthy and may be the most easily treated of the histologic types. Another, rarer, histologic type is sarcomatoid mesothelioma, which involves long, spindle-shaped cells. This form of the cancer is much more aggressive than epithelioid mesothelioma. The final type of the cancer is biphasic mesothelioma, which occurs when the tumor has both epithelioid and sarcomatoid cells in it at the same time.
|
Guide to Boreal Birds
Like many other swallows, the Violet-green lives in colonies, basically because of its feeding needs. Where one finds food there is usually enough for all, and when feeding communally these birds can more readily detect and defend themselves from hawks.
5-5 1/2" (13-14 cm). Dark, metallic, bronze-green upperparts; iridescent violet rump and tail, the latter slightly forked; white underparts. White cheek extending above eye and white on sides of rump distinguish it from Tree Swallow.
A high dee-chip given in flight. Also a series of varying tweet notes.
4 or 5 white eggs in a grass-and-feather nest in a woodpecker hole, a natural cavity, under the eaves of a building, or in a nest box.
Breeds in forests, wooded foothills, mountains, suburban areas.
Swallows and swifts migrate during the day, feeding on the wing as they move to or from their breeding areas. Violet-green Swallows are relatively early spring migrants that tend to follow the coastline or low-elevation features as they travel. Cold snaps that depress insect flight can stall migration; prolonged cold weather can be fatal to swallows, particularly in the Cascade Range, where this species may appear as early as the first week of February. The early migration of this species may be related to the intense competition for nest sites, which are usually tree holes.
Breeds from Alaska east to South Dakota, south to southern California and Texas. Winters mainly south of U.S.-Mexico border, but a few winter in southern California.
|
What Causes Earthquakes
The earth is made up of several layers. The crust, which is the top layer, is up to 46 miles deep. It includes both land (continents) and oceans. Approximately 70% of the earth’s surface is covered by oceans. The average depth of the ocean is 2.5 miles. The crust contains iron, oxygen, silicon, magnesium, sulfur, nickel and small amounts of calcium, aluminum and several other elements.
The planet’s second layer is called the mantle. It is made of rocks with heavy concentrations of magnesium and iron. The earth’s crust, which is divided into plates, floats on the mantle. These plates (called tectonic plates) are always in motion. They are like a giant puzzle with moving pieces.
The earth’s third and fourth layers are the outer and inner core. The inner core is solid iron, and its outer layer is liquid.
Faults are the rough outer edges of a plate. They can get stuck while the rest of the plate keeps moving. When the edge of a plate unsticks, it results in an earthquake. The three types of faults are normal, reverse (thrust), and strike-slip.
Reverse (thrust) faults cause the strongest earthquakes, magnitude 8.0 or greater. Strike-slip quakes can also be powerful, up to about magnitude 8. A normal fault generally produces quakes of less than magnitude 7.
The hypocenter is where the quake starts below the surface. The epicenter is the point on the surface directly above it.
The “shake” you feel in an earthquake is the result of stored energy – the stress that has been building up over time. When the plates finally shift, the energy is released as seismic waves (waves of energy) that spread out like ripples in a pond. The waves make the ground shake.
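To get a sense of how much energy those seismic waves carry, a standard textbook relation (not taken from this article) is log10(E in joules) = 1.5 x M + 4.8, which means each whole step in magnitude releases roughly 32 times more energy:

def seismic_energy_joules(magnitude):
    # Gutenberg-Richter style magnitude-energy relation.
    return 10 ** (1.5 * magnitude + 4.8)

for m in (5.5, 7.0, 8.0):
    print(f"M{m}: {seismic_energy_joules(m):.2e} joules")

print(seismic_energy_joules(8.0) / seismic_energy_joules(7.0))  # ~31.6x per magnitude unit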
Earthquakes usually occur in groups, related to each other in both location and time. When a series of earthquakes occurs in a similar location over a very short period of time, it is called an earthquake swarm.
In 2012, California’s Imperial Valley experienced a swarm of small to moderate quakes. There were dozens of quakes around magnitude 3.5 and a magnitude 5.3 quake. The largest quake was a magnitude 5.5. Although there were no injuries, windows were shattered and trailers were knocked off their foundations. Residential and government buildings suffered structural damage, requiring foundation repair, and various sidewalks and roads needed repairs.
Earthquakes occur frequently in California, with building damage totaling nearly $3 billion per year. The Los Angeles-Long Beach-Santa Ana metropolitan area currently has the highest estimated annualized building loss and the highest annualized percent building loss in the state, followed by San Francisco-Oakland-Fremont, Riverside-San Bernardino-Ontario and San Jose-Sunnyvale-Santa Clara.
California homeowners can minimize the possibility of quake damage with regular building inspections by a foundation repair expert. Taking care of foundation cracks and similar types of problems will help maintain the home’s structural integrity and make it less likely to collapse during a quake. There are also many ways to strengthen a home so it can withstand quakes, like foundation bolting and earthquake retrofitting. For California homeowners, earthquake preparation is a must.
|
Teaching time and specifically teaching how to read the analog clock and elapsed time is often a challenge for parents and teachers. The reason is that the concept of time is still abstract for many students as it doesn’t connect to the other concepts of measurement. The students cannot see, touch, or weigh time. It is hard to visualize and compare time. Time does not use the standard place value. An hour is not 10 or 100 minutes.
We can teach time successfully by using a variety of ordered activities (find some print and digital ones at the end) and by including time activities in our everyday routine to help students connect their knowledge to everyday life and strengthen their understanding. Even after students learn to tell time, they will forget it if they don’t use it. There are plenty of opportunities every day to use a clock and discuss time. Below are a few tips that can be adjusted based on the grade level.
- When you give students time to complete a task, measure the time using a clock, a timer, or a stopwatch. When you say that we are going out in 10 minutes, keep the timer running and have them look at it often. This will help students understand the duration and the relation of seconds and minutes.
- When students are working in their groups, I assign a member the job of timekeeping. Since the jobs rotate, everyone gets a turn. The timekeeper has a clock or a timer, checks the time, and informs their teammates.
- Create a schedule of the day, and use clock faces to show the important times of the day. Start, recess, lunch, end. Talk about the changes in the time schedule, the time used for a subject, the length of the breaks, and more.
- Talk about time often. Ask questions like, how much time for recess? Where should the hour hand be to go for lunch? How about the minute hand? Is it before or after 12? Am or pm? “Look at the clock now. How will the clock look when it will be time to go out?”
- Ask students to time short activities with a stopwatch. Get a stopwatch that has a digital display similar to an analog clock. Instead of simply tallying the numbers, the stopwatch simulates a second hand moving around the face of a clock.
- Play games with the timer. Like, which team can build the tallest LEGO tower in 1 minute. The tallest stone tower. Give specific time for scavenger hunts.
- Create a Sun clock with your students.
Preparing to teach time.
- When starting with time, I use a clock divided into parts for each number. That helps the students understand that a specific hour is not only when the hand points at the specific number but when the hand is anywhere in the space of that number. For example, the hour is two at two o'clock, at a quarter past 2, at half past 2, and at a quarter to 3. Only once the minute hand reaches 12 does the hour change. We slowly move to activities with clocks without the divisions.
- Before teaching telling and writing time to the minute, make sure that your students can count to 60 by 5. Practice skip counting by 5 and talk about the magic number 60. What is half of 60? What is 1/4 of 60? 3/4 of 60? These activities will help them when counting minutes on the clock.
- Make sure that you have a big analog clock. You can add time vocabulary around/near it and add cardboard pieces around the clock to mark the minutes.
- Practice placing/writing the numbers for hours and minutes on a blank clock in the correct order and direction. Have the students create their own analog clock using paper plates. For younger students, you can provide a template to be placed on the plate.
- Use a real, big analog clock so that you can show the students that both hands move at the same time, what happens to the hour hand as the minute hand moves, and so on. For this, you can use a digital clock as well. Toy Theatre has a great one that allows for modifications like using one hand at a time and works perfectly for elapsed time problems.
- Students tend to forget which hand shows what. A little trick is to notice that the word hour is shorter than the word minutes and so is the hand that shows it.
- You can use number lines to help solve elapsed time problems (see the sketch after this list). Laminate the number lines so that students can use them many times.
- Use a gear clock for elapsed time so that students can see both hands moving as they move forward or backward. These clocks are good for students to use since they can only move the minute hand and the hour follows accordingly. Gear clocks are a useful manipulative to invest in.
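As referenced in the number-line tip above, the elapsed-time arithmetic itself reduces to minutes since midnight. Here is a minimal sketch (an illustration of the idea, not part of the linked resources):

def elapsed_time(start_h, start_m, end_h, end_m):
    # Convert both clock readings to minutes since midnight, then subtract.
    start = start_h * 60 + start_m
    end = end_h * 60 + end_m
    return divmod(end - start, 60)   # (hours, minutes)

hours, minutes = elapsed_time(9, 45, 13, 20)   # 9:45 am to 1:20 pm
print(f"{hours} hours and {minutes} minutes")  # 3 hours and 35 minutes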
Where is the clock?
This is a fun scavenger hunt game. Place cards with clocks around the classroom. Make sure they are visible. Stick one on the window, on the door, on the ceiling, board, floor... you get the point. Give the students a sheet with the times of the clocks in word form. They need to find the location for each of the times. For example, a quarter to five: it's the clock on the window. The students work in teams, and you can add a competitive element if you want.
Guess the time
A student sets the time on the big analog clock without showing it to the rest of the class. The other students ask yes/no questions and try to guess the time. They can ask questions like: Is it o'clock? Is it in the morning? Is it after lunch? Is it school time? Decide on the number of questions the students are allowed to ask in total. We play this in groups, and each group is allowed to ask 3 questions, one group at a time. They need to think well about their questions.
Find some free print to play games here.
Print and digital activity cards.
We have created a collection of activity cards for teaching time that includes
- Explanations for each concept, o’clock, half past, quarter past and to, minutes, seconds,
- Understanding the clock- placing the numbers for hours and minutes.
- Reading and writing time to the hour.
- Reading and writing time to the half-hour.
- Reading and writing time to the quarter-hour.
- Reading and writing time to the nearest 5, 10, 15, and 20 minutes.
Below is a small sample
and a quick video
Most of the activities include clocks divided into hour parts to help the students understand better. Example below.
There are activities with clocks without parts as well.
The activities are grouped to practice each concept individually and all the concepts together as well.
Below you will find a free and a premium version of the resource.
The premium resource includes 150 print and 150 digital activities. The digital activities are in Google Slides form and can be easily downloaded as a PowerPoint presentation.
For elapsed time activities check out our second collection of print and digital activities that includes:
- AM/PM activities
- 24-hour clock activities
- Starting with elapsed time: adding time, time before and after
- Finding times on the number line.
- Measuring the elapsed time between clocks.
- Word problems – measuring elapsed time in parts, finding the end time.
- Challenge problems
|
Lyme disease is an infectious illness caused when an individual is infected by a bacterium called Borrelia burgdorferi. It is often difficult to diagnose in time, but early detection is crucial in Lyme disease, as it spreads to the joints and even the nervous system over time. The disease is often diagnosed with the help of symptoms, the possibility of exposure to a vector, and laboratory testing. Lyme disease is not known to be contagious but can last in an infected person or animal for as long as six months to a year, even with adequate treatment.
How Can a Person Get Lyme Disease?
Lyme disease is transmitted to a person when they get bitten by a tick carrying the bacteria that causes it. The disease is mostly transmitted by ticks in the nymph stage, and symptoms often manifest a week after the bite. People are more likely to get Lyme disease if they have animals, or if they live in woody or grassy areas where ticks can be found. Ticks that transmit Lyme disease can often transmit other diseases as well.
Symptoms of Lyme Disease
The symptoms that Lyme disease presents tend to vary depending on the stage of the disease. One very common symptom is erythema migrans, a circular or oval-shaped rash which appears at the site of the tick bite. The rash is easier to notice in light-skinned people as it has a red or purple color. The rash is not usually itchy, and it often spreads gradually. Sometimes it can have a ring of lighter skin in its center.
Other symptoms people with Lyme disease may have are headaches, weakness, fever, difficulty sleeping, and muscle pain. Children with Lyme disease will also have these symptoms along with mood changes, aggression, and nightmares.
Symptoms that the infected person may develop if they do not begin treatment early are paralysis of the facial nerves, memory loss, arthritis, and meningitis.
Post-Lyme Disease Syndrome
Post-Lyme Disease Syndrome (PTLDS) is the continued manifestation of certain Lyme disease symptoms after treatment. About 20% of people who develop Lyme disease and receive treatment end up with the syndrome. PTLDS often affects the sufferer’s cognitive skills and mobility. Other symptoms experienced are aching joints, lack of focus, insomnia, fatigue, and swelling in the joints. It usually takes months or years before full recovery happens.
How to Treat Lyme Disease
If Lyme disease is discovered while it is still in the early stages, treatment usually consists of taking antibiotics for about two weeks to get rid of the infection. Treatment plans can differ depending on the stage of the disease and the person being treated. Certain medicines are not recommended for treating Lyme disease if the sick person is also suffering from facial paralysis. As a result, all symptoms must be identified before treatment begins to avoid further complications.
People with Lyme disease who experience symptoms that include arthritis and swelling in the joints are advised to reduce physical activities and to make use of support so they do not damage their joints. Treatment of Post-Lyme Disease Syndrome is usually focused on alleviating pain.
Preventing Lyme Disease
It is much better to prevent Lyme disease than to treat it. Prevention of such infectious diseases is mostly focused on keeping away the vector responsible for transmitting the disease. In this case, prevention is focused on eradicating ticks from the environment.
Here are some practical ways to prevent Lyme disease:
- Keep ticks away from animals: Whether the animals are kept as pets or reared for commercial purposes, it is important to keep them free from ticks so the ticks do not find their way into nearby homes. The best way to do this would be to make use of sprays, soaps, or other materials that will keep ticks away. A veterinary doctor must be consulted before these products are used on the animals so that products that could harm specific animals are not used.
- Avoid going to areas that may be tick-infested: You should stay away from places you suspect may harbor ticks. Like other pests, ticks can stay hidden in your home without your knowledge. They are also very tiny and it may be difficult to spot them. The moment you see one or two ticks moving around your home, you may want to do a thorough inspection to help find and get rid of them. To ensure your safety and that of those around you, it would be better to get professionals like those at Shoreline Pest Services to do this.
- Treat clothing with tick repellent: For people who live in areas with a lot of ticks, clothing, bed sheets, blankets and rugs can be treated with a safe chemical to aid in keeping ticks away. This is also something that professional pest removal companies like Shoreline Pest Services can help you with.
- Use environmental pesticides: In environments where ticks are high in population, homeowners could treat their compounds and surroundings with pesticides often so the ticks do not get into the house.
Ticks are disease-carrying agents like most pests and the best way to avoid critical conditions like Lyme disease is to exterminate ticks whenever you spot them. However, this is easier said than done. Because ticks are so small and are often found in areas with tall vegetation or a group of animals that is too large to contain, hiring a professional company to take care of your problem could be the easiest and quickest solution for you. They will have the necessary equipment to get rid of the plague and having a spot-on solution will save you money in the long run. The team at Shoreline Pest Services could be a great option for you, especially if you reside in Florida. If you don’t, make sure to check out pest control companies in your area that specialize in tick control and extermination.
|
The world of astronomy has been abuzz recently over the prospect of finding a new planet in our solar system. The idea was proposed by two scientists at the California Institute of Technology, who had been studying the trajectories of six remote objects orbiting far beyond Neptune. Noticing that the six orbits are clustered in an unnaturally skewed fashion, the scientists postulate that a larger mass is responsible for shepherding them into their current configurations.
Diagram illustrating how Planet Nine would influence the orbits of the six trans-Neptunian objects. (Image source)
This as-yet hypothetical planet would have a mass of around 10 Earths, and be so remote as to only complete an orbit around the Sun once every 10,000 to 20,000 years. Dubbed Planet Nine, it would fundamentally change our understanding of our solar system.
Understandably, this development has generated widespread excitement amongst the scientific community, while also attracting attention from most mainstream news outlets. Thus far, however, no visual evidence has been obtained to confirm the existence of this new planet. With only a few telescopes powerful enough to locate Planet Nine in the vastness of the observable sky, there are no estimates as to when, if ever, it can be found.
Those well-versed in the history of planet discovery may find this situation somewhat familiar. Late in the 19th century, mathematicians reviewed anomalies in the movements of Uranus and Neptune, and speculated that the gravity of an additional planet was responsible for these discrepancies. A massive search effort ensued, eventually leading to the discovery of Pluto in 1930. Alas, analyses revealed that Pluto was far too small to influence the movements of the two gas giants, and further study found that there was no issue with their orbits in the first place; the mathematicians had simply made a mistake. It was a false alarm of celestial proportions, and Pluto has since been demoted to ‘dwarf’ planet by the International Astronomical Union, the field’s ruling body.
In the case of Planet Nine, computations suggest a 0.007 percent chance (approximately 1 in 15,000) that the orbital patterns of the six trans-Neptunian objects are simply a result of coincidence, and not due to another planet. A priori, this makes for a very strong case, but such compelling odds still cannot substitute for cold, hard proof.
As Karl Popper famously asserted, the inductive nature of science means that it is falsifiable, and therefore subject to constant revision. It must be noted that current calculations only consider the six largest trans-Neptunian objects, disregarding smaller bodies which ostensibly would also be influenced by the gravity of Planet Nine. As the scope of simulations widen to include the behaviour of these smaller objects, any fresh evidence to the contrary would deal a huge blow to this working hypothesis. The idea of a ninth planet may be the most plausible explanation for phenomena observed thus far, but if empirical studies show otherwise, or more convincing theory comes to the fore, Planet Nine may well have to be banished to the wasteland of scientific fads and wishful thinking.
Falsifiability is not about whether a theory can be proven right, but rather about whether it can be proven wrong. For instance, the hypothesis “All swans are white” would be falsified by finding a single black swan. (Image source)
If this theory is indeed nothing more than speculation at the moment, what explains the growing hype surrounding it? Perhaps, more than anything, the search for Planet Nine serves as a reminder that major scientific breakthroughs can occur at any given time. Living in the hypermodern 21st century, one may often feel like there is nothing new under the sun, nothing significant left to discover or create. Yet as theories and technologies mature, mankind inches ever closer to solving some of the most pressing and intractable problems of our time, such as curing cancer, generating clean energy, or understanding (and even creating) consciousness.
Whether or not Planet Nine can indeed be located, the mere possibility of its existence might well prove sufficient to capture our collective imagination. Ideas as outlandish as the discovery of a new planet may seem far-fetched, but they inspire us to see past the limits of our workaday world, and contemplate what else could be instead. And, with any luck, dreams like this will continue to awaken the intellectual curiosity in future generations, emboldening them to take risks and ask difficult questions, both of which are crucial to the next phase of progress, scientific or otherwise. Maybe that alone is enough; after all, as Socrates himself once remarked, wisdom begins in wonder.
Header and thumbnail images from Wikimedia Commons.
About the Author
Wei Xiang’s two favourite things are books and music. His idea of a good night is one spent reading a thought-provoking novel, with an album playing softly in the background. Of course he has many other interests as well, but those tend to involve, you know, going outside.
|
The Problem with Antitrust Laws
During the 19th century, the robber barons were dominating the American economy. A handful of people had more wealth and power than the entire nation. It is for this reason that they would collude with each other to keep everyone else at bay. These robber barons were depriving everybody else of fair opportunity as a result of this collusion. To prevent this from happening, the antitrust laws were created. The laws were relevant about 150 years ago. However, today these laws hamper the working of free enterprises. In this article, we will understand the major problems with these laws:
Wrong Conception of Coercive Monopolies
One of the stated functions of antitrust laws is to ensure that coercive monopolies are not established in industries. The underlying belief is that if these big organizations are allowed to have a free run, the end result will be the formation of monopolies which will overcharge consumers. The problem with this belief is that it is just not true. The reality is that monopolies cannot be formed in a free market no matter how big a company gets. Monopolies need some form of regulation which prevents the entry of new competitors into the market. This entry barrier can only be provided by the government. Hence, it would be safe to say that in the absence of government, there can be no monopolies at all. The whole antitrust act, therefore, seems like a sham. If the government really wants to prevent the rise of monopolies, it must abolish regulations which create entry barriers in free markets.
Antitrust Laws Are Vague
Antitrust laws are extremely vague. Bureaucrats can make them look like whatever they want to. For instance, if a company is charging a high price for its product, they can make it look like monopoly overcharging. On the other hand, if they charge the same price as their competitors, bureaucrats can make it appear like a case of collusion amongst competitors. Similarly, if the company charges prices which are lower than the competition, they can be accused of predatory pricing.
The laws fail to clearly define what constitutes an antitrust violation. Instead, the onus is left on the bureaucrat, who could be using government-given authority to fleece these organizations.
Antitrust Makes Mergers And Acquisitions Difficult
There is nothing wrong with an organization increasing in size. Big organizations have always been more efficient. This phenomenon is known as economies of scale. Antitrust laws prevent organizations from achieving economies of scale. Many mergers and acquisitions have been disrupted by these antitrust laws. It shouldn't be illegal to buy out another company if a fair price is being paid. By preventing mergers and acquisitions, antitrust laws impede the most efficient arrangement of capital. These laws protect inefficient managers at the cost of the greater economic good.
Antitrust Laws Take The Power Away From Consumers
Markets are the most effective mechanism known to mankind. Consumer needs can be best met by free markets. Any alternative is always inferior. However, it seems like government officials do not believe this argument. They believe that they somehow understand the interests of the consumer better than the consumer does. They also believe that their utopian regulations and expensive law enforcement mechanisms will ensure that those interests are served in the best possible way. The problem is that consumers don't have a say in this process. They elect a government once every four years. However, they vote for products each time they go to a market. Antitrust laws subvert the market mechanism.
Government Collusion and Corruption
Any behavior which can be considered to be predatory and monopolistic is temporary at best. For instance, a company can only engage in predatory pricing for a limited amount of time. Sooner or later, they will run out of money, and the free market will ensure that the competition emerges again. Also, since the monopoly would have bled money for a long time, it would be considerably weaker. It is impossible for corporations to create entry barriers on their own.
It is only with the power of law that special regulations can be passed. These special regulations are the ones that rule out the competition. Also, it needs to be noted that the big organizations do not need to do any work. The government keeps the competition at bay on their behalf. This is a system based on cronyism and favoritism. Hence, it inevitably boils down to a complex web of collusion and corruption which sacrifice consumer interests for personal profits.
Antitrust Laws Are Against Innovation
The underlying objective of a company is to earn maximum profits and grow as big as it can. The problem with antitrust laws is that they prevent the company from growing beyond a certain point. Hence, the company with the maximum resources, which can invest the maximum amount, is prohibited from growing. As a result, technological development stagnates. Also, since competition is restricted by antitrust laws, innovative companies cannot reach the marketplace. The end result of antitrust regulations is that innovation is stifled and economies perform at a suboptimal level. These economies then face competition from other nations where such laws are not in place. Needless to say, over a period of time, the lack of innovation kills entire industries.
|
A Fat is a Fat is a Fat...Right?
This week, the U.S. Food and Drug Administration (FDA) ruled that, over the next three years, trans fats must be removed from a wide range of processed foods. Artificially created fats developed to be solid at room temperature, trans fats raise the risk of heart disease.
With fats in the news, we thought it was a good time to dig a little deeper and discuss the various dietary fats. You may know that trans fats are bad, but do you know why? And do you know how they are different from saturated, monounsaturated and polyunsaturated fats?
Fats get a bad rap, primarily for their calorie content. At 9 calories per gram, it doesn’t take much fat to push our daily calorie intake too high. Only one tablespoon of olive oil contains 119 calories. But fats are essential for our bodies to function so you need to get some in your diet every day.
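A quick arithmetic check of those figures (the tablespoon weight is an approximation, not from the article):

CALORIES_PER_GRAM_FAT = 9
tablespoon_olive_oil_grams = 13.5   # a tablespoon of oil weighs roughly 13-14 g
print(tablespoon_olive_oil_grams * CALORIES_PER_GRAM_FAT)  # ~121.5 calories, close to the 119 quoted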
To better understand the differences in fats, we need a little chemistry lesson.
A dietary fat molecule is primarily composed of a long chain of carbon and hydrogen atoms. The carbon atoms may be linked by single or double bonds and the length of the chain can vary. The length of the chain and the number of carbon double bonds and their positions help define the type of fat and its role in the body.
The simplest fat to describe is a saturated fat. It has a uniform-looking chain of carbon atoms all linked by single bonds. All other bonding sites along the chain are occupied (saturated) by hydrogen:
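A rough text schematic in place of the diagram (stearic acid, an 18-carbon saturated fat plentiful in butter, is chosen here purely as an illustrative example):

CH3-(CH2)16-COOH (every carbon-carbon bond is a single bond)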
This structure makes saturated fatty acids solid at room temperature. Butter is a great example of a saturated fat.
Monounsaturated fats have one (mono) double carbon bond in the chain. Polyunsaturated fats have more than one (poly) double carbon bond. Both of these fats are liquid at room temperature. You’ll find monounsaturated fats in great abundance in olive oil while polyunsaturated fats are found in fish, nuts and seeds.
Here’s what a polyunsaturated fat might look like:
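A rough text schematic in place of the diagram, using alpha-linolenic acid (the omega-3 fat in flax) as an assumed example:

CH3-CH2-CH=CH-CH2-CH=CH-CH2-CH=CH-(CH2)7-COOH (three double bonds)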
The position of the first double carbon bond, counted from the methyl (omega) end of the chain, defines the type. An omega-3 fat has its first double bond at carbon atom #3, while an omega-6 fat has its first double bond at carbon atom #6. Omega-3s are especially desirable as they have multiple health benefits, especially for your cardiovascular system. Walnuts, flax and chia are all good sources of omega-3s.
You might have noticed that the hydrogen atoms surrounding a carbon double bond in poly- and monounsaturated fats all fall on the same (cis) side of the carbon chain. This is the structure nature intended.
Trans fats, also referred to as hydrogenated and partially hydrogenated fats, are artificially modified so that the hydrogen atoms end up on opposite (trans) sides of the carbon chain. These fats are found primarily in margarines and in processed and fast foods.
This seemingly minor variation in chemical structure completely changes the functionality of the fat. Rather than being liquid at room temperature, trans fats are solid, and rather than being vulnerable to rancidity, trans fats have a long shelf life.
In the 1970s, when saturated fats were thought to be major contributors to heart disease, the use of trans fats was encouraged, since they are technically an unsaturated substitute. Remember the commercials telling us to switch from butter (saturated) to the “healthier” choice margarine (trans)?
But it turns out trans fats are actually harmful to cell structure and promote heart disease to a greater extent than saturated fats ever could. In fact, the FDA estimates that eliminating these fats from the American food supply would translate into 20,000 fewer heart attacks and 7,000 fewer deaths per year.
Ironically, that old commercial was right after all: “It’s not nice to fool Mother Nature!”
There are a whole lot of details here, maybe more than you need. But here’s the takeaway:
- Fats are essential for our bodies to function properly, so we need some fat every day.
- Fat calories add up fast so limit your consumption.
- Avoid trans fats at all costs.
- Poly- and monounsaturated fats, found in plant-based foods and fish, are beneficial for cardiovascular health.
|
1. Based on the ideal gas law, there is a simple equivalency that exists between the amount of gas and the volume it occupies. At standard temperature and pressure (STP; 273.15 K and 1 atm, respectively), one mole of gas occupies 22.4 L of volume. What mass of methanol (CH3OH) could you form if you reacted 3.39 L of a gas mixture (at STP) that contains an equal number of carbon monoxide (CO) and hydrogen gas (H2) molecules?
2. Assuming the temperature and volume remain constant, changes to the pressure in the reaction vessel will directly correspond to changes in the number of moles based on the ideal gas law. Suppose the reaction between nitrogen and hydrogen was run according to the amounts presented in part A, and the temperature and volume were constant at values of 298 K and 2.00 L, respectively. If the pressure was 8.02 atm prior to the reaction, what would be the expected pressure after the reaction was completed?
3. Hydrogen has also been considered as an alternative fuel for vehicles designed to combust hydrogen and oxygen, which produces water as a product. However, concerns were raised because methane is typically used on a large scale to produce hydrogen gas. Assume that a gallon of gasoline contains 2400 g of carbon. If a gasoline engine achieves 30 miles per gallon, each mile consumes 80 g of carbon (about 107 g of methane contains 80 g of carbon). Alternatively, a hydrogen engine can achieve 80 miles per kilogram of hydrogen gas. What is the mass of methane (CH4) needed to produce enough hydrogen gas (H2) to drive one mile using the theoretical hydrogen engine?
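A numerical sketch of problems 1 and 3 follows (problem 2 cannot be finished here, since the "part A" amounts are not given; the relevant relation at constant T and V is simply P_after = P_before × n_after/n_before). The sketch assumes the synthesis reaction CO + 2 H2 → CH3OH for problem 1 and complete steam reforming, CH4 + 2 H2O → CO2 + 4 H2, for problem 3; neither reaction is spelled out in the problems themselves.

```python
# Worked sketch for problems 1 and 3 under the assumptions stated above.

V_STP = 22.4  # L occupied by one mole of ideal gas at STP

# --- Problem 1: CO + 2 H2 -> CH3OH ---
n_total = 3.39 / V_STP            # total moles of gas in the 3.39 L mixture
n_CO = n_H2 = n_total / 2         # equal numbers of CO and H2 molecules
n_CH3OH = n_H2 / 2                # H2 is limiting: 2 mol H2 per mol CH3OH
M_CH3OH = 12.01 + 4 * 1.008 + 16.00
print(f"Problem 1: {n_CH3OH * M_CH3OH:.2f} g CH3OH")   # ~1.21 g

# --- Problem 3: CH4 + 2 H2O -> CO2 + 4 H2 (assumed reaction) ---
g_H2_per_mile = 1000 / 80         # 80 miles per kg H2 -> 12.5 g H2 per mile
n_H2_needed = g_H2_per_mile / (2 * 1.008)
n_CH4 = n_H2_needed / 4           # 4 mol H2 produced per mol CH4
M_CH4 = 12.01 + 4 * 1.008
print(f"Problem 3: {n_CH4 * M_CH4:.1f} g CH4")          # ~24.9 g
```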
|
Here are some examples of English nouns: time, year, day, company, car, pumpkin, dog, world, life, work and many others. As you can see, nouns in English are diverse, and you need to integrate them into a phrase for it to make sense. Nouns have many uses: depending on the situation, a noun can appear in a phrase as a subject, an object, a complement or the object of a preposition. In a few situations nouns can be used to describe other nouns, as in “rugby ball”. And some verb forms are used like nouns; these are called verbal nouns.
1. Abstract nouns refer to intangible things: concepts, ideas, states of mind, traits, qualities or feelings. You cannot experience an abstract noun with your five senses.
Sentence: Send them my love. (Love is a thing that exists, though you cannot see or touch it.)
2. Collective nouns are names given to groups: a team of players, a pack of thieves, a staff of employees, a tribe of natives, a flock of birds, a hive of bees or a group of islands.
Sentence: A pack of thieves stole my car.
3. Common nouns are generic nouns. They name people, places and things or ideas that are not specific. Examples are shoe, dog, cat, city, woman, and man. They do not need to start with a capital letter unless they are at the beginning of a sentence.
Sentence: The woman took over the business.
4. Compound nouns are made up of two or more words. Examples are haircut, applesauce, dishrag, or tablecloth.
Sentence: I put the tablecloth on the wrong table.
5. Concrete nouns name things that physically exist, things you can perceive with the five senses. Concrete nouns can be names of people or of actual things like a cat or dog, chocolate or milk.
Sentence: My cat drank my chocolate milk (in this case, chocolate is an adjective and describes the milk).
6. Countable/Uncountable nouns: countable nouns can be counted and take an “s” in the plural, like poem/poems, paper/papers, light/lights or Sam/Sams. Uncountable nouns cannot be counted and do not take an “s” at the end of the word: milk, rice, hair (there is no such word as rices, and “milks” is rare at best). Note that mice is not uncountable; it is the irregular plural of mouse.
Sentence: The mice ran up the clock.
7. Gerunds are a little tricky. A gerund is a noun formed by taking a verb and adding the suffix “-ing.” For example, the gerund form of give is giving, of talk is talking, and of run is running.
Sentence: Running away from punishment is not cool.
8. Plural nouns are just what they sound like. These refer to more than one person, place, thing or idea. They usually end in an “s” and include words like boxes, roads, desks, chairs, or televisions.
Sentence: Boxes lined the walls of the attic.
9. Predicate nouns are nouns that follow linking verbs. For example: The play is a comedy. Mary is a girl. Mrs. Smith is the principal.
10. Proper nouns are specific people, places, things or ideas. They always are capitalized. Some proper nouns are Christianity, London, Mary Smith, or McDonalds.
Sentence: London is a great place to visit.
Singular/Plural nouns: Each noun takes a different form based on what it needs to express. Some nouns have only a singular version, and some have only a plural version. To form the plural of most nouns, you add “s”, as in “pencils”. If the noun ends in x, ch or s, you add “es”. For nouns ending in “f” or “fe”, the “f” turns into “v” and “es” is added, as in knife/knives.
Possessive Case nouns: These nouns show possession of a certain item, mostly by adding “of” or “'s” (or just an apostrophe), depending on the situation. Example: “my mother's garden”.
|
Brynn O'Donnell is a freshwater ecosystem scientist, with a focus on urban biogeochemistry. She also believes in the importance of science accessibility, and practices this through telling stories of the human relationship with water through her podcast, Submerge. She's finishing her graduate degree in Biological Sciences at Virginia Tech, where she studies the impact of storm disturbances on stream health.
Across the country, buried beneath the pavement you walk on, an invisible network of waterways flows through the darkness. These are ghost streams, and they're haunting us.
In their former lives, they wound through natural landscapes above ground; it’s only through decades of development that humanity has relegated them beneath the earth's surface, enclosing the waterways in tombs of concrete and iron. The effects, decades later, plague us. Without a natural habitat to snake through, these streams carry downstream an excessive amount of pollutants (like salt and sediment) and nutrients (like nitrogen and phosphorus) because they can't divest these materials into their surrounding environs.
Here’s how ghost streams happen: Civilization grows near water sources, clustering around lakes, rivers, and springs that provide the resources required for drinking, bathing, and irrigating. As we industrialized drinking water infrastructure and outsourced water sources to larger, distant reservoirs and aquifers, most towns stopped using the smaller springs that originally drew them to a place. With that shift, many of the original freshwater sources go unused. Without relying on them for drinking water or irrigation, they become nothing but nuisances to development. If you want to build on a piece of land, the stream that threads through it has got to go. But streams are formidable obstacles; you can’t just demolish them and move on. Water needs to flow, so when we construct on land traversed by a stream, we bury it.
The move isn’t a recently devised trick. The western world has been moving streams underground since the Roman Empire. Between then and now, our stream burial technology has not undergone any revolutions, aside from separating stormwater and raw sewage and using different pipe materials.
Most people are not aware of the historic streams that have been buried, except for the curious few who wonder, for instance, why a street in downtown New York City is named “Canal.” In fact, we’ve buried streams all across the nation, in Los Angeles, D.C., and beyond. The U.S. Environmental Protection Agency estimates that we’ve buried 98 percent of the streams that once crossed through Baltimore’s urban core.
Although we’ve buried these streams, we haven’t put them to rest. They are still flowing, and still take in all the things we shed, spill, drop, and leak into our landscape. As rain runs over paved streets and sidewalks, it sweeps everything from the urban world directly into the nearest waterbody. Urban runoff makes its way to these hidden streams.
Unpiped, healthy streams naturally filter much of the water that flows into them. Smaller streams are mediators of human effluent: receiving the waste discharged from point sources (like industrial pipes and wastewater treatment plants) and from nonpoint sources (like runoff from streets and agricultural activities) and using tools like microbes, algae, rocks, and soil to slowly unload and transform excess nutrients and pollutants. Unwittingly tasked with filtering chemicals and solutes, natural streams become highly important to human health. And when we bury streams, we rob ourselves of our natural purifiers.
Streams typically teem with life: algae, fish, and invertebrates. A stream is home to microbes that require light, nutrients, and a natural stream bottom. These microorganisms are the power players that remove those excessive nutrients. But most ghost streams don’t host much life at all. When we bury a stream underground, we cut it off from light and the stream bottom. Only nutrients remain, which are funneled downstream, mixing city runoff with fresh water in the nearest river.
“Nutrients” sound good, but they can wreak havoc in downstream waterbodies, polluting waterways, creating coastal dead zones, and feeding thick blooms of toxic cyanobacteria.
Luckily, towns are beginning to acknowledge the importance of these buried streams in an effort to reduce the terrors of urban runoff. Simply letting locals know a stream exists beneath them, and that the stream receives everything, untreated, that goes down the drain, encourages people to keep their waste out of the secret streams.
For example, small frog statues adorn city drains in Blacksburg, Virginia, marking the drains directly above the local ghost stream. It’s a callback to the ancient Romans, who marked their great buried stream, the Cloaca Maxima, with a shrine to Cloacina, the sewer goddess. Baltimore stencils its storm drains, and Richmond, Virginia and Dayton, Ohio want to do the same using the work of local artists. Entry points to waterways are embellished with paintings of fish, octopuses, and otters encircled by cautionary reminders like “all water drains to the sea” and “only rain should go down the drain.” Other storm drain murals are decorated with landscape paintings of scenic wildlife, images of kelp with plastic and litter for companions, or paintings of fish where grated drains act as mouths.
Some places are going further, ripping up pavement, shattering pipes, and hammering away the concrete to exhume ghost streams. Daylighting, as the procedure is called, opens the streams up to the sun and restores the adjacent land connection. This begins the process of healing, re-growing vegetation, and encouraging microbes and algae to come back. It’s great, but unburying a stream is expensive and requires strong community backing, and community support for daylighting a stream can’t be mustered if residents aren’t aware of the buried stream itself. Art is a great first step.
By recognizing ghost streams and getting locals engaged, we can work toward healing the waterways by limiting the pollutants poured into them, and even eventually unearthing them from the ground.
|
Use index cards with before and after scenes to help them remember what comes next. Index cards with the lines on them may also help with memorization. But try mixing it up and writing the lines that come afterward or before. These should include both your child's other lines and lines from other roles in the play or musical. This helps the child remember in which order to do things when it's time for the performance.
Do mini-versions of the play or musical during practice. Not only does this help with memorization, but it also helps prevent nervousness. Many times kids get stage fright because they are afraid they'll forget their lines. It won't prevent all cases, especially those unrelated to forgotten lines, but it can certainly help. Some kids learn by performing actions, and even for those who don't, hands-on experience is beneficial.
Record them saying the lines and play the recording back to them. This can be a fun memorization method for kids because they get to record themselves or have you do it. When they record the lines, they are free to read from their study book or sheet. The lines can be played back in the car or while they do other things. Auditory learners will benefit greatly from this line-memorization method.
Have them write down the lines. This helps visual learners, but it also helps the brain process the information. By writing the lines for the play or musical down, a child needs to read them as well as think about them. Repeating the process helps keep them memorized. In each session, have the child write the lines down at least three times and read them aloud afterward.
Be consistent and persistent with a variety of methods. Practicing often for a good length of time will prove to be beneficial. Make sure they spend ample time every day practicing their lines, using the above methods, as well as any others you can think of. No matter their learning style or how the scenario plays out when they perform, they can feel confident they know their lines all around.
*I originally published a version of this via Yahoo Contributor Network
|
Gradient, Divergence and Curl
Gradient, divergence and curl are frequently used in physics. Their geometric meanings, however, are not always well explained, which is why I hope those meanings will become clear by the end of this post.
Curl measures local rotation, just as the word itself suggests.
The curl at a singular point doesn’t always reveal the singularity. One example is the magnetic field generated by a dipole, say a magnetic dipole, which should be

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{3\hat{\mathbf{r}}(\mathbf{m}\cdot\hat{\mathbf{r}})-\mathbf{m}}{r^3} + \frac{2\mu_0}{3}\,\mathbf{m}\,\delta^3(\mathbf{r}),$$

where the vector potential is

$$\mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{\mathbf{m}\times\hat{\mathbf{r}}}{r^2}.$$

The reason for the extra Dirac delta is that the vector potential is singular at the point 0, while the curl of such a function doesn’t really show the singularity of the field. We need to calculate the integral of the curl over a small sphere around the origin without taking the curl directly, i.e.,

$$\int_V \nabla\times\mathbf{A}\;dV = \oint_S \hat{\mathbf{n}}\times\mathbf{A}\;da = \frac{2\mu_0}{3}\,\mathbf{m},$$
in which we used a trick similar to the divergence theorem.
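As a sanity check, one can verify symbolically that away from the origin the curl of this vector potential reproduces the regular part of the dipole field, so the delta term is invisible to a pointwise curl. Here is a minimal SymPy sketch of that check (my own illustration, not from the original post, with the dipole moment assumed along z):

```python
# Verify curl A = (mu0/4pi)[3(m.rhat)rhat - m]/r^3 for r != 0, with m = m*zhat.
import sympy as sp
from sympy.vector import CoordSys3D, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
m, mu0 = sp.symbols('m mu_0', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Vector potential of the dipole: A = (mu0/4pi)(m x r)/r^3, and m x r = m(-y, x, 0)
A = (mu0 * m / (4 * sp.pi)) * (-y * N.i + x * N.j) / r**3

B = curl(A)

# Expected regular part of the field, written out for m along z
B_expected = (mu0 * m / (4 * sp.pi)) * (
    3 * z * (x * N.i + y * N.j + z * N.k) / r**5 - N.k / r**3
)

# Each component of the difference simplifies to zero (valid for r != 0)
for unit in (N.i, N.j, N.k):
    print(sp.simplify((B - B_expected).dot(unit)))  # prints 0, 0, 0
```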
|
Curious about Mars? NASA called it CURIOSITY for a reason.
When NASA scheduled its next rover to Mars, it was to learn more about whether Mars could be habitable for humans, particularly in terms of its weather. Does Mars have radiation? How would humans protect themselves from such a harsh environment, with year-round freezing temperatures and a thin atmosphere composed mostly of carbon dioxide (95.32%, to be exact)? And what about water: as in, where is it? Any ice?
Studies have suggested that Mars was once covered in water; this is called the Mars ocean hypothesis (MOH). It states that nearly a third of the surface was at one time covered by an ocean of liquid water early in the planet’s development (or geological history).
2001: According to Astrobiology Magazine,
“To test the hypothesis that oceans once covered much of the northern hemisphere of Mars, scientists at Malin Space Science Systems (MSSS) of San Diego, CA, have used high resolution images of Mars taken with the Mars Orbiter Camera (MOC) on Mars Global Surveyor.
“The ocean hypothesis is very important, because the existence of large bodies of liquid water in the Martian past would have had a tremendous impact on ancient Martian climate and implications for the search for evidence of past life on the planet,” said Dr. Kenneth Edgett, a staff scientist at MSSS.
Features in earlier Mars probes, in particular the startling Viking images, led a number of researchers to look for remnants of ancient coastlines and further raised the possibility that such a body of water once existed.”
The idea stayed in limbo for quite some time; water on Mars was still only a hypothesis, and there was no proof.
“Evidence for this ocean includes geographic features resembling ancient shorelines, and the chemical properties of the Martian soil and atmosphere. Early Mars would have required a denser atmosphere and warmer climate to allow liquid water to remain at the surface.” We are talking 3.8 BILLION years ago, so let’s not get too excited!!!!
By 2009, it had all changed!
“In a new study, scientists used an innovative computer program to produce a new and more detailed global map of the valley networks on Mars, which adds to the growing body of evidence suggesting the Red Planet once had an ocean.” (http://www.phenomenica.com/2009/11/new-global-map-of-mars.html) “Scientists have previously hypothesized that a single ocean existed on ancient Mars, but the issue has been hotly debated.”
So for decades Mars was under the microscope, and let’s get to the punchline here, shall we: it was pronounced uninhabitable for humans, period. That is, until 2012. What changed?
Let’s start with the current announcement and backtrack from there. I’ll let you in on a little hypothesis of my own, and then you can decide.
Here is a recent photo from NASA’s Rover Curiosity:
The Huffington Post recently reported that
“Researchers from the Australian National University have determined that very large regions under the surface of the red planet may contain water and have sufficiently comfortable temperatures for Earth-based life — albeit microbial life.
“We found that 3 percent of the volume of Mars is habitable in terms of having the right temperatures and pressures for liquid water and life,” astrobiologist Charley Lineweaver told The Huffington Post by email. “The biggest surprise is that extensive regions of Mars could be habitable in terms of temperature, pressure and water.” http://www.huffingtonpost.com/2011/12/13/life-on-mars-we-could-be-martians_n_1144790.html
*NOTE: photos on this article have EXPIRED. (Whatever that means)
NASA is really pushing the scientific community at this point to suggest that Mars will soon be habitable for humans. But how is this possible? What happened, and what is the true agenda behind this?
Here are more photos of proof NASA is bringing forward …
Observations by NASA’s Mars Reconnaissance Orbiter have detected carbon-dioxide snow clouds on Mars and evidence of carbon-dioxide snow falling to the surface. Image credit: NASA/JPL-Caltech
PASADENA, Calif. — NASA’s Mars Reconnaissance Orbiter data have given scientists the clearest evidence yet of carbon-dioxide snowfalls on Mars. This reveals the only known example of carbon-dioxide snow falling anywhere in our solar system.
Frozen carbon dioxide, better known as “dry ice,” requires temperatures of about minus 193 degrees Fahrenheit (minus 125 Celsius), which is much colder than needed for freezing water. Carbon-dioxide snow reminds scientists that although some parts of Mars may look quite Earth-like, the Red Planet is very different. The report is being published in the Journal of Geophysical Research.
“These are the first definitive detections of carbon-dioxide snow clouds,” said the report’s lead author, Paul Hayne of NASA’s Jet Propulsion Laboratory in Pasadena, Calif. “We firmly establish the clouds are composed of carbon dioxide — flakes of Martian air — and they are thick enough to result in snowfall accumulation at the surface.”
The snowfalls occurred from clouds around the Red Planet’s south pole in winter. The presence of carbon-dioxide ice in Mars’ seasonal and residual southern polar caps has been known for decades. Also, NASA’s Phoenix Lander mission in 2008 observed falling water-ice snow on northern Mars.
Mars’ south polar residual ice cap is the only place on the Red Planet where frozen carbon dioxide persists on the surface year-round. Just how the carbon dioxide from Mars’ atmosphere gets deposited has been in question. It is unclear whether it occurs as snow or by freezing out at ground level as frost. These results show snowfall is especially vigorous on top of the residual cap. http://www.nasa.gov/mission_pages/MRO/news/mro20120911.html
So what has changed in the past two decades? Well, one thing for sure is a new phenomenon here on Earth called Geoengineering, or Chemtrails. And another term called Terra Forming. Here is a source document from 2009:
First, check this out http://www.nasa.gov/pdf/672319main_MPPG%20NAC%20REV%2010.pdf
This explains the following:
“MPPG (Mars Program Planning Group) is delivering options for NASA’s consideration on a new architecture sequence of interconnected missions with particular attention to 2018/2020 opportunities that follow MSL, Curiosity, Maven, and ESA/TGO Missions.”
Also from NASA’s website, referring to TERRA-FORMING:
“Returning to the most important reason for a new lunar program, dispersal of the human species, the most promising site for such dispersal is obviously Mars, now known to have an atmosphere and water. Mars itself is obviously a fascinating object for exploration. But it may even now be marginally habitable for astronaut visits, and in the very long view, might be “terraformed,” or engineered to have a more Earth-like atmosphere and climate. This was described in Kim Stanley Robinson’s trilogy, Red Mars and its successors Green and Blue Mars. A second Earth, so to speak, would greatly improve our chances of surviving cosmic catastrophes.” ~Paul D. Lowman Jr.
14 January 2008
What is terra-forming, and who conducts such a task?
Terra Forming means “Earth-shaping”.
Terraforming of Mars
Artist’s conception of the process of terraforming Mars.
The terraforming of Mars is the hypothetical process by which the climate, surface, and known properties of Mars would be deliberately changed with the goal of making it habitable by humans and other terrestrial life, thus providing the possibility of safe and sustainable colonization of large areas of the planet. The concept relies on the assumption that the environment of a planet can be altered through artificial means; the feasibility of creating a planetary biosphere is undetermined. There are several proposed methods, some of which present prohibitive economic and natural resource costs, and others which may be currently technologically achievable. http://en.wikipedia.org/wiki/Terraforming_of_Mars
The term Terra Forming is sometimes used more generally as a synonym for planetary engineering, although some consider this more general usage an error. The concept of terraforming developed from both science fiction and ACTUAL SCIENCE.
The following link describes:
National Aeronautics and Space Administration: Earth vs. Mars
Remote satellite images of Earth and Mars are used to compare and contrast physical processes that occur on both planets. Students:
- Identify similarities and differences between the physical processes that occur on Earth and Mars
- Classify images of Earth and Mars by observing physical features in each image
- Speculate about the physical features observed in each image
Now hold on to your seats.
Another definition (per NASA) for Terra Forming is GEOENGINEERING. So what is the point here? Are you connecting any dots yet?
GEOENGINEERING: planetary engineering applied specifically to the Earth. It includes only those macroengineering concepts that deal with the alteration of some global parameter, such as the greenhouse effect, atmospheric composition, insolation or impact flux.
Back to Mars:
Beginning in 1985, Martyn J. Fogg began publishing several articles on terraforming. Fogg was part of The British Interplanetary Society (BIS), founded in 1933 by Philip E. Cleator, the oldest space advocacy organisation in the world, whose aim is exclusively to support and promote astronautics and space exploration. Check out their website; it will blow your mind! http://www.bis-space.com/
” Once conditions become more suitable for life, the importation of microbial life could begin.” ~ Martyn J. Fogg
Is it possible that science is experimenting right here on planet Earth, terraforming, engineering, geoengineering the Earth to test it on all of us, all living beings, all LIFE FORMS, to be sure it works for another planet such as Mars?
“The colonization of Mars by humans is the focus of serious study because surface conditions, such as the availability of frozen ground water, make it the most hospitable planet in the solar system other than Earth. The Moon has been proposed as the first location for human colonization given its close proximity, but Mars has twice the gravity, more water (in ice form) and a thin atmosphere, giving it the potential capacity to host human and other organic life in more abundance than on the Moon. Both the Moon and Mars, as potential settlement sites, have the disadvantages of cost and risk associated with landing within gravity wells, which may make asteroids another option for early expansion of humans into the solar system.” http://en.wikipedia.org/wiki/Colonization_of_Mars
WHY are statements such as these being made by the scientific community?
“Recent observations by NASA’s Mars Exploration Rovers, ESA’s Mars Express and NASA’s Phoenix Lander confirm the presence of water ice on Mars. Mars appears to have significant quantities of all the elements necessary to support Earth-based life.”
We will continue to update you on this story, as we promised you in part I. http://www.thetruthdenied.com/news/p=5052
Is it possible that the missions to Mars are happening because scientists and astronauts are ready for a visit to the red planet? Or could it be that after years of GEOENGINEERING planet Earth, terraforming the Earth, Mars is ready to have visitors?
One more thing. You thought I forgot about the very first question that I asked in this article, you know, the one about why NASA chose to use the pyramid on the U.S. dollar as an example to show off the power of their new lens? Do you know the answer yet?
Please see PART ONE HERE: http://www.thetruthdenied.com/news/2012/08/05/curiosity-lands-on-mars-see-mars-photos-and-livestream/
|
Shepherds and Their Flocks on the Argive Plain, Greece
- Special Collections > Keystone Slides
- tiff scanned file from original glass slide
- One third of the land of Greece is in pasture and meadow. Most of the pasture area is stony upland, not fit to cultivate. Sheep, goats, horses, mules, cattle, and hogs are herded in the pastures. Sheep and goats are the most important of these. The country contains about 3½ million sheep, and 2½ million goats. Such a scene as the one you observe is common. On the wind-swept plains the shepherds herd their small flocks. In the summer their garb is the usual belted tunic and breeches, made of cloth. Their caps are kerchiefs tied under the chin. In the winter their coats are often of sheepskin, with the wool turned inside. The Gulf of Corinth cuts the peninsula (pĕn-ĭn´sū-lå) of Greece almost in two. The parts are actually severed now by a ship canal. The southern portion is called the Peloponnesus (pĕl´ō-pō-nē´sŭs). The chief city of this section in ancient days was Sparta. It was a strong rival of Athens for the control of the country. The scene here shown is about half way between Sparta and Athens, near Argos. Argos is now a small city of 12,000 people. In the very early history of Greece it figured as one of the chief city-states. For a time it was a stronger power than Sparta, and controlled the northern part of the Peloponnesus. It was the parent city of many little city-kingdoms, Corinth being one of these. It waged war on Sparta many times, and was gradually overcome. It became a part of the Roman Empire, 146 B. C. The word "Argos," signifying "plain," was formerly applied to the country about, as well as to the city itself. It is from this word that "Argive" comes, the name now given the plain which you are viewing. Keystone ID: 7171 Note: All titles, descriptions, and location coordinates are from the original Keystone Slide documentation as supplied by the Keystone View Company. No text has been edited or changed.
- Copyright by the Keystone View Company. The original slides are housed in McConnell Library's Special Collections.
|
Periodontal means “around the tooth.” Periodontal disease (periodontitis and gum disease) is a common inflammatory condition that affects the supporting and surrounding soft tissues of the tooth, and in advanced stages affects the jawbone.
Gingivitis, a bacterial infection of the gum tissue, precedes periodontal disease. The infection takes hold when the toxins contained in plaque begin to irritate and inflame the gum tissues. Once this bacterial infection colonizes the gum pockets between the teeth, it becomes much more difficult to remove and treat.
As periodontal disease progresses, it destroys oral connective tissue and works its way into the jawbone. Ignored, it can cause loose teeth, shifting teeth, and tooth loss. Periodontal disease is the leading cause of tooth loss among adults, and we address it as quickly as possible.
If left untreated, mild gum inflammation (gingivitis) can spread below the gum line. When a person's gums are overcome by plaque toxins, a chronic inflammatory response causes the body to break down and destroy its own soft tissue and bone.
Here are the stages of periodontal disease:
Chronic periodontitis– This is the most common form of periodontal disease. Inflammation within the supporting tissues causes gum recession and deep pockets. The teeth may look like they are lengthening, but it is the gum tissue that is receding.
Aggressive periodontitis– Rapid loss of gum attachment and continuous bone destruction, occurring in an otherwise healthy individual.
Necrotizing periodontitis– This form occurs in people suffering from systemic conditions such as malnutrition, HIV infection, and immunosuppression. Necrosis (tissue death) occurs in the periodontal ligaments, bone, and gum tissue.
Periodontitis caused by systemic disease– This form of gum disease often begins at an early age, in association with medical conditions such as respiratory disease, diabetes, and heart disease.
Don't delay: periodontal disease is the leading cause of tooth loss among adults, and we address it as quickly as possible.
Periodontal disease can progress without any sign or symptom, which is why regular dental checkups are imperative. Common signs and symptoms of periodontal disease include red, swollen or bleeding gums, persistent bad breath, gum recession, and loose teeth.
There are nonsurgical treatments that we may recommend, depending upon the exact condition of a patient's teeth, gums, and jawbone. A complete periodontal exam of the mouth will be done before any treatment is performed or recommended.
Here are some of the more common treatments for periodontal disease:
Twenty-four hours is all that is needed for plaque to turn into calculus (tartar)! Daily brushing and flossing at home help control plaque and tartar formation, but those hard-to-reach areas will always need special attention. Good oral hygiene practices and periodontal cleanings are essential to maintaining dental health and keeping periodontal disease under control!
Our patients love the individual attention and care they receive at Robertsdale Dental Care, and we are confident you will too.
Relief from your dental issue and from the anxiety that often accompanies dental problems is at the heart of Robertsdale Dental Care.
Robertsdale Dental Care understands your issue, including the emotions that can surround dental health. We long to offer kindness and comfort to fellow humans. Regardless of what makes you "you," we welcome you here.
Our hygienists and dental assistant staff have an average of 15 years of experience. Including the doctors, we have a combined 236 years of dental health practice.
We recognize that many patients experience dental anxiety, and we're here to help. We have multiple sedation options to help make sure your experience is enjoyable, and we always communicate what to expect.
|
Talk:Life Skills Development/Module Three/Barriers/Archive
BARRIERS TO COMMUNICATION
INTRODUCTION / RATIONALE
Misunderstandings during the communication process are often the result of various barriers. Diverse elements, including cultural and physical factors, may contribute to failure in encoding and decoding the correct meaning of the message.
Learners should be able to:
- identify various factors which hinder the process of effective communication.
- categorize barriers to communication into groups.
- identify skills needed to overcome certain barriers
- demonstrate the skills required to conquer those barriers during communication
• Barriers • Hindrances • Physical, Mental, Emotional, Cultural Barriers • Diversity • Context • Distractions • Disruptions • Prejudice • Language • Connotative
Barriers of Communication: a range of physical, mental or emotional hindrances which can prevent messages from being passed on successfully between sender and receiver during the process of communication. These include physical, cultural (including language), and mental and emotional barriers. Participants should recognize and take responsibility for overcoming (as much as possible) barriers to effective communication.
Dramatizations, Role-play, Small-group discussions,
Situations: • A French-speaking family arrives in Trinidad from Martinique and is immediately thrown into situations reflecting differences in culture including language, forms of greetings etc. Problems include interaction with
- the Immigration Officer
- the taxi driver
- the waiter at the restaurant
- the hotel clerk etc.
• A meeting of community leaders to present and discuss a proposed plan for improving the community. The speaker presents the list of various sub-committees and their allotted tasks. Members (with their various “issues”) listen to the presenter and then complain, showing their misunderstanding or misinterpretation of the proposed plan. The members speak while hinting at their “issues” / barriers:
- Member “A” came into the meeting angry, due to a domestic problem
- Member “B” had been reading the newspaper and only heard the very end of the presentation
- Member “C” interprets all behaviour / decisions based on race, thinking everyone else is racist.
• Father (single parent) and teenaged sons (background noises, including music from a CD player): Father is packing to leave on a business trip and reminding his sons about guidelines for visitors to the house and about securing the house at night. The young men are playing hand-held computer games while listening. Scene II: The brothers argue about the guidelines, since they are organizing a party.
Husband and wife:
My brother and his girlfriend
|
Key Difference – Plasmolysis vs Deplasmolysis
Water molecules move across the cell membrane according to the difference in water potential inside and outside the cell. When the outside solution has a lower water potential, the cell loses water molecules to the outside solution until the water potentials become equal. When the water potential of the cell interior is lower than that of the outside solution, water molecules enter the cell. Plasmolysis is the process of protoplasm shrinkage and detachment from the cell wall due to the loss of water when a cell is placed in a solution with low water potential (a hypertonic solution). Deplasmolysis is the reverse of plasmolysis; it occurs when a plasmolyzed cell is placed in a solution with high water potential (a hypotonic solution). The key difference between plasmolysis and deplasmolysis is that during plasmolysis water molecules move out of the cell and the cell protoplasm shrinks, while during deplasmolysis water molecules enter the cell and the cell protoplasm swells.
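For readers who want the quantitative picture, the standard textbook relation for water potential (supplementary to this article) is

$$\Psi_w = \Psi_s + \Psi_p,$$

where $\Psi_s$ is the solute (osmotic) potential, which is negative, and $\Psi_p$ is the pressure (turgor) potential; water always moves from regions of higher $\Psi_w$ to regions of lower $\Psi_w$.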
1. Overview and Key Difference
2. What is Plasmolysis
3. What is Deplasmolysis
4. Similarities Between Plasmolysis and Deplasmolysis
5. Side by Side Comparison – Plasmolysis vs Deplasmolysis in Tabular Form
What is Plasmolysis?
Plasmolysis is a process that occurs due to exosmosis. When a plant cell is placed in a solution with low water potential, water molecules move out of the cell until the water potentials of the cell and the solution become equal. Due to the water loss, the protoplasm of the cell shrinks and detaches from the cell wall. However, owing to its rigid cell wall, the plant cell resists breakage. Plasmolysis causes the plant to wilt. When the plant is watered again, plasmolysis can be reversed: water is absorbed by the cells through endosmosis, and the plant returns to its normal turgid state.
There are several internal and external factors affecting the plasmolysis process and the time it takes, including cell wall attachment, protoplasmic viscosity, cell species, and cell wall pore size. The age of the plant, the cell type, and the developmental stage of the plant also affect plasmolysis and its timing.
What is Deplasmolysis?
Deplasmolysis is the reverse process of plasmolysis. When a plasmolyzed plant cell is placed in a solution with high water potential, water molecules enter the plant cell across the cell membrane. Hence, the volume of the protoplasm increases and the cell gradually returns to its normal state.
The water potential of the cell is restored by deplasmolysis, which is the result of water entering the cell by endosmosis.
What are the Similarities Between Plasmolysis and Deplasmolysis?
- Plasmolysis and deplasmolysis are two processes that occur in plant cells.
- Both plasmolysis and deplasmolysis occur due to the movement of water molecules across the cell membrane.
- Both plasmolysis and deplasmolysis can be reversed.
- Both plasmolysis and deplasmolysis occur due to differences in water potential.
- Both plasmolysis and deplasmolysis occur due to osmosis.
What is the Difference Between Plasmolysis and Deplasmolysis?
Plasmolysis vs Deplasmolysis

| | Plasmolysis | Deplasmolysis |
|---|---|---|
| Definition | The process of contracting the cell protoplasm due to the loss of water when placed in a hypertonic solution. | The reverse of plasmolysis, in which the cell swells due to the absorption of water when placed in a hypotonic solution. |
| Cause | Occurs due to exosmosis. | Occurs due to endosmosis. |
| Protoplasm | Shrinks during plasmolysis. | Swells during deplasmolysis. |
| Type of solution | Occurs when the plant cell is placed in a hypertonic solution. | Occurs when the plant cell is placed in a hypotonic solution. |
| Water movement | Water molecules are lost from the cell to the outside. | Water molecules enter the cell. |
| Water potential | The cell has a higher water potential than the outside solution. | The cell has a lower water potential than the outside solution. |
| Osmotic pressure of the cell | Osmotic pressure in the cell is low. | Osmotic pressure in the cell is high. |
| Effect on the plant | Causes plants to wilt. | Restores the turgidity of the plants. |
Summary – Plasmolysis vs Deplasmolysis
Plasmolysis and deplasmolysis are two processes important for the water balance of plants. Plants wilt or shrink when there is insufficient water in the surrounding soil; this process is known as plasmolysis. When we water them, plants absorb water and regain turgidity through the reverse process, deplasmolysis. Plasmolysis occurs by exosmosis: water leaves the cell, and the protoplasm shrinks. Deplasmolysis occurs by endosmosis: water enters the cell, and the protoplasm swells. This is the difference between plasmolysis and deplasmolysis.
Download the PDF Version of Plasmolysis vs Deplasmolysis
You can download the PDF version of this article and use it for offline purposes as per citation note. Please download the PDF version here: Difference Between Plasmolysis and Deplasmolysis
1.“ Plasmolysis and deplasmolysis.” ScienceDirect, Academic Press, 29 Nov. 2003. Available here
|
As you saw in Chapter 8, algebraic functions produce not only straight lines but curved ones too. A special type of curved function is called a parabola. Perhaps you have seen the shape of a parabola before:
- The shape of the water from a drinking fountain
- The path a football takes when thrown
- The shape of an exploding firework
- The shape of a satellite dish
- The path a diver takes into the water
- The shape of a mirror in a car’s headlamp
Many real-life situations are modeled by quadratic equations. This chapter will explore the graph of a quadratic equation and how to solve such equations using various methods.
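Every quadratic can be written in the standard form

$$y = ax^2 + bx + c, \qquad a \neq 0.$$

As one illustration (with made-up numbers, just to connect to the list above): a ball thrown upward at 15 m/s from a height of 2 m has height

$$h(t) = -4.9t^2 + 15t + 2$$

after $t$ seconds, and the graph of $h$ is a downward-opening parabola, matching the path a football takes when thrown.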
|