Green Fluorescent Protein
A molecular tag that can be inserted into genes
GFP stands for green fluorescent protein (the official name for the molecule) and is, imaginatively, a protein that fluoresces green in the presence of UV light. It has found use in all areas of cellular biology, and advances have reached the point where it is the focus of works of art, such as a pet rabbit called Alba, whose fur glowed green under UV light. In November 2008, three men, Osamu Shimomura, Martin Chalfie and Roger Tsien, were awarded the Nobel Prize in chemistry [3-7] "for the discovery and development of green fluorescent protein". So what makes this such a versatile and important molecule?
In the first century AD, Pliny the Elder reported that certain jellyfish produced a glowing light. This was the first report of bioluminescence. So what is bioluminescence? Simply put, it is when a biological organism emits light. A common example is the firefly and similar organisms, as seen in figure 1, where the small molecule luciferin and the enzyme luciferase combine with oxygen and ATP, a source of energy in living organisms, to produce light. Another example is deep-sea fish, which use a similar system to produce light in a lure, attracting prey that they can then eat.
So how does this differ from fluorescence? In fluorescence, incident light is merely re-emitted at a longer, less energetic wavelength. Fluorescence works when a photon collides with a molecule, causing the molecule to gain energy. If this energy is enough, an electron will be excited from its initial ground state to a higher energy state, with any excess energy causing the molecule to vibrate more or move faster. The excited molecule gradually loses this vibrational and translational energy, most commonly by colliding with surrounding molecules or by rearranging its structure, but sometimes by other means. After a length of time, roughly a few nanoseconds, the molecule can lose the energy stored in the excited electron by allowing it to fall back to its ground state. With this fall, the electron emits a photon with an energy equal to the difference between the excited and ground states. Because of the loss of vibrational and translational energy, the energies of the absorbed and emitted photons differ, changing the colour of the light emitted, e.g. the absorption of blue light and emission of green.
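To put rough numbers on that colour shift, here is a small worked example; the 395 nm and 509 nm values are wild-type GFP's approximate excitation and emission peaks, used here purely as illustrative inputs:

```python
# Photon energy E = h*c / wavelength, compared at GFP's approximate
# excitation (395 nm, UV/violet) and emission (509 nm, green) peaks.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

absorbed = photon_energy_ev(395)   # ~3.14 eV
emitted = photon_energy_ev(509)    # ~2.44 eV
print(f"Absorbed: {absorbed:.2f} eV, emitted: {emitted:.2f} eV")
print(f"Energy dissipated before emission: {absorbed - emitted:.2f} eV")
```

The ~0.7 eV gap is the energy lost to vibration and relaxation between absorption and emission, which is exactly why the emitted light is greener, i.e. longer in wavelength, than the light absorbed.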
Osamu Shimomura is the starting point for GFP. As a young boy, he was only miles from the atomic bomb dropped on Nagasaki, close enough to be temporarily blinded by the explosion. In 1960 he moved from Japan to Princeton University, where he joined a research group studying the jellyfish Aequorea victoria. The project he was working on involved bioluminescence: the edges of the skirt, or umbrella, of this species of jellyfish emit a green light, figure 2. So, along with two colleagues from the university, he went to Puget Sound, a series of waterways extending into northern Washington State from the Pacific Ocean. The three scientists collected roughly 10,000 specimens of the jellyfish, all with hand nets to avoid catching anything else, from which they removed the 5 mm-thick ring from around the skirt that is responsible for the observed glow. Somewhat gruesomely, these rings were crushed to release the liquid constituents and forced through cheesecloth to separate the wanted material from the solid parts of the jellyfish.
The crushed rings, or "squeezate" as it was referred to in the original paper, were purified to produce just over 5 mg of glowing substance, though this proved not to be the source of the green light observed; it possessed a blue glow instead, originating from a protein named aequorin. In the purification process another, discarded, protein was found that did possess this green fluorescence, but it was practically ignored as it only emitted the distinctive green light when irradiated with blue or UV light. In 1962 the results of the work were published, with only a minor mention of the green fluorescing protein. Realising its potential, Shimomura shifted his focus and over the next 17 years continued to work on the jellyfish. In this time, over 850,000 of the organisms were captured and killed. During those 17 years, and due to the increase in numbers, family members were roped in to help, and the scissors that had originally been used to remove the fluorescent skirt were discarded in favour of a specially made jellyfish cutter. At the end of this, Shimomura had finally deciphered the structure of the fluorescing portion of the molecule.
Research into GFP slowed considerably until 1992, when the gene for GFP was finally cloned by Douglas Prasher, genes and DNA in general being the blueprints from which proteins are synthesised. This was a significant breakthrough and led to a race among research teams to be the first to successfully express the gene in other organisms. The first person to manage this was Martin Chalfie, who published the results of this work in 1994. Chalfie succeeded in inserting the gene for GFP into the bacterium E. coli, which then fluoresced green in the presence of UV radiation. Since then, GFP has been used for thousands of different applications, and improvements have been made to the properties of the protein by mutations in the gene, creating brighter variants as well as multiple different colours. GFP and GFP variants have also been found in dozens of different marine species, from sea anemones and sea pansies, which produce their own light, to corals that possess no bioluminescence but have the ability to fluoresce. And the natural varieties do not stop at green, with almost all imaginable colours having been found.
The structure of GFP is built up in the same way as any protein and as such has multiple levels of structure, as well as multiple methods of chemical interaction. The base or primary structure of GFP is a chain of 238 amino acids weighing roughly 27,000 atomic mass units (27 kDa), with only a few of the amino acids directly producing any fluorescence effect. The secondary structure is a series of helices and pleated sheets, caused by hydrogen-bonding within the chain, while the tertiary structure is a barrel made from 11 β-strands, capped with the helices. At the centre of this lies the chromophore, a short chain of altered amino acids responsible for the light emission. The barrel structure keeps the chromophore away from solvents, making GFP capable of fluorescing under almost any conditions, nearly up to the point at which the protein is denatured by heat or extremes of pH. Figure 3 shows the structure of GFP, the cylinder being ~42 Å long and ~24 Å in diameter.
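Those two numbers are mutually consistent, as a quick back-of-the-envelope check shows; the ~113 Da figure below is an assumed textbook average for the mass of an amino-acid residue in a protein:

```python
# Rough mass estimate for GFP: 238 residues x ~113 Da average residue mass.
n_residues = 238
avg_residue_mass_da = 113   # assumed typical average residue mass

mass_kda = n_residues * avg_residue_mass_da / 1000
print(f"Estimated mass: {mass_kda:.1f} kDa")  # ~26.9 kDa, close to the quoted 27 kDa
```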
When a protein is produced by a cell, the only part that is synthesised is a chain of amino acids with no more structure than that. This is referred to as the "primary" structure (1°). Once completed, the amino-acid chain is folded into the right shape, forming the bonds necessary to hold the structure rigid. In the case of GFP, this folding brings the amino acids needed for the chromophore close enough together to react and form the actual chromophore. Scheme 1 shows how the amino acids form the structure needed for the molecule to fluoresce.
So now that we know what GFP looks like, how exactly does it convert blue light into the green light show? This is, surprisingly, the easy part, even though it may look complex at first. Molecule 1 in Scheme 1 shows the complete structure of the part of the protein responsible for fluorescence. To get the jargon out of the way, there are two sets of states that contribute most to the strongest absorptions and emissions, i.e. the UV absorption and green emission. The first state, shown as 1 in Scheme 1, is referred to as the A state, or simply A. The other partner in the set is A*, an energetic version of A after absorption of a photon. The second set of states are called I and I*, and have a similar structure to 1, but the phenolic hydrogen (the one on the OH at the left of 1 in Scheme 1) has migrated to an oxygen not shown on the diagram, along a series of hydrogen-bonding links [20, 21], again with I being the ground state and I* the energetically excited state.
With all of that out of the way, the conversion of UV light to green is simple. A photon of UV light hits the chromophore, converting the A state to A*. Excess energy from the photon collision converts A* to the slightly lower-energy I* through a simple proton transfer, lowering the energy stored in the system. The I* state then releases energy to return to the I state by emitting a photon of green light. I and A are so close in energy that the proton can simply jump back to its starting place, enabling the whole process to begin again.
So how does all this make GFP a molecule worthy of Nobel Prize-winning research? Many peptide chains require enzymes to aid in the complex folding that produces the correctly shaped protein. Many proteins also require enzyme partners to operate, for example the luciferin/luciferase combination mentioned earlier. In GFP, however, the complex folding operation occurs automatically, and the only thing required for the protein to fluoresce is atmospheric oxygen for the final oxidation of the chromophore, seen in Scheme 1.
So why is this important? Well, it means that organisms other than those in which GFP naturally occurs can be genetically engineered to carry a gene that produces GFP, and it will still work without multiple other genes being implanted. When Chalfie first made glowing green E. coli, it opened wide the possibility of looking inside a living cell for the first time. Figure 4 shows a Petri dish of glowing E. coli under a UV light, compared to a regular sample, while figure 5 shows the first example of what makes GFP so amazing. It is a photograph of a roundworm, C. elegans, which has had one of its genes replaced with the genetic code for GFP. This resulted in GFP being expressed in the worm, but only in the places in which the original gene would have been expressed.
Neither organism suffered toxic effects from the protein, and the experiment showed that both prokaryotes (bacteria) and eukaryotes (almost everything else) can be made to express GFP. Since then it has been used in organisms as diverse as fruit flies, mice, rabbits, tobacco plants and human cells. As well as replacing genes, the relatively small size of GFP, for a protein, enables it to be used as a tag, or reporter: the genetic code for GFP is added onto the end of the gene for the protein to be tagged, and the organism is grown. This results in the protein being produced with a small tag that affects neither the organism nor the function of the protein. The tagged protein can then be seen, identifiable by its green fluorescence, enabling the pinpointing of genetic expression.
So far, the examples given have only been concerned with the positioning of gene expression, but GFP can be used to do much more. A technique known as FRET (fluorescence resonance energy transfer) can be used to image real-time events within a cell. The basic principle is that two variants of GFP, which absorb and emit light at different wavelengths to each other, are bound to interacting proteins. When these proteins are far apart and the system is illuminated with a light that only excites one of the GFP variants, only the colour from that protein is emitted. However, when the two proteins interact, such as an enzyme acting upon its substrate, the two GFP variants are brought close enough together for the energy from the absorbed light to be transferred between the GFP molecules, causing the colour emitted to change to that of the second GFP.
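The reason proximity matters so much is that transfer efficiency falls off with the sixth power of the donor-acceptor distance, E = 1/(1 + (r/R0)^6), where the Förster radius R0 is the separation giving 50% transfer. A minimal sketch, assuming a typical R0 of 5 nm (real fluorescent-protein pairs vary, usually around 4-6 nm):

```python
# FRET efficiency: E = 1 / (1 + (r / R0)**6)
# R0 (Forster radius) is an assumed, typical value for illustration.
R0_NM = 5.0

def fret_efficiency(r_nm: float) -> float:
    return 1.0 / (1.0 + (r_nm / R0_NM) ** 6)

for r in (2.0, 5.0, 10.0):
    print(f"separation {r:4.1f} nm -> transfer efficiency {fret_efficiency(r):.3f}")
# 2 nm (interacting proteins) -> ~0.996; 5 nm -> 0.500; 10 nm (apart) -> ~0.015
```

This steep distance dependence is what turns a simple colour change into a molecular-scale ruler for protein-protein interactions.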
When Roger Tsien first heard about the cloning of the GFP gene, instead of wondering how he could implant it into an organism, he wondered how he could make it better - the GFP acquired from A. victoria (known as wild-type or wtGFP) sometimes being less than ideal for research. For instance, wtGFP has broad excitation peaks, that is, it absorbs multiple colours of light, making it unsuitable for FRET. It is also slow to form its chromophore, taking over 2 hours for the final oxidation to occur. It also has a tendency to form dimers and trimers, massively increasing the molecular weight; the increased size of these dimerised and trimerised molecules can inhibit the movement of tagged proteins around the cell and through membranes, as well as the function of the GFP itself.
So how could you go about solving these problems? The easiest way, and indeed the way it was done, is to engineer colonies of bacteria to express the gene and allow them to grow, letting nature produce mutant varieties of GFP over time. In the experiment undertaken by Roger Tsien, a large proportion of these mutations destroyed the fluorescence, but a small fraction resulted in improvements. These included variants that shifted the excitation and emission peaks, changing the colours; variants that replaced the two excitation peaks with one, increasing the brightness; and variants that oxidise much faster. As well as GFP mutations, mutations in a similar protein found in a type of coral, Discosoma, called RFP as it fluoresces red, have resulted in a wide spectrum of usable fluorescent proteins, shown in figure 4.
So what is left for GFP and its derivatives? After changing the face of the biosciences and being the subject of a Nobel Prize, GFP has begun to reach the pinnacle of what can be done in terms of new techniques and mutations. However, GFP remains the powerful tool that it is and will continue to change the face of the biosciences. One way in which this is clearly shown is the "brainbow", shown in figures 5 and 6. Both images are of the nervous systems of animals, and both were part of a photography competition run by Olympus America Inc. The "brainbow", as it is so aptly named, is created by enabling 3 or 4 variants of the GFP and RFP proteins to be produced and act as tags for different proteins used throughout the nerves and brain. Where only one of these proteins is expressed, only that colour is shown, but often multiple proteins will be expressed in different quantities in the same cells. This combination of GFP molecules produces a rainbow of colours, in a similar way to the 3 colours found in printers, allowing the individual nerve cells of an animal to be seen.
Thank you to Dr. Charles King for making sure that the writing was both understandable and concise.
The Federalist Papers
The Federalist, commonly referred to as the Federalist Papers, is a series of 85 articles and essays written by Alexander Hamilton, James Madison, and John Jay under the shared pseudonym "Publius" to promote the ratification of the United States Constitution. The essays were published in several New York State newspapers during 1787 and 1788, and came about partly in response to a group of essays critical of the proposed Constitution. Those critics, known as Anti-Federalists, argued that the proposed federal government was too large and would be unresponsive to the people; the Federalist essays were mounted as a campaign in the Constitution's defense. Among the best known is Federalist No. 10 (1787), written by Madison.
The Federalist influenced ratification because it gave an in-depth analysis and explanation of the proposed Constitution, including the framers' motives for separating power structures, which reflected their desire to limit the impact of human sinfulness. First published in book form in 1788, it has come to be regarded as a classic commentary on the Constitution of the United States, and the Constitution that Hamilton defended has become one of the most copied and admired documents in the history of civilization.
The ratification debate between Federalists and Anti-Federalists also shaped the document's early history: George Washington became the first U.S. president in 1789, and the Bill of Rights was ratified in 1791. Scholars trace the influence of Magna Carta on the American colonists and on the U.S. Constitution, and study how the separation of powers and federalism have affected individual liberty over the Constitution's more than 200-year history, as the United States evolved from thirteen disunited states under the Articles of Confederation into a federal union.
Roof System Components
All steep-slope roof systems (i.e., roofs with slopes of 25% or more; see the slope-to-pitch conversion sketch after this list) have five basic components:
- Roof covering: shingles, tile, slate or metal and underlayment that protect the sheathing from weather.
- Sheathing: boards or sheet material that are fastened to roof rafters to cover a house or building.
- Roof structure: rafters and trusses constructed to support the sheathing.
- Flashing: sheet metal or other material installed into a roof system’s various joints and valleys to prevent weather seepage.
- Drainage: a roof system’s design features, such as shape, slope and layout, that affect its ability to shed water.
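Slope percentages map directly onto the rise-over-run "pitch" notation used in the trades; a small conversion sketch, assuming the conventional 12-inch run:

```python
# Convert a roof slope from percent to rise-in-12 ("pitch") notation.
def percent_to_pitch(slope_percent: float, run_inches: int = 12) -> float:
    return slope_percent / 100 * run_inches

for pct in (25, 50, 100):
    print(f"{pct}% slope = {percent_to_pitch(pct):.0f}:12 pitch")
# 25% = 3:12 (the usual steep-slope threshold), 50% = 6:12, 100% = 12:12 (45 degrees)
```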
Introduction
Monarchy (from the Greek monarchía, "rule of one") is a form of government in which the highest power is held by a hereditary monarch. In Europe two variants have been discernible: the unlimited, absolute monarchy, and the limited, constitutional monarchy. The absolute monarchy reached its peak during the 17th and 18th centuries; the constitutional monarchy grew out of it. Belgium, Denmark, the Netherlands, Norway and Great Britain are countries which have constitutional monarchies, in which the monarch's rights stem from tradition or constitutional law and are limited to certain tasks. These are primarily ceremonial or symbolic, sometimes with some mystical features retained, but are often also political, especially at the formation and dissolution of a government. In Sweden those tasks are in the hands of the Speaker of Parliament. The British monarchy is today the most ancient secular institution in the United Kingdom, with a continuous history stretching back over a thousand years. It has evolved from absolute personal authority to the present constitutional form, by which the Queen "reigns but does not rule". The United Kingdom is a constitutional monarchy, which means that the monarch is bound by rules and conventions and remains politically impartial. According to the royal website, her primary role is as a "focus of national unity". Great Britain is one of the few countries where the monarchy has been preserved throughout many centuries. As a rule, the throne was passed to the eldest male descendant in the royal family, and sometimes to other legal successors, and there were only a few occasions when the continuity was broken. These occasions were usually linked to complex problems in the state, such as the Bourgeois Revolution in the mid-17th century. And those were times when the monarch's power was real, not nominal.
Introduction Chapter 1. British Monarchy. Overview 1.1 History of the British Monarchy 1.2 Constitutional Role of the British Monarchy 1.3 The Role of the Monarchy in Modern Britain Chapter 2. Tests, quizzes and games Conclusion Bibliography Appendix. Kings and queens from 1066 till present
Exploring Effects of Oil Spills on Birds
Students will use magnifying lenses to examine feathers and will name and describe characteristics of the feathers. As they are guided through explaining how feathers help birds, they will examine a container of water. They will place the feather in water, describe it, and then place it in the same container after oil has been added to the water. They will observe and name the differences to the water and to the feather.
Students will identify clean water as an important habitat for certain wildlife (e.g. ducks, geese).
Vocabulary: soluble / insoluble, oil spill, toxic
Context for Use
Resource Type: Activities:Classroom Activity
Grade Level: Primary (K-2)
Theme: Teach the Earth:Teaching Environments:K12, Teach the Earth:Course Topics:Environmental Science
Description and Teaching Materials
Introduce by reading "OIL SPILL!" by Melvin Berger
In small groups, students examine and describe the feather before placing it in water and after placing it in water.
How do feathers help birds? Why do ducks and geese go to water? The teacher makes a chart to record observations, wonderings, and things the students discover.
Questions are offered to guide the discovery and complete the chart. Oil is added to the water and again the feather is placed in it. What is happening? How does the water look? Where is the oil? Is the feather different now? Dry? Wet? Heavy? Light? How might this affect the bird? As students place feathers in the substances, allow time in the groups for shared observations and then elicit responses to specific questions. The class summarizes the results of the chart and reflects on the impact of the oil spill. The teacher seeks ideas about local implications.
Teaching Notes and Tips
This lesson has an intentional emphasis on vocabulary–especially the solubility of substances. Students will be asked to specify what proves the solubility of oil / water, or disproves it. Having students actually engage in this experiment, rather than a teacher demonstrating it, will be much more effective and engaging. The picture book could be used either at the beginning or the end of the lesson. Some parts of it could be modified for kindergartners.
Assessment
Students will be assessed on their contributions of thoughtful, reflective ideas. Specific praise and feedback will be given to those performing at or above expectations. Additional questions and guidance will be offered to those who may be struggling.
Standards
K: (describe using simple tools).
1st IIA1. Students understand objects have physical properties (describe by color, size, shape, weight, etc.)
2nd IVC1. Students understand that organisms live in different environments.
Soil amendments are materials added to the soil to improve soil health and plant growth. The types of amendments that are added depend on various soil conditions such as soil type (clay, loam, sandy, etc.), climate, soil nutrient level, and plant type. Understanding your soil is important in determining what type of amendments to add.
Additionally, if you want to get your organic garden off to a good start, then begin with a reliable soil test. Testing your soil will take the guesswork out of ‘what your soil needs’, and in the long run, saves money, time and effort. A healthy soil produces healthy plants and healthy plants are more resistant to insect damage and disease.
All soils will benefit from organic gardening compost. Compost helps moderate extreme soil conditions. If the soil is sandy and drains rapidly, compost can improve the structure by adding bulk with humus and organic matter, increasing the soil’s water-holding ability. Compost increases the porosity of soils with a fine structure (clay, clay-loam) by adding humus and organic matter. Compost will also make these fine-textured soils easier to work and more erosion resistant.
Cover crops are annual, biennial or perennial plants grown directly in your garden to improve the health of your soil and then turned over or mowed to allow the primary plants to be grown. Cover crops help fix and trap soil nutrients, and like compost, add organic matter. They are an essential element of organic gardening.
Organic fertilizers are derived (or manufactured) from naturally occurring sources such as manure, fish emulsion, and bone meal, and are a great source of targeted nutrients. While organic fertilizers can be expensive, they are much healthier for your soil than synthetic and non-organic fertilizers.
Examples chosen from the realms of art, literature, and music produced during the Enlightenment demonstrate both the multiplicity and the interrelation of the three arts in Europe beginning with Watteau, Addison, and Couperin and ending with David, Goethe, and Mozart.
This course examines the arts in 18th-century Europe. We focus on the Enlightenment contexts and “texts” of art, literature, music, and theatre in France, Italy, and England. The purpose is to come to an understanding of how the artists saw themselves and their cultures and to grasp their changing perceptions over the duration of the century. The overarching theme of the class is the tension between sense (rationality) and sensibility (feeling).
Student learning goals
1. Gain historical awareness of literature, visual culture, and music.
2. Increase ability in visual, literary, and musical analysis.
3. Make connections among art forms in an interdisciplinary interpretative framework.
4. Recognize the relationship between the world of politics, social values, and artistic production.
5. Formulate ideas on the arts through speaking and writing, including a research paper.
General method of instruction
Lecture/discussion, small group consultations and analysis of texts, video productions, music, and visual images, plus some short in-class writing assignments.
Recommended preparation
Any humanities-based class: literature, cultural history, art history, music history courses and the like.
Class assignments and grading
A midterm, a final exam, and a research paper on visual themes, submitted in first and second versions to help improve the students' writing and critical thinking.
50% for exams, 35% for the paper, and 15% for class participation.
Not just play music, but play with it. Make it fun. Make it approachable.
There are always children in my early childhood music class who are not comfortable participating in the singing or activities that we do. For some, they are just shy, others feel too much pressure to “perform” in front of others, still others use the time to absorb what is going on around them and then replicate it after their brains have had time to process it.
Almost all children get involved in free instrument time- something about all those instruments is just too much fun to ignore. Even so, there are children who prefer to play with the instruments, like toys, rather than play the instruments like, well, musical instruments!
If you are the parent of one of these children, don’t despair! It’s not that they are unmusical! Some are just focused on how the instruments work or how they look. We can meet them halfway, so here are a few ideas.
- Mirroring or echoing- make a game out of taking rhythmic or short melodic phrases and have the children repeat them. It doesn’t matter much at an early age if they get it exactly right. They are learning to listen, audiate, and expanding their musical memory. Even older children (and parents!) often stop listening at the end of a pattern because they are getting ready to repeat, so make sure that you incorporate at least a beat of silence and a preparatory breath so that they have a better chance of repeating the whole pattern.
- Conversational instruments- Use instruments to have a rhythmic conversation. Castanets work well for this because they already look like clams with mouths! Have one castanet talk to the other and have the child join in the conversation. By using your voice, too, on a vocable such as “baa,” the castanets can even ask questions by having last syllable end on a slightly higher pitch. “Baa, baa baa, baa baa, BAA?” Your child will likely recognize it as a question.
- Drum with me- Play the same drum at the same time. With young children, use both hands at the same time to the beat. You can put their hands on top of yours for infants, or have toddlers sit across from you. As they get confident, you can do left-right-left-right patterns.
- Shaky egg hide and seek- Take turns hiding a shaky egg in the room. As the seeker gets closer to the egg, shake faster, and shake slower as they move farther away. Littler children might not understand the “hot and cold” shaking, so instead they can hunt for the match to their egg, shaking the egg they still have as they look for the hidden one.
Just make sure you and your children are having fun! And if your little engineer is still more focused on how it works than making music, don’t worry- she’s still learning and absorbing so much music and will perform when she’s ready!
In this lesson, we'll take a look at drawing a horse in motion with graphite pencils. Although this drawing may seem intimidating for a beginner, we'll break the process down into two simple steps.
The first step will focus on drawing the basic shape of the horse with loose, light lines. We'll only be concerned with finding the contours of the subject in this step of the process. This first step is quick, but essential. We are concerned with finding the correct proportions along with the shape.
The second step will focus on developing the tone and value of the horse. This is often referred to as "shading". This time consuming step will bring the form of the horse to life and create the necessary illusion of surface texture.
Drawing with pencil doesn't require a huge investment in materials. A high quality drawing surface and a couple of drawing pencils are all that is needed to create a successful work. While many pencil manufacturers sell pencil sets with a variety of graphite hardness, most artists will find that only a few pencils will suffice. In this lesson, only two graphite grades are used - "HB" and "2B".
We'll begin by lightly and loosely sketching out the contour lines or outlines of the horse. A light touch is used to prevent any indentations from marring the surface of the paper. Even though the shape of the horse is curved, mostly straight lines are drawn. By drawing straight lines, we can concentrate on the angles of the lines as they are drawn. We are also concentrating on finding the overall shape of the horse. It may be helpful to look for smaller shapes within the larger one to build up the larger shape.
Once the basic contour lines are in place, we can begin the process of developing the tone and value. This part of the drawing process is usually referred to as "shading", even though we are considering both the light and dark values. We'll start with the head of the horse and work our way to the left of the body to prevent smudging (for right handers). We begin with light applications that flow over the cross contours of the form of the horse. We can darken these applications easily, but if we go too dark too quickly, it's a bit harder to fix.
We'll gradually and patiently work our way down the body of the horse with even applications of graphite. By paying attention to subtle changes in tone and value, we can develop the illusion of the muscles of the horse, just underneath the skin.
In areas where the cross contours are a bit harder to identify, we can apply the graphite by circling. Circling refers to making small circular strokes with the pencil to create even transitions of tone and value. Since we are not using a blending stump to smooth transitions, we must rely on the pressure placed on the pencil. Circling does not mean that we are drawing small circles; it simply means that we are pressing with the graphite in a circular motion. The pressure is not necessarily heavy, but it is consistent. This produces an even transition of value.
The environment that the horse is in plays a role in how we approach portions of the drawing. The horse is outside, which produces a strong shadow underneath the head, body, and portions of the rear leg. Including these locations of strong contrast helps to create the illusion of natural sunlight. The horse is also running through a field of loose dirt. This motion is causing portions of the dirt to fly up, overlapping the body. The slant of the horse helps to create the illusion of movement, but including the flying dirt accentuates it even further.
We can continue developing the value through graphite applications down the rear of the horse. For each section of the body, we are still paying close attention to the form. The form of the horse dictates the directional strokes applied with the pencil. The tail of the horse is drawn with deliberate strokes of the pencil. These marks taper as they are made and include some variety.
Another factor for creating the illusion of natural light in the scene is the cast shadow below the horse. We need to include it to strengthen the light, but in this case, we want to exclude the rest of the surrounding background information. By excluding the background information, we place more emphasis on the horse and its motion. For this reason, we'll include textural details only in the location of cast shadow.
Up to this point in the drawing, we have only used the HB graphite. It's now time to increase the contrast a bit and broaden the range of value. To do this, we'll go over the darkest locations within the drawing with a 2B graphite.
Once all of the graphite applications have been made, we can use a kneaded or vinyl eraser to clean up the drawing and remove any stray marks or smudges left by the graphite.
Like with any drawing media, success with graphite or pencil requires patience and attention to subtle changes in value. By working slowly, we have full control over the marks that are made. No matter what subject you are trying to tackle, try starting loose with lighter marks and become more precise when it's time to develop the tone.
In the Chajnantor Plateau in the Atacama Desert, one of the highest and driest places on Earth, a gentle “rain” is falling. It is light from space, in millimetric and submillimetric wavelengths, a natural, scarce and precious resource. It is well-known that these waves are full of information about our cosmic origins, that is why people thirsty for this knowledge have gathered here to collect, channel and analyze it.
This is what gives rise to the Atacama Large Millimeter/submillimeter Array (ALMA), currently the largest radio telescope in the world. This achievement is the result of an international association between Europe (ESO), North America (NRAO) and East Asia (NAOJ), in collaboration with the Republic of Chile, to build the observatory of the “Dark Universe”.
The light in these millimetric and submillimetric wavelengths comes from vast cold clouds in space, at temperatures of just a few dozen degrees above absolute zero (−273 °C), and from some of the earliest and furthest galaxies in our Universe. Astronomers can use this light to study the chemical and physical conditions in these molecular clouds, which are dense regions of gas and dust where new stars are forming. These regions of the Universe are often dark and remain hidden from the visible range of light, but they shine intensely in the millimetric and submillimetric part of the spectrum.
This radio telescope is composed of 66 high-precision antennas, which operate at wavelengths of 0.32 to 3.6 mm. Its main array has fifty antennas, each 12 meters in diameter, which act together as a single telescope: an interferometer. This is complemented by a compact array of four antennas of 12-meter diameter and twelve antennas of 7-meter diameter. ALMA’s antennas can be configured in different ways, spacing them at distances from 150 meters to 16 kilometers, which gives ALMA a powerful variable “zoom” and results in images sharper than those from the Hubble Space Telescope.
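The effect of those long baselines can be estimated with the textbook diffraction limit for an interferometer, θ ≈ λ/B, where B is the longest antenna separation. A rough sketch, assuming observation at 1.3 mm on the full 16 km baseline:

```python
import math

# Diffraction-limited angular resolution of an interferometer: theta ~ lambda / B
wavelength_m = 1.3e-3   # observing wavelength: 1.3 mm
baseline_m = 16_000     # maximum ALMA antenna separation: 16 km

theta_rad = wavelength_m / baseline_m
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"Angular resolution: {theta_arcsec:.3f} arcseconds")  # ~0.017"
# For comparison, the Hubble Space Telescope resolves roughly 0.05 arcseconds
# in visible light, so ALMA's widest configuration is indeed sharper.
```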
ALMA is already “irrigating” the fields of astronomy in depth, 24 hours a day, 365 days a year. Scientists foresee record harvests, where invisible light (radio waves) accumulated by ALMA will be vital to our understanding of the Universe. The purpose of ALMA is to study star formation, molecular clouds and the early Universe, closing in on its main objective: discovering our cosmic origins.
(Beyond Pesticides, October 21, 2013) A study conducted by Sussex University researchers has identified the garden plants most attractive to pollinating insects. The study’s findings are important because pollinating insects are declining globally and are facing growing habitat losses. The study also gives vital scientific information to individuals and communities on the plants that are most beneficial to pollinators. Although creating pollinator-friendly habitats is an important step toward slowing pollinator population decline, environmental groups and activists are focused on addressing the underlying problem that leads to pollinator population loss: the continuous use of toxic pesticides.
The study, Quantifying variation among garden plants in attractiveness to bees and other flower-visiting insects, published in Functional Ecology, collected data over two summers by counting flower-visiting pollinators on 32 popular garden plant varieties to determine which varieties are most attractive to pollinators. The study found that the most attractive flowers are 100 times more attractive than the least attractive flowers. According to the study, the most attractive flowers are borage, lavender, marjoram, and open-flower dahlias. Marjoram was the best all-round flower, attracting honey bees, bumble bees, other bees, hover flies, and butterflies. While information on pollinator-friendly flowers is widely available, this study was designed to “put that advice on a firmer scientific footing, by gathering information about the actual number of insects visiting the flowers to collect nectar or pollen,” according to study co-author Francis Ratnieks, Ph.D., quoted in a BBC article.
The study’s findings have several interesting implications. First, planting pollinator friendly plants does not involve extra cost or gardening effort, or loss of aesthetic attractiveness, as these flowers are not more expensive or more time consuming to plant than non-pollinator friendly flowers. The study authors acknowledge that while their sample of 32 plants is limited, the results should encourage further research to develop more scientific understanding of those flowers most attractive to insect pollinators. This study can also help cities and towns plan which flower varieties to plant in parks and public spaces so they can increase biodiversity and support pollinators.
Beyond Pesticides recently released its own BEE Protective Habitat Guide, which provides information on creating native pollinator habitat in communities, eliminating bee-toxic chemicals, and other advocacy tools. This habitat guide is part of the BEE Protective campaign launched by Beyond Pesticides this past Earth Day. The grassroots campaign is part of a larger effort to protect bees from rapid declines spurred by Colony Collapse Disorder (CCD) and other hazards associated with pesticides. The launch came one month after beekeepers, Center for Food Safety, Beyond Pesticides, and Pesticide Action Network North America filed a lawsuit against EPA calling for the suspension of certain neonicotinoid pesticides.
Pesticides, specifically neonicotinoids, have increasingly been linked to bee declines. These chemicals are used extensively in U.S. agriculture, especially as seed treatment for corn and soybeans. Agriculture is not the only concern however, as pesticide applications in home gardens, city parks, plant nurseries, and landscaping are also prime culprits in the proliferation of these harmful chemicals. The systemic residues of these pesticides not only contaminate pollen, nectar, and the wider environment, but have repeatedly been identified as highly toxic to honey bees.
A recent example of neonicotinoids’ toxic effects on bees was the massive bee death in Wilsonville, Oregon, where 50,000 bumblebees were found dead or dying in a shopping mall after dinotefuran, a neonicotinoid pesticide, was applied to nearby trees. After this incident, the Oregon Department of Agriculture (ODA) placed a temporary restriction on the use of pesticides with the active ingredient dinotefuran, and the Oregon State University Extension Service revised its publication, “How to Reduce Bee Poisonings from Pesticides”. The publication contains research and regulations pertaining to pesticides and bees and describes residual toxicity periods for several pesticides. Though this temporary restriction and revised guide are important steps that acknowledge the effects neonicotinoid pesticides have on pollinators, they should only be viewed as initial steps towards a complete ban on neonicotinoid pesticides.
Take Action: Beyond Pesticides’ BEE Protective campaign has all the educational tools you need to stand up for pollinators. Some specific ways you can help are:
- Join us in asking Lowe’s and Home Depot and other leading garden centers to take action and stop the sale of neonicotinoids and plants treated with these chemicals.
- Tell your member of Congress to support the Save America’s Pollinators Act.
- Sign the Pesticide Free Zone Declaration and pledge to maintain your yard, park, garden or other green space as organically-managed and pollinator friendly.
- Use our model resolution to transform your community and raise awareness about pollinator health.
For information on what you can do to keep the momentum going, see www.BEEprotective.org.
All unattributed positions and opinions in this piece are those of Beyond Pesticides.
U. ILLINOIS (US) — Infant saliva harbors bacteria associated with tooth decay and cavities—the most prevalent infectious disease in U.S. children.
“By the time a child reaches kindergarten, 40 percent have dental cavities,” says Kelly Swanson, professor of animal science at the University of Illinois. “In addition, populations who are of low socioeconomic status, who consume a diet high in sugar, and whose mothers have low education levels are 32 times more likely to have this disease.”
Swanson’s study, published in PLoS One, focuses on infants before their teeth erupt, unlike most studies that focus on children already in preschool or kindergarten—after many already have dental cavities.
“We now recognize that the ‘window of infectivity,’ which was thought to occur between 19 and 33 months of age, really occurs at a much younger age,” he says.
“Minimizing snacks and drinks with fermentable sugars and wiping the gums of babies without teeth, as suggested by the American Academy of Pediatric Dentistry, are important practices for new parents to follow to help prevent future cavities.”
For the study, Swanson and colleagues used high-throughput molecular techniques to characterize the entire community of oral microbiota, rather than focusing on identification of a few individual bacteria.
“Improved DNA technologies allow us to examine the whole population of bacteria, which gives us a more holistic perspective,” Swanson says. “Like many other diseases, dental cavities are a result of many bacteria in a community, not just one pathogen.”
Through 454 pyrosequencing, researchers learned that the oral bacterial community in infants without teeth was much more diverse than expected and identified hundreds of species.
This demonstration that many members of the bacterial community that cause biofilm formation or are associated with early childhood caries (ECC) are already present in infant saliva justifies more research on the evolution of the infant oral bacterial community, Swanson says.
“The soft tissues in the mouth appear to serve as reservoirs for potential pathogens prior to tooth eruption,” he says. “We want to characterize the microbial evolution that occurs in the oral cavity between birth and tooth eruption, as teeth erupt, and as dietary changes occur such as breastfeeding versus formula feeding, liquid to solid food, and changes in nutrient profile.”
Educating parents-to-be on oral hygiene and dietary habits is the most important strategy for prevention of dental cavities.
The study was funded by the United States Department of Agriculture-Cooperative State Research, Education and Extension Service.
The Weak Nuclear Force
One of the four fundamental forces, about a million times weaker than the strong force – hence the name – the weak interaction involves the exchange of the intermediate vector bosons, the W and the Z. Since the mass of these particles is on the order of 80 GeV, the Heisenberg uncertainty principle dictates a range of about

r ≈ ħ/(m_W c) = ħc/(m_W c²) ≈ (197 MeV·fm)/(80,000 MeV) ≈ 2.5 × 10⁻³ fm

which is about 0.1% of the diameter of a proton.
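This estimate is easy to reproduce numerically. A minimal sketch, assuming the standard values ħc ≈ 197.3 MeV·fm, a W mass of about 80.4 GeV, and a proton diameter of roughly 1.7 fm:

```python
# Range of the weak force from the uncertainty principle: r ~ hbar*c / (m_W * c^2)
HBAR_C_MEV_FM = 197.327       # hbar*c in MeV*fm
M_W_MEV = 80_400              # W boson rest energy (~80.4 GeV) in MeV
PROTON_DIAMETER_FM = 1.7      # approximate proton diameter in fm

range_fm = HBAR_C_MEV_FM / M_W_MEV
print(f"Weak-force range: {range_fm:.2e} fm")                               # ~2.5e-03 fm
print(f"Fraction of proton diameter: {range_fm / PROTON_DIAMETER_FM:.2%}")  # ~0.14%
```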
The weak interaction changes one flavour of quark into another. It is crucial to the structure of the universe in that
1. The sun would not burn without it, since the weak interaction allows a proton to transform into a neutron so that deuterium can form and deuterium fusion can take place.
2. It is necessary for the buildup of heavy nuclei, since this involves the production of neutrons, which have a mean lifetime of only about 15 minutes, decaying back into protons – illustrated below, with a d-quark changing into a u-quark plus a W⁻ boson, which then decays into an electron plus an anti-neutrino.
The role of the weak force in the transmutation of quarks makes it the interaction involved in many decays of nuclear particles which require a change of a quark from one flavour to another. It was through beta decay that the existence of the weak interaction was first revealed. The weak interaction is the only process in which a quark can change to another quark, or a lepton to another lepton - the so-called "flavour changes".
The discovery of the W and Z particles in 1983 was hailed as a confirmation of the theories which connect the weak force to the electromagnetic force in electroweak unification.
The weak interaction acts between both quarks and leptons, whereas the strong force does not act between leptons. Leptons have no color, so they do not participate in the strong interactions; neutrinos have no charge, so they experience no electromagnetic forces; but all of them join in the weak interactions.
Caenorhabditis elegans is a microscopic, soil-dwelling roundworm that has been used as a powerful model organism since the early 1970s. It was initially proposed as a model for developmental biology because of its invariant body plan, ease of genetic manipulation and low cost of maintenance. Since then C. elegans has rapidly grown in popularity and is now utilized in numerous research endeavors, from studying the forces at work during locomotion to studies of neural circuitry.
This video provides an overview of basic C. elegans biology, a timeline of the many milestones in its short but storied history, and finally a few exciting applications using C. elegans as a model organism.
Cite this Video
JoVE Science Education Database. Biology I: yeast, Drosophila and C. elegans. An Introduction to Caenorhabditis elegans. JoVE, Cambridge, MA, (2018).
Caenorhabditis elegans, or "worms" to the scientists who study them, have revolutionized the way we approach genetic studies to understand how genes regulate cellular activities. The worm’s simple genetics, transparent body, and ease of cultivation make them an ideal system for studying embryonic development, neuronal function, lifespan and aging, and the molecular basis of some human diseases.
First, let’s get to know C. elegans as a model organism. Caenorhabditis elegans belongs to the phylum Nematoda of the animal kingdom. C. elegans are multicellular organisms approximately 1 mm long. They have an elongated, cylindrical body with no segmentation and no appendages. The worms have a transparent body throughout their life cycle, and exist as hermaphrodites and males. The hermaphrodites are capable of both self-fertilization and mating with males.
Nematodes live primarily in soil with a constant level of moisture and oxygen.
In the laboratory, they are cultured in agar-containing Petri dishes on a lawn of the bacterium E. coli.
The life span of the worm is about 14 days. They go through 4 larval stages, L1 through L4, as they mature from an egg to an egg-laying parent. The development of worms is affected by temperature, and in the laboratory, they are cultured at 15 °C, 20 °C or 25 °C.
Now that we have reviewed C. elegans basics, let’s learn what makes them a powerful model organism. First, it is relatively inexpensive and easy to culture worms on either solid or liquid medium.
Second, as they remain transparent throughout their life cycle, the entire worm anatomy is easily viewed by light microscopy. This attribute is particularly useful for studying worm development, as individual cell lineages can be easily traced. Transparency also allows fluorescent reporters, such as Green Fluorescent Protein (or GFP), to be easily viewed in live worms.
Third, C. elegans are very fertile; each hermaphrodite lays about 300 eggs following self-fertilization, so it is easy to obtain worms in large numbers (a toy calculation after this list of advantages shows how quickly the numbers grow). Also, worms reach reproductive maturity in only 3.5 days at 20 °C.
Fourth, worms are easy to manipulate genetically. By examining mutations, researchers gain insight into gene function, and mutations can be introduced in worms by treatment with chemicals and by exposure to UV radiation. High-throughput genome-wide screens are easy to perform with worms in 96 well plates. This allows numerous genes to be simultaneously screened for their involvement in a particular biological phenomenon or behavior. Also, the C. elegans genetic center, or CGC, maintains a large repository of mutants, which are available to researchers for a small fee.
Fifth, C. elegans was the first multicellular organism to have a completely sequenced genome. The complete sequence, and a detailed chromosomal map, has made genetic analysis faster and easier. Sequence analysis shows that many genes are conserved between humans and worms.
Finally, in addition to all these advantages, the worm research community is very friendly, and has developed many helpful online resources for studying worms.
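As promised above, here is the fecundity arithmetic as a toy calculation: one self-fertilizing hermaphrodite producing ~300 progeny per ~3.5-day generation, with mortality and crowding ignored (idealized assumptions, purely illustrative):

```python
# Idealized C. elegans population growth: every worm is a self-fertilizing
# hermaphrodite producing ~300 progeny each ~3.5-day generation.
BROOD_SIZE = 300
GENERATION_DAYS = 3.5

population = 1
for generation in range(1, 4):
    population *= BROOD_SIZE
    print(f"After {generation * GENERATION_DAYS:.1f} days: ~{population:,} worms")
# 3.5 d: 300; 7.0 d: 90,000; 10.5 d: 27,000,000 - which is why large-scale
# genetic screens are practical with worms.
```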
Given all of the characteristics that make C. elegans such an attractive model system, it’s no wonder that many landmark discoveries have been made by studying worms. Let’s take a look at some of them.
In 1963, Sydney Brenner decided to establish C. elegans as a model system, and used it to explore gene function. In 1974, he published the results of his genetic screen, which looked for visual phenotypes, such as dumpy body, uncoordinated movement, and transformers.
In 1976, John Sulston, who worked with Brenner, published a complete cell lineage of C. elegans. He followed the descent of every cell as it divided and differentiated, and found that the first five cell divisions produce six founder cells that differentiate to ultimately give rise to all of the different tissues in the organism.
In 1986, Robert Horvitz published his pioneering work on the discovery of "death genes." As cells divide and differentiate, some cells are eliminated through the activation of death genes, a process required for the normal development of the worm and other organisms. His work on programmed cell death, or apoptosis, has had a big impact on our understanding of developmental events in mammals, cancer, and neurodegenerative diseases.
In 2002, Sydney Brenner, John Sulston and Robert Horvitz shared the Nobel Prize in Physiology or Medicine for their seminal work done in C. elegans.
In 2006, Andrew Fire and Craig Mello shared the Nobel Prize in Physiology or Medicine for their groundbreaking work on RNA interference, or RNAi, a process that results in silencing of genes via degradation of specific mRNA molecules. RNAi technology is currently being developed for therapeutic use.
In 2008, Martin Chalfie received the Nobel Prize in Chemistry for showing that the Green Fluorescent Protein (or GFP) could be expressed in C. elegans and used as a fluorescent reporter. Since then, GFP has been expressed in all of the major model organisms.
As a model organism, C. elegans can be used to answer many important scientific questions.
For example, worms are a highly convenient model system for studying neurobiology. Although worms do not have a brain per se, they have a rather sophisticated nervous system composed of 302 neurons, almost a third of the total 959 cells found in an adult hermaphrodite. The worms respond to environmental cues, such as availability of food, population density, or chemicals such as chemoattractants. In addition to genetic screens, laser ablation (that is, selective cutting of neurons with laser beams) and electrophysiology have led us to appreciate how neurons function and communicate in multicellular organisms. In fact, the entire connectivity of the C. elegans nervous system has now been mapped.
Worms are also an ideal choice for aging studies. The worm’s short life span has allowed researchers to conduct genetic screens for finding longevity genes. Although many of these genes are conserved in humans, we do not yet know whether or not they affect lifespan in people.
Worm research has also advanced our knowledge of human diseases. Fluorescent reporters have been used in worms to mimic aggregation; that is, the clumping of misfolded proteins, such as alpha-synuclein. These aggregates cause neurons to degenerate, resulting in motor deficits. Genetic screens in worms have helped to identify genes that prevent the loss of neurons in neurodegenerative diseases, such as Parkinson’s and Alzheimer’s disease.
You just watched JoVE’s introduction to Caenorhabditis elegans. In this video, we reviewed the characteristics of C. elegans and the reasons that make worms a powerful model organism. This tiny worm, with its simple genetics and diminutive nervous system, has helped us to understand numerous aspects of human development, behavior, aging and disease. Thanks for watching, and good luck with your C. elegans research.
So Many Solutions - Inequalities
Lesson 11 of 20
Objective: SWBAT translate and solve one-step inequalities
Students enter silently according to the Daily Entrance Routine. Do Nows are handed out at the door as I narrate students getting right to work and praise those who are immediately ready and beginning their work. This is a strategy touted by Lee Canter. It is meant to motivate other students to do the same as their peers, who are being praised publicly. After 5 minutes of independent work, students with correct work and answers will be asked to copy their work on the board, and the rest will be asked to discuss their solutions and the ones at the board with their neighbors. I will be observing students’ work during those first 5 minutes to flag or target a group that will debrief solutions with me during the last 5 minutes of this section.
The three problems included on the worksheet reflect problems most commonly missed by students on the last unit test. Students continue to struggle with order of operations involving absolute value, as well as solving equations that require distribution and combining like terms. The last problem in the Do Now ties in the previous lesson: translating word problems into equations to solve. I review the previous lesson’s strategy, using bar models to translate, in the video included.
The most common errors made by students for each question are listed below:
- Not understanding what an exponent of three means; some students will multiply by 3
- Adding and subtracting before multiplying
- Not understanding absolute value; thinking that -3 means subtract three
- Not distributing the negative correctly; specifically the -2 to the second term of -6
- Combining terms that are not alike
- Issues with solving equations
- Lack of use of a strategy (i.e., bar models, verbal models)
- Decimal operations errors
It's important to be able to identify these errors to better place students into pairs and groups around the room for the upcoming paired work time.
After debriefing on solutions, all students will be asked to return to their seats and class notes will be distributed. The problems on the back are additional practice that students who move quickly through the notes can complete, or extra samples for students to study before the quiz this week. Anything in red font in the notes document is meant for students to copy off the board. I have a SMARTboard document displayed at the front that looks like students’ papers (this document is attached as a pdf file). I begin with translations of word problems to inequality statements to gauge students’ prior knowledge. I anticipate common issues such as not knowing how to read the symbols < vs >, or getting confused when there are variables on the right side. First I have a student read the statements. The first statement is modeled as follows and then I ask students to help me with the second statement:
The number of points a basketball team scores is greater than 80.
- What is the unknown? # of points
- Let’s give it a variable → p
- What is the short inequality statement? Greater than 80
- Summarize in the furthest column to the right:
- # of points is greater than 80
- This is another way to state the same fact:
- 80 is less than # of points
This other way tends to confuse many kids, so it could be useful to use whole number values to prove it:
3 < 5 three is less than five
5 > 3 five is greater than three
Next, another student (or students) models the next question while I lead them through the same cycle of questions:
- What is the unknown? What variable should we give it?
- What is a shorter way to say the inequality statement?
- What is another way to say this fact? How do you say it if the variable is on the right side?
- What are some possible values for a (the unknown)?
When a student struggles to answer the question correctly I may call on (or ask the student who is stuck to call on) another student to continue where the initial one left off. This may continue until the question is answered correctly and fully. It’s a great way to keep participation positive and team based and it’s most useful when reviewing short response questions.
All students are given 3-4 minutes to complete the two leftover statements in pairs. We discuss how these statements are different from the first two. This is how I develop the idea of inclusion. These last two examples can be checked for understanding by posing different values (positive, negative, rational) as solutions to the set; a short sketch of this check appears below. Drawing a number line is a visual strategy that should be used for all students.
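The value-testing step can be made concrete with a minimal sketch in Python. Here p > 80 echoes the earlier basketball statement, while a >= 12 is a hypothetical inclusive statement standing in for the last two examples; the candidate values are mine, not taken from the notes:

```python
# Test candidate values against a strict inequality (p > 80) and a
# hypothetical inclusive one (a >= 12) to highlight the idea of inclusion.
def satisfies_strict(p):
    return p > 80       # the boundary value 80 is NOT a solution

def satisfies_inclusive(a):
    return a >= 12      # the boundary value 12 IS a solution

for value in [-3, 0, 11.5, 12, 80, 80.5, 95]:
    print(f"{value:>6}: p > 80 -> {satisfies_strict(value)}, "
          f"a >= 12 -> {satisfies_inclusive(value)}")
```

Notice that 80 fails the strict statement while 12 satisfies the inclusive one, which is exactly the distinction about inclusion that the last two statements develop.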
Students work independently and silently for 15 minutes to complete the worksheet. During this time I am walking around to provide help if students are getting stuck and noting those who are struggling to master the skill. At the end of 15 minutes, those who were struggling will be asked to form a small group with me to review the answers, and I’ll give them a problem to complete independently once again. Other students will be asked to pair up to check their answers or complete their work. Any student who was unable to complete at least 6 of the problems will be asked to be in my group.
The common errors I anticipate are mostly with the last 4 questions. Some students may forget how to solve one-step equations with a variable in the numerator. As for the number lines, I am not asking students to graph, simply to circle possible integer values. This class work also presents a good opportunity to use MP6, attention to precision. Question 2 utilizes this practice through the operations with decimals, a skill some of my students are still improving. Questions 7 and 8 utilize this skill when drawing the number line. It will be informative to see how students scale their lines or where they choose to place their positive and negative integers.
During the last 10 minutes of class students are asked to make groups of 4. Each group will be responsible for displaying the solution to one problem on the board or a piece of chart paper. I work to show the answers to two other problems. They are only given 5 minutes to do this, so they must hurry to display their solution. Achievement points are awarded to groups that are successful in getting all of the work on the board/chart paper. During the last 5 minutes of this section we discuss whether or not the solutions are correct and why. Homework is distributed at the end and students are dismissed. |
Your students will ask for more math practice once they start solving these math crossword puzzle worksheets for addition and subtraction to 20. These worksheets offer a fun twist on both addition and subtraction to 20 and spelling practice.
Math Crossword Puzzles Addition and Subtraction to 20 Worksheets
This printable pack contains 20 unique worksheets.
Learn a little bit about this printable set:
- 20 unique addition and subtraction up to 20 worksheets
- students have to solve the equations and write down the answers in words, perfect for number spelling practice
- there is a secret word that students uncover as they solve the puzzle
- pictures on the worksheets are decorative and do not hint at the solution
- answer key with secret words is included
You can get the whole set at Teachers Pay Teachers, where you can also preview it.
These worksheets can be laminated for continuous use.
Want to give these a try? You can grab one of the pages for free.
Grab the free worksheet and test it out.
If your students enjoyed this resource consider purchasing the whole set.
More Basic Math Learning Resources |
Earthworms are the subject of many misconceptions. The truth of the matter is that earthworms can sometimes be invasive and cause significant environmental damage.
Researchers within Oklahoma State University’s Department of Natural Resource Ecology and Management recently dug into this issue through a grant from the United States Department of Defense. The funding led Shishir Paudel, NREM post-doctoral associate, to a U.S. Navy-owned island 80 miles off the coast of California.
San Clemente Island is a remote, windswept, rugged island off the coast of southern California, with numerous unique and endangered plant and animal species, and until recently, no earthworms. When the Navy did find earthworms on the only ship-to-shore, live-firing range in the country, they became concerned about potential harmful effects to endangered species, and Paudel and his colleagues also took interest.
How did the nonnative worms get there, how many are there and are they going to be a problem?
“Based on our research, we’re still not certain how the worms got there, but we think they were likely introduced in 2008, when topsoil was brought from the mainland to pave a major road,” Paudel said. “This explanation is supported by our observation that nearly all earthworms are close to the paved road.”
The team was led by Scott Loss, NREM assistant professor of global change ecology and management, and included collaboration from Gail Wilson, NREM Sarkeys Endowed Professor, and researchers from the University of Southern California. The research, which began in 2014, has recently been published.
Paudel took a random sample of the island, digging 672 holes, each 33 cm square and 30 cm deep. He found anywhere from 0 to more than a dozen worms in each hole.
The worms were taken back to Oklahoma to be identified, and all were found to be native to either Eurasia or South America.
Finding worms where they are not supposed to be does not sound like a big deal. But much research has been done on invasive earthworms in forests of the northern United States, areas that were historically earthworm-free due to glaciation.
“Ecologists have learned that Eurasian earthworms drastically change the entire forest ecosystem,” said Loss. “Earthworms change the soil, which in turn reduces biodiversity of forest plants. Changes to vegetation can even negatively affect populations of animals, like birds and salamanders living on the forest floor.”
On San Clemente Island, the research team found some evidence of harmful impacts to grassland ecosystems. In particular, invasive grasses were more common in areas with invasive earthworms, suggesting that earthworms may be helping invasive plants establish a foothold.
“The worms change the soil and this in turn may impact the relative success of native and nonnative plant species,” Paudel said. “Increased dominance of invasive plants could eventually harm San Clemente’s endangered species.”
Loss and his students also have documented invasive earthworms in Oklahoma soils. Students in his Applied Ecology and Conservation class have found Eurasian and South American earthworms at the OSU Research Range near Stillwater, and Loss has found European and Asian species (such as the Asian “jumping worm”) across the state, including near fishing lakes in remote forests of eastern Oklahoma.
The team is hopeful the research in California will open the eyes and minds of those moving dirt for construction purposes without thinking about possible harmful effects to the environment.
Likewise, fishermen in Oklahoma and elsewhere can prevent environmental damage by throwing away unused bait rather than dumping it on the ground. Sometimes there are things in the dirt that should not find themselves in other areas. |
Practices for Teaching Vocabulary
Mr Ashok Pandya
Department of Communication Skills, MEFGI - Rajkot
Words have power to destroy and heal. - The Buddha
Whatever and wherever we are today, it is because of the words we have used in our lives. Using minimum words to convey maximum thought is the heart of communication. One problem with vocabulary is that students (and we as well) do not remember new words for long. I had a question in mind: how can we teach vocabulary in such a way that students remember it for a longer time? I tried to find an answer. There are some techniques I have come across that teachers can experiment with, for example: teaching words with different images, showing movie scenes in which a character uses a particular word, using the word in conversation, recording that conversation, creating a story using the word, creating a WhatsApp group, and so on. I have experimented with some of these techniques.
1. Teaching words with different images:
Science suggests that we remember what we see for a longer time. When I teach and use a word that students find difficult, I explain it by showing an image of that word. For example, ‘LIONIZE’ means to treat someone as a celebrity; after explaining the usage of the word, I showed an image of it on the projector screen (type the word into Google and click on Images).
I also explain it on the whiteboard.
2. Creating WhatsApp group:
In my lab, students have created a WhatsApp group for learning vocabulary. I have advised them to introduce any new word with an image in the group, and then they are supposed to make sentences with that word.
3. Teaching through particular movie scene:
Students like it when you talk about movies. Movies are a source of learning: students might not remember a text, but they remember movie dialogue. For example, I taught the word ‘ENTREPRENEUR’ by showing a clip from the movie English Vinglish.
Now the questions are: have they learned words in a better way? Are they able to use effective words? Are they now able to remember words for a longer time? I will answer these questions in my next article. |
Cerulean Warblers breed in scattered locations in Eastern North America. They winter along the Andes from Colombia through Bolivia. This species' numbers are cause for concern. Some studies indicate a 50% decline in population during the last 40 years, and numbers have declined even since 1900. Factors that may be contributing to this dire situation include loss of mature deciduous forest, fragmentation of these forests for human development, current early harvesting and shorter rotation of timber stands, loss of important trees to disease (oaks, sycamores, and elms), tropical deforestation in their winter range, and loss of migratory stopover habitat, especially near the Gulf Coast (Hamel 2000). |
°C = ( °F − 32 ) * 5 / 9
°F = °C * 9 / 5 + 32
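As a quick sanity check of the two conversion formulas above, here is a minimal sketch in Python (the function names are my own):

```python
def fahrenheit_to_celsius(f):
    """°C = (°F − 32) * 5 / 9"""
    return (f - 32) * 5 / 9

def celsius_to_fahrenheit(c):
    """°F = °C * 9 / 5 + 32"""
    return c * 9 / 5 + 32

# The fixed points of water (at standard atmospheric pressure):
assert fahrenheit_to_celsius(32) == 0     # freezing point
assert fahrenheit_to_celsius(212) == 100  # boiling point
assert celsius_to_fahrenheit(0) == 32
assert celsius_to_fahrenheit(100) == 212
```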
The Celsius scale (also known as centigrade) is a temperature scale named after the Swedish astronomer Anders Celsius (1701-1744), who developed a similar scale. From 1744 until 1954, 0°C was defined as the freezing point of water and 100°C as the boiling point (at standard atmospheric pressure).
The Fahrenheit scale is a temperature scale named after the German physicist Daniel Gabriel Fahrenheit (1686-1736), who proposed it in 1724. The freezing point of water is set at 32°F and the boiling point at 212°F (at standard atmospheric pressure). |
This case study is based on a 2005 journal article that deals with the issue of sexual vs. asexual reproduction and their relative merits—a question that has bedeviled biologists for more than a century. The article serves as the final stage of this case focusing on why sex is useful (at least in some circumstances).
Herreid, Clyde Freeman
portions of several class periods
Use this resource to relate evolutionary concepts to the topics of meiosis and animal reproduction (or get more suggestions for incorporating evolution throughout your biology syllabus). The case study includes extensive teaching tips. You may use it in class in its entirety or may choose to assign one part as homework. Cited references in the case provide opportunities for extensions that include the primary literature. It is written for a general biology class, but would also be appropriate for use in an evolution or ecology course.
Correspondence to the Next Generation Science Standards is indicated in parentheses after each relevant concept. See our conceptual framework for details.
- Inherited characteristics affect the likelihood of an organism's survival and reproduction.
- An individual’s fitness (or relative fitness) is the contribution that individual makes to the gene pool of the next generation relative to other individuals in the population.
- Authentic scientific controversy and debate within the community contribute to scientific progress. |
It is interesting to know that many of the attendees at the Constitutional Convention held in 1787 were OPPOSED to including a Bill of Rights in the Constitution. Why would this be so? The chief concern was that if a written bill of rights were included, the people would, over time, think that these rights were the ONLY rights they had. They were wise enough to know that the people would not understand how vast this body of “inalienable” rights was, and would therefore allow the government (especially the federal government) to dictate, and invade, the sacred domain of self-government that was to remain with the people.
As a result, the Bill of Rights was not included in the original Constitution, but was later introduced by James Madison in 1789 to the First United States Congress as a series of amendments to the Constitution. |
In 2013, New Zealand had its warmest winter since records began, 1.3 degrees above the long term average. Australia just experienced its warmest 12 months on record, breaking the previous record set a few years ago. A recent study from the UK Met Office found half of 2012’s extreme weather events internationally were exacerbated by climate change.
Wind energy plays an important, global role in addressing climate change. Many developed nations have worked hard to reduce their carbon dioxide (CO2) emissions from electricity over the past couple of decades. Developed countries with a downward trend in CO2 per unit of electricity generated include the US, Denmark, Australia and the UK. Unfortunately for our image and for the future, New Zealand’s trend is in the other direction.
NZ’s carbon pollution has increased 30% since 1990. Our electricity sector emissions are up by 60%. According to the Ministry for the Environment, New Zealand’s emissions intensity by population is amongst the highest for developed countries.
Electricity is one of the sectors that could, quite quickly, reduce New Zealand’s carbon pollution. Currently electricity generation is around 15% of New Zealand’s CO2 emissions, contributing around 4 million tonnes per annum, or the equivalent of the emissions from a third of our cars.
CO2 emissions from the electricity sector, 1990-2012
According to the Ministry for Business, Innovation and Employment, 64% of the electricity sector’s CO2 emissions came from gas-fired generation, 29% from coal-powered plant, and the rest from geothermal, biomass and liquid fuels.
How does Wind Energy Address Climate Change?
One way we can reduce our greenhouse gas emissions – as well as our dependency on fossil fuels – is by increasing the proportion of electricity that is generated from wind and other renewable energy resources. Wind farms don’t emit greenhouse gases as they generate electricity, whereas coal and gas stations do.
Both coal and gas generation also create a lot of waste heat that cannot be easily used for generating electricity. Over 50% of the energy used to produce electricity from gas and coal is lost through the production process, which is not the case with wind energy.
The US National Renewable Energy Laboratory recently compiled the results of all peer-reviewed publications on lifecycle emissions for different energy sources. This study shows that it takes less than six months for a wind farm to produce more energy than it will consume in its entire lifetime. The lifecycle emissions (including manufacturing of components, transport to site, construction, operation and decommissioning) from wind farms are about 1% of the emissions from thermal generation. |
The phenomenon of fluorescence was known by the middle of the nineteenth century. British scientist Sir George G. Stokes first made the observation that the mineral fluorspar exhibits fluorescence when illuminated with ultraviolet light, and he coined the word "fluorescence". Stokes observed that the fluorescing light has longer wavelengths than the excitation light, a phenomenon that has come to be known as the Stokes shift. Fluorescence microscopy is an excellent method of studying material that can be made to fluoresce, either in its natural form (termed primary or autofluorescence) or when treated with chemicals capable of fluorescing (known as secondary fluorescence). The fluorescence microscope was devised in the early part of the twentieth century by August Köhler, Carl Reichert, and Heinrich Lehmann, among others. However, the potential of this instrument was not realized for several decades, but fluorescence microscopy is now an important (and perhaps indispensable) tool in cellular biology.
Introduction to Fluorescence - Fluorescence microscopy is a rapidly expanding and invaluable tool of investigation. Its advantages are based upon attributes not as readily available in other optical microscopy techniques. The use of fluorochromes has made it possible to identify cells and sub-microscopic cellular components and other entities with a high degree of specificity amidst non-fluorescing material. What is more, the fluorescence microscope can reveal the presence of fluorescing material with exquisite sensitivity. An extremely small number of fluorescent molecules (as few as 50 molecules per cubic micrometer) can be detected. In a given sample, through the use of multiple staining, different probes will reveal the presence of individual target molecules. Although the fluorescence microscope cannot provide spatial resolution below the diffraction limit of the respective specimens, the presence of fluorescing molecules below such limits is made remarkably visible.
Overview of Excitation and Emission Fundamentals - When electrons go from the excited state to the ground state, there is a loss of vibrational energy. As a result, the emission spectrum is shifted to longer wavelengths than the excitation spectrum (wavelength varies inversely to radiation energy). This phenomenon is known as Stokes Law or Stokes shift. The greater the Stokes shift, the easier it is to separate excitation light from emission light. The emission intensity peak is usually lower than the excitation peak; and the emission curve is often a mirror image of the excitation curve, but shifted to longer wavelengths. To achieve maximum fluorescence intensity, the fluorochrome is usually excited at the wavelength at the peak of the excitation curve, and the emission is selected at the peak wavelength (or other wavelengths chosen by the observer) of the emission curve. The selections of excitation wavelengths and emission wavelengths are controlled by appropriate filters. In determining the spectral response of an optical system, technical corrections are required to take into account such factors as glass transmission and detector sensitivity variables for different wavelengths.
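Because photon energy varies inversely with wavelength (E = hc/λ), the Stokes shift can also be expressed as an energy loss per photon. The sketch below illustrates the calculation; the 495 nm excitation and 519 nm emission peaks are assumed example values typical of a fluorescein-like dye, not figures quoted in this article:

```python
# Photon energy E = h*c / wavelength: longer (Stokes-shifted) emission
# wavelengths correspond to lower-energy photons.
PLANCK = 6.626e-34      # Planck's constant, J*s
LIGHT_SPEED = 2.998e8   # speed of light, m/s
EV_PER_JOULE = 1 / 1.602e-19

def photon_energy_ev(wavelength_nm):
    return PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9) * EV_PER_JOULE

excitation_nm, emission_nm = 495.0, 519.0   # assumed example fluorophore peaks
print(f"Stokes shift: {emission_nm - excitation_nm:.0f} nm")
print(f"Energy lost per photon to vibration and heat: "
      f"{photon_energy_ev(excitation_nm) - photon_energy_ev(emission_nm):.3f} eV")
```

For these example peaks the shift is 24 nm, corresponding to roughly 0.12 eV dissipated per photon before emission.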
John Frederick William Herschel (1792-1871) - John Herschel was the only child of renowned scientist and astronomer William Herschel. In 1820, the younger Herschel was one of the founding members of the Royal Astronomical Society, and when his father died in 1822 he carried on with the elder Herschel's work, making a detailed study of double stars. In collaboration with James South, Herschel compiled a catalog of observations that was published in 1824. The work garnered the pair the Gold Medal from the Royal Astronomical Society and the Lalande Prize from the Paris Academy of Sciences. In 1839, Herschel developed a technique for creating photographs on sensitized paper, independently of William Fox Talbot, but did not attempt to commercialize the process. However, he published several papers on photographic processes and was the first to utilize the terms positive and negative in reference to photography. Particularly important to the future of science, in 1845 Herschel reported the first observation of the fluorescence of a quinine solution in sunlight.
Alexander Jablonski (1898-1980) - Born in the Ukraine in 1898, Alexander Jablonski is best known as the father of fluorescence spectroscopy. Jablonski's primary scientific interest was the polarization of photoluminescence in solutions, and in order to explain experimental evidence gained in the field, he differentiated the transition moments between absorption and emission. His work resulted in his introduction of what is now known as a Jablonski Energy Diagram, a tool that can be used to explain the kinetics and spectra of fluorescence, phosphorescence, and delayed fluorescence.
George Gabriel Stokes (1819-1903) - Throughout his career, George Stokes emphasized the importance of experimentation and problem solving, rather than focusing solely on pure mathematics. His practical approach served him well and he made important advances in several fields, most notably hydrodynamics and optics. Stokes coined the term fluorescence, discovered that fluorescence can be induced in certain substances by stimulation with ultraviolet light, and formulated Stokes Law in 1852. Sometimes referred to as Stokes shift, the law holds that the wavelength of fluorescent light is always greater than the wavelength of the exciting light. An advocate of the wave theory of light, Stokes was one of the prominent nineteenth century scientists that believed in the concept of an ether permeating space, which he supposed was necessary for light waves to travel.
Interactive Java Tutorials
Electron Excitation and Emission - Electrons can absorb energy from external sources, such as lasers, arc-discharge lamps, and tungsten-halogen bulbs, and be promoted to higher energy levels. This tutorial explores how photon energy is absorbed by an electron to elevate it into a higher energy level and how the energy can subsequently be released, in the form of a lower energy photon, when the electron falls back to the original ground state.
Fluorescence Filter Spectral Transmission Profiles - Fluorescence microscopes are equipped with a combination of three essential filters (often termed a filter set) that are positioned in the optical pathway between the light source in the vertical illuminator and the objective. The filters are strategically oriented within a specialized cube or block that enables the illumination to enter from one side and pass to and from the specimen in defined directions along the microscope optical axis. This tutorial explores the spectral overlap regions of fluorescence filter combinations, and how changes to the individual filter properties help determine the bandwidth of wavelengths passed through the various filter sets.
Jablonski Energy Diagram - Fluorescence activity can be schematically illustrated with the classical Jablonski diagram, first proposed by Professor Alexander Jablonski in 1935 to describe absorption and emission of light. Prior to excitation, the electronic configuration of the molecule is described as being in the ground state. Upon absorbing a photon of excitation light, usually of short wavelengths, electrons may be raised to a higher energy and vibrational excited state, a process that may only take a quadrillionth of a second (a time period commonly referred to as a femtosecond, 10E-15 seconds). This tutorial explores how electrons in fluorophores are excited from the ground state into higher electronic energy states and the events that occur as these excited molecules emit photons and fall back into lower energy states.
Selected Literature References
Reference Listing - The field of fluorescence spectroscopy and microscopy is experiencing a renaissance with the introduction of new techniques such as confocal, multiphoton, deconvolution, time-resolved investigations, and total internal reflection, coupled to the current advances in chromophore and fluorophore technology. Green Fluorescence Protein is rapidly becoming a labeling method of choice for molecular and cellular biologists who can now explore biochemical events in living cells with natural fluorophores. Taken together, these and other important advances have propelled the visualization of living cells tagged with specific fluorescent probes into the mainstream of research in a wide spectrum of disciplines. The reference materials listed below were utilized in the construction of the introductory fluorescence section in the Molecular Expressions Microscopy Primer.
Mortimer Abramowitz - Olympus America, Inc., Two Corporate Center Drive., Melville, New York, 11747.
Ian D. Johnson, Matthew J. Parry-Hill, Brian O. Flynn, and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
What is Petrified Wood? Metaphysically speaking, it’s a symbol of man’s infinite connection with nature.
The process of petrification begins with three raw ingredients: wood, water, and mud. Petrified wood is a material formed by the silicification of wood, generally in the form of opal or chalcedony, in such a manner as to preserve the original form and structure of the wood; it is also known as agatized wood, opalized wood, or woodstone. Petrified wood has been preserved for millions of years by the process of petrification, which turns the wood into quartz crystal. Although the resulting material is brittle and can shatter, it is also harder than steel.
Petrified wood (from the Greek root petro meaning “rock” or “stone”; literally “wood turned into stone”) is the name given to a special type of fossilized remains of terrestrial vegetation. It is the result of a tree having turned completely into stone by the process of permineralization. All the organic materials have been replaced with minerals (mostly a silicate, such as quartz), while retaining the original structure of the wood. Unlike other types of fossils which are typically impressions or compressions, petrified wood is a three-dimensional representation of the original organic material. The petrifaction process occurs underground, when wood becomes buried under sediment and is initially preserved due to a lack of oxygen which inhibits aerobic decomposition. Mineral-laden water flowing through the sediment deposits minerals in the plant’s cells; as the plant’s lignin and cellulose decay, a stone mould forms in its place.
In general, wood takes less than 100 years to petrify. The organic matter needs to become petrified before it decomposes completely. A forest where the wood has petrified becomes known as a Petrified Forest.
Elements such as manganese, iron and copper in the water/mud during the petrification process give petrified wood a variety of color ranges. Pure quartz crystals are colorless, but when contaminants are added to the process, the crystals take on a yellow, red, or other tint.
Following is a list of contaminating elements and related color hues:
- carbon – black
- cobalt – green/blue
- chromium – green/blue
- copper – green/blue
- iron oxides – red, brown, and yellow
- manganese – pink/orange
- manganese oxides – blackish/yellow
Petrified wood can preserve the original structure of the wood in all its detail, down to the microscopic level. Structures such as tree rings and the various tissues are often observed features.
Metaphysical facts about Petrified Wood: Used to connect to past lives, for grounding, to attract wealth, strength and courage, reminds one of nature, used to stimulate the root chakra, brings about mental calm and centers the energy body. |
Promoting children’s physical and mental well-being.
Obesity is the fastest growing health concern in the United States. Unhealthy diets and increasingly sedentary lifestyles have contributed to this epidemic. Only 2% of children in the U.S. eat a healthy diet consistent with federal nutrition recommendations, and only 15% of all elementary-aged children eat the recommended five servings of fruits and vegetables daily, according to the Pediatrics article titled “Food intakes of US children and adolescents compared with recommendations.” Minority, rural and poor populations account for disproportionately high levels of obesity.
Richard Louv, author of “Last Child in the Woods: Saving our Children From Nature Deficit Disorder”, suggests there is a direct link between the rise in physical and mental health issues in children, and their lack of connection to the natural world. Considering that children today spend less time playing outdoors than any previous generation, it is logical that their overall understanding of the natural world is in decline.
How can we address these issues in Michigan? Within educational environments, schoolyard gardens are emerging as unique venues for introducing nutrition education, improving healthy choices, and reversing sedentary lifestyle trends. Research has demonstrated that eating patterns developed during childhood are the best indicators of adult eating habits; therefore, providing nutrition education and instilling healthy food choices at an early age can serve to promote lifelong physical and mental well-being. However, these concepts must be reinforced at home to be effective. Research from the University of Minnesota suggests that providing families with the knowledge and ability to improve their food choices and eating habits is a key factor toward enhancing healthy adolescent food choices. Actively engaging parents through family meals, cooking demonstrations, and other community events will likely extend healthy food choices from schools into households.
Schoolyard gardens provide other social benefits as well. They can serve as outdoor classrooms, social gathering areas for families, and offer opportunities for multi-cultural education.
Over time, we have lost the connection to where our food comes from. It is time to reverse that trend. What could be better than helping kids grow and harvest their own tomatoes, onions, peppers, and herbs, and then watch them prepare and eat salsa?
Alice Waters, a prominent chef and one of the first proponents of schoolyard gardens, suggests: “A curriculum designed to educate both the senses and the conscience–a curriculum based on sustainable agriculture–will teach children their moral obligation to be caretakers and stewards of the finite resources of our planet. And it will teach them the joy of the table, the pleasures of real work, and the meaning of community.” |
The American bison was once the symbol of a vast, limitless country filled with seemingly endless land and equally endless opportunity. But American settlers soon ensured that the bison would ultimately symbolize the dark, ugly side of “manifest destiny.”
Estimates of how many bison used to roam the Midwest, before European settlers moved in, range from 30 to 60 million. Native Americans once lived in harmony with these migratory herds, while using the bison for food, as well as their hides for clothing and shelter, and their bones for tools and weapons.
But the American settlers advancing from the east were hungry for more land and more resources, including bison. Hunters on cross-country trains would even take aim at the wild creatures from their windows and shoot down several at a time.
The hunting train would then slow to a stop for people to skin the animals for coats, or cut out their tongues for culinary delicacies in the cities along the Eastern seaboard. Unlike the Native Americans, these hunters left the rest of the bison to rot.
Overall, between 1800 and 1900, the bison population was brought down from the estimated 30-60 million to approximately 325. While more exact statistics on the number of bison killed by settlers are hard to come by, the full scope of the problem can be glimpsed in the numbers from one railroad company: 500,000 bison hides shipped east between just 1872 and 1874.
As startling as the numbers behind this mass buffalo slaughter are, most settlers seemed to view the animal as just one small step in manifest destiny, the quasi-religious belief that American settlers were destined to own the land of the New World all the way from the Atlantic to the Pacific.
Even the extermination of Native American populations — another enormous casualty of manifest destiny — is directly tied to the bison.
“I would not seriously regret the total disappearance of the buffalo from our western plains, in its effect upon the Indians,” Columbus Delano, Secretary of the Interior, wrote in 1873.
The following year, General Philip Sheridan, a leading fighter in the Indian Wars, told the Texas Legislature that bison hunters were “destroying the Indian’s commissary,” and the people should let them “kill, skin, and sell until the buffaloes are exterminated.”
Conflicts and ideologies like these are oftentimes hard to visualize in concrete terms and solid images. But in the case of manifest destiny, one need look no further than the buffalo slaughter.
Today, however, through careful conservation and land management efforts, the bison population has been brought back up to around 500,000. |
Patterns of a colonial age
Crisis and response
In the last half of the 18th century, all the major states of Southeast Asia were faced with crisis. The great political and social structures of the classical states had begun to decay, and, although the reasons for this disintegration are not altogether clear, the expanded size of the states, the greater complexity of their societies, and the failure of older institutions to cope with change all must have played a part. It is also likely that European efforts to choke and redirect the region’s trade had already done much to destroy the general prosperity that trade previously had provided, though Europeans were neither ubiquitous nor in a position to rule, even in Java. The most serious circumstances were undoubtedly those of Vietnam, where from 1771 to 1802 there raged a struggle—the Tay Son rebellion—over the very nature of the state. This rebellion threatened to sweep away the entire Confucian establishment of Vietnam, and perhaps would have done so if its leader had not attempted to accomplish too much too quickly. Elsewhere, war and confusion held societies in their grip for much shorter periods, but everywhere rulers were compelled to think of changed circumstances around them and what they meant for the future.
In the mainland states three great rulers of three new dynasties came to the fore: Bodawpaya (ruled 1782–1819) in Myanmar, Rama I (1782–1809) in Siam (Thailand), and Gia Long (1802–20) in Vietnam. All three were fully aware of the dangers, internal as well as external, that faced them and their people, and their efforts were directed at meeting these challenges. As their armies extended their reach beyond earlier limits, these rulers vigorously pursued a combination of traditional and new policies designed to strengthen their realms. Of particular importance were efforts to bring villages under closer state control, curb shifting patron-client relationships, and centralize and tighten the state administrative apparatus. The institution of kingship itself seemed to become more dynamic and intimately involved in the direction of the state. In retrospect, some of these policies had a recognizably modern ring to them, and taken together they represented, if not a revolution, at least a concerted effort at change. Even Gia Long, whose conscience and circumstance both demanded that he give special attention to reviving the classical Confucian past, quietly incorporated selected Western and Tay Son ideas in his government. Nor were the changes ineffectual, for by 1820 the large mainland states stood at the height of their powers. Nevertheless, it was uncertain whether these efforts would be sufficient to withstand the pressures of the immediate future.
In insular Southeast Asia the Javanese state confronted a similar crisis, but it had far less freedom with which to respond. The Gianti Agreement (1755) had divided the realm and given the Dutch decisive political and economic powers. Though resistance was not impossible, it was difficult, especially since the rulers and their courts were now largely beholden to the Dutch for their positions. The elite’s response to these circumstances generally has been interpreted as a kind of cultural introversion and avoidance of reality, a judgment that probably is too harsh. The Javanese culture and society of earlier days was no longer serviceable, and court intellectuals sought to find a solution in both a revitalization of the past and a clear-eyed examination of the present. Neither effort was successful, though not for want of trying. The idea of opposing Dutch rule, furthermore, was not abandoned entirely, and it was only the devastating Java War (1825–30) that finally tamed the Javanese elite and, oddly enough, left the Dutch to determine the final shape of Javanese culture until the mid-20th century.
Except in Java and much of the Philippines, the expansion of Western colonial rule in most of Southeast Asia was a phenomenon only of the 19th and the beginning of the 20th centuries. In the earlier period Europeans tended to acquire territory as a result of complicated and not always desired entanglements with Southeast Asian powers, either in disputes or as a result of alliances. After about 1850, Western forces generally were more invasive, requiring only feeble justification for going on the attack. The most important reasons for the change were a growing Western technological superiority, an increasingly powerful European mercantile community in Southeast Asia, and a competitive scramble for strategic territory. Only Siam remained largely intact and independent. By 1886 the rest of the region had been divided among the British, French, Dutch, and Spanish (who soon were replaced by the Americans), with the Portuguese still clinging to the island of Timor. What were often called “pacification campaigns” were actually colonial wars—notably in Burma (Myanmar), Vietnam, the Philippines, and Indonesia—and continued well into the 20th century. More peaceful Western encroachments on local sovereignty also occurred until the 1920s. Full-blown, modern colonial states existed for only a short period, in many cases for not much more than a generation.
These colonial regimes, however, were not insubstantial, as they put down strong bureaucratic roots and—though often co-opting existing administrative apparatuses—formed centralized, disciplined structures of great power. They were backed by the enormous economic resources of the industrialized Western nations; and by the early 20th century, having effectively disarmed the indigenous societies, they possessed a monopoly on the means of violence. There is no mistaking the impact of Western colonial governments on their surroundings, and nowhere is this more evident than in the economic sphere. Production of tin, oil, rubber, sugar, rice, tobacco, coffee, tea, and other commodities burgeoned, driven by both government and private activity; this brought rapid changes to the physical and human landscape and coupled Southeast Asia to a new worldwide capitalist system.
Indeed, colonial domination was only a variant condition in a rapidly changing world. Siam, which through a combination of circumstance and the wise leadership of Mongkut (ruled 1851–68) and Chulalongkorn (1868–1910) avoided Western rule, nevertheless was compelled to adopt policies similar to, and often even modeled on, those of the colonial powers in order to survive. Modernization appeared to require such an approach, and the Thai did not hesitate to embrace it with enthusiasm. Bangkok in the late 1920s surpassed even British Singapore as a centre of such modern amenities as electric lighting and medical facilities, and the state itself had achieved an enviable degree of political and economic viability among its colonial neighbours. The Thai may have “colonized themselves,” as some critics have noted, but in so doing they also escaped or diluted some of the more corrosive characteristics of Western rule, among them racism and cultural destruction. They also do not appear to have experienced the same degree of rural unrest that troubled their colonial neighbours in the 1920s and ’30s. They were unable, however, to avoid other concomitants of state expansion and modernization.
Transformation of state and society
It was not the purpose of the new states to effect rapid or broad social change. Their primary concerns were extending bureaucratic control and creating the conditions for success in a capitalist world economy; the chief necessity was stability or, as the Dutch called it, rust en orde (“tranquillity and order”). Boundaries were drawn, villages defined, laws rewritten—all along Western lines of understanding, often completely disregarding indigenous views and practices—and the new structure swiftly replaced the old. Social change was desired only insofar as it might strengthen these activities. Thus, the Thai began early on to send princes to Europe for their education, employing them throughout the government on their return. The Dutch created exclusive schools for the indigenous administrative elite—a kind of petty royalty—and invented ways of reducing social mobility in this group, as, for example, by making important positions hereditary. But the new governments did not provide Western-style learning to most Southeast Asians, primarily because it was an enormous, difficult, and expensive task and also because policymakers worried about the social and political consequences of creating an educated class. Except in the Philippines, by the mid-1930s only a small percentage of indigenous children attended government-run schools, and only a fraction of those studied above the primary-school level. Some Southeast Asian intellectuals soon drew the conclusion that they had better educate themselves, and they began establishing their own schools with modern, secular courses of study. Some, like the Tonkin Free School in Vietnam (1907), were closed by the colonial regimes, their staffs and pupils hounded by police; others, like the many so-called “wild schools” in Indonesia in the 1930s, were much too numerous to do away with altogether, but they were controlled as carefully as possible.
Nevertheless, during the 1920s and ’30s a tiny but thoughtful and active class of Westernized Southeast Asian intellectuals appeared. They were not the first to literally and figuratively speak the language of the colonial rulers and criticize them, for by the turn of the 20th century Java and Luzon, with the longest experience under Western rule, had already produced individuals like the Javanese noblewoman Raden Adjeng Kartini and the Filipino patriot José Rizal. The newer generation, however, was more certain in its opposition to colonial rule (or, in Siam, rule by the monarchy), clearer and far more political in its conception of a nation, and unabashedly determined to seize leadership and initiative in their own societies. In Burma this group called themselves thakin (Burmese: “master”), making both sarcastic and proud use of an indigenous word that had been reserved for Burmese to employ when addressing or describing Europeans. These new intellectuals were not so much anti-Western as they were anticolonial. They accepted the existing state as the foundation of a modern nation, which they, rather than colonial officials, would control. This was the generation that captained the struggles for independence (in Siam, independence from the monarchy) and emerged in the post-World War II era as national leaders. The best-known figures are Sukarno of Indonesia, Ho Chi Minh of Vietnam, and U Nu of Burma (subsequently Myanmar).
The chief problem facing the new intellectuals lay in reaching and influencing the wider population. Colonial governments feared this eventuality and worked to prevent it. Another obstacle was that the ordinary people, especially outside cities and towns, inhabited a different social and cultural world from that of the emerging leaders. Communication was difficult, particularly when it came to explaining such concepts as nationalism and modernization. Still, despite Western disbelief, there was considerable resentment of colonial rule at the lower levels of society. This was based largely on perceptions that taxes were too numerous and too high, bureaucratic control too tight and too prone to corruption, and labour too coercively extracted. In many areas there also was a deep-seated hatred of control by foreigners, whether they be the Europeans themselves or the Chinese, Indians, or others who were perceived as creatures of their rule. Most of the new intellectual elite were only vaguely aware of these sentiments, which in any case frequently made them uneasy; in a sense they, too, were foreigners. In the 1930s, however, a series of anticolonial revolts took place in Burma, Vietnam, and the Philippines; though they failed in their objectives, these revolts made it clear that among the masses lay considerable dissatisfaction and, therefore, radical potential. The revolts, and the economic disarray of the Great Depression, also suggested that European rule was neither invulnerable nor without flaws. When the outbreak of war in Europe and the Pacific showed that the colonial powers were much weaker militarily than had been imagined, destroying colonial rule and harnessing the power of the masses seemed for the first time to be real possibilities.
The arrival of the Japanese armed forces in Southeast Asia in 1941–42 did not, however, occasion independence. A few leaders perhaps had been naive enough to think that it might—and some others clearly admired the Japanese and found it acceptable to work with them—but on the whole the attitude of intellectuals was one of caution and, very quickly, realization that they were now confronted with another, perhaps more formidable and ferocious, version of colonial rule. The Japanese had no plans to radicalize or in any way destabilize Southeast Asia—which, after all, was slated to become part of a Tokyo-centred Greater East Asia Co-prosperity Sphere; in the short term they sought to win the war, and in the long run they hoped to modernize the region on a Japanese model. Continuity served these purposes best, and in Indochina the Japanese even allowed the French to continue to rule in return for their cooperation. Little wonder that before long Southeast Asians began to observe that, despite “Asia for the Asians” propaganda, the new and old colonial rulers had more in common with each other than either had with the indigenous peoples.
Still, for two distinct reasons the period does represent a break from the past. First, the Japanese attempted to mobilize indigenous populations to support the war effort and to encourage modern, cooperative behaviour on a mass scale; such a thing had never been attempted by Western colonial governments. Virtually all of the mobilization efforts, however, were based on Japanese models, and the new rulers were frustrated to discover that Southeast Asians did not behave in the same fashion as Japanese. Frequently the result was disorder, corruption, and, by the end of the war, a seething hatred of the Japanese. It was also the case that, both because the war was going against them and because the response to other approaches was unenthusiastic, the Japanese were compelled before long to utilize local nationalism in their mobilization campaigns, again something quite impossible under European rule. The consequences were to benefit local rather than Japanese causes and, ironically, to contribute handsomely to the building of anti-Japanese sentiments.
A second difference between Western and Japanese colonialism was in the opportunities the occupation provided the new educated elite. The Japanese were wary of these people because of their Western orientation but also favoured them because they represented the most modern element in indigenous society, the best partner for the present, and the best hope for the future. Often dismissed as “pseudo-intellectuals” by the Western colonial governments and prevented from obtaining any real stake in the state, the new intellectuals under the Japanese were accorded positions of real (though not unlimited or unsupervised) authority. Nor could Southeast Asians who found themselves in these positions easily fault the policies they now accepted responsibility for carrying out or at least supporting, since many of these policies were in fact—if not always in spirit—similar to ones they had endorsed in earlier decades. In short, the Western-educated elite emerged from the Japanese occupation stronger in various ways than they had ever been. By August 1945 they stood poised to inherit (or, given the variety of political conditions at the end of the war, to struggle among themselves over inheriting) the mantle of leadership over their own countries.
Southeast Asia was changed in an evolutionary, rather than revolutionary, way by the Japanese occupation. Although returning Europeans and even some Southeast Asians themselves complained that Japanese fascism had deeply influenced the region’s societies, there is not much evidence that this was the case. Japanese rule, indeed, had destroyed whatever remained of the mystique of Western supremacy, but the war also had ruined any chances that it might be replaced with a Japanese mystique. There was clearly little clinging to Japanese concepts except where they could be thoroughly indigenized; even the collaboration issue, so important to Europeans and their thinking about the immediate postwar era, failed to move Southeast Asians for long. And, if the general population appeared less docile in 1945 than four years earlier, the reason lay more in the temporary removal of authority at the war’s end than in the tutelage of the Japanese.
Contemporary Southeast Asia
Struggle for independence
The swift conclusion of the war in the Pacific made it impossible for the former colonial masters to return to Southeast Asia for several weeks, in some areas for months. During the interim, the Japanese were obliged by the Allies to keep the peace, but real power passed into the hands of Southeast Asian leaders, some of whom declared independence and attempted with varying degrees of success to establish government structures. For the first time since the establishment of colonial rule, firearms in large numbers were controlled by Southeast Asians. Such was the groundwork for the establishment of new, independent states.
Prewar nationalism had been most highly developed in Vietnam and Indonesia, and the colonial powers there were least inclined to see the new realities created by the war, perhaps because of the large numbers of resident French and Dutch and because of extensive investments. The result in both countries was an armed struggle in which the Western power was eventually defeated and independence secured. The Indonesian revolution, for all its internal complexities, was won in little more than four years with a combination of military struggle and civilian diplomacy. The revolution of the Vietnamese, who had defeated the French by 1954, continued much longer because of an internal political struggle and because of the role Vietnam came to play in global geopolitics, which ultimately led to the involvement of other external powers, among them the United States. In both cases, however, independence was sealed in blood, and a mythologized revolution came to serve as a powerful, unifying nationalist symbol. In the rest of Southeast Asia, the achievement of independence was, if not entirely peaceful, at least less violent. Malaysia and the Philippines suffered “emergencies” (as armed insurgencies were euphemistically called), and Burma, too, endured sporadic internal military conflict. For better or worse, these conflicts were no substitutes for a genuine revolutionary experience.
Whether by revolution or otherwise, decolonization proceeded rapidly in Southeast Asia. The newly independent states all aspired toward democratic systems more or less on the Western model, despite the lack of democratic preparation and the impress of nationalist sentiment. None expressed a desire to return to precolonial forms of government, and, although some Western observers professed to see in such leaders as Indonesia’s Sukarno Southeast Asian societies returning to traditional behaviour, their judgment was based more on ephemeral signs than on real evidence. For one thing, societies as a whole had been too much altered in the late 19th and early 20th centuries to make it clear what “tradition” really was. For another, the new leadership retained the commitment to modernization that it had developed earlier. They looked forward to a new world, not an old one. The difficulty, however, was that there was as yet little consensus on the precise shape this new world should take, and colonial rule had left indigenous societies with virtually no experience in debating and reaching firm decisions on such important matters. It is hardly surprising that one result of this lack of experience was a great deal of political and intellectual conflict. Often forgotten, however, is another result: an outpouring of new ideas and creativity, particularly in literature. This signaled the beginning of a kind of cultural renaissance, the dimensions and significance of which are still insufficiently understood.
Defining new states and societies
The first two decades of independence constituted a period of trial and error for states and societies attempting to redefine themselves in contemporary form. During this time, religious and ethnic challenges to the states essentially failed to split them, and (except in the states of former Indochina) both communism and Western parliamentary democracy were rejected. Indonesia, the largest and potentially most powerful nation in the region, provided the most spectacular examples of such developments, ending in the tragic events of 1965–66, when between 500,000 and 1,000,000 lives may have been lost in a conflict between the Indonesian Communist Party and its opponents. Even Malaysia, long the darling of Western observers for its apparent success as a showcase of democracy and capitalist growth, was badly shaken by violence between Malays and Chinese in 1969. The turmoil often led Southeast Asia to be viewed as inherently unstable politically, but from a longer perspective—and taking into account both the region’s great diversity and the arbitrary fashion in which boundaries had been set by colonial powers—this perhaps has been a shortsighted conclusion.
The new era that began in the mid-1960s had three main characteristics. First, the military rose as a force in government, not only in Vietnam, Burma, and Indonesia but also in the Philippines and—quietly—in Malaysia. The military establishments viewed themselves as actual or potential saviours of national unity and also as disciplined, effective champions of modernization; at least initially, they frequently had considerable support from the populace. Second, during this period renewed attention was given by all Southeast Asian nations to the question of unifying (secular and national) values and ideology. Thailand, Indonesia, and Vietnam had been first in this area in the 1940s and ’50s, but the others followed. Even Singapore and Brunei developed ideologies, with the express purpose of defining a national character for their people. Finally, virtually all Southeast Asian states abandoned the effort of utilizing foreign models of government and society—capitalist or communist—and turned to the task of working out a synthesis better suited to their needs and values. Each country arrived at its own solution, with varying degrees of success. By the 1980s what generally had emerged were quasi-military bourgeois regimes willing to live along modified democratic lines—i.e., with what in Western eyes appeared to be comparatively high levels of restriction of personal, political, and intellectual freedom. Whatever their precise political character, these were conservative governments. Even Vietnam, the most revolutionary-minded among them, could not stomach the far-reaching and murderous revolution of the Khmer Rouge in Cambodia in the mid-1970s and by the end of the decade had moved to crush it.
Tempting as it may be to conclude that greater doses of authoritarian rule (some of it seemingly harking back directly to colonial times) merely stabilized Southeast Asia and permitted the region to get on with the business of economic development, this approach was not successful everywhere. In Burma (called Myanmar since 1989) the military’s semi-isolationist, crypto-socialist development schemes came to disaster in the 1980s, revealing the repressive nature of the regime and bringing the country to the brink of civil war by the end of the decade. In the Philippines the assault by President Ferdinand Marcos and his associates on the old ruling elite class brought a similar result, in addition to a spectacular level of corruption and the looting of the national treasury. In Vietnam, where the final achievement of independence in 1975 brought bitter disappointment to many and left the country decades behind the rest of the region in economic development, public and internal Communist Party unrest forced an aging generation of leaders to resign and left the course for the future in doubt as never before.
The states generally thought to be most successful to date—Thailand, Indonesia, Malaysia, and especially Singapore—have followed policies generally regarded as moderate and pragmatic. All are regarded as fundamentally stable and for that reason have attracted foreign aid and investment; all have achieved high rates of growth since the mid-1970s and enjoy the highest standards of living in the region. Their very success, however, has created unexpected social and cultural changes. Prosperity, education, and increasing access to world media and popular culture have all given rise, for example, to various degrees of dissatisfaction with government-imposed limitations on freedom and to social and environmental criticism. Particularly in Indonesia and Malaysia, there has been a noticeable trend toward introspection and discussion of national character, as well as a religious revival in the form of renewed interest in Islām. It appears that the comparatively small and unified middle class, including a generally bureaucratized military, is becoming larger, more complex, and less easily satisfied. That was undoubtedly not the intent of those who framed governmental policy, but it is a reality with which they must deal.
Reappearance of regional interests
After the end of the 17th century, the long-developed polities of Southeast Asia were pulled into a Western-dominated world economy, weakening regional trade networks and strengthening ties with distant colonial powers. In the early years of independence these ties often remained strong enough to be called neocolonial by critics, but after the mid-1960s these partnerships could no longer be controlled by former colonial masters, and the new Southeast Asian states sought to industrialize and diversify their markets. On the one hand, this meant a far greater role for Japan in Southeast Asia; that country is by far the most important trading partner of most Southeast Asian nations. On the other, it meant that many countries began to rediscover commonalities and to examine the possibilities within the region for support and markets.
In 1967 the Association of Southeast Asian Nations (ASEAN) was formed by Malaysia, Indonesia, the Philippines, Thailand, and Singapore (Brunei joined in 1984). This group’s initial interest was in security, but it has moved cautiously into other fields. It played an important role, for example, in seeking an end to the Vietnam-Cambodia conflict and has sought a solution to the civil strife in Cambodia. In economic affairs it has worked quietly to discuss such matters as duplication of large industrial projects, but, perhaps because the economies of most of its members are quite similar and as yet only partially industrialized, ASEAN has not attempted to build a true economic community. Only since the mid-1980s has ASEAN been taken seriously by major powers, or even sometimes by Southeast Asians themselves. It seems likely, however, that the formerly Soviet-dominated states of Vietnam, Laos, and Cambodia will become part of ASEAN before the end of the 1990s, and Myanmar may be compelled to follow. Such circumstances will undoubtedly open up greater regional markets and give the region as a whole a more imposing world profile. Moreover, modern communications, which have already begun to inform ASEAN populations more closely about each other, cannot help but further this process and draw attention to common strands in an emerging modern culture that is shared, at least to some degree, by all the nations of the region.
According to The Virtual Fossil Museum, secondary endosymbiosis is the process that occurs when the product of primary endosymbiosis is taken up and retained by a eukaryote. Primary endosymbiosis is the engulfment of a bacterium by an organism. Secondary endosymbiosis gives rise to different forms of algae.
"Secondary Endosymbiosis" by J.M. Archibald explains how endosymbiosis led to the development of several forms of alga. A heterotrophic eukaryote engulfed an ancestor of cyanobacteria. This ancestor of cyanobacteria became a permanent part of its cell as an organelle in the cytoplasm. This primary endosymbiosis gave rise to the differentiation of red algae, glaucophyte algae and green algae with double-membrane plastids. Secondary endosymbiosis occurred when another eukaryote engulfed and retained the alga containing the primary plastids.
With secondary endosymbiosis, three or four membranes may surround secondary plastids, according to The Virtual Fossil Museum. The two additional membranes are said to correspond to the membrane of the host cell and the alga that was engulfed.
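As a rough illustration of the membrane-counting logic just described, here is a minimal sketch; the function and sample values are ours, for illustration only, and assume the simple rule stated above (two membranes for primary plastids, three or four for secondary ones).

```python
# Illustrative only: infer a plastid's likely endosymbiotic history from
# its membrane count, using the simple rule described in the text.

def plastid_origin(n_membranes: int) -> str:
    """Map a plastid membrane count to a likely endosymbiotic origin."""
    if n_membranes == 2:
        # Double membrane: consistent with primary endosymbiosis
        # (e.g., plastids of red, green, and glaucophyte algae).
        return "primary endosymbiosis"
    if n_membranes in (3, 4):
        # Extra membranes are thought to correspond to the engulfing host's
        # membrane and the engulfed alga's own membrane.
        return "secondary endosymbiosis"
    return "unclear from membrane count alone"

for n in (2, 3, 4):
    print(n, "membranes ->", plastid_origin(n))
```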
When one organism lives inside another, the process is called endosymbiosis, according to the Genetic Science Learning Center at the University of Utah. Mitochondria and chloroplasts are two structures that may have become permanent fixtures in the cell through endosymbiosis. They have similar features to bacterial cells, including separate DNA from the nucleus of the cell. This, along with the fact they both have double membranes, suggests they were ingested by another host.
Most, if not all, of Europe has a suitable climate for biogas production. The specific type of system depends on the regional climate. Regions with harsher winters may rely more on animal waste and other readily available materials compared to warmer climates, which may have access to more crop waste or organic material.
Regardless of suitability, European opinions vary on the most ethical and appropriate materials to use for biogas production. Many proponents argue biogas production should be limited to waste materials derived from crops and animals, while others claim crops should be grown with the intention of being used for biogas production.
Biogas Production From Crops
Europeans in favor of biogas production from energy crops argue the crops improve the quality of the soil. Additionally, they point to the fact that biogas is a renewable energy resource compared to fossil fuels. Crops can be rotated in fields and grown year after year as a sustainable source of fuel.
Extra crops can also improve air quality. Plants absorb carbon dioxide and can help reduce harmful greenhouse gases in the air which contribute to global climate change.
Energy crops can also improve water quality because of plant absorption. Crops grown in otherwise open fields reduce the volume of water runoff that reaches lakes, streams and rivers. The flow of water and harmful pollutants is impeded by the plants, and the water is eventually absorbed into the soil, where it is purified.
Urban residents can also contribute to biogas production by growing rooftop or vertical gardens in their homes. Waste from tomatoes, beans and other vegetables is an excellent source of biogas material. Residents also benefit from improved air quality and, by reducing runoff, improved water quality.
Proponents of biogas production from crops aren’t against using organic waste material for biogas production in addition to crop material. They believe crops offer another means of using more sustainable energy resources.
Biogas Production From Agricultural Waste
Opponents of growing crops for biogas argue the crops used for biogas production degrade soil quality, making it less efficient for growing crops for human consumption. They also argue the overall emissions from biogas production from crops will be higher compared to fossil fuels.
Growing crops can be a labor-intensive process. Land must be cleared, fertilized and then seeded. While crops are growing, pesticides and additional fertilizers may be used to promote crop growth and decrease losses from pests. Excess chemicals can run off fields, degrade the water quality of streams, lakes and rivers, and kill off aquatic life.
Once crops reach maturity, they must be harvested and processed to be used for biogas material. Biogas is less efficient compared to fossil fuels, which means it requires more material to yield the same amount of energy. Opponents argue that when the entire supply chain is evaluated, biogas from crops creates higher rates of emissions and is more harmful to the environment.
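The "more material for the same energy" point can be made concrete with back-of-the-envelope arithmetic. The sketch below uses rough, assumed energy densities (not figures from this article) purely for illustration.

```python
# Back-of-the-envelope comparison: how much fuel mass is needed to meet a
# fixed energy demand. Energy densities are rough illustrative assumptions.

ENERGY_DENSITY_MJ_PER_KG = {
    "natural gas (fossil)": 50.0,  # approximate typical value
    "raw biogas": 20.0,            # assumed ~60% methane by volume
}

demand_mj = 10_000  # hypothetical energy demand

for fuel, density in ENERGY_DENSITY_MJ_PER_KG.items():
    mass_kg = demand_mj / density
    print(f"{fuel}: {mass_kg:,.0f} kg needed for {demand_mj:,} MJ")
```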
In Europe, the supply chain for biogas from agricultural waste is more efficient compared to crop materials. Regardless of whether or not the organic waste is reused, it must be disposed of appropriately to prevent any detrimental environmental impacts. When crop residues are used for biogas production, it creates an economical means of generating useful electricity from material which would otherwise be disposed of.
Rural farms that are farther from the electric grid can also create their own sources of energy through biogas production from agricultural wastes. The energy will be less expensive and more eco-friendly, as it doesn’t carry the associated transportation costs.
Although perspectives differ on the type of materials which should be used for biogas production, both sides agree biogas offers an environmentally friendly and sustainable alternative to using fossil fuels.
New Studies Link Measles to Immune Amnesia
Measles, an extremely contagious disease caused by the Rubeola virus, presents itself in sufferers through a host of symptoms, including:
- Sore throat
- Runny nose
- Inflamed rashes on the skin
Despite the disease’s highly contagious nature and the lack of any real cure, people tend to flippantly underestimate its seriousness. New research, though, has shown that measles is a far deadlier and more damaging disease than was once thought.
Apart from the mortality rate of 20–30 percent in complicated infections (in countries without adequate health care), measles can also result in a weakened immune system, a condition known as immune amnesia.
New research links the Rubeola virus to the onset of immune amnesia, the weakening of an individual’s immune system, caused by the destruction of the system’s antibody-producing cells. Antibodies, which are proteins produced by the immune system to target foreign pathogens (disease-causing agents), bind to pathogens and destroy them and their effects on the human body. The presence of antibodies in the immune system is what gives us our “immunity”; we are able to fight off diseases we have been previously infected with. It is important to note that antibodies are produced during and after exposure to disease: individuals who have been previously infected with chicken pox, for example, never acquire the disease again. This is a result of the antibodies that were produced as the immune system fought off the disease, giving previous sufferers “immunity” to the chicken pox disease.
Measles eradicates the antibody-producing cells, leading to a suppression of the immune system and a loss of immunity in an individual who may have once possessed the defensive resources to protect against certain illnesses. Consequently, due to the loss of immunity, the infected individual is left vulnerable because the immune system, essentially, has “forgotten” how to defend itself; hence the term “amnesia” in “immune amnesia.”
Even after patients recover from measles, their immune systems may be chronically affected by immune amnesia, sometimes for up to several years after the initial infection. Studies have shown that despite recovery from measles, previous sufferers may still possess amnesiac immune systems, meaning the effects of measles exposure are even more extensive and dangerous than was once thought.
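One way to picture immune amnesia is as the random loss of part of a previously acquired antibody repertoire. The toy simulation below is purely illustrative: the repertoire size and the 40% loss fraction are hypothetical parameters, not measured values from these studies.

```python
import random

# Toy model of immune amnesia: a measles infection randomly removes a
# fraction of previously acquired antibody specificities.

random.seed(1)
repertoire = {f"pathogen_{i}" for i in range(100)}  # 100 remembered pathogens

loss_fraction = 0.4  # hypothetical illustrative value
lost = set(random.sample(sorted(repertoire), int(loss_fraction * len(repertoire))))
after_measles = repertoire - lost

print(f"specificities before infection: {len(repertoire)}")
print(f"specificities after infection:  {len(after_measles)}")
print(f"'forgotten' pathogens the host is again vulnerable to: {len(lost)}")
```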
The studies carried out by researchers contribute to the understanding of the importance of the measles vaccine. Not only does the vaccine protect against measles, it also protects against immune amnesia, and consequently, a host of other illnesses.
Scientific names: Artocarpus altilis Linn., Artocarpus communis, Artocarpus incisus
Common names: Fruta de pan (Span.), Breadfruit (Engl.), Rimas (Tag.)
Habitat: Native to the Malay Peninsula, through all of Island Southeast Asia and into most Pacific Ocean islands. The ancestors of the Polynesians found the trees growing in the northwest New Guinea area around 3500 years ago. They gave up the rice cultivation they had brought with them from ancient Taiwan, and raised breadfruit wherever they went in the Pacific (except Easter Island and New Zealand, which were too cold). Their ancient eastern Indonesian cousins spread the plant west and north through Insular and coastal Southeast Asia. It has, in historic times, also been widely planted in tropical regions elsewhere.
Breadfruit trees grow to a height of 85 feet (26 m). The large and thick leaves are deeply cut into pinnate lobes. All parts of the tree yield latex, a milky juice, which is useful for boat caulking.
The trees are monoecious, with male and female flowers growing on the same tree. The male flowers emerge first, followed shortly afterward by the female flowers, which grow into a capitulum capable of pollination just three days later. The pollinators are Old World fruit bats in the family Pteropodidae. The compound, false fruit develops from the swollen perianth and originates from 1,500-2,000 flowers. These are visible on the skin of the fruit as hexagon-like disks.
Breadfruit is one of the highest-yielding food plants, with a single tree producing up to 200 or more fruits per season. In the South Pacific, the trees yield 50 to 150 fruits per year. In southern India, normal production is 150 to 200 fruits annually. Productivity varies between wet and dry areas. In the Caribbean, a conservative estimate is 25 fruits per tree. Studies in Barbados indicate a reasonable potential of 6.7 to 13.4 tons per acre (16-32 tons/ha). The grapefruit-sized ovoid fruit has a rough surface, and each fruit is divided into many achenes, each achene surrounded by a fleshy perianth and growing on a fleshy receptacle. Some selectively bred cultivars have seedless fruit.
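As a quick sanity check of the quoted Barbados figures, the acre-to-hectare conversion can be done directly; the slight mismatch at the top of the range presumably reflects rounding or short-ton versus metric-tonne conventions.

```python
# Convert the quoted Barbados yields from tons per acre to tons per hectare.
# 1 hectare ≈ 2.4711 acres.

ACRES_PER_HECTARE = 2.4711

for tons_per_acre in (6.7, 13.4):
    tons_per_ha = tons_per_acre * ACRES_PER_HECTARE
    print(f"{tons_per_acre} tons/acre ≈ {tons_per_ha:.0f} tons/ha")
# -> roughly 17 and 33 tons/ha, in line with the quoted 16-32 tons/ha
```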
Breadfruit is an equatorial lowland species that grows best below elevations of 650 metres (2,130 ft), but is found at elevations of 1,550 metres (5,090 ft). Its preferred rainfall is 1,500–3,000 millimetres (59–120 in) per year. Preferred soils are neutral to alkaline (pH of 6.1-7.4) and either sand, sandy loam, loam or sandy clay loam. Breadfruit is able to grow in coral sands and saline soils.
Nutritional: Breadfruit is roughly 25% carbohydrates and 70% water. It has an average amount of vitamin C (20 mg/100g), small amounts of minerals (potassium and zinc) and thiamin (100 µg).
*Crop considered a carbohydrate food source.
*Fruit can be fried, boiled, candied or cooked as a vegetable.
*High in starch, it is also high in vitamin B, with fair amounts of vitamin C.
Breadfruit is a staple food in many tropical regions. They were propagated far outside their native range by Polynesian voyagers who transported root cuttings and air-layered plants over long ocean distances. They are very rich in starch, and before being eaten they are roasted, baked, fried or boiled. When cooked the taste is described as potato-like, or similar to fresh-baked bread (hence the name).
Because breadfruit trees usually produce large crops at certain times of the year, preservation of the harvested fruit is an issue. One traditional preservation technique is to bury peeled and washed fruits in a leaf-lined pit where they ferment over several weeks and produce a sour, sticky paste. So stored, the product may last a year or more, and some pits are reported to have produced edible contents more than 20 years later. Fermented breadfruit mash goes by many names such as mahr, ma, masi, furo, and bwiru, among others.
(Drawing of breadfruit by John Frederick Miller.)
Most breadfruit varieties also produce a small number of fruits throughout the year, so fresh breadfruit is always available, but somewhat rare when not in season.
Breadfruit can be eaten once cooked, or can be further processed into a variety of other foods. A common product is a mixture of cooked or fermented breadfruit mash mixed with coconut milk and baked in banana leaves. Whole fruits can be cooked in an open fire, then cored and filled with other foods such as coconut milk, sugar and butter, cooked meats, or other fruits. The filled fruit can be further cooked so that the flavor of the filling permeates the flesh of the breadfruit.
The Hawaiian staple food called poi made of mashed taro root is easily substituted or augmented with mashed breadfruit. The resulting “breadfruit poi” is called poi ʻulu. In Puerto Rico, it is called “panapen” or “pana”, for short. Pana is often served boiled with a mixture of sauteed bacalao (salted cod fish), olive oil and onions. It is also served as tostones or mofongo. In the Dominican Republic, it is known by the name “buen pan” or “good bread”. Breadfruit is also found in Indonesia and Malaysia, where it is called ‘sukun’. In the South Indian state of Kerala and coastal Karnataka, especially around Mangalore, where it is widely grown and cooked, it is known as Kadachakka and Gujje respectively. In Belize, the Mayan people call it ‘masapan’.
Parts used: Bark, leaves, fruit.
Properties and constituents: Study has yielded papayotin (an enzyme) and artocarpin.
• Decoction of the bark used as vulnerary (wound healing). In the Visayas, decoction of the bark used in dysentery.
• Used as emollient.
• In the Carribean, leaves are used to relieve pain and inflammation.
• In Jamaican folk medicine, leaf decoction used for hypertension.
• It is also used in traditional medicine to treat illnesses that range from sore eyes to sciatica.
• Phytochemical: (1) Study concluded that the starch of Artocarpus altilis showed a high degree of purity. Physicochemical and rheological characteristics suggest the starch could be useful in products that require a long heating process, with an excellent digestibility that might be advantageous for medical and food use. (2) Study showed percent recoveries of amino acid, fatty acid and carbohydrate content of 72.5%, 68.2% and 81.4%, respectively. The starch content is 15.52 g/100 g fresh weight.
• Cytoprotective: Study yielded cytoprotective components – ß-sitosterol and six flavonoids with good potential for medicinal applications.
• Anti-inflammatory: Extract of breadfruit leaves was shown to contain compounds with significant anti-inflammatory activities.
• Phenolic Compounds / Cytotoxicity: Study isolated isoprenylated flavonoids – morusin, artonin E, cycloartobiloxanthone and artonol B – that showed high toxicity against Artemia salina. Results of the cytotoxicity test showed that the presence of an isoprenyl moiety at the C-3 position of the flavone skeleton is an important factor for activity.
• Negative Inotropic Effect: A leaf extract study showed a weak, negative chronotropic and inotropic effect in vivo in the rat. The mechanism of action of the inotropic agent was not cholinergic and may involve decoupling of excitation and contraction.
The wood of the breadfruit tree was one of the most valuable timbers in the construction of traditional houses in Samoan architecture.
Breadfruit was widely and diversely used among Pacific Islanders. Its lightweight wood (specific gravity of 0.27) is resistant to termites and shipworms, consequently used as timber for structures and outrigger canoes. Native Hawaiians used its sticky sap to trap birds, whose feathers were made into cloaks.
Its wood pulp can also be used to make paper, called breadfruit tapa.
The past influences all aspects of our lives and shapes the way in which we live today. At St Luke’s Catholic Primary School, we believe that history provides us with a sense of identity and gives us an insight into the diverse human experience. Through studying history, children develop a wide range of critical thinking skills.
Our curriculum enables children to study and explore the human past and develop a chronological understanding of the passing of time. The children also learn to distinguish between ‘fact’ and subjectivity. Most importantly, they foster an enjoyment for the subject and a desire to find out more.
At St Luke’s, we take an enquiry-based approach to the teaching of history. Our planning is supported by the Collins Connected History scheme which is in keeping with our philosophy and provides a broad and balanced curriculum. There is a clear progression of skills as the children move through the school. The children build on knowledge acquired in previous year groups and develop an understanding of the chronology of events in British and world history. Each year group will study three different historical topics during the course of the academic year. These topics can be found in our long-term planning documents.
The children have the opportunity to take part in historical enrichment activities. These may include: trips to museums and places of historical significance; workshops with historical experts; themed days; and visits from those in our local community who can share their own experiences of life in the past.
The children will leave St Luke’s Catholic Primary School with memorable experiences of studying history at primary school and a deeper understanding of the past. They will have secured the knowledge and skills to be ready for studying history in the next stage of their education.
(ORDO NEWS) — A team of researchers from Iowa State University has found that it is possible to successfully grow alfalfa on Mars.
As scientists around the world speculate not only about sending humans to Mars but also about building shelters on the Red Planet, work continues on ways to make such projects possible.
Before these dreams become reality, many challenges must be overcome, including how to feed the people living so far away.
One option is to grow food inside protected enclosures. The buildings should imitate earthly conditions, since the plants that will be grown there will obviously be brought from Earth.
Growing plants on Mars will require a few basic elements – soil, water, food, and sunlight. In this new study, the researchers looked at the first two points.
Mars doesn’t have much to offer in terms of soil, but it does have basalt, a type of volcanic rock. There are few substances in basalt that could be used as food by plants, and it is stony, not clayey.
Thus, growing food will require not only changing the basalt, but also using plants that can grow in such conditions.
The researchers tried to grow several types of agricultural plants in finely ground basalt found on Earth. They found that plants such as turnips, lettuce and radishes do not grow well in basalt.
On the other hand, scientists have noticed that alfalfa does very well. The researchers also found that if they grew alfalfa in basalt and then planted other crops in the same soil, those crops grew much better. For example, turnip yields increased by 311%.
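For clarity, here is the arithmetic behind a "311% increase"; the baseline value below is arbitrary.

```python
# A 311% increase means the new yield is the old yield times (1 + 3.11).

baseline_yield = 1.0  # arbitrary units of turnip biomass
increase_pct = 311

after_alfalfa = baseline_yield * (1 + increase_pct / 100)
print(f"relative yield after alfalfa treatment: {after_alfalfa:.2f}x baseline")
# -> 4.11x the untreated-basalt yield
```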
The scientists then turned their attention to water, which is very scarce on Mars. It is mainly found in the ice at the poles.
The water there is also very salty and therefore cannot be used for growing plants. To reduce the salinity of water samples on Earth, scientists have added bacteria known as Synechococcus, which can desalinate water.
Testing has shown that Synechococcus can significantly reduce the salt concentration, but the water will still be too salty for the plants.
The researchers then filtered a sample of water by pouring it over piles of basalt rocks and obtained fresh water that could be used to grow plants.
This worksheet is about English functions. It contains different statements presenting various situations, supported with visual aids (pictures) to facilitate the task for both teachers and students. The objective is to help students express orders or commands (positive and negative) in different life situations, either inside or outside the classroom context.
Good luck and I hope you enjoy it
Other pedagogical goals
The above lesson is a great teaching resource for: Elementary (A1), Pre-intermediate (A2)
This resource is intended for: Elementary schoolers
Solutions not included
Social interaction, whether with adults or peers, is a learning experience for young children. An encouraging, responsive setting provides infants, toddlers, and older children with the opportunity to develop creativity, language skills, social awareness, and confidence. Fostering social skills prepares children to function well in society and enjoy healthier relationships throughout their lifetime. Good manners, effective communication, articulation of their own needs, and being considerate of the feelings of others are all an integral part of a happy, successful, and well-adjusted life as an adult.
How do children develop social skills?
Whether a child is naturally more outgoing, makes friends easily, or tends to be shy and hesitant, social skills can be learned and improved. Different strategies in each stage of development strengthen the ability to adapt to uncomfortable situations. Play time is the perfect opportunity to develop these skills. In fact, children acquire the majority of their skills through play. They explore, interact, mimic, try new things, and gather new ideas. Positive feedback reinforces these skills and nurtures feelings of confidence and security.
As the child grows older, discussions of how they feel help the child learn words associated with those feelings, better understand and deal with their own feelings, and interpret the feelings of others. As children become more able to talk out their feelings, they transition away from acting out physically.
Give your kids a chance to socialize and improve their overall wellbeing!
Interacting with peers is an opportunity for children to practice communication, expand their vocabulary, and share ideas. Collaborating with others promotes creativity, exercises the imagination, and promotes cooperation. As the child socializes, he or she gains new skills through practice and trial and error. A safe, positive, and comfortable environment welcomes sharing and expression and builds self-esteem. This then leads to a confident child who is at ease with the world around them. A child’s interaction with others plays a crucial role in shaping their identity and developing important skills they’ll benefit from throughout their lives.
Clouds bright enough to see at night are not as hard to find as they once were.
These so-called night-shining clouds are still rare — rare enough that Matthew DeLand, who has been studying them for 11 years, has seen them only once. But his odds are increasing.
These mysterious clouds form between 50 and 53 miles (80 and 85 kilometers) up in the atmosphere, altitudes so high that they reflect light long after the sun has dropped below the horizon.
DeLand, an atmospheric scientist with NASA's Goddard Space Flight Center in Greenbelt, Md., has found that night-shining clouds — technically known as polar mesospheric or noctilucent clouds — are forming more frequently and becoming brighter. He has been observing the clouds in data from instruments that have been flown on satellites since 1978.
For reasons not fully understood, the clouds' brightness wiggles up and down in step with solar activity, with fewer clouds forming when the sun is most active. The biggest variability is in the far north.
Underlying the changes caused by the sun, however, is a trend toward brighter clouds. The upward trend in brightness, DeLand said, reveals subtle changes in the atmosphere that may be linked to greenhouse gases.
Night-shining clouds are extremely sensitive to changes in atmospheric water vapor and temperature. The clouds form only when temperatures drop below minus 200 degrees Fahrenheit (minus 130 degrees Celsius), when the scant amount of water high in the atmosphere freezes into ice clouds. This happens most often in far northern and southern latitudes (above 50 degrees) in the summer when, counter-intuitively, the mesosphere is coldest.
Changes in temperature or humidity in the mesosphere make the clouds brighter and more frequent. Colder temperatures allow more water to freeze, while an increase in water vapor allows more ice clouds to form. Increased water vapor also leads to the formation of larger ice particles that reflect more light.
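The extreme temperature sensitivity can be made concrete with a standard parameterization of saturation vapour pressure over ice (Murphy and Koop, 2005, stated to be valid down to about 110 K). The sketch below is a generic illustration, not part of DeLand's analysis: ice clouds can form only where the ambient water vapour partial pressure exceeds these vanishingly small saturation values.

```python
import math

# Saturation vapour pressure over ice (Pa), Murphy & Koop (2005)
# parameterization, valid down to roughly 110 K.

def p_sat_ice_pa(T_kelvin: float) -> float:
    return math.exp(9.550426
                    - 5723.265 / T_kelvin
                    + 3.53068 * math.log(T_kelvin)
                    - 0.00728332 * T_kelvin)

for T in (150.0, 143.0, 130.0):  # 143 K is about minus 130 degrees C
    print(f"T = {T:5.1f} K -> p_sat over ice ≈ {p_sat_ice_pa(T):.2e} Pa")
```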
The fact that night-shining clouds are getting brighter suggests that the mesosphere is getting colder and more humid, DeLand said. Increasing greenhouse gases in the atmosphere could account for both phenomena. In the mesosphere, carbon dioxide radiates heat into space, causing cooling. More methane, on the other hand, puts more water vapor into the atmosphere because sunlight breaks methane into water molecules at high altitudes.
So far, it's not clear which factor — water vapor or cooling — is causing polar mesospheric clouds to change. It's likely that both are contributing, DeLand said, but the question is the focus of current research.
How do we know about the Anglo Saxons? 12.4.16
Welcome back, I hope you had a wonderful Easter Holidays.
We have started our topic, ‘Vicious Vikings and Amazing Anglo-Saxons’, off in earnest. We have started by learning about just how we know so much about these people…or we think we know! We have become archaeologists and experimental historians this week. We have looked at the artefacts and remains that have been left behind by the Anglo Saxons and tried to work out what they were, what they would have been used for and what they tell us about how the Anglo Saxons lived and what they were like.
We also looked at the remains of Anglo Saxon houses and from these built upwards to work out what the houses looked like. We decided the holes in the ground that have been found were left behind by posts. Then we discussed what type of material we thought would be used to make walls and decided that thin sticks might have been used (we used art straws). However, there were gaps in our walls and we felt the rain and cold might get in so we decided that they may have used earth to stick onto the walls…we used clay. Look at our fantastic houses as they dry out; we now need to decide what to use on the roof!
A positive, purposeful and enthusiastic atmosphere
Kids will love these super cool science activities and science experiments. Come take a peek at over 19 science for kids ideas.
Free Science Printables for Kids:
- Fall Leaf Scavenger Hunt for identifying types of trees
- Zoo Fieldtrip Animal Reports
- Zoo Scavenger Hunts (Preschool-6th Grade)
- Plant Lifecycle Printables
- Habitat Adventure Game: Exploring Biomes & Taxonomy
- Lots more Science for Kids
Science Experiments for Kids
Get ready to start pinning up a storm because it’s Friday and time for our fun kids linky party. Come take a peek at last week’s features and then browse through dozens of new ideas this week.
Get to Know Tybee Island’s Fascinating History
Spanish explorers were searching for riches in the New World. In 1520, Lucas Vasquez de Ayllon laid claim to Tybee as part of Spain’s “La Florida”, an area that extended from the Bahamas to Nova Scotia.
In 1605, the French were drawn to Tybee in search of Sassafras roots, which were considered to be a miracle cure at that time. The Spanish fought the French in a naval battle just off shore of Tybee to regain control over the area.
For many decades, pirates visited the island in search of a safe haven and hiding place for treasure. Tybee and other remote islands were also a source of fresh water and game.
Superior French and British settlements eventually forced Spain to relinquish their claim on Tybee and other islands. In 1733, General James Oglethorpe led the settlement of this area, which was called Savannah because of the vast marshlands and tall grass. The new colony of Georgia was named in honor of King George of England.
Tybee was extremely important because of its location at the mouth of the Savannah River. In 1736, Oglethorpe had a lighthouse and small fort constructed here to ensure control of river access. Also in 1736, John Wesley, the “Father of Methodism”, said his first prayer on the American continent at Tybee.
During the Revolutionary War, Tybee was the staging area for French Admiral D’Estaing’s ill-fated 1779 “Siege of Savannah”, when combined multinational forces attempted to defeat the British held Savannah. During the War of 1812, the Tybee Island Lighthouse was used to signal Savannah of possible attack by the British. Though no such attack took place, a “Martello Tower” was constructed on Tybee to provide protection in guarding the Savannah River. On the western end of the island, an area known as a “Lazaretto”, a variation of an Italian word meaning ‘hospital for the contagious’, was established to quarantine slaves and other passengers who may have been carrying diseases. Tybee would be the final port of call for many of those quarantined there.
Tybee also played an important military role at the outbreak of the American Civil War. First, Confederates occupied the Island. In December of 1861, the Confederate forces, under orders from Robert E. Lee, withdrew to Fort Pulaski to defend Savannah and the Savannah River. Union forces commanded by Quincy Adams Gilmore took control of Tybee and constructed cannon batteries on the west side of the island facing Fort Pulaski about one mile away.
On April 11, 1862, those cannon batteries fired a new weapon called a “Rifled Cannon” at Fort Pulaski and changed forever the way the world protected coastal areas. Within 30 hours the rifled guns had such a devastating effect on the brick fort that it was surrendered. All forts like Pulaski suddenly became obsolete.
After the Civil War, Tybee became popular with Savannah residents who wanted to escape the city heat seeking the cool ocean breezes on the island. There were very few year-round residents before the 1870s, but by the 1890s there were more than 400 beach cottages and other buildings for summer residents. Clear, saltwater breezes were believed to be remedies for various ailments, including asthma and certain allergies. Steamships began carrying patients and tourists to Tybee Island just after the Civil War.
In 1887, the Central of Georgia Railroad completed a line to Tybee Island, opening the island to a wave of summer tourists. The railroad built the Tybrisa Pavilion in 1891, and by the end of the decade several hundred summer cottages dotted the island.
In 1897, Fort Screven was built on the north end of Tybee to provide a more modern coastal defense. Six poured-concrete, low-profile gun batteries and a minefield, along with hundreds of other military buildings, were constructed. Gun batteries were named to honor America’s war heroes. From 1897 to 1947, Fort Screven was an integral part of America’s Coastal Defense system. In 1947, the fort was closed and sold to the town of Tybee Island and tourism returned as a major part of Tybee’s history.
In the 1920s, U.S. Route 80 was completed, connecting Tybee Island via road with the mainland. The Tybrisa Pavilion became a popular stop for Big Band tours, and development pushed toward the island’s southern tip. By 1940, the island had four hotels, including the DeSoto Hotel and Hotel Tybee, and numerous smaller lodges. The Tybrisa Pavilion burned in 1967 and was replaced by the Tybee Pier and Pavilion in 1996. Cecil B. Day opened the first Days Inn on Tybee Island in 1970.
In 1961, Battery Garland, the former gun battery and magazine storehouse for a 12-inch long-range gun, became the Tybee Island Museum. Rooms which once stored 600 pound projectiles and 200 pound bags of gun powder now hold the collections and exhibits of over 400 years of Tybee Island history.
Understanding Population Assessments
Population assessments—also known as stock assessments—provide important information for marine resource management.
Population assessments are a key component of marine resource management. These assessments allow us to evaluate and report the status of managed fisheries, marine mammals, and endangered/threatened species under the authorities of the Magnuson-Stevens Fishery Conservation and Management Act, the Marine Mammal Protection Act, and the Endangered Species Act.
To conduct population assessments, our scientists use current data and advanced analytical techniques in an effort to provide the best scientific information available for conservation and management decisions.
Fish stock assessments often use catch, abundance, and biology data. These data feed into mathematical models that produce estimates of the fishery management factors needed for managers to make decisions about how to best regulate a fishery.
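As an illustration of the kind of mathematical model involved, here is a minimal surplus-production (Schaefer) sketch. It is a textbook toy with made-up parameters, not one of NOAA Fisheries' actual assessment models; real assessments fit such parameters to catch, abundance, and biology data.

```python
# Minimal Schaefer surplus-production model with hypothetical parameters.

r, K = 0.4, 100_000.0    # intrinsic growth rate, carrying capacity (tons)
biomass = 50_000.0       # starting stock biomass (tons)
annual_catch = 9_000.0   # candidate constant harvest (tons/year)

for year in range(1, 11):
    surplus = r * biomass * (1 - biomass / K)  # annual surplus production
    biomass = max(biomass + surplus - annual_catch, 0.0)
    print(f"year {year:2d}: biomass = {biomass:9.0f} tons")

# Under this model the maximum sustainable yield is r*K/4.
print(f"MSY = {r * K / 4:.0f} tons/year")
```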
Scientific review groups advise NOAA Fisheries and U.S. Fish and Wildlife Service on the status of marine mammal stocks within three areas: Alaska, the Atlantic (including the Gulf of Mexico), and the Pacific.
Stock assessments measure the impact of fishing on fish and shellfish stocks. Assessments also project harvest levels to maximize the number of fish that can be caught every year while preventing overfishing, protecting the marine ecosystem, and—where necessary—rebuilding depleted stocks.
These reports provide resource managers with information needed to manage marine mammal stocks protected under Marine Mammal Protection Act. These reports contain valuable information about geographic range, population size and trends, productivity rates, and estimates of mortality to design and implement appropriate conservation measures.
These assessments provide the foundation for evaluating the status of—and threats to—endangered marine mammals, fish, and sea turtles managed by NOAA Fisheries under the Endangered Species Act. Endangered species assessments include synthesis and analysis of scientific information on a species’ or stock’s population structure, life history characteristics, abundance, and threats—particularly those caused by human activities.
Stock assessments are the scientific foundation of successful and sustainable fishery harvest management. Stock assessments measure the impact of fishing on fish and shellfish stocks. They project harvest levels that maximize the number of fish that can be caught every year while preventing overfishing (removing too many fish), protecting the marine ecosystem, and where necessary, rebuilding overfished (depleted) stocks.
Each stock assessment produces a report that provides fishery managers with a scientific basis for setting sustainable harvest policies under the authority of the Magnuson-Stevens Fishery Conservation and Management Act. Under the Act, we partner with eight regional fishery management councils to manage nearly 500 fishery stocks. NOAA Fisheries provides scientific guidance to resource managers by addressing fundamental questions about stock status and sustainable harvest levels.
To learn more about the basics of the fisheries stock assessment process, read our Stock Assessments 101 series.
In addition to commercial and recreational fishery-dependent data sources, many stock assessments use fishery-independent data from surveys. We conduct sample surveys for fishes, invertebrates, and environmental conditions (e.g., temperature, salinity, dissolved oxygen) across the eight regions of the United States exclusive economic zone. We analyze abundance and biological data (e.g., species, length, stomach content) collected by these surveys in stock assessments.
Fishery-independent surveys are managed by our regional fisheries science centers and tracked nationally via the Fisheries-Independent Survey System. This national system characterizes our ocean observation activities and data collection during fishery-independent surveys and provides up-to-date information to fishery scientists, managers, and the public through flexible digital mapping and tabular reporting applications.
Along with our regional, state, and international partners, we conduct an average of 200 stock assessments annually. This includes more than 85 assessments of stocks included in the Fish Stock Sustainability Index, which is used to measure the performance of the most commercially and recreationally important fisheries.
We collect and store fish stock assessment results and related information in the Species Information System. A new public portal allows users to view and download stock assessment summaries and results. We also produce National Fish Assessment reports on a quarterly basis with up-to-date summaries on the status of NOAA Fisheries assessment activities for federally-managed fish stocks.
We provide the scientific information that supports the management of approximately 500 fish stocks. However, we only have data and resources to assess about 200 stocks each year. Stock assessment prioritization allows us to work with regional partners to decide which stocks are assessed each year.
Stock assessment prioritization considers stocks managed under federal fishery management plans as well as non-federal stocks that might also be assessed by our regional fisheries science centers.
We developed the prioritization process during several years of collaboration with partners. The result is a national framework for prioritizing stocks. Each region uses this framework to help determine assessment targets and priorities to best meet those targets.
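Purely as a sketch of what a prioritization score might look like, the example below ranks stocks by a weighted combination of importance and time since last assessment. The factor names, weights, and stocks are invented for illustration and are not the agency's actual framework.

```python
# Hypothetical prioritization scoring: all names, weights, and numbers
# are made up for illustration.

stocks = [
    # (name, fishery importance 0-1, years since last assessment)
    ("stock A", 0.9, 2),
    ("stock B", 0.4, 8),
    ("stock C", 0.7, 5),
]

def priority(importance: float, years_stale: int,
             w_importance: float = 0.6, w_stale: float = 0.4) -> float:
    # Staleness saturates at 10 years so very old assessments don't dominate.
    return w_importance * importance + w_stale * min(years_stale / 10, 1.0)

for name, imp, stale in sorted(stocks, key=lambda s: -priority(s[1], s[2])):
    print(f"{name}: priority score = {priority(imp, stale):.2f}")
```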
The Species Information System database is the central repository for regional and national fish stock information across NOAA Fisheries and includes stock assessment results and related information used to determine stock status. The database also has a public version, the Stock Status, Management, Assessment, and Resource Trends (Stock SMART) web portal that provides easy access for anyone to view and download summaries and results from stock assessments since 2005.
We are working to advance our stock assessment program to provide fishery managers and the public with more timely, accurate, and complete information on sustainable catch levels and fish stock status. We are updating our Stock Assessment Improvement Plan, first published in 2001, which provides a framework for moving toward a next generation stock assessment enterprise.
The vision of this next generation enterprise is to improve timeliness and efficiency of assessments while maintaining their utility to fishery management, prioritizing work relative to available resources, expanding the scope of stock assessments to be more holistic and ecosystem-linked, and using innovative modeling and data collection techniques. When finalized, the updated Stock Assessment Improvement Plan will better guide us toward our vision of resilient ecosystems, communities, and economies for future generations.
We also support the development of future and current stock assessment scientists. Programs focused on training the next generation of scientists in stock assessment and other relevant career fields include the QUEST Program and NOAA Fisheries-Sea Grant Fellowship Program. We also provide current stock assessment scientists with resources and opportunities for continued education and training in the evolving skills necessary for next generation stock assessments through in-person and online workshops.
We publish marine mammal stock assessment reports, which contain information about geographic range, population size and trends, productivity rates, and estimates of mortality. Marine mammals under our jurisdiction include whales, dolphins/porpoises, and seals/sea lions. The reports are prepared in consultation with one or more of three regional scientific review groups, and drafts are available for public review and comment.
Each year, we review reports for strategic stocks of marine mammals. For non-strategic stocks, we review reports every three years, or when new information becomes available. If the reviews show that the status of the stock has changed or can be assessed more accurately, we revise the report in consultation with the scientific review groups and after public review and comment.
The U.S. Fish and Wildlife Service also prepares stock assessment reports for marine mammals under their jurisdiction including manatees, polar bears, sea otters, and walruses. Some reports include information on multiple stocks.
NOAA Fisheries and U.S. Fish and Wildlife Service prepare reports only for marine mammal stocks that occur in waters under U.S. jurisdiction, as stated in the Marine Mammal Protection Act. We do not prepare reports for marine mammal stocks worldwide.
Data collection, analysis, and interpretation are conducted through marine mammal research programs at each of our regional fisheries science centers and by other researchers. Data are collected in a variety of methods, including aerial and ship-based surveys, acoustic monitoring, photo identification studies, biopsy sampling for genetic studies, and tagging.
The Marine Mammal Protection Act provides only general descriptions of the kinds of information that must be included in stock assessment reports. For example, the reports require a "minimum population estimate," which means we have "reasonable" assurance there are at least the estimated number in the population.
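Two of the quantitative terms used in these reports can be written down concretely. Following the standard formulas in the published guidelines (Wade, 1998), the minimum population estimate N_min is approximately the 20th percentile of a log-normally distributed abundance estimate, and the potential biological removal (PBR) level combines N_min with half the maximum net productivity rate and a recovery factor. The abundance estimate and CV below are hypothetical.

```python
import math

# N_min: ~20th percentile of a log-normal abundance estimate (Wade 1998).
def n_min(n_best: float, cv: float) -> float:
    return n_best / math.exp(0.842 * math.sqrt(math.log(1 + cv**2)))

# PBR = N_min * (1/2 R_max) * F_r. The default R_max of 0.04 is the standard
# cetacean value; the recovery factor F_r ranges from 0.1 to 1.0.
def pbr(n_best: float, cv: float, r_max: float = 0.04, f_r: float = 0.5) -> float:
    return n_min(n_best, cv) * 0.5 * r_max * f_r

print(f"N_min = {n_min(10_000, 0.3):.0f} animals")
print(f"PBR   = {pbr(10_000, 0.3):.1f} animals per year")
```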
Each marine mammal stock assessment report includes information on the stock’s geographic range, population size and trends (including a minimum population estimate), productivity rates, and estimates of human-caused mortality.
The first stock assessment reports prepared in 1995 included about 165 reports on marine mammal stocks in U.S. waters.
The number of reports may vary from year to year because stock identification is subject to change. Marine mammal stocks may be added or removed from the regional list of compiled reports due to changes in distribution.
We use marine mammal stock assessment reports to evaluate stock status and to design and implement appropriate conservation measures.
For marine mammal stock assessments, the Marine Mammal Protection Act provides only general guidance on assessment methods and on the content of the reports. To include values for the required elements in the reports, NOAA Fisheries and U.S. Fish and Wildlife Service translated qualitative concepts into quantitative terms. After building a scientific foundation through simulation modeling, we proposed guidelines for selecting specific values to include in the reports. The guidelines received review and comments by the public and scientific review groups.
View the Guidelines for Assessing Marine Mammal Stocks for information on the background, decisions, and default values that go into developing the stock assessment reports.
Additionally, we work with partners to develop and evaluate analytical products and applications to improve population assessments.
To disseminate results and increase national coordination and collaboration in conducting assessments, we support and organize protected species assessment workshops biennially. Other workshops address specific technical topics and advance various protected species science initiatives with direct relevance to management actions. Similarly, various dedicated working groups encourage dissemination of best practices and latest advances in the field.
Population assessments provide the foundation for evaluating the status of and threats to marine mammals, sea turtles, and fish protected under the Endangered Species Act and to plan and implement species recovery and conservation actions.
Marine resource managers require accurate and precise information on a species or stock’s population structure, life history characteristics and vital rates, abundance, and threats (particularly those caused by human activities). This information informs agency decisions related to species recovery planning and other conservation actions.
Information included in endangered species population assessments is vital to how we support and advise state and tribal-managed coastal areas. It also allows us to provide scientific and policy leadership to regional and international bodies such as multi-state marine fishery commissions, U.S. fishery management councils, international fishery management organizations, and the Convention on International Trade in Endangered Species of Wild Fauna and Flora.
We provide funding support to our agency scientists as well as university, federal, and state partners to improve sea turtle population assessments through a competitive, peer-reviewed process. Funds are awarded based on relevance to management concerns and scientific research priorities.
For assessing acoustic impacts on endangered species, we also provide funding through a competitive, peer-reviewed process to support research conducted by NOAA scientists and partners.
We have established the National Protected Species Toolbox Initiative to support the development of analytical products and applications that aim to investigate impacts and consequences of human and environmental disturbance on endangered and threatened marine life and other protected species.
The Venus Express mission was selected by ESA in 2002 following a call for ideas concerning the potential use of the Mars Express satellite spare model. Spare models and parts from instruments developed for Mars Express and Rosetta were also available for Venus Express.
The mission was designed to try and answer the following questions:
- What are the mechanisms and dominant forces responsible for the super-rotation of the atmosphere?
- What are the fundamental processes responsible for the atmosphere’s general circulation?
- What is the current state of equilibrium for water vapour, and what was it in the past?
- What impact has the greenhouse effect had on planetary evolution in the past, and what role will it play in the future?
- Is there any tectonic or volcanic activity on Venus?
- And finally, the fundamental question: why is Venus so different from Earth, given their similarities in size and composition?
Here is a more detailed list of the primary science objectives for this orbital mission:
- General study of the atmospheric temperature profile, from the surface up to an altitude of 200 km;
- Detailed study of atmospheric circulation (super-rotation, thermospheric winds) and waves, from 50 to 150 km above the surface;
- Complete study of chemical composition (H2O, SO2, SO, COS, HCl, HF, etc.) and evolution of the lower atmosphere (0-40 km) and of the atmosphere within and above the clouds (55-150 km);
- Study of the structure, clouds, and distribution of the unknown UV radiation absorber; study of the clouds’ formation and evolution processes;
- Study of the energy balance and the greenhouse effect;
- Search for atmospheric lightning;
- Study of the plasma environment (energetic neutral atoms, ions, and electrons), escape processes and solar wind interaction;
- Measurement of the magnetic field resulting from the interaction between the solar wind and the ionosphere;
- Infrared mapping of the surface, measuring infrared emission variations;
- Search for volcanic and seismic activity and study of volcanic activity’s impact on the formation of the current climate;
- Study of the Sun’s corona by radio sounding during Venus’s conjunctions.
A 153-DAY JOURNEY TO VENUS
Venus Express was successfully launched on 9th November 2005, from the Baikonur space centre in Kazakhstan. The launch window was open from 26th October to 25th November 2005. The Soyuz launcher placed the Fregat upper stage (on which the Venus Express spacecraft was installed) on a sub-orbital trajectory and then on an escape trajectory before the spacecraft finally separated from the Fregat stage.
During its 153-day journey to Venus, the satellite was monitored by the New Norcia ground station in Australia and the Cebreros ground station in Spain, the latter of which was operational for the first time for the Venus Express mission. The spacecraft received instructions to adjust its trajectory using its boosters. At least one trajectory adjustment was planned 60 days after launch. Once it approached Venus, the spacecraft used its primary engine to slow down and let itself be captured by the planet’s gravitational pull.
Insertion into orbit
Venus Express was initially inserted into a very elliptical polar orbit with a pericentre of about 250 km and a 5-day orbital period. Auxiliary boosters were then used to reduce the apocentre and reach operational orbit.
The operational orbit was determined so that every longitude could be covered during one Venus sidereal day (243 Earth days). Venus’s sidereal revolution period (a Venus year) is 225 days, shorter than the planet’s (retrograde) rotation period. The mission’s nominal lifetime was set at 2 Venus sidereal days. Planet coverage was obtained at different spatial resolutions. Unlike Mars Express, which aimed to cover the entire surface of Mars, Venus Express didn’t cover the entire surface in high resolution but instead studied the spatial and temporal variability of the atmosphere and the surface on different scales. The chosen operational orbit was an elliptical polar orbit, with a pericentre of about 250 km, an apocentre of about 66,000 km, and an orbital period of 1 Earth day to facilitate ground monitoring operations.
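As a quick sanity check on those orbital figures, Kepler's third law gives the period of a 250 km by 66,000 km orbit around Venus. The sketch below uses standard values for Venus's gravitational parameter and radius, which are assumptions not taken from this article:

```python
import math

GM_VENUS = 3.2486e14   # m^3/s^2, standard gravitational parameter (assumed)
R_VENUS = 6_051.8e3    # m, mean radius of Venus (assumed)

# Pericentre and apocentre radii of the operational orbit.
r_peri = R_VENUS + 250e3
r_apo = R_VENUS + 66_000e3
a = (r_peri + r_apo) / 2  # semi-major axis of the ellipse

# Kepler's third law: T = 2 * pi * sqrt(a^3 / GM)
period_s = 2 * math.pi * math.sqrt(a**3 / GM_VENUS)
print(f"Orbital period: {period_s / 3600:.1f} h")  # ~23.7 h, about 1 Earth day
```

The result comes out at roughly one Earth day, consistent with the orbit described above.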
When a distant planet moves in front of its star as seen from Earth, the slight drop in starlight is often enough to allow sensitive instruments to make a detection. We call the degree to which the star’s light is diminished the ‘transit depth,’ and even with transiting gas giants, the figure is usually on the order of one percent. What we’re getting at is the ratio of the area of the planet to the area of the star behind it. The transit depth of the ‘hot Jupiter’ HD 189733b is unusually large at three percent. Obviously both the size of the planet and the size of the star come into play.
In the case of the super-Earth GJ3470b, the primary star is relatively nearby and is also an M-dwarf, allowing greater transit depth and propelling a series of investigations from the ground. GJ3470b orbits its star at 0.036 AU, completing its orbit in a mere 3.3 days. The new work, led by Akihiko Fukui and Norio Narita (NAOJ), along with Kenji Kuroda (University of Tokyo), looks at the atmosphere of a planet with some fourteen times the mass of the Earth.
The team calculates the radius of GJ3470b at 4.3 times that of the Earth, a figure about 10 percent smaller than previously reported. The team’s calculations also indicate that the planet possesses a hydrogen-rich envelope of considerable mass. Says Fukui:
“Suppose the atmosphere consists of hydrogen and helium, the mass of the atmosphere would be 5 to 20% of the total mass of the planet. Comparing that to the fact that the mass of Earth’s atmosphere is about one ten-thousandth of a percent (0.0001%) of the total mass of the Earth, this planet has a considerably thick atmosphere.”
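As a back-of-the-envelope check on these numbers, the transit depth is just the square of the planet-to-star radius ratio described earlier. The host-star radius below is an assumed round value for a small M-dwarf, not a figure from the article or the paper:

```python
R_EARTH_KM = 6_371
R_SUN_KM = 696_000

r_planet = 4.3 * R_EARTH_KM   # GJ3470b's radius, from the text
r_star = 0.5 * R_SUN_KM       # assumed radius for the M-dwarf host

depth = (r_planet / r_star) ** 2  # transit depth = area ratio
print(f"Transit depth: {depth:.2%}")  # ~0.62%
```

A sub-one-percent depth like this is why a nearby, small host star matters so much for ground-based follow-up.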
Using the Near-Infrared Imager/Spectrograph (ISLE) together with the Multicolor Imaging Telescopes for Survey and Monstrous Explosions (MITSuME) in Okayama, the team looked at the lightcurve of the transit in four colors from visible to near-infrared. This news release from the National Astronomical Observatory of Japan provides the image below, showing estimates of planetary radius in each of the colors observed. The radius derived from near infrared turns out to be 6 percent less than that derived from visible light.
Image: Radius of each color measured (observation wavelength) of GJ3470b (shown as planet-to-star radius ratio). Credit: NAOJ.
From the paper:
A plausible explanation for the differences is that the planetary atmospheric opacity varies with wavelength due to absorption and/or scattering by atmospheric molecules. Although the significance of the observed Rp / Rs [the planet-to-star radius ratio] variations is low, if confirmed, this fact would suggest that GJ3470b does not have a thick cloud layer in the atmosphere. This property would offer a wealth of opportunity for future transmission-spectroscopic observations of this planet to search for certain molecular features, such as H2O, CH4, and CO, without being prevented by clouds.
In other words, the lack of a substantial cloud cover should make it easier to find traces of water or methane in the atmosphere that could give us clues as to how the planet formed — thick clouds would have masked the differences in radii by color that the researchers detected. The team hopes to conduct further observations of the planet using the 8.2-meter optical-infrared Subaru telescope on Mauna Kea in Hawaii, looking for further ways to characterize the world’s atmosphere and get a clue as to whether it formed in its present position or further out in the system, later migrating inward.
The paper is Fukui et al., “Optical-to-Near-Infrared Simultaneous Observations for the Hot Uranus GJ3470b: A Hint of a Cloud-Free Atmosphere,” in The Astrophysical Journal, Vol. 770 (2013), p. 95 ff. (abstract).
Some planets are tilted so their North and South Poles are not straight up and down. Earth is tilted a bit - about 23°. Uranus is tilted a lot - more than 90°. Mercury, on the other hand, is hardly tilted at all. Mercury's tilt is less than 1/30th of one degree! That's a lot smaller than Jupiter's tilt, which comes in second place. Jupiter is tilted just a bit more than 3°.
We have seasons on Earth because of our planet's 23° tilt. Most other planets are also tilted. They have seasons, too. Mercury does not have seasons because it isn't tilted. If you were at one of Mercury's poles, you would see a strange sight. The Sun would always be at the horizon, like it was rising or setting. The Sun would look bigger, too, because Mercury is closer to the Sun than Earth is. The big Sun would seem to move around the horizon, going up and down a little... but it would never rise all the way and it would never set all the way.
Mercury has lots of meteor craters. Some of the craters are near Mercury's poles. If you were inside a crater near the pole, you might never see the Sun. The rim of the crater would be like a hill all the way around you. The Sun might never rise over the top of the rim-hill.
Scientists think there might be some craters like that... where the Sun never shines on the bottom of the crater. They even think some of the craters might have ice in them. That seems really strange since Mercury is so near the Sun and so hot. The temperature is sometimes as high as 452° C (845° F) on Mercury. Still, scientists have taken radar images that may show ice in craters near the poles.
Mercury has a magnetic field. Its field is weak. Earth's magnetic field is about 100 times stronger. Earth's magnetic field is tilted, and so is Mercury's. That means the magnetic poles are not in the same place as the geographic poles.
Interpreting the Data
Scatterplots illustrate the relationship between two variables, such as achievement and growth. Specifically, scatterplots enable you to visually examine the relationship between the variables and answer the question, "As variable A changes, what happens to variable B?" In the case of achievement and growth, you might ask, "As the average achievement of students in a school increases, does the average growth also increase?" In other words, is there a relationship between achievement and growth?
When data points on a scatterplot are distributed somewhat symmetrically along a horizontal or vertical line, there is little to no relationship between the selected variables.
On the other hand, a more diagonal pattern indicates that the variables are related. The closer the pattern is to a diagonal line, the stronger the relationship.
Variables are positively correlated if they move in the same direction, so that one increases as the other increases. A good example of a positive relationship is that of temperature and the sale of ice cream. As the temperature rises, ice cream sales rise with it.
Variables are negatively correlated if they move in opposition to each other. In other words, when one increases, the other decreases. For example, as the temperature goes up, sales of hot chocolate go down.
When interpreting the relationship between two variables on a scatterplot, it's important to remember that correlation does not prove causation. If variable B increases as variable A increases, that does not necessarily mean that changes in variable A caused the changes in variable B. Also, if the graph contains only a small number of data points, a correlation might be suggested that does not exist in a larger set of data.
For example, if we were to create a scatterplot of achievement vs. growth in fifth-grade math and we included only a few schools, the relative achievement and growth of those schools would determine the correlation suggested by the graph. We might mistakenly conclude that achievement and growth are positively correlated if the selected schools serving a lower-achieving population of students haven't had great success in helping those students make growth, while the other schools serve a higher-achieving population of students and have had great success with student growth. In other words, we might come to believe that schools serving lower-achieving students cannot achieve high growth.
However, comparing only a few schools might not offer a fair representation of the true relationship between achievement and growth. If we added all the schools in the district to the scatterplot, we would see little to no relationship between the two variables. With that in mind, it's important to be careful when drawing conclusions from small amounts of data on a scatterplot.
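A minimal numerical sketch of that pitfall, using synthetic data rather than real school results: in the full set the two variables are uncorrelated, but a handful of cherry-picked points can suggest a strong relationship.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic district: achievement and growth generated independently,
# so the true correlation is near zero.
achievement = rng.normal(50, 10, size=200)
growth = rng.normal(0, 2, size=200)

def pearson_r(x, y):
    """Pearson correlation coefficient between two arrays."""
    return np.corrcoef(x, y)[0, 1]

print(f"All 200 schools:    r = {pearson_r(achievement, growth):+.2f}")

# Cherry-pick six schools: the lowest and highest on a combined score,
# mimicking a comparison of a few low-achieving/low-growth schools
# with a few high-achieving/high-growth ones.
idx = np.argsort(achievement + 5 * growth)[[0, 1, 2, -3, -2, -1]]
print(f"Six picked schools: r = {pearson_r(achievement[idx], growth[idx]):+.2f}")
```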
How Music Helps Build Language
Right from the beginning, music has been shown not only to help children with their rhythm and motor skills but also to build language. From birth, your little one is looking for ways to communicate with you. When they learn the building blocks of language, their ability to communicate their needs and wants is essential not only to their development but also to a parent’s sanity. The tantrums lessen when they are able to communicate better. As a parent, it is important to look for opportunities to foster language development. Music helps us do just that.
What makes music helpful in building language is that it stimulates multiple areas of the brain, which is great for language development. An example of this is Melodic Intonation Therapy, which is used to help severe stroke patients speak again. Adding a melody to a spoken phrase helps stimulate the right side of the brain, which is helpful if the left side of the brain is damaged, as language is typically a left-brain function. In children who do not have any brain damage, music engages both hemispheres of the brain. Adding movements such as dancing, playing instruments, or tapping engages the brain further by stimulating the frontal lobes.
Singing songs to your child, especially ones they are familiar with, is not only comforting; it also helps them start to distinguish the similarities and differences between sounds, an important building block for language learning as well as pre-reading skills. Music has repetition, which helps children learn concepts more easily. Singing also helps break words down into sounds and syllables, which helps your children learn to pronounce words more clearly.
You don’t have to be musically trained to use music to help your child’s language development. You can easily incorporate music by adding a simple melody to words and phrases. Singing slows down the words while your child learns each syllable, and raising the pitch can stress the syllables that carry the emphasis when the word is spoken. If there is a word your child has trouble pronouncing, add a rhythm or a melody.
A fun activity to do with your little one is to make your own instruments. When you are done, you can use your new instruments to practice your songs and help repeat words in the melody of your favorite songs. Dancing or adding streamers helps with gross motor development and adds visual stimulation to the musical play. Incorporate fun musical songs that help them remember routines and concepts.
If you Google image search "black hole", you’re going to be swimming in incredibly beautiful, shocking, and awe-inspiring images that only hint at the magnitude of these unfathomable cosmic giants.
But the sad truth is they're all just pretty pictures representing our best assumptions about the nature of black holes, because even light itself can’t escape once it’s fallen past the event horizon. Add that to the fact that black holes are incredibly far away from Earth, and they’re almost entirely invisible to us.
Fortunately, physicists aren’t the kind to give up simply because they’re lacking a few photons - a team from MIT and Harvard has developed a new algorithm that could help them produce the first actual image of a black hole.
"A black hole is very, very far away and very compact," lead researcher and graduate student at MIT, Katie Bouman, explains. "[Taking a picture of the black hole in the centre of the Milky Way galaxy is] equivalent to taking an image of a grapefruit on the Moon, but with a radio telescope."
"To image something this small means that we would need a telescope with a 10,000-kilometre diameter, which is not practical, because the diameter of Earth is not even 13,000 kilometres."
Just take a second to let that sink in. We need a telescope bigger than EARTH to see these things the way we see other planets and stars... It’s clearly time for Plan B.
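Bouman's 10,000-kilometre figure is the diffraction limit at work. As a rough sketch (the observing wavelength and the apparent size of the black hole's shadow below are assumed illustrative values, not numbers from this article):

```python
# Angular resolution of a dish: theta ~ 1.22 * wavelength / diameter.
wavelength_m = 1.3e-3            # assumed millimetre-wave observing wavelength
shadow_rad = 50e-6 / 206_265     # assumed ~50 microarcsecond shadow, in radians

required_diameter_m = 1.22 * wavelength_m / shadow_rad
print(f"Required dish diameter: {required_diameter_m / 1000:,.0f} km")
# ~6,500 km, hence the need to link telescopes across the whole planet.
```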
Plan B is an algorithm that essentially stitches together data collected from radio telescopes positioned all around the globe to create a cohesive image of a black hole - a project known as the Event Horizon Telescope.
Why radio telescopes? Well, we know that black holes don’t emit visible light like stars and asteroids do, but we could use radio wave signals to get an idea of what black holes look like, and these signals come with the added bonus of not getting jumbled up in space dust.
"Radio wavelengths come with a lot of advantages," says Bouman. "Just like how radio frequencies will go through walls, they pierce through galactic dust. We would never be able to see into the centre of our galaxy in visible wavelengths because there’s too much stuff in between."
The drawback of using radio telescopes is that, because they have such long wavelengths, they require enormous antenna dishes, as Larry Hardesty explains for MIT News:
"The largest single radio-telescope dish in the world has a diameter of 1,000 feet [304 metres], but an image it produced of the Moon, for example, would be blurrier than the image seen through an ordinary backyard optical telescope."
Instead of building a radio telescope the size of Earth to see a black hole, we’re going to turn Earth itself into a giant radio telescope dish, by linking up as many radio telescopes as we can, and then filling in the knowledge gaps with a lot of very clever maths. Science, ILU.
Bouman and her team have so far gotten six observatories from around the world to sign on to the Event Horizon Telescope project, and they expect to get confirmations from more in the coming weeks.
The plan is for these telescopes to train in on the black hole at the centre of our Milky Way galaxy, called Sagittarius A*, filtering out as much 'noise' as possible. Using the data they collect, Bouman and her team will start to construct the first direct image of a black hole.
At the same time, Bouman’s new algorithm, called CHIRP (Continuous High-resolution Image Reconstruction using Patch priors), will be applied to the data being shared across the telescopes, making an informed 'guess' in places that the telescopes can’t access.
It does this using the same technique physicists have used to detect black holes in our galaxy, and beyond.
"To detect black holes today, computer-powered observatories scan for and record bright points of light that are emitted as a black hole, say, eats a star's plasma," Sarah Kramer explains for Tech Insider.
"The new model will take such data about known black holes to identify common patterns among the enigmatic objects. Then the software will ‘learn' those patterns and use them to predict what appears in areas we can't see using radio telescopes."
The team is expected to present their plan on June 27, at the Conference on Computer Vision and Pattern Recognition in Las Vegas. From that point, other researchers will have the chance to pore over their calculations and figure out how sound it all is. If their assumptions are correct, we just might see the first ever direct image of a black hole some time next year.
We seriously cannot wait.
HOW TO IMPROVE OUR RECYCLING PROCESS
Recycling is a process to change (waste) materials into new products to prevent waste of potentially useful materials, reduce the consumption of fresh raw materials, reduce energy usage, reduce air pollution (from incineration) and water pollution (from landfilling) by reducing the need for "conventional" waste disposal, and lower greenhouse gas emissions as compared to plastic production. Recycling is a key component of modern waste reduction and is the third component of the "Reduce, Reuse and Recycle" waste hierarchy.
Recyclable materials include many kinds of glass, paper, metal, plastic, textiles, and electronics. The composting or other reuse of biodegradable waste—such as food or garden waste—is also considered recycling. Materials to be recycled are either brought to a collection center or picked up from the curbside, then sorted, cleaned, and reprocessed into new materials bound for manufacturing.
Origins of Slavery in Ancient Greece
When most of us think of Ancient Greece, certain images come to mind. We think of the Golden Age of Greece and the way the Greek culture has influenced Western Civilization. Without Ancient Greece, we wouldn’t have certain concepts, such as democracy, theater, and even mathematics. We admire the architecture. We listen to grand stories about the mythological heroes and are generally fascinated by all that the Ancient Greeks accomplished.
However, there is a side of the Ancient Greek culture that isn’t often talked about. Not all of the people living in Ancient Greece were “free” or were actually considered citizens. There was a large slave population present in all the Ancient Greek city-states. For example, from around 450 B.C. to 320 B.C., there were around 100,000 slaves living in the city-state of Attica, and the slaves were an essential part of the economy of Greece.
When learning about slavery in Ancient Greece, it’s a good idea to start at the beginning. Here’s a look at the origins of slavery in Ancient Greece:
Slavery Had an Early Beginning
There is some evidence that slavery was present starting with the Minoan civilization, located where modern-day Crete is today. The Minoans are the earliest known people in Greece, and evidence of their greatness can still be found on the island at the grand palaces located at the archaeological sites in Knossos in modern-day Heraklion and other Minoan archaeological sites, such as Malia, located elsewhere on the island. The Minoans were known for their peaceful nature and have been shown to engage in trade with other civilizations, such as the Egyptians. There is also some evidence that the Minoans used slaves to help out. However, not much is known about how many slaves they used or whether slavery had a large economic impact.
Mycenaean Civilization Also Used Slaves
Once the Minoan civilization collapsed, the Mycenaeans took over as being the prominent civilization in Ancient Greece. There is some debate as to how the Mycenaean Civilization began and the Minoan Civilization ended. However, one thing historians do understand is that slavery started to be a common practice in Ancient Greece during the time when Mycenae thrived. The reason why historians know this is that archaeologists unearthed stone tablets in Pylos that described parts of the economy in Mycenae. One of the economic categories was listed as “slaves”, which usually referred to domestic slaves and also the “slaves of gods”, which referred to those people who were in service to the gods, usually Poseidon. Both types of slaves were common in Mycenae. After that, not much is known about the nature of slavery during the Greek Dark Ages, which came about after the Mycenaean Civilization collapsed.
As you can see, slavery started early on in Ancient Greece. Through the centuries, the role of slavery expanded and the slaves became increasingly important to the culture. In fact, the Ancient Spartans even enslaved an entire ethnic group known as the Helots. Here, the slave population actually outnumbered the Spartan population, and much of how the society was structured depended on keeping the Helots from revolting.
Publication – Science Daily
Most of the world’s population will be subject to degraded air quality in 2050 if human-made emissions continue as usual, according to Science Daily.
If emissions do continue as usual, the average world citizen will experience air pollution similar to that of today’s average East Asian citizen. These conclusions are those of a study published August 1 in Atmospheric Chemistry and Physics, an Open Access journal of the European Geosciences Union (EGU).
“Strong actions and further effective legislation are essential to avoid the drastic deterioration of air quality, which can have severe effects on human health,” concludes the team of scientists, led by Andrea Pozzer of the Abdus Salam International Centre for Theoretical Physics in Italy (now at the Max Planck Institute of Chemistry in Germany), in the new paper.
According to the article, “The researchers studied the impact of human-made emissions on air quality, assuming past emission trends continue and no additional climate change and air pollution reduction measures (beyond what is in place since 2005) are implemented. They noted, while pessimistic, the global emissions trends indicate such continuation.”
At present, urban outdoor air pollution causes 1.3 million estimated deaths per year worldwide, according to the World Health Organization.
Publication – Environmental News Network
The drought that is parching the Midwest this year has led the “dead zone” in the Gulf of Mexico, a patch of oxygen-starved water at the mouth of the Mississippi River, to be the fourth smallest ever recorded by NOAA. It is still larger than the state of Delaware at 2,889 square miles (7,482 square km), according to ENN.
“The smaller area was expected because of drought conditions and the fact that nutrient output into the Gulf this spring approached near the 80-year record low,” Nancy Rabalais, executive director of the Louisiana Universities Marine Consortium who led the survey cruise, said.
Dry conditions on land lead to a smaller dead-zone because less nutrient-rich river water is washed out to sea during a drought.
“The Mississippi and its tributaries pick up tons of eroded soil, fertilizers, animal and human wastes and other substances as it flows through the American heartland. Algae in the Gulf of Mexico feast upon that flow of foodstuffs and become massive blooms. But the lifespan of the phytoplankton is pretty short and soon the dead plantlife is quickly consumed by bacteria that suck the oxygen out of the water, leaving none for fish and other aquatic life,” according to the article.
The lack of rains and flooding this year has resulted in the Mississippi River giving the algae less food, which leaves more oxygen for the fish, according to ENN. Hence, as the drought withers this year’s corn crop, the low flow of the river may help the fish harvest.
Publication – Science Daily
Earth’s oceans, forests and other ecosystems continue to soak up about half the carbon dioxide emitted into the atmosphere by human activities — even as those emissions have increased — according to a study by University of Colorado and NOAA scientists published August 1 in Nature.
The scientists analyzed 50 years of global carbon dioxide (CO2) measurements and found that the processes by which the planet’s oceans and ecosystems absorb the greenhouse gas are not yet at capacity, according to the article.
“Globally, these carbon dioxide ‘sinks’ have roughly kept pace with emissions from human activities, continuing to draw about half of the emitted CO2 back out of the atmosphere. However, we do not expect this to continue indefinitely,” Pieter Tans, a climate researcher with NOAA’s Earth System Research Laboratory in Boulder, Colo., and co-author of the study, told Science Daily.
According to the article, “Carbon dioxide is emitted into the atmosphere mainly by fossil fuel combustion but also by forest fires and some natural processes. The gas can also be pulled out of the atmosphere into the tissues of growing plants or absorbed by the waters of Earth’s oceans. A series of recent studies suggested that natural sinks of carbon dioxide might no longer be keeping up with the increasing rate of emissions. If that were to happen, it would cause a faster-than-expected rise in atmospheric carbon dioxide and projected climate change impacts.”
Publication – Science Daily
The mechanisms explaining species-specific responses to changes in temperature and water availability are most likely much more complex than many simple models of plant response to warming climates suggest, according to researchers at Texas Tech University and the United States Geological Survey.
After reexamining an upslope vegetation shift reported in a high-profile 2008 study published in Proceedings of the National Academy of Sciences, the researchers refuted the findings that plants are moving upslope in California due to climate warming.
In a study published in PLoS ONE, Texas Tech ecologist Dylan Schwilk and USGS fire ecologist Jon Keeley reexamined a climate-driven vegetation shift at the Golden State’s Santa Rosa Mountains by studying one particular desert shrub. Schwilk told Science Daily that he was initially suspicious of the 2008 findings after they suggested that a shrub called desert ceanothus was one of nine that were moving upslope because of global climate change.
“I want to be clear that I’m not saying climate change isn’t happening or having effects,” Schwilk told Science Daily. “I study it all the time. But we’re trying to have people be more explicit about describing the mechanisms and causes of plant shifts, because I suspect there may be a bias toward automatically assuming climate change as the reason.”
Publication – EurekaAlert!
The planet’s changing climate is devastating communities in Africa through droughts, floods and myriad other disasters, according to the article.
Researchers from the Climate Change and African Political Stability (CCAPS) program developed an online mapping tool that analyzes how climate and other forces interact to threaten the security of African communities, using detailed regional climate models and geographic information systems.
“The first goal was to look at whether we could more effectively identify what were the causes and locations of vulnerability in Africa, not just climate, but other kinds of vulnerability,” Francis J. Gavin, professor of international affairs and director of the Strauss Center, told EurekaAlert!.
“In the beginning these all began as related, but not intimately connected, topics” Gavin said, “and one of the really impressive things about the project is how all these different streams have come together.”
Africa is particularly vulnerable to the effects of climate change due to its reliance on rain-fed agriculture and the inability of many of its governments to help communities in times of need, according to the article.
Have you seen breaking climate change news or discussion that should be included in our next “Roundup?” Let us know!
THE ELEMENTARY YEARS OVERVIEW
At this stage in their development, all children have inherent creativity. These creative interests are enlivened when academics are delivered through carefully structured lessons enriched with choral singing, storytelling, rhythmic activities and dynamic discussion.
Math concepts are introduced with various manipulatives and games while questions are explored on the chalkboard next to intricate drawings created by the teacher to stimulate imaginative problem solving. The teacher balances artistic expression with intellectual and emotional development by engaging the child’s heart and hands, as well as the mind.
During the elementary school years, the child’s consciousness gradually shifts from a pictorial to a more conceptual focus. Guided by their teacher, students create their own main lesson books. These books involve drawing, verse, and content from science to literature, and provide the student an opportunity to be inspired by the interaction of subjects.
Each school day begins with a two-hour main lesson, which focuses on one primary topic for several weeks. Students are encouraged to joyfully submerge themselves in their lessons, studying language and literature, science and mathematics, geography and history in alternating, integrated blocks. During the rest of the day, shorter lessons concentrate on specific subjects such as music, two foreign languages, the visual arts, movement and games, and handwork or woodwork, as well as further practice of academic skills.
Dietary cholesterol refers to cholesterol obtained from foods in the human diet. According to the current scientific consensus from evidence-based medicine, dietary cholesterol does not significantly increase the total blood cholesterol level or increase the risk of cardiovascular disease in most people.
Vegans and opponents of the egg industry continue to promote the view that dietary cholesterol causes a significant increase in LDL cholesterol, thus increasing cardiovascular disease (CVD) risk and all cause mortality. However, high quality research on egg consumption analyzed in recent systematic and umbrella reviews does not support this view.
The idea that dietary cholesterol significantly increases the total blood cholesterol level was first promoted by American health authorities in the 1960s and was debated for many years. The American Heart Association removed dietary restriction of cholesterol in 2013 and the Dietary Guidelines Advisory Committee removed the restriction in 2015.
Dietary cholesterol refers to the cholesterol found in food. It is found only in animal products. It is not necessary to get cholesterol from food as the human body makes more cholesterol than it needs.
The mainstream medical consensus is that cholesterol in food has only a small effect on the bad (LDL) cholesterol in your blood. Saturated and trans fats in food cause a much greater increase in LDL cholesterol and are a risk factor for heart disease.
The United States Department of Agriculture, Dietary Guidelines For Americans 2015-2020, recommends limiting the intake of saturated fats to less than 10 percent of calories per day but does not set a specific limit for consumption of dietary cholesterol. They note that the removal of the specific limit "does not suggest that dietary cholesterol is no longer important to consider when building healthy eating patterns. As recommended by the IOM, individuals should eat as little dietary cholesterol as possible while consuming a healthy eating pattern". Some foods that are rich in cholesterol are also high in saturated fat, so this may confuse people. The British Dietetic Association note that:
Although some foods contain cholesterol – such as shellfish, eggs and offal – this has much less effect on our blood cholesterol than the cholesterol we make in our body ourselves in response to a high saturated fat diet. Many cholesterol containing foods are relatively low in saturated fat and contain other useful vitamins and minerals. Only cut down on these foods if you have been advised to by your doctor or a dietitian. Cutting down on saturated fat in the diet is much more helpful than reducing dietary cholesterol.
Many medical studies agree that eggs do not increase CVD risk in the general population. The American College of Cardiology have noted that observational studies from 1980-2012 (involving more than 250,000 subjects) have not supported an association between dietary cholesterol and CVD risk.
A 2013 meta-analysis found no association between egg consumption and heart disease or stroke. Another meta-analysis from 2013, suggested that egg consumption is not associated with the risk of CVD in the general population.
A 2016 study found that eating one egg every day is not associated with an elevated risk of coronary artery disease. The study concluded that "egg or cholesterol intakes were not associated with increased CAD risk, even in ApoE4 carriers (i.e., in highly susceptible individuals)."
A 2018 review noted that:
Current studies have tended to show that the consumption of eggs is not a risk factor of CVD in healthy people. However, people who are at high risk of CVD such as those with diabetes or hypertension need to have caution with dietary cholesterol intake, especially egg intake. Also, some people seem to be more sensitive to dietary cholesterol whose blood cholesterol level is highly correlated to dietary intake. Therefore, even though the recommendation of restricting cholesterol and egg consumption in AHA and DGAC has been eliminated, we still need to have caution with them based on the physiological status of people.
Vegans are the main opponents of dietary cholesterol. For example, Michael Greger claims that "blood cholesterol levels are clearly increased by eating dietary cholesterol. In other words, putting cholesterol in our mouth means putting cholesterol in our blood."
- How much cholesterol should you have per day?. "Current research indicates that dietary cholesterol does not have a major effect on a person's health. Instead, a person should concentrate on reducing or eliminating foods high in saturated fats, trans fat, and added sugars."
- Can I eat eggs?. HEART UK. "Cholesterol in eggs does not have a significant effect on blood cholesterol."
- Dietary Cholesterol and Cardiovascular Risk: AHA Advisory "Findings in observational studies from 1980-2012, collectively with more than 250,000 subjects, have not supported an association between dietary cholesterol and CVD risk (fatal or nonfatal myocardial infarction or stroke), particularly when adjusting for total energy intake. Similarly, egg intake is not associated with CVD risk."
- Check recent systematic and umbrella reviews cited on the article.
- Mah E, Chen CO, Liska DJ. The effect of egg consumption on cardiometabolic health outcomes: an umbrella review. Public Health Nutr. 2020 Apr;23(5):935-955.
- Stefano Marventano, Justyna Godos, Maria Tieri, Francesca Ghelfi, Lucilla Titta, Alessandra Lafranconi, Angelo Gambera, Elena Alonzo, Salvatore Sciacca, Silvio Buscemi, Sumantra Ray, Daniele Del Rio, Fabio Galvano. Egg consumption and human health: an umbrella review of observational studies. Int J Food Sci Nutr. 2020 May;71(3):325-331
- Emamat H, Totmaj AS, Tangestani H, Hekmatdoost A. The effect of egg and its derivatives on vascular function: A systematic review of interventional studies. Clin Nutr ESPEN. 2020 Oct;39:15-21.
- Krittanawong C, Narasimhan B, Wang Z, Virk HUH, Farrell AM, Zhang H, Tang WHW. Association Between Egg Consumption and Risk of Cardiovascular Outcomes: A Systematic Review and Meta-Analysis. Am J Med. 2020 Jul 10:S0002-9343(20)30549-0.
- Kuang, H.; Yang, F.; Zhang, Y.; Wang, T.; Chen, G. (2018). The Impact of Egg Nutrient Composition and Its Consumption on Cholesterol Homeostasis. Cholesterol. 6303810.
- The Golden Egg: Nutritional Value, Bioactivities, and Emerging Benefits for Human Health.
- Cholesterol. U.S. Food and Drug Administration.
- Dietary fats, dietary cholesterol and heart health. "Cholesterol in food has only a small effect on low density lipoprotein (LDL or bad) cholesterol. Saturated and trans fats in food causes a much greater increase in LDL cholesterol. Therefore it is alright to include eggs as part of a healthy balanced diet that is low in saturated fat."
- Cholesterol in food. National Heart Foundation of Australia. "Cholesterol in food only has a small effect on the level of cholesterol in your blood."
- Is the Cholesterol in Your Food Really a Concern?. Penn Medicine. "It has long been a common myth that cholesterol consumed in foods, called dietary cholesterol, impacts the level of cholesterol in your body. Well, consider that myth busted."
- High cholesterol food. Heart UK. "Some foods contain cholesterol, but surprisingly they don’t make a big difference to the cholesterol in your blood."
- Panel suggests that dietary guidelines stop warning about cholesterol in food. Harvard Medical School. "There’s a growing consensus among nutrition scientists that cholesterol in food has little effect on the amount of cholesterol in the bloodstream."
- Foods with high cholesterol to avoid and include. Medical News Today.
- See article on Saturated fat and cardiovascular disease.
- Cholesterol. British Dietetic Association.
- Rong, Ying; Chen, Li; Tingting, Zhu; Yadong, Song; Yu, Miao; Shan, Zhilei; Sands, Amanda; Hu, Frank B; et al. (2013). "Egg consumption and risk of coronary heart disease and stroke: dose-response meta-analysis of prospective cohort studies". British Medical Journal 346 (e8539): e8539.
- Shin JY, Xun P, Nakamura Y, He K. (2013). Egg consumption in relation to risk of cardiovascular disease and diabetes: a systematic review and meta-analysis. Am J Clin Nutr 98: 146-159.
- J. K. Virtanen, J. Mursu, H. E. Virtanen, M. Fogelholm, J. T. Salonen, T. T. Koskinen, S. Voutilainen, T.-P. Tuomainen. (2016). Associations of egg and cholesterol intakes with carotid intima-media thickness and risk of incident coronary artery disease according to apolipoprotein E phenotype in men: the Kuopio Ischaemic Heart Disease Risk Factor Study. American Journal of Clinical Nutrition 103 (3): 895-901.
- High-cholesterol diet, eating eggs do not increase risk of heart attack, not even in persons genetically predisposed, study finds. ScienceDaily.
- The Effects of Dietary Cholesterol on Blood Cholesterol. Michael Greger.
- "Eating Cholesterol Doesn't Raise Cholesterol" Debunked. Mic the Vegan. |
Norovirus is a term that has set people on edge every winter for a fair few years now, but there are simple steps you can take towards norovirus prevention, such as cleaning away norovirus on surfaces, to protect yourself and your family from coming down with the nasty bug. Read on to find some easy tips to follow.
Five facts you should know about norovirus
There are five important things everyone should know when it comes to preparing yourself and understanding how to prevent norovirus.
- Norovirus is often called “the winter vomiting bug” but it can in fact strike at any time of year.
- Norovirus spreads through the air, but you can also catch it if you touch something or someone who has the germs.
- The most common cause of norovirus is contaminated food, so always take care when preparing food.
- Symptoms may include:
- Stomach aches and/or abdominal pains and cramps
- Very loose diarrhoea
- There is currently no treatment for norovirus, so you will only be able to offer symptom control to anyone affected.
- People most at risk are children; those living, working or holidaying in close quarters with lots of other people; and anyone who has an already weak immune system such as those with immune disorders or with conditions such as AIDS.
How long does norovirus live on surfaces?
Norovirus on surfaces outside of a human host has an ability to survive for quite long periods of time, making it particularly nasty. The most important lifespans for you to know are as follows:
- Hard surfaces: Two to three weeks
- Fabrics: Up to twelve days
- Water: Months, potentially years
Seven steps you should take towards norovirus prevention
There are seven things that you can do to help protect yourself and your family from spreading and catching a norovirus bug.
- Ensure you wash your hands meticulously, especially if you have been in contact with anyone who is currently unwell.
- Wash all fruits and vegetables, even if you are cooking them and always cook seafood thoroughly.
- Avoid preparing food if you are unwell.
- Be sure to carefully clean all surfaces that people or food regularly come into contact with, including door handles, stair banisters, and bathroom and kitchen surfaces, even if you think they look clean.
- Ensure you regularly wash clothing properly, especially if you have been in contact with other people who are under the weather.
- Take care when handling nappies or other soiled items, particularly if your child is unwell; and always wash your hands after.
- Regularly clean your toilets, and always clean them if you or anyone in your family is suffering from diarrhoea.
Norovirus may be a nasty bug, but now that you have some tips for preventing norovirus from affecting your family, you will be able to protect them, and yourself, from coming down with it.
- Regularly wash your hands with products like Neutral to prevent the spread of the germs that cause norovirus, especially if you have been in contact with someone who is unwell.
- Keep surfaces clean with products like Cif and Domestos, especially ones that are regularly touched by you or family members.
Diphthongs are blended vowel sounds expressed in one syllable. They are formed by a combination of two vowels (AU, OI, OO, OU, OY) or by a combination of a vowel and the letter W (OW, AW, EW). Examples of words with diphthongs include boil, school, toy, cow and few. Interactive games and activities are an effective way to teach diphthongs in primary school.
Pass out a children's newspaper, magazine or photocopied story. The handout should be something the students can mark up. Give students 10 or 15 minutes to circle as many diphthongs as they can find. Next, you can have students compare answers or share a word with the class.
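As a sketch of how such a worksheet could be prepared or checked automatically, a short script can flag words containing the diphthong spellings listed above; the sample sentence is invented for illustration:

```python
import re

# The diphthong spellings listed in the introduction.
DIPHTHONGS = ["au", "oi", "oo", "ou", "oy", "ow", "aw", "ew"]
pattern = re.compile("|".join(DIPHTHONGS), re.IGNORECASE)

text = "The clown found a few toys at school."
words = re.findall(r"[A-Za-z]+", text)
print([w for w in words if pattern.search(w)])
# ['clown', 'found', 'few', 'toys', 'school']
```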
Divide students into groups of three to five. Give each group a sheet of paper and a pencil. Ask them to decide who will go first. Write a word containing a diphthong on the board. Direct the students going first to write down a word that rhymes with the word on the board. That student should then pass the paper to the next student, who should do the same. When students can no longer find rhyming words, ask them to share their answers. Use a new starter word to repeat the exercise as many times as you like. Before each round, ask each group to set a goal for the number of rhyming words they expect to find. A list of rhyming diphthongs might include words such as brown, town, crown, frown, gown and clown.
Mix and Mingle
Pass out index cards with parts of words or single letters written on them. Include lots of diphthong cards. Pass out a pencil and one card to each child. Allow students to freely move around the room to mingle with each other. The objective is for each student to make as many words as possible by combining letters with the other children. For example, a student with OI can team up with L to make OIL; these two students can then team up with B to make BOIL. Ask students to write down their words on the back of the index card or on a separate sheet of paper. For smaller groups, you may want to give each student more than one card.
Fill in the Blanks
Prepare a handout of a simple story; this will be the teacher's copy. Prepare a student version of the handout by removing all the diphthongs and replacing them with blank spaces. Give each student his own copy of the handout. Ask students to fill in the blanks as you read the story to them out loud. You may also pause at certain spots in the story to ask someone to volunteer a word that might fit.
- What is a dilution factor of 1?
- What is a 1% solution?
- What is a 1/64 dilution?
- What is 10fold dilution?
- How do you make a dilution?
- What does a 1 in 5 dilution mean?
- How do you make a 1 to 15 dilution?
- What is a 1 in 50 dilution?
- What is a 2% dilution?
- What is a 1 to 2 dilution?
- What is a 1/3 dilution?
- How do you calculate a 1/10 dilution?
- How do you do a 1/20 dilution?
- How do you calculate a dilution ratio?
- What are two ways to make a 1 100 dilution?
- What is a 1 to 4 dilution?
What is a dilution factor of 1?
There is often confusion between dilution ratio (1:n, meaning 1 part solute to n parts solvent) and dilution factor (1:n+1), where the second number (n+1) represents the total volume of solute + solvent.
What is a 1% solution?
In biology, the “%” symbol is sometimes incorrectly used to denote mass concentration, also called “mass/volume percentage.” A solution with 1 g of solute dissolved in a final volume of 100 mL of solution would be labeled as “1%” or “1% m/v” (mass/volume). … Thus 100 mL of water is equal to approximately 100 g.
What is a 1/64 dilution?
To convert dilution ratios to ounces per gallon divide 128 (the number of ounces per gallon) by the dilution ratio. … For example, 1-ounce of a product diluted at 1:64 makes 65-ounces, not 64. This includes not only the 64-ounces of dilutant, but also the original 1-ounce of concentrate.
What is 10fold dilution?
A ten-fold dilution reduces the concentration of a solution or a suspension of virus by a factor of ten that is to one-tenth the original concentration. A series of ten-fold dilutions is described as ten-fold serial dilutions.
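A minimal sketch of how concentrations fall across such a series (the starting concentration is arbitrary):

```python
def serial_dilution(c0, factor=10, steps=5):
    """Concentrations after successive equal-factor dilutions."""
    concentrations = [c0]
    for _ in range(steps):
        concentrations.append(concentrations[-1] / factor)
    return concentrations

print(serial_dilution(1.0))
# [1.0, 0.1, 0.01, 0.001, 0.0001, 1e-05]
```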
How do you make a dilution?
To make a dilution, you simply add a small quantity of a concentrated stock solution to an amount of pure solvent. The resulting solution contains the amount of solute originally taken from the stock solution but disperses that solute throughout a greater volume.
What does a 1 in 5 dilution mean?
Answer: 1:5 dilution = 1/5 dilution = 1 part sample and 4 parts diluent in a total of 5 parts. If you need 10 ml, final volume, then you need 1/5 of 10 ml = 2 ml sample. To bring this 2 ml sample up to a total volume of 10 ml, you must add 10 ml – 2 ml = 8 ml diluent.
How do you make a 1 to 15 dilution?
DILUTION CHART: 1:x means 1 part concentrate to x parts of water. For example, to make a quart of solution in a 1:15 dilution, mix 2-oz of concentrate into 30-oz of water. (NOTE: To minimize foaming, fill the container with water before adding the concentrate. Then stir gently, but thoroughly.)
What is a 1 in 50 dilution?
Explanation: If you want to make a 1/50 dilution you add 1 volume part of the one to 49 parts of the other, to make up 50 parts in all.
What is a 2% dilution?
A 1:2 dilution is usually read as volume #1 to volume #2 (vol1/vol2): you take a certain volume of the substance and add double that volume of solvent to dilute it.
What is a 1 to 2 dilution?
For example, a 1:2 serial dilution is made using a 1 mL volume of serum. This expression indicates that 1 mL of serum is added to 1 mL of H20 and then mixed. This initial dilution is 1:2. Then, 1 mL of this dilution is added to 1 mL of H20 further diluting the sample.
What is a 1/3 dilution?
A dilution of 1:3 (one to three) means mix one part concentrate with three parts water. It doesn’t mean a 33% solution.
How do you calculate a 1/10 dilution?
For example, to make a 1:10 dilution of a 1M NaCl solution, you would mix one “part” of the 1M solution with nine “parts” of solvent (probably water), for a total of ten “parts.” Therefore, 1:10 dilution means 1 part + 9 parts of water (or other diluent).
How do you do a 1/20 dilution?
Convert the dilution factor to a fraction with the first number as the numerator and the second number as the denominator. For example, a 1:20 dilution converts to a 1/20 dilution factor. Multiply the final desired volume by the dilution factor to determine the needed volume of the stock solution.
How do you calculate a dilution ratio?
Diluting a stock solution to a desired working concentration: a working solution is a less concentrated solution that you want to work with. … The relationship used is called the dilution equation, C1V1 = C2V2. Concentrations may be expressed as % w/w (weight/weight), % w/v (weight/volume), or % v/v (volume/volume).
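As an illustration of the C1V1 = C2V2 relationship above, a small helper can compute how much stock to dilute; the example numbers are arbitrary:

```python
def stock_volume_needed(c_stock, c_working, v_working):
    """Solve C1*V1 = C2*V2 for V1, the volume of stock required."""
    if c_working > c_stock:
        raise ValueError("Working concentration cannot exceed stock concentration.")
    return c_working * v_working / c_stock

# Example: make 500 mL of a 0.1 M working solution from a 1 M stock.
v1 = stock_volume_needed(c_stock=1.0, c_working=0.1, v_working=500)
print(f"Use {v1:.0f} mL of stock plus {500 - v1:.0f} mL of diluent.")
# -> Use 50 mL of stock plus 450 mL of diluent.
```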
What are two ways to make a 1 100 dilution?
For a 1:100 dilution, one part of the solution is mixed with 99 parts new solvent. Mixing 100 µL of a stock solution with 900 µL of water makes a 1:10 dilution. The final volume of the diluted sample is 1000 µL (1 mL), and the concentration is 1/10 that of the original solution.
What is a 1 to 4 dilution?
A 1:4 dilution ratio means that a simple dilution contains one part concentrated solution or solute and four parts of the solvent, which is usually water. For example, frozen juice that requires one can of frozen juice plus four cans of water is a 1:4 simple dilution.
The Bedford Canal Experiments
The classic Flat Earth water convexity experiments described in the book Earth Not a Globe by Samuel Birley Rowbotham. Rowbotham lived near the canal and performed the experiment numerous times over a number of years. Of special interest, in regard to the subject of refraction, is the second experiment.
From Experiment 2 of Earth Not a Globe we read:
“ Along the edge of the water, in the same canal, six flags were placed, one statute mile from each other, and so arranged that the top of each flag was 5 feet above the surface. Close to the last flag in the series a longer staff was fixed, bearing a flag 3 feet square, and the top of which was 8 feet above the surface of the water--the bottom being in a line with the tops of the other and intervening flags, as shown in the following diagram, Fig. 4. ”
“ On looking with a good telescope over and along the flags, from A to B, the line of sight fell on the lower part of the larger flag at B. The altitude of the point B above the water at D was 5 feet, and the altitude of the telescope at A above the water at C was 5 feet; and each intervening flag had the same altitude. Hence the surface of the water C, D, was equidistant from the line of sight A, B; and as A B was a right line, C, D, being parallel, was also a right line; or, in other words, the surface of the water, C, D, was for six miles absolutely horizontal.
If the earth is a globe, the series of flags in the last experiment would have had the form and produced the results represented in the diagram, Fig. 5. The water curvating from ”
“ C to D, each flag would have been a given amount below the line A, B. The first and second flags would have determined the direction of the line of sight from A to B, and the third flag would have been 8 inches below the second; the fourth flag, 32 inches; the fifth, 6 feet; the sixth, 10 feet 8 inches; and the seventh, 16 feet 8 inches; but the top of the last and largest flag, being 3 feet higher than the smaller ones, would have been 13 feet 8 inches below the line of sight at the point B. ”
On analysis of this experiment, if the earth were a globe, one important remark would be that it is quite the coincidence that the flags all experienced the Flat Earth refraction effect, one by one, all the way down to the end, which projected each flag into the air at the exact height they needed to be at in order to make things look flat in accordance with the distance looked across and the height of the observer.
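For reference, the drops quoted in the passage follow the common approximation of about 8 inches times the square of the distance in miles; a minimal sketch reproducing the quoted figures:

```python
def drop_inches(miles):
    """Drop below a tangent sightline on a globe, using the usual
    8-inches-times-miles-squared approximation."""
    return 8 * miles ** 2

for d in range(1, 6):
    inches = drop_inches(d)
    print(f"{d} mile(s): {inches} in = {inches // 12} ft {inches % 12} in")
# 8 in, 32 in, 6 ft, 10 ft 8 in, 16 ft 8 in: the figures quoted for
# flags three through seven above.
```

The same rule gives the 13 ft 6 in over four and a half miles cited in the English Mechanic account below.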
The English Mechanic
From The English Mechanic, a scientific journal:
“ Bedford Canal, England. A repeat of the 1870 experiment
"A train of empty turf-boats had just entered the Canal from the river Ouse, and was about proceeding to Ramsey. I arranged with the captain to place the shallowest boat last in the train, and to take me on to Welney Bridge, a distance of six miles. A good telescope was then fixed on the lowest part of the stern of the last boat. The sluice gate of the Old Bedford Bridge was 5ft. 8in. high, the turf-boat moored there was 2ft. 6in. high, and the notice board was 6ft. 6in. from the water.
The sun was shining strongly upon them in the direction of the south-southwest; the air was exceedingly still and clear, and the surface of the water smooth as a molten mirror, so that everything was favourable for observation. At 1.15 p.m. the train started for Welney. As the boats gradually receded, the sluice gate, the turf-boat and the notice board continued to be visible to the naked eye for about four miles. When the sluice gate and the turf-boat (being of a dark colour) became somewhat indistinct, the notice board (which was white) was still plainly visible, and remained so to the end of six miles. But on looking through the telescope all the objects were distinctly visible throughout the whole distance. On reaching Welney Bridge I made very careful and repeated observations, and finding several men upon the banks of the canal, I called them to look through the telescope. They all saw distinctly the white notice board, the sluice gate, and the black turf-boat moored near them.
Now, as the telescope was 18in. above the water, the line of sight would touch the horizon at one mile and a half away (if the surface were convex). The curvature of the remaining four miles and a half would be 13ft. 6in. Hence the turf-boat should have been 11ft., the top of the sluice gate 7ft. 10in., and the bottom of the notice board 7ft. below the horizon.
My recent experiment affords undeniable proof of the Earth's unglobularity, because it rests not on transitory vision; but my proof remains printed on the negative of the photograph which Mr. Clifton took for me, and in my presence, on behalf of J. H. Dallmeyer, Ltd.
A photograph can not 'imagine' nor lie!" ”
From "The Flat Earth: another Bedford Canal experiment" (Bernard H.Watson, et al), ENGLISH MECHANIC, 80:160, 1904
Weather and Wave Conditions
In the chapter On the Dimensions of Ocean Waves, Rowbotham explains that the above is affected by wind and water conditions. The reproduction works best in fine weather:
“ It is well known that even on lakes of small dimensions and also on canals, when high winds prevail for some time in the same direction, the ordinary ripple is converted into comparatively large waves. On the "Bedford Canal," during the windy season, the water is raised into undulations so high, that through a powerful telescope at an elevation of 8 inches, a boat two or three miles away will be invisible; but at other times, through the same telescope the same kind of boat may be seen at a distance of six or eight miles.
During very fine weather when the water has been calm for some days and become as it were settled down, persons are often able to see with the naked eye from Dover the coast of France, and a steamer has been traced all the way across the channel. At other times when the winds are very high, and a heavy swell prevails, the coast is invisible, and the steamers cannot be traced the whole distance from the same altitude, even with a good telescope.
Instances could be greatly multiplied, but already more evidence has been given than the subject really requires, to prove that when a telescope does not restore the hull of a distant vessel it is owing to a purely special and local cause ” |
15th and 16th Century Mindset
An ancient Greek scholar, Aristarchus of Samos, proposed the first notion that the sun was the center of our solar system, 1,700 years before Copernicus was born. Medieval scholars, however, were led to believe the Earth was the center of everything and that the planets, sun and moon circled Earth every day.
The Catholic Church—a hugely authoritative institution—regarded the Earth-centered universe as God’s plan. This is what geocentric means: Earth at the center of it all.
Nicolaus Koppernigk was born in Poland in 1473 and grew up in Torun. Nicolaus’s father, also Nicolaus Koppernigk (the elder), was a successful merchant, a trader of copper goods. The town of Torun snaked along the Vistula River in Poland; its 20,000 industrious, mostly German-speaking citizens and its bustling river trade made it a cultural center. Nicolaus had many opportunities to enjoy various cultures, learning from the sailors and merchants in the lively port city.
His father died when he was 10 and his uncle Lukasz Watzenrode, an important man and a Catholic bishop in Warmia, raised Nicolaus. Because of his uncle’s standing, Nicolaus received a rigorous education and learned Latin, the language of scholars and the educated.
At 19, Nicolaus enrolled in the University of Krakow – the finest institution of its time and the leading center of learning in all of Eastern Europe. He excelled in mathematics and astronomy. During this time, Nicolaus Latinized his surname to Copernicus to fit in with this cosmopolitan city.
Nicolaus’ thoughts were already turning heavenward and he studied the Alfonsine Tables and the Tables of Directions, both fundamental texts for astronomers. In 1494, Copernicus left Krakow and took up the obligation of church duties bequeathed him. His uncle probably saw it as a way for his nephew to acquire a steady economic future, but soon after, he headed to Italy for a degree in canon law—church law—at the University of Bologna.
He arrived at an apt time. The wave of barbarism and raids for land and protectionism had passed and the people looked to the ancients for wisdom and truth once again. The works of ancient thinkers such as Plato, Eudoxus, Aristotle and Ptolemy were being revisited after the difficult Middle Ages, when Europe was rife with rival states’ takeovers and periods of mere survival for peasants, serfs and the uneducated. The Italians especially witnessed a rebirth in classical learning and values along with a renewed curiosity for life, science and ideas that took center stage.
Nicolaus was fortunate enough to live during this European Renaissance, where Greek, Roman and Islamic learning texts were becoming available; Gutenberg had invented the printing press in 1445, which forever changed the lives of literate people. Previously, books had been laboriously hand-copied by monks and available to only a few. Now great minds were back to scientific experiments and Renaissance scholars took their place in the universe and discussed humanity’s role in it.
Born in 427 BCE, ancient scholar Plato founded the Academy in Athens where scholars came together to study the great queries of life and the universe. Plato’s influential ideas were put into a book called Timaeus. In this book, he argued that an unknown and divine maker must have created the universe’s order. He surmised the universe as they knew it was earth-centered and that heavenly bodies were divine objects.
He also believed that eclipses occurred when the Earth blocked the sun’s light from hitting the moon.
For centuries, scholars would argue over the fact that planets sped up, slowed down and even occasionally reversed directions in a loop before their usual march from east to west.
Eudoxus and Aristotle
Eudoxus and Aristotle were students of Plato who further developed Plato’s theory of the universe and planetary motion. Their plan suggested that planets were lined in connected spheres and all turned like a mechanical clock’s movements around a centered Earth. The drawings of the paths were becoming more convoluted.
An Egyptian of Greek origin from the second century, Ptolemy had a different description: he said that the motion of heavenly bodies had to be in perfect circles at a constant speed and that Earth was the center of it all. Ptolemy described this phenomenon in an enormous volume—a series of 13 books—called the Almagest. His complicated system concluded that the Sun circled the earth, but at an inconstant speed. By Copernicus’ time, Ptolemy’s modified description added epicycles—smaller circles that went around a point on a larger circle—along with offset points called equants and circular paths called deferents.
In the late 1490s, Copernicus befriended a man named Domenico Maria da Novara, who had published a book challenging Ptolemy’s authority. He claimed to have discovered that the latitudes of most European cities were actually off by one degree and 10 minutes, meaning there had been a shift in the tilt of Earth and that it rotated on its own axis.
Around 1510, Copernicus developed a different theory. He re-envisioned the cosmos with the sun rather than Earth at the center of the planets. One of Copernicus’ students, Georg Joachim Rheticus of Austria, helped his mentor prepare a long-neglected manuscript with those thoughts in mind.
Cosmos Inside Out
Copernicus brought a new idea to Ptolemy’s plan when he wrote the document, Brief Sketch—The Commentariolus. He wrote, “All spheres surround the Sun as though it were in the middle of all of them, and therefore, the center of the universe is near the Sun. What appear to us as motions of the Sun arise not from its motion but from the motion of the Earth and our sphere, with which we revolve about the Sun like any other planet.” As if a magician, he had made Earth a planet and set it spinning.
Nicolaus used three instruments to log heaven’s measure: a triquetrum, a quadrant and an armillary sphere. None of these devices contained lenses of any kind. They didn’t sharpen his vision. Similar to surveyor’s tools, they helped him map the stars and trace the path of the moon and planets.
A wooden triquetrum could gauge a body’s altitude by swinging a hinged bar until its peepholes framed the planet or star, and then reading its elevation from the calibrated lower scale printed on the wood.
His bold plan for astronomical reform went slowly, and publication was put on the back burner for decades. He proceeded cautiously and leaked the idea to fellow mathematicians. All the while, history and warfare again churned around him. He probably would have been burned at the stake had he brought these ideas forth earlier.
He agreed to publish his work only at the repeated urging of insistent friends. When his great book, On the Revolutions of the Heavenly Spheres was finally in print, Nicolaus was taking his last breath. He wrote a dedication to Pope Paul III:
“I can readily imagine, Holy Father, that as soon as some people hear that in this volume, which I have written about the revolutions of the spheres of the universe, I ascribe certain motions to the terrestrial globe, they will shout that I must be immediately repudiated together with this belief.”
Decades later when the first telescope lent credence to Copernicus’ work, the Holy Office of Inquisition condemned his efforts. An Index of Prohibited Books came out in 1616 and Copernicus’ book remained on that list for two hundred years.
- Bortz, Fred. The Sun-Centered Universe and Nicolaus Copernicus. New York: Rosen Publishing Group, Inc., 2014. Book.
- Sobel, Dava. A More Perfect Heaven. New York: Walker & Company, 2011. Book.
- Vollman, William T. Uncentering the Earth: Copernicus and The Revolutions of the Heavenly Spheres. New York: W.W. Norton, 2006. Book. |
Sabertooth tigers lived in North and South America during the Pleistocene Era, which began about 2.6 million years ago. However, it was at the end of the Pleistocene, about 12,000 years ago, that these megafauna–more simply known as “big animals”–became extinct. This was called the Quaternary extinction, which also took out animals such as the woolly mammoth, giant ground sloths, and many more.
During the Ice Age, tar pits formed across the landscape of their territory. Tigers that wandered in would become stuck, sink into the asphalt, and eventually die.
After thousands of years of evolution and environmental changes, it’s no surprise that there are a few differences between the sabertooth tigers and the tigers we are familiar with today. First and foremost, sabertooth tigers were massive. If you need a comparison, imagine this: Picture a lion. Now, take that ordinary lion and double its size. That’s what you could expect to see from a sabertooth tiger.
Some of the skulls found have ranged from 15 to 20 inches long! Some university researchers say that sabertooth tigers weighed around 1,000 pounds, suggesting it was quite possible they were able to take down giant plant-eaters as heavy as pickup trucks! Tigers today can’t hunt anything much beyond their own size, as they kill by breaking the neck and strangling their prey.
Their build was a bit different from today’s tigers as well. Sabertooth tigers had a short tail and a heavy, muscular build which helped them trap and attack their prey, unlike the slow stalk-and-chase-down method used by today’s tigers.
Now you may be wondering where the “tooth” in Sabertooth comes from. The scientific name for a sabertooth tiger is Smilodon fatalis, meaning “deadly knife tooth.” These tigers are known for their distinctive pair of long canines that could grow up to 8 inches long! Their jaws could open over 120 degrees–twice as wide as a modern-day tiger’s.
The purpose of their extremely large fangs hasn’t been completely proven, but here’s what most scientists think after looking closely at the texture of the teeth’s surfaces. The wear on their teeth closely resembled that of present-day African lions, which sometimes crush bone when they eat. However, they didn’t have the same wear as living hyenas, which consume entire carcasses, including bones. Therefore, researchers believe the animals were not gnawing their prey to the bone.
Like us humans, sabertooth tiger cubs lost their baby teeth too! Cubs had small canines that were shed when they reached around 20 months of age. By the age of about three, these young sabertooth tigers had their fully-formed eight-inch adult canines.
By Michela Pantano, contributor for Ripleys.com |
A subset of the estimated 6 million eczema patients in the United States are susceptible to widespread infections of their skin by herpes simplex and vaccinia viruses. The herpes simplex virus is common but only rarely causes disseminated skin infections that can spread to the eye and bloodstream sometimes leading to encephalitis and meningitis. The widespread herpes simplex skin infection is known as eczema herpeticum.
Vaccinia virus, which is used in smallpox vaccinations, can also cause serious and life-threatening skin infections in a smaller subset of patients. People who have eczema or had it in the past are susceptible to this infection when they receive a smallpox vaccination. This situation could limit the ability of those people to safely receive vaccinations in case of a smallpox bioterrorism event.
Researchers from the Atopic Dermatitis Vaccinia Network (ADVN) believed that they might be able to identify eczema patients at high risk for these infections, and to obtain clues about the mechanisms of susceptibility by studying a large cohort of patients who had suffered eczema herpeticum, the herpes simplex viral skin infections. They examined a wide variety of demographic, pathologic and biologic characteristics in 901 subjects, 138 of whom had suffered eczema herpeticum.
They found that eczema patients susceptible to herpes simplex infections had more severe disease, earlier age of disease onset, more frequent history of other allergic diseases such as food allergy, asthma and hay fever, more allergic biomarkers, and more frequent skin infections with other microbes.
The greater allergic disease and sensitisation, as well as infection by other microbes, point to a potential mechanism for the increased susceptibility to viral skin infections. An emerging model of eczema highlights the importance of skin-barrier defects and a lack of antimicrobial proteins among eczema patients. The skin-barrier defect is believed to result in the greater allergic sensitisation among eczema patients in general. The even higher allergic sensitisation among eczema herpeticum patients suggests the skin-barrier defect is particularly acute in those patients.
The higher levels of infection with staphylococcus and other microbes suggest that eczema herpeticum patients may be particularly lacking in antimicrobial proteins.
The Journal of Allergy and Clinical Immunology
|
Gaucher disease is an inherited disorder that affects many of the body's organs and tissues. The signs and symptoms of this condition vary widely among affected individuals. Researchers have described several types of Gaucher disease based on their characteristic features.
Type 1 Gaucher disease is the most common form of this condition. Type 1 is also called non-neuronopathic Gaucher disease because the brain and spinal cord (the central nervous system) are usually not affected. The features of this condition range from mild to severe and may appear anytime from childhood to adulthood. Major signs and symptoms include enlargement of the liver and spleen (hepatosplenomegaly), a low number of red blood cells (anemia), easy bruising caused by a decrease in blood platelets (thrombocytopenia), lung disease, and bone abnormalities such as bone pain, fractures, and arthritis.
Types 2 and 3 Gaucher disease are known as neuronopathic forms of the disorder because they are characterized by problems that affect the central nervous system. In addition to the signs and symptoms described above, these conditions can cause abnormal eye movements, seizures, and brain damage. Type 2 Gaucher disease usually causes life-threatening medical problems beginning in infancy. Type 3 Gaucher disease also affects the nervous system, but it tends to worsen more slowly than type 2.
The most severe type of Gaucher disease is called the perinatal lethal form. This condition causes severe or life-threatening complications starting before birth or in infancy. Features of the perinatal lethal form can include extensive swelling caused by fluid accumulation before birth (hydrops fetalis); dry, scaly skin (ichthyosis) or other skin abnormalities; hepatosplenomegaly; distinctive facial features; and serious neurological problems. As its name indicates, most infants with the perinatal lethal form of Gaucher disease survive for only a few days after birth.
Another form of Gaucher disease is known as the cardiovascular type because it primarily affects the heart, causing the heart valves to harden (calcify). People with the cardiovascular form of Gaucher disease may also have eye abnormalities, bone disease, and mild enlargement of the spleen (splenomegaly).
Gaucher disease occurs in 1 in 50,000 to 100,000 people in the general population. Type 1 is the most common form of the disorder; it occurs more frequently in people of Ashkenazi (eastern and central European) Jewish heritage than in those with other backgrounds. This form of the condition affects 1 in 500 to 1,000 people of Ashkenazi Jewish heritage. The other forms of Gaucher disease are uncommon and do not occur more frequently in people of Ashkenazi Jewish descent.
Mutations in the GBA gene cause Gaucher disease. The GBA gene provides instructions for making an enzyme called beta-glucocerebrosidase. This enzyme breaks down a fatty substance called glucocerebroside into a sugar (glucose) and a simpler fat molecule (ceramide). Mutations in the GBA gene greatly reduce or eliminate the activity of beta-glucocerebrosidase. Without enough of this enzyme, glucocerebroside and related substances can build up to toxic levels within cells. Tissues and organs are damaged by the abnormal accumulation and storage of these substances, causing the characteristic features of Gaucher disease.
This condition is inherited in an autosomal recessive pattern, which means both copies of the gene in each cell have mutations. The parents of an individual with an autosomal recessive condition each carry one copy of the mutated gene, but they typically do not show signs and symptoms of the condition.
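Here is a minimal sketch of the arithmetic behind that inheritance pattern for two carrier parents; 'G' and 'g' are illustrative labels for the working and mutated GBA copies, not official nomenclature:

```python
from itertools import product
from collections import Counter

# Each carrier parent has one working GBA copy ('G') and one mutated copy ('g').
parent1 = ["G", "g"]
parent2 = ["G", "g"]

# Enumerate the four equally likely allele combinations in the offspring.
outcomes = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
for genotype, count in sorted(outcomes.items()):
    status = {"GG": "unaffected non-carrier",
              "Gg": "unaffected carrier",
              "gg": "affected (two mutated copies)"}[genotype]
    print(f"{genotype}: {count}/4 -> {status}")
# GG: 1/4, Gg: 2/4, gg: 1/4
```

Each pregnancy therefore carries a 1-in-4 chance of an affected child and a 1-in-2 chance of an unaffected carrier.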
- What is genetic testing?
- Genetic Testing Registry: Acute neuronopathic Gaucher's disease
- Genetic Testing Registry: Gaucher disease
- Genetic Testing Registry: Gaucher disease type 3A
- Genetic Testing Registry: Gaucher disease type 3B
- Genetic Testing Registry: Gaucher disease type 3C
- Genetic Testing Registry: Gaucher's disease, type 1
- Genetic Testing Registry: Subacute neuronopathic Gaucher's disease
- cerebroside lipidosis syndrome
- Gaucher splenomegaly
- Gaucher syndrome
- Gaucher's disease
- Gauchers disease
- glucocerebrosidase deficiency
- glucosyl cerebroside lipidosis
- glucosylceramidase deficiency
- glucosylceramide beta-glucosidase deficiency
- glucosylceramide lipidosis
- kerasin histiocytosis
- kerasin lipoidosis
- kerasin thesaurismosis
- lipoid histiocytosis (kerasin type)
- National Human Genome Research Institute
- National Institute of Neurological Disorders and Stroke: Gaucher's Disease Information Sheet
- National Institute of Neurological Disorders and Stroke: Lipid Storage Diseases Fact Sheet
- Office of NIH History & Stetten Museum: Researching Disease: Dr. Roscoe Brady and Gaucher Disease |
Ancient Mayan farming
The ancient Maya had diverse and sophisticated methods of food production. It was formerly believed that shifting cultivation (swidden) agriculture provided most of their food, but it is now thought that permanent raised fields, terracing, forest gardens, managed fallows, and wild harvesting were also crucial to supporting the large populations of the Classic period in some areas. Indeed, evidence of these different agricultural systems persists today: raised fields connected by canals can be seen on aerial photographs, contemporary rainforest species composition has a significantly higher abundance of species of economic value to the ancient Maya, and pollen records in lake sediments suggest that corn, manioc, sunflower seeds, cotton, and other crops have been cultivated in association with deforestation in Mesoamerica since at least 2500 BC.
The Mayans were skilled farmers, clearing large sections of tropical rain forest and, where groundwater was scarce, building sizable underground reservoirs for the storage of rainwater. The Maya were equally skilled as weavers and potters, and cleared routes through jungles and swamps to foster extensive trade networks with distant peoples.
While the Maya diet varies, depending on the local geography, maize remains the primary staple now as it was centuries ago. Made nutritionally complete with the addition of lime, the kernels are boiled, ground with a metate and mano, then formed by hand into flat tortillas that are cooked on a griddle that is traditionally supported on three stones. Chile peppers, beans and squash are still grown in the family farm plot (milpa) right along with the maize, maximizing each crop's requirements for nutrients, sun, shade and growing surface. Agriculture was based on slash and burn farming which required that a field be left fallow for 5 to 15 years after only 2 to 5 years of cultivation. But there is evidence that fixed raised fields and terraced hillsides were also used in appropriate areas.
The Maya farmer cultivated corn, beans, cacao, chile, maguey, bananas, and cotton, besides giving attention to bees, from which he obtained both honey and wax. Various fermented drinks were prepared from corn, maguey, and honey. They were much given to drunkenness, which was so common as hardly to be considered disgraceful.
Chocolate was the favorite drink of the upper classes. Cacao beans, as well as pieces of copper, were a common medium of exchange. Very little meat was eaten, except at ceremonial feasts, although the Maya were expert hunters and fishers. A small "barkless" dog was also eaten.
Contemporary Maya peoples still practice many of these traditional forms of agriculture, although they are dynamic systems and change with changing population pressures, cultures, economic systems, climate change, and the availability of synthetic fertilizers and pesticides.
In the News ...
Ancient People of Teotihuacan Drank Milky Alcohol, Pottery Suggests Live Science - September 15, 2014
Ancient pottery confirms people made and drank a milky alcoholic concoction at one of the largest cities in prehistory, Teotihuacan in Mexico, researchers say. This liquor may have helped provide the people of this ancient metropolis with essential nutrients during frequent shortfalls in staple foods, scientists added. The ancient city of Teotihuacan, whose name means "the city of the gods" in the Nahuatl language of the Aztecs, was the largest city in the Americas before the arrival of Christopher Columbus. At its zenith, Teotihuacan encompassed about 8 square miles (20 square kilometers) and supported an estimated population of 100,000 people, who raised giant monuments such as the Temple of Quetzalcoatl and the Pyramids of the Sun and the Moon.
Discovery - November 29, 2012
Planning a last supper party on December 21? To celebrate the Mayan way, you might need several clay balls. That's one way the Maya cooked their food, according to U.S. archaeologists who have unearthed dozens of rounded clay pieces from a site in Mexico. About 1-2 inches in diameter and more than 1,000 years old, the clay balls contained microscopic pieces of maize, beans, squash and other root crops.
Ancient Farm Discovery Yields Clues to Maya Diet National Geographic - August 20, 2007
The ancient Maya cultivated crops of manioc - also known as cassava - some 1,400 years ago, according to archaeologists studying a Maya farm preserved in volcanic ash. The discovery may help solve the long-standing mystery of how the ancient culture produced enough energy-rich, starchy food to support its large city-centered populations.
Posters showing numbers 1-12 and their multiples.
Use these posters in your classroom as a reference for students who are either learning how to skip count by specific numbers, or learning the multiples of numbers 1 through 12. Print out each one on cardstock and hang on your wall for a visual reference for your students. Or, have your students cut them out and glue them in their math journals to use as a reference.
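If you would like to generate the skip-counting sequences yourself, a few lines of Python (purely illustrative, not part of the downloadable resource) will list the first twelve multiples of each number:

```python
# Print the first twelve multiples of each number from 1 to 12,
# matching the skip-counting sequences shown on the posters.
for n in range(1, 13):
    multiples = [n * k for k in range(1, 13)]
    print(f"{n:2}: {', '.join(str(m) for m in multiples)}")
```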
Common Core Curriculum alignment
Interpret products of whole numbers, e.g., interpret 5 × 7 as the total number of objects in 5 groups of 7 objects each. For example, describe a context in which a total number of objects can be expressed as 5 × 7.
Fluently multiply and divide within 100, using strategies such as the relationship between multiplication and division (e.g., knowing that 8 × 5 = 40, one knows 40 ÷ 5 = 8) or properties of operations. By the end of Grade 3, know from memory all products of two one-digit numbers.
Find all factor pairs for a whole number in the range 1-100. Recognize that a whole number is a multiple of each of its factors. Determine whether a given whole number in the range 1-100 is a multiple of a given one-digit number. Determine whether a given whole number in the range 1-100 is prime or composite.
Multiply a whole number of up to four digits by a one-digit whole number, and multiply two two-digit numbers, using strategies based on place value and the properties of operations. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models.
Fluently multiply multi-digit whole numbers using the standard algorithm.
Find the greatest common factor of two whole numbers less than or equal to 100 and the least common multiple of two whole numbers less than or equal to 12. Use the distributive property to express a sum of two whole numbers 1-100 with a common factor as a multiple of a sum of two whole numbers with no common factor. For example, express 36 + 8 as 4 (9 + 2).
|
What is salmonellosis?
Salmonellosis is a type of foodborne illness caused by the Salmonella enterica bacterium. There are many different kinds of these bacteria. Salmonella serotype Typhimurium and Salmonella serotype Enteritidis are the most common types in Canada.
Salmonellosis is more common in the summer than in the winter. Children are the most likely to get salmonellosis. Young children, older adults, and people who have impaired immune systems are the most likely to have severe infections.
What causes salmonellosis?
You can get salmonellosis by eating food contaminated with salmonella. This can happen in the following ways:
- Food may be contaminated during food processing or food handling.
- Food may become contaminated by the unwashed hands of an infected food handler. A frequent cause is a food handler who does not wash his or her hands with soap after using the washroom.
- Salmonella may also be found in the feces of some pets, especially those with diarrhea. You can become infected if you do not wash your hands after contact with these feces.
- Reptiles, baby chicks and ducklings, and small rodents such as hamsters are particularly likely to carry Salmonella. You should always wash your hands immediately after handling one of these animals, even if the animal is healthy. Adults should also be careful that children wash their hands after handling reptiles, pet turtles, baby chicks or ducklings, or small rodents.
Beef, poultry, milk, and eggs are most often infected with salmonella. But vegetables may also be contaminated. Contaminated foods usually look and smell normal.
What are the symptoms?
Symptoms of salmonellosis include diarrhea, fever, and abdominal cramps. They develop 12 to 72 hours after infection, and the illness usually lasts 4 to 7 days. Most people recover without treatment. But diarrhea and dehydration may be so severe that it is necessary to go to the hospital. Older adults, infants, and those who have impaired immune systems are at highest risk.
If you only have diarrhea, you usually recover completely, although it may be several months before your bowel habits are entirely normal. A small number of people who are infected with salmonellosis develop reactive arthritis, a disease that can last for months or years and can lead to chronic arthritis.
How is salmonellosis diagnosed?
Salmonellosis is diagnosed based on a medical history and a physical examination. Your doctor will ask you questions about your symptoms, foods you have recently eaten, and your work and home environments. A stool culture and blood tests may be done to confirm the diagnosis.
How is it treated?
You treat salmonellosis by managing any complications until it passes. Dehydration caused by diarrhea is the most common complication. Antibiotics are not usually needed unless the infection has spread.
To prevent dehydration, take frequent sips of a rehydration drink (such as Pedialyte). Try to drink a cup of water or rehydration drink for each large, loose stool you have. Soda and fruit juices have too much sugar and not enough of the important electrolytes that are lost during diarrhea, and they should not be used to rehydrate.
Try to stay with your usual diet as much as possible. Eating your usual diet will help you to get enough nutrition. Doctors believe that eating a normal diet will also help you feel better faster. But try to avoid foods that are high in fat and sugar. Also avoid spicy foods, alcohol, and coffee for 2 days after all symptoms have disappeared.
How can you prevent salmonellosis?
To prevent salmonellosis:
- Do not eat raw or undercooked eggs. Raw eggs may be used in some foods such as homemade hollandaise sauce, Caesar and other salad dressings, tiramisu, homemade ice cream, homemade mayonnaise, cookie dough, and frostings.
- Cook foods until they are well done. Use a meat thermometer to be sure foods are cooked to a safe temperature. Do not use the colour of the meat (such as when it is no longer "pink") to tell you that it is done.
- Avoid raw or unpasteurized milk or other dairy products.
- Wash or peel produce before eating it.
- Avoid cross-contamination of food. Keep uncooked meats separate from produce, cooked foods, and ready-to-eat foods. Thoroughly wash hands, cutting boards, counters, knives, and other utensils after handling uncooked foods.
- Wash your hands before handling any food and between handling different food items.
- Do not prepare food or pour water for others when you have salmonellosis.
- Wash your hands after contact with animal feces. Since reptiles are particularly likely to carry salmonella bacteria, wash your hands immediately after handling them. Consider not having reptiles (including turtles) as pets, especially if you have small children or an infant.
Current as of: January 26, 2020
Author: Healthwise Staff
Medical Review: E. Gregory Thompson, MD - Internal Medicine
Anne C. Poinier, MD - Internal Medicine
Adam Husney, MD - Family Medicine
W. David Colby IV, MSc, MD, FRCPC - Infectious Disease |
When is an internal combustion engine not an internal combustion engine? When it's been transformed into a modular reforming reactor that could make hydrogen available to power fuel cells wherever there's a natural gas supply available.
By adding a catalyst, a hydrogen separating membrane and carbon dioxide sorbent to the century-old four-stroke engine cycle, researchers have demonstrated a laboratory-scale hydrogen reforming system that produces the green fuel at relatively low temperature in a process that can be scaled up or down to meet specific needs. The process could provide hydrogen at the point of use for residential fuel cells or neighborhood power plants, electricity and power production in natural-gas powered vehicles, fueling of municipal buses or other hydrogen-based vehicles, and supplementing intermittent renewable energy sources such as photovoltaics.
Known as the CO2/H2 Active Membrane Piston (CHAMP) reactor, the device operates at temperatures much lower than conventional steam reforming processes, consumes substantially less water and could also operate on other fuels such as methanol or bio-derived feedstock. It also captures and concentrates carbon dioxide emissions, a by-product that now lacks a secondary use — though that could change in the future.
Unlike conventional engines that run at thousands of revolutions per minute, the reactor operates at only a few cycles per minute — or more slowly — depending on the reactor scale and required rate of hydrogen production. And there are no spark plugs because there's no fuel combusted.
"We already have a nationwide natural gas distribution infrastructure, so it's much better to produce hydrogen at the point of use rather than trying to distribute it," said Andrei Fedorov, a Georgia Institute of Technology professor who's been working on CHAMP since 2008. "Our technology could produce this fuel of choice wherever natural gas is available, which could resolve one of the major challenges with the hydrogen economy."
A paper published February 9 in the journal Industrial & Engineering Chemistry Research describes the operating model of the CHAMP process, including a critical step of internally adsorbing carbon dioxide, a byproduct of the methane reforming process, so it can be concentrated and expelled from the reactor for capture, storage or utilization. Other implementations of the system have been reported as thesis work by three Georgia Tech Ph.D. graduates since the project began in 2008. The research was supported by the National Science Foundation, the Department of Defense through NDSEG fellowships, and the U.S. Civilian Research & Development Foundation (CRDF Global).
Key to the reaction process is the variable volume provided by a piston rising and falling in a cylinder. As with a conventional engine, a valve controls the flow of gases into and out of the reactor as the piston moves up and down. The four-stroke system works like this (a toy code walk-through follows the list):
- Natural gas (methane) and steam are drawn into the reaction cylinder through a valve as the piston inside is lowered. The valve closes once the piston reaches the bottom of the cylinder.
- The piston rises into the cylinder, compressing the steam and methane as the reactor is heated. Once it reaches approximately 400 degrees Celsius, catalytic reactions take place inside the reactor, forming hydrogen and carbon dioxide. The hydrogen exits through a selective membrane, and the pressurized carbon dioxide is adsorbed by the sorbent material, which is mixed with the catalyst.
- Once the hydrogen has exited the reactor and carbon dioxide is tied up in the sorbent, the piston is lowered, reducing the volume (and pressure) in the cylinder. The carbon dioxide is released from the sorbent into the cylinder.
- The piston is again moved up into the chamber and the valve opens, expelling the concentrated carbon dioxide and clearing the reactor for the start of a new cycle.
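A toy Python walk-through of that sequence, offered purely as illustrative pseudocode of the strokes described above rather than anything from the researchers:

```python
# Toy walk-through of one CHAMP cycle: each stroke pairs a piston motion
# with what crosses the reactor boundary. Illustrative only.
STROKES = [
    ("intake",      "piston down", "valve open: draw in methane + steam"),
    ("compression", "piston up",   "valve closed: heat to ~400 C; H2 exits "
                                   "through the membrane, CO2 binds to sorbent"),
    ("expansion",   "piston down", "pressure drops: sorbent releases CO2"),
    ("exhaust",     "piston up",   "valve open: expel concentrated CO2"),
]

def run_cycle(cycles=1):
    for c in range(1, cycles + 1):
        print(f"--- cycle {c} ---")
        for name, motion, action in STROKES:
            print(f"{name:12} | {motion:11} | {action}")

run_cycle(1)
```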
"All of the pieces of the puzzle have come together," said Fedorov, a professor in Georgia Tech's George W. Woodruff School of Mechanical Engineering. "The challenges ahead are primarily economic in nature. Our next step would be to build a pilot-scale CHAMP reactor."
The project was begun to address some of the challenges to the use of hydrogen in fuel cells. Most hydrogen used today is produced in a high-temperature reforming process in which methane is combined with steam at about 900 degrees Celsius. The industrial-scale process requires as many as three water molecules for every molecule of methane, and the resulting low density gas must be transported to where it will be used.
Fedorov's lab first carried out thermodynamic calculations suggesting that the four-stroke process could be modified to produce hydrogen in relatively small amounts where it would be used. The goals of the research were to create a modular reforming process that could operate at between 400 and 500 degrees Celsius, use just two molecules of water for every molecule of methane to produce four hydrogen molecules, be able to scale down to meet the specific needs, and capture the resulting carbon dioxide for potential utilization or sequestration.
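Those targets correspond to the overall reaction CH4 + 2 H2O → CO2 + 4 H2 (steam reforming combined with the water-gas shift). Below is a short mass-balance sketch using textbook stoichiometry, not figures from the paper:

```python
# Molar masses in g/mol (approximate).
M_CH4, M_H2O, M_CO2, M_H2 = 16.04, 18.02, 44.01, 2.016

# Overall reaction: CH4 + 2 H2O -> CO2 + 4 H2
kg_methane = 1.0
mol_ch4 = kg_methane * 1000 / M_CH4     # ~62.3 mol of methane
kg_h2  = mol_ch4 * 4 * M_H2  / 1000     # hydrogen produced
kg_h2o = mol_ch4 * 2 * M_H2O / 1000     # water consumed
kg_co2 = mol_ch4 * 1 * M_CO2 / 1000     # CO2 captured by the sorbent

print(f"per {kg_methane} kg CH4: {kg_h2:.2f} kg H2, "
      f"{kg_h2o:.2f} kg H2O in, {kg_co2:.2f} kg CO2 out")
# -> roughly 0.50 kg H2, 2.25 kg H2O, and 2.74 kg CO2 per kg of methane
```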
"We wanted to completely rethink how we designed reactor systems," said Fedorov. "To gain the kind of efficiency we needed, we realized we'd need to dynamically change the volume of the reactor vessel. We looked at existing mechanical systems that could do this, and realized that this capability could be found in a system that has had more than a century of improvements: the internal combustion engine."
The CHAMP system could be scaled up or down to produce the hundreds of kilograms of hydrogen per day required for a typical automotive refueling station — or a few kilograms for an individual vehicle or residential fuel cell, Fedorov said. The volume and piston speed in the CHAMP reactor can be adjusted to meet hydrogen demands while matching the requirements for the carbon dioxide sorbent regeneration and separation efficiency of the hydrogen membrane. In practical use, multiple reactors would likely be operated together to produce a continuous stream of hydrogen at a desired production level.
"We took the conventional chemical processing plant and created an analog using the magnificent machinery of the internal combustion engine," Fedorov said. "The reactor is scalable and modular, so you could have one module or a hundred of modules depending on how much hydrogen you needed. The processes for reforming fuel, purifying hydrogen and capturing carbon dioxide emission are all combined into one compact system."
This publication is based on work supported by the National Science Foundation (NSF) CBET award 0928716, which was funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5), and by award 61220 of the U.S. Civilian Research & Development Foundation (CRDF Global) and by the National Science Foundation under Cooperative Agreement OISE- 9531011. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NSF or CRDF Global. Graduate work of David M. Anderson, the first author on the paper, was conducted with government support under an award by the DoD, Air Force Office of Scientific Research, National Defense Science and Engineering Graduate (NDSEG) Fellowship, 32 CFR 168a.
CITATION: David M. Anderson, Thomas M. Yun, Peter A. Kottke and Andrei G. Fedorov, "Comprehensive Analysis of Sorption Enhanced Steam Methane Reforming in a Variable Volume Membrane Reactor," (Industrial & Engineering Chemistry Research, 2017). http://dx.doi.org/10.1021/acs.iecr.6b04392
Story Source: Materials provided by Scienmag |
PLEASE HELP LAST QUESTION ON TEST Rock layers made of clay have very small particles. The pores between the particles are not well connected. Will a good aquifer form in a layer of clay? A. Yes, because smaller particles tend to increase permeability by increasing the surface area. B. No, …Read More »
WILL GIVE BRAINLIEST PLEASE HELP! Please select the word from the list that best fits the definition stage in a star’s evolution formed as the core of a main sequence star uses up its helium and the outer layers escape into space
Answered by answersmine AT 22/10/2019 – 03:07 AM Sorry I’m late. White dwarf is a late stage in the life cycle of a comparatively low-mass main sequence star; formed when its core depletes its helium and its outer layers escape into space, leaving behind a hot, dense core. So that would be …Read More »
Who recognized that different layers of rock contained unique collections of fossils? Read More »
Think about an igneous rock at the top of a mountain. If this rock went through the whole rock cycle, number the steps that would follow. 1. Step 1 Regolith is dumped into some type of reservoir, usually water, by deposition. 2. Step 2 Metamorphic rock begins to melt back into magma. 3. Step 3 Regolith is transported or eroded by rain or wind. 4. Step 4 Rock is broken down into regolith by weathering. 5. Step 5 Layers deposited begin to compact and cement together to form sedimentary rock. 6. Step 6 Sedimentary rock is buried deeper into the earth’s crust. The intense heat and pressure begins metamorphism and the rock turns into metamorphic rock. 7. Step 7 Magma is resurfaced from volcanic activity, cools, and becomes igneous rock.
Answer: The correct sequence according to the question is steps 4→1→3→5→6→2→7. Explanation: The given steps show how the rocks are recycled from one form to another due to certain factors, like weathering, over a geological time scale, which was explained by James Hutton, and he termed this concept as …Read More »
An artesian aquifer is: a groundwater storage area between impermeable layers of rock the upper surface of the saturated zone of groundwater that is under pressure a reservoir found in underground caverns a land area that delivers water into a stream or river system
Hello Darlings! Q) What does riparian mean? Answer Choices) A) Riparian means ripe with agriculture B) Riparian means to have or be near water sources. C) Riparian means ancient or respected D) Riparian means anything of Middle Eastern origin The answer I have for you darlings is: B) Riparian means …Read More »
The mountain shown is composed of deformed sedimentary layers. they are located near a tectonic plate boundary and are still increasing in elevation due to –
The correct answer is – Andes Mountains. The Andes Mountain Range is a result of a oceanic-continental convergence. This mountain range stretches alongside the western coast of the South American continent, in close proximity of the convergent boundary between the South American tectonic plate, which is continental one, and the …Read More »
A clothing manufacturer uses lasers to cut patterns in fabric when making T-shirts. Which best describes a benefit of using lasers instead of traditional cutting machines? Laser machinery does not require energy. Laser machinery is safer because its light is weak. Lasers save time by cutting through more layers of fabric. Lasers result in fewer injuries because they quickly become blunt.
Answer: 1) Water has very high specific heat. 2) It expands when it freezes 3) ability to dissolve in ionic substances 4) Water molecules at the surface experience fewer hydrogen bonds than water molecules within the liquid. Explanation: 1) Water molecules due to their high specific heat , undergo relatively …Read More »
The most efficient way to memorize multiple layers of information all at once is to use flashcards. t or f
Answers: 1. Gives one’s opinions about a current problem or issue – B. editorial. Usually a brief piece of writing that expresses the pubishling house’s own view on an issue. 2. Writing in newspapers and magazines – I. article. Articles form an independent part of a publication. 3. The life …Read More »
Which of the Earth’s layers is represented by the number two (2) on the image above? A. the mantle B. the outer core C. the inner core D. the crust
The answer is plate tectonics theory. The idea that earth’s lithosphere is divided into large, moving sections is called the Plate Tectonics Theory. The Plate Tectonics Theory is not attributed to any single person or geologist. In fact, it developed over a number of years due to scientific exploration and …Read More »
Which of the following distinctions are used to identify sedimentary rock? Select all that apply. conditions it was formed under when it was formed where is was formed how many layers it consists of what it is composed of
1. The equation 2NO2↔N2O4 shows a system (1 point) in chemical equilibrium. 2. If an endothermic reaction is in equilibrium, what will happen when you increase the temperature? (1 point) More products form. 3. One way to determine the degree of saturation of a solid-liquid solution is to drop a crystal of the solute into …Read More »
PLEASE HELP ME! A clerk in the sports department of a store is stacking tennis balls in the shape of a square pyramid. The top layer has 1 ball, the second layer has 4 balls, the third layer has 9 balls, and so on, as shown below. Which of the following represents the number of balls in the first 8 layers of the pyramid? a series that is arithmetic a series that is geometric a series that is neither arithmetic nor geometric a series cannot be used to represent this situation.
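For what it's worth, the layer counts are the perfect squares 1, 4, 9, …, 64, so the series is neither arithmetic nor geometric; a quick check (illustrative Python, not part of the original question):

```python
layers = [k * k for k in range(1, 9)]            # balls per layer: 1, 4, 9, ..., 64
diffs  = [b - a for a, b in zip(layers, layers[1:])]
ratios = [b / a for a, b in zip(layers, layers[1:])]

print(layers)       # [1, 4, 9, 16, 25, 36, 49, 64]
print(diffs)        # [3, 5, 7, 9, 11, 13, 15] -> not constant: not arithmetic
print(ratios)       # decreasing ratios -> not geometric
print(sum(layers))  # 204 balls in the first 8 layers
```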
The first step is finding out what percentage of the first sample eat at school. Out of 60 students surveyed, 40 eat school lunch. To find that percentage: 40 / 60 = 0.67 or 67% (approximately) Now, we need to that that information and plug it into our problem. If …Read More »
Which statement is most likely a scientific law? A.Animals need oxygen because it unlocks the energy contained in food. B.Colonies of microscopic organisms grow faster in closed spaces because they cannot fly away. C.The oldest layers of rock lie below newer layers of rock on Earth. D.Healthy foods taste better than dangerous or spoiled foods.
Which of the following is an example of positive correlation? A: the number of students in a school and the number of stores nearby B: the age of a child and the height of that child C: the temperature outside and the number of layers of clothing D: the age of pets in a home and the number of pets in that home
Sphere and right cyl. have the same radius and volume. Thus, find the equations for the volumes of a sphere and a right cylinder and set them equal to one another: Vol. of sphere = Vol. of right cyl. (4/3) pi r^3 …Read More » |
By David Warmflash, MD | 16 June 2017
Genetic Literacy Project
Genetic biotechnology is usually discussed in the context of current and emerging applications here on Earth, and rightly so, since we still live exclusively in our planetary cradle. But as humanity looks outward, we ponder what kind of life we ought to take with us to support outposts and eventually colonies off the Earth.
While the International Space Station (ISS) and the various spacecraft that ferry astronauts on short bouts through space depend on consumables brought up from Earth to maintain life support, this approach will not be practical for extensive lunar missions, much less long term occupation of more distant sites. If we’re to build permanent bases, and eventually colonies, on the Moon, Mars, asteroids, moons of outer planets or in free space, we’ll need recycling life support systems. This means air, water, and food replenished through microorganisms and plants, and it’s not a new idea.
Space exploration enthusiasts have been talking about it for decades, and it’s the most obvious application of microorganisms and plants transplanted from Earth. What is new, however, is the prospect of a comprehensive use of synthetic biology for a wide range of off-Earth outpost and colonization applications.
To this end, considering human outposts on the Moon and Mars, a study from scientists based at NASA Ames Research Center and the University of California at Berkeley examined the potential of genetic technology, not only to achieve biologically based life support systems, but also to facilitate other activities that must be sustained on colony worlds. Not discussed as often with biotechnology and space exploration in the same conversation, these other activities include creation of rocket propellant, synthesis of polymers, and production of pharmaceuticals. Together with the life support system, they paint a picture of the beckoning era of space activity that puts synthetic biology at center stage.
Although written specifically in the context of lunar and Martian outposts, the proposed biologically based technical infrastructure is just as applicable to a colony on less frequently discussed worlds, such as the dwarf planet Ceres or an outer planet moon, or to a colony that orbits in the Earth moon system.
Rocket fuel and life support
As we’ll discuss a little later in connection with rocket fuel, the chemical elements needed — oxygen and nitrogen — are available in and in the vicinity of the places we might put outposts. It’s just that the atoms of these elements are not in a breathable form. Rather they’re combined with atoms of other chemical elements. On Mars, for instance, there’s plenty of oxygen, but not a drop is useful either to mix with propellant in rocket engines, or for humans to breath. That’s because Martian oxygen atoms are bound with carbon atoms in molecules of carbon dioxide (CO2). For humans, CO2 is a waste product; instead, we need to breathe molecular oxygen (O2) to support life functions. But, in the presence of light, photosynthetic organisms, such as plants, algae, and certain bacteria take in CO2 and water (H2O) in and release O2. In the process, they make food.
The moon also has oxygen, but in the form of silicon dioxide (SiO2) in rocks, and both the moon and Mars have sources of water. While there are chemical and electrical methods that can split up and rearrange atoms of some of these compounds without the help of living things, the gist of the NASA/Berkeley conclusion was that by using life forms, especially certain microorganisms, the amount of energy and effort needed to produce a given amount of oxygen can be reduced substantially. The same is true for the production of rocket propellant and for nitrogen, which is needed for human breathing (as N2 gas to dilute O2), to support plants (with the help of bacteria), and for certain types of rocket fuel.
Emphasizing the utility of microorganisms, the study also noted that genetic methods can increase the yields of the needed chemicals. One important example involves a type of microorganism known as cyanobacteria. Descendants of ancient bacteria that are thought to have been the first major suppliers of oxygen gas to Earth’s oceans and atmosphere, cyanobacteria are photosynthetic. Like plants, they consume CO2 and water, releasing O2. The genomes – the collection of genes – of various strains of cyanobacteria are small and their sequences are well known, making the capabilities of these organisms easy to manipulate with genetic engineering. In addition to already being able to use nitrogen directly, they can be enhanced with genes from other microbes with novel energy systems, including those with the capability of generating methane and hydrogen (both useful as rocket fuel).
Food and drug production
The NASA/Berkeley study included an economic analysis showing the power of synthetic biology to produce food mass. Nature’s most famous method for this, photosynthesis, is extremely efficient; thus, colonies on the Moon, Mars, other bodies, or free space colonies will emphasize plant farming, and probably algae-based nutrition as well. You’re unlikely to see big farm animals, such as cows or pigs. They take up far too much land. But, due to their high protein-to-mass ratio, it’s been suggested that space colonists might learn to farm and enjoy insects such as grasshoppers. Furthermore, possibly timed appropriately for space colonization, the technology for synthetic meat beckons. Since colonists will largely live on their own, the NASA/Berkeley report also discussed using synthetic biology for pharmaceutical production.
Adapting life to its new home
Certain regions of Earth feature environments similar to those on planets and moons that humans might colonize. Especially within a division of Earth life known as the Archaea domain, there are various microorganisms that can survive extreme cold and high salinity (thought to characterize sources of underground Martian water, or ancient water on Mars), and certain Archaea are also methane producers. Thus, while not mentioned specifically in the recent report, researchers looking at applying biological methods to space exploration are also looking into the prospect of modifying certain bacteria, such as cyanobacteria, with Archaea genes.
All that mentioned above is but the tip of the iceberg. On Earth, there are organisms that resist radiation, heat, cold, and drying, even to the point of being able to live in the space vacuum. Considering potential space colonization environments compared with our homeworld in terms of gravity, radiation, and various other parameters, there are a lot of traits we might eventually genetically engineer into life forms that we bring to help them survive while they perform their task, whether circulating life support gasses, producing rocket fuel, eating up rock, or even terraforming – changing the colony’s entire environment to make it like Earth.
Reprinted with permission from the author.
If you’re purchasing my book today, consider getting it through Barnes & Noble. It’s 10% off right now and the B&N ranking could really use a bog boost https://t.co/coCDjmnbtI #nonfiction #science #sciencebooks #winterreading #summerreading #Spaceexploration #NASA #SpaceX
— Dr. David Warmflash (@CosmicEvolution) March 11, 2020
The Moon and Human History – David Warmflash
Synthetic Biology In Space | Lisa Nip | TEDxBeaconStreet
How Could Genetic Engineering Affect Space Exploration?
How Close Are We to Harnessing Synthetic Life?
Be sure to ‘like’ us on Facebook |
Being able to listen to what is being said to you is an important precursor to language. Like most skills, this is one that develops over the early years of life, from babies who can only focus on one thing for a few seconds to a child of 6-7 who can switch their attention between a piece of work they are doing and the teacher speaking without too much difficulty. We have written more about the stages of attention and listening development in this post if you want to find out more.
In this post I am going to give you some ideas of how to develop attention and listening skills in a pre-school child. By this, I mean children up to about 4 1/2, though of course some children in this country start school when they are only just 4! At this age I would not expect them to have fully developed skills in this area yet. However, at around this age children are usually able to focus for a while on one thing, and switch their focus from one thing to another and back again without too much support.
If your child is struggling with attention and listening in the early years, here are some strategies to try:-
- Start where the child is at. If your child (or the child you are working with) is only able to attend to an activity for a few seconds, don’t expect them to go straight from there to listening for 5 minutes. Start with those few seconds and gradually add a little bit more. For example, if your child will sit and give joint attention to a book with you for 2 pages, try to work up to 3 pages, then 4. Don’t go too fast, we want each level to be consistent before we move on.
- Use highly motivating activities. Like all of us, children pay more attention to things they are interested in. Use something your child really loves to try to build up their attention span. Let them choose the activity to begin with. Often very multi-sensory toys work well for this – ones with lots to see, hear and feel! Don’t just give your child the toy and let them play – the idea is that the two of you are sharing attention on the same thing. If it is a wind-up toy, take turns to wind it up and watch it go, or see if your child can watch you set it off and request a repeat of the activity. Don’t make it into a battle of wills! Once your child’s attention to the activity has gone, just stop and try again another time.
- Use visuals. This works particularly for listening in a group (remember that this is likely to be harder than listening 1:1). Use visuals to show the child what is expected (eg a now and next board). For some children a particular spot (such as a certain cushion or carpet tile) to sit on works well. For others using a timer to show them how long they need to sit for can be useful. In any case, give them something visual to focus on rather than just words.
- Praise. Remember this is really hard. Keep positive and praise the child whenever they try to sit and attend to something with you.
Attention levels are likely to be different for activities the child has chosen themselves, activities chosen for them by an adult and group vs 1:1 activities. Remember to set different expectations for each of these situations and gradually build each one up. Start with activities your child has chosen. Whatever they are doing, get down on the floor with them and join in their play, adding something simple to what they are doing. However, eventually they will start to move on to being able to follow your lead sometimes too. Here are some activities that I often find useful for attention and listening.
- Balls. There are lots of simple games that you can play where, before the child can have their turn or complete the action, they have to wait for you to say go. A ball is just one example; you could also play a game of throwing a beanbag or running. To begin with you might have only a very small gap; basically just wait for the child to look and then say go. For most young children I might also physically hold on to the ball or whatever we are playing with until I say go as well. Once they get the idea, you hopefully won’t need to do this any more. Gradually make the gap longer, then move on to saying “ready steady go”. Gradually wait a little longer each time before you say go. You can find more ideas about how to use a ball to target all sorts of language skills here.
- Stacking cups and boxes. This game can be played the same as the ball game described above. Build up a stack of cups or boxes (this bit requires joint attention too). Then say “ready steady go” before the child knocks them down. Once they have got the idea of this, you can add another element in too. Use two stacks of different items this time. Then say “ready steady… boxes” or “ready steady… cups”. This way they really have to listen to the instruction to get it correct. You can do a similar thing with a ball too by saying “ready steady roll” or “ready steady bounce”.
- Bubbles. You can do very similar things with these. Bubbles are very motivating for a lot of young children and they are great for encouraging children to look at you and getting joint attention with children with a very limited attention span. You can find more ideas of how to use bubbles in therapy here.
- Books. Some children will naturally sit and look at books. Others need a bit more encouragement. Try interactive books such as ones where you can lift flaps, press buttons to play sounds etc. I even have one book with a hand puppet inside. Don’t worry about reading all the words on the page – to start with, just flip through the pages with your child and point out things of interest in the pictures. Build up to reading the whole story gradually. There are more ideas about using books in therapy here.
What else do you do to help build attention and listening skills in preschool children? |
Is our little corner of the galaxy a special place? As of this date, we’ve discovered more than 1,500 exoplanets: planets orbiting stars other than our sun. Thousands more will be added to the list in the coming years as we confirm planetary candidates by alternative, independent methods.
In the hunt for other planets, we’re especially interested in those that might potentially host life. So we focus our modern exoplanet surveys on planets that might be similar to Earth: low-mass, rocky and with just the right temperature to allow for liquid water. But what about the other planets in the solar system? The Copernican principle – the idea that the Earth and the solar system are not unique or special in the universe – suggests the architecture of our planetary system should be common. But it doesn’t seem to be.
The figure above, called a mass-period diagram, provides a visual way to compare the planets of our solar system with those we’ve spotted farther away. It charts the orbital periods (the time it takes for a planet to make one trip around its central star) and the masses of the planets discovered so far, compared with the properties of solar system planets.
Planets like Earth, Jupiter, Saturn and Uranus occupy “empty” parts of the diagram – we haven’t found other planets with similar masses and orbits so far. At face value, this would indicate that the majority of planetary systems do not resemble our own solar system.
The solar system lacks close-in planets (planets with orbital periods between a few and a few tens of days) and super-Earths (a class of planets with masses a few times the mass of the Earth often detected in other planetary systems). On the other hand, it does feature several long-period gaseous planets with very nearly circular orbits (Jupiter, Saturn, Uranus and Neptune).
Part of this difference is due to selection effects: close-in, massive planets are easier to discover than far-out, low-mass planets. In light of this discovery bias, astronomers Rebecca Martin and Mario Livio convincingly argue that our solar system is actually more typical than it seems at first glance.
There is a sticking point, however: Jupiter still stands out. It’s an outlier based both on its orbital location (with a corresponding period of about 12 years) and its very-close-to-circular orbit. Understanding whether Jupiter’s relative uniqueness is a real feature, or another product of selection effects, has real implications for our understanding of exoplanets.
Throwing its weight around
According to our understanding of how our solar system formed, Jupiter shaped much of the other planets’ early history. Due to its gravity, it influenced the formation of Mars and Saturn. It potentially facilitated the development of life by shielding Earth from cosmic collisions that would have delayed or extinguished it, and by funneling water-rich bodies towards it. And its gravity likely swept the inner solar system of solid debris. Thanks to this clearing action, Jupiter might have prevented the formation of super-Earth planets with massive atmospheres, thereby ensuring that the inner solar system is populated with small, rocky planets with thin atmospheres.
Without Jupiter, it looks unlikely that we’d be here. As a consequence, figuring out if Jupiter is a relatively common type of planet might be crucial to understanding whether terrestrial planets with a similar formation environment as Earth are abundant in the galaxy.
Despite their relative heft, it’s a challenge to discover Jupiter analogs – those planets with periods and masses similar to Jupiter’s. Astronomers typically discover them using an indirect detection technique called the Doppler radial velocity method. The gravitational pull of the planet causes tiny shifts in the wavelength of features in the spectrum of the star, in a distinctive, periodic pattern. We can detect these shifts by periodically capturing the star’s light with a telescope and turning it into a spectrum with a spectrograph. This periodic signal, based on a planet’s long orbital period, can require monitoring a star over many years, even decades.
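To get a feel for the size of the signal involved, here is a rough back-of-the-envelope sketch in Python (an illustration only, not the analysis tooling described below; the 28.4 m/s constant is the standard radial-velocity scaling for a Jupiter-mass planet in a one-year circular orbit around a sun-like star):

```python
# Approximate radial-velocity semi-amplitude a planet induces on its star,
# assuming a circular orbit viewed edge-on (sin i = 1).
def rv_semi_amplitude_ms(planet_mass_mjup: float, period_years: float,
                         star_mass_msun: float = 1.0) -> float:
    """Return the stellar wobble K in meters per second."""
    return (28.4 * planet_mass_mjup
            * period_years ** (-1.0 / 3.0)
            * star_mass_msun ** (-2.0 / 3.0))

# A true Jupiter analog: one Jupiter mass in an 11.86-year orbit.
print(rv_semi_amplitude_ms(1.0, 11.86))         # ~12.5 m/s
# A close-in "hot Jupiter" with a 3-day period, by contrast:
print(rv_semi_amplitude_ms(1.0, 3.0 / 365.25))  # ~141 m/s
```

The contrast between those two numbers is the selection effect in action: the same planet produces a signal more than ten times larger, repeating every few days rather than once a decade, when it orbits close in.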
Are Jupiter-like planets rare?
In a recent paper, Dominick Rowan, a high school senior from New York, and his coauthors (including astronomers from the University of Texas, the University of California at Santa Cruz and me) analyzed the Doppler data for more than 1,100 stars. Each star was observed with the Keck Observatory telescope in Hawaii; many of them had been monitored for a decade or more. To analyze the data, he used the open-source statistical environment R together with a freely available application that I developed, called Systemic. Many universities use an online version to teach how to analyze astronomical data.
Our team studied the available data for each star and calculated the probability that a Jupiter-like planet could have been missed – either because not enough data are available, or because the data are not of high enough quality. To do this, we simulated hundreds of millions of possible scenarios. Each was created with a computer algorithm and represents a set of alternative possible observations. This procedure makes it possible to infer how many Jupiter analogs (both discovered and undiscovered) orbited the sample of 1,100 stars.
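A toy version of that completeness correction might look like the following sketch (the numbers here are invented for illustration; the actual study simulated hundreds of millions of scenarios, in R and Systemic rather than Python):

```python
import random

random.seed(0)

# Hypothetical per-star probabilities that a Jupiter analog, if present,
# would have been detected given that star's data span and quality.
completeness = [random.uniform(0.2, 0.9) for _ in range(1100)]

detected = 20  # hypothetical count of Jupiter analogs actually found

# If each star hosts a Jupiter analog with frequency f, the expected number
# of detections is f * sum(completeness); invert that to estimate f.
f_hat = detected / sum(completeness)
print(f"Estimated frequency of Jupiter analogs: {f_hat:.1%}")
```

Because every star's undetected planets are accounted for through its completeness value, the estimate corrects for the survey's blind spots rather than simply dividing detections by the number of stars.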
While carrying out this analysis, we discovered a new Jupiter-like planet orbiting HD 32963, which is a star very similar to the sun in terms of age and physical properties. To make this discovery, we analyzed each star with an automated algorithm that tried to uncover periodic signals potentially associated with the presence of a planet.
We pinpointed the frequency of Jupiter analogs across the survey at approximately 3%. This result is broadly consistent with previous estimates, which were based on a smaller set of stars or a different discovery technique. It greatly strengthens earlier predictions because we took decades of observations into account in the simulations.
This result has several consequences. First, the relative rarity of Jupiter-like planets indicates that true solar system analogs should themselves be rare. By extension, given the important role that Jupiter played at all stages of the formation of the solar system, Earth-like habitable planets with similar formation history to our solar system will be rare.
Finally, it also underscores that Jupiter-like planets do not form as readily around stars as other types of planets do. It could be because not enough solid material is available, or because these gas giants migrate closer to the central stars very efficiently. Recent planet-formation simulations tentatively bear out the latter explanation.
Long-running, ongoing surveys will continue to help us understand the architecture of the outer regions of planetary systems. Programs including the Keck planet search and the McDonald Planet Search have been accumulating data for decades. Discovering ice giants similar to Uranus and Neptune will be even tougher than tracking down these Jupiter analogs. Because of their long orbital periods (84 and 164 years) and the very small Doppler shifts they induce on their central stars (tens of times smaller than a Jupiter-like planet), the detection of Uranus and Neptune analogs lies far in the future. |
Welcome to Skylark Class
In Terms 3 and 4 we are exploring the project
"What is life like in the Polar Regions?"
Our work for the next two terms will be finding out about the South Pole and North Pole. We will explore the habitats in both of the Polar Regions, finding out about the animals and geography of the areas. We will be creating art pieces using different techniques, depicting the landscape and animals. We will focus on the life cycle of a penguin and what it is like for them to live in Antarctica. We will then travel to the North Pole, looking at the animals and landscape. As part of the work about the North Pole, we will find out about the Inuit people and the Iditarod Race, including why this race is run on a particular route.
Reading is a very important skill for your child to develop. It is important that your child is able to read the words and to understand the text he/she has read. Below is a sheet with questions you can ask your child to help them understand the text. |
Collocations are one of the parts of English grammar that are hardest to pick up when you start learning. The good news is that you assimilate them gradually and naturally over time. The bad news is that, in the meantime, they can lead to some very funny mistakes. Here we explain what collocations are and why you should pay attention to them.
What is a collocation?
When words are regularly used together, rules are created about their use, not for grammatical reasons but through simple association. “Black and white”, for example, appears in that order by collocation; it is always used that way, and reversing it to “white and black” sounds wrong.
For the same reason, we “make a mistake” rather than “do a mistake”, and “do a test” rather than “make a test”. In these examples, the reason for using these verbs is that we have always done it the same way: this is collocation.
Knowledge of collocations is vital for the correct use of the language and for the adequate translation of a text from Spanish to English, since a grammatically correct sentence may sound odd if collocation preferences are ignored.
How to memorize collocations
Another problem with learning collocations is that they do not follow a standard pattern and must be learned by heart. What can you do to learn these details that will make you sound like a real native? Very easy:
Note down and study vocabulary in context. You will learn collocations without realizing it.
Take note of your mistakes. Every correction you receive in class is a very valuable lesson, take advantage of it!
Listen and learn from your teachers, movies and series, the radio… it is not only written texts that hold information for you; being exposed to spoken English will improve your level of English.
You already know the most important things about collocations; now it’s time to get to work. Almost every text you come across will contain more than one, so… pay attention!
Why learn collocations?
You will sound more natural and be understood more easily. You will express yourself in a way much closer to how a native English speaker does.
You will have alternative and richer ways to express yourself in English.
It is easier for our brains to learn like this: remembering sets of words is easier than remembering single words.
If you are preparing for an English test, collocations can make the difference between passing the exam or not. In fact, it is said that they feature prominently in the Cambridge Advanced exam.
Types of collocation
Grammatical collocations result from the combination of a main word (noun, adjective or verb) plus a preposition, or to + infinitive, or that + clause. Linguists speak of eight types of grammatical collocations:
- Noun + preposition: apathy towards, dissatisfaction with, differences with, the reason for …
- Noun + to-infinitive: I felt the urge to do it, It was a pleasure to see you, they made an attempt to do it.
- Noun + that clause: We reached an agreement that she would come with us.
- Preposition + noun: by chance, at random, in pain.
- Adjective + preposition: keen on sports, fond of music, hungry for knowledge, angry at the children.
- Adjective + to-infinitive: it’s nice to be here, it’s necessary to work on that issue.
- Adjective + that clause: They were afraid that they would not win the match.
- Verb patterns: verb + to-infinitive (She began to cry), or verb + bare infinitive (we must do it).
Lexical collocations are constructions in which a verb, noun, adjective or adverb forms a predictable connection with another word:
- Noun + Noun: a pang of guilt, a piece of advice.
- Adverb + Adjective: terribly excited, unduly pessimistic
- Adjective + Noun: merry Christmas, best regards
- Noun + Verb: cats meow, alarms go off.
- Verb + Noun: keep an eye
- Verb + Expression With Preposition: apply for a job
- Verb + Adverb: drive dangerously, research thoroughly
Collocations with HAVE
Enjoy the following worksheet, where we collect all the collocations with HAVE.
Collocations with DO
Enjoy the following worksheet, where we collect all the collocations with DO. |
The Side Angle Side postulate (often abbreviated as SAS) states that if two sides and the included angle of one triangle are congruent to two sides and the included angle of another triangle, then these two triangles are congruent.
$$ \triangle ABC \cong \triangle XYZ $$
The included angle means the angle between two sides. In other words, it is the angle 'included between' the two sides.
Identify Side Angle Side Relationships
Side Angle Side Practice Proofs
Given: 1) Point C is the midpoint of BF; 2) AC = CE
Prove: $$ \triangle ABC \cong \triangle EFC $$
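One way this proof might run (a sketch, assuming the usual X-shaped figure in which segments BF and AE cross at point C, so the given sides meet at a pair of vertical angles):

$$ \begin{array}{lll} 1. & C \text{ is the midpoint of } \overline{BF} & \text{Given} \\ 2. & \overline{BC} \cong \overline{FC} & \text{Definition of midpoint} \\ 3. & \overline{AC} \cong \overline{EC} & \text{Given} \\ 4. & \angle BCA \cong \angle FCE & \text{Vertical angles are congruent} \\ 5. & \triangle ABC \cong \triangle EFC & \text{SAS, from statements 2, 4 and 3} \end{array} $$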
Side Angle Side Example Proof
Given: HJ is a perpendicular bisector of KI
Prove: $$ \triangle BCD \cong \triangle BAD $$
Side Angle Side Activity
Below is the proof that two triangles are congruent by Side Angle Side.
Can you imagine, or draw on a piece of paper, two triangles $$ \triangle BCA \cong \triangle XCY $$ whose diagram would be consistent with the Side Angle Side proof shown below? |
Average minimum temperatures in winter and maximum temperatures in summer appear in Figure 10. The range of more than 100 Fahrenheit degrees between summer high and winter low temperatures in the Interior is characteristic of the continental zone, just as a range of only about 40 Fahrenheit degrees is typical of the maritime zone to the south. The National Weather Service, the official weather reporting and recording agency of the federal government, reported 100 degrees F (37.8 degrees C) at Fort Yukon on June 27, 1915, as the highest recorded temperature in the state. The lowest recorded temperature was minus 80 degrees F (-62.8 degrees C) at Prospect Creek, about 25 miles southeast of Bettles, on January 23, 1971.
Figure 11 shows the diurnal temperature variation—the difference between the mean annual high and mean annual low temperatures for each day—throughout the state. The diurnal temperature variation in the Interior is nearly double that along the coastlines, but it is considerably less than that in continental locations closer to the equator where more heat is gained during the day and lost at night. During the long periods of sunlight in summer and little or no sunlight in winter, interior locations show little variation in heat lost or gained from day to night. The most important influence on temperature in the Interior year-round is the presence or absence of clouds. Clouds retard heat loss to the atmosphere in winter and reduce heat gain during the summer, thereby moderating both winter and summer temperatures.
Figure 22. Seasonal Daylight Pattern
Adapted from E.H. Buck, et al., 1976. Kadyak: A Background for Living
The climate of a particular place is the product of several factors including latitude, altitude, presence of mountain barriers, speed and direction of prevailing wind, and insolation. In interior Alaska latitude superimposes its influence over all others (Wolff and Harding 1967). The climate of the Yukon Region can be generally classified as continental with the exception of a transitional zone in the lower Yukon. However, to be fully understood, climate must be described in terms of local variables and conditions which produce microclimates that may vary within short distances.
The Interior has recorded both the all-time state high temperature of 100 degrees F (37.8 degrees C) at Fort Yukon and the lowest temperature of minus 80 degrees F (-62.8 degrees C) at Prospect Creek. Figure 19 shows climate statistics from 35 stations in the area. Most are located in valleys adjacent to lakes and streams and are not representative of climatic conditions at higher elevations. Data from Arctic Village, with an elevation of 2,020 feet, are included as an indication of the climate at higher altitudes, even though the period of record is only three years.
Summer Fahrenheit temperatures range from the upper 30s to the upper 60s (about 5 to 20 degrees C) with extreme temperatures over 90 degrees F (32.2 degrees C) not uncommon. Winter temperatures range from the minus 20s to plus 20s (about -29 to -7 degrees C) with extreme low temperatures in the minus 60s not uncommon. Coastal temperatures seldom rise above 60 degrees F (15.5 degrees C) in summer or drop below minus five degrees F (-20.6 degrees C) in winter due to the modifying effect of adjacent open waters and marshy areas of the Yukon delta. Frequently, low wind-chill temperatures in the region further reduce the efficiency of men and machinery that work outside during winter. The long summer days, however, allow outside activity 24 hours a day for several months of the year.
Precipitation averages seven to 39 inches annually, and most of that occurs in late summer and early fall as rain and rainshowers. Storms occur year-round but are most frequent in late summer and early fall when the primary storm track penetrates the interior of the state (Figure 5). With few exceptions the highest precipitation occurs in the western part of the region.
The Yukon Region has the greatest area in the state with soils suitable for agriculture (Figure 149). Rainfall amounts are adequate for some crops, but too much rain comes late in the season when crops are maturing instead of during the period of rapid growth. The length of the growing season and number of growing degree days are adequate for some grasses and garden crops.
Shipping on the navigable rivers and streams is only possible about five months of the year when the streams are free of ice (Figure 31). Much of the area is accessible only by air, so aviation is very important to transportation and commerce in the region. Flying weather is generally good year-round since the mountain barriers to the north and south protect the area. Exceptions are the high frequency of ice fog in the winter and the showers and thunderstorms of late summer.
Winds are moderate to strong near the coast but are generally light over the remainder of the area. Several exceptions are Windy Pass near Mt. McKinley and Isabel Pass near Big Delta where the terrain channels wind and increases its speed.
Fort Wainwright after the flood of August 1967, which inundated much of the area bordering the Tanana River. Both the areas around Fairbanks and farther downstream at Nenana were partially covered by the flood waters. Precipitation amounts on August 12 exceeded the monthly average for August at several locations.
The central part of the Yukon Region has high winter pollution potential because of the duration of calm winds. Fairbanks, the state's second largest city, has the most severe problem because, in addition to prevalent calm conditions, abundant pollution sources also exist. Instead of terrain acting as a "trap" for pollution such as in Anchorage, persistent inversions in the Fairbanks area concentrate the pollution and prevent dispersal.
The percentage of calm winds varies with the season. For example, the annual calm wind average for Fairbanks is 21 percent; however, February is much higher with 44 percent and May is the lowest with nine percent. Conditions can vary in relatively short distances. Eielson Air Force Base, less than 40 miles from Fairbanks, has an annual percentage calm of 41 and seasonally varies from 55 in January to 21 in June. Percentage of calm on an annual basis can be found in Figure 39. A high pressure system that moves into central Alaska from Siberia often becomes stationary over the Interior, and calm winds can persist for many days.
Temperature inversions compound the problem. Like calm winds, they are most pronounced in winter. When a strong high pressure system settles over interior Alaska in winter, skies clear, and without clouds to stop the outgoing heat radiation, the air at the surface cools and an inversion forms. Since the air in the inversion is very stable, there is no vertical mixing and the pollution concentrates.
Inversion conditions in summer usually occur during the cooler nighttime hours, disappearing during the warmest time of day, so pollution is infrequent. Average summer winds in the Fairbanks area, 75 percent faster than those of winter, combined with the low precipitation totals, about 11 to 15 inches per year, cause high particulate levels in the air in summer and fall. Most of the surface soil is wind-blown loess derived from glacial outwash, so when it dries in summer, it is easily blown around in the atmosphere by wind. After the area has snow cover, measured concentrations are reduced to approximately one-third of summer levels. The Environmental Protection Agency's (EPA) standard for concentrations of particulates has been exceeded an average of eight times a year in the last six years, varying from a low of once a year to as high as sixteen. Smoke from tundra and forest fires is another source of pollution in summer and can substantially reduce both horizontal and vertical visibility.
Inversion statistics for the Yukon Region are only available from Fairbanks and are presented in Figures 41 and 42. Central Alaska inversion conditions are shown in Figure 43, and a comparison with other areas of the state is shown in Figure 45.
The major contributors to pollution in Alaska are automobiles and residential and commercial heating. The only important industrial sources are community power plants. Unfortunately, during winter when the potential for concentrating a pollutant is greatest is when these sources of pollution are producing at peak capacities. The primary problem identified is the concentration of carbon monoxide (CO) in the air. EPA established a CO alert level of 15 parts per million (ppm) and a standard of 9 ppm. In 1974-75 the standard was exceeded 138 times, and alert conditions were reached 39 times. However, newer motor vehicles with anti-pollution devices, improved vehicle traffic patterns, and more favorable weather conditions during the 1975-76 winter reduced the carbon monoxide levels to the point that the standard was exceeded only 70 times and alert conditions reached only 15 times, or almost half those of the previous year.
Population growth in Fairbanks aggravates the problem. Modifying meteorological conditions is extremely unlikely, so the improvements will have to occur by a change in life-style or technological advances. To lower pollutant concentration levels, sources will have to be limited, at least during critical high concentration periods.
According to Wolff and Haring (1967), ice fog is produced by water vapor discharged during cold weather of minus 25 degrees F (-32 degrees C) or lower. Aside from a few natural sources, most of the vapor comes from automobiles, power plants, and domestic heating. It is usually about 30 feet (9.1 m) thick, seldom more than 100 feet (30.5 m), although thicknesses of 160 feet (50 m) have been observed directly over Fairbanks during long cold spells.
When air containing condensation nuclei cools to varying degrees below the freezing temperature of the liquid water (fog) in the air, tiny ice crystals form. The more polluted the air, the higher the temperature for formation of ice crystals. As the air continues to cool, it can hold less and less moisture. This moisture condenses on already frozen crystals, so that the air and crystals cool slowly. The crystals grow in size, producing beautiful displays of "diamond dust" and "sun dogs." The crystals are well-formed, relatively sparsely distributed, and range in size from 10 to 100 microns. Thus, they present no hazard to visibility and settle rapidly.
Warm exhaust gases discharged into the air may cool 150 Centigrade degrees in a few seconds. Many very small crystals (10 microns) form and create a serious visibility problem. Once ice crystals form, they act as heat sinks from which heat is radiated faster than from air.
Ordinarily, air cools with increased altitude, air moves horizontally and vertically, and the resulting turbulence mixes and clears the air. In cold snow-covered areas, however, radiation from the earth's surface cools the air, causing the gradient to reverse from cold to warm upward, which creates an inversion limiting mixing in the lower atmosphere. The inversion and the ice fog become thicker and more intense as extreme cold continues. In short, conditions for serious pollution problems in the Fairbanks area in winter could hardly be worse, and as population grows, so does the problem.
Figure 44 shows three areas covering 24, 38, and 64 square miles, respectively. The inner area is covered whenever ice fog is present, the next when ice fog continues several days, and the largest area only during prolonged spells of very cold weather. The fog is not continuous, especially near the outer boundaries.
Sea ice is common in the northern Bering Sea, occasionally covering the entire area from late autumn through early spring (Figure 53). Wind-induced ice movement causes ice ridge and hummock formation (convergence of ice floes), much as it does on the Arctic Ocean, although ridges probably do not attain as great a height because of the thinner, one-year ice. A discontinuous ice cover is present as a changing mass of irregular fields, floes, and cakes intersected by numerous breaks and leads. One-year winter ice in the Bering Sea generally averages only two to four feet thick. Ice formation in the northern Bering Sea usually begins during November when the permanent polar ice pack reverses direction and starts to move southward from the northern Chukchi Sea. Movement generally proceeds toward Siberia, evidenced by impingement of ice against the USSR coast, while the area adjacent to Alaska or the central Bering Sea and Bering Strait has a well-developed shore lead. Shore ice begins to freeze in early December and increases in thickness until late April. It is found as far south as Nunivak Island.
The ice edge reaches its maximum southward position during late March, both because of temperature and persistent northerly winds which tend to drive the pack ice south. In addition, a southwesterly or westerly drift in the southern Bering Sea between November and April causes a seaward flow of ice from the more shallow area where it forms near the Alaskan coast (Potocsky 1975). In April the ice begins to break up and melt, and the ice edge retreats northward. The general southern limit of sea ice in the Bering Sea is from northern Bristol Bay to the vicinity of St. George Island in the Pribilof Islands. In extreme years, ice may extend as far south as Unimak Island. South of this boundary there is little or no heavy ice, but north of it the Bering Sea has a 50 percent cover over much of its surface for five months of the year.
Ice in the Bering Sea starts to break up in June at the west coast of Alaska. By the end of June the Yukon delta and Norton Sound areas are essentially ice-free and the Bering Strait is almost entirely open. The middle Bering Sea is usually the last part to become ice-free, but ice sometimes stays in the bays and around islands longer than in the open sea (Anderson 1963). The movement and position of the ice depends greatly on wind conditions. River ice breaks up and forms earlier than sea ice; consequently, ice jams and flooding occur in the river deltas and lower reaches during spring.
By the beginning of June the whole body of ice is near St. Lawrence Island, and a passage opens in the western Bering Sea. The eastern side is generally obstructed later than the western side. By late June or early July, the Bering Sea is essentially free of ice. Ice concentration in areas north of Bering Strait continues to decrease as summer progresses. The ice retreats further into the northern Chukchi Sea, eventually merging into one continuous edge that reaches a maximum northward position during the latter half of September. Winter ice in the southern Bering Sea ranges from 12 to 28 inches (30 to 71 cm), while in the northern Bering Sea it ranges from 28 to 48 inches (71 to 122 cm).
Freezeup and breakup conditions are important to man in this environment since they affect marine activities. Figure 54 illustrates mean freezeup and breakup information for selected locations.
Young children are famously active, flitting from one activity to another with energy to burn. But some toddlers and preschoolers are more than simply super-active and actually suffer from attention deficit/hyperactivity disorder (ADHD). How can you tell the difference? That distinction is best left to a pediatrician or specialist, but one important difference to watch for is this: High-energy, super-active preschoolers without ADHD can usually focus when necessary to put away toys, do a puzzle, or sit still for a story. Kids with ADHD can’t. They exhibit behavior that disrupts daily activities and relationships in a major way and in more than one setting for at least six months’ time. Here are some specific symptoms to look for.
Your child is inattentive, meaning she:
- Has difficulty concentrating or focusing
- Talks or thinks about things that aren’t related to the topic at hand
- Avoids tasks that she doesn’t want to do by lying or becoming angry about them
- Appears not to listen
- Has difficulty organizing, planning, and finishing work on time
- Frequently loses the things she needs, like toys, pencils, schoolwork
- Has trouble controlling her behavior in new or different settings or situations
Your child shows signs of hyperactivity or poor impulse control if she:
- Seems to be in constant motion, fidgeting all the time like a motor running on high speed
- Can’t remain seated when told to — she touches everything, taps her pencil, wiggles her feet.
- Talks all the time (more than a typical chatty preschooler)
- Often interrupts conversations and games
- Is unable to play quietly at all
- Is impatient and intrusive
Is It ADHD or Autism Spectrum Disorder (ASD)?
Sometimes the symptoms of ADHD mirror those of autism, another brain disorder that makes it tough for kids to interact and communicate with parents, caregivers, and playmates. But what defines ADHD — and distinguishes it from autism spectrum disorder (ASD) — is the inability to focus; children with ASD, on the other hand, may hyperfocus, often to the complete exclusion of others.
Children with ADHD don’t usually engage in the ritualistic behavior that kids with ASD are known for, either, from head banging to meticulously lining up their toys. ADHD kids can be outgoing and interested in the people around them; again, autistic children are not. A child with ASD doesn't have a clear understanding of right and wrong or of most types of social and emotional behavior. ADHD kids know the difference but get defiant when they don’t want to do what’s been asked of them.
Other differences: A child with ADHD may irritate or offend others and knows it. An autistic kid usually isn’t aware of the other person’s reaction. Kids with autism truly don’t know why others might be upset by what they do; so while ADHD kids may cry (usually tears of frustration), ASD kids typically don’t. Also, experts say autism can be reliably diagnosed by the time a toddler is two, which isn’t the case with ADHD. Although it is possible for children to be autistic and have ADHD, it is fairly rare.
Read on for ways to treat ADHD in children. |
A Study in Septuagint Translation Technique
Chapter One: Introduction
Not just anyone can write a great story. For great stories are more than mere words and grammar. Great stories are more than the reporting of events. Great stories are carefully crafted pieces of art that have a literary soul and life. They are produced by authors who carefully select content and manipulate form to maintain their reader's interest and to shape their reader's response. Narrative is art with a message. But what happens to that art and its message when it is translated into another language? What kind of storytellers are the translators? Translation technique analysis must pursue this question. The Hebrew Bible is a book of great stories, stories that many value as communication from God. In the third century B.C.E., the great stories of the Hebrew Bible needed to live at Jewish dinner tables where Greek, not Hebrew, was the language of choice. Thus the Septuagint was born and translators became storytellers. Scholars have carefully analyzed the translation style of the Septuagint translators. Like all translators, their work may be viewed as a series of decisions. For example, the Septuagint writer would regularly encounter a series of waw-consecutive clauses in his Hebrew text. No single Greek equivalent would be appropriate for translating all of them. Since the original language and target language lacked absolute parity in their linguistic structure, the translator was required to make a decision. When that same translator encountered a metaphor whose literal sense no longer communicated well in the...
Oracy in the Curriculum
How can teachers support oracy in their classrooms?
Speech and communication lie at the heart of classroom practice. Talk is the predominant way in which teachers provide instruction and support to their children and is central to how most students engage with the curriculum.
A recent article studied by English faculty members at Sandal Castle examines how teachers can support oracy in the classroom, drawing on research commissioned by Voice 21, an organisation working with UK schools to support the teaching of spoken communication skills, and undertaken by LKMco, a think tank working across the education and policy sectors.
What is ‘oracy’?
Oracy can be seen as an outcome, whereby children learn to talk confidently, appropriately and sensitively. The article focuses on oracy as a process, whereby children learn through talk, deepening their understanding through dialogue with their teachers and peers (Alexander, 2012). Oracy involves teachers and their classes thinking carefully and deliberately about the sorts of spoken language they are using, and this will vary across subjects and with different age groups. Different types of talk will be appropriate at different points in the learning cycle, and Robin Alexander outlines five key types of ‘teaching talk’ (Alexander, 2008):
- Rote: imparting knowledge by getting students to repeat key pieces of information to impart facts, ideas and routines.
- Recitation: using questions to test students’ knowledge and understanding, to check students’ progress, and stimulate recall.
- Instruction: telling students what to do and explaining key facts, principles or processes in order to transmit information.
- Discussion: encouraging the exchange of ideas within a class, to share information.
- Dialogue: using structured questions and discussion, helping students deepen understanding of key knowledge, principles and processes.
What are the benefits of developing teachers’ and students’ oracy?
Developing classroom talk has a wide range of benefits on the outcomes of children during school, and beyond. In particular, structured dialogue during lessons, where students are encouraged to participate verbally and given space and time to reflect upon and discuss complex ideas, is linked with:
- Cognitive gains, including improved results in English, maths and science, the retention of subject-specific knowledge, and ‘transference’ of reasoning skills across subject areas (Jay et al., 2017);
- Personal and social gains, including attitudes towards learning, enhanced self-esteem and self-confidence, and a reduction in anxiety (Hanley et al., 2015; Gorard et al., 2015); and
- Civic engagement and empowerment, increasing children and young people’s ability to debate issues, while also increasing understanding about social issues and ability to manage differences with others (Nagda and Gurin, 2007).
Recent Education Endowment Foundation-funded evaluations indicate that raising the quality and rigour of classroom talk has a range of positive academic, personal and social outcomes (Gorard et al., 2015; Hanley et al., 2015), as well as benefits for teachers’ confidence (Jay et al., 2017).
We have looked at the work of School 21 to support our work in school.
Learning through talk: Deepening subject knowledge through oracy
What could an oracy-rich classroom look like and how could it support students to refine their subject knowledge and develop their understanding?
At School 21, in Stratford, East London, teachers provide students with opportunities to learn both to talk and through talk. In practice, this means that students are encouraged to develop and revise their understanding through sustained and productive dialogue with their peers. When engaging in discussion, for example, students must have a system for turn-taking, and they must ensure that everyone has a chance to contribute and that when somebody speaks, their ideas are respected. Introducing ‘ground rules for talk’ (Dawes et al., 2004) has been particularly effective at teaching students the conventions of group talk and ensuring that everybody’s voice is valued.
To ensure that the contributions students make to group discussions improve their reasoning and develop their understanding, students are also taught a number of ‘talk moves’ or ‘roles’. These encourage students to develop and interact with their own and others’ ideas by, for example, challenging, clarifying or probing a group member’s idea. Students are also taught to build or elaborate on each other’s ideas, rather than merely stating their own thoughts with no relation to what has been said previously. They are taught when to introduce a new line of enquiry or summarise a discussion and are encouraged to consider how these ‘moves’ can help further their thinking as a group.
The Oracy Framework, developed in conjunction with teachers at School 21 and Cambridge University, provides a lens through which to view the oracy skills required to engage in effective group talk, and can be an effective way of framing the teaching of these skills (Mercer et al., 2017; see https://impact.chartered.college/article/mercer-identifying-assessing-student-spoken-language-skills/)
The USGS Water Science School
Sediment and Suspended Sediment
Sediment-laden water from a tributary, where development is probably taking place, entering the clearer Chattahoochee River near Atlanta, Georgia
Storms, of course, deliver large amounts of water to a river, but did you know they also bring along lots of eroded soil and debris from the surrounding landscape? Rock particles moved by the water, from grains as small as tiny clay particles to boulders, are called sediment. Fast-moving water can pick up, suspend, and move larger particles more easily than slow-moving waters. This is why rivers are more muddy-looking during storms: they are carrying a LOT more sediment than they carry during a low-flow period. In fact, so much sediment is carried during storms that over one-half of all the sediment moved during a year might be transported during a single storm period.
If you scoop up some muddy river water in a glass you are viewing the suspended sediment in the water. If you leave your glass in a quiet spot for a while the sediment will start to settle to the bottom of the glass. The same thing happens in rivers in spots where the water is not moving so quickly—much of the suspended sediment falls to the stream bed to become bottom sediment (yes, mud). The sediment may build up on the bottom or it may get picked up and suspended again by swift-moving water to move further downstream.
So what does this have to do with people? On the positive side, sediment deposited on the banks and flood plains of a river is often mineral-rich and makes excellent farmland. The fertile floodplains of the Nile in Egypt and of the Mississippi River in the United States have flooding rivers to thank for their excellent soils. On the negative side, when rivers flood, they leave behind many tons of wet, sticky, heavy, and smelly mud—not something you would want in your basement.
Sediment in rivers can also shorten the lifespan of dams and reservoirs. When a river is dammed and a reservoir is created, the sediments that used to flow along with the relatively fast-moving river water are, instead, deposited in the reservoir. This happens because the river water flowing through the reservoir moves too slowly to keep sediment suspended -- the sediment settles to the bottom of the reservoir. Reservoirs slowly fill up with sediment and mud, eventually making them unusable for their intended purposes.
Sediment-data collection in the Little Colorado River a kilometer upstream from the Colorado River, Grand Canyon, Arizona
The U.S. Geological Survey (USGS) does quite a lot of work across the country measuring how much sediment is transported by streams. To do this, both the amount of water flowing past a site (streamflow or flow) and the amount of sediment in that water (sediment concentration) must be measured. Both streamflow and sediment concentration are continually changing.
Streamflow is measured by making a discharge measurement. Suspended sediment, the kind of sediment that is moved in the water itself, is measured by collecting bottles of water and sending them to a lab to determine the concentration. Because the amount of sediment a river can transport changes over time, hydrologists take measurements and samples as streamflow goes up and down during a storm. Once we know how much water is flowing and the amount of sediment in the water at different flow conditions, we can compute the tonnage of sediment that moves past the measurement site during a day, during the storm, and even during the whole year.
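As a rough illustration of that computation (a sketch only, not the USGS's operational procedure, which relies on carefully constructed sediment records), suspended-sediment discharge in tons per day is commonly estimated as streamflow times concentration times a units-conversion constant of 0.0027, for flow in cubic feet per second and concentration in milligrams per liter:

```python
# Suspended-sediment discharge: Qs (tons/day) = 0.0027 * Q (ft^3/s) * C (mg/L).
K_UNITS = 0.0027  # converts (ft^3/s * mg/L) to tons/day

def sediment_tons_per_day(flow_cfs: float, conc_mg_per_liter: float) -> float:
    """Daily suspended-sediment load from mean daily flow and concentration."""
    return K_UNITS * flow_cfs * conc_mg_per_liter

# Hypothetical daily means across a four-day storm: (flow, concentration).
storm_days = [(850.0, 40.0), (4200.0, 620.0), (2900.0, 310.0), (1100.0, 90.0)]

total = sum(sediment_tons_per_day(q, c) for q, c in storm_days)
print(f"Sediment moved during the storm: {total:,.0f} tons")  # ~9,800 tons
```

Note how the single high-flow, high-concentration day dominates the total, which is exactly why one storm can account for over half of a year's sediment transport.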
Early Navigational Techniques
In ancient times, mariners navigated by the guidance of the sun and stars and landmarks along the coast. The Phoenicians were among the most daring of the ancient navigators. They built large ships and, traveling out of sight of land by day and by night, probably circumnavigated Africa. The Polynesians navigated from island to island across the open ocean using observations of guide stars and the moon, the winds and currents, and birds, knowledge of which was passed from generation to generation.
In England, Queen Elizabeth I did much to establish navigation laws, giving additional powers to Trinity House, a guild that had been created in 1514 for the piloting of ships and the regulation of British navigation. During this period the study of bodies of water, or hydrography, was given much attention, and harbors and the outlets of rivers were surveyed and buoyed. A tremendous advance in navigation had taken place with the introduction of the compass. Early in the 15th cent. there was progress by the Portuguese under the leadership of Prince Henry the Navigator, who built an observatory and formulated tables of the declinations of the sun; collected a great amount of nautical information, which he placed in practical form; made charts; and sponsored expeditions that led to numerous discoveries.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. |
I am a firm believer in handing tests back during the following lesson. Sometimes it is definitely painful to get tests corrected so quickly, but I think it's important to provide prompt feedback to the students.
At the start of today's class I hand the Unit 1 Test back and provide students time to talk within their Groups about their answers (MP3). I then ask if there are any problems on which anyone would like more explanation. If so, I explain these on the whiteboard.
In the previous unit we used constructions to find a midpoint. What about finding midpoints on the coordinate plane?
According to Math Open Reference:
If you know the coordinates of a group of points you can: determine the distance between them; find the midpoint, slope, and equation of a line segment; determine whether lines are parallel or perpendicular; and find the area and perimeter of a polygon defined by the points.
In this lesson we will explore a number of these topics, including midpoint, distance, slope, and the slopes of parallel and perpendicular lines. This in turn will set us up for future lessons on finding area and perimeter, studying equations of lines, parabolas, and circles, and exploring transformations.
To begin today's foray into Coordinate Geometry, I have on hand lots of graph paper and I hand out one sheet to each student, along with a straight edge. I have the students fold the graph paper into quarters (as they would say, "Like a hot dog roll and then like a hamburger roll") and draw a set of axes on the top left quadrant of their paper.
I am not particularly picky about having the students fold their papers, label their axes, number the axes from -10 to 10 and so on; in fact, I know my kids have been coached in middle school, for developmentally sound reasons, to do all these things, and usually at least one student will ask about having to do them. I always respond, "These are your notes and you can choose how and what you write on your notes." I stress this because I want to wean them away from relying on their teacher to tell them exactly what to do and how to do it; instead I'd like to help them move toward taking responsibility for their own learning.
My students (the honors or accelerated students) are pretty solid on which axis is which, the direction the numbers go, and so on, and therefore I don't feel the need to emphasize these concepts; I realize, however, that groups of students who are more diverse in ability and experience may, in fact, need a lot more practice on these skills.
I have the students plot the points (6,8) and (8,10) and draw a connecting line segment, and then I ask: What is the midpoint of this line segment? My students are usually able to find it by inspection.
We repeat this process with (1,2) and (5,0), then with (-2,5) and (-6,1), continuing with more if necessary. At this point, I ask a series of questions:
The most common error students make with regard to midpoints is confusing the Midpoint Formula with slope or distance (i.e., "Should I add the coordinates or subtract?") For this reason, I really hammer home the notion that finding the coordinates of midpoints is nothing more than averaging the x-coordinates and averaging the y-coordinates, because they all know how to average two numbers.
I give the students several more pairs of coordinates and ask them to find the midpoints without plotting the points, and, as we go over these, I constantly pose the questions:
Finally, only after all this, do I ask what the midpoint formula might be. Most of the time, the students are easily able to volunteer this, particularly after the discussion relating midpoints to averages. I find it is helpful at this point to take the time to discuss and contrast subscripts and exponents, as there are some students who confuse them. And someone usually asks at this point, "Do we have to use the formula?" My answer to this question is, "No."
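For the record, the formula the class converges on is just those two averages written symbolically, for endpoints $$ (x_1, y_1) $$ and $$ (x_2, y_2) $$:

$$ M = \left( \frac{x_1 + x_2}{2}, \; \frac{y_1 + y_2}{2} \right) $$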
Using the top right-hand quadrant of a sheet of graph paper, I have the students plot the points A(2,2) and B(5,6), and I ask them to figure out the length of line segment AB. I give them time to confer with their groups and, if they seem to be struggling, I remind them of the work that we did in the opening lesson on the isosceles right triangles.
How did we figure out the length of the hypotenuse?
Eventually, perhaps with some further suggestions from me if needed, the students should draw a right triangle with line segment AB as its hypotenuse and use the Pythagorean Theorem to calculate the length of the line segment.
This section of the lesson is a good opportunity to frequently remind students of the different notations used for the length of a segment and the name of the segment. As I write the length of segment AB on the board, I ask various questions about the notation I should use and, as I walk around the room observing the students' work, I watch for their use of notation.
Now, on the bottom left-hand quadrant of the graph paper, I have the students repeat the same process with C(-1,-2) and D(2,4), asking them to find the exact length of line segment CD. When students have successfully completed this, we discuss the triangle that they used to find this length.
I'd like my students to think about their answers whenever they find length or distance, and to always ask themselves, "Does this answer make sense?" (MP1, MP6).
Next I ask the class about the legs of their right triangle. Why does one leg have length 3? Where does this 3 come from? Why does the other leg have length 6? In their answers to these questions, I'm expecting to hear that these values are the differences in the x and y coordinates, but, even better, I'd love to hear that the lengths of the legs represent the distance between the x-coordinates and the distance between the y-coordinates. I have found that finding the lengths of the legs of a triangle makes a lot more sense to some students when I say, "How far is it from -1 to 2 on the number line? How far is it from -2 to 4 on the number line?", particularly for those students who struggle with operations with signed numbers. I will model this line of thought often when doing problems involving distance.
Lastly, I ask the students what the formula for distance would be. I help set the stage for this by reminding them: What process did we use to find distance or length? How did we find the lengths of the legs of the triangle? I give them time in their groups to work on this, while I sketch on the board a line segment with endpoints (x1,y1) and (x2,y2). This helps to suggest notation for those who are struggling with it.
Once we have arrived at the Distance Formula and tried it out on sample coordinates, I take the time to remind the students again that they can simply ask themselves "How far is it? How far is it from one x coordinate to the other? How far is it from one y coordinate to the other?" This helps those students who struggle with subtraction, and helps those students who use subtraction make certain that their calculations make sense.
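Written out, the formula we arrive at, together with the worked example from C(-1,-2) and D(2,4):

$$ d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}, \qquad CD = \sqrt{(2 - (-1))^2 + (4 - (-2))^2} = \sqrt{9 + 36} = \sqrt{45} = 3\sqrt{5} $$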
My students have encountered slope in Algebra and in 8th grade, but, invariably, when I ask them what slope is, they respond mechanically, offering up either "delta y over delta x" or "y equals mx plus b." (Maybe this will change as students are exposed to the Common Core!) When I ask what these phrases mean, I'm almost always met with silence. So we go to the bottom right-hand quadrant on our graph paper and proceed similarly to my midpoint process.
Once we begin to make some progress, I ask the students to plot (3,-5), (6,-5), and draw a line. I'll ask, "What is the slope of this line?"
When we think rise over run, the change in the y-coordinates is 0 and the change in x-coordinates could be any number (other than zero), depending on what points they choose to use. What is 0 divided by a number equal to? We do the same thing with (4,2) and (4,-3). Here the change in the y-coordinates could be any number, but the change in x-coordinates is 0. What is a number divided by 0 equal to?
I ask the students why 0 divided by 2 is 0, while 2 divided by 0 is undefined. My experience has been that they have no idea, so I quickly run through a division by zero conversation. I think this is an important conversation to have with them in terms of number sense, and is also key in helping them to understand why horizontal lines have zero slope and vertical lines have no slope. Once they own this knowledge, they won't have to rely on memorization but will know conceptually what the slope of a horizontal or vertical line is (MP 2).
All of this leads us to the slope formula. Once again, I ask the students to figure out and tell me what the slope formula is. We practice using it on some points that I write on the board, and we revisit the notion that the difference in the x values and the difference in the y values is really just the distance between these values on the number line (MP8).
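The same kind of check works for slope, including the horizontal and vertical special cases discussed above; again, a hedged sketch for illustration rather than part of the lesson:

```python
def slope(p, q):
    """Slope formula: change in y over change in x."""
    dy = q[1] - p[1]
    dx = q[0] - p[0]
    if dx == 0:
        return None   # vertical line: a number divided by 0 is undefined
    return dy / dx    # horizontal line: 0 divided by a number is 0

print(slope((3, -5), (6, -5)))   # 0.0  -> horizontal line, zero slope
print(slope((4, 2), (4, -3)))    # None -> vertical line, no slope
```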
Lastly, I bring up the fact that someone had mentioned "y equals mx plus b" when I asked about slope. What is y = mx + b? At this point, they realize that the m represents the slope of a line and most recall that b is the y-intercept of a line. I explain that we will be working with equations of lines later in the course.
I hand out the Coordinate Geometry Practice problems. I also provide graph paper and small white boards and markers that have the coordinate grid drawn on them. Students can choose which of these tools (MP5) they would like to use.
In this set of problems, the students first practice the concepts of midpoint, distance, and slope. Then they apply their knowledge to two problems that require them to determine which of these concepts they need to use. These problems also require that the students begin to justify their answers.
The students work in their groups, discussing approaches and comparing answers. (MP3) Any problems that are not completed during the class period will be assigned for homework.
With 5 minutes left in the class, I ask the students to complete the Coordinate Geometry Practice problems for homework.
I then hand out the Ticket Out the Door. In this short assignment, I am hoping the students will tie together one of the geometric definitions they have learned previously with the coordinate geometry that they learned in this lesson. It also asks them to justify a congruency relationship using a definition, which is a skill leading directly to the geometric proofs in a future unit. |
Hydrostatic pressure created by the heart forces blood to move through the arteries. Systolic blood pressure, the pressure measured during contraction of the ventricles, averages about 110 mm Hg in arteries of the systemic circulation (for healthy, young adults). The diastolic blood pressure, measured during ventricle relaxation, is about 75 mm Hg in these arteries. As blood travels through the arterial system, resistance from the walls of the blood vessels reduces the pressure and velocity of the blood (see Figure 1). Blood pressure drops sharply in the arterioles and falls to between 40 and 20 mm Hg in the capillaries. Blood pressure falls further in the venules and approaches zero in the veins.
Because blood pressure is so low in venules and veins, two mechanisms assist the return of blood to the heart (venous return):
The muscular pump arises from contractions of skeletal muscles surrounding the veins. The contractions squeeze the veins, forcing the blood to move forward, the only direction it can move when valves in the veins close to prevent backflow.
The respiratory pump is created by the expansion and contraction of the lungs during breathing. During inspiration (inhaling), pressure in the abdominal region increases while pressure in the thoracic cavity decreases. These pressures act upon the veins passing through these regions. As a result, blood flows toward the heart as it moves from regions of higher pressure (the abdomen) to those of lower pressure (the chest and right atrium). When the pressures are reversed during expiration (exhaling), backflow in the veins is prevented by valves. |
History of pathology
The history of pathology can be traced to the earliest application of the scientific method to the field of medicine, a development which occurred in the Middle East during the Islamic Golden Age and in Western Europe during the Italian Renaissance.
Early systematic human dissections were carried out by the Ancient Greek physicians Herophilus of Chalcedon and Erasistratus of Chios in the early part of the third century BC. The first physician known to have made postmortem dissections was the Arabian physician Avenzoar (1091–1161). Rudolf Virchow (1821–1902) is generally recognized to be the father of microscopic pathology. Most early pathologists were also practicing physicians or surgeons.
Origins of pathology
Early understanding of the origins of disease dates to these same periods: the Islamic Golden Age in the Middle East and the Italian Renaissance in Western Europe.
The Greek physician Hippocrates, the founder of scientific medicine, was the first to deal with the anatomy and pathology of the human spine. Galen developed an interest in anatomy from his studies of Herophilus and Erasistratus. The concept of studying disease through the methodical dissection and examination of diseased bodies, organs, and tissues may seem obvious today, but there are few if any recorded examples of true autopsies performed prior to the second millennium. Though the pathology of contagion was understood by Muslim physicians since the time of Avicenna (980–1037), who described it in The Canon of Medicine (c. 1020), the first physician known to have made postmortem dissections was the Arabian physician Avenzoar (1091–1161), who proved that the skin disease scabies was caused by a parasite; he was followed by Ibn al-Nafis (b. 1213), who used dissection to discover the pulmonary circulation in 1242. In the 15th century, anatomic dissection was repeatedly used by the Italian physician Antonio Benivieni (1443–1502) to determine cause of death; Benivieni is also credited with having introduced necropsy to the medical field.
Perhaps the most famous early gross pathologist was Giovanni Morgagni (1682–1771). His magnum opus, De Sedibus et Causis Morborum per Anatomem Indagatis, published in 1761, describes the findings of over 600 partial and complete autopsies, organised anatomically and methodically correlated with the symptoms exhibited by the patients prior to their demise. Although the study of normal anatomy was already well advanced at this date, De Sedibus was one of the first treatises specifically devoted to the correlation of diseased anatomy with clinical illness. By the late 1800s, an exhaustive body of literature had been produced on the gross anatomical findings characteristic of known diseases. The extent of gross pathology research in this period can be epitomized by the work of the Viennese pathologist Carl Rokitansky (1804–1878), originally from Hradec Králové in what is now the Czech Republic, who is said to have performed 20,000 autopsies, and supervised an additional 60,000, in his lifetime.
Origins of microscopic pathology
Rudolf Virchow (1821-1902) is generally recognized to be the father of microscopic pathology. While the compound microscope had been invented approximately 150 years prior, Virchow was one of the first prominent physicians to emphasize the study of manifestations of disease which were visible only at the cellular level. A student of Virchow's, Julius Cohnheim (1839-1884) combined histology techniques with experimental manipulations to study inflammation, making him one of the earliest experimental pathologists. Cohnheim also pioneered the use of the frozen section procedure; a version of this technique is widely employed by modern pathologists to render diagnoses and provide other clinical information intraoperatively.
Modern experimental pathology
As new research techniques, such as electron microscopy, immunohistochemistry, and molecular biology have expanded the means by which biomedical scientists can study disease, the definition and boundaries of investigative pathology have become less distinct. In the broadest sense, nearly all research which links manifestations of disease to identifiable processes in cells, tissues, or organs can be considered experimental pathology.
Other Pertinent Topics
- History of medicine
- Anatomical pathology
- Surgical pathology
- List of pathologists
- United States and Canadian Academy of Pathology
A histograph is a line chart based on a histogram: it is drawn by joining the mid-points of the blocks at their apexes with straight lines. The extreme points of the line are joined to the horizontal ('X') axis (where the mid-point of the respective next class would have been) to form a polygon. In a very large set of data (where the number of classes increases and the difference between their widths decreases) the polygon turns into a smooth curve known as a frequency curve or frequency distribution curve. Histographs (and histograms) are commonly used where the subject item is discrete (such as the number of students in a school) instead of continuous (such as the variations in their heights). Also called a frequency polygon, a histograph is usually preferred over a histogram where the number of classes is eight or more.
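As an illustration of the construction, here is a minimal Python sketch using matplotlib; the class boundaries and frequencies are invented for the example:

```python
import matplotlib.pyplot as plt

edges = [0, 10, 20, 30, 40, 50]   # class boundaries (hypothetical)
freqs = [4, 9, 15, 11, 6]         # frequency of each class (hypothetical)

mids = [(a + b) / 2 for a, b in zip(edges[:-1], edges[1:])]

# Extend to the mid-points of the empty neighbouring classes so the
# polygon closes on the x-axis, as described above.
width = edges[1] - edges[0]
xs = [mids[0] - width] + mids + [mids[-1] + width]
ys = [0] + freqs + [0]

plt.bar(mids, freqs, width=width, alpha=0.3, label="histogram")
plt.plot(xs, ys, marker="o", label="histograph (frequency polygon)")
plt.legend()
plt.show()
```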
Biology is a discipline rich in complex terms, and it can be difficult for newcomers to remember this unusual vocabulary. This fact shapes the particular challenges of teaching biology to foreign students.
The professional vocabulary of the field has a fairly wide range: the organs of the human body, symptoms, diseases, and terms borrowed from chemistry. The fact that scientific biological terms are largely based on Latin eases the process of mastering them somewhat, but the vocabulary is still quite complicated.
A significant place in this training should be given to practical classes that orient foreign learners toward their professional activity and, at the same time, build a higher level of competence in biology and anatomy, which largely determines their language skills in relation to the specific goals and tasks of spoken communication. It is very important for foreign students to see the process of learning the language as a stage of professional preparation for mastering their future specialty. For this purpose, this Internet resource with English tutors can be useful: https://preply.com/en/skype/english-tutors.
Traditional and Innovative Systems of Teaching Foreign Students
The traditional system of teaching medical and biological disciplines in medical institutions (tests, oral questioning, explanation of new material, and experimental work) is insufficient for quality teaching of foreign students, because the language barrier, interethnic communication, and differing levels of basic education create great difficulties in the study of these disciplines.
An innovative system is a new approach to teaching that gives more attention to the learners, the fundamental principles of education, creativity, professionalism, a synthesis of technical and humanitarian methods, and the use of the latest technologies.
An innovative learning system differs from the classical one in the increased role of visual material (films, slides, drawings, and tables) and frequent changes of activity (listening, writing, drawing, and telling). In both cases, tasks of varying degrees of complexity are given, depending on the level of preparation of the foreign trainees.
According to the study conducted, an innovative approach to teaching foreign students improved the quality of their preparation: students showed increased interest in the subject, a desire to gain additional knowledge, and more success in solving situational problems.
Moreover, successful training leads to better adaptation of students to new conditions, which, by the feedback principle, depends on the ability of the pedagogical system to flexibly consider the interests and needs of the foreign students who come to study.
An important part of the teaching and methodological complex for training foreign students is the curriculum for general biology and anatomy, taking into account its biological or medical orientation. The training process has to contain related goals and objectives, interdisciplinary links, and a list of knowledge and skills to be acquired. It should include tasks for independent study of the subject and topics for self-examination.
A systematic approach also relies on assigning tasks for individual study in the form of homework. At the same time, it is necessary to make wide use of various sources of information in English (the Internet, TV, scientific works, the press, etc.), which allows students not only to acquire new knowledge but also to form speech skills in English. It contributes to the formation of skills for obtaining information from various sources, which is an additional significant factor in learning specialist terminology.
This innovative approach facilitates the rapid adaptation of foreign students to the subject under study and, as a result, deeper mastery of the learning material, the formation of skills, and the active development of speech as the basis for professional communication. |
DNA synthesis, and electron transport. 1
Most of the iron in our body is found in haemoglobin (the oxygen-carrying protein of the red blood cells that transports oxygen from the lungs to the tissues throughout the body) and myoglobin (a protein in the heart and muscle cells that accepts, stores, transports and releases oxygen). 2
Iron is difficult to excrete once it is in the body, so the body conserves it. To keep iron absorption in balance, more iron is absorbed when the stores are empty and less when the stores are full. However, most people do not consume enough iron-rich foods, which causes depletion of iron stores.
Iron deficiency is the most common and widespread nutritional disorder in the world. It is the only nutrient deficiency that is significantly prevalent in industrialised countries. 3
Who is at risk for iron-deficiency?
Depletion of iron stores and iron deficiency occur in all age groups, particularly in:
- Babies given cow’s or other milk instead of breastmilk or infant formula milk
- Women in reproductive years
- Pregnant women
- Elderly people
- Vegetarians (especially vegans)
- Indigenous Australians
- Institutionalised people
What are the symptoms of iron-deficiency?
Unfortunately, signs and symptoms of iron deficiency are not physically obvious unless iron-deficiency anaemia occurs. People who suffer from iron deficiency are easily mistaken as having motivational or behavioural problems: they tend to be lethargic, uninterested, unmotivated and less physically fit.
Iron in foods
Combining iron-rich foods with foods high in Vitamin C helps the body absorb the iron.
Red meats, brown legumes and dark green vegetables make the greatest contributions of iron to the diet. Iron-fortified breads and cereals can also contribute significantly to iron intakes, but the iron in these foods is not absorbed as well as naturally occurring iron.
Excellent Sources of Iron
Whitney, E., Rolfes, S., Crowe, T., Cameron-Smith, D., & Walsh, A. (2014). Understanding Nutrition: Australia and New Zealand Edition (2nd ed.). Cengage Learning Australia. |
Tips and guidance on effective study - simply choose the links that interest you!
Creating a cogent précis of your studies does two useful things: it forces you to understand the subject matter you are summarising, and it creates condensed versions of the subject matter that you can review repeatedly before your exam.
Use tabular summaries to gather various pieces of information. Summary tables are an effective revision technique and a great way to compare or evaluate competing theories, grammatical rules or examples of themes in different parts of your study material. You can use a table like the ones shown below. Change the number of columns or rows for your own work, but keep them fairly simple so you can remember them in the exam.
Tabular summaries are extremely valuable because they convert broad themes and the detailed discussions into a more manageable form.
| Block 1 | Block 2 |
|---------|---------|
|         |         |
Try creating a tabular summary of an overarching topic by following the steps below.
Notes on index cards are particularly handy as you can carry them with you and review them in odd moments or for testing yourself - perhaps on a train or bus, or while waiting in a queue in the supermarket.
Summarise your topic in a few words. Using your own words means you process the information, which improves your understanding and your memory. Keep the notes brief to act as prompts.
Organise your notes in new ways on the cards - perhaps providing an overview of a topic on one, and then notes around sub-topics on others. Try using colour as an aid to memory.
Assignments can be a very useful starting point for producing summaries. Look through them and reduce the assignment by making summary sheets or cards for use in your revision. As you do so, compare exam and assignment questions on the same topic. How do the questions differ? What would the differences be (if any) between an assignment and exam answer on the same topic?
You might find that 'Outline view', in Microsoft Word, helps you to quickly scan through Word documents and find useful material. |
Consider the water flea. It doesn't seem like the flashiest animal, but it turns out water fleas can grow helmets and spines to beat their enemies, and can even customize the defenses based on which predator they're fighting.
Linda Weiss, a professor at Ruhr-University Bochum in Germany, is leading research on how water fleas, or Daphnia lumholtzi, developed this ability, and presented it at the most recent meeting of the Society for Experimental Biology. Apparently, the fleas have the ability to detect the unique chemical traces left in their environment by predators such as phantom midge larvae and fish. Depending on which chemical traces the water flea detects, they will develop different types of armor.
“These defences are speculated to act like an anti-lock-and-key system, which means that they somehow interfere with the predator’s feeding apparatus,” says Dr. Weiss.
For example, many freshwater fish can only eat small prey, so water fleas that detect freshwater fish will grow head and tail spines to make themselves larger and harder to eat. Her team has identified the specific neurotransmitters that sense the chemical traces and then activate the process that causes the spines or helmets to grow.
Future research will look at how the water flea’s “arming” abilities affect the local ecosystem, but for now we’re just wishing there were a way that we could do the same thing. |
One day in the not-too-distant future, the plastics in our satellites, cars and electronics may all be living their second, 25th or 250th lives.
New research from the University of Colorado Boulder, published in Nature Chemistry, details how a class of durable plastics widely used in the aerospace and microelectronics industries can be chemically broken down into their most basic building blocks and then formed once again into the same material.
It’s a major step in the development of repairable and fully recyclable network polymers, a particularly challenging material to recycle, as it is designed to hold its shape and integrity in extreme heat and other harsh conditions. The study documents how this type of plastic can be perpetually broken down and remade, without sacrificing its desired physical properties.
“We are thinking outside the box, about different ways of breaking chemical bonds,” said Wei Zhang, lead author of the study and chair of the chemistry department. “Our chemical methods can help create new technologies and new materials, as well as be utilized to help solve the existing plastic materials crisis.” …
Source: Science Daily |
Parts 10.7 – 10.9
Fluid dynamics is the study of how fluids behave when they're in motion. This can get very complicated, so we'll focus on one simple case, but we should briefly mention the different kinds of fluid flow.
Fluids can flow steadily, or be turbulent. In steady flow, the fluid passing a given point maintains a steady velocity. For turbulent flow, the speed and the direction of the flow vary. Steady flow can be represented with streamlines showing the direction the water flows in different areas. The density of the streamlines increases as the velocity increases.
Fluids can be incompressible or compressible. This is the big difference between liquids and gases: liquids are generally incompressible, meaning that they don't change volume much in response to a pressure change, while gases are compressible, and will change volume in response to a change in pressure.
Fluids can be viscous (pour slowly) or non-viscous (pour easily).
Fluid flow can be rotational or irrotational. Irrotational means it travels in straight lines; rotational means it swirls.
For most of the rest of the chapter, we'll focus on irrotational, incompressible, steady, non-viscous flow.
The equation of continuity
The equation of continuity states that for an incompressible fluid flowing in a pipe of varying cross-section, the mass flow rate is the same everywhere in the pipe. The mass flow rate is simply the rate at which mass flows past a given point, so it's the total mass flowing past divided by the time interval. The equation of continuity can be reduced to: ρ1 A1 v1 = ρ2 A2 v2.
In general, the density stays constant, and then it is simply the volume flow rate (Av) that is constant.
Making liquids movement
There are basically two ways to make fluid flow through a pipe. One way is to tilt the pipe so the flow is downhill, in which case gravitational potential energy is converted to kinetic energy. The second way is to make the pressure at one end of the pipe larger than the pressure at the other end. A pressure difference is like a net force, producing acceleration of the fluid.
As long as the fluid flow is steady, and the fluid is non-viscous and incompressible, the flow can be looked at from an energy perspective. This is what Bernoulli's equation does, relating the pressure, velocity, and height of a fluid at one point to the same parameters at a second point. The equation is very useful, and can be used to explain things like how airplanes fly and how baseballs curve.
The pressure, speed, and height (y) at two points in a steady-flowing, non-viscous, incompressible fluid are related by the equation: P1 + ½ρv1² + ρgy1 = P2 + ½ρv2² + ρgy2.
Some of these terms probably look familiar: the second term on each side looks like kinetic energy, and the third term looks a lot like gravitational potential energy. If the equation were multiplied through by volume, the density could be replaced by mass, and the pressure could be replaced by force × distance, which is work. Looked at that way, the equation makes sense: the difference in pressure does work, which can be used to change the kinetic energy and the potential energy of the fluid.
Pressure vs. speed
Bernoulli’s equation has many astonishing implications. A fluid flowing through a horizontal pipe for our first look at the equation, consider. The pipeline is narrower at one spot than over the other countries in the pipeline. The velocity of the fluid is greater in the narrow section by applying the continuity equation. May be the stress greater or lower when you look at the slim part, in which the velocity increases?
Very first inclination could be to state that where in fact the velocity is best, the force is best, because in the event that you stuck your turn in the movement where it is going fastest you’d feel a force that is big. The force doesn’t originate from the pressure there, however; it comes down from your own hand using energy away through the fluid.
The pipe is horizontal, therefore both points are in the exact same height. Bernoulli’s equation could be simplified in this situation to:
The kinetic power term from the right is bigger than the kinetic power term regarding the left, so for the equation to balance the stress from the right must certanly be smaller compared to the stress in the left. It’s this stress huge difference, in reality, which causes the fluid to move faster in the spot in which the pipe narrows.
Consider a geyser that shoots water 25 m into the air. How fast is the water traveling when it emerges from the ground? If the water originates in a chamber 35 m below the ground, what is the pressure there?
To figure out how fast the water is moving when it comes out of the ground, we could simply use conservation of energy, setting the potential energy of the water at the top of its 25 m rise equal to the kinetic energy it has when it emerges. Another way is to apply Bernoulli's equation, which amounts to the same thing as conservation of energy. Let's do it that way, just to convince ourselves that the methods are the same.
Bernoulli's equation says: P1 + ½ρv1² + ρgy1 = P2 + ½ρv2² + ρgy2.
But the pressure at the two points is the same; it is atmospheric pressure at both places. If we measure potential energy from ground level, the potential energy term goes away on the left side, and the kinetic energy term is zero on the right-hand side. This reduces the equation to: ½ρv1² = ρgy2.
The density cancels out, leaving: ½v1² = gy2.
This is the same equation we would have found if we'd done it using the conservation of energy technique from chapter 6, and canceled out the mass. Solving for velocity gives v = 22.1 m/s.
To determine the pressure 35 m below ground, which forces the water up, apply Bernoulli's equation, with point 1 being 35 m below ground and point 2 being either at ground level or 25 m above ground. Let's take point 2 to be 25 m above ground, which is 60 m above the chamber where the pressurized water is.
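The numbers can be checked with a minimal sketch, assuming standard values (water density 1000 kg/m³, g = 9.8 m/s²) and that the water is essentially at rest both in the chamber and at the top of its arc; these assumptions, and the variable names, are mine rather than the textbook's:

```python
import math

RHO = 1000.0      # density of water, kg/m^3 (assumed)
G = 9.8           # gravitational acceleration, m/s^2 (assumed)
P_ATM = 101325.0  # atmospheric pressure, Pa (assumed)

# Speed at ground level: (1/2) rho v^2 = rho g y  =>  v = sqrt(2 g y)
height = 25.0                       # m, height the geyser reaches
v = math.sqrt(2 * G * height)
print(f"exit speed: {v:.1f} m/s")   # ~22.1 m/s, matching the text

# Chamber pressure 60 m below the top point, with v ~ 0 at both points:
# P1 = P_atm + rho g (y2 - y1)
depth = 60.0
p1 = P_ATM + RHO * G * depth
print(f"chamber pressure: {p1:.3g} Pa")  # ~6.9e5 Pa, about 6.8 atmospheres
```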
Sequence Those Sentences, A Story Game
Ask most kindergarteners, “What happened?” and you can get ready for a pretty wild, often nonsensical tale. That’s because for most kids, the concept of a story having sequence, or a “beginning, middle, and end”, is only just starting to make sense to them. Sequencing is a crucial skill for learning success, however, and kindergarten teachers heavily encourage practicing it. Want to help your child sequence at home? Try this activity with your child to increase his vocabulary and strengthen his grasp of the beginning, middle, and end by having him place sentence strips in the right order.
What You Need:
- 3" x 5" index cards
- 6 flat, stick-on refrigerator magnets |
Basics of Information Theory
Computer Science Department
Carnegie Mellon University
Version of 24 November 2004
Although information is sometimes measured in characters, as when describing the length of an email message, or in digits (as in the length of a phone number), the convention in information theory is to measure information in bits. A "bit" (the term is a contraction of binary digit) is either a zero or a one. Because there are 8 possible configurations of three bits (000, 001, 010, 011, 100, 101, 110, and 111), we can use three bits to encode any integer from 1 to 8. So when we refer to a "3-bit number", what we mean is an integer in the range 1 through 8. All logarithms used in this paper will be to the base two, so log 8 is 3. Similarly, log 1000 is slightly less than 10, and log 1,000,000 is slightly less than 20.
Suppose you flip a coin one million times and write down the sequence of results. If you want to communicate this sequence to another person, how many bits will it take? If it's a fair coin, the two possible outcomes, heads and tails, occur with equal probability. Therefore each flip requires 1 bit of information to transmit. To send the entire sequence will require one million bits.
But suppose the coin is biased so that heads occur only 1/4 of the time, and tails occur 3/4. Then the entire sequence can be sent in 811,300 bits, on average. (The formula for computing this will be explained below.) This would seem to imply that each flip of the coin requires just 0.8113 bits to transmit. How can you transmit a coin flip in less than one bit, when the only language available is that of zeros and ones? Obviously, you can't. But if the goal is to transmit an entire sequence of flips, and the distribution is biased in some way, then you can use your knowledge of the distribution to select a more efficient code. Another way to look at it is: a sequence of biased coin flips contains less "information" than a sequence of unbiased flips, so it should take fewer bits to transmit.
Let's look at an example. Suppose the coin is very heavily biased, so that the probability of getting heads is only 1/1000, and tails is 999/1000. In a million tosses of this coin we would expect to see only about 1,000 heads. Rather than transmitting the results of each toss, we could just transmit the numbers of the tosses that came up heads; the rest of the tosses can be assumed to be tails. Each toss has a position in the sequence: a number between 1 and 1,000,000. A number in that range can be encoded using just 20 bits. So, if we transmit 1,000 20-bit numbers, we will have transmitted all the information content of the original one million toss sequence, using only around 20,000 bits. (Some sequences will contain more than 1,000 heads, and some will contain fewer, so to be perfectly correct we should say that we expect to need 20,000 bits on average to transmit a sequence this way.)
We can do even better. Encoding the absolute positions of the heads in the sequence takes 20 bits per head, but this allows us to transmit the heads in any order. If we agree to transmit the heads systematically, by going through the sequence from beginning to end, then instead of encoding their absolute positions we can just encode the distance to the next head, which takes fewer bits. For example, if the first four heads occurred at positions 502, 1609, 2454, and 2607, then their encoding as "distance to the next head" would be 502, 1107, 845, 153. On average, the distance between two heads will be around 1,000 flips; only rarely will the distance exceed 4,000 flips. Numbers in the range 1 to 4,000 can be encoded in 12 bits. (We can use a special trick to handle the rare cases where heads are more than 4,000 flips apart, but we won't go into the details here.) So, using this more sophisticated encoding convention, a sequence of one million coin tosses containing about 1,000 heads can be transmitted in just 12,000 bits, on average. Thus a single coin toss takes just 0.012 bits to transmit. Again, this claim only makes sense because we're actually transmitting a whole sequence of tosses.
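A quick simulation makes the bookkeeping above concrete. This is a rough sketch (Python; the seed and the list-based encoding are my own illustration, and it ignores the rare gaps longer than 4,000 flips, as the text does):

```python
import random

random.seed(0)                 # arbitrary seed, for reproducibility
N = 1_000_000
flips = [random.random() < 0.001 for _ in range(N)]   # True = heads

# Encode only the gaps between successive heads, 12 bits per gap.
positions = [i for i, heads in enumerate(flips) if heads]
gaps = [b - a for a, b in zip([0] + positions, positions)]
total_bits = 12 * len(gaps)

print(len(positions))          # ~1,000 heads
print(total_bits)              # ~12,000 bits, i.e. ~0.012 bits per flip
```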
What if we invented an even cleverer encoding? What is the limit on how efficient any encoding can be? The limit works out to about 0.0114 bits per flip, so we're already very close to the optimal encoding.
The information content of a sequence is defined as the number of bits required to transmit that sequence using an optimal encoding. We are always free to use a less efficient coding, which will require more bits, but that does not increase the amount of information transmitted.
Variable Length Codes
The preceding examples were based on fixed-length codes, such as 12-bit numbers encoding values between 1 and 4,000. We can often do better by adopting a variable length code. Here is an example. Suppose that instead of flipping a coin we are throwing an eight-sided die. Label the sides A-H. To encode a number between 1 and 8 (or between 0 and 7, if you're a computer scientist) takes 3 bits, so a thousand throws of a fair die will take 3,000 bits to transmit. Now suppose the die is not fair, but biased in a specific way: the chances of throwing an A are 1/2, the chances of throwing a B are 1/4, C is 1/8, D is 1/16, E is 1/32, F is 1/64, and G and H are each 1/128. Let us verify that the sum of these probabilities is 1, as it must be for any proper probability distribution: 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + 1/128 + 1/128 = 1.
Now let's consider an encoding ideally suited to this probability distribution. If we throw the die and get an A, we will transmit a single 0. If we throw a B we will transmit a 1 followed by a 0, which we'll write 10. If we throw a C the code will be 11 followed by 0, or 110. Similarly we'll use 1110 for D, 11110 for E, 111110 for F, 1111110 for G, and 1111111 for H. Notice that the code for A is very concise, requiring a single bit to transmit. The codes for G and H require 7 bits each, which is way more than the 3 bits needed to transmit one throw if the die were fair. But Gs and Hs occur with low probability, so we will rarely need to use that many bits to transmit a single throw. On average we will need fewer than 3 bits. We can easily calculate the average number of bits required to transmit a throw: it's the sum of the number of bits required to transmit each of the eight possible outcomes, weighted by the probability of that outcome: (1/2)(1) + (1/4)(2) + (1/8)(3) + (1/16)(4) + (1/32)(5) + (1/64)(6) + (1/128)(7) + (1/128)(7) = 1.984 bits per throw.
So 1,000 throws of the die can be transmitted in just 1,984 bits rather than 3,000. This simple variable length code is the optimal encoding for the probability distribution above. In general, though, probability distributions are not so cleanly structured, and optimal encodings are a lot more complicated.
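The average is easy to verify directly from the code given above; a minimal sketch, purely illustrative:

```python
# The prefix code described in the text for the biased eight-sided die.
code = {"A": "0", "B": "10", "C": "110", "D": "1110",
        "E": "11110", "F": "111110", "G": "1111110", "H": "1111111"}
prob = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/16,
        "E": 1/32, "F": 1/64, "G": 1/128, "H": 1/128}

# Expected bits per throw: sum of (probability x codeword length).
avg_bits = sum(prob[s] * len(code[s]) for s in code)
print(avg_bits)          # 1.984375
print(1000 * avg_bits)   # ~1,984 bits per 1,000 throws, as claimed
```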
Exercise: suppose you are given a five-sided biased die that has a probability of 1/8 of coming up A, 1/8 for B, and 1/4 for each of C, D, and E. Design an optimal code for transmitting throws of this die. (Answer at end.)
Measuring Information Content
In the preceding example we used a die with eight faces. Since eight is a power of two, the optimal code for a uniform probability distribution is easy to calculate: log 8 = 3 bits. For the variable length code, we wrote out the specific bit pattern to be transmitted for each face A-H, and were thus able to directly count the number of bits required.
Information theory provides us with a formula for determining the number of bits required in an optimal code even when we don't know the code. Let's first consider uniform probability distributions where the number of possible outcomes is not a power of two. Suppose we had a conventional die with six faces. The number of bits required to transmit one throw of a fair six-sided die is: log 6 = 2.58. Once again, we can't really transmit a single throw in less than 3 bits, but a sequence of such throws can be transmitted using 2.58 bits on average. The optimal code in this case is complicated, but here's an approach that's fairly simple and yet does better than 3 bits/throw. Instead of treating throws individually, consider them three at a time. The number of possible three-throw sequences is 6 × 6 × 6 = 216. Using 8 bits we can encode a number between 0 and 255, so a three-throw sequence can be encoded in 8 bits with a little to spare; this is better than the 9 bits we'd need if we encoded each of the three throws separately.
In probability terms, each possible value of the six-sided die occurs with equal probability P = 1/6. Information theory tells us that the minimum number of bits required to encode a throw is -log P = 2.58. If you look back at the eight-sided die example, you'll see that in the optimal code that was described, every message had a length exactly equal to -log P bits. Now let's look at how to apply the formula to biased (non-uniform) probability distributions. Let the variable x range over the values to be encoded, and let P(x) denote the probability of that value occurring. The expected number of bits required to encode one value is the weighted average of the number of bits required to encode each possible value, where the weight is the probability of that value: Number of bits = -Σ P(x) log P(x), where the sum runs over all possible values x.
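This formula is a one-liner in code; here is a minimal sketch reproducing the numbers used elsewhere in this article (the function name is mine):

```python
import math

def avg_bits(probs):
    """Expected bits per value under an optimal code: -sum of p * log2(p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(avg_bits([0.5, 0.5]))       # fair coin: 1.0 bit per toss
print(avg_bits([0.25, 0.75]))     # biased coin: ~0.8113 bits per toss
print(avg_bits([0.001, 0.999]))   # heavily biased: ~0.0114 bits per toss
```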
Now we can revisit the case of the biased coin. Here the variable ranges over two outcomes: heads and tails. If heads occur only 1/4 of the time and tails 3/4 of the time, then the number of bits required to transmit the outcome of one coin toss is: -(1/4) log(1/4) - (3/4) log(3/4) = 0.5 + 0.311 = 0.8113 bits.
A fair coin is said to produce more "information" because it takes an entire bit to transmit the result of the toss: -(1/2) log(1/2) - (1/2) log(1/2) = 0.5 + 0.5 = 1 bit.
The Intuition Behind the -P log P Formula
The key to gaining an intuitive understanding of the -P log P formula for calculating information content is to see the duality between the number of messages to be encoded and their probabilities. If we want to encode any of eight possible messages, we need 3 bits, because log 8 = 3. We are implicitly assuming that the messages are drawn from a uniform distribution.
The alternate way to express this is: the probability of a particular message occurring is 1/8, and -log(1/8) = 3, so we need 3 bits to transmit any of these messages. Algebraically, log n = -log (1/n), so the two approaches are equivalent when the probability distribution is uniform. The advantage of using the probability approach is that when the distribution is non-uniform, and we can't simply count the number of messages, the information content can still be expressed in terms of probabilities.
Sometimes we write about rare events as carrying a high number of bits of information. For example, in the case where a coin comes up heads only once in every 1,000 tosses, the signal that a heads has occurred is said to carry 10 bits of information. How is that possible, since the result of any particular coin toss takes 1 bit to describe? Transmitting when a rare event occurs, if it happens only about once in a thousand trials, will take 10 bits. Using our message counting approach, if a value occurs only 1/1000 of the time in a uniform distribution, there will be 999 other possible values, all equally likely, so transmitting any one value would indeed take 10 bits.
With a coin there are only two possible values. What information theory says we can do is consider each value separately. If a particular value occurs with probability P, we assume that it is drawn from a uniformly distributed set of values when calculating its information content. The size of this set would be 1/P elements. Thus, the number of bits required to encode one value from this hypothetical set is -log P. Since the actual distribution we're trying to encode is not uniform, we take the weighted average of the estimated information content of each value (heads or tails, in the case of a coin), weighted by the probability P of that value occurring. Information theory tells us that an optimal encoding can do no better than this. Thus, with the heavily biased coin we have the following:
P(heads) = 1/1000, so heads takes -log(1/1000) = 9.96578 bits to encode
P(tails) = 999/1000, so tails takes -log(999/1000) = 0.00144 bits to encode
Avg. bits required = (1/1000) × 9.96578 + (999/1000) × 0.00144 = 0.01141 bits per coin toss
Answer to Exercise
The optimal code assigns 2-bit codewords to C, D, and E and 3-bit codewords to A and B; one such assignment is C = 00, D = 01, E = 10, A = 110, B = 111. The average length is 3 × (1/4 × 2) + 2 × (1/8 × 3) = 2.25 bits per throw.
What do those blood numbers mean?
Normal or reference ranges are simply calculated from the range of values of various samples of the healthy population. Different labs sometimes use slightly different reference ranges, so if the reference ranges on your lab report are not identical to those given here, do not let that alarm you. Be sure to check the measurement units as well as the reference range. Most labs will use metric units, but even with standard metric units there will be differences due to counting by millions or billions.
Complete Blood Count
Red Blood Count
Red Blood Count (RBC) is the count of red blood cells. These cells carry oxygen throughout the body. Normal RBC values for men are higher than for women and range from 3.6 to 6.1 million per cubic millimetre. Too many RBCs (or platelets) in the bloodstream may cause slow blood flow and compromise circulation. A low RBC may signify anaemia, a shortage of red blood cells or of haemoglobin, the oxygen-carrying part of the RBC; this usually reflects underproduction or premature destruction of the cells.
Haemoglobin (HGB) is a protein that enables RBCs to carry oxygen from the lungs to the rest of the body. The amount of haemoglobin determines how much oxygen the RBCs are capable of carrying to other cells. Normal haemoglobin levels range from 130 to 180 grams per litre for adult men and approximately 120 to 160 grams per litre for adult women. Levels for children vary with age but are generally 10 to 20 grams per litre lower than adult female values. Smokers often show an increase in their haemoglobin level. Epogen is an injectable drug that stimulates the production of red cells. It is used in anaemic patients to reduce the frequency of transfusions.
Hematocrit (HCT) is the volume of red blood cells expressed as a percentage of the total blood volume. If you spin a sample of blood so that the cells settle to the bottom of the tube, the percentage of volume occupied by the cells alone is called the "hematocrit". The hematocrit shows the oxygen-carrying capacity of the blood. This value also tells whether the blood is too thick or too thin. The average range is 40%-54% in adult males and 37%-47% in adult females. As a general rule, the hematocrit value is approximately three times the haemoglobin value, and doctors will refer to either of the values interchangeably.
Mean Corpuscular Volume
Mean Corpuscular Volume (MCV) is the average volume of the individual red blood cells. MCV is calculated by dividing the hematocrit by the total RBCs. The average range is 78-96 femtolitres. A low MCV indicates the cells are smaller than normal; this most commonly occurs because of an iron deficiency or chronic disease. (A high MCV, by contrast, can indicate a Vitamin B12 deficiency.)
Mean Corpuscular Haemoglobin
Mean Corpuscular Haemoglobin (MCH) and Mean Corpuscular Haemoglobin Concentration (MCHC) are measures of the amount and concentration of haemoglobin in the average cell. The MCH average range is 28-32 picograms. The MCH results from dividing total haemoglobin by total RBCs. The average range for MCHC is 310-360 g/l.
Platelets (PLT or PT) are important for clotting, and are formed in the marrow. A low platelet count is called thrombocytopenia, and is quite common during chemotherapy; during thrombocytopenia the risk of bleeding and bruising is higher. Dangerously low platelet counts (below 10 × 10^9 per litre) can put the patient at risk of brain haemorrhage. High levels of platelets can cause circulation problems as the blood becomes too "thick".
White Blood Count
White Blood Cell Count (WBC) is the count of white blood cells called leukocytes. WBCs defend the body against infection and make up part of the immune system. Like other blood cells they are produced in the bone marrow. The total number of white blood cells has a wide range from 4,000 to 11,000 per cubic millimeter in the average healthy adult. While it can mean many things, a high WBC may mean you are fighting an infection, or that your immune system has been activated for some other reason. A low WBC might mean there is a problem with production in the bone marrow, which could be the result of various chronic diseases. It can also be a side effect of various different drugs, particularly chemotherapeutic drugs for cancer treatment.
CBC Differential is a breakdown of the different types of white blood cells and is usually expressed as a percentage of total WBCs. Multiplying these percents by the total WBCs gives the "absolute" counts. For example, if the percent of lymphocytes is 30% and the total WBCs is 10,000, the absolute lymphocyte count is 3,000.
Neutrophils are WBCs involved in fighting bacterial infections, and they are the most common of all the white blood cells. With a lifespan of only about 8 hours, your body has to produce about 5 billion neutrophils every hour of the day. Neutropenia is a drop in the absolute neutrophil count (ANC), placing the patient at increased risk of infection; it is graded as follows (a short sketch after this list shows the calculation).
- Neutropenia in general = ANC < 2000 (slight risk of infection)
- Mild Neutropenia = ANC > 1000 & < 1500 (minimal risk of infection)
- Moderate Neutropenia = ANC > 500 & < 1000 (moderate risk of infection)
- Severe Neutropenia = ANC < 500 (severe risk of infection)
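Here is a small illustrative sketch of the arithmetic (the thresholds follow the list above; the WBC count and percentage are invented for the example):

```python
def absolute_count(total_wbc, percent):
    """Convert a differential percentage into an absolute count."""
    return total_wbc * percent / 100

def neutropenia_grade(anc):
    """Classify an absolute neutrophil count per the ranges above."""
    if anc < 500:
        return "severe neutropenia"
    if anc < 1000:
        return "moderate neutropenia"
    if anc < 1500:
        return "mild neutropenia"
    if anc < 2000:
        return "neutropenia (slight risk)"
    return "not neutropenic"

anc = absolute_count(10_000, 30)    # e.g. 30% neutrophils of 10,000 WBC
print(anc, neutropenia_grade(anc))  # 3000.0 not neutropenic
```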
Lymphocytes (lymphs), the second most common type, are cells that produce antibodies, regulate the immune system, and fight viruses and tumors. Ranges vary from 10%-45%.
Monocytes or Macrophages (Monos) are WBCs involved in fighting bacterial infections. After monocytes circulate in the blood stream, these cells settle in various tissues and become macrophages.
Eosinophils (Eos) are WBCs usually involved in allergic-type and parasitic reactions. They make up only a very small portion of the WBC.
Basophils (Bas) are WBCs usually involved in fighting parasitic infections. Increases reflect a possibility of parasitic activity in the body. If you have an abnormal basophil count and are experiencing diarrhea, loose stools, gas or stomach bloating, you may want to ask your doctor to be tested for parasites. Basophils are the least common of the WBC's and a count of zero is quite normal.
Chemistry (Chem) Screen
Calcium is the mineral which carries protein across the intestinal membrane; calcium therefore represents primary protein and fat digestion in the gut. Poor digestion here will affect further digestion in the liver. Increased calcium indicates poor emulsification (the breakdown of larger fat molecules into much smaller ones so they become water-soluble and can be easily processed and eliminated from the body after use), and therefore improper protein digestion at the liver. Gallbladder function may be involved. Decreased calcium indicates poor enzymatic breakdown of fatty acids and improper protein digestion at the liver.
Phosphorous is the mineral which carries whole carbohydrates to the liver. The proper bowel pH (acid/alkaline balance) is needed for complete digestion, storage, and utilization of carbohydrates. Increased phosphorous indicates an alkaline gut; i.e., a deficiency of hydrochloric acid (HCL). Carbohydrates are oxidized and energy is wasted. Decreased phosphorous indicates an over-acid condition in the gut and carbohydrate congestion in the liver.
Glucose is sugar in the blood, most commonly used to monitor the disease diabetes mellitus.
Sodium, Potassium, and Chloride
These are also known as electrolytes. These must be monitored carefully in dehydration, kidney disease, and during intravenous therapy.
Sodium levels reflect the salt/water balance as well as fluid control and kidney and/or adrenal function. Increases indicate an alkaline kidney and lack of membrane lubrication. Often fluid retention is seen. Decreases indicate an acid kidney membrane and a need for calcium.
Potassium reflects tissue composition of several major organs including the heart. Potassium levels rise in kidney failure, and may be low after severe vomiting or diarrhea. Potassium levels may also decline if one is taking forms of licorice, especially the purified form glycyrrhizin (GL), and may need to be supplemented.
Chloride reflects the proper fluid exchange membranes of the bowel and the bladder. Increased chloride indicates improper membrane lubrication with Vitamin A. Decreases indicate tissue decomposition.
Blood Urea Nitrogen (BUN)
Blood Urea Nitrogen (BUN) is waste from the liver, processed by the kidneys. It also reflects whole carbohydrate storage, liberation, and continuation in the liver associated with many glands (i.e., kidneys). BUN tends to rise in dehydration and in kidney or heart failure. Prednisone and other steroids may cause BUN to rise. Increases in BUN can also indicate liver and/or thyroid inactivity. Decreased BUN indicates pancreas and/or adrenal inactivity. BUN can also be elevated by a high protein diet or recent exercise.
Uric Acid is the end product of protein digestion. The level in the blood is dependent on liver production and kidney elimination. Increased uric acid indicates incomplete protein digestion and/or an over-acid kidney membrane.
Albumin is one of the two major types of protein in the blood and promotes the transfer of nutrients and wastes to and from the blood and cells. Manufactured in the liver, albumin decreases in chronic liver disease. It also reflects one's general nutritional status. Increases indicate thick blood which could be due to improper protein digestion or dehydration. Decreases indicate thin blood with possible water retention, nutritional congestion, toxic buildup and edema (swelling). Usually the liver production of albumin is sluggish.
Creatinine is a waste product and a measure of kidney function as well as skeletal muscle buildup and breakdown in body maintenance. Increases indicate muscle breakdown, often to supply amino acids to the body when protein digestion is impaired. Decreases indicate low protein intake or impaired protein digestion.
Bilirubin derives from the haemoglobin of dead RBCs. Bilirubin is excreted by the liver as part of the bile. Bilirubin causes the yellow color of the skin and eyes (jaundice) which occurs in hepatitis, bile duct obstruction, and other liver disorders. It also reflects the function of the lymph and spleen systems. Increases in Bilirubin indicates inefficient lymphatic or liver/gallbladder function. Decreases in Bilirubin indicates inefficient blood cell breakdown by the spleen.
Alkaline Phosphatase (ALK PHOS)
Alkaline Phosphatase (ALK PHOS) reflects the alkaline blood pH and its effect on adrenal function (and the posterior pituitary). Note: in growing children, there is an increased amount of alkaline minerals in the blood for bone growth, so an elevation can be normal in children. An increased alkaline phosphatase indicates an alkaline blood system and inefficient mineral transfer to the cells. A decreased alkaline phosphatase expresses an exhausted adrenal system and an acid blood system (common in chronic disease). If both LDH and alkaline phosphatase are elevated together, a primary liver condition exists. If both LDH and alkaline phosphatase are decreased together, a primary thyroid deficiency may be present with a need for Vitamin B-12.
AST can also reflect gonadal function and shows the amount of oxygen available at membranes. Aside from liver, heart, or muscle damage, elevations in AST can indicate a deficiency of hormones and Vitamin E. Decreases indicate a deficiency of the gonad itself.
ALT also reflects liver function and increases can mean problems with this organ as well as possible heart damage.
LDH can also reflect blood acidity and pancreatic function as well as being an intercellular enzyme prominent in heart and skeletal muscles. Persons on nucleoside analogues (AZT, DDI, DDC) who exercise frequently commonly have an elevated LDH from muscle tissue breakdown. Others who do not exercise, but are on these drugs -or are wasting- may also have an increased LDH. Persons with Pneumocystis pneumonia tend to have a more serious prognosis if they have an elevated LDH. Decreases indicate an alkaline tendency to the blood. A high LDH often indicates that a heart attack has recently occurred. It can also be high during active cancer growth, however it cannot be considered a specific cancer indicator. |
PERIODS of global warming 55 million years ago released massive amounts of carbon trapped in frozen polar soil - and the same thing could happen again, according to Sheffield University scientists.
Thawing permafrost accelerated rising global temperatures and acidification of the oceans - with temperatures rising by 5C in the course of just a few thousand years.
A team from Sheffield analysed a series of sudden extreme global warming events - called hyperthermals - that occurred about 55 million years ago.
They were linked to rising greenhouse gas concentrations and changes in the Earth’s orbit.
Prof David Beerling said: “For the first time we have linked these past global warming events with a climatically sensitive terrestrial carbon reservoir. It shows that global warming can be amplified by carbon release from thawing permafrost.
“The research suggests that carbon stored in permafrost stocks today in the Arctic region is vulnerable to warming. Warming causes permafrost thaw and decomposition of organic matter releasing more greenhouse gases back into the atmosphere.
“This feedback loop could accelerate future warming. It means we must arrest carbon dioxide emissions released by the combustion of fossil fuels if humanity wishes to avoid triggering these sorts of feedbacks in our modern world.”
Colleague Rob DeConto said: “Global warming is degrading permafrost in the north polar regions, unlocking carbon and methane and releasing it into the atmosphere. This will only exacerbate future warming.” |
Photodetectors are semiconductor devices which respond to light. They can replace light dependent resistors and have the advantages of lower pollution and smaller size.
How does it operate?
There are several kinds of photodetector.
Photodiodes are similar to normal diodes but, if they are reverse biased, the current through the diode increases with the light level.
Phototransistors (and photodarlingtons) are like ordinary transistors (and Darlington drivers) but the ‘base current’ is produced by the light falling on the device – there is no actual electrical connection to the base.
Click on the circuit diagram to download a Livewire file of the circuit that you can investigate and add to your own circuit.
The circuit diagram on the left is for a phototransistor.
As the light level increases the current through the phototransistor and R1 increases, so the output signal voltage increases.
To allow the sensitivity to be adjusted the fixed resistor R1 could be replaced with a variable resistor.
To produce a ‘dark sensor’ the positions of the phototransistor and the resistor are interchanged, so that the output signal voltage increases as the light level falls.
Note – the output current available from a phototransistor is small – enough for the input signal to a PIC, a CMOS integrated circuit or a MOSFET, but not large enough to drive a transistor.
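To see numerically why the choice of R1 sets the sensitivity, here is a rough model (the supply voltage, resistor value, and photocurrents are invented for illustration; the phototransistor is treated simply as a light-controlled current source feeding R1):

```python
VS = 5.0       # supply voltage, volts (assumed)
R1 = 10_000.0  # load resistor, ohms (illustrative value)

def output_voltage(photocurrent_amps):
    """Vout = Ic * R1, clipped at the supply rail once the transistor saturates."""
    return min(photocurrent_amps * R1, VS)

for i_uA in (0, 50, 200, 1000):          # brighter light -> more current
    print(i_uA, "uA ->", output_voltage(i_uA * 1e-6), "V")
```

A larger R1 gives a bigger output swing for the same change in light, which is why replacing the fixed resistor with a variable one adjusts the sensitivity.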
- Sensing if it is night or day
- Sensing if an object has blocked a beam of light
The pin connections and PCB shown are applicable to both the SFH309F phototransistor (Rapid Electronics Order code 58-0425) and the SFH300-4 (Rapid Electronics Order code 58-0480).
How part of the PCB might look
Note – the shorter leg of the phototransistor, and the side with the flat, is the collector. The collector should be connected to the +Vs supply voltage. This is the opposite way round from an LED, where the shorter leg is the cathode or negative lead.
Make sure that the signal going out (on the green PCB track) changes from high to low when the photodetector is covered.
If there is a fault, check:
- the voltage on the collector (the leg identified with the flat) is +Vs;
- the value of the resistor
- that the detector has been connected the right way round.
If there is a fault, check the tracks and solder joints.
- Light dependent resistors (LDR) – cheaper, but cause pollution because they contain cadmium. LDRs are also larger.
- A cheaper L-610MP4BT (Rapid Electronics Order code 72-8968) phototransistor is available. However, this produces very, very small currents and can only be used reliably with process subsystems that only need a very low input signal current.
|
Annie Ate Apples With A
Rationale: Students need to be able to recognize the phonemes in spoken words. This lesson helps students recognize one specific phoneme: /a/. Students learn the sound the letter a makes by using tongue twisters and visual motions to represent the sound, then connect the letter to its sound through letter writing. After the lesson, students should be able to recognize and identify /a/ in spoken words by separating its sound from the rest of the word, and to recognize the letter when they see it and know the sound it makes.
- Chart with tongue twister Aunt Annie always ate apples alone.
- white board
- primary paper
- plain white copy paper
- children’s book: Pat’s Jam
1. “Today we are going to learn about the sound that letter a makes. It makes the /a/ sound, like when a baby cries. Let’s all say /a/ together. aaa. Great boys and girls! Now we are going to listen for the /a/ sound and we are going to practice writing the letter a, which makes the /a/ sound.”
2. “Let’s try this tongue twister. I’ll say it first and then it will be your turn. Aunt Annie always ate apples alone. Now let’s all say it together. Let’s say it again, but this time make sure you really say the /a/ sounds. AAAunt AAAnnie aaalways aaate aaapples aaalone. Very good! Do we all hear the /a/ sound?”
3. “I am going to pass out paper. We are going to practice writing the letter that makes the /a/ sound. That letter is a. To make an a you will start under the fence. Go up and touch the fence, then around and touch the sidewalk, around and straight down. I will make one on the board (draw an a on the board). Now you make a row of a’s just like that.”
4. “Fantastic job! Alright, now put your pencils down; we are going to listen to some words that I say. Follow my directions. I am going to say two words. Tell me in which word you hear the /a/ sound the letter a makes. Here we go. Do you hear /a/ in last or list? Cat or cut? Last or lost? Good job!”
5. “It’s time to read a fun book! We are going to read the book Pat’s Jam. This is Pat. [Point to Pat.] He’s a rat, and he drives a van. Pam is his friend. When they both get in the van, Pat realizes his van has no gas! Will they be able to solve this problem? You’ll have to read closely to find out what happens to Pat, Pam, and the van! Let’s pair up and take turns reading Pat’s Jam to find out if they ever solve their problem!”
6. “I am passing out paper for you to draw on. I want everyone first to think of something that has the /a/ sound in it (for example: apple, alligator, cat, or rat) and then draw it.”
Assessment: The teacher can assess group progress by walking around and observing students as they write the letter a across their lined paper. For individual assessment, the teacher can look at the drawings to see whether the objects students chose have the /a/ sound.
Alison Stokes, "Aaa-aaa-aaa-choo!!" at: http://www.auburn.edu/rdggenie/insp/stokesbr.html
|
Bringing “extreme” poverty to an end will not jeopardise the chances of limiting global warming to 2C above pre-industrial levels, a new study says.
Pulling the 770 million people around the world out of extreme poverty – which is defined as living on less than $1.90 a day – would add a mere 0.05C to global temperatures by 2100, the research shows.
However, eradicating poverty entirely by moving the world’s poorest into a “global middle class” income group, which earns a modest $2.97-8.44 a day, could add 0.6C to global temperatures by 2100.
In order to end all forms of poverty without driving up global temperatures, world leaders will need to ramp up climate mitigation efforts by 27%, the lead author tells Carbon Brief.
Ending extreme poverty for “all people everywhere” is the first of the United Nations’ Sustainable Development Goals (SDGs), an internationally agreed set of targets aimed at improving the global standard of living by 2030.
However, putting an end to extreme poverty could bring additional challenges to meeting the long-term goals of the Paris Agreement, which aims to limit global temperature rise to “well below” 2C.
This is because raising the quality of life of the world’s poorest would mean using more of the planet’s resources – such as food and energy – driving up carbon emissions that contribute to global warming.
This paradox is known as the “climate-development conflict”, explains Prof Klaus Hubacek, a researcher at the University of Maryland and lead author of the new research published in Nature Communications.
In his research, he aimed to quantify the total “cost”, in terms of carbon emissions, of ending extreme poverty. He tells Carbon Brief:
“Eradicating extreme poverty does not jeopardise the climate target even in the absence of climate policies and with current technologies.”
To calculate the cost of eradicating extreme poverty, the researchers first set about estimating the carbon footprints of the world’s poorest and richest people.
For each carbon footprint, researchers considered both direct carbon emissions – from the consumption of food, heating and cooling of homes and the use of transport – and indirect carbon emissions – from the production of household goods and services. They then combined this with expenditure data from the World Bank’s Global Consumption Database.
Food makes up the largest proportion of the carbon footprint of those living in extreme poverty, Hubacek explains:
“The food-related carbon footprint is close to 60% of the total footprint for the extreme poverty group. It’s mainly food, shelter, clothes. There’s nothing much left for anything else when your expenditure is $1.90 a day in purchasing power parities (PPP).”
The chart below (left) shows the respective carbon footprints of the world’s rich and poor. The left column shows how the world’s population can be split into different income groups, including those living on: less than $1.90 a day (green); between $1.90 and $2.97 a day (blue); between $2.97 and $8.44 (yellow); between $8.44 and $23.03 a day (purple); and more than $23.03 a day (orange). The right column shows the proportional carbon footprints of each of these income groups.
The research finds that, in 2010, the world’s top 10% of earners were responsible for about 36% of global carbon emissions for the consumption of goods and services (see the orange section in each column).
In comparison, the extreme poor, which accounted for 12% of the world’s population in 2010, were responsible for just 4% of global emissions (green).
The second chart (right) shows the carbon footprint per person for different income groups. Each footprint is measured using CO2e, or the carbon dioxide equivalent, which is the standard unit for measuring carbon footprints. The black line separates direct carbon emissions (lower part) and indirect carbon emissions (upper part).
The research finds that the carbon footprint of the world’s average top earner is close to 14 times that of the average person living in extreme poverty.
Carbon cost of ending poverty
To calculate the total carbon cost of eradicating poverty, the researchers estimated the carbon implications of moving the population living in extreme poverty up to the next income level ($1.90-2.97 a day).
The researchers then took the additional carbon emissions that resulted from lifting people out of extreme poverty and added them to a “baseline” emissions scenario. As a baseline, the researchers used a relatively low emissions scenario known as RCP2.6, which assumes that global annual greenhouse gas emissions peak in 2020 and fall quickly afterwards.
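The accounting step can be illustrated with a short back-of-the-envelope sketch in Python. The 770 million figure comes from the article above, but the per-capita footprints below are hypothetical round numbers chosen only to show the method; they are not the values used in the paper.

```python
# Back-of-the-envelope version of the paper's accounting step.
# Footprint values are illustrative assumptions, not the paper's numbers.

POPULATION_EXTREME_POOR = 770e6  # people living on < $1.90/day (article)

# Assumed average footprints, in tonnes CO2e per person per year:
FOOTPRINT_EXTREME_POOR = 0.5     # hypothetical, < $1.90/day group
FOOTPRINT_NEXT_GROUP = 1.0       # hypothetical, $1.90-2.97/day group

extra_tonnes = POPULATION_EXTREME_POOR * (
    FOOTPRINT_NEXT_GROUP - FOOTPRINT_EXTREME_POOR
)
print(f"Additional emissions: {extra_tonnes / 1e9:.2f} GtCO2e per year")

# In the study, these additional emissions are then added to the RCP2.6
# baseline pathway to estimate the resulting extra warming by 2100.
```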
The chart below shows how the additional carbon cost of eradicating extreme poverty could affect global surface warming by 2100. A scenario where extreme poverty is eradicated (green) is compared to the baseline scenario (yellow). Both scenarios assume that global emissions will peak in 2020.
The chart identifies the period from the present day to 2030 as a “window of opportunity” to lift the world’s poorest out of extreme poverty; 2030 is the deadline of the SDGs.
The research finds that lifting people out of extreme poverty has a relatively small impact on global temperatures, accounting for an additional 0.05C of warming by 2100.
This means that extreme poverty could be eradicated without jeopardising long-term climate goals.
However, this is only the case if global greenhouse gas emissions peak in 2020 and then fall, Hubacek explains. If carbon emissions continue to rise past 2020, ending poverty while keeping warming to 2C will be “impossible”, he says.
And it may be too late to end extreme poverty and limit global warming to 1.5C, which is the aspirational goal of the Paris Agreement, he adds:
“We did not investigate the 1.5C [limit] explicitly but, as it is almost impossible to achieve the 1.5C goal, removing extreme poverty would not change that challenge significantly.”
Some charities have argued that eradicating extreme poverty is not ambitious enough. Instead, world leaders should seek to eradicate poverty completely.
This would mean moving the world’s poorest into what may be considered the “global middle class”, an income group that earns between $2.97 and $8.44 a day. This income group is the yellow section of the first chart in this article.
The increase in carbon footprints from pulling everyone in the lower income groups up into the global middle class would cause an additional 0.6C of warming by 2100, the study finds.
You can see this in the chart below. The yellow line again shows the baseline RCP2.6 scenario, and this time the green line shows the impact on global average temperature of eradicating poverty entirely. Both scenarios assume that global emissions will peak in 2020.
In order to end global poverty without causing significant additional warming, global leaders will need to ramp up climate mitigation efforts by 27%, the research finds.
To do this, countries may need to adopt negative emissions technologies on a large scale, Hubacek explains. However, many of the negative emissions techniques that were once hailed as “saviour technologies” have failed to live up to expectations. He adds:
“So far technology has not been able to keep up with additional emissions and our scenarios would require even more technological progress on top of what we would have otherwise.”
Instead, people in wealthier countries should consider adopting “lifestyle and behavioural changes” to reduce the size of their carbon footprints, he adds, in order to offset the extra carbon cost of ending poverty.
“Given that the global elites are responsible for 36% of the current carbon emissions, a discussion on global income distribution and carbon intensive lifestyles should at least become part of the discourse of future efforts towards a low carbon society.”
Hubacek, K. et al. (2017) Poverty eradication in a carbon constrained world, Nature Communications, doi:10.1038/s41467-017-00919-4 |
Being ready for kindergarten doesn’t depend on a child’s birthday, how many letters she can recognize, or how high she can count. Instead, the most important factors in kindergarten readiness are what researchers call non-cognitive skills: motivation, resilience, self-discipline.
Researchers call these skills non-cognitive to distinguish them from measures like IQ and test scores. But no one really agrees on what to call them. You’ll hear terms like “social-emotional skills,” “growth mindset,” and “grit.” What everyone does agree on is that these traits predict long-term outcomes such as academic attainment, employment, and health.
At Ring Mountain Day School, we look at children’s social and emotional skills as well as their pre-academic abilities. Can they negotiate with each other when they want to play with the same tricycle? When they’re frustrated by a toppling tower, can they pick up the pieces and try again?
In a student-centered school, our responsibility is to help the whole child grow. As students master the foundations of math, reading, and writing, we also attend to their character and development. Through reading books such as “Beautiful Oops!” by Barney Saltzberg, kindergarten and first grade students learn that mistakes open opportunities to grow, create and change. As we teach the basics of restorative justice, students learn to repair relationships and move forward.
When you walk into our kindergarten and first grade classroom, you’ll see all kinds of tools designed to build independence and regulation. The student-generated “Ways to Cool Down” chart on the wall reminds kids that they have a set of tools to manage their emotions, from smelling the lavender to reading a book. Conflict resolution circles let students role-play ways to handle playground challenges.
Students who are ready for kindergarten are eager to learn, ready to engage with each other, and able to grow from their mistakes. At RMDS, the kindergarten and first grade classroom offers explicit instruction in social-emotional learning, building non-cognitive skills for lifelong success. |
Physics and Compounds Help
Different elements can join together, sharing electrons. When this happens, the result is a chemical compound. One of the most common compounds on Earth is water, the result of two hydrogen atoms joining with an atom of oxygen. There are thousands of different chemical compounds that occur in nature.
Compounds: Not Just A Mixture!
A compound is not the same thing as a mixture of elements. Sometimes, however, when elements are mixed (and, if necessary, given a jolt of energy), compounds result because the elements undergo chemical reactions with each other. If hydrogen and oxygen are mixed, the result is a colorless, odorless gas. A spark will cause the molecules to join together to form water vapor. This reaction will liberate energy in the form of light and heat. Under the right conditions, there will be an explosion because the two elements join eagerly. When atoms of elements join together to form a compound, the resulting particles are molecules. Figure 9-3 is a simplified diagram of a water molecule.
Fig. 9-3. Simplified diagram of a water molecule.
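The overall reaction can be written as a balanced equation: 2H₂ + O₂ → 2H₂O. Two hydrogen molecules combine with one oxygen molecule to give two water molecules, which is why the gases are consumed in a 2:1 ratio by volume.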
Compounds often, but not always, appear different from any of the elements that make them up. At room temperature and pressure, both hydrogen and oxygen are gases, but water under the same conditions is a liquid. If the reaction just described were done in the real world, its heat would initially produce water vapor, which is a colorless, odorless gas. However, some of this vapor would condense into liquid water if the temperature got low enough for dew to form, and some would become solid, forming frost, snow, or ice, if the temperature dropped below the freezing point of water.
A note of caution: Do not try an experiment like this! You could be severely burned. In the extreme, if enough of the hydrogen-oxygen mixture is inhaled, your lungs will be injured to the point where you may die of asphyxiation. We sometimes read or hear news reports about home experimenters who blew themselves up with chemistry sets. Don’t become the subject matter for one of these stories!
Another common example of a compound is rust. This forms when iron joins with oxygen. Iron is a dull gray solid, and oxygen is a gas; however, iron rust is a maroon-red or brownish powder, completely unlike either of the elements from which it is formed. The reaction between iron and oxygen takes place slowly, unlike the rapid combination of hydrogen and oxygen when ignited. The rate of the iron-oxygen reaction can be sped up by the presence of water, as anyone who lives in a humid climate knows.
Compounds Can Be Split Apart
The opposite of the element-combination process can occur with many compounds. Water is an excellent example. When water is electrolyzed, it separates into hydrogen and oxygen gases.
You can conduct the following electrolysis experiment at home. Make two electrodes out of large nails. Wrap some bell wire around each nail near the head. Add a cupful (a half-pint) of ordinary table salt to a bucket full of water, and dissolve the salt thoroughly to make the water into a reasonably good electrical conductor. Connect the two electrodes to opposite poles of a 12-volt (12-V) battery made from two 6-V lantern batteries or eight ordinary dry cells connected in series. (Do not use an automotive battery for this experiment.) Insert the electrodes into the water a few centimeters apart. You will see bubbles rising up from both electrodes. The bubbles coming from the negative electrode are hydrogen gas; the bubbles coming from the positive electrode are oxygen gas (Fig. 9-4). You probably will see a lot more hydrogen bubbles than oxygen bubbles.
Fig. 9-4. Electrolysis of water, in which the hydrogen and oxygen atoms are split apart from the compound.
Be careful when doing this experiment. Don’t reach into the bucket and grab the electrodes. In fact, you shouldn’t grab the electrodes or the battery terminals at all. The 12 V supplied by two lantern batteries is enough to give you a nasty shock when your hands are wet, and it can even be dangerous.
If you leave the apparatus shown in Fig. 9-4 running for a while, you will begin to notice corrosion on the exposed wire and the electrodes. This will especially take place on the positive electrode, where oxygen is attracted. Remember that you have added table salt to the water; this will attract chlorine ions as well. Both oxygen and chlorine combine readily with the copper in the wire and the iron in the nail. The resulting compounds are solids that will tend to coat the wire and the nail after a period of time. Ultimately, this coating will act as an electrical insulator and reduce the current flowing through the saltwater solution.
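Faraday's law lets you estimate how much gas the experiment should produce. The following Python sketch assumes an illustrative cell current of 1 A (real currents depend on electrode spacing and salt concentration) and ignores the chlorine side reaction mentioned above.

```python
# Rough estimate of gas volumes from the electrolysis experiment,
# using Faraday's law. The cell current is an assumed value.

FARADAY = 96485.0       # coulombs per mole of electrons
MOLAR_VOLUME_L = 24.0   # litres per mole of gas at room temperature (approx.)

current_a = 1.0         # assumed cell current in amperes
time_s = 3600.0         # one hour of electrolysis

moles_electrons = current_a * time_s / FARADAY

# Cathode: 2 H2O + 2 e- -> H2 + 2 OH-   (2 electrons per H2 molecule)
moles_h2 = moles_electrons / 2
# Anode (ideal): 2 H2O -> O2 + 4 H+ + 4 e-   (4 electrons per O2 molecule)
moles_o2 = moles_electrons / 4

print(f"Hydrogen: {moles_h2 * MOLAR_VOLUME_L:.2f} L")  # about 0.45 L
print(f"Oxygen:   {moles_o2 * MOLAR_VOLUME_L:.2f} L")  # about 0.22 L
```

The 2:1 volume ratio is consistent with the observation that many more hydrogen bubbles appear than oxygen bubbles.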
Always In Motion
Figure 9-3 shows an example of a molecule of water, consisting of three atoms put together. However, molecules also can form from two or more atoms of a single element. Oxygen tends to occur in pairs most of the time in Earth’s atmosphere. Thus an oxygen molecule is sometimes denoted by the symbol O₂, where the O represents oxygen and the subscript 2 indicates that there are two atoms per molecule. The water molecule is symbolized H₂O because there are two atoms of hydrogen and one atom of oxygen in each molecule. Sometimes oxygen atoms are by themselves; then we denote the molecule simply as O. Sometimes there are three atoms of oxygen grouped together. This is the gas called ozone that has received attention in environmental news. It is written O₃.
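The subscripts make it easy to compute molar masses. A minimal Python sketch, using standard atomic masses:

```python
# Molar masses of the molecules named above, from standard atomic
# masses in g/mol. Each formula is given as {element: atom count}.

ATOMIC_MASS = {"H": 1.008, "O": 15.999}

def molar_mass(formula: dict) -> float:
    """Sum the atomic masses, weighted by how many of each atom appear."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

for name, formula in [("O2", {"O": 2}), ("H2O", {"H": 2, "O": 1}), ("O3", {"O": 3})]:
    print(f"{name}: {molar_mass(formula):.3f} g/mol")
```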
Molecules are always moving. The speed with which they move depends on the temperature. The higher the temperature, the more rapidly the molecules move around. In a solid, the molecules are interlocked in a sort of rigid pattern, although they vibrate continuously (Fig. 9-5a). In a liquid, they slither and slide around (see Fig. 9-5b). In a gas, they are literally whizzing all over the place, bumping into each other and into solids and liquids adjacent to the gas (see Fig. 9-5c).
Fig. 9-5. Simplified rendition of molecules in a solid (a), in a liquid (b), and in a gas (c). The gas molecules are shown smaller for illustrative purposes only.
Practice problems of these concepts can be found at: Particles of Matter Practice Test
|
Genetics is the study of the inheritance of characteristics and the role of DNA.
Variation and Meiosis
There are two types of variation between people: continuous and discontinuous. This variation can arise from meiosis, the process of cell division that produces gametes; steps include crossing over and bivalent formation. See Variation and Meiosis
The gametes are the sperm (structures: tail, acrosome, mitochondria) in males and the ovum (structures: polar body, jelly coat, yolk droplets) in females. Meiosis and life cycles are also important in reproduction. See Sexual Reproduction
The process of making proteins from 'instructions' in DNA and RNA. Transcription creates RNA in the nucleus and translation is the building of polypeptides using the ribosome and tRNA. Mutations can occur. See Protein Synthesis
This is the process of growing small plants from pieces of plant tissue (explants), in vitro in the laboratory. It uses sterile lab techniques and produces plantlets that can be used in agriculture. See Micropropagation
Inheriting characteristics from alleles. How is sex determined? Monohybrid and dihybrid inheritance, using the example of blood groups. Genes can be sex-linked (haemophilia) or interact with each other: epistasis. See Inheritance
Gene Therapy - Cystic Fibrosis
Cystic fibrosis is an inherited disease resulting from a faulty gene that causes mucus to be too thick. A potential cure is gene therapy: giving the patient the correct gene by virus or liposome. See Gene Therapy
Enzymes such as ligase and restriction endonuclease are used to manipulate genetic sequences, producing sticky ends. Pieces of genetic material can be inserted into plasmids so that bacteria produce proteins, such as insulin. See Genetic Engineering
DNA is a molecule that carries the genetic code; it is made up of nucleotides forming deoxyribonucleic acid, arranged in a double helix. Complementary base pairing is important in DNA replication and the polymerase chain reaction. See DNA
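Complementary base pairing is simple enough to sketch in a few lines of Python: A pairs with T, and C pairs with G, so the complement of a strand can be built base by base.

```python
# Minimal sketch of complementary base pairing in DNA.

PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the base-paired complementary strand."""
    return "".join(PAIRS[base] for base in strand.upper())

sequence = "ATGCCGTA"
print(sequence)              # ATGCCGTA
print(complement(sequence))  # TACGGCAT
```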
This makes genetically identical offspring. It is done by the process of somatic cell nuclear transfer. This technique is used in stem cell research and perhaps therapeutic cloning, but also reproductive cloning. There are ethical problems with this. See Cloning
Artificial insemination is the use of collected semen from bulls to inseminate and impregnate cows, without the two animals meeting. Like selective breeding, it is used to produce cattle with desirable characteristics. See Artificial Insemination
|
A mummy preserved about 2,250 years ago in Egypt suffered from prostate cancer. The mummy tumors were detected using a technology that only recently became available, so it might mean lots more ancient cancer cases will be revealed.
Scientists report in a paper soon to appear in the International Journal of Paleopathology that the mummy, known as M1, died when he was between 51 and 60. He is the oldest case of prostate cancer identified in ancient Egypt, and the second oldest in the world (the oldest was in the 2,700-year-old skeleton of a Scythian king in Russia). M1’s scans also show a malignant cancer that had spread to his spine and other areas around his pelvis.
Both cancers were identified using a high-resolution computerized tomography scanner that can detect tumors just one to two millimeters in diameter. A device that can discern such minute lesions has been available only since 2005, so researchers think that the number of cases of all types of cancer throughout human history has probably been underestimated.
Science Now reports that a 1998 study in the Journal of Paleopathology found just 176 cases of skeletal malignancies in tens of thousands of ancient human remains, so researchers thought cancer rates were much lower before industry started polluting everything. However, there were actually plenty of carcinogens around way back when: soot from wood-burning chimneys and fireplaces, and the bitumen used to build ancient boats, have both been linked to lung cancer.
In fact, in a recent paper in the journal Science, scientists reported that indoor cooking stoves kill 2 million people every year.
What I really want to know is: did they time this publication to fall near Halloween? |
Compare Numbers to 100 (I)
In this comparing-numbers worksheet, students solve 100 problems in which two-digit numbers are compared using the signs <, >, or =.
Ten-Frames – A Games Approach to Number Sense
"How can you help students visualize numbers in a way that is compatible with our base-ten number system?" The answer is simple: use ten-frames. Whether they're being used as a part of classroom routines or as instructional tools, this...
K - 3rd Math CCSS: Adaptable
Water: The Math Link
Make a splash with a math skills resource! Starring characters from the children's story Mystery of the Muddled Marsh, several worksheets create interdisciplinary connections between science, language arts, and math. They cover a wide...
1st - 4th English Language Arts CCSS: Adaptable |
An instrument is a piece of equipment that can detect physical or non-physical changes in a specific process and convert those changes into a form the user can understand. Information obtained from instrumentation can be presented in two ways: display and recording. A display only shows the current status of the variables; a recording method stores the data, allowing the user to view both the current status of a variable and its past changes.
Instrumentation equipment generally has three main parts (a minimal sketch of this pipeline follows the list):
Sensor: A tool that detects changes in the process.
Amplifier and adapter: the changes detected by the sensor can be very small, so this information must be amplified and conditioned before it can be displayed.
Display: the information obtained must be clearly visible to the user. This can be done with a calibrated gauge or an electronic readout.
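As a minimal illustration of this three-part pipeline, here is a Python sketch of a hypothetical temperature instrument; the sensitivity and gain figures are invented for the example.

```python
# Minimal sketch of the sensor -> amplifier -> display pipeline.
# All numbers are illustrative assumptions.

def sensor(true_temperature_c: float) -> float:
    """Hypothetical sensor: converts temperature into a tiny voltage."""
    return true_temperature_c * 0.001  # 1 mV per degree Celsius (assumed)

def amplify(signal_v: float, gain: float = 1000.0) -> float:
    """Amplifier/adapter: boosts the weak sensor signal."""
    return signal_v * gain

def display(signal_v: float) -> None:
    """Display: presents the re-scaled reading to the user."""
    print(f"Temperature reading: {signal_v:.1f} degC")

display(amplify(sensor(23.5)))  # Temperature reading: 23.5 degC
```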
Usually the information obtained by the instrument must be sent to a control center or control room, which is often located far from the instrumentation site. |