What are wound and skin infections?
Wound and skin infections represent the invasion of tissues by one or more species of microorganism. This infection triggers the body's immune system, causes inflammation and tissue damage, and slows the healing process. Many infections remain confined to a small area, such as an infected scratch or hair follicle, and usually resolve on their own. Others may persist and, if untreated, increase in severity and spread further and/or deeper into the body. Some infections spread to other organs and/or into the blood and cause a systemic infection (septicemia).
Skin is the body's largest organ and its first line of defense. Even when it is clean, the surface of the skin is not sterile. It is populated with a mixture of microorganisms called normal flora. This normal flora forms a dynamic barrier that helps to keep other more harmful microorganisms (pathogens) at bay. People may also have pathogens on their skin. At any one time, a certain percentage of the general population will be carriers of a pathogen that displaces some of their normal flora and colonizes locations like the mucous membranes of the nose. Most of the time, normal flora and colonizing pathogens do not cause illness and do not stimulate the immune system. If there is a break in the skin or if the immune system becomes compromised, then any of the microorganisms present can cause a wound or skin infection.
Wounds are breaks in the integrity of the skin and tissues. They may be superficial cuts, scrapes or scratches but also include punctures, burns, or may be the result of surgical or dental procedures. The microorganisms likely to infect them depend on the wound's extent and depth, the environment in which the wound occurs, and the microorganisms present on the person's skin. The skin has three layers: the outer epidermis, the dermis – where many hair follicles and sweat glands are located – and the fatty subcutaneous layer. Below these layers are membranes that protect connective tissues, muscle, and bone. Wounds can penetrate any of these layers, and skin infections can spread into them. Wound healing is a complex process that involves many related systems, chemicals, and cells working together to clean the wound, seal its edges, and to produce new tissues and blood vessels.
Skin and wound infections interfere with the healing process and can create additional tissue damage. They can affect anyone, but those with slowed wound healing due to underlying conditions are at greater risk. Examples of conditions that increase the risk of wound infections include:
- Poor circulation
- Weakened/suppressed immune system (e.g., HIV/AIDS, organ transplant recipient)
- Low mobility or immobility (e.g., confined to bed, paralysis)
When infections penetrate deep into the body, into tissues such as bone, or when they occur in tissue that has inadequate circulation, they can become difficult to treat and may turn into chronic infections.
Historical Book Arts Collection
Woodcuts were the most common illustration technique in early printed books and continue to be used in modern limited edition books today. Relief prints had been used in Asia for hundreds of years before they were created in Europe. Prior to letterpress printing in Europe [1450-55], books called block books were printed from single planks of wood with both text and image carved out.
Woodcuts in European books were the primary form of illustration in letterpress printed books from 1460 to 1550 and continued to be used long after that heyday. The LIBER CHRONICARUM of 1493 is the most highly illustrated book of the incunabula period. Woodcut images are carved with the grain into wooden planks by hand using special knives or chisels. The wood that is not part of the design is cut away, leaving the image in relief. Woodcuts are fragile because of the tendency of the wood to split along the grain. This led to images with wider lines [fine detail was difficult to do] and blocks with a limited printing lifespan. The relief image was then inked using an ink ball. Paper was placed over the block and the surface was rubbed. When the printing press was invented in Europe, pressmen made the image blocks and the relief type the same height. This allowed the text and image to be inked and then printed at the same time, saving time and resources.
Famous early woodcut artists were Albrecht Durer, Hans Holbein, Jost Amman and Virgil Solis.
By the seventeenth century, engraving and etching had displaced woodcuts as the most common form of printed image.
The term “wood engraving” is a little misleading. It, too, is a relief block process where the wood around the image is removed, leaving the image in relief. What differentiates the wood engraving from the woodcut is that the image is cut on the end grain [against the grain] of the wood rather than the plank side [with the grain], using a tool similar to an engraver’s burin. Because the close grain of the block allows for a more detailed and finer line, wood engravings were more precise than woodcuts. Often wood engravers could achieve a tonal quality not obtainable by printing with woodcuts. Although used early in conjunction with woodcuts, after 1600 this medium was used only for small printing jobs, such as head and tail pieces and small vignettes. During its revival at the end of the 18th century, wood engraving achieved popularity through its rediscovery by Thomas Bewick, an English engraver. Bewick’s rediscovery was of phenomenal importance because the images had all the detail of one made on a copper plate but could be printed at the same time as the text, decreasing the amount of press work by half.
Wood engravers, however, were often not the artists themselves but craftsmen diligently copying the work of artists, who would present their drawings to the “blockmakers”. When tens of thousands of copies were required for the populace during the years of steam-powered printing presses, this technique was the only one that could produce cheap but acceptable-quality products.
Other types of relief printing are: metal relief printing, color relief printing, chiaroscuro, line blocks, halftones.
Intaglio prints are characterized by an impression on the printing paper of the frame of the original metal plate, often called a plate mark. Except where the plate mark has been trimmed off after printing [not normally done], all intaglios should have a plate mark. There are some other processes which create a shadow behind an image that appears to be a plate mark but isn’t.
Copper engravings made their first appearance in the mid-fifteenth century. This is an intaglio process by which the design is incised into a relatively thin metal plate surface with a tool known as a burin, instead of the image being cut away from the surface as with woodcuts. Details of these copper engravings show the characteristic tapered line style imposed by the use of the burin. Toning and modeling in an image are achieved by crosshatching and changing the width of the engraved line. After the plate is made, the entire surface is covered with ink. The “u” or “v” shaped channels in the metal fill with the ink and the surface is wiped clean. The plate is then put into a special press, called an intaglio or etching or engraving press, that puts a tremendous amount of pressure on the plate. Dampened paper is used, and the paper draws up the ink from the plate.
Copper engraving became quite popular with artists who used the form to design and create their own images. One such artist was William Hogarth [1697-1764]. Hogarth obtained an Act from the English Parliament to give artists the sole right to print and sell copies of their own work. Other famous English engravers are Thomas Rowlandson and George Cruikshank.
During the sixteenth century, however, some copper engravers became craftsmen who were relegated to reproducing the works of others, often paintings. Copper engravings remained popular until eventually being replaced by steel engravings in about 1820. The disadvantage of the engraving process is that it is incompatible with the relief printing required for text. Engraved illustrations generally were done on separate sheets and inserted into the text, frequently following instructions from the printer to the binder as to where the plates should go.
There are a number of specialized engraving forms: crayon manner and stipple engraving, line engraving, steel engraving.
Etching is also an intaglio process. In the etching process, a heated copper plate is covered with a ground made of wax, gum mastic and asphaltum that is impervious to acid. The engraver then uses an etching needle to draw his image into the wax, exposing the copper plate underneath. Immersion in an acid bath allows the acid to eat, or bite, into the metal, forming grooves that are later inked and printed. The longer the plate stays exposed to the acid, the deeper and broader the line becomes. The etcher can control the intensity of the line by the stopping-out process. In that process the plate is removed from the bath, and lines that have been bitten deeply enough can then be varnished over, making that portion of the plate impervious to the next submersion in the acid. Alternatively, the etcher can impose newly etched lines into the wax to achieve light and dark lines.
Many intaglio illustrations are a combination of etching and engraving. Backgrounds and major elements of images would be etched and then fine details added through engraving. Most of the time these terms were used interchangeably, even on the title pages of books.
The first etchings appeared at the beginning of the 16th century. By the 17th century engraving and etching had replaced the woodcut as the most common form of illustration. After 1900, most engraving and etching was replaced by printed photographic processes.
Other types of etching processes are: aquatint, soft ground
Other types of intaglio are: mezzotint, drypoint, nature prints, photogravures
Planographic prints are characterized by having no plate mark [see under Intaglio above].
Lithography, a planographic method, was invented by a German, Alois Senefelder, at the end of the 18th century and became increasingly popular in artistic and commercial work. There are two types of lithography: chalk style and line style. Both were originally done on a printing surface of limestone: a heavy, thick stone block serves as the surface for the image. The image is drawn with a greasy ink or pencil and the stone is then dampened. Ink is applied; it sticks to the image and is repelled by the water. The print is made using a special press that will accept the stones.
Later on zinc plates replaced the stones, being easier to handle and lighter.
The look of lithographic images can vary remarkably. The surface can be grained, stippled, sprinkled or dabbed, changing the quality of the image. Sometimes a second tint stone was used to add overall color to the background. This required a second run through the press [a second impression] and is called a tinted lithograph.
Other forms of lithography are: chromo-lithography, transfer lithography, collotypes, photolithographs
Stroke occurs when the blood supply to the brain is cut off. The most common type is ischemic stroke, in which the blood supply to the brain is disrupted by a clot. Ischemic strokes are of two types: thrombotic stroke, in which the clot forms in the arteries of the brain, and embolic stroke, in which the clot forms elsewhere in the body and travels through the bloodstream to the arteries of the brain.
An embolic stroke results from an embolus, a clot that has travelled from another location. While an embolus can form in many areas of the body, the heart, neck and chest are the most common locations. The embolus enters the bloodstream and flows toward the arteries of the neck and brain, travelling until it lodges in one of the arteries supplying the brain. This blocks the artery and cuts off blood flow to part of the brain, resulting in an embolic stroke.
Ischemic strokes account for around 80% of strokes in the United States. An embolic stroke occurs when an embolus blocks the blood supply to the brain, which can cause brain cells to die from lack of blood. The severity of the stroke and the damage it causes depend on how long the blood supply to the brain is interrupted and how quickly the person receives emergency treatment. Embolic stroke causes many deaths every year, and many survivors are left with permanent disability.
Causes of Embolic Stroke
Causes of an embolic stroke include anything that can form a clot in the brain, neck or heart. An embolus can result from the deposition of atherosclerotic plaque on the inner lining of arteries. It can also develop when fat globules or air bubbles enter the bloodstream and lodge in arteries. Another cause of embolus formation is an abnormal heart rhythm, medically termed atrial fibrillation.
Risk factors that can increase the risk of ischemic or embolic stroke include older age, heart disease, high blood pressure, high cholesterol, diabetes, obesity, certain autoimmune disorders and a family history of stroke or heart disease. Lifestyle factors such as smoking, alcohol consumption and poor dietary habits can also increase the risk of embolic stroke.
Symptoms of Embolic Stroke
An embolic stroke presents with the warning signs of stroke, which, if identified early, allow immediate medical treatment to begin. The precise symptoms vary depending on which part of the brain is affected.
The signs and symptoms of embolic stroke include:
- Sudden changes in the face: numbness on one side, inability to smile, or drooping of the lips to one side.
- Sudden weakness or numbness in the arms, legs or one side of the body.
- Sudden difficulty seeing with one or both eyes.
- Severe headache, confusion, difficulty understanding or trouble speaking.
- Sudden loss of balance, poor coordination, dizziness or trouble walking.
Diagnosis of Embolic Stroke
Embolic stroke is an emergency that needs immediate medical care. History-taking and clinical examination can reveal existing medical problems and any symptoms of embolic stroke the person has experienced. The degree of severity can then be assessed and an emergency medical management approach planned.
Investigations like brain scans, angiograms or Doppler studies may be performed to detect clots and assess blood flow to the brain. Investigations of the heart may also be done to evaluate heart function and to look for blood clots and related problems that may have caused the embolic stroke.
Treatment of Embolic Stroke
Embolic stroke calls for an emergency treatment plan aimed at dissolving the clot or removing the embolus from the artery and restoring the blood supply to the brain. Clot-dissolving medicines are given orally or administered intravenously.
Surgical procedures that help prevent further strokes may be required. These include opening narrowed, plaque-lined arteries (carotid endarterectomy) or placing stents in the artery to keep it open and prevent it from narrowing. Long-term treatment depends on the cause of the embolic stroke and often includes medications to prevent further strokes.
Recovery Period for Embolic Stroke
Recovery depends on the degree of damage caused by the embolic stroke. In most cases, difficulty moving the limbs persists after the episode and may resolve within a few months. Physical or speech therapy and rehabilitation are often required to regain strength, coordination, balance, speech and other impaired functions.
Most patients show steady improvement with proper rehabilitation and regular medication. People with fewer and milder symptoms often recover better than those with more severe symptoms, and younger people tend to improve more than older ones.
The Poet's Voice
Write a poem in the style of MM that discusses the things in life that make you happy.
The Traveler's Way
Create a map that charts out all of MM's pilgrimages. Name the temples and the surrounding regions, as well as label the time period. Give information about her traveling companions and her reasons for the trip.
The Historical Perspective
Create a presentation for the class in any medium you wish that discusses the real-life people in "The Kagero Diary." Show how they influenced the development of their community, as well as how they might have influenced MM, or how she influenced them.
Write a paper or create a presentation that discusses the presence of feminism in the Heian period. Relate this to MM and discuss whether her actions and beliefs were in line with the ideas of the time period.
Another Man's Words
Our focus the past two weeks in Shared Reading has been fiction. The essential question we have been asking 6th graders is: How do the actions of the characters in the text move the plot to a resolution? (6.RI.3)
Students have been able to describe how a particular story's plot unfolds in a series of episodes, as well as evaluate how the characters respond to change as the plot moves toward a resolution.
We began by reading the short story "To Sleep Under the Stars." As a class we made a story plot chart that included the characters, setting, and theme. From there we identified the main character and searched for character traits, along with quotes from the text that supported each trait. The last task we completed was to write a paragraph about the character and their traits using citations.
In groups, students went through the same process with another short story called "Just a Pidgeon."
This past week, students have been working with a partner on the short story "Eleven" by Sandra Cisneros. They have dissected the story, identified traits and evidence for the main character, Rachel, and are now working on their paragraphs.
Here are some examples of what we have been working on:
Gauss’s Law and Its Applications
The relationship between the net electric flux through a closed surface (often called a Gaussian surface) and the charge enclosed by the surface is known as Gauss’s law. Consider a positive point charge $q$ located at the center of a sphere of radius $r$. We know that the magnitude of the electric field everywhere on the surface of the sphere is $E = k_e q / r^2$. The field lines are directed radially outward and hence are perpendicular to the surface at every point, so at each surface point $\vec{E}$ is parallel to the vector $\Delta\vec{A}$ representing the local element of area surrounding that point. Therefore $\vec{E} \cdot \Delta\vec{A} = E\,\Delta A$, and the net flux through the Gaussian surface is

$$\Phi_E = \oint \vec{E} \cdot d\vec{A} = \oint E\,dA = E \oint dA,$$

where we have moved $E$ outside of the integral because, by symmetry, $E$ is constant over the surface. The value of $E$ is given by $E = k_e q / r^2$. Furthermore, because the surface is spherical, $\oint dA = A = 4\pi r^2$.

Hence, the net flux through the Gaussian surface is

$$\Phi_E = \frac{k_e q}{r^2}\,(4\pi r^2) = 4\pi k_e q = \frac{q}{\varepsilon_0},$$

using $k_e = 1/(4\pi\varepsilon_0)$.

This equation shows that the net flux through the spherical surface is proportional to the charge inside the surface. The flux is independent of the radius $r$ because the area of the spherical surface is proportional to $r^2$, whereas the electric field is proportional to $1/r^2$; in the product of area and electric field, the dependence on $r$ cancels.

Now consider several closed surfaces surrounding a charge $q$: a spherical surface $S_1$ and nonspherical surfaces $S_2$ and $S_3$. The flux that passes through $S_1$ has the value $q/\varepsilon_0$. Flux is proportional to the number of field lines through a surface, and the same lines that pass through $S_1$ also pass through the nonspherical surfaces $S_2$ and $S_3$. Therefore, the net flux through any closed surface surrounding a point charge $q$ is given by $q/\varepsilon_0$ and is independent of the shape of that surface.

Now consider a point charge located outside a closed surface of arbitrary shape. As can be seen from this construction, any electric field line entering the surface leaves the surface at another point. The number of electric field lines entering the surface equals the number leaving the surface. Therefore, the net electric flux through a closed surface that surrounds no charge is zero. The net flux through a cube, for example, is zero if there is no charge inside the cube.

Let’s extend these arguments to two generalized cases: (1) that of many point charges and (2) that of a continuous distribution of charge. We use the superposition principle, which states that the electric field due to many charges is the vector sum of the electric fields produced by the individual charges. Therefore, the flux through any closed surface can be expressed as

$$\Phi_E = \oint \vec{E} \cdot d\vec{A},$$

where $\vec{E}$ is the total electric field at any point on the surface, produced by the vector addition of the electric fields at that point due to the individual charges. Consider a system of charges $q_1$, $q_2$, $q_3$ and $q_4$. A surface $S$ that surrounds only $q_1$ has net flux $q_1/\varepsilon_0$ through it. The flux through $S$ due to the charges outside it is zero, because each electric field line from those charges that enters $S$ at one point leaves it at another. A surface $S'$ that surrounds $q_2$ and $q_3$ has net flux $(q_2 + q_3)/\varepsilon_0$. Finally, the net flux through a surface $S''$ that encloses no charge is zero, because all the electric field lines that enter it at one point leave it at another. Charge $q_4$ does not contribute to the net flux through any of the surfaces because it is outside all of them.

Gauss’s law is a generalization of what we have just described and states that the net flux through any closed surface is

$$\Phi_E = \oint \vec{E} \cdot d\vec{A} = \frac{q_{\mathrm{in}}}{\varepsilon_0},$$

where $\vec{E}$ represents the electric field at any point on the surface and $q_{\mathrm{in}}$ represents the net charge inside the surface.
APPLICATIONS OF GAUSS’S LAW TO VARIOUS CHARGE DISTRIBUTIONS
Gauss’s law is useful for determining electric fields when the charge distribution is highly symmetric. The following examples demonstrate ways of choosing the Gaussian surface over which the surface integral

$$\Phi_E = \oint \vec{E} \cdot d\vec{A}$$

can be simplified and the electric field determined. In choosing the surface, always take advantage of the symmetry of the charge distribution so that $E$ can be removed from the integral. The goal in this type of calculation is to determine a surface for which each portion of the surface satisfies one or more of the following conditions:

1. The value of the electric field can be argued by symmetry to be constant over the portion of the surface.
2. The dot product $\vec{E} \cdot d\vec{A}$ can be expressed as a simple algebraic product $E\,dA$ because $\vec{E}$ and $d\vec{A}$ are parallel.
3. The dot product $\vec{E} \cdot d\vec{A}$ is zero because $\vec{E}$ and $d\vec{A}$ are perpendicular.
4. The electric field is zero over the portion of the surface.
Electric Field Due to a Line Charge - Cylindrical Symmetry
Let's find the electric field due to a line charge. Consider the field due to an infinitely long line of charge, as opposed to one of finite length. Clearly, it is impossible to talk about a finite amount of charge stretched over an infinitely long distance; instead, we state that the line has a constant linear charge density $\lambda$ (charge per unit length).
Realistically, all line charges are finite. Consider the figure below, which shows a view of the line charge and a point P a distance h away from it. We have to find the electric field at point P. To set up the integral, take infinitesimally small line segments of charge in pairs so that their horizontal components cancel and the vertical (i.e. radial) components add.
Figure: Calculation of the electric field at the midpoint of a line charge of length l.
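For the idealized infinite line with linear charge density $\lambda$, Gauss's law yields the field at a distance $h$ directly. The following sketch is added for illustration (it is not the essay's own worked figure): choose a coaxial cylindrical Gaussian surface of radius $h$ and length $\ell$. The field is radial, so the flux through the flat end caps is zero (condition 3 above), while on the curved side $\vec{E}$ is parallel to $d\vec{A}$ and constant by symmetry (conditions 1 and 2):

$$\Phi_E = E\,(2\pi h \ell) = \frac{q_{\mathrm{in}}}{\varepsilon_0} = \frac{\lambda \ell}{\varepsilon_0} \quad\Longrightarrow\quad E = \frac{\lambda}{2\pi\varepsilon_0 h} = \frac{2 k_e \lambda}{h}.$$

Note that $E$ falls off as $1/h$ rather than the $1/h^2$ of a point charge, and that the arbitrary length $\ell$ cancels.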
Music for three or four voices is at once a logical extension and a glorious expansion of the two-voice elements we have just considered. It involves combining intervals to create new multi-voice sonorities, and also combining or superimposing two-voice progressions to build unifying cadences.
Around 1300, Johannes de Grocheio tells us that three voices are required for complete harmony, and in fact three-voice compositions become the norm from the age of Perotin on.
In this section, we survey some of the most important categories of stable and unstable combinations for three or four voices. Then in Section 4, we focus on directed cadential resolutions, while in Section 5 we consider obliquely resolving sonorities.
In theory and practice, the unit of complete harmony in the 13th century is a combination with three voices and three intervals: the trine (trina harmoniae perfectio, or "threefold perfection of harmony," as Johannes de Grocheio calls it). This sonority requires three voices (the foundation tone, the fifth, and the octave), and it includes three intervals: an outer octave, a lower fifth, and an upper fourth:
g' (top)
d'
g (bottom)

(lower fifth g-d', upper fourth d'-g', outer octave g-g')
Throughout the 13th century, and well beyond, this combination represents ideal euphony and stable plenitude; it is a point of rest and the goal of unstable sonorities.
Using a much later but rather familiar form of notation, we may describe this combination as 8/5 (8 + 5 + 4). The "8/5" tells us that the intervals above the lowest tone are an octave and fifth, while the "(8 + 5 + 4)" identifies all three intervals, including the upper fourth.
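Though the text above does not spell it out, the arithmetic behind this ideal arrangement is tidy in the Pythagorean tuning standard for this period, where the fifth and fourth are the pure ratios 3:2 and 4:3: stacking a lower fifth and an upper fourth yields exactly the 2:1 octave,

$$\frac{3}{2} \times \frac{4}{3} = \frac{2}{1}.$$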
In theory and practice, these same three intervals may be arranged conversely so that the fourth is below and the fifth above:
g' (top)
c'
g (bottom)

(lower fourth g-c', upper fifth c'-g', outer octave g-g')
This combination - 8/4 (8 + 5 + 4) -- is also common, especially in the period around 1200, but is very rarely conclusive. Around 1325, Jacobus of Liege expresses the likely view of 13th-century musicians that this sonority, while concordant, is less pleasing than the ideal arrangement of fifth below and fourth above. He suggests a general rule that a larger or more blending interval (here the fifth) should best be placed below a smaller or less blending interval (here the fourth).
In this guide, I shall use the term "trine" for both the 8/5 and 8/4 combinations, but with the 8/5 trine normally presumed unless the context indicates otherwise.
While the trine represents complete and stable harmony, two families of mildly unstable combinations add sheer vertical color to the music as well as lending themselves to a variety of directed and decorative resolutions. Happily, we have an eloquent witness: Jacobus of Liege mentions the pleasing qualities of these combinations, and indeed the music speaks for itself.
In the quinta fissa or "split fifth" of Jacobus, an outer fifth is "divided" by a third voice into two thirds: 5/M3 (5 + M3 + m3) or the "converse" arrangement of 5/m3 (5 + m3 + M3). Here the fifth is ideally blending, while the two thirds are unstable but relatively blending (being the mildest unstable intervals).
Incidentally, Jacobus prefers the form with the major third below and minor third above, but notes that the converse is also permissible, citing the opening of a 13th-century motet preserved in the Bamberg Codex.
First example: g-b-d' (M3 below, m3 above, outer fifth). Second example: a-c'-e' (m3 below, M3 above, outer fifth).
The 5/3 combination often resolves by directed contrary motion (Section 4.1), and has a featured role in many internal and final cadences. Additionally, throughout the century it is often treated more freely, as we might expect for one of the mildest unstable combinations, and in fact the only one to consist exclusively of stable or relatively blending intervals.
Jacobus also tells us about another favorite kind of mildly unstable combination common in practice from Perotin on. Two fifths, two fourths, or a fifth and a fourth combine with a relatively tense M2, m7, or M9 in a kind of energetic blend or fusion.
In his monumental Speculum musicae or "Mirror of Music," Jacobus enthusiastically recommends the three-voice sonority of a major ninth "split" into two fifths by a third voice, i.e. 9/5 (M9 + 5 + 5). He also observes that it is pleasant if a minor seventh is "split" into two fourths, i.e. 7/4 (m7 + 4 + 4).
First example: f-c'-g' (two fifths f-c' and c'-g', outer M9). Second example: g-c'-f' (two fourths g-c' and c'-f', outer m7).
Additionally, Jacobus mentions another very common type of sonority, in which an outer fifth is "split" into a fourth below and a major second above, or the converse:
First example: g-c'-d' (fourth below, M2 above, outer fifth). Second example: g-a-d' (M2 below, fourth above, outer fifth).
These four sonorities, like 5/3, represent the mildest unstable combinations possible: here two of the intervals are ideally blending fifths and/or fourths, while the third interval (M2, m7, or M9) is relatively tense but not sharply discordant.
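As a further gloss in the same Pythagorean terms (again, not in the original text): these combinations stack pure fifths (3:2) and fourths (4:3), and the relatively tense third interval emerges as their product,

$$\frac{3}{2}\times\frac{3}{2}=\frac{9}{4}\ \text{(M9)},\qquad \frac{4}{3}\times\frac{4}{3}=\frac{16}{9}\ \text{(m7)},\qquad \frac{4}{3}\times\frac{9}{8}=\frac{3}{2}\ \text{(a fifth split as 4 + M2)}.$$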
The treatise of Jacobus suggests that to 13th-century ears, as to modern ones, the overall impression was one of an energetic variant on 8/5 or 8/4, with the unstable major second or ninth or minor seventh lending a sense of excitement and motion.
While these combinations sometimes participate in directed cadential progressions (Section 4.2), they often lend themselves to resolutions by oblique motion (Section 5.2), or, like 5/3, to freer treatments.
In addition to stable trines and mildly unstable combinations, composers of the 13th century deploy some strikingly tense cadential combinations resolving very effectively to a complete trine or a fifth (the prime interval of an 8/5 trine). These combinations fall into two major families.
Combinations with an outer sixth characteristically resolve to a complete trine, with the sixth expanding to the octave of this trine. We shall focus on this group of cadences in Section 4.3.
For now, it may suffice to give some examples of the most common forms: 6/3, 6/5, 6/5/3, 6/2, and 6/4. These sonorities, although all on the tense side, may vary considerably in their degree of tension. We should recall that M6 is relatively tense, roughly on par with M2 and m7, while m6 is often regarded as sharply discordant (like m2, M7, tritone). Forms involving m6, m2, or tritonic fifths or fourths heighten the level of tension, and along with their somewhat gentler relatives are very effectively employed by Perotin and other composers. The following examples may give a sampling of these possibilities:
- m6/m3 (m6 + m3 + 4): e.g. e-g-c' (m3 below, 4 above, outer m6)
- M6/5 (M6 + 5 + M2): e.g. g-d'-e' (5 below, M2 above, outer M6)
- m6/5/m3 (m6 + 5 + m2 + m3 + M3 + 4): e.g. the four-voice a-c'-e'-f' (m3 + M3 + m2)
- m6/M2 (m6 + M2 + d5): e.g. a-b-f' (M2 below, d5 above, outer m6)
- M6/4 (M6 + 4 + M3): e.g. g-c'-e' (4 below, M3 above, outer M6)
The fact that these sixth sonorities include a large number of unstable intervals means from a cadential perspective that they invite some very dynamic resolutions. As we shall see, the 6/3, 6/5, and 6/5/3 combinations can expand to a complete trine in an especially efficient manner which makes them among the most favored of cadential sonorities.
Additionally, the 6/5 combination often resolves by oblique motion to a simple fifth (the highest voice descending a step while the others remain stationary), as is discussed in Section 5.3.
In the other leading family of more tense cadential combinations, an outer minor seventh characteristically contracts by stepwise contrary motion to a fifth. In Section 4.4, we shall explore these standard progressions.
For now, let us briefly look at these sonorities themselves. As with the sixth combinations, they are all decidedly on the tense side, but more so in the case of forms including M7 or a tritonic fifth. Here we consider 7/3, 7/5, and 7/5/3, sampling some of these possibilities:
- M7/M3 (M7 + M3 + 5): e.g. f-a-e' (M3 below, 5 above, outer M7)
- m7/d5 (m7 + d5 + M3): e.g. b-f'-a' (d5 below, M3 above, outer m7)
- m7/5/m3 (m7 + m3 + M3 + m3 + 5 + 5): e.g. the four-voice e-g-b-d' (m3 + M3 + m3)
As in the case of our sixth combinations, a preponderance of unstable intervals means a wealth of opportunities for directed cadential action. Additionally, the 7/5 combination lends itself to an oblique resolution where the upper voice ascends stepwise (Section 5.4).
Here we have by no means covered the full range of combinations appearing in 13th-century music: Jacobus lists a catalogue of such sonorities with outer intervals ranging from a major third to a twelfth (about the practical limit, given the typical range of voices in this period). However, attuning ourselves to some of the most prevalent and important families of sonorities is a large step toward appreciating and understanding this music.
The stable trine (8/5, with its variant form of 8/4) is the unit of complete harmony, and the ultimate goal of unstable combinations.
The "split fifth" with its two thirds (5/3), and mildly unstable combinations featuring a preponderance of fifths or fourths (9/5, 7/4, 5/4, 5/2), are relatively blending. They add pleasant vertical color to the music, and lend themselves either to standard resolutions or to a freer treatment.
Other, tenser, combinations strongly invite directed cadential resolutions where an outer sixth expands to the octave of a complete trine (6/3, 6/5, 6/5/3, 6/2, and sometimes 6/4), or an outer seventh contracts to a fifth (7/3, 7/5, 7/5/3).
Having considered the harmonic vocabulary of 13th-century music, we now turn to its dynamic grammar: the ways in which an unstable sonority may effectively resolve to a stable one.
Margo Schulter
Handpicked Research Paper Topics About Nietzsche
Friedrich Nietzsche was a German philosopher who had a profound influence on some of the ideas found in Western philosophy. He criticized the reason of the Western society, challenged traditional morality, and disputed the focus that religion placed on the spirit of man. His large level of impact means that when you have to write a research paper, you have a number of topics to choose from. Here are some to get you started.
- Nietzsche’s Philosophy and the Athens-Jerusalem Conflict- The conflict between Athens and Jerusalem was one that set reason against faith. Discuss this conflict and the way that Nietzsche’s own philosophy relates to it.
- Slave Morality vs. Master Morality- Discuss whether or not Nietzsche believes there is a distinction between slave morality and master morality. How does this fit with his view of objective values, and how does that affect his ideas?
- Personal Morals vs. Society- Think about the expectations that society has, as far as morals. What was Nietzsche’s opinion on this? Do you think that his opinion would have changed in today’s society?
- Nietzsche and His Opinion of the Herd Instinct- The concept of herd instinct explores the idea of people acting for the good of the community. It is sometimes believed that those who act morally are then punished by society. Think of examples of how this happens today, for example, in the case of Edward Snowden.
- Nietzsche and Nazism- Analyze Nietzsche’s ideas surrounding will to power. How does this relate to Nazism? Do you believe that Nietzsche’s will to power theories contributed to Nazism? Why or why not?
- Morality, Will to Power, and Egoism- Consider Nietzsche’s theories on will to power and herd morality. How does psychological egoism relate to these ideas? Does psychological egoism make morality almost impossible in today’s society? Explain.
- Nietzsche, Other Scholars, and Pity- Take scholars such as Kant, Spinoza, Plato, and La Rochefoucauld and compare their opinions on pity with Nietzsche’s. Think about the vast difference in their many ideas and then compare their opinions on pity.
As you choose your topic for a research paper on Nietzsche, be sure you are choosing one of an appropriate scope for the requirements of your paper. Then, delve into your research. Nietzsche was quite an interesting man, so dig around until you find an idea that interests you.
The Future of CPUs in Brief
Today silicon is king. But if computers are going to keep up with Moore’s law, they’ll need something better.
In the world of computers, silicon is king. The semiconducting element forms regular, near-perfect crystals into which chipmakers can carve the hundreds of millions of features that make up the microchips powering today’s processors. Technological improvements let chipmakers cut the size of those features in half every 18 months, a feat known as Moore’s law, after Intel cofounder Gordon Moore. Today, that size hovers around 180 nanometers (180 billionths of a meter), and researchers expect to push below 50 nanometers within a decade. But that’s about as far as silicon can go: below that, quantum physics makes electrons too unruly to stay inside the lines. If computers are to keep up with Moore’s law, they will have to move beyond silicon. After a couple of decades of theorizing, computer scientists, bioengineers and chemists in the mid-1990s began lab experiments seeking alternative materials for future CPUs and memory chips. Today, their research falls into three broad categories: quantum, molecular and biological computing.
In the field of quantum computing, researchers seek to harness the very quantum effects that will be silicon’s undoing. Scientists have succeeded in making rudimentary logic gates out of molecules, atoms and subatomic particles such as electrons. And incredibly, other teams have discovered ways to perform simple calculations using DNA strands or microorganisms that group and modify themselves.
Molecular Building Blocks
In one type of molecular computing (or nanocomputing), joint teams at Hewlett Packard Co. and UCLA sandwich complex organic molecules between metal electrodes coursing through a silicon substrate. The molecules orient themselves on the wires and act as switches. Another team at Rice and Yale universities has identified other molecules with similar properties.
Normally, the molecules won’t let electrons pass through to the electrodes, so a quantum property called tunneling, long used in electronics, is manipulated with an electric current to force the electrons through at the proper rate. If researchers can figure out how to lay down billions of these communicating molecules, they’ll be able to build programmable memory and CPU logic that is potentially millions of times more powerful than in today’s computers.
Molecular researchers like the HP/UCLA team, however, face a challenge in miniaturizing their current wiring technology (nanowires made from silicon strands) from several hundred nanometers to approximately 10 nanometers. Carbon nanotubes are promising substitutes. The rigid pipes make excellent conductors, but scientists must figure out how to wrangle them into the latticework needed for complex circuitry. “We’ve shown that the switching works,” says HP computer architect Philip Kuekes. “But there is still not as good an understanding of the basic mechanism so that an engineer can design with it.” Hewlett Packard and UCLA have jointly patented several techniques for manufacturing molecular computers, most recently in January 2002.
Although molecular circuits employ some quantum effects, a separate but related community of scientists is exploring the possibilities of quantum computing: computing with atoms and their component parts. It works from the notion that some aspect of a subatomic particle (say, the location of an electron’s orbit around a nucleus) can be used to represent the 1s and 0s of computers. As with molecules, these states can be manipulated; programmed, in effect.
One approach, pursued by members of a national consortium involving Berkeley, Harvard, IBM, MIT and others, involves flipping the direction of a spinning electron to turn switches on or off. By applying electromagnetic radiation in a process called nuclear magnetic resonance (NMR), like that used in medical imaging, researchers can control the spin of the carbon and hydrogen nuclei in chloroform. Alternatively, filters and mirrors show promise for controlling photons as a switching mechanism. Other researchers work with materials such as quantum “dots” (electrons in silicon crystal) and “ion traps” (ionized atoms suspended in an electrical field).
Quantum bits (qubits) have an unusual quality that makes them a double-edged sword for computing purposes, though. Due to the lack of determinism inherent in quantum mechanics, qubits can be on and off simultaneously, a phenomenon called superposition. This makes it harder to force qubits into digital lockstep, but it also multiplies exponentially the amount of information groups of qubits can store. It theoretically allows massively parallel computation to solve problems previously thought uncomputable, such as factoring very large numbers. One implication: today’s encryption techniques depend on the unfeasibility of computing the two multipliers (factors) of certain numbers, so quantum computers may one day be able to crack most encrypted files that exist today. This possibility has given the research a boost from government agencies, including the National Security Agency.
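To make the exponential claim concrete (a gloss, not part of the original article): the state of an $n$-qubit register is a superposition over all $2^n$ classical bit strings,

$$|\psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle, \qquad \sum_x |c_x|^2 = 1,$$

so describing it can require up to $2^n$ complex amplitudes $c_x$. Just 300 qubits imply more amplitudes than there are atoms in the observable universe.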
To be manufacturable, quantum computers will require billions of such subatomic switches working together and interacting with their environments without falling into a disorganized state called decoherence. A quantum state called entanglement, in which many atoms are made to behave exactly alike, provides one possible solution. Researchers also hope to fight decoherence by harnessing a phenomenon called interference, that is, the overlapping of quantum particles’ wavelike energy.
Getting Down to the Biology
In addition to molecular and quantum computing, a third approach, biological computing, relies on living mechanisms to perform logic operations.
Bioengineers have long understood how to manipulate genes to function as switches that activate other genes. Now they’re using the technique to build rudimentary computer “clocks” and logic gates inside bacteria such as E. coli. Other researchers use genes to prod microorganisms into states that represent information. A team headed by Thomas Knight at the MIT Artificial Intelligence Laboratory genetically manipulates luciferase, an enzyme in luminescent creatures such as fireflies, to generate light that serves as a medium of cell-to-cell communication.
One of biological computing’s biggest challenges is calculating with elements that are flawed, unreliable and decentralized. To that end, Knight’s amorphous computing group studies ways to encourage bacteria to organize themselves into parallel-processing computers. “I don’t think of it as likely to be the path to making conventional computers,” Knight says. “It will be the way in which we build the molecular-scale computers.”
Molecular computers face similar reliability challenges. At HP, researchers used fault-tolerant algorithms to construct a silicon-based computer called Teramac that worked despite having 220,000 defects. Kuekes, Teramac’s project manager, says the company is now exploring ways to translate what they’ve learned to molecular computing.
Farther out on the biological curve is DNA computing, which attempts to exploit the way DNA strands recognize each other and combine into structures that could perform large, compute-intensive calculations in parallel.
Few in the biological community expect biocomputers to replace the general-purpose silicon computer. They hope instead to manufacture molecular computers cheaply and efficiently with organisms that can orient themselves into logic circuits or transform vats of chemicals to manufacture other chemicals.
Still more exciting possibilities come from the potential of special-purpose biological computers to interact with other biological systems. Miniature computers could be injected into living tissue to reprogram cancer-causing genes, for example, or administer insulin shots.
For now, all these applications loom distant on the horizon. But researchers agree that silicon’s days are numbered, and that radical new approaches will be needed to keep computers zooming through the 21st century.
In the Arctic Ocean, sea-ice habitats are undergoing rapid environmental change. Polar cod (Boreogadus saida) is the most abundant fish known to reside under the pack-ice. The under-ice distribution, association with sea-ice habitat properties and origins of polar cod in the central Arctic Ocean, however, are largely unknown. During the RV Polarstern expedition ARK XXVII/3 in the Eurasian Basin in 2012, we used, for the first time in Arctic waters, a Surface and Under Ice Trawl with an integrated bio-environmental sensor array. Polar cod was ubiquitous throughout the Eurasian Basin with a median abundance of 5000 ind. km-2. The under-ice population consisted of young specimens with a total length between 52 and 140 mm, dominated by 1-year-old fish. Higher fish abundance was associated with thicker ice, higher ice coverage and lower surface salinity, or with higher densities of the ice-amphipod Apherusa glacialis. The fish were in good condition and well fed according to various indices. Back-tracking of the sea-ice indicated that sea-ice sampled in the Amundsen Basin originated from the Laptev Sea coast, while sea-ice sampled in the Nansen Basin originated from the Kara Sea. Assuming that fish were following the ice drift, this suggests that under-ice polar cod distribution in the Eurasian Basin depends on the coastal populations where the sea-ice originates. The omnipresence of polar cod in the Eurasian Basin, in good body condition, suggests that the central Arctic under-ice habitats may constitute a favourable environment for this species' survival, a potential vector of genetic exchange, and a recruitment source for coastal populations around the Arctic Ocean.
Algae Bio-Gas: The "Real" Natural Gas
Algae biogas is actually a mixture of gases, usually carbon dioxide and methane. It is produced by many kinds of microorganisms, usually when air or oxygen is absent. (The absence of oxygen is called “anaerobic conditions.”) Animals that eat a lot of plant material, particularly grazing animals such as cattle, produce large amounts of biogas. The biogas is produced not by the cow or elephant itself, but by billions of microorganisms living in its digestive system. Biogas also develops in bogs and at the bottom of lakes, where decaying organic matter builds up under wet, anaerobic conditions.
Biogas is a Form of Renewable Energy
Flammable biogas can be collected using a simple tank. Algae, along with animal manure, are stored in a closed tank where the gas accumulates. It makes an excellent fuel for cook stoves and furnaces, and can be used in place of regular natural gas, which is a fossil fuel.
Biogas is considered to be a source of renewable energy. This is because the production of biogas depends on the supply of feedstock, such as algae or grass, which usually grows back each year. By comparison, the natural gas used in most of our homes is not considered a form of renewable energy. Natural gas formed from the fossilized remains of plants and animals, a process that took millions of years. These resources do not “grow back” in a time scale that is meaningful for humans.
Biogas is not new. People have been using biogas for over 200 years. In the days before electricity, biogas was drawn from the underground sewer pipes in London and burned in street lamps, which were known as “gaslights.” In many parts of the world, biogas is used to heat and light homes, to cook, and even to fuel buses. It is collected from large-scale sources such as landfills and pig barns, and through small domestic or community systems in many villages.
Benefits of Using Algae Biogas
Using biogas brings many advantages. In North America, utilization of biogas could generate enough electricity to meet up to three percent of the continent's electricity consumption. In addition, biogas could potentially help reduce global climate change. Its key advantages include:
- Storable & Dispatchable: can be assigned to highest value use
- Maximizes Existing Infrastructure: pipelines, storage, generation
- Multiple Uses: electric generation, fuel transportation, end-use appliances/equipment
- Reduces Emissions/flaring, Air Permitting: captures CH4, 21x more potent GHG than CO2
Furthermore, by converting cow manure into methane biogas instead of letting it decompose, we would be able to reduce global warming gases by ninety-nine million metric tons, or four percent.
The 30 million rural households in China that have biogas digesters enjoy 12 benefits:
- Saving fossil fuels,
- Saving time collecting firewood,
- Protecting forests,
- Using crop residues for animal fodder instead of fuel,
- Saving money,
- Saving cooking time,
- Improving hygienic conditions,
- Producing high-quality fertilizer,
- Enabling local mechanization and electricity production,
- Improving the rural standard of living,
- Reducing air and water pollution.
Anaerobic Digestion of Whole Algae
The production of algae biogas from the anaerobic digestion of macroalgae, such as Laminaria hyperborea and Laminaria saccharina, is an interesting mode of gaseous biofuel production, and one that receives scant attention in the United States.
The use of this conversion technology eliminates several of the key obstacles that are responsible for the current high costs associated with algal biofuels, including drying, extraction, and fuel conversion, and as such may be a cost-effective methodology. Several studies have been carried out that demonstrate the potential of this approach.
A recent study indicated that biogas production levels of 180.4 ml/g-d can be realized using a two-stage anaerobic digestion process with different strains of algae, with a methane concentration of 65%. If this approach can be modified for the use of microalgae, it may be very effective for situations like integrated wastewater treatment, where algae are grown under uncontrolled conditions using strains not optimized for lipid production.
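Taking those reported figures at face value, the implied methane yield is a simple back-of-the-envelope product (our arithmetic, not a figure from the study):

$$180.4\ \mathrm{ml\,biogas/(g \cdot d)} \times 0.65 \approx 117\ \mathrm{ml\,CH_4/(g \cdot d)}.$$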
HEALTHY GROWTH & DEVELOPMENT
Adults must meet children’s basic needs because children do not typically have the resources or ability to provide for their own survival and development.
Guiding National Blueprint Principle: It is the responsibility of all members of society to work toward the shared goal of advancing the fundamental rights and needs of children. Children should have access to food, clean and safe water, shelter, and clothing required for survival and healthy development.
Guiding National Blueprint Principle: Families, individuals, communities, organizations, and systems protect children from abuse and neglect, and provide an array of supports and services that help children, youth, and their families to accomplish developmental tasks, develop protective factors, and strengthen coping strategies.
Current focus areas include:
Valves serve a variety of purposes in the industrial, engineering, manufacturing and scientific communities. Selecting the right valve can determine the success or failure of the system or process.
The main purpose of a valve is to control media flow through a system. The valve may be used to start, stop, or throttle the flow to ensure safe and efficient operation of the process. To learn more about the mechanisms that valves use to control flow, please read Valve Types.
Valves play a large role in most industries. They are used in many parts of daily mechanical devices, including in HVAC and water systems in an office and the gasoline mechanism for an automobile. Below are a few examples of the many industries in which valves play a major role in proper operation.
Pipelines
Pipeline transport is an essential aspect of many industries: there are hundreds of thousands of miles of crucial pipelines that transport media from its source to the place where it will be transformed into a final product. This media could include crude oil and gas, both onshore and offshore. Valves are used to optimize pipeline operating conditions, and can be found in the upstream, midstream and downstream sections of the piping. Upstream starts at the bottom of the hole in the ground and covers everything on the wellhead up to the choke; in this case, the choke is a specialized globe valve mounted on the wellhead to regulate the output of the well. Midstream starts at the output of the choke and ends at the fence of the final destination (usually a refinery). Downstream is everything inside the area of the destination. The most important factor to consider when selecting a valve for a pipeline application is whether the valve is piggable, that is, designed so that its inside can be cleaned or inspected.
Oil and Gas
The oil and gas industry is a subset of the pipelines category. Due to the high demand for oil and gas, deeper wells, longer pipelines, and lower production costs have become necessary. Along with being inexpensive, a valve must also be tougher, last longer, and perform better to meet the demands of the industry. Valve service environments and operating conditions are often extreme, with high temperatures (greater than 1,500°F) and high pressures (greater than 25,000 psig), or cryogenic and very low pressure applications. Another feature important to valves used in the oil and gas industry is the capacity for remote control.
Food and Beverage
The food and beverage industry is large and growing, with an increasing need for parts and products that keep plants running smoothly. The industry’s many challenges, including safety concerns, have prompted strict material requirements for the valves used in these plants. There are two classifications for valves in the food and beverage industry: those in direct contact with food materials and those handling utility services (i.e. steam, water). For valves which come into direct contact with food, regulations (issued by such organizations as the FDA) require that the inside of the valve be smooth enough to avoid trapping particles or accumulating bacteria. Valves made of a soft material must not absorb or hold any product going through the valve. These standards also specify that there should be no dead volume in the valve, and no crevices where material can be trapped to decay or stagnate. Valves in the food and beverage industry do not face the high pressures or highly corrosive materials present in other industries.
Chemical Processing

The biopharm industry is part of the larger chemical processing industry. The most important feature of valves used in this industry is their ability to be cleaned and sterilized. The chemical processing industry is responsible for processing raw materials into products. Since chemical processing often involves reactions under pressure and/or heat and can generate toxic by-products, the media in this industry tend to be highly corrosive and abrasive. The valves need to be able to tolerate the nature of the media, as well as offer precise flow control and high leakage protection to guard against spills and cross-contamination.
Marine

Valves play a critical role in the marine industry. As ships become larger and are used more frequently, they must generate power, treat and manage wastewater, and control HVAC, as well as perform their required tasks. The size and application of the ship determine the types and number of valves on board. Valves are used to regulate the loading and storage of a ship's power supply, provide water for fire-fighting capabilities, handle and process wastewater, and store any liquid cargo, among many other applications. Any valve that processes sea water must be durable, and all marine valves must be reliable, given the lack of repair resources once out at sea.
Firefighting system on a ship. Image Credit: Fire Fight Systems
These are just a few examples of the many industries and applications of valves.
The system media is a critical consideration when selecting a valve. Media is the term for the material that will be passing through the system. Media plays an important role in selecting the valve body and disc composition, as well as the type and speed of the actuator. A wide variety of media may pass through a valve system, including:
- Gas: Valves for gas systems must seal tightly, keeping leakage below a specified maximum rate at rated operating temperatures and pressures.
- Air: used to describe all non-pressurized air.
- Compressed air: used to describe pressurized and potentially explosive air.
- High purity, natural gas, sour, specialty or corrosive
- Liquefied petroleum
- Liquid: Valves for liquid systems require tight seals to prevent leakage.
- Water (hot or cold, clean or dirty, fresh or salt)
- Gasoline (diesel fuel)
- Hydraulic fluid
- Highly viscous or gummy fluids
- Solid: Valves for solid materials must be durable and have few parts to prevent clogging.
- Slurry: A slurry is a solution with suspended particles. For this media type, the valve must be able to operate effectively in aggressive conditions.
- Powder
The chart below describes several common media as well as the appropriate valve to use for the application.
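As a rough illustration of how these media categories might drive a first-pass valve screen, the short Python sketch below encodes the headline requirement for each media type listed above; the mapping simply paraphrases the bullets and is not taken from any standard or manufacturer's catalog.

```python
# Illustrative first-pass screen only -- the requirement notes paraphrase
# the bullet list above and are not drawn from any valve standard.
MEDIA_REQUIREMENTS = {
    "gas":    "tight shutoff; leakage below the specified rate at rated T/P",
    "liquid": "tight seal to prevent leakage",
    "solid":  "durable body with few internal parts, to resist clogging",
    "slurry": "robust operation in aggressive, particle-laden flow",
}

def selection_note(media: str) -> str:
    """Return the headline valve requirement for a media category."""
    return MEDIA_REQUIREMENTS.get(
        media.lower(),
        "not covered by this sketch -- consult the manufacturer's data",
    )

print(selection_note("Slurry"))  # robust operation in aggressive ...
```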
Valves can be found in non-industrial applications. These may include valves used for residential applications such as a faucet or outdoor hose, or in medicine such as a heart valve.
The Mourning Warbler is a small songbird. These 13 cm long birds have yellow underparts, olive-green upperparts and pink legs. Adult males have a grey hood and a black patch on the throat and breast. Females and immatures are grey-brown on the head with an incomplete eye-ring. The mourning in this bird's name refers to the male's hood, thought to resemble a mourning veil.
Habitat and Distribution
Their breeding habitat is thickets and semi-open areas with dense shrubs across Canada east of the Rockies and the northeastern United States. These birds migrate to Central America and northern South America.
Feeding

They forage low in vegetation, sometimes catching insects in flight. These birds mainly eat insects, plus some plant material in winter.
Nesting

Mourning Warblers build an open cup-shaped nest, which is placed on the ground in a well-concealed location under thick shrubs or vegetation.
Calls and Songs
The song of this bird is a bright repetitive warble. The call is a sharp chip. |
Elephants come in two species: Asian elephants and African elephants. Scientists further divide African elephants into forest elephants (Loxodonta cyclotis) and savanna elephants (Loxodonta africana). According to the Wildlife Conservation Society, "over 60% of forest elephants have been slaughtered in the past decade". Elephants are also known as pachyderms. They are the largest living land mammals today. Elephants are herbivorous - that means they eat plants, and a lot of them!
There are two reasons elephants are headed toward extinction. First, their habitats are disappearing as people build cities and plant farms, so the trees are vanishing (the elephants also de-nude a forest pretty quickly, because each animal needs to eat a LOT - about 400 lbs a day - for energy). Second, people hunt elephants both for meat and, primarily, to sell the tusks for ivory.
Conservationists estimate that there are only approximately 30,000 to 35,000 Asian elephants in the wild (making them highly endangered) and fewer than 500,000 African elephants living in the wild today. That's why zoo conservation is critical to saving these endangered animals. See more about what the Denver Zoo is doing to help.
It has long been hard for people to accept that they can be hurt, or even killed, by something they cannot see. But this is exactly what viruses do and have always done: they evade our body's immune system, hijack our cells to create more viruses, and have consequently caused epidemics throughout human history. More recently, we've recognized that they can also cause cancers, as well as other changes to our very genomes whose implications we have yet to fully understand.
Smallpox: Possibly the oldest viral disease on record is smallpox, with accounts dating to 1100 B.C. in China and India. It made its way to Europe around 500 A.D., and then infamously followed the Europeans to the Americas in the early 1500s, devastating Native American populations. Because it has been eradicated, it can be hard to imagine today just how difficult life with this disease was; of the people who caught it, up to forty percent died, mainly through damage to their heart, kidneys, or brain. Survivors could be left with injured kidneys, disfiguring scars, or blindness (of which it was the leading cause).
Luckily, in the late 1700s, an observant British scientist named Edward Jenner noticed that dairymaids previously infected with "cowpox" (which caused red blisters on cow udders and resembles a mild case of smallpox in people) had immunity against smallpox. Using material from cowpox lesions, Jenner and the practitioners who adopted his method went on to inoculate several hundred thousand people with cowpox, effectively vaccinating them against smallpox. But even with a vaccine, like many viruses smallpox remained a significant health problem for years; in 1967, ten to fifteen million people were infected, and some two million died. The World Health Organization took action and created an intensive, $300 million program to finally put a stop to the disease, and it worked; smallpox was officially declared eradicated in 1979.
Measles: Measles has caused epidemics in Europe and Asia for about two thousand years. Like smallpox, measles made its way to the Americas with early Europeans and wreaked havoc on the previously unexposed Native Americans. Given how contagious it is, its destructive ability is not so surprising; measles is possibly the most contagious virus in the world, even more so than the common cold, as a person can catch it just by breathing the air in a room where an infected person was two hours earlier. And in populations facing malnutrition and inadequate health care, it can kill about ten percent of the people it infects, after weeks of high fevers, rashes, severe diarrhea, dehydration, and secondary infections. Trying to survive measles was simply a part of life for much of human history. In 1963, a vaccine was developed, and the disease mostly vanished in developed countries. Like smallpox, measles could be completely eradicated from the human population if a global, stringent vaccination policy were carried out for several years because, unlike many diseases, measles doesn't live in any other animals we know of; we're its only home. The number of lives measles claims each year is declining, from about 873,000 people in 1999 down to 164,000 in 2008, but it remains a significant cause of mortality for children in many developing countries.
Herpes: Like measles, herpes (caused by herpes simplex virus 1 and 2) has also caused known epidemics for about two thousand years. When the virus multiplies, it creates blistering lesions, which spread across the infected area and gave the virus its name; herpes is Greek for "to creep." While simplex 1 targets the skin and mucous membranes of the lips and mouth, creating "cold sores," simplex 2 attacks the same kinds of tissues in the genitals. It has long been known that the virus spreads through contact with these sores; the Roman emperor Tiberius Julius Caesar Augustus banned kissing, possibly in an effort to curb the rampant spread of herpes throughout Rome.
While the body’s immune system eventually fights off herpes, it isn’t gone for good. A persistent viral infection, herpes simply retreats when “beaten,” waiting to return and fight another day. Where does herpes retreat to? It actually hides in the body’s nerve cells, sometimes for years before returning to cause lesions. While there’s no cure yet, there are treatments to control its severity and several vaccines in trials.
The virus that causes chickenpox (the varicella-zoster virus) belongs to the same family as herpes simplex, and retreats in a similar manner. After causing chickenpox in children, the virus retreats to nerve cells near the spinal cord (the dorsal root ganglia), but can reemerge years or decades later to cause the painful rash condition of shingles.
The Epstein-Barr virus, which causes mononucleosis ("mono") and may infect over ninety-five percent of all people at some point, also belongs to the herpes family. Like chickenpox, it can make a repeat performance later in life, particularly in the form of cancer; it has been associated with Burkitt's lymphoma, nasopharyngeal carcinoma, and several other cancers.
Influenza: While it is hard to track the historic spread of influenza (also known as the flu), as its symptoms resemble those of other respiratory diseases, it's thought that, along with smallpox and measles, it too made its way from Europe to the Americas in the early 1500s. Altogether, the three viruses killed as much as ninety-five percent of some Native American tribes.
But the worst influenza epidemic was yet to come; in 1918, roughly a third of the world's population came down with influenza. Within months, between forty and one hundred million people (up to about five percent of the world's population), mostly healthy young adults, died from the "Spanish flu." It was the worst pandemic in modern history. It is also why scientists and physicians were terrified in 2009; the influenza strain of 1918 was also H1N1. Luckily, the 2009 H1N1 strain, the "swine flu," did not prove to be as virulent as the 1918 strain.
So why is the flu so effective, and why is there a different flu vaccine every year? The protein "coat" that surrounds the influenza virus is what our immune system uses to recognize and attack it, but the virus changes this coat all the time, making it hard for our immune system to identify. Part of the reason it changes so often is that it doesn't live only in humans; influenza often resides benignly in the digestive tracts of pigs, ducks, and other waterfowl. There it swaps genes with other influenza strains, and occasionally a very potent virus is created this way. Consequently, new influenza strains often find their way into the human population in places where people are in frequent contact with these animals, as in many Asian countries.
Rabies: While people in the U.S. today may be only remotely familiar with rabies from the 1957 Walt Disney classic "Old Yeller," the disease was a serious threat to everyday life until the early 1900s. After being bitten by an infected animal, a person often developed rabies: a fever followed by depression, restlessness, frothing at the mouth, an inability to swallow water, and, if untreated, certain death. Luckily, the brilliant French chemist Louis Pasteur developed a vaccine (using dried fluids from infected animals' spinal cords, where the virus resides). Unlike most vaccines, the rabies vaccine can be used after a person has been exposed to an infectious source, because the virus can take from a week to over a year to cause disease after exposure. In 1885, Pasteur first successfully used his rabies vaccine on a 9-year-old boy who had been attacked by a rabid dog. Within the next couple of years, Pasteur's vaccine saved about 2,500 lives, and it has saved many more since. While rabies is rare in the U.S., it is still common in developing countries, killing around forty to seventy thousand people annually.
Hepatitis: Around the same time that Pasteur was curing rabies, people were inadvertently spreading hepatitis by reusing immunization needles while vaccinating against other viruses. Hepatitis had yet to be identified, and people did not know that they were developing yellowish skin (jaundice) due to viral liver damage. This symptom is actually caused by three different viruses: hepatitis A, B, and C. Hepatitis A (HAV) is the mildest, spreading orally from bad food or water in unclean living environments and causing nausea, vomiting, and jaundice for a month or two, but then allowing for complete recovery.
Hepatitis B (HBV) is much more serious. HBV is very infectious, spreads through sexual contact or body fluids, and is often deadly. Worldwide, about three hundred million people (five percent of the population) have a chronic infection, mostly in Southeast Asia and tropical Africa, but this also includes 1.25 million Americans. Around one million people die from HBV annually, due to liver disease (cirrhosis) and the liver cancer it causes (some five million cases). Luckily, a vaccine was developed in the 1970s, making it effectively the first cancer vaccine.
It took researchers a bit longer to identify hepatitis C (HCV), the most deadly of the hepatitis viruses in the U.S. Isolated in 1989, HCV was found to also cause liver disease and cancer. In the U.S. it infects nearly four million people, occurring more frequently than any other blood-borne infection here, and it killed ten thousand people in 2001. Many drugs can be used to treat it, and it has become the leading cause of liver transplants in the U.S.; unfortunately, because HCV mutates so quickly, a successful vaccine has yet to be developed.
Poliovirus: While HCV remains a significant problem for many in the U.S., polio (caused by the poliovirus) has mostly been eliminated from this country. 1916 witnessed the first major U.S. polio epidemic, and the disease remained a significant fear for four decades. Like hepatitis A, polio spreads through contaminated, swallowed substances, such as water swallowed while swimming in a public swimming pool. While most infections go unnoticed, as the virus stays in the intestines, sometimes it enters the nervous system and proceeds to kill neurons, often resulting in paralysis. After this was understood in the late 1940s, American medical researchers Dr. Jonas Salk and Dr. Albert Sabin developed different vaccines. While polio is now rare in the U.S., it is still a major concern in South Asia and sub-Saharan Africa, where, in 2000, nearly four thousand people were afflicted with pain and paralysis, and hundreds of thousands more were infected. Efforts at global eradication are underway, but it has been difficult to implement vaccination programs in countries experiencing social and political upheaval.
Ebola: Ebola has caused many of the most recent viral epidemics. Discovered near the Ebola river valley in Zaire in 1976, Ebola killed over ninety percent of the people it infected across more than fifty villages, in a fast, gruesome manner, with hemorrhagic fevers. The virus is transmitted to people from animals (most likely infected carcasses), and then spreads between people through contact with contaminated bodily fluids. Although there have been multiple epidemics since 1976, because the virus is actually fragile without a host, and kills a person within about a week, the epidemics have remained fairly localized. While there is no vaccine available or standard treatment, several vaccines are in trials.
HIV: Human immunodeficiency virus (HIV-1), the virus that causes acquired immunodeficiency syndrome (AIDS), is currently a pandemic; it has infected nearly forty million people and killed over twenty-five million since it was recognized in 1981. HIV-1 attacks the host's immune system, causing a person to die from opportunistic infections. Although HIV-1 derives from a chimpanzee virus (the simian immunodeficiency virus [SIV]), SIV does not cause disease in the chimpanzee, though it does in other non-human primates. While anti-viral treatments are available, vaccines are not, although there have been promising stem cell findings based on individuals who are somewhat resistant to HIV-1 infection. HIV-1 remains an ongoing, significant problem, with nearly three million new cases reported in 2007 and two million deaths. For now, perhaps the best weapon against HIV-1 infection is awareness.
Cancer: Although we have known that viruses can cause cancer since 1911 (with the discovery of the Rous sarcoma virus, a chicken virus), the idea has only recently been widely accepted by the scientific community. Because many viruses reproduce by inserting their genes into our genomes, they can cause cancer: these insertions can (usually randomly) disrupt important genes we have for preventing cancer, or introduce the virus's own cancer-promoting genes. In addition to the Epstein-Barr virus and hepatitis mentioned above, another prominent example is the human papillomavirus (HPV), which causes cervical cancer and kills a quarter million people globally each year (four thousand in the U.S.). Luckily, a vaccine against HPV, and consequently against cervical cancer, has been developed.
Because viruses have this habit of inserting their genes into our genomes, it may not be surprising to learn that about eight percent of our genome is actually viral DNA, most of it left behind by ancient retroviruses; our genome also carries sequences derived from bornaviruses, RNA viruses that first started their invasion over forty million years ago. Today, it's unclear what the implications are for our genome; some of the viral genes surprisingly appear to be beneficial, while others may not be (and have even been tentatively linked to schizophrenia).
Either way, scientists have been using viruses in research for decades, such as in studying how different, introduced genes behave in other organisms, or, more recently, in promising new gene therapies to treat genetic diseases. It appears as though the tables may be starting to turn; instead of viruses using us to their advantage, as they’ve done for millions of years, we’re learning how to apply their unique abilities to further our knowledge and ourselves.
For more on viruses and their origins, see Luis P. Villarreal’s book “Viruses and the Evolution of Life,” the book “The Biology of Viruses” by Bruce A. Voyles, Barry E. Zimmerman and David J. Zimmerman’s book “Microbes and Diseases That Threaten Humanity,” and Wikipedia’s article on “Virus.”
Biology Bytes author Teisha Rowland is a science writer, blogger at All Things Stem Cell, and graduate student in molecular, cellular, and developmental biology at UCSB, where she studies stem cells. Send any ideas for future columns to her at [email protected]. |
Robots in a Swiss laboratory have evolved to help each other, just as predicted by a classic analysis of how self-sacrifice might emerge in the biological world.
“Over hundreds of generations … we show that Hamilton’s rule always accurately predicts the minimum relatedness necessary for altruism to evolve,” wrote researchers led by evolutionary biologist Laurent Keller of Switzerland’s University of Lausanne in Public Library of Science Biology. The findings were published May 3.
Hamilton’s rule is named after biologist W.D. Hamilton, who in 1964 attempted to explain how ostensibly selfish organisms could evolve to share their time and resources, even sacrificing themselves for the good of others. His rule codified the dynamics — degrees of genetic relatedness between organisms, costs and benefits of sharing — by which altruism made evolutionary sense. According to Hamilton, relatedness was key: Altruism’s cost to an individual would be outweighed by its benefit to a shared set of genes.
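Though the article never writes it out, the rule itself is a compact inequality; in its standard form it says altruism can evolve when:

```latex
% Hamilton's rule: altruism is favored by natural selection when
\[
  r\,B \;>\; C
\]
% r = genetic relatedness between actor and recipient,
% B = reproductive benefit to the recipient,
% C = reproductive cost to the actor.
% E.g., toward a full sibling (r = 0.5), a costly act pays off
% only if it delivers more than twice its cost in benefit.
```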
In some ways, the rule and its accompanying theory of kin selection are contested. Some scientists have used it to extrapolate too easily from insects to people, and some researchers think it overstates the importance of relatedness. But a more fundamental issue with Hamilton's rule is the difficulty of testing it in natural systems, where animals evolve at a far slower pace than any research grant cycle.
Simulations of evolution in robots, which can “reproduce” in mere minutes or hours, have thus become a potentially useful system for studying evolutionary dynamics. And though simple in comparison to animals, Keller’s group says robot models are not too different from the insects that originally inspired Hamilton.
In the new study, inch-long wheeled robots equipped with infrared sensors were programmed to search for discs representing food, then push those discs into a designated area. At the end of each foraging round, the computerized “genes” of successful individuals were mixed up and copied into a fresh generation of robots, while less-successful robots disappeared from the gene pool.
Each robot was also given a choice between sharing points awarded for finding food, thus giving other robots’ genes a chance of surviving, or hoarding. In different iterations of the experiment, the researchers altered the costs and benefits of sharing; they found that, again and again, the robots evolved to share at the levels predicted by Hamilton’s equations.
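The study's evolving neural controllers and foraging arena are far richer than anything that fits here, but a toy selection loop in Python, with invented parameters, captures the qualitative result: a sharing gene spreads only when relatedness times benefit exceeds cost, the threshold Hamilton's rule sets.

```python
import random

# Toy model only -- not the study's controllers or foraging physics.
# Each robot carries one "gene": its probability of sharing food points.
R = 0.5        # assumed genetic relatedness within a group
COST = 1.0     # fitness cost paid by a sharer
BENEFIT = 4.0  # fitness benefit delivered to the recipient
POP, GENS = 200, 300

pop = [random.random() for _ in range(POP)]  # initial sharing tendencies

for _ in range(GENS):
    # Inclusive-fitness payoff: sharing costs the actor COST, but with
    # probability R the benefit lands on a relative carrying the same gene.
    fitness = [1.0 + p * (R * BENEFIT - COST) for p in pop]
    # Fitness-proportional reproduction with a little mutation.
    pop = [min(1.0, max(0.0,
                        random.choices(pop, weights=fitness)[0]
                        + random.gauss(0.0, 0.02)))
           for _ in range(POP)]

print(f"mean sharing tendency after {GENS} generations: {sum(pop) / POP:.2f}")
# With R*BENEFIT > COST (0.5 * 4 > 1), sharing spreads toward fixation;
# set R = 0.2 instead (so R*BENEFIT < COST) and it collapses -- exactly
# the r > C/B threshold that Hamilton's rule predicts.
```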
“A fundamental principle of natural selection also applies to synthetic organisms,” wrote the researchers. “These experiments demonstrate the wide applicability of kin selection theory.”
Video: Evolution of cooperative and altruistic behavior in earlier research by Keller's group. That earlier study established a basis for the robots' altruism; the new study explores its relationship to biological theory.
- Robot Swarms Could Help Search for Life in Martian Caves
- Robots Taught How to Deceive
- How to Train Your Rat Neuron-Controlled Robot
- Altruism’s Bloody Roots
- Termite Altruism Might Have Roots in War
- E.O. Wilson Proposes New Theory of Social Evolution
Citation: “A Quantitative Test of Hamilton’s Rule for the Evolution of Altruism.” By Waibel M, Floreano D, Keller L. PLoS Biology, Vol. 9 No. 5, May 3, 2011.
The structure of the electric power system
A power system consists of generation sources that transmit electric power to the end consumers via power lines and transformers.
The power system between the generation sources and the end consumers is divided into different parts, as shown in Figure 2.3 above. The transmission network connects the main power sources and transmits large amounts of electric energy.
In Figure 2.4, a general map of the transmission system in Sweden and neighboring countries is given.
The primary task of the transmission system is to transmit energy from generation areas to load areas. To achieve a high degree of efficiency and reliability, different aspects must be taken into account. The transmission system should, for instance, make it possible to optimize generation within the country and also support trade in electricity with neighboring countries.
It must also withstand disturbances such as the disconnection of transmission lines, lightning storms, outages of power plants, and unexpected growth in power demand without reducing the quality of the electricity services.
As shown in Figure 2.4, the transmission system is meshed, i.e. there are a number of closed loops in the transmission system.
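One practical payoff of a meshed layout is redundancy: the network can often survive the loss of any single line, a design test commonly known as the N-1 criterion. The Python sketch below checks this on a small invented five-bus grid (not the actual Swedish network of Figure 2.4) by removing each line in turn and testing whether all buses remain connected.

```python
# Hypothetical 5-bus meshed grid (invented topology, not Figure 2.4).
buses = {1, 2, 3, 4, 5}
lines = {(1, 2), (2, 3), (3, 4), (4, 5), (5, 1), (2, 5)}  # closed loops

def connected(nodes, edges):
    """Depth-first search: True if every node is reachable from one node."""
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack += [b for a, b in edges if a == n]
        stack += [a for a, b in edges if b == n]
    return seen == nodes

# N-1 check: take each line out of service in turn and test connectivity.
for outage in sorted(lines):
    ok = connected(buses, lines - {outage})
    print(f"loss of line {outage}: {'still connected' if ok else 'islanded!'}")
```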
A new state utility, Svenska Kraftnat, was launched on January 1, 1992, to manage the national transmission system and the foreign links in operation at that date.
Note that the Baltic Cable between Sweden and Germany was taken into operation after Svenska Kraftnat was launched and is therefore not owned by them.
The sub-transmission network, in Sweden also called the regional network, has in each load region the same or partly the same purpose as the transmission network. The amount of energy transmitted and the transmission distances are smaller than in the transmission network, which means that techno-economic constraints lead to lower system voltages.
Regional networks are usually connected to the transmission network at two locations.
Title: Static Analysis of Power Systems – Lennart Soder and Mehrdad Ghandhari (Electric Power Systems, Royal Institute of Technology)
Helmets for Motorcyclists
Motorcycles have high performance capabilities and are less stable and less visible than cars and trucks. And when motorcycles crash, their riders lack the protection of an enclosed vehicle, so they're more likely to be injured or killed.
Helmets decrease the severity of injury, the likelihood of death, and the overall cost of medical care. They're designed to cushion and protect riders' heads from the impact of a crash. Like seat belts, helmets can't provide total protection against head injury or death, but they do reduce the incidence of both.
Motorcycle crash statistics show that helmets are about 37 percent effective in preventing crash fatalities. The National Highway Traffic Safety Administration (NHTSA) estimates an unhelmeted rider is 40 percent more likely to suffer a fatal head injury and 15 percent more likely to incur a nonfatal head injury than a helmeted motorcyclist.
- Head injury is a leading cause of death in motorcycle crashes. (NHTSA Traffic Safety Facts, Motorcycle Helmet Use Laws, 2004)
- Helmets are estimated to be 37 percent effective in preventing fatal injuries to motorcyclists. (NHTSA Traffic Safety Facts, Motorcycles, July 2012)
- Wearing a properly fitted helmet can actually improve a rider's ability to hear by reducing wind noise and allowing the rider to hear other sounds. (Motorcycle Safety Foundation)
- Helmets prevent eye injuries from dust, dirt, and debris thrown up by other vehicles on the road. (Motorcycle Safety Foundation)
- Per vehicle mile, motorcyclists are about 30 times as likely as passenger car occupants to die in a traffic crash and about five times as likely to be injured. (NHTSA Traffic Safety Facts, Motorcycles, July 2012)
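To make those percentages concrete, here is a small illustrative calculation; the crash counts and the baseline risk are invented, while the 37 percent effectiveness and 40 percent relative-risk figures are the ones quoted above.

```python
# Illustrative arithmetic only: the crash counts and baseline risk are
# invented; the 37% and 40% figures are the ones quoted in the article.

fatalities_unhelmeted = 1000          # hypothetical yearly fatalities
helmet_effectiveness = 0.37           # helmets prevent ~37% of fatalities
expected_with_helmets = fatalities_unhelmeted * (1 - helmet_effectiveness)
print(f"expected fatalities if all rode helmeted: {expected_with_helmets:.0f}")

# "40 percent more likely" is a relative risk of fatal head injury.
helmeted_risk = 0.05                  # hypothetical per-crash baseline risk
unhelmeted_risk = helmeted_risk * 1.40
print(f"unhelmeted fatal head-injury risk per crash: {unhelmeted_risk:.3f}")
```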
For more information on motorcycle safety, visit the following Web sites: |
The Mahdist Revolution was an Islamic revolt against the Egyptian government in the Sudan. An apocalyptic branch of Islam, Mahdism incorporated the idea of a golden age in which the Mahdi, translated as “the guided one,” would restore the glory of Islam to the earth.
Attempting to overhaul Egypt through an aggressive westernization campaign, Egyptian ruler Muhammad Ali, who was himself a provincial governor of the Ottoman Empire, invaded the Sudan in 1820. Within a year his armies had subdued the Sudan and he began conscripting local Sudanese men into the Egyptian military. In 1822 Khartoum became the capital of Egyptian-occupied Sudan and a distant outpost in the Ottoman Empire.
Egyptian rule over the Sudan involved the imposition of high rates of taxation, the taking of slaves from the local population at will, and absolute control over all Sudanese trade, which destroyed livelihoods and indigenous practices. During the process of military conscription, tens of thousands of Sudanese men and boys died on the long march from the Sudanese hinterlands to Aswan, Egypt.
Ali's tenure as Egyptian governor ended in 1848, but the suffering of the Sudanese people under Ottoman rule did not. When the anti-slavery campaign of the new Egyptian governor, Ismail, began in 1863, Sudanese unrest intensified, since human bondage was by then an integral part of the local economy. Matters were complicated by the arrival of the British in 1873, who assumed responsibility over Egypt in order to protect their interests in the Suez Canal and to ensure repayment of loans to that government. General Charles Gordon was appointed governor of the Sudan, and he immediately intensified the anti-slavery campaign initiated a decade earlier. Sudanese Arab leaders, however, saw British efforts as a European Christian attempt to undermine Muslim Arab dominance in the region.
On June 29, 1881, a Sudanese Islamic cleric, Muhammad Ahmad, proclaimed himself the Mahdi. Playing into decades of disenchantment over Egyptian rule and new resentment against the British, Ahmad immediately transformed an incipient political movement into a fundamentally religious one. Urging jihad or “holy war” against imperial Egypt, Ahmad formed an army.
By 1882 the Mahdist Army had taken complete control over the area surrounding Khartoum. Then, in 1883, a joint British-Egyptian military expedition under the command of British Colonel William Hicks launched a counterattack against the Mahdists. Hicks was soon killed, and the British decided to evacuate the Sudan. Fighting continued, however, and the British-Egyptian forces defending Khartoum through a long siege were finally overrun on January 28, 1885. Virtually the entire garrison was killed, and General Charles Gordon, the commander of the British-Egyptian forces, was beheaded during the attack.
In June 1885 Ahmad, the self-proclaimed Mahdi, died. As a result the Mahdist movement quickly dissolved as infighting broke out among rival claimants to leadership. Hoping to capitalize on internal strife, the British returned to the Sudan in 1896 with Horatio Kitchener as commander of another Anglo-Egyptian army. In the final battle of the war on September 2, 1898 at Karari, 11,000 Mahdists were killed and 16,000 were wounded.
Ahmad's successor, known as the Khalifa, fled after his forces were overrun. In November 1899 he was found and killed, officially ending the Mahdist state. Exacting vengeance for the death of Charles Gordon more than a decade earlier, Kitchener exhumed Ahmad's body and pulled out his fingernails.
Natural Pain Relief
Over 5.5 million women in North America are affected by endometriosis, most of them women in their 20s and 30s.
What is Endometriosis?
The inside of the uterus has a tissue lining called the "endometrium". It makes a nice bed for a fertilized egg: once an egg implants itself into the endometrium, pregnancy occurs. When pregnancy does not occur, the endometrium breaks down and sheds (this is menstruation), and the body then builds a new lining for the next chance of pregnancy. Women cycle through this growing and shedding of the endometrial lining every month. The endometrium is supposed to exist only inside the uterus; however, endometrial tissue can sometimes migrate to other places in the pelvic cavity, such as the ovaries, the fallopian tubes, the ligaments supporting the uterus, the area between the vagina and the rectum (known as the Pouch of Douglas), the outer surface of the uterus, and elsewhere on the lining of the pelvic cavity (the "peritoneal membrane"). When this happens, it is called endometriosis.
The misplaced endometrial tissue acts as it would inside the uterus: it grows, breaks down, and sheds with the menstrual cycle. This means bleeding occurs in the pelvis, and the blood irritates the peritoneal membrane, causing severe pain as well as scarring.
The common signs and symptoms of endometriosis include:
- Painful periods (dysmenorrhea): this usually manifests as pelvic pain, but it also includes abdominal and lower back pain before and during periods and at ovulation. Some women have such severe pain that they are unable to perform normal activities.
- Heavy periods (menorrhagia/hypermenorrhea) or bleeding between periods (metrorrhagia)
- Infertility: not all women who have endometriosis have infertility, but 30-40% of women with infertility have endometriosis
- Pain during and after sexual intercourse
- Pain with urination or bowel movement during the week of menstruation
- Other symptoms: diarrhea or constipation, urinary frequency, abdominal swelling, bloating, nausea/vomiting, etc., especially during menses.
What causes endometriosis?
The cause is not fully known. There are several theories about how endometrial tissue comes to be found outside the uterus. The most widely supported theory is that endometrial tissue migrates through the backing up of menstrual blood into the fallopian tubes and the pelvic and abdominal cavity during periods. This is called "retrograde menstruation". When ectopic endometrial tissue is observed by laparoscopy, its purple coloration is clearly visible, which signifies blood stasis. In Chinese Medicine there is a famous dictum: "if there is free flow, there is no pain; if there is pain, there is no free flow".
According to Chinese medicine, there are several explanations for how women can develop the blood stasis that leads to endometriosis:
- Early sexual activity before, during, or within 2 years after puberty: Chinese medicine holds that the uterus of a young girl is vulnerable and easily affected by pathogenic factors such as the cold pathogen. This cold pathogen congeals the blood, causing blood stasis.
- Intercourse during menstruation: menstrual blood is supposed to flow downward; however, when a woman becomes sexually aroused, the Qi and fire in the body move upward. This upward movement of Qi blocks the downward outflow of menstrual blood, causing blood stasis.
- Excessive physical work, exercise, or sports during periods and pregnancy: Chinese Medicine advises women to avoid excessive physical activity during their periods. Excessive physical activity has a particularly negative impact on young girls at puberty, because their bodies are not completely developed and are therefore vulnerable.
- External cold: women are vulnerable to external pathogenic factors such as cold weather, especially before, during, and just after their periods, as well as just after childbirth. Sitting on a cold and/or wet surface, not wearing enough clothing, and even wearing wet clothing can all lead to exposure to external cold.
- Tampons and Intra-uterine device (IUD): a tampon and an IUD physically block the discharge of menstrual blood, which can cause blood stasis
- Emotional strain, including anger, resentment, worry, and stress, affects the liver's function of promoting the free flow of qi, causing qi stagnation. Long-term qi stagnation leads to blood stasis.
How is endometriosis diagnosed?
Endometriosis is suspected based on symptoms (especially severe menstrual pain), physical examination including a recto-vaginal exam, and imaging tests such as ultrasound and MRI; however, direct inspection of the inside of the abdomen and pelvis, with biopsy, by laparoscopy (or surgery) is the only method to confirm the diagnosis of endometriosis.
How is endometriosis treated by Western Medicine?
Endometriosis is treated with medication and/or surgery; however, no medication treats it completely. What Western drugs do is alleviate pain (painkillers such as non-steroidal anti-inflammatory drugs, NSAIDs) or inhibit ovulation so that there is no menstruation and no bleeding from the migrated endometrial tissue (gonadotropin-releasing hormone analogs, oral contraceptive pills, progestins, etc.). They tend to mask symptoms rather than treat the disease itself. On the other hand, surgery can remove implanted endometrial tissue for long-term pain relief. Surgical options include laparoscopic surgery and, for women who do not plan on childbearing, radical surgery such as hysterectomy with oophorectomy (removal of the uterus and both ovaries) along with removal of the endometrial tissue. However, the former does not guarantee alleviation of pain, while the latter effectively puts the woman into immediate menopause.
How is endometriosis treated by Chinese Medicine?
Since Chinese Medicine views endometriosis as blood stasis, Chinese Medicine practitioners treat the root causes of blood stasis. The main treatment plan is to invigorate the blood, break up the blood stasis, regulate menstruation, and stop pain. This treatment is done mainly with Chinese herbs, according to the phases of the menstrual cycle: before and during the period the focus is on removing the blood stasis; during the rest of the cycle the accompanying conditions are treated. Acupuncture is also useful, especially at the beginning of treatment, when the patient suffers severe pain during periods. It would be impractical to take Western drugs and Chinese herbs for endometriosis simultaneously: Western hormonal drugs are prescribed to inhibit ovulation so that the patient won't menstruate, while Chinese herbs are used to regulate menstruation without interfering with the natural menstrual cycle.
Teaching consent can feel challenging and triggering. I teach workshops on consent and have developed exercises that can be practiced in the low-stakes environment of the classroom so that they become second nature before students may need them in higher-stakes real-life situations.
Something I have learned while teaching consent is that these skills need to be incorporated and felt in an embodied way in order to be effective, and that’s why I encourage educators to go beyond short films and discussions and lead their students through these interactive exercises.
Some of the consent skills I teach are:
- saying no confidently,
- hearing no graciously,
- knowing what to do when you’re a maybe,
- changing your mind,
- asking for what you want clearly,
- noticing body language, and
- apologizing authentically.
Classroom Consent Exercises
Here are two exercises that focus on the first two skills: saying no confidently and hearing no graciously.
It’s important to always give students the option to sit out any exercise and just watch if they are uncomfortable. At the same time, encourage them to stretch their comfort zone and participate if they can, because they will get more out of it. First, make a group agreement together, and if they don’t come up with this rule, add a rule that says “no comments from people sitting out of an exercise.” My experience is that most students want to do the exercises and have fun doing them, but it’s essential to be understanding of kids who want to sit one out.
Let the students know that these exercises involve talking about touch, but that there will be no actual touching.
Saying no confidently: For the first exercise, put the kids into pairs—one student will be A and the other B—and ask them to face each other. Tell them that A's job is to ask for a hug and B's job is to just say the word "no" and nothing else. Then have them trade places and do it again. A lot of kids will laugh, some will quietly say their lines, and some will start saying things like, "No. But I would say yes if I didn't have to say no."
To unpack the exercise, ask students how it felt and whether anyone felt uncomfortable saying no. Mention that you heard people explaining their no even though the instruction was to say only the word "no." Ask if anyone noticed that they made their no into a joke. Then talk about how most people have a hard time saying no and the reasons why, such as not wanting to disappoint people, not wanting to hurt people's feelings, and wanting to avoid conflict. Ask if anyone can think of other reasons why it can be uncomfortable to say no.
Hearing no graciously: For the second exercise, ask students to get into their pairs again and do the same exercise, except that this time after B says no, A needs to respond with “Thank you.” You can talk about how they can find a way to say thank you that feels natural for them, such as, “Thank you for letting me know how you feel,” or “Thanks for having clear boundaries,” or “Thanks for taking care of yourself.” Whatever works for them is OK, as long as it’s a sincere thank you.
After the pairs switch roles and take turns doing this, unpack the exercise again. Discuss how it feels different with a “thank you” added. Discuss how it feels to say thank you when someone says no, and how it feels to hear thank you after you say no. At first young people might find saying thank you after hearing no a bit weird, but after a few more exercises in which they practice it, you can see how it starts to come easily and how it shifts the dynamic.
Discuss how hearing no graciously makes others feel safe to express their true boundaries, and also makes moments that could feel like rejection or conflict feel more like moments of receiving important information.
This set of exercises teaches students a few things. It’s important for young people to understand that a lot of us struggle to say no, even when we really want to. These exercises give us a tool to help others feel more comfortable saying no to us, and to make us feel more comfortable when we hear no.
Students also have a real-time embodied experience of what it feels like to have their no heard and honored within a safe environment and a low-stakes situation. If they later find themselves in a high-stakes situation with someone who is not hearing and honoring their no, their previous experience can help them to notice that this behavior is not OK.
Try these exercises with your students. You can learn more at my website and in my book Creating Consent Culture: A Handbook for Educators. |
School district and state support of ESL and other programs serving minorities has enabled upward mobility for minorities in California, and this has led to high levels of acceptance at the University of California system. The activists seek to give minorities access to the government services that created the white middle class. They push programs that even out educational disparities between children from different socioeconomic and racial backgrounds. These programs also hire people from local communities and provide an avenue to the middle class for their employees. This has been beneficial for minorities, as programs made to assist disenfranchised racial/ethnic groups also hire many people from those groups. The educational system has long served as a jobs program that expands the middle class, and California, in its early embrace of ESL and bilingual education, moved ahead of other states in incorporating minorities into the middle class.
California used education as a jobs program to deal with the rapidly increasing numbers of Latinos and Asians. The civil rights laws led to the Hart-Celler Act, which abolished the quotas favoring European immigrants to the US. Once the US was opened up, the number of Asian and Hispanic immigrants rapidly increased. This led to huge new populations of children who were non-English speakers. The public school systems and the state of California had to deal with these matters. Community activists fought for bilingual education and ESL programs in their local districts. They hired teachers from the communities who spoke either Spanish or Chinese to teach in those languages. These teachers were not credentialed. Later the educational bureaucracy instituted requirements for bilingual education and ESL, but by then other major changes relating to the inclusion of minority teachers had taken place. Activists in California led the charge for the creation of ethnic studies at universities. This was meant not only to promote ethnic pride, but to provide employment for Latinos and Asians. It helped lay the foundation for the creation of a Latino and Asian middle class. California's innovation in bilingual education and ESL, plus support of adult ESL via local school districts, helped more immigrant families move up socioeconomically.
Another example of how hiring and training local members of a minority population created socioeconomic mobility occurred in Mississippi. The Head Start program in Mississippi in the 1960s, called the CDGM, was formed by a coalition of local and Northern activists. It trained local women with middle school educations to work as Head Start teachers. At that point Black Mississippians had low rates of literacy, and large numbers of them worked in the agricultural sector or as domestics. The Head Start program gave many Black women in Mississippi their first formal employment and job training. Many of them were able to go on to college and even graduate educations. The Head Start program, in addition to teaching Black children literacy, sought to counteract the negative stereotypes that had been ingrained in Black people through extreme oppression. The programs sought to prepare children for a world in which they would have to fight for their rights and for the ability to move up socioeconomically. In Wisconsin, a dynamic closer to California's occurred with Black migrants from the South. As Black migrants joined expanding impoverished communities, activists such as William Kelly fought for local school authorities to hire Black teachers.
Efforts to increase minority enrollments have paid off in California. As the economy continues to recover, the UC system has increased the number of residents it accepts. During the recession the UC system accepted more out-of-state and international students to make up for budget shortfalls as it received reduced state subsidies. With both university and state finances recovered, the UC system offered admission to 15% more Californians. The Los Angeles Times reports that the number of Latinos admitted to the UC system grew from 16,608 to 22,704, representing 32% of the total class admitted. Overall the UC system admitted 66,123 freshmen. The number of Blacks grew from 2,337 to 3,083, representing 4.7%. Whites and Asian Americans represent 25% and 34.3% respectively. California is a minority-majority state, and the UC system is now reflecting that. The minority students accepted into UCLA mentioned programs from the nonprofit sector and from the UC system itself as playing major roles in guiding them to the proper courses to take in order to be competitive college applicants. Research has indicated that bilingual education has long-term cognitive benefits. The National Institutes of Health states that bilinguals have better attention spans, increased task-switching abilities, increased adaptiveness to environmental changes, and less cognitive decline in old age. There are a number of bilingual charter schools in California; such charter schools fulfill part of their mission by being innovative in bilingual education. The programs California has to help minorities are not limited to those under age 18: local school districts and the state also fund adult ESL.
Factors other than changes in education policy have also led to increased minority college enrollment in California. The students interviewed in the aforementioned Los Angeles Times articles mentioned all the hard work they put in to do well in school, and surely the parents of these students supported them both emotionally and financially. No doubt individual and family responsibility is at play in these cases and plays a major role. This doesn't negate the fact that good public policy, in the form of drawing educators from these communities, helped open up pathways for these minority students to enter the UC system in record numbers. The increased enrollment of California residents as well as the growing Latino and Asian population were obviously factors in the increased minority enrollments. However, one should keep in mind that in the 1960s ESL students were placed in classes for children labeled mentally retarded and had extremely high dropout rates. Clearly, the creation of programs serving linguistic minorities and the cessation of discriminatory practices towards them made a huge difference between the educational outcomes of California then and California now.
California is an example of how hiring more minority educators increases the number of middle-class minorities and provides pathways for others to pursue higher education. More effort needs to be made to hire minority teachers from the communities that face teacher shortages. In hiring locals from these communities there is a clear pathway out of poverty for those who are hired, as jobs in education are stable with good benefits. In 2016 New York City announced that it would recruit 1,000 minority men from CUNY to teach in NYC public schools. This program needs to be expanded. Now that NYC has universal pre-K, those teachers too should be hired from the local NYC communities in which they will be teaching. These initiatives need to be expanded on a national level. The programs that assisted underrepresented minorities in California in becoming competitive students for the UC system need to be created or expanded in other states.
Sir Frederick Grant Banting, (born November 14, 1891, Alliston, Ontario, Canada—died February 21, 1941, Newfoundland), Canadian physician who, with Charles H. Best, was one of the first to extract (1921) the hormone insulin from the pancreas. Injections of insulin proved to be the first effective treatment for diabetes, a disease in which glucose accumulates in abnormally high quantities in the blood. Banting was awarded a share of the 1923 Nobel Prize for Physiology or Medicine for his achievement.
Banting was educated at the University of Toronto, served in World War I, and then practiced medicine in London, Ontario. In 1889 Joseph von Mering and Oskar Minkowski had found that complete removal of the pancreas in dogs immediately caused severe diabetes. Later scientists hypothesized that the pancreas controlled glucose metabolism by generating a hormone, which they named “insulin.” However, repeated efforts to extract insulin from the pancreas ended in failure, because the pancreas’ own digestive enzymes destroyed the insulin molecules as soon as the pancreas was ground up.
In May 1921 Banting and Best, a medical student, began an intensive effort in the laboratories of the Scottish physiologist J.J.R. Macleod, at the University of Toronto, to isolate the hormone. By tying off the pancreatic ducts of dogs, they were able to reduce the pancreas to inactivity while preserving certain cells in the pancreas known as the islets of Langerhans, which were thought to be the site of insulin production. Solutions extracted from these cells were injected into the dogs whose pancreas had been removed, and the dogs quickly recovered from their artificially induced diabetes. Banting and Best were able to isolate insulin in a form that proved consistently effective in treating diabetes in humans. This discovery ultimately enabled millions of people suffering from diabetes to lead normal lives.
Banting and Best completed their experiments in 1922. The following year Banting and Macleod received the 1923 Nobel Prize for Physiology or Medicine for the discovery of insulin, though Macleod had not actually taken part in the original research. Angered that Macleod, rather than Best, had received the Nobel Prize, Banting divided his share of the award equally with Best. Macleod shared his portion of the Nobel Prize with James B. Collip, a young chemist who had helped with the purification of insulin. In 1923 Banting became head of the University of Toronto’s Banting and Best Department of Medical Research. Banting was created a knight of the British Empire in 1934. He was killed in a plane crash in 1941 while on a war mission.
Fishing nets that are lost or discarded at sea are among the major marine polluters and a source of harm for marine wildlife that becomes entangled in them.
Recent technological developments in biodegradable fishing nets offer a glimmer of hope; however, such nets are not yet used globally.
Sustainable fishing gear certificates, combined with technological developments to reduce the loss of fishing nets and boost their recovery from the ocean, and with a network to promote research and the spread of relevant technologies, could support the use of biodegradable fishing nets and contribute to the advancement of SDG 14 (life below water).
By Sophie Lilienthal, Michiel Hoornick, and Maathangi Hariharan, students at Geneva Graduate Institute
Defining the Problems: Ghost Fishing and Marine Pollution
Fishing nets, either accidentally or deliberately discarded at sea, can have serious adverse effects on the marine environment. First, these nets continue to catch fish and other marine animals (a phenomenon commonly called "ghost fishing"). Second, fishing nets are among the biggest marine plastic polluters. They are known to entangle other waste and can form "garbage islands", which pose serious threats to marine life as well.
In 2016, a team of researchers from the Republic of Korea discovered a method for producing fishing nets from a biodegradable monofilament that degrades after 24 months. Although a promising solution, biodegradable nets are still less efficient than non-biodegradable nets, and researchers find that more work is needed to improve their efficiency. Thus, for small and industrial fishers to take up biodegradable fishing nets, there must be advantages to their use that would, at least in part, counterbalance their lower efficiency.
Some private actors are already investing in the development of biodegradable nets – these singular efforts, however, are not enough to bring about global change. Moreover, even biodegradable nets can cause considerable damage to marine life. Therefore, their implementation must be accompanied by strategies to mark nets, to prevent loss of nets, and to recover nets from the ocean.
Finding a Solution: Introducing a Biodegradable Fishing Gear Certificate
Within international law and global governance, Voluntary Sustainability Standards (VSS) have gained increasing attention and a growing presence across different sectors, including coffee, cotton, and the broader fishing sector. VSS have proven to be effective tools for protecting ecosystems while at the same time providing space to invest in sustainability. VSS are private standards that require products to meet specific economic, social, and environmental sustainability metrics. The requirements can refer to product quality or attributes, but also to production and processing methods, as well as transportation. They can also serve as alternatives where laws do not exist. As voluntary schemes that guide production towards delivering positive economic, environmental, and social outcomes, VSS could provide a valuable contribution to promoting sustainable fishing practices.
Apart from being a step forward towards achieving SDG 14, a biodegradable fishing gear certificate would make a wider contribution to sustainable development. Such a VSS could provide an opportunity to cushion participating small fishers against the financial risk of changing from non-degradable to biodegradable fishing nets. A VSS for fishing nets would allow for better market access and recognition (through labels) that the fishery is using sustainable fishing practices, which would lead to better sales and thus better revenue. A requirement to use biodegradable nets could also become part of already existing VSS in the fisheries sector.
The ultimate goal of this strategy is to fully replace non-biodegradable fishing gear with biodegradable fishing gear. Specifically, this change in fishing practices would benefit SDG target 14.1, aiming to prevent and significantly reduce marine pollution of all kinds, and to increase scientific knowledge. It would also contribute to SDG target 12.3, which envisages a reduction of food losses along production and supply chains and SDG target 12.5, which aims to substantially reduce waste generation through prevention and reduction, by 2030. As fisheries are a major food industry, ghost fishing could be considered food loss in the production chain. The disposal and loss of fishing gear in the sea is waste produced by the fishing industry contributing to marine pollution, which the use of biodegradable nets could help prevent.
In addition, a VSS certificate could provide an opportunity for stakeholders and states to promote and support research into improving the effectiveness of biodegradable nets and into strategies to mark and recover nets lost at sea. Following the example of the Forest Stewardship Council Network, such a certificate could also help create a networking platform for the exchange of know-how and for the communication of relevant information to local and regional authorities. These efforts would benefit SDG target 9.5, which endeavors to enhance scientific research and upgrade the technological capabilities of industrial sectors in all countries by 2030, and SDG target 14.a, which aims to develop research capacity in order to improve ocean health and to enhance the contribution of marine biodiversity to the development of developing countries.
In summary, ghost fishing remains one of the major challenges in the prevention of marine pollution, and it requires novel and innovative solutions that go beyond traditional environmental legislation. As other VSS have shown, private certification could be one such solution. A biodegradable fishing gear certificate would thus benefit not only SDG 14, but SDGs 9 and 12 as well.
This article is a result of the Spring 2022 class, ‘Law of Sustainable Development,’ Geneva Graduate Institute, taught by Dr. Charlotte Sieber-Gasser and Dr. Manuel Sanchez. |
This activity is a fun way to practice comparing numbers from 0 to 10! Students count the apples, type the number and then drag and drop the less than, greater than, or equal to symbol. This digital and interactive resource includes moveable pieces and can be used with Google Slides for Google Classroom. These digital task cards are perfect for math centers, whole group, summer review and homeschooling. You can also print and use these as clip cards.
Upon purchase, you will receive a PDF with directions and a link to allow you to copy this resource to your Google Drive.
Google Classroom allows you to view your students’ work live from your computer. Simply select the file from your Google Drive and assign it to your students (follow the directions in the file). Students’ work is automatically saved so you can review it when they turn it in. If you would like to create differentiated versions, create multiple copies and delete or move the slides as needed.
These digital task cards are paperless and interactive. You can use this resource in PowerPoint or by opening this Google Slides resource on your Smartboard, laptop, tablet, or iPad. |
Although she was often personified, Maat (Ma’at) is perhaps best understood as an idea rather than a goddess; she was central to conceptions of the universe, balance, and divine order in Ancient Egypt.
The name Ma’at is generally translated as “that which is straight” or “truth” but also implies “order”, “balance” and “justice”. Thus Ma’at personified perfect order and harmony. She came into being when Ra rose from the waters of Nun (Chaos) and so she was often described as a daughter of Ra. She was sometimes considered to be the wife of Thoth because he was a god of wisdom.
The ancient Egyptians believed that the universe was ordered and rational. The rising and setting of the sun, the flooding of the Nile, and the predictable course of the stars in the sky reassured them that there was a permanence to existence which was central to the nature of all things. However, the forces of chaos were always present and threatened the balance of Maat. Each person was duty bound to preserve and defend Maat, and the Pharaoh was perceived as the guardian of Maat. Without Ma’at, Nun would reclaim the universe and chaos would reign supreme.
The Egyptians also had a strong sense of morality and justice. They felt that the good should prosper, and that the guilty would be punished. They praised those who defended the weak and the poor and placed a high value on loyalty, especially to one’s family. However, they also understood that it was not possible to be perfect, just balanced.
Maat transcended specific ethical rules (which differed according to different times and different peoples) and instead focused on the natural order of things. That being said, certain actions were clearly against Maat as they increased the effect of chaos and had a purely negative effect on the world.
Each Egyptian’s soul was judged in the Hall of Maat (depicted in the Book of the Dead and book five of the Book of Gates) when they died. Their heart (conscience) was weighed against the feather of Ma’at (an ostrich feather) on scales which represented balance and justice. If their heart was heavier than the feather because they had failed to live a balanced life by the principles of Ma’at, their heart was either thrown into a lake of fire or devoured by a fearsome deity known as Ammit. If, however, the heart balanced with the feather of Ma’at, they would pass the test and gain eternal life. At certain times it was Osiris who sat as judge in the ritual, and many other deities were involved in the ceremony, but the scales always represented Maat.
The Ancient Egyptians also had a well developed legal system to ensure that Maat was preserved in daily life. It is thought that the Priests of Maat were involved in the justice system, as well as tending to the needs of the goddess. Pharaohs were regularly depicted “presenting Ma’at” (holding out a tiny statue of the goddess) to reiterate their commitment to upholding order and justice.
All rulers respected Maat, but Akhenaten in particular emphasised his adherence to Ma’at, despite (or perhaps because of) his rather unconventional approach to the gods. Hatshepsut also emphasised her reverence for Maat by taking the throne name Ma’atkare (“Justice is the Soul of Re”), again possibly because, as a female ruler, she needed to show that her position was in line with Ma’at. She also built a small temple to Ma’at within the precinct of Montu in Karnak.
Maat was depicted as a woman wearing a crown with a single ostrich feather protruding from it. She is occasionally depicted as a winged goddess. Her totem was a stone platform representing the stable foundation on which order was built and the primeval mound which first emerged from the waters of Nun (chaos).
One of the main advantages of spectral remote sensing is that it allows us to get chemical information in a non-destructive and chemical-free way. Spectral remote sensing has made great advances, both in terms of its technical solutions and its areas of use. In addition to satellites, sensors installed on aeroplanes have appeared (spectral scanners), as well as field and mobile surface platforms (snapshot spectrometers) such as remote-controlled multicopters, field robots or spectral cameras.
With the growth in spectral remote sensing technologies and platforms, new forms of spectral sensing emerged in agriculture, which take into consideration the special characteristics of the crops, particularly the large spatial, spectral and temporal scales within a production cycle. One of the greatest challenges in remote sensing at the moment is increasing its resolution in time, which can be achieved by autonomous and/or independent platforms and sensor development. This kind of mobility and flexibility can adapt to the variability of agriculture in time and space, and it is also suitable for the more accurate investigation of smaller areas, which makes it a good fit for expectations in organic farming.
Spectral sensing can be used in many areas of organic farming, including investigating soil heterogeneity, managing soil resources, assessing crop purity and vitality, detecting pathogens and pests, estimating yields, planning irrigation, and mapping plant stress.
Spectral remote sensing
The human eye can only see a narrow range of the electromagnetic spectrum (just the visible light range of 380-780 nm). Even though we are limited in this way, vision is one of our most important tools for remote sensing. Photography and imaging methods that have become widely used thanks to technical developments allow documentation from various distances or photographic remote sensing. Breaking down and measuring visible and non-visible light into different wavelengths became possible with the use of spectroscopy and spectrometry, which is performed with a spectrometer.
A spectrometer reads electromagnetic waves from a given surface and records the measurements. The composition and distribution of wavelengths received from the surface are determined by the physicochemical properties of the surface. The spectrum composed of these wavelengths encodes many characteristics of a substance, like a fingerprint. This method was well known for its use in laboratory settings, but has also been used on satellites, aeroplanes and in handheld devices. It moved from controlled laboratory measurements to remote measurements, where the distance between the device and the sample can be 1 meter, 1,000 meters or even further, thus establishing the science of remote sensing.
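To make the “fingerprint” idea concrete, the sketch below computes a normalized difference index from two reflectance bands, the calculation behind NDVI, a widely used crop-vitality index. NDVI is not named in the text above, and the band values and function names here are illustrative assumptions rather than part of any particular sensing platform.

```cpp
#include <iostream>
#include <utility>
#include <vector>

// Normalized difference of two spectral bands. With band A = near-infrared
// and band B = red this is NDVI: healthy vegetation reflects strongly in the
// near-infrared and weakly in the red, so vigorous crops score close to 1.
double normalizedDifference(double bandA, double bandB) {
    const double denom = bandA + bandB;
    return denom == 0.0 ? 0.0 : (bandA - bandB) / denom;
}

int main() {
    // Hypothetical per-pixel reflectances (0..1) from a field spectrometer.
    std::vector<std::pair<double, double>> pixels = {
        {0.55, 0.08},  // vigorous crop
        {0.40, 0.15},  // moderately stressed crop
        {0.22, 0.18},  // bare soil or heavily stressed vegetation
    };
    for (const auto& [nir, red] : pixels) {
        std::cout << "NDVI = " << normalizedDifference(nir, red) << '\n';
    }
}
```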
The lunar surface taken by SMART-1. Image credit: ESA
While the Earth is tilted at an angle of about 23 degrees, the moon’s tilt is just over 1 degree. Because of this, the summits of some lunar crater rims are sunlit over very long periods. In some locations, there are “peaks of eternal light,” or pics de lumière éternelle, as the French astronomer Camille Flammarion called them at the end of the nineteenth century.
NASA’s Clementine spacecraft orbited the moon for three months in 1994. It identified some spots in the north polar region that are illuminated all the time during the summer, and others that are illuminated 80 percent of the time. This was not a big surprise, because we know that on Earth the poles receive a lot of sunlight during the summer. A question that the European Space Agency wanted to answer with the SMART-1 mission was whether there is enough solar light to still illuminate these places in winter.
SMART-1 mapped the polar areas on the moon, and we recently found an illuminated site about 15 kilometers from the north pole. Even though most of the moon is dark in that region, there’s a crater wall tall enough for sunlight to strike its rim.
Such perpetually lit areas would be good places to start our exploration of the moon. If you didn’t want to rely on complex power systems, you could install solar power stations at the peaks and use the energy to run small rovers and landers. Such systems are easier to design than electrical and mechanical systems that must withstand the extreme variation of temperature between lunar day and night. Branching out from there, you could build a spider web of facilities and habitats, with the core feeding energy to surrounding areas.
A peak of eternal light would be a good place to retreat to in winter, where we could maintain low level operations. In the spring and summer, we could reach out to other parts of the moon, extending hundreds of kilometers from the core.
The peaks provide some temperature stability. On the moon’s equator, the temperature can vary from minus 170 degrees C to plus 110 degrees C. The peaks have less variation, and an average temperature of minus 30 degrees C. A solar collector placed on a peak could provide enough energy to maintain a habitat with a very comfortable temperature of 20 degrees C.
With such a stable environment, you could do life science experiments to test how life adapts on another world. We could see how bacteria withstand the radiation environment. We could develop plant growth experiments in preparation for human bases.
But we also want to know if different organisms can survive and proliferate in the extreme conditions of the moon. By experimenting with different temperatures, artificial pressure, and other factors, we could figure out whether we even needed to develop lunar greenhouses. Do we need to recreate an exact copy of Earth conditions, or can we just adapt aspects of lunar conditions and make use of local resources?
Some astronomers are interested in the peaks of eternal light. You could build a very large observatory, at some distance from a peak of eternal light, that could observe the universe unattended. Because there’s no atmosphere on the moon, sunlight does not get scattered, so you can make observations even during part of the daytime.
Finally, just as the moon’s axis of rotation produces peaks of eternal light, there are also places, like the bottoms of some craters near the poles, that are in permanent shadow. We are very interested in such craters because they may contain water ice. That could be a valuable resource for future bases on the moon.
So a peak of eternal light would be a good central base from which to begin our lunar activities. It could provide a source of solar power for exploration, astronomical observations, life science experiments, and the investigation of possible water in the dark craters.
To extend beyond a few hundred kilometers from the peaks, however, we would need to develop nuclear power systems. That would provide enough energy to allow us to grow from a little refuge to a global village on the moon.
Original Source: NASA Astrobiology |
The third little pig’s house made of bricks saved him from the wolf. But it wasn’t the most eco-friendly. Manufacturing bricks is carbon-intensive and creates toxic air pollution. So engineers at RMIT University in Australia have proposed bricks made partly from treated sewage waste in a new study published in the journal Buildings. The idea could be hard for some to digest, but it would tremendously benefit the environment.
To make bricks, a mix of clay and concrete materials is heated at temperatures between 900 and 1200°C. This requires burning a lot of fuel. In South Asian countries, where brickmakers burn coal, biomass and trash for this firing, brick kilns have a global warming impact equivalent to that of all passenger cars in the United States, and the pollutants they create kill tens of thousands of people each year. About 8 percent of global carbon emissions come from brick manufacturing, according to some estimates.
Bricks made from sewage would help to clean up the air. Once sewage is treated and dried at wastewater treatment plants, some of the solids go into making fertilizer. But a good 30 percent is stockpiled or sent to landfill. Incorporating the biosolids in bricks would put this waste to good use and reduce emissions from brick-making.
The RMIT University researchers collected three different biosolid waste samples from two treatment plants, and used them to make bricks containing 10, 15, 20, and 25 percent biowaste. They report that bricks containing 25 percent sewage solids required about half the energy to manufacture as regular bricks.
The biobricks would also be better for the environment in other ways. More than three billion cubic meters of clay soil are dug up around the world every year to produce around 1.5 trillion bricks. That is “equivalent to over 1000 soccer fields dug 440 m deep or to a depth greater than three times the height of the Sydney Harbour Bridge,” the researchers write. Biosolid bricks could reduce the need for such massive excavation.
Plus, 43 to 99 percent of heavy metals present in the biosolids remained trapped in the bricks, keeping them from leaching into the environment, the researchers found.
The bricks passed compressive strength tests. They were more porous than their conventional cousins, which made them more insulating. And as an added bonus they were cheaper to produce.
Further tests are needed before biobricks are produced on a larger scale because sewage waste in different parts of the world can have different compositions and chemical traits, the researchers say. But based on their study results, they propose that including a minimum of 15% biosolids content into 15% of brick production could “completely recycle all the approximately 5 million tonnes of annual leftover biosolids production in Australia, New Zealand, the EU, the USA and Canada. This is a practical and sustainable proposal for recycling all the leftover biosolids worldwide.”
Source: Abbas Mohajerani et al. Proposal for Recycling the World’s Unused Stockpiles of Treated Wastewater Sludge (Biosolids) in Fired-Clay Bricks. Buildings, 2019.
Photo: RMIT University |
C++ Decision Programming
In this project you will create an application that yields a zodiac sign based on an input of birth month and day. The process of making decisions is fundamental in programming, just as it is in real life. A pre-existing condition is evaluated, and a decision is made based on the evaluation. For example, if it is cold outside, a person chooses to wear a coat. If a temperature sensor exceeds a certain threshold, the furnace turns off. In C++, as in other programming languages, a decision construct is used to control the sequence of instructions that are executed under certain conditions. It gives the programmer the power to take certain actions based on various input conditions. For example, a grading program will need to assign a certain grade based on a test score.
C++ nested decision constructs
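A minimal sketch of the nested decision construct the project builds toward: an outer decision selects the birth month and an inner comparison checks the day against a cusp date. Only three months are shown, and the cusp dates (which vary slightly between conventions) and the function name are assumptions for illustration.

```cpp
#include <iostream>
#include <string>

// Maps a birth month (1-12) and day to a zodiac sign using nested decisions.
std::string zodiacSign(int month, int day) {
    if (month == 3) {                      // March: Pisces until the 20th
        return (day <= 20) ? "Pisces" : "Aries";
    } else if (month == 4) {               // April: Aries until the 19th
        return (day <= 19) ? "Aries" : "Taurus";
    } else if (month == 5) {               // May: Taurus until the 20th
        return (day <= 20) ? "Taurus" : "Gemini";
    }
    return "unknown (month not handled in this sketch)";
}

int main() {
    int month = 0, day = 0;
    std::cout << "Enter birth month and day: ";
    std::cin >> month >> day;
    std::cout << zodiacSign(month, day) << '\n';
}
```

The same if/else-if chain handles the grading example mentioned above: compare a test score against descending thresholds and return the first grade whose cutoff the score meets.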
According to the United Nations, climate change is the defining issue of our time, even in this time of COVID. Many of the root causes of climate change also increase the risk of future pandemics. In the past century we have escalated our demands upon nature, such that today we are losing species at a rate not seen since the dinosaurs, when half of life on Earth went extinct 65 million years ago. This rapid dismantling of life on earth owes primarily to habitat loss through deforestation, which occurs mostly from growing crops and raising livestock for people. With fewer places to live and fewer food sources to feed on, animals find food and shelter where people are, and that can lead to increased disease spread.
Climate change has also contributed to an increased risk of bushfire during prolonged droughts, resulting in the destruction of animal habitat. In Australia alone, 8.3 million hectares of native forest and 130,000 hectares of plantation timber were burnt during the bushfires of 2019/20, with consequent devastation to the habitats of insects, birds and animals, not to mention the loss of human life and homes and the ongoing impact in so many communities.
Water management is inextricably linked to climate change. Two-thirds of Earth’s land is on pace to lose water as the climate warms. This will impact the world’s water availability in complex ways – putting pressure on water supply, food production and property values. We watched on in dread in 2018 as Cape Town, South Africa, counted down the days to ‘Day Zero’ – when the city would run out of water. The region’s surface reservoirs were going dry amid its worst drought on record, and the public countdown was a plea for help.
By drastically cutting their water use, Cape Town residents and farmers were able to push back this disaster until the rain came, but the close call showed just how precarious global water security is. Like Australia with its recent water restrictions, California and Mexico City now face water restrictions after years with little rain.
There are growing concerns that many regions of the world will face water crises like these in the coming decades as rising temperatures exacerbate and extend drought conditions. There are also the alternate outcomes from climate change – that of increased extreme weather events such as cyclones/hurricanes, snow storms and heavy rainfall and flooding. Three cyclones have been swirling over the island nation of Fiji in the past month and a half, killing at least six people and destroying thousands of homes. Category 2 Cyclone Ana pummelled Fiji on Sunday, following category 5 Cyclone Yasa in December. Just this week, a third cyclone – Bina – was downgraded to a rain depression which is exacerbating flooding in Fiji. The Fijian Government says the extreme weather events should be a climate change wake up call for the rest of the world.
This week, the first major winter storm of 2021 is threatening to dump two feet of snow on New York City and other parts of the mid-Atlantic and Northeast, stopping transportation, shutting down coronavirus vaccination sites and threatening the biggest storm surge since Superstorm Sandy in 2012.
Climate change, global warming, drought and extreme weather events are complex to unpack and understand – teasing out the relationship between warmer temperatures, warming oceans, more moisture in the atmosphere and the impact we humans have.
Understanding the risks ahead for us to ensure water security requires considering the entire landscape of terrestrial water storage – not just the rivers, but also the water stored in soils, groundwater, snowpack, forest canopies, wetlands, lakes and reservoirs. As citizens of the planet, our respect for water and optimising its use and maximising its storage, play a significant role in our future water security.
Rainwater harvesting is critical in Australia’s volatile climate. When it rains, we should be prepared, and harvest and store every drop of rainwater.
Rainwater harvesting is an important practice on the farm, be it a hobby farm or a larger commercial setting, and it is just as relevant in the urban setting.
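As a rough guide to what harvesting “every drop” amounts to, the sketch below uses the rule of thumb that 1 mm of rain falling on 1 m² of roof yields about 1 litre of water, discounted by a runoff coefficient for losses such as evaporation and first-flush diversion. The roof area, rainfall and coefficient are illustrative assumptions only.

```cpp
#include <iostream>

// Estimated rainwater yield in litres: 1 mm of rain on 1 m^2 of roof ~ 1 L,
// scaled down by a runoff coefficient (0.85 here is an assumed value).
double harvestLitres(double roofAreaM2, double rainfallMm,
                     double runoffCoefficient = 0.85) {
    return roofAreaM2 * rainfallMm * runoffCoefficient;
}

int main() {
    // Hypothetical case: a 200 m^2 roof in a 450 mm/year rainfall zone.
    std::cout << "Estimated annual harvest: "
              << harvestLitres(200.0, 450.0) << " L\n";  // 76500 L
}
```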
Bushmans have a full range of water tanks in various shapes (round water tank, squat rainwater tank, slimline water tank) and a wide range of colours. |
Researchers in the humanities and social sciences like to compare, contrast, synthesise, critique, build and dissect theories, ideas and preconceptions. We attempt to formulate compelling arguments and narratives, drawing on relevant literature, reflections and insights from our intellectual communities. We sometimes encroach across borders between communities and frameworks to encounter otherness that challenges our own preconceptions. For some researchers that all counts as “critical literature review” — “secondary research.”
In the humanities and social sciences such research is even more powerful when grounded in practical engagement with the world, and draws on evidence from human observation and experience. It draws on primary research.
Here’s a standard definition of primary research from a chapter by Dana Lynn Driscoll entitled “Introduction to Primary Research: Observations, Surveys, and Interviews” in a book about research methods.
“Primary research, …, is research that is collected firsthand rather than found in a book, database, or journal” (154).
In primary research, scholars gather evidence in a systematic fashion to reveal or infer some new information, or to confirm or discredit a hypothesis.
The chapter includes sound advice about planning and evaluating methods to gather and analyse appropriate data and evidence in a way that addresses a set of research questions.
“The ultimate goal in conducting primary research is to learn about something new that can be confirmed by others and to eliminate our own biases in the process” (154).
That’s a daunting prospect for trainee researchers who are constrained by time and resources: to construct a research project that produces results that can be confirmed by others, and exhibits independence from personal opinion (biases). Students on a 20 credit course are amongst those who find that difficult, but so are most researchers.
Illustration enters a research project as relatable examples, narratives, stories that bring the research problem, findings and conclusions to life. The psychologist Sigmund Freud provided copious illustrations from his experience with patients to illustrate his theories on how traumas from childhood reveal themselves in feelings and behaviours in adult life. Whatever their status as repeatable primary evidence, such observations provide relatable insights that ground his theories about human experience.
Illustration implies the opposite of unbiased research. Illustration suggests the researcher has already reached a conclusion. You know what you want to show and select examples to demonstrate that it is true. But for many scholars, such as Freud, illustration and evidence merge.
Illustration in lieu of evidence
What starts as a mere illustration can seed systematic evidence gathering. Illustrations can help scope and contextualise a study, identify research questions and trial a method. Illustrations are less costly and time consuming than systematic gathering of verifiable, repeatable and “unbiased” evidence.
Sometimes illustration is sufficient to ground a research project.
Here’s an example from my recent class on research methods. Imagine a research question: what does writing reveal about cultural difference? We videoed two people familiar with their respective non-English languages. I asked them to translate each other’s names into the language of the other, and explain the process.
Even from this still image it’s apparent that the Chinese speaker had more to describe about her language than the practitioner of the Hindi alphabet. Such an illustration of course prompts inquiry about how representative the interlocutors are of their language communities. Of greater interest, it prompts consideration of logographic and ideographic writing conventions versus alphabetic scripts, and the influence of each on cultural narratives. I’m sure more could be said, even from this illustration, and especially if analysed in the context of critical secondary research as described in my first paragraph.
- Burke Johnson, R., Anthony J. Onwuegbuzie, and Lisa A. Turner. 2007. Toward a Definition of Mixed Methods Research. Journal of Mixed Methods Research 1 (2): 112-133.
- Driscoll, Dana Lynn. 2011. Introduction to primary research: Observations, surveys and interviews. In Charlie Lowe and Pavel Zemliansky (eds.), Writing Spaces: Readings on Writing Volume 2: 153-173. Anderson, SC: Parlor Press.
- I’m grateful to Hairong Wang and Anjali Gupta for permission to use their writing in the illustration above. |
Guiding children comes with many decisions that impact how they grow and develop, and the choice of discipline strategy is one of them.
Depending on the child’s age and stage of development, the strategy will be different. For toddlers and preschoolers, appropriate discipline may involve simply distracting children or giving them something different to do to redirect their attention away from misbehaviors. As children get older and can understand more complex reasoning and explanations, parents’ discipline approaches may be more adaptive as they rely more on reasoning to manage children’s behaviors.
The real goal of parental discipline is to teach children how to behave in desired ways. Rewards and punishments have long been used as strategies. As children age, they begin to understand their parents’ rules and family values and begin to exhibit behaviors that align with them. Conversations between parents and children about rules and consequences can help, especially if both parent and child are calm and regulated. Discussions that happen when emotions are high may lead to harsher consequences than discussions held after emotions have calmed for both child and adult.
As teens, parents might find that removing a privilege is a fair consequence to bring about more desired behavior. Communication will again be a favored strategy so that the teen still feels connected to the parent even in the face of a consequence for undesired behavior. It is during this time of adolescence that the teen begins to assume the responsibilities that come with emerging adulthood and is rewarded with more privileges. Guiding preschoolers, school-agers, or teens means continual communication with one another and choosing discipline strategies that respect the age and stage of each individual.
With two earned degrees from Iowa State University, Barb is a Human Sciences Specialist utilizing her experience working alongside communities to develop strong youth and families! With humor and compassion, she enjoys teaching, listening and learning to learn!
As Donna and I pondered the topic this month, we wanted to make sure that we talked about the fact that many things die. Animals. People. Plants. Flowers. Bugs. Fish. All living things die. The most important thing when talking about the topic of death is to remember the child’s age. The age of the child is what guides your conversation. Here are a couple of age related guidelines directly from the extension.org article “Loss and Grief: Talking with Children”.
Infants. Children under a year old seem to have very little awareness of death, but do experience feelings of loss and separation. Infants might show similar signs of stress as an older child or adult who is coping with loss: crankiness, eating disturbance, altered sleep patterns, or intestinal disturbances.
Toddlers. Children between the ages of one and three generally view death as temporary. That’s why it’s very important to state simply and directly that the person has died and to explain what that means.
Young children. Children between the ages of three and six might believe their thoughts, feelings or actions can cause death. Feelings of responsibility and guilt can arise. It’s important to tell children what caused the death and be attuned to any sense of responsibility the child might convey.
Older children. School aged children begin to develop a more mature understanding of death, seeing it as both inevitable and irreversible.
Teenagers. Teenagers are going through many changes and life in general can be very challenging. During a time of loss and mourning, let your teenager know that you’re there for her/him. Be present while also allowing space and privacy. Respect your teenager’s feelings, listen well, and let them teach you about their grief and how you can help.
Mother of three. Lover of all things child development related. Fascinated by temperament and brain development. Professional background with families, child care providers, teachers and community service entities.
Many parents wonder when their child can stay home alone. While there is not a magical answer, there are many factors to consider when making this decision with your family.
First, remember that all children mature at different rates and have varying levels of skills and abilities that should be taken into consideration when making this decision.
Second, it is important for families to consider the amount of time the child will be home alone (i.e., one half hour or an entire day).
Third, it is important to know how your child feels about being home alone and how your child will handle an emergency.
Answer these questions to assess your child’s readiness to stay home alone.
Is your child mature enough to handle the responsibilities of being on his or her own?
Do you and your child communicate well about feelings?
Can your child manage simple tasks like making a snack and taking a phone message?
Has your child indicated an interest and/or a willingness to stay home alone?
Does your child generally observe rules that exist in your home?
Does your child spontaneously tell you about daily events?
Is your child physically able to unlock and lock the doors at your home?
Can your child solve small problems without assistance?
Does your child know when and how to seek outside help?
Do you think your child is prepared to handle an accident or an emergency?
Will your child follow your household rules when you are not home?
If you answered “yes” to most of the questions, this may indicate your child is ready to stay alone.
Many parents find it helpful to allow their child to stay home alone in small increments to begin with, as a “testing period.” For instance, maybe a parent will go for a walk while their child is home for 20-30 minutes. This is a good opportunity to assess the event and to discuss how your child felt about staying home alone. As you and your child become more comfortable with your child staying home alone, it would be appropriate to gradually increase the amount of time your child is home alone.
Donna Donald is a Human Sciences specialist for Iowa State University Extension and Outreach who has spent her career working with families across the lifespan. She believes families are defined by function as well as form. Donna entered parenthood as a stepmother to three daughters and loves being a grandmother of seven young adults. |
Energy is the capacity to supply heat or do work (applying a force to move matter). Kinetic energy (KE) is the energy of motion; potential energy is energy due to relative position, composition, or condition. When energy is converted from one form into another, energy is neither created nor destroyed (law of conservation of energy or first law of thermodynamics).
The thermal energy of matter is due to the kinetic energies of its constituent atoms or molecules. Temperature is an intensive property of matter reflecting hotness or coldness that increases as the average kinetic energy increases. Heat is the transfer of thermal energy between objects at different temperatures. Chemical and physical processes can absorb heat (endothermic) or release heat (exothermic). The SI unit of energy, heat, and work is the joule (J).
Specific heat and heat capacity are measures of the energy needed to change the temperature of a substance or object. The amount of heat absorbed or released by a substance depends directly on the type of substance, its mass, and the temperature change it undergoes. The first law of thermodynamics states that energy cannot be created or destroyed but can be transformed from one type into another type or transferred from system to surroundings or vice versa. The total energy change of the system, ΔU, is the sum of the heat, q, and the work, w. Energy is a state function, which means that the initial and final states are what determine the energy, not the path taken.
34.1 Energy Basics
By the end of this section, you will be able to:
- Define energy, distinguish types of energy, and describe the nature of energy changes that accompany chemical and physical changes
- Distinguish the related properties of heat, thermal energy, and temperature
- Define and distinguish specific heat and heat capacity, and describe the physical implications of both
- Perform calculations involving heat, specific heat, and temperature change
Chemical changes and their accompanying changes in energy are important parts of our everyday world (Figure 34.1). The macronutrients in food (proteins, fats, and carbohydrates) undergo metabolic reactions that provide the energy to keep our bodies functioning. We burn a variety of fuels (gasoline, natural gas, coal) to produce energy for transportation, heating, and the generation of electricity. Industrial chemical reactions use enormous amounts of energy to produce raw materials (such as iron and aluminum). Energy is then used to manufacture those raw materials into useful products, such as cars, skyscrapers, and bridges.
The energy involved in chemical changes is important to our daily lives: (a) A cheeseburger for lunch provides the energy you need to get through the rest of the day; (b) the combustion of gasoline provides the energy that moves your car (and you) between home, work, and school; and (c) coke, a processed form of coal, provides the energy needed to convert iron ore into iron, which is essential for making many of the products we use daily. (credit a: modification of work by “Pink Sherbet Photography”/Flickr; credit b: modification of work by Jeffery Turner)
Over 90% of the energy we use comes originally from the sun. Every day, the sun provides the earth with almost 10,000 times the amount of energy necessary to meet all of the world’s energy needs for that day. Our challenge is to find ways to convert and store incoming solar energy so that it can be used in reactions or chemical processes that are both convenient and nonpolluting. Plants and many bacteria capture solar energy through photosynthesis. We release the energy stored in plants when we burn wood or plant products such as ethanol. We also use this energy to fuel our bodies by eating food that comes directly from plants or from animals that got their energy by eating plants. Burning coal and petroleum also releases stored solar energy: These fuels are fossilized plant and animal matter.
This chapter will introduce the basic ideas of an important area of science concerned with the amount of heat absorbed or released during chemical and physical changes—an area called thermochemistry. The concepts introduced in this chapter are widely used in almost all scientific and technical fields. Food scientists use them to determine the energy content of foods. Biologists study the energetics of living organisms, such as the metabolic combustion of sugar into carbon dioxide and water. The oil, gas, and transportation industries, renewable energy providers, and many others endeavor to find better methods to produce energy for our commercial and personal needs. Engineers strive to improve energy efficiency, find better ways to heat and cool our homes, refrigerate our food and drinks, and meet the energy and cooling needs of computers and electronics, among other applications. Understanding thermochemical principles is essential for chemists, physicists, biologists, geologists, every type of engineer, and just about anyone who studies or does any kind of science.
Energy can be defined as the capacity to supply heat or do work. One type of work (w) is the process of causing matter to move against an opposing force. For example, we do work when we inflate a bicycle tire—we move matter (the air in the pump) against the opposing force of the air already in the tire.
Like matter, energy comes in different types. One scheme classifies energy into two types: potential energy, the energy an object has because of its relative position, composition, or condition, and kinetic energy, the energy that an object possesses because of its motion. Water at the top of a waterfall or dam has potential energy because of its position; when it flows downward through generators, it has kinetic energy that can be used to do work and produce electricity in a hydroelectric plant (Figure 34.2). A battery has potential energy because the chemicals within it can produce electricity that can do work.
(a) Water at a higher elevation, for example, at the top of Victoria Falls, has a higher potential energy than water at a lower elevation. As the water falls, some of its potential energy is converted into kinetic energy. (b) If the water flows through generators at the bottom of a dam, such as the Hoover Dam shown here, its kinetic energy is converted into electrical energy. (credit a: modification of work by Steve Jurvetson; credit b: modification of work by “curimedia”/Wikimedia commons)
Energy can be converted from one form into another, but all of the energy present before a change occurs always exists in some form after the change is completed. This observation is expressed in the law of conservation of energy: during a chemical or physical change, energy can be neither created nor destroyed, although it can be changed in form. (This is also one version of the first law of thermodynamics, as you will learn later.)
When one substance is converted into another, there is always an associated conversion of one form of energy into another. Heat is usually released or absorbed, but sometimes the conversion involves light, electrical energy, or some other form of energy. For example, chemical energy (a type of potential energy) is stored in the molecules that compose gasoline. When gasoline is combusted within the cylinders of a car’s engine, the rapidly expanding gaseous products of this chemical reaction generate mechanical energy (a type of kinetic energy) when they move the cylinders’ pistons.
According to the law of conservation of matter (seen in an earlier chapter), there is no detectable change in the total amount of matter during a chemical change. When chemical reactions occur, the energy changes are relatively modest and the mass changes are too small to measure, so the laws of conservation of matter and energy hold well. However, in nuclear reactions, the energy changes are much larger (by factors of a million or so), the mass changes are measurable, and matter-energy conversions are significant. This will be examined in more detail in a later chapter on nuclear chemistry.
Thermal Energy, Temperature, and Heat
Thermal energy is kinetic energy associated with the random motion of atoms and molecules. Temperature is a quantitative measure of “hot” or “cold.” When the atoms and molecules in an object are moving or vibrating quickly, they have a higher average kinetic energy (KE), and we say that the object is “hot.” When the atoms and molecules are moving slowly, they have lower average KE, and we say that the object is “cold” (Figure 34.3). Assuming that no chemical reaction or phase change (such as melting or vaporizing) occurs, increasing the amount of thermal energy in a sample of matter will cause its temperature to increase. And, assuming that no chemical reaction or phase change (such as condensation or freezing) occurs, decreasing the amount of thermal energy in a sample of matter will cause its temperature to decrease.
(a) The molecules in a sample of hot water move more rapidly than (b) those in a sample of cold water.
Most substances expand as their temperature increases and contract as their temperature decreases. This property can be used to measure temperature changes, as shown in Figure 34.4. The operation of many thermometers depends on the expansion and contraction of substances in response to temperature changes.
(a) In an alcohol or mercury thermometer, the liquid (dyed red for visibility) expands when heated and contracts when cooled, much more so than the glass tube that contains the liquid. (b) In a bimetallic thermometer, two different metals (such as brass and steel) form a two-layered strip. When heated or cooled, one of the metals (brass) expands or contracts more than the other metal (steel), causing the strip to coil or uncoil. Both types of thermometers have a calibrated scale that indicates the temperature. (credit a: modification of work by “dwstucke”/Flickr)
Heat (q) is the transfer of thermal energy between two bodies at different temperatures. Heat flow (a redundant term, but one commonly used) increases the thermal energy of one body and decreases the thermal energy of the other. Suppose we initially have a high temperature (and high thermal energy) substance (H) and a low temperature (and low thermal energy) substance (L). The atoms and molecules in H have a higher average KE than those in L. If we place substance H in contact with substance L, the thermal energy will flow spontaneously from substance H to substance L. The temperature of substance H will decrease, as will the average KE of its molecules; the temperature of substance L will increase, along with the average KE of its molecules. Heat flow will continue until the two substances are at the same temperature (Figure 34.5).
(a) Substances H and L are initially at different temperatures, and their atoms have different average kinetic energies. (b) When they contact each other, collisions between the molecules result in the transfer of kinetic (thermal) energy from the hotter to the cooler matter. (c) The two objects reach “thermal equilibrium” when both substances are at the same temperature and their molecules have the same average kinetic energy.
Matter undergoing chemical reactions and physical changes can release or absorb heat. A change that releases heat is called an exothermic process. For example, the combustion reaction that occurs when using an oxyacetylene torch is an exothermic process—this process also releases energy in the form of light as evidenced by the torch’s flame (Figure 34.6). A reaction or change that absorbs heat is an endothermic process. A cold pack used to treat muscle strains provides an example of an endothermic process. When the substances in the cold pack (water and a salt like ammonium nitrate) are brought together, the resulting process absorbs heat, leading to the sensation of cold.
(a) An oxyacetylene torch produces heat by the combustion of acetylene in oxygen. The energy released by this exothermic reaction heats and then melts the metal being cut. The sparks are tiny bits of the molten metal flying away. (b) A cold pack uses an endothermic process to create the sensation of cold. (credit a: modification of work by “Skatebiker”/Wikimedia commons)
Historically, energy was measured in units of calories (cal). A calorie is the amount of energy required to raise the temperature of one gram of water by 1 degree Celsius (1 kelvin). However, this quantity depends on the atmospheric pressure and the starting temperature of the water. The ease of measuring energy changes in calories has meant that the calorie is still frequently used. The Calorie (with a capital C), or large calorie, commonly used in quantifying food energy content, is a kilocalorie. The SI unit of heat, work, and energy is the joule. A joule (J) is defined as the amount of energy used when a force of 1 newton moves an object 1 meter. It is named in honor of the English physicist James Prescott Joule. One joule is equivalent to 1 kg m2/s2, which is also called 1 newton–meter. A kilojoule (kJ) is 1000 joules. To standardize its definition, 1 calorie has been set to equal 4.184 joules.
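The conversions just defined are simple enough to express directly; a small sketch (the function names are arbitrary):

```cpp
#include <iostream>

// 1 cal = 4.184 J by definition; 1 food Calorie (Cal) = 1 kcal = 4.184 kJ.
double caloriesToJoules(double cal) { return cal * 4.184; }
double foodCaloriesToKilojoules(double Cal) { return Cal * 4.184; }

int main() {
    std::cout << caloriesToJoules(1.0) << " J per cal\n";     // 4.184
    std::cout << foodCaloriesToKilojoules(250.0) << " kJ\n";  // 1046 kJ for a hypothetical 250 Cal snack
}
```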
We now introduce two concepts useful in describing heat flow and temperature change. The heat capacity (C) of a body of matter is the quantity of heat (q) it absorbs or releases when it experiences a temperature change (ΔT) of 1 degree Celsius (or equivalently, 1 kelvin):

C = q / ΔT
Heat capacity is determined by both the type and amount of substance that absorbs or releases heat. It is therefore an extensive property—its value is proportional to the amount of the substance.
For example, consider the heat capacities of two cast iron frying pans. The heat capacity of the large pan is five times greater than that of the small pan because, although both are made of the same material, the mass of the large pan is five times greater than the mass of the small pan. More mass means more atoms are present in the larger pan, so it takes more energy to make all of those atoms vibrate faster. The heat capacity of the small cast iron frying pan is found by observing that it takes 18,150 J of energy to raise the temperature of the pan by 50.0 °C:

C(small pan) = 18,150 J / 50.0 °C = 363 J/°C
The larger cast iron frying pan, while made of the same substance, requires 90,700 J of energy to raise its temperature by 50.0 °C. The larger pan has a (proportionally) larger heat capacity because the larger amount of material requires a (proportionally) larger amount of energy to yield the same temperature change:

C(large pan) = 90,700 J / 50.0 °C = 1,814 J/°C
The specific heat capacity (c) of a substance, commonly called its “specific heat,” is the quantity of heat required to raise the temperature of 1 gram of a substance by 1 degree Celsius (or 1 kelvin):

c = q / (m × ΔT)
Specific heat capacity depends only on the kind of substance absorbing or releasing heat. It is an intensive property—the type, but not the amount, of the substance is all that matters. For example, the small cast iron frying pan has a mass of 808 g. The specific heat of iron (the material used to make the pan) is therefore:

c(iron) = 18,150 J / (808 g × 50.0 °C) = 0.449 J/g °C
The large frying pan has a mass of 4040 g. Using the data for this pan, we can also calculate the specific heat of iron:

c(iron) = 90,700 J / (4040 g × 50.0 °C) = 0.449 J/g °C
Although the large pan is more massive than the small pan, since both are made of the same material, they both yield the same value for specific heat (for the material of construction, iron). Note that specific heat is measured in units of energy per temperature per mass and is an intensive property, being derived from a ratio of two extensive properties (heat and mass). The molar heat capacity, also an intensive property, is the heat capacity per mole of a particular substance and has units of J/mol °C (Figure 34.7).
Because of its larger mass, a large frying pan has a larger heat capacity than a small frying pan. Because they are made of the same material, both frying pans have the same specific heat. (credit: Mark Blaser)
Water has a relatively high specific heat (about 4.2 J/g °C for the liquid and 2.09 J/g °C for the solid); most metals have much lower specific heats (usually less than 1 J/g °C). The specific heat of a substance varies somewhat with temperature. However, this variation is usually small enough that we will treat specific heat as constant over the range of temperatures that will be considered in this chapter. Specific heats of some common substances are listed in Table 34.1.
Specific Heats of Common Substances at 25 °C and 1 bar

| Substance | Symbol (state) | Specific Heat (J/g °C) |
| --- | --- | --- |
| water (liquid) | H2O(l) | 4.184 |
| ice | H2O(s) | 2.093 (at −10 °C) |
| iron | Fe(s) | 0.449 |
If we know the mass of a substance and its specific heat, we can determine the amount of heat, q, entering or leaving the substance by measuring the temperature change before and after the heat is gained or lost:

q = c × m × ΔT = c × m × (Tfinal − Tinitial)
In this equation, c is the specific heat of the substance, m is its mass, and ΔT (which is read “delta T”) is the temperature change, Tfinal − Tinitial. If a substance gains thermal energy, its temperature increases, its final temperature is higher than its initial temperature, Tfinal − Tinitial has a positive value, and the value of q is positive. If a substance loses thermal energy, its temperature decreases, the final temperature is lower than the initial temperature, Tfinal − Tinitial has a negative value, and the value of q is negative.
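The same relationship expressed as a small function, checked against the two worked problems that follow; the function and parameter names are choices made for this sketch, not standard nomenclature.

```cpp
#include <iostream>

// q = c * m * (Tfinal - Tinitial). Positive q: the sample absorbed heat;
// negative q: the sample released heat.
double heatJoules(double specificHeatJPerGC, double massG,
                  double tInitialC, double tFinalC) {
    return specificHeatJPerGC * massG * (tFinalC - tInitialC);
}

int main() {
    // Water example below: 8.0 x 10^2 g of water heated from 21 °C to 85 °C.
    std::cout << heatJoules(4.184, 800.0, 21.0, 85.0) << " J\n";   // ~2.14e5 J
    // Check Your Learning: a 502 g iron skillet heated from 25 °C to 250 °C.
    std::cout << heatJoules(0.449, 502.0, 25.0, 250.0) << " J\n";  // ~5.07e4 J
}
```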
A flask containing 8.0 × 10² g of water is heated, and the temperature of the water increases from 21 °C to 85 °C. How much heat did the water absorb?
To answer this question, consider these factors:
- the specific heat of the substance being heated (in this case, water)
- the amount of substance being heated (in this case, 8.0 × 10² g)
- the magnitude of the temperature change (in this case, from 21 °C to 85 °C).
The specific heat of water is 4.184 J/g °C, so to heat 1 g of water by 1 °C requires 4.184 J. We note that since 4.184 J is required to heat 1 g of water by 1 °C, we will need 800 times as much to heat 8.0 × 10² g of water by 1 °C. Finally, we observe that since 4.184 J are required to heat 1 g of water by 1 °C, we will need 64 times as much to heat it by 64 °C (that is, from 21 °C to 85 °C).
This can be summarized using the equation:

q = c × m × ΔT = (4.184 J/g °C) × (8.0 × 10² g) × (85 − 21) °C = 2.14 × 10⁵ J (214 kJ)
Because the temperature increased, the water absorbed heat and q is positive.
Check Your Learning
How much heat, in joules, must be added to a 502 g iron skillet to increase its temperature from 25 °C to 250 °C? The specific heat of iron is 0.449 J/g °C.

5.07 × 10⁴ J
Note that the relationship between heat, specific heat, mass, and temperature change can be used to determine any of these quantities (not just heat) if the other three are known or can be deduced.
Determining Other Quantities
A piece of unknown metal weighs 348 g. When the metal piece absorbs 6.64 kJ of heat, its temperature increases from 22.4 °C to 43.6 °C. Determine the specific heat of this metal (which might provide a clue to its identity).
Since mass, heat, and temperature change are known for this metal, we can determine its specific heat using the relationship:

c = q / (m × ΔT)
Substituting the known values:

c = 6,640 J / (348 g × (43.6 − 22.4) °C) = 0.900 J/g °C
Comparing this value with the values in Table 34.1, this value matches the specific heat of aluminum, which suggests that the unknown metal may be aluminum.
Check Your Learning
A piece of unknown metal weighs 217 g. When the metal piece absorbs 1.43 kJ of heat, its temperature increases from 24.5 °C to 39.1 °C. Determine the specific heat of this metal, and predict its identity.
c = 0.451 J/g °C; the metal is likely to be iron
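Rearranging q = c × m × ΔT to c = q/(m × ΔT) and comparing the result against tabulated specific heats, as both examples above do, is easy to mechanize. In this sketch the iron value comes from this section; the aluminum value (0.897 J/g °C) is the standard handbook figure that the first example's result of 0.900 J/g °C is matched against; the structure and function names are illustrative.

```cpp
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

struct Metal {
    std::string name;
    double cJPerGC;  // tabulated specific heat, J/g °C
};

// Computes c = q / (m * dT) and returns the metal whose tabulated
// specific heat is closest to the computed value.
std::string guessMetal(double qJoules, double massG, double deltaTC) {
    const std::vector<Metal> table = {
        {"aluminum", 0.897},
        {"iron", 0.449},
    };
    const double c = qJoules / (massG * deltaTC);
    const Metal* best = &table[0];
    for (const auto& m : table) {
        if (std::fabs(m.cJPerGC - c) < std::fabs(best->cJPerGC - c)) {
            best = &m;
        }
    }
    return best->name + " (c = " + std::to_string(c) + " J/g °C)";
}

int main() {
    std::cout << guessMetal(6640.0, 348.0, 43.6 - 22.4) << '\n';  // aluminum
    std::cout << guessMetal(1430.0, 217.0, 39.1 - 24.5) << '\n';  // iron
}
```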
Chemistry in Everyday Life
Solar Thermal Energy Power Plants
The sunlight that reaches the earth contains thousands of times more energy than we presently capture. Solar thermal systems provide one possible solution to the problem of converting energy from the sun into energy we can use. Large-scale solar thermal plants have different design specifics, but all concentrate sunlight to heat some substance; the heat “stored” in that substance is then converted into electricity.
The Solana Generating Station in Arizona’s Sonoran Desert produces 280 megawatts of electrical power. It uses parabolic mirrors that focus sunlight on pipes filled with a heat transfer fluid (HTF) (Figure 34.8). The HTF then does two things: It turns water into steam, which spins turbines, which in turn produces electricity, and it melts and heats a mixture of salts, which functions as a thermal energy storage system. After the sun goes down, the molten salt mixture can then release enough of its stored heat to produce steam to run the turbines for 6 hours. Molten salts are used because they possess a number of beneficial properties, including high heat capacities and thermal conductivities.
This solar thermal plant uses parabolic trough mirrors to concentrate sunlight. (credit a: modification of work by Bureau of Land Management)
The 377-megawatt Ivanpah Solar Generating System, located in the Mojave Desert in California, is the largest solar thermal power plant in the world (Figure 34.9). Its 170,000 mirrors focus huge amounts of sunlight on three water-filled towers, producing steam at over 538 °C that drives electricity-producing turbines. It produces enough energy to power 140,000 homes. Water is used as the working fluid because of its large heat capacity and heat of vaporization.
(a) The Ivanpah solar thermal plant uses 170,000 mirrors to concentrate sunlight on water-filled towers. (b) It covers 4000 acres of public land near the Mojave Desert and the California-Nevada border. (credit a: modification of work by Craig Dietrich; credit b: modification of work by “USFWS Pacific Southwest Region”/Flickr)
34.2 The First Law of Thermodynamics
By the end of this section, you will be able to:
- State the first law of thermodynamics
- Define state function
Thermochemistry is a branch of chemical thermodynamics, the science that deals with the relationships between heat, work, and other forms of energy in the context of chemical and physical processes. As we concentrate on thermochemistry in this chapter, we need to consider some widely used concepts of thermodynamics.
Substances act as reservoirs of energy, meaning that energy can be added to them or removed from them. Energy is stored in a substance when the kinetic energy of its atoms or molecules is raised. The greater kinetic energy may be in the form of increased translations (travel or straight-line motions), vibrations, or rotations of the atoms or molecules. When thermal energy is lost, the intensities of these motions decrease and the kinetic energy falls. The total of all possible kinds of energy present in a substance is called the internal energy (U), sometimes symbolized as E.
As a system undergoes a change, its internal energy can change, and energy can be transferred from the system to the surroundings, or from the surroundings to the system. Energy is transferred into a system when it absorbs heat (q) from the surroundings or when the surroundings do work (w) on the system. For example, energy is transferred into room-temperature metal wire if it is immersed in hot water (the wire absorbs heat from the water), or if you rapidly bend the wire back and forth (the wire becomes warmer because of the work done on it). Both processes increase the internal energy of the wire, which is reflected in an increase in the wire’s temperature. Conversely, energy is transferred out of a system when heat is lost from the system, or when the system does work on the surroundings.
The relationship between internal energy, heat, and work can be represented by the equation:

ΔU = q + w
as shown in Figure 34.10. This is one version of the first law of thermodynamics, and it shows that the internal energy of a system changes through heat flow into or out of the system (positive q is heat flow in; negative q is heat flow out) or work done on or by the system. The work, w, is positive if it is done on the system and negative if it is done by the system.
The internal energy, U, of a system can be changed by heat flow and work. If heat flows into the system, qin, or work is done on the system, won, its internal energy increases, ΔU > 0. If heat flows out of the system, qout, or work is done by the system, wby, its internal energy decreases, ΔU < 0.
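The sign conventions are easy to get wrong, so here is a minimal sketch (with assumed heat and work values) of the bookkeeping that ΔU = q + w implies:

```python
# A minimal sketch of the first law's sign conventions.
def delta_U(q: float, w: float) -> float:
    """q > 0 means heat flows into the system; w > 0 means work is done on the system."""
    return q + w

# Wire dropped into hot water: absorbs heat, no work done on or by it.
print(delta_U(q=50.0, w=0.0))       # +50.0 -> internal energy (and temperature) rises
# Reacting gases in an engine cylinder: lose heat and do work on the piston.
print(delta_U(q=-150.0, w=-250.0))  # -400.0 -> internal energy falls
```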
A type of work called expansion work (or pressure-volume work) occurs when a system pushes back the surroundings against a restraining pressure, or when the surroundings compress the system. An example of this occurs during the operation of an internal combustion engine. The reaction of gasoline and oxygen is exothermic. Some of this energy is given off as heat, and some does work pushing the piston in the cylinder. The substances involved in the reaction are the system, and the engine and the rest of the universe are the surroundings. The system loses energy by both heating and doing work on the surroundings, and its internal energy decreases. (The engine is able to keep the car moving because this process is repeated many times per second while the engine is running.) We will consider how to determine the amount of work involved in a chemical or physical change in the chapter on thermodynamics.
This view of an internal combustion engine illustrates the conversion of energy produced by the exothermic combustion reaction of a fuel such as gasoline into energy of motion.
As discussed, the relationship between internal energy, heat, and work can be represented as ΔU = q + w. Internal energy is an example of a state function (or state variable), whereas heat and work are not state functions. The value of a state function depends only on the state that a system is in, and not on how that state is reached. If a quantity is not a state function, then its value does depend on how the state is reached. An example of a state function is altitude or elevation. If you stand on the summit of Mt. Kilimanjaro, you are at an altitude of 5895 m, and it does not matter whether you hiked there or parachuted there. The distance you traveled to the top of Kilimanjaro, however, is not a state function. You could climb to the summit by a direct route or by a more roundabout, circuitous path (Figure 34.11). The distances traveled would differ (distance is not a state function) but the elevation reached would be the same (altitude is a state function).
Paths X and Y represent two different routes to the summit of Mt. Kilimanjaro. Both have the same change in elevation (altitude or elevation on a mountain is a state function; it does not depend on path), but they have very different distances traveled (distance walked is not a state function; it depends on the path). (credit: modification of work by Paul Shaffner)
Link to Supplemental Exercises
Supplemental exercises are available if you would like more practice with these concepts. |
Because Jesus did not meet these requirements, and because the messianic age did not arrive, the Jewish view is that Jesus was merely a man, not the Messiah. Jesus of Nazareth was one of many Jews throughout history who either directly claimed to be the messiah or whose followers made the claim in their name.
He was born, lived, and died as a Jew.
Given the difficult social climate under Roman occupation and persecution during the era in which Jesus lived, it is not hard to understand why so many Jews longed for a time of peace and freedom. Bar Kochba claimed to be the Messiah and was even anointed by the prominent Rabbi Akiva, but after Bar Kochba died in the revolt, the Jews of his time rejected him as another false messiah, since he did not fulfill the requirements of the true Messiah.
The one other major false messiah arose in more modern times, during the 17th century. Shabbatai Tzvi was a kabbalist who claimed to be the long-awaited Messiah, but after he was imprisoned, he converted to Islam, as did hundreds of his followers, negating any claim he had to being the Messiah.
In fact, much of the daily effort of Jews in Jesus' time went into fulfilling minute details of the Law. Jewish life and culture in the first 70 years of the first century centered in the Second Temple, one of the many massive public works projects of Herod the Great. Crowds of people thronged in and out of the Temple every day, making ritual animal sacrifices to atone for particular sins, another common practice of the era.
Understanding the centrality of Temple worship to first-century Jewish life makes it more plausible that Jesus' family would have made a pilgrimage to the Temple to offer the prescribed animal sacrifice of thanksgiving for his birth, as described in Luke 2. It also would have been logical for Joseph and Mary to take their son to Jerusalem to celebrate Passover around the time of his rite of passage into religious adulthood when Jesus was 12, as described in Luke 2. It would have been important for a boy coming of age to understand the Jews' faith story of their liberation from slavery in Egypt and resettlement in Israel, the land they claimed that God promised to their ancestors.
Despite these common practices, the Roman Empire overshadowed the Jews' daily lives, whether sophisticated urban dwellers or country peasants, from 63 B.C. onward.
From 37 to 4 B.C., Herod the Great ruled Judea as Rome's client king. This occupation led to waves of revolt, often led by two of the sects mentioned by Josephus: the Zealots, who sought Jewish independence, and the Sicarii (pronounced "sic-ar-ee-eye"), an extremist Zealot group whose name, meaning assassin, comes from the Latin for "dagger" [sica].
Judaism's First Century Diversity
Everything about Roman occupation was hateful to the Jews, from oppressive taxes to physical abuse by Roman soldiers to the repugnant idea that the Roman leader was a god. Repeated efforts at gaining political independence ensued to no avail. Finally, first-century Jewish society was devastated in 70 A.D., when the Romans destroyed Jerusalem and the Second Temple.
The loss of their religious center crushed the spirits of first-century Jews, and their descendants have never forgotten it.
Cynthia B. Astle is an award-winning journalist who covered religion for 25 years. She has authored a number of books on faith and religion.
Stomach cancer, also called gastric cancer, is a cancer that starts in the stomach. To understand stomach cancer, it helps to know about the normal structure and function of the stomach. The most common cause is infection by the bacterium Helicobacter pylori, which accounts for more than 60% of cases. Certain strains of H. pylori carry greater risk than others. Other common causes include eating pickled vegetables and smoking. Gastric cancers due to smoking mostly occur in the upper part of the stomach, near the esophagus. Some studies show increased risk with alcohol consumption as well. Most of the time, stomach cancer develops through a number of stages over a number of years.
Air-compensating back-pressure regulator
A standard regulator is reset by manually turning the adjusting stem, which increases the spring pressure on top of the diaphragm. In an air compensated regulator, a change of pressure on top of the diaphragm is accomplished by introducing air pressure into the airtight bonnet over the diaphragm. As this air pressure is increased, the setting of the regulator will be increased. This will produce like changes of evaporator pressure and refrigerant temperature. The variations in air pressure are produced by the temperature changes of the thermostatic remote bulb placed in the stream of the medium being cooled as it leaves the evaporator.
Temperature changes in the medium being cooled over the remote bulb of the thermostat will cause the thermostat to produce air pressures in the regulator bonnet within a range of 0 to 15 lb. This will cause the regulator to change the evaporator suction pressure in a like amount. A more definite understanding of this operation is obtained by assuming certain working conditions for the purpose of illustration. In cases where a larger range of modulation is required, a three-to-one air relay may be installed. This will permit a 45-lb range of modulation (see Fig. 10-26).
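The action described here is proportional control. The sketch below illustrates the idea; every name, temperature, and pressure in it is an invented assumption, not a value from the text:

```python
# Hypothetical sketch of the proportional action described above.
def air_signal(bulb_temp, t_min=40.0, t_max=50.0):
    """Map the remote-bulb temperature linearly onto a 0-15 lb air signal."""
    frac = (bulb_temp - t_min) / (t_max - t_min)
    return 15.0 * min(max(frac, 0.0), 1.0)

def suction_setting(base_pressure, signal, relay_gain=1.0):
    """Shift the evaporator suction-pressure setting by the (relayed) signal.
    A three-to-one air relay (relay_gain=3.0) gives the 45-lb modulation range."""
    return base_pressure + relay_gain * signal

sig = air_signal(48.0)                              # -> 12.0 lb
print(suction_setting(30.0, sig))                   # -> 42.0 lb setting
print(suction_setting(30.0, sig, relay_gain=3.0))   # -> 66.0 lb with the relay
```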
Inside the Turner Lab
Publication date: Jun 17, 2013 3:08:59 PM
Edward Turner, the first professor of chemistry at UCL, was an important figure in the history of 19th century science. Appointed to the chair of chemistry soon after the college's foundation, he was also author of an important textbook of the day, his Elements of Chemistry. Turner was a highly skilled experimental chemist, and his meticulous work on atomic weights put him at the centre of a major controversy of the age.
At the time, debate raged about 'Prout's hypothesis', William Prout's theory that all elements' weights were multiples of hydrogen's. Turner's measurements played a major role in the hypothesis being disproved.
Prout's theory fell by the wayside, but it was not quite as wrong as it might have seemed to Turner. In a quirk of history, decades later, all elements were indeed proven to be multiples of something — only that something was not hydrogen. We now know that protons, electrons and neutrons in various combinations make up all the elements of the periodic table.
Today, Turner's legacy is remembered in a major teaching laboratory in UCL Department of Chemistry. The Turner Lab is the heart of UCL's undergraduate chemistry degrees.
In this photo, one of Turner's successors, Professor Andrea Sella, can be seen assisting a student with his work.
Photo credit: O. Usher (UCL MAPS)
High resolution images
This image can be reproduced freely providing the source is credited |
recycling, the process of recovering and reusing waste products—from household use, manufacturing, agriculture, and business—and thereby reducing their burden on the environment. During World War I and World War II, shortages of essential materials led to collection drives for silk, rubber, and other commodities. In recent years the environmental benefits of recycling have become a major component of waste management programs.
For many years direct recycling by producers of surplus and defective materials constituted the main form of recycling. However, indirect recycling, the recycling of materials after their use by consumers, has become the focus of activity in the 1990s. For some time, most solid waste has been deposited in landfills or dumps. Landfills are filling up, however, and disposal of wastes in them has led to environmental problems. Also, government (which had little authority over disposal of wastes until the 1970s) now has extensive regulatory power.
Gaius Julius Caesar
|For more information on Julius Caesar, see Julius Caesar on Wikipedia.|
“Just as my namesake campaigned in Gaul before he crossed the Rubicon, so have I campaigned, and will cross the Colorado.” — Caesar, Fallout: New Vegas
Gaius Julius Caesar (13 July 100 BC – 15 March 44 BC) known by his nomen and cognomen Julius Caesar, was a Roman politician, military general, and historian who played a critical role in the events that led to the demise of the Roman Republic and the rise of the Roman Empire.
Julius Caesar and his rise to power would be the inspiration for Edward Sallow to rename himself Caesar and form Caesar's Legion, using Julius Caesar's own Commentarii de Bello Gallico and Edward Gibbon's The History of the Decline and Fall of the Roman Empire.
Background[edit | edit source]
In 60 BC, Caesar, Crassus and Pompey formed the First Triumvirate, a political alliance that dominated Roman politics for several years. Their attempts to amass power as Populares were opposed by the Optimates within the Roman Senate, among them Cato the Younger with the frequent support of Cicero. Caesar rose to become one of the most powerful politicians in the Roman Republic through a number of his accomplishments, notably his victories in the Gallic Wars, completed by 51 BC. During this time, Caesar became the first Roman general to cross both the English Channel and the Rhine River, when he built a bridge across the Rhine and crossed the Channel to invade Britain. Caesar's wars extended Rome's territory to Britain and past Gaul. These achievements granted him unmatched military power and threatened to eclipse the standing of Pompey, who had realigned himself with the Senate after the death of Crassus in 53 BC. With the Gallic Wars concluded, the Senate ordered Caesar to step down from his military command and return to Rome. Leaving his command in Gaul meant losing his immunity from being charged as a criminal for waging unsanctioned wars. As a result, Caesar found himself with no other option but to cross the Rubicon with the 13th Legion, leaving his province and illegally entering Roman Italy under arms. This began Caesar's civil war, and his victory in the war put him in an unrivaled position of power and influence.
After assuming control of government, Caesar began a program of social and governmental reforms, including the creation of the Julian calendar. He gave citizenship to many residents of far regions of the Roman Empire. He initiated land reform and support for veterans. He centralized the bureaucracy of the Republic and was eventually proclaimed "dictator for life" (Latin: "dictator perpetuo"), giving him additional authority. His populist and authoritarian reforms angered the elites, who began to conspire against him. On the Ides of March (15 March), 44 BC, Caesar was assassinated by a group of rebellious senators led by Gaius Cassius Longinus, Marcus Junius Brutus and Decimus Junius Brutus, who stabbed him to death.
Appearances[edit | edit source]
Gaius Julius Caesar is mentioned only in Fallout: New Vegas. |
Karen Greene, one of the six people who lived in a simulated Mars camp on Hawaii’s Mauna Kea volcano for four months, says the first visitors to the Red Planet should be women. The “astronauts” lived as colonists on Mars, replicating all aspects of the real experience, from donning spacesuits every time they left the camp to exercising, eating, and conducting mock experiments. Studying the eating and sleeping habits of the six participants, Greene found that the men ate more than the women, yet had a harder time maintaining their weight than the women did.
“Even though all crew members got the same amount of exercise, the men would burn about 3,000 calories a day while the women burned approximately 2,000,” Greene said. Women typically have a smaller body mass, which gives them another advantage on a spacecraft. If an astronaut needs more food, the cost of a mission rises higher. It also increases the weight of the spacecraft, which in turn increases the amount of fuel required to send that spacecraft into orbit or interplanetary space.
“It is thus a vicious cycle; a higher amount of fuel requires a heavier rocket for launch, which then requires more additional fuel,” Greene said. NASA researcher Harry Jones, who has addressed these issues in a published paper, downplayed matters of body size and gender, saying astronaut selection should focus on “crew performance, including group dynamics, individual psychology, etc.”
“It’s not really politically correct to mention that size, body type, gender, intelligence, agility, emotional structure, education, and other individual differences might all affect the cost-benefit equation in astronaut selection,” Jones said.
Researching the issue of women in space, Greene discovered that NASA did train women astronauts during the 1960s but dismissed them from the program in spite of their successful performance. |
Graphic Method of Break-Even Analysis: The break-even point can also be computed graphically. If information as to total contribution at full capacity is available, the break-even point as a percentage of estimated capacity can be found as under: Break-even point (as a % of capacity) = (Fixed Costs ÷ Total Contribution at full capacity) × 100. In the same manner, variable cost per unit may also increase with the increase in level of production due to operating inefficiencies and the law of diminishing returns.
This tool fails to take into account the demand-side situation, since not all units produced are sold at the assumed price; this is unlikely to always be the case in practice. This would help them control costs, and make sure that they remain within a given range. Examples: Alison used a breakeven analysis to determine what prices she should set for her software products. Service and image compatibility with demographics of the customer-drawing area. Sometimes prices are not in the control of the business, since they depend on market conditions and other factors such as government regulation.
Simply put, break-even point can be determined by calculating the point at which revenue received equals the total costs associated with the production of the goods or services. The goal is to locate at a minimum-cost site, where cost is measured by the annual fixed plus variable costs of production. Examples of fixed costs: - Rent and rates - Depreciation - Research and development - Marketing costs non- revenue related - Administration costs Variable Costs Variable costs are those costs which vary directly with the level of output. The fewest factors are considered by the center-of-gravity method or transportation model, although this is examined more thoroughly in Module C. This type of division often applies to government organizations and insurance companies. This concept is used when a major proportion of sales are likely to decline or in period of recession or economic turn down. Curvilinear Break-Even Analysis Two Break-Even Points : The marginal costing approach is based upon the basic assumption that selling price and variable cost per unit will remain constant at all levels of activity or in other words the cost-volume-profit relationship is linear.
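A short sketch of that calculation, with illustrative figures (the break-even point is fixed costs divided by the per-unit contribution margin):

```python
# Break-even sketch: the point at which total revenue equals total cost.
fixed_costs = 20_000.0          # rent, depreciation, administration, etc.
price_per_unit = 25.0
variable_cost_per_unit = 15.0   # materials, direct labour, commission, etc.

contribution_per_unit = price_per_unit - variable_cost_per_unit  # 10.0
break_even_units = fixed_costs / contribution_per_unit           # 2000 units
break_even_sales = break_even_units * price_per_unit             # 50,000 in revenue

print(break_even_units, break_even_sales)
```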
The total revenue and total cost lines are linear (straight) lines, since prices and variable costs are assumed to be constant per unit. The model is well suited where all volumes are reasonably static from shipment to shipment and there is one transportation method available to all sites. This calls for a practical example to appreciate the relevance of the concept. The profit-volume graph may be preferred to a break-even chart because profits or losses can be directly read at different levels of activity. However, in actual practice, the selling prices do not remain the same forever and for all levels of output, due to competition and changes in the general price level, etc.
Assumptions of Break-Even Analysis: Under this method the total cost line is not drawn; rather, another line, called the contribution line, is drawn from the origin, and this line goes up with the increase in the level of output. It will receive inbound shipments from several suppliers, including one in Ghaziabad. Customers will travel from the seven census tract centers to the new facility when they need health care. The contribution margin ratio is determined by dividing the contribution margin by total sales. You can find the contribution margin by subtracting the cost of production per unit from the sale price per unit. Budgeting and Setting Targets: Break-even charts and calculations can be used in the budgeting process, since the business then knows exactly how many units need to be sold in order to break even.
The site with the highest weighted score is selected as the best choice. Labor cost is twice as important as utility cost, which is in turn twice as important as climate. Thus, profit can be increased only up to a certain point, and then it will decrease until it is converted into a loss. What are the major factors that firms consider when choosing a country and then a region in which to locate? The sales line (in volume or value) is drawn on the horizontal (x) axis. This is so because break-even analysis is the most widely known form of cost-volume-profit analysis. They represent payments for output-related inputs such as raw materials, direct labour, and fuel, and revenue-related costs such as commission. Break-even analysis is concerned with finding the point at which revenues and costs agree exactly.
Question 8: Bremend Ltd manufactures a computer stand. Using the diagrammatical method, the break-even point can be determined by pinpointing where the total revenue and total cost lines intersect. Usually, the angle of incidence and margin of safety are considered together to indicate the soundness of a business. Profits and losses are given on the vertical (y) axis.
It cannot be said with any certainty which set of factors has the greatest impact on productivity, but it is reasonable to assume that both tangible and intangible costs factor into the firm's productivity. Question 6: Provision plc makes luxury hampers for sale in a chain of high-class department stores. Understanding how to calculate the break-even point helps you determine how much you need to sell each unit for on the market. However, for the load-distance method, a rough calculation using either the Euclidean or the rectilinear distance measure may be used. Quality of the competition. Euclidean distance is the straight-line distance, or shortest possible path, between two points.
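The two distance measures can be stated compactly; the coordinates below are invented for illustration:

```python
# Sketch of the two distance measures named above.
from math import hypot

def euclidean(a, b):
    """Straight-line distance: the shortest possible path between two points."""
    return hypot(a[0] - b[0], a[1] - b[1])

def rectilinear(a, b):
    """Distance measured along axes at right angles, as along city blocks."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

site, tract = (2.5, 4.5), (5.5, 0.5)
print(euclidean(site, tract))    # 5.0
print(rectilinear(site, tract))  # 7.0
```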
Details of seven census tract centers, their coordinates, and the population for each center are given below. Question 4: The Sherston Brick Company manufactures a standard stone block for the building industry. The variable costs for different levels of activity are plotted over the fixed cost line. He buys from local farms and packages these together in boxes and delivers them locally. Since fixed and variable costs directly affect the profitability of the business, managers can easily track these changes through break-even analysis. Comparisons can be made between different options by constructing new charts to show changed circumstances. The Index is calculated from up to 13 different individual scores.
Understanding Mortgage Payments
In this Spreadsheets Across the Curriculum activity, students calculate the payment of a mortgage and create an amortization table. As part of the problem-solving activity, students build a series of spreadsheets which calculate the mortgage and applicable payment. The activity focuses on calculating and understanding the allocation of each payment to principal and interest. Charting is also used to graphically illustrate the payment mix.
Quantitative Goals: In the process of solving the problem and building the spreadsheets, students will increase their knowledge of arithmetic growth, percentages, forward modeling and visual display of data.
Spreadsheet Goals: Students will build spreadsheets to work through the step-by-step calculations. In the process of building their spreadsheets, students will learn to organize their thinking about a calculation. They will learn to pay attention and be careful when they work with equations (watching for parentheses, for example). They will learn that their first attempts are not necessarily correct (when their cell equations do not result in the same numbers as shown in the module), but that Excel will indeed produce the same answer as the module when they get the mistakes out of the cell equations – that Excel does exactly what they tell it to do, which is not necessarily what they think they are telling it to do.
Finance (Content) Goals: Students will work with the concepts that:
- a mortgage payment consists of two parts: principal and interest
- both the interest rate and the length of the mortgage are very important to the total amount paid
- In the process of this work, students will increase their know-how of how mortgage payments and equity are calculated.
Context for Use
I use this module in my Personal Finance course. The course is aimed at nonbusiness majors who anticipate graduating in three or fewer semesters.
This module, the first of the course, comes in the sixth week of the semester. The preceding session is a lecture which introduces Excel and mortgages. The Mortgage Module problem-solving session happens in a lecture room equipped with computer and projector. I start the session by posing the question: "How are mortgage payments calculated?" The students brainstorm together as one large group that I facilitate.
The students soon decide that they need to know the cost of the house, the downpayment, and the applicable financing (interest rate and length of the loan). We research local banks on the web for current mortgage interest rates. We discuss the basic calculation using paper and a calculator. We then move on to using the first spreadsheet (Slide 4) and discuss it before the end of the 75-minute session. They leave the session with a basic understanding and will return in two days to the computer lab to work through the module. The students work through the module during class time so they can ask for help with Excel as needed. The end-of-module assignments will be due the following class period via e-mail.
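For reference, here is a minimal sketch of the payment formula and the interest/principal split the spreadsheets implement (the loan figures are illustrative assumptions, not the module's own numbers):

```python
# Level-payment mortgage formula and one amortization step.
def monthly_payment(principal, annual_rate, years):
    """M = P*r*(1+r)**n / ((1+r)**n - 1), where r is the monthly rate."""
    r = annual_rate / 12
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

def split_payment(balance, annual_rate, payment):
    """Allocate one payment between interest and principal."""
    interest = balance * annual_rate / 12
    return interest, payment - interest

pmt = monthly_payment(200_000, 0.06, 30)              # about 1199.10 per month
interest, principal_part = split_payment(200_000, 0.06, pmt)
print(round(pmt, 2), round(interest, 2), round(principal_part, 2))
# Early payments are mostly interest (1000.00 vs. 199.10 here), which is the
# point the amortization table makes visible.
```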
Description and Teaching Materials
PowerPoint SSAC2006.HF5691.JM1.1_Student (PowerPoint 129kB Jul31 07)
The module is a PowerPoint presentation with embedded spreadsheets. The PowerPoint includes links to current interest rate information.
If the embedded spreadsheets are not visible, save the PowerPoint file to disk and open it from there.
An instructor version is available by request. The instructor version includes the completed spreadsheet. Send your request to Len Vacher ([email protected]) by filling out and submitting the Instructor Module Request Form.
Teaching Notes and Tips
The module is a computer-based activity. It is helpful to remind the students that it is still useful to have a pad of paper and a pencil at the ready to organize their thinking.
Remind the students of the first rule of building spreadsheets: save your work. Early and often.
End-of-module assignments can be used as a post-test.
The instructor version includes a pre-test. |
What are Fossils? Fossils are the remains or traces of ancient life. Scientists have learned a lot about ancient species, such as dinosaurs, by studying these preserved remains. Find out more about fossils here.
Tools of the Trade
Fossils of trilobites show that change in the species on earth has occurred over time. Scientists have studied these fossilized trilobites and now call them extinct relatives of animals that you know today. Trilobites are ancient relatives of lobsters, insects, and crabs.
Brainstorming is a group or individual creativity technique by which efforts are made to find a conclusion for a specific problem by gathering a list of ideas spontaneously contributed by its member(s). The term was popularized by Alex Faickney Osborn in the 1953 book Applied Imagination. Osborn claimed that brainstorming was more effective than individuals working alone in generating ideas, although more recent research has questioned this conclusion. Today, the term is used as a catch-all for all group ideation sessions.
Advertising executive Alex F. Osborn began developing methods for creative problem solving in 1939. He was frustrated by employees’ inability to develop creative ideas individually for ad campaigns. In response, he began hosting group-thinking sessions and discovered a significant improvement in the quality and quantity of ideas produced by employees. Osborn outlined the method in his 1948 book 'Your Creative Power' in chapter 33, “How to Organize a Squad to Create Ideas.”
Osborn claimed that two principles contribute to "ideative efficacy," these being:
- Defer judgment,
- Reach for quantity.
Following these two principles were his four general rules of brainstorming, established with the intention to:
- reduce social inhibitions among group members,
- stimulate idea generation
- increase overall creativity of the group.
- Focus on quantity: This rule is a means of enhancing divergent production, aiming to facilitate problem solving through the maxim quantity breeds quality. The assumption is that the greater the number of ideas generated, the greater the chance of producing a radical and effective solution.
- Withhold criticism: In brainstorming, criticism of ideas generated should be put 'on hold'. Instead, participants should focus on extending or adding to ideas, reserving criticism for a later 'critical stage' of the process. By suspending judgment, participants will feel free to generate unusual ideas.
- Welcome unusual ideas: To get a good and long list of ideas, unusual ideas are welcomed. They can be generated by looking from new perspectives and suspending assumptions. These new ways of thinking may provide better solutions.
- Combine and improve ideas: Good ideas may be combined to form a single better good idea, as suggested by the slogan "1+1=3". It is believed to stimulate the building of ideas by a process of association.
Osborn notes that brainstorming should address a specific question; he held that sessions addressing multiple questions were inefficient.
Further, the problem must require the generation of ideas rather than judgment; he uses examples such as generating possible names for a product as proper brainstorming material, whereas analytical judgments such as whether or not to marry do not have any need for brainstorming.
Osborn envisioned groups of around 12 participants, including both experts and novices. Participants are encouraged to provide wild and unexpected answers. Ideas receive no criticism or discussion. The group simply provides ideas that might lead to a solution and apply no analytical judgment as to the feasibility. The judgments are reserved for a later date.
Nominal group technique
Participants are asked to write their ideas anonymously. Then the facilitator collects the ideas and the group votes on each idea. The vote can be as simple as a show of hands in favor of a given idea. This process is called distillation.
After distillation, the top ranked ideas may be sent back to the group or to subgroups for further brainstorming. For example, one group may work on the color required in a product. Another group may work on the size, and so forth. Each group will come back to the whole group for ranking the listed ideas. Sometimes ideas that were previously dropped may be brought forward again once the group has re-evaluated the ideas.
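As a toy illustration of the distillation step (idea names and vote counts are invented):

```python
# Tally show-of-hands votes on anonymously collected ideas, then pick the
# top-ranked ones to send back to the group or subgroups.
votes = {
    "new color scheme": 5,
    "smaller size": 3,
    "cheaper packaging": 1,
}

top_ranked = sorted(votes, key=votes.get, reverse=True)[:2]
print(top_ranked)   # ['new color scheme', 'smaller size'] go back for more work
```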
It is important that the facilitator be trained in this process before attempting to facilitate this technique. The group should be primed and encouraged to embrace the process. Like all team efforts, it may take a few practice sessions to train the team in the method before tackling the important ideas.
Group passing technique
Each person in a circular group writes down one idea, and then passes the piece of paper to the next person, who adds some thoughts. This continues until everybody gets his or her original piece of paper back. By this time, it is likely that the group will have extensively elaborated on each idea.
The group may also create an "idea book" and post a distribution list or routing slip to the front of the book. On the first page is a description of the problem. The first person to receive the book lists his or her ideas and then routes the book to the next person on the distribution list. The second person can log new ideas or add to the ideas of the previous person. This continues until the distribution list is exhausted. A follow-up "read out" meeting is then held to discuss the ideas logged in the book. This technique takes longer, but it allows individuals time to think deeply about the problem.
Team idea mapping method
This method of brainstorming works by the method of association. It may improve collaboration and increase the quantity of ideas, and is designed so that all attendees participate and no ideas are rejected.
The process begins with a well-defined topic. Each participant brainstorms individually, then all the ideas are merged onto one large idea map. During this consolidation phase, participants may discover a common understanding of the issues as they share the meanings behind their ideas. During this sharing, new ideas may arise by the association, and they are added to the map as well. Once all the ideas are captured, the group can prioritize and/or take action.
Breaking the rules technique
In this method, participants list the formal or informal rules that govern a particular process. Participants then try to develop alternative methods to bypass or counter these established protocols.
Directed brainstorming is a variation of electronic brainstorming (described below). It can be done manually or with computers. Directed brainstorming works when the solution space (that is, the set of criteria for evaluating a good idea) is known prior to the session. If known, those criteria can be used to constrain the ideation process intentionally.
In directed brainstorming, each participant is given one sheet of paper (or electronic form) and told the brainstorming question. They are asked to produce one response and stop, then all of the papers (or forms) are randomly swapped among the participants. The participants are asked to look at the idea they received and to create a new idea that improves on that idea based on the initial criteria. The forms are then swapped again and respondents are asked to improve upon the ideas, and the process is repeated for three or more rounds.
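A toy simulation of these swap-and-improve rounds might look like the following; the "improvement" step is only a placeholder string:

```python
# Simulate the directed-brainstorming protocol: one idea per participant,
# then repeated random swaps with an improvement pass each round.
import random

def directed_brainstorm(participants, rounds=3, seed=1):
    random.seed(seed)
    sheets = [f"idea from {p}" for p in participants]   # one response each, then stop
    for r in range(1, rounds + 1):
        random.shuffle(sheets)                          # papers randomly swapped
        sheets = [f"{s}; round-{r} improvement" for s in sheets]
    return sheets

for sheet in directed_brainstorm(["Ana", "Ben", "Chloe"]):
    print(sheet)
```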
In the laboratory, directed brainstorming has been found to almost triple the productivity of groups over electronic brainstorming.
A guided brainstorming session is time set aside to brainstorm either individually or as a collective group about a particular subject under the constraints of perspective and time. This type of brainstorming removes all cause for conflict and constrains conversations while stimulating critical and creative thinking in an engaging, balanced environment.
Participants are asked to adopt different mindsets for a pre-defined period of time while contributing their ideas to a central mind map drawn by a pre-appointed scribe. Having examined the problem from multiple perspectives, participants seemingly see the simple solutions that collectively create greater growth. Action is assigned individually.
Following a guided brainstorming session, participants emerge with ideas ranked for further brainstorming or research, a list of questions that remain unanswered, and a prioritized, assigned, actionable list that leaves everyone with a clear understanding of what needs to happen next and the ability to visualize the combined future focus and greater goals of the group.
"Individual brainstorming" is the use of brainstorming in solitary. It typically includes such techniques as free writing, free speaking, word association, and drawing a mind map, which is a visual note taking technique in which people diagram their thoughts. Individual brainstorming is a useful method in creative writing and has been shown to be superior to traditional group brainstorming.
This process involves brainstorming the questions, rather than trying to come up with immediate answers and short-term solutions. Theoretically, this technique should not inhibit participation, as there is no need to provide solutions. The answers to the questions form the framework for constructing future action plans. Once the list of questions is set, it may be necessary to prioritize them to reach the best solution in an orderly way.
"Questorming" is another term for this mode of inquiry.
Although the brainstorming can take place online through commonly available technologies such as email or interactive web sites, there have also been many efforts to develop customized computer software that can either replace or enhance one or more manual elements of the brainstorming process.
Early efforts, such as GroupSystems at University of Arizona or Software Aided Meeting Management (SAMM) system at the University of Minnesota, took advantage of then-new computer networking technology, which was installed in rooms dedicated to computer supported meetings. When using these electronic meeting systems (EMS, as they came to be called), group members simultaneously and independently entered ideas into a computer terminal. The software collects (or "pools") the ideas into a list, which could be displayed on a central projection screen (anonymized if desired). Other elements of these EMSs could support additional activities such as categorization of ideas, elimination of duplicates, assessment and discussion of prioritized or controversial ideas. Later EMSs capitalized on advances in computer networking and internet protocols to support asynchronous brainstorming sessions over extended periods of time and in multiple locations.
Proponents such as Gallupe et al. argue that electronic brainstorming eliminates many of the problems of standard brainstorming, including production blocking (i.e. group members must take turns to express their ideas) and evaluation apprehension (i.e. fear of being judged by others). This positive effect increases with larger groups. A perceived advantage of this format is that all ideas can be archived electronically in their original form, and then retrieved later for further thought and discussion. Electronic brainstorming also enables much larger groups to brainstorm on a topic than would normally be productive in a traditional brainstorming session.
Computer supported brainstorming may overcome some of the challenges faced by traditional brainstorming methods. For example, ideas might be "pooled" automatically, so that individuals do not need to wait to take a turn, as in verbal brainstorming. Some software programs show all ideas as they are generated (via chat room or e-mail). The display of ideas may cognitively stimulate brainstormers, as their attention is kept on the flow of ideas being generated without the potential distraction of social cues such as facial expressions and verbal language. Electronic brainstorming techniques have been shown to produce more ideas and help individuals focus their attention on the ideas of others better than a brainwriting technique (participants write individual written notes in silence and then subsequently communicate them with the group). The production of more ideas has been linked to the fact that paying attention to others’ ideas leads to non-redundancy, as brainstormers try to avoid replicating or repeating another participant’s comment or idea.
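A simplified sketch of pooling with duplicate elimination (a toy model, not the actual EMS software described above):

```python
# Merge ideas from every terminal into one anonymized, de-duplicated list.
def pool_ideas(streams):
    seen, pooled = set(), []
    for stream in streams:              # one stream per participant
        for idea in stream:
            key = idea.strip().lower()
            if key not in seen:         # keep only the first occurrence
                seen.add(key)
                pooled.append(idea)
    return pooled

print(pool_ideas([["bigger screen", "solar charging"],
                  ["Solar charging", "cheaper case"]]))
# ['bigger screen', 'solar charging', 'cheaper case']
```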
Some web-based brainstorming techniques allow contributors to post their comments anonymously through the use of avatars. This technique also allows users to log on over an extended time period, typically one or two weeks, to allow participants some "soak time" before posting their ideas and feedback. This technique has been used particularly in the field of new product development, but can be applied in any number of areas requiring collection and evaluation of ideas.
Some limitations of EBS include the fact that it can flood people with too many ideas at one time that they have to attend to, and people may also compare their performance to others by analyzing how many ideas each individual produces (social matching).
Some research indicates that incentives can augment creative processes. Participants were divided into three conditions. In Condition I, a flat fee was paid to all participants. In the Condition II, participants were awarded points for every unique idea of their own, and subjects were paid for the points that they earned. In Condition III, subjects were paid based on the impact that their idea had on the group; this was measured by counting the number of group ideas derived from the specific subject's ideas. Condition III outperformed Condition II, and Condition II outperformed Condition I at a statistically significant level for most measures. The results demonstrated that participants were willing to work far longer to achieve unique results in the expectation of compensation.
Challenges to effective group brainstorming
A good deal of research refutes Osborn's claim that group brainstorming could generate more ideas than individuals working alone. For example, in a review of 22 studies of group brainstorming, Michael Diehl and Wolfgang Stroebe found that, overwhelmingly, groups brainstorming together produce fewer ideas than individuals working separately.
Several factors can contribute to a loss of effectiveness in group brainstorming.
Blocking: Because only one participant may give an idea at any one time, other participants might forget the idea they were going to contribute or not share it because they see it as no longer important or relevant. Further, if we view brainstorming as a cognitive process in which "a participant generates ideas (generation process) and stores them in short-term memory (memorization process) and then eventually extracts some of them from its short-term memory to express them (output process)", then blocking is an even more critical challenge because it may also inhibit a person's train of thought in generating their own ideas and remembering them.
Collaborative fixation: Exchanging ideas in a group may reduce the number of domains that a group explores for additional ideas. Members may also conform their ideas to those of other members, decreasing the novelty or variety of ideas, even though the overall number of ideas might not decrease.
Evaluation apprehension: Evaluation apprehension was determined to occur only in instances of personal evaluation. If the assumption of collective assessment were in place, real-time judgment of ideas, ostensibly an induction of evaluation apprehension, failed to induce significant variance.
Free-riding: Individuals may feel that their ideas are less valuable when combined with the ideas of the group at large. Indeed, Diehl and Stroebe demonstrated that even when individuals worked alone, they produced fewer ideas if told that their output would be judged in a group with others than if told that their output would be judged individually. However, experimentation revealed free riding as only a marginal contributor to productivity loss, and type of session (i.e., real vs. nominal group) contributed much more.
Personality characteristics: Extraverts have been shown to outperform introverts in computer mediated groups. Extraverts also generated more unique and diverse ideas than introverts when additional methods were used to stimulate idea generation, such as completing a small related task before brainstorming, or being given a list of the classic rules of brainstorming.
Social matching: One phenomenon of group brainstorming is that participants will tend to alter their rate of productivity to match others in the group. This can lead to participants generating fewer ideas in a group setting than they would individually because they will decrease their own contributions if they perceive themselves to be more productive than the group average. On the other hand, the same phenomenon can also increase an individual's rate of production to meet the group average.
- 6-3-5 Brainwriting
- Affinity diagram
- Group concept mapping
- Eureka effect
- Free writing
- Lateral thinking
- Mass collaboration
- Mind map
- Nominal group technique
- Speed thinking
- Thinking outside the box
- What? Where? When?
- Michael Diehl; Wolfgang Stroebe (1991). "Productivity Loss in Idea-Generating Groups: Tracking Down the Blocking Effect". Journal of Personality and Social Psychology 61 (3): 392–403. doi:10.1037/0022-3514.61.3.392.
- Lehrer, Jonah. "Groupthink". The New Yorker. Retrieved October 2013.
- Osborn, A.F. (1963) Applied imagination: Principles and procedures of creative problem solving (Third Revised Edition). New York, NY: Charles Scribner’s Sons.
- "What is Mind Mapping? (and How to Get Started Immediately)". Litemind.com. 2007-08-07. Retrieved 2012-11-24.
- Paul E. Plesk (2014-03-26). "Using Tools for Idea Generation". Retrieved 2014-03-31.
- Santanen, E., Briggs, R. O., & de Vreede, G-J. (2004). Causal Relationships in Creative Problem Solving: Comparing Facilitation Interventions for Ideation. Journal of Management Information Systems. 20(4), 167-198.
- Furnham, A., & Yazdanpanahi, T. (1995). Personality differences and group versus individual brainstorming. Personality and Individual Differences, 19, 73-80.
- Ludy, Perry J. Profit Building: Cutting Costs Without Cutting People. San Francisco: Berret-Koehler, Inc, 2000. Print.
- Questorming: An Outline of the Method, Jon Roland, 1985
- Nunamaker, Jay; Dennis, Alan; Valacich, Joseph; Vogel, Doug; George, Joey (1991). "Electronic Meeting Systems to Support Group Work". Communications of the ACM 34 (7): 40–61. doi:10.1145/105783.105793.
- DeSanctis, Gerardine; Poole, M.S.; Zigurs, I.; and Associates (2008). "The Minnesota GDSS research project: Group support systems, group processes, and outcomes". Journal of the Association for Information Systems 9 (10): 551–608.
- Michinov, N. (2012). Is Electronic Brainstorming the Best Way to Improve Creative Performance in Groups? An Overlooked Comparison of Two Idea-Generation Techniques. Journal of Applied Social Psychology, 42, E222 – E243.
- Gallupe, R. B., Dennis, A. R., Cooper, W. H., Valacich, J. S., Bastianutti, L. M. and Nunamaker, J. F. (1992), "Electronic Brainstorming and Group Size," Academy of Management Journal, Vol. 35, No. 2, pp. 350-369.
- Toubia, Olivier. "Idea Generation, Creativity, and Incentives". Marketing Science. Retrieved 28 April 2011.
- Michael Diehl; Wolfgang Stroebe (1987). "Productivity Loss in Brainstorming Groups: Toward the Solution of a Riddle". Journal of Personality and Social Psychology 53 (3): 497–509. doi:10.1037/0022-3514.53.3.497.
- Lamm, Helmut; Trommsdorff, Gisela (1973). "Group versus individual performance on tasks requiring ideational proficiency (brainstorming): A review". European Journal of Social Psychology 3 (4): 361–388. doi:10.1002/ejsp.2420030402.
- Haddou, H.A.; G. Camilleri; P. Zarate (2014). "Prediction of ideas number during a brainstorming session". Group Decision and Negotiation 23 (2): 285. doi:10.1007/s10726-012-9312-8.
- Kohn, Nicholas; Smith, Steven M. (2011). "Collaborative fixation: Effects of others' ideas on brainstorming". Applied Cognitive Psychology 25 (3): 359–371. doi:10.1002/acp.1699.
- Henningsen, David Dryden; Henningsen, Mary Lynn Miller (2013). "Generating Ideas About the Uses of Brainstorming: Reconsidering the Losses and Gains of Brainstorming Groups Relative to Nominal Groups". Southern Communication Journal 78 (1): 42–55. doi:10.1080/1041794X.2012.717684.
- Brown, V.; Paulus, P. B. (1996). "A simple dynamic model of social factors in group brainstorming". Small Group Research 27: 91–114. doi:10.1177/1046496496271005. |
Hot Air Balloon Physics
Photo credit: Beverly & Pack
The basic principle behind hot air balloon physics is the use of hot air to create buoyancy, which generates lift. A hot air balloon consists of a large bag, called an envelope, with a gondola or wicker basket suspended underneath. A burner (with power typically of several megawatts) sits in the basket and is used to heat the air inside the envelope through an opening. This heated air generates lift by way of a buoyant force. The figure below shows a typical burner.
Photo credit: rubberduckee
The hot air inside the envelope is less dense than the surrounding (cooler) air. This difference in density causes the hot air balloon to be lifted off the ground due to the buoyant force created by the surrounding air. The principle behind this lift is called Archimedes' principle, which states that any object (regardless of its shape) that is suspended in a fluid, is acted upon by an upward buoyant force equal to the weight of the fluid displaced by the object. So an object floating in water stays buoyant using the same principle as a hot air balloon. The figure below illustrates Archimedes' principle for an object completely submerged in a fluid (such as water, or air).
As shown in the figure above, the center of buoyancy acts through point C, which is the centroid of the volume V of the object. This volume is equal to the displaced volume of the fluid. The upward buoyant force FB is equal to the weight of the displaced volume of fluid V. For the object to remain in an unconditionally stable orientation (i.e., not rotate), the center of mass of the object G must be directly below point C. This means that if the object were to be rotated by any amount, it will automatically rotate back to the original position where point G lies directly below point C. This is what is meant by unconditional stability.
For a hot air balloon, the upward buoyant force acting on it is equal to the weight (or mass) of the cooler surrounding air displaced by the hot air balloon. Since the air inside the envelope is heated it is less dense than the surrounding air, which means that the buoyant force due to the cooler surrounding air is greater than the weight of the heated air inside the envelope. And for lift to be generated, this buoyant force must exceed the weight of the heated air, plus the weight of the envelope, plus the weight of the gondola, plus the weight of passengers and equipment on board. As a result, the hot air balloon will experience sufficient buoyant force to completely lift off the ground.
As shown in the figure below, the weight of the hot air balloon is more concentrated near the bottom of the balloon (at the location of passengers and equipment), so the center of mass G of the hot air balloon is always below the center of buoyancy C. Therefore, the balloon is always stable during flight (i.e., it will always remain in the upright position).
Hot Air Balloon Physics — Operation
If the balloon operator wishes to lower the hot air balloon, he can either stop firing the burner, which causes the hot air in the envelope to cool (decreasing the buoyant force), or he opens a small vent at the top of the balloon envelope (via a control line). This releases some of the hot air, which decreases the buoyant force, which also causes the balloon to descend.
To maintain a steady altitude, the balloon operator intermittently fires and turns off the burner once he reaches the approximate altitude he wants. This causes the balloon to ascend and descend (respectively). This is the only way he can maintain an approximately constant altitude, since maintaining a strictly constant altitude by way of maintaining a net zero buoyant force (on the balloon) is practically impossible.
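This intermittent firing is essentially on-off control around a target altitude. A toy decision rule, with invented numbers:

```python
# Toy on-off rule for holding altitude near a target (all numbers invented).
def burner_command(altitude, target, band=10.0):
    if altitude < target - band:
        return "fire burner"   # heat the air -> more buoyancy -> ascend
    if altitude > target + band:
        return "open vent"     # release hot air -> less buoyancy -> descend
    return "hold"              # coast inside the dead band

for alt in (480.0, 500.0, 515.0):
    print(alt, burner_command(alt, target=500.0))
```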
If the balloon operator wishes to move the balloon sideways (in a horizontal direction) he must know, ahead of time, the wind direction, which varies with altitude. So he simply raises or lowers the hot air balloon to the altitude corresponding to the wind direction he wants, which is the direction he wants the balloon to go.
The balloon stays inflated because the heated air inside the envelope creates a pressure greater than the surrounding air. However, since the envelope has an opening at the bottom (above the location of the burner), the expanding hot air is allowed to escape, preventing a large pressure differential from developing. This means that the pressure of the heated air inside the balloon ends up being only slightly greater than the cooler surrounding air pressure.
An efficient hot air balloon is one that minimizes the weight of the balloon components, such as the envelope, and on board equipment (such as the burner and propane fuel tanks). This in turn minimizes the required temperature of the air inside the envelope needed to generate sufficient buoyant force to generate lift. Minimizing the required air temperature means that you minimize the burner energy needed, thereby reducing fuel use.
Hot Air Balloon Physics — Analysis
Let's examine the physics of a hot air balloon using a sample calculation.
The heated air inside the envelope is at roughly the same pressure as the outside air. With this in mind we can calculate the density of the heated air at a given temperature, using the Ideal gas law, as follows:

P = ρRT

where P is the absolute pressure of the gas (in Pa), ρ is the density of the gas (in kg/m³), R is the gas constant (in Joules/kg.K), and T is the absolute temperature of the gas (in Kelvins, K).

Normal atmospheric pressure is approximately 101,300 Pa. The gas constant for dry air is 287 Joules/kg.K. The air inside the envelope is typically heated to an average temperature of about 100 degrees Celsius, which is 373 K.

Substituting the above three values into the Ideal gas law equation and solving for ρ, we get ρ = 0.946 kg/m³. This is the density of the heated air inside the envelope. Compare this to normal (ambient) air density, which is approximately 1.2 kg/m³.
Next, for an average size balloon with an envelope volume of 2800 m³, we wish to determine the net upward buoyant force generated by the envelope. The net buoyant force (expressed here as a mass) is defined as the difference in density between the surrounding air and the heated air, multiplied by the envelope volume. Thus,

Net lift = (1.2 − 0.946) × 2800 = 711 kg (1565 lb)

This is the net buoyant force pushing upwards on the heated air inside the envelope. The hot air balloon components (such as envelope, gondola, burner, fuel tanks, and passengers) can at most weigh 711 kg in order for the buoyant force to be able to completely lift the hot air balloon off the ground.
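The worked numbers can be checked with a short script (mirroring the calculation above):

```python
# Check of the worked ideal-gas-law numbers above.
P = 101_300.0    # ambient pressure, Pa
R = 287.0        # gas constant for dry air, J/(kg.K)
T_hot = 373.0    # average envelope temperature, K (100 degrees Celsius)

rho_hot = P / (R * T_hot)                       # ~0.946 kg/m^3
rho_ambient = 1.2                               # cooler surrounding air, kg/m^3
V = 2800.0                                      # envelope volume, m^3

net_lift_mass = (rho_ambient - rho_hot) * V     # ~710 kg of total liftable weight
print(round(rho_hot, 3), round(net_lift_mass))  # 0.946 710
# The text's 711 kg comes from rounding the density to 0.946 before multiplying.
```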
It's worth mentioning that hot air balloons are always quite large. This is because the envelope must contain a large enough volume of heated (lower density) air in order to generate sufficient upward buoyant force.
The Tasmanian devil, Sarcophilus harrisii, faces extinction within the next 25 years because of a clonally transmissible cancer called devil facial tumor disease (DFTD). The deadly parasitic cancer is transmitted between the Australian marsupials by cell implantation that occurs during biting each other in their social interactions. Since the disease emerged 15 years ago, the population has decreased by about 70%.
In a paper published in Proceedings of the Royal Society, researchers from the University of Sydney described the epigenetic changes and underlying mechanisms in cells infected with DFTD.
“Nobody has looked at the role of epigenetics in this devastating disease,” said Beata Ujvari, post-doctoral research fellow at the University of Sydney. “And although the tumor or this cancer cell line appears to be stable on the genomic, genetic, and stereotypic levels, it’s actually quite polymorphic on the epigenomic level which we were quite surprised by.”
Changes on the epigenetic level, driven by external or environmental factors, can affect whether a gene is expressed without altering the DNA sequence. The results show that epigenetic markers in devil tumor cells are actually removed over time, turning the expression of certain genes off. Because DFTD is passed from animal to animal, it is exposed to many different environments. This allows selection to act on different tumor variants to push the cancer's evolution forward.
To understand the potential epigenetic changes in DFTD cells, the researchers measured the tumor cells’ DNA methylation, which drives epigenetic processes and is essential for the regulation of gene expression and genomic stability.
To do this, the team measured genetic and epigenetic variations between tumor and other tissue samples collected over a six-year span using two molecular techniques: amplified fragment length polymorphism (AFLP) to detect genetic variation and methylation-specific AFLP (metAFLP) to detect methylation variation.
Using these techniques, the group found that while the tumor samples were identical on the genetic level, they were highly variable and polymorphic on the epigenetic level. In addition, the researchers found a significant increase in hypomethylation over time, a progressive loss of methylation marks that can alter the expression of genes, including tumor-suppressor genes.
Ujvari believes that by understanding how cancer evolves in DFTD, researchers may have a better way to begin studying how cancer evolution functions in humans. "No human cancer studies so far have attempted to follow or study the effect of epigenetic changes over time," said Ujvari.
But for now, Ujvari and her colleagues would like to find and target the regions or exact genes that are involved in actively losing methylation in order to find potential answers to the problem.
“Unfortunately the technique metAFLP did not allow us to specify the exact gene regions which were differentially methylated over time; therefore we aim to use bisulfite sequencing of tumour samples to identify the regions/genes with different methylation patterns,” explained Ujvari.
1. Ujvari, B., A.-M. Pearse, S. Peck, C. Harmsen, R. Taylor, S. Pyecroft, T. Madsen, A. T. Papenfuss, and K. Belov. 2012. Evolution of a contagious cancer: epigenetic variation in devil facial tumour disease. Proceedings of the Royal Society B: Biological Sciences (November).
The Southwest is the hottest and driest region in the United States, where the availability of water has defined its landscapes, history of human settlement, and modern economy. Climate changes pose challenges for an already parched region that is expected to get hotter and, in its southern half, significantly drier. Increased heat and changes to rain and snowpack will send ripple effects throughout the region’s critical agriculture sector, affecting the lives and economies of 56 million people – a population that is expected to increase 68% by 2050, to 94 million. Severe and sustained drought will stress water sources, already over-utilized in many areas, forcing increasing competition among farmers, energy producers, urban dwellers, and plant and animal life for the region’s most precious resource.
The region’s populous coastal cities face rising sea levels, extreme high tides, and storm surges, which pose particular risks to highways, bridges, power plants, and sewage treatment plants. Climate-related challenges also increase risks to critical port cities, which handle half of the nation’s incoming shipping containers.
Agriculture, a mainstay of the regional and national economies, faces uncertainty and change. The Southwest produces more than half of the nation’s high-value specialty crops, including certain vegetables, fruits, and nuts. The severity of future impacts will depend upon the complex interaction of pests, water supply, reduced chilling periods, and more rapid changes in the seasonal timing of crop development due to projected warming and extreme events.
Climate changes will increase stress on the region’s rich diversity of plant and animal species. Widespread tree death and fires, which already have caused billions of dollars in economic losses, are projected to increase, forcing wholesale changes to forest types, landscapes, and the communities that depend on them (see also Ch. 7: Forests).
Tourism and recreation, generated by the Southwest’s winding canyons, snow-capped peaks, and Pacific Ocean beaches, provide a significant economic force that also faces climate change challenges. The recreational economy will be increasingly affected by reduced streamflow and a shorter snow season, influencing everything from the ski industry to lake and river recreation. |
Hamming codes are used to detect and correct errors in transmitted or stored data. An error-correcting code is an algorithm for expressing a sequence of numbers such that any errors which are introduced can be detected and corrected (within certain limitations) based on the remaining numbers.
Errors can happen in a variety of ways: bits can be added, deleted, or flipped, and errors can occur in fixed- or variable-length codes. Error-correcting codes are used in CD players, high-speed modems, and cellular phones. Error detection is much simpler than error correction.

For example, one or more "check" digits are commonly embedded in credit card numbers in order to detect mistakes. The Hamming code procedure shows how to detect and correct a single error in a code word.
Here the alphabets will be finite fields. Linear codes with length n and dimension k will be described as [n, k] codes. Hamming codes are linear codes, and a Hamming code will be described as an [n, k] q-ary Hamming code, where q is the size of the base field, Fq. In other words, an [n, k] q-ary Hamming code is a linear subspace of the n-dimensional vector space over Fq.
In this paper, we give two unexpected applications of a Hamming code.
The first one, also known as the "Hat Problem," is based on the fact that only a small portion of the available code words are actually used in a Hamming code. The second one is a magic trick based on the fact that a Hamming code is perfect for single-error correction.
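To make the single-error-correction property concrete, here is a minimal Python sketch of the classic binary Hamming(7,4) code. The particular generator and parity-check matrices are one common convention chosen for this illustration, not something specified in the text above.

```python
# Minimal sketch: binary Hamming(7,4) encoding and single-error correction.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix (one common convention)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix, G @ H.T = 0 (mod 2)
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Encode 4 data bits into a 7-bit code word."""
    return (np.array(data4) @ G) % 2

def correct(word7):
    """Locate and flip a single bit error using the syndrome."""
    s = (H @ word7) % 2                 # syndrome = column of H at the error position
    if s.any():
        idx = next(i for i in range(7) if np.array_equal(H[:, i], s))
        word7 = word7.copy()
        word7[idx] ^= 1
    return word7

code = encode([1, 0, 1, 1])
noisy = code.copy()
noisy[2] ^= 1                           # flip one bit to simulate a channel error
print(code, noisy, correct(noisy), sep="\n")
```

Because every nonzero syndrome equals exactly one column of H, any single flipped bit can be located and corrected; this is the sense in which the [7, 4] code is perfect.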
Applications include:
• Data storage (CDs, DVDs, etc.)
• Designs where low power consumption can be achieved
Halomethane compounds are derivatives of methane (CH4) with one or more of the hydrogen atoms replaced with halogen atoms (F, Cl, Br, I). Halomethanes are both naturally occurring, especially in marine environments, and man-made, most notably as refrigerants, solvents, propellants, and fumigants. Many, including the chlorofluorocarbons, have attracted wide attention because they become active when exposed to the ultraviolet light found at high altitudes and destroy the Earth's protective ozone layer.
Structure and properties
Like methane itself, halomethanes are tetrahedral molecules. The halogen atoms differ greatly in size from hydrogen and from each other. Consequently, the various halomethanes deviate from the perfect tetrahedral symmetry of methane.
The physical properties of the halomethanes are tunable by changes in the number and identity of the halogen atoms. In general they are volatile, but less so than methane, because of the polarizability of the halides. The polarizability of the halides and the polarity of the molecules make them useful as solvents. The halomethanes are far less flammable than methane. Broadly speaking, reactivity of the compounds is greatest for the iodides and lowest for the fluorides.
Industrial routes
The halomethanes are easily produced on a massive scale from abundant precursors, i.e. natural gas or methanol, together with halogens or the hydrogen halides. They are usually prepared by one of three methods.
- Free radical chlorination of methane:
- CH4 + Cl2 → CH3Cl + HCl
This method is useful for the production of CH4−xClx (x = 1, 2, 3, or 4). The main problems with the method are that it cogenerates HCl and that it affords mixtures.
- Halogenation of methanol. This method is used for the production of the monochloride, bromide, and iodide.
- CH3OH + HCl → CH3Cl + H2O
- 4 CH3OH + 3 Br2 + S → 4 CH3Br + H2SO4 + 2 HBr
- 3 CH3OH + 3 I2 + P → 3 CH3I + HPO(OH)2 + 3 HI
- Halogen exchange. The method is mainly used to produce fluorinated derivatives from the chlorides.
- HCCl3 + 2 HF → HCF2Cl + 2 HCl
In nature
Many marine organisms biosynthesize halomethanes, especially the bromine derivatives. Traces of halomethanes arise through the introduction of other non-natural industrial materials. Small amounts of chloromethanes arise from the interaction of chlorine sources with various carbon compounds. The biosyntheses of these halomethanes are catalyzed by the chloroperoxidase and bromoperoxidase enzymes, respectively. An idealized equation is:
- CH4 + Cl– + 1/2 O2 → CH3Cl + OH–
Classes of compounds
Halons are usually defined as hydrocarbons where the hydrogen atoms have been replaced by bromine, along with other halogens. They are referred to by a system of code numbers similar to (but simpler than) the system used for freons. The first digit specifies the number of carbon atoms in the molecule, the second is the number of fluorine atoms, the third is the chlorine atoms, and the fourth is the number of bromine atoms. If the number includes a fifth digit, the fifth number indicates the number of iodine atoms (though iodine in halon is rare). Any bonds not taken up by halogen atoms are then allocated to hydrogen atoms.
For example, consider Halon 1211: C = 1, F = 2, Cl = 1, Br = 1.
Halon 1211 has one carbon atom, two fluorine atoms, one chlorine atom and one bromine atom. A single carbon only has four bonds, all of which are taken by the halogen atoms, so there is no hydrogen. Thus its formula is CF2BrCl, and its IUPAC name is therefore bromochlorodifluoromethane.
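The digit rules above are mechanical enough to capture in a few lines of code. The following Python sketch is only an illustration of the naming scheme as described here; the function name is invented for the example, and it assumes a saturated (single-bond) molecule when it fills the remaining bonds with hydrogen.

```python
# Minimal sketch: decode a Halon code number (e.g. 1211) into atom counts.
def decode_halon(code: str):
    digits = [int(d) for d in code]
    digits += [0] * (5 - len(digits))   # the iodine digit is usually absent
    carbon, fluorine, chlorine, bromine, iodine = digits
    # Any carbon bonds not taken by halogens are allocated to hydrogen
    # (assumes a saturated molecule, as halons are).
    hydrogen = 2 * carbon + 2 - (fluorine + chlorine + bromine + iodine)
    return {"C": carbon, "H": hydrogen, "F": fluorine,
            "Cl": chlorine, "Br": bromine, "I": iodine}

print(decode_halon("1211"))  # {'C': 1, 'H': 0, 'F': 2, 'Cl': 1, 'Br': 1, 'I': 0}
```

Running it on 1211 reproduces the CF2BrCl formula worked out above.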
ANSI/ASHRAE Standard 34-1992
The refrigerant naming system is mainly used for fluorinated and chlorinated short alkanes for refrigerant use. In the US, the standard is specified in ANSI/ASHRAE Standard 34-1992, with additional annual supplements. The specified ANSI/ASHRAE prefixes were FC (fluorocarbon) or R (refrigerant), but today most are prefixed by a more specific classification:
- CFC—list of chlorofluorocarbons
- HCFC—list of hydrochlorofluorocarbons
- HFC—list of hydrofluorocarbons
- FC—list of fluorocarbons
- PFC—list of perfluorocarbons (completely fluorinated)
The decoding system for CFC-01234a is:
- 0 = Number of double bonds (omitted if zero)
- 1 = Carbon atoms -1 (omitted if zero)
- 2 = Hydrogen atoms +1
- 3 = Fluorine atoms
- 4 = Replaced by Bromine (“B” prefix added)
- a = Letter added to identify isomers, the “normal” isomer in any number has the smallest mass difference on each carbon, and a, b, or c are added as the masses diverge from normal.
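As with the halon codes, the refrigerant digit rules can be captured in a short function. This Python sketch is an illustration based only on the rules listed above; it ignores the bromine "B" suffix and isomer letters, and the function name and zero-padding behavior are assumptions made for the example.

```python
# Minimal sketch: decode a refrigerant number (e.g. R-22) into atom counts.
def decode_refrigerant(number: str):
    digits = [int(d) for d in number.zfill(4)]   # restore omitted leading zeros
    double_bonds, c_minus1, h_plus1, fluorine = digits
    carbon = c_minus1 + 1
    hydrogen = h_plus1 - 1
    bonds = 2 * carbon + 2 - 2 * double_bonds    # bonds available on the carbons
    chlorine = bonds - hydrogen - fluorine       # chlorine fills what remains
    return {"C": carbon, "H": hydrogen, "F": fluorine, "Cl": chlorine}

print(decode_refrigerant("22"))   # HCFC-22 -> {'C': 1, 'H': 1, 'F': 2, 'Cl': 1}
print(decode_refrigerant("12"))   # CFC-12  -> {'C': 1, 'H': 0, 'F': 2, 'Cl': 2}
```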
Other coding systems are in use as well.
Hydrofluoro compounds (HFC)
Freon is a trade name for a group of chlorofluorocarbons used primarily as a refrigerant. The main chemical used under this trademark is dichlorodifluoromethane. The word Freon is a registered trademark belonging to DuPont.
Hydrofluorocarbons (HFCs) contain no chlorine. They are composed entirely of carbon, hydrogen, and fluorine. They have no known effects on the ozone layer; only compounds containing chlorine and bromine are thought to harm it, and fluorine itself is not ozone-toxic. However, HFCs and perfluorocarbons do have activity in the entirely different realm of greenhouse gases, which do not destroy ozone but cause global warming. Two groups of haloalkanes, hydrofluorocarbons (HFCs) and perfluorocarbons (PFCs), are targets of the Kyoto Protocol.

Allan Thornton, president of the Environmental Investigation Agency, an environmental watchdog, says that HFCs are up to 12,500 times as potent as carbon dioxide in global warming. Wealthy countries are clamping down on these gases, and Thornton says that many countries are needlessly producing these chemicals just to get the carbon credits. Thus, as a result of carbon trading rules under the Kyoto Protocol, nearly half the credits from developing countries are from HFCs, with China scoring billions of dollars from catching and destroying HFCs that would otherwise be in the atmosphere as industrial byproducts.
Overview of principal halomethanes
Virtually every permutation of hydrogen, fluorine, chlorine, bromine, and iodine on one carbon atom has been evaluated experimentally.
Overview of halomethanes:

|Systematic name|Common/trade names|Code|Principal use|Formula|
|Chloromethane|Methyl chloride||methylation agent, e.g. for methyl trichlorosilane|CH3Cl|
|Tetrachloromethane|Carbon tetrachloride, Freon 10|CFC-10|formerly in fire extinguishers|CCl4|
|Tetrafluoromethane|Carbon tetrafluoride, Freon 14|PFC-14 (CFC-14 and HF-14 also used, although formally incorrect)||CF4|
|Trifluoromethane|Fluoroform|HFC-23|semiconductor industry, refrigerant|CHF3|
|Chlorofluoromethane|Freon 31||refrigerant (phased out)|CH2ClF|
|Difluoromethane||HFC-32|refrigerant with zero ozone depletion potential|CH2F2|
|Fluoromethane|Methyl fluoride|HFC-41|semiconductor manufacture|CH3F|
|Dibromomethane|Methylene bromide||soil sterilant and fumigant; strongly depletes the ozone layer|CH2Br2|
|Tribromomethane|Bromoform||separation of heavy minerals|CHBr3|
|Bromochloromethane|Halon 1011||formerly in fire extinguishers|CH2BrCl|
|Bromochlorodifluoromethane|BCF, Freon 12B1|Halon 1211|fire extinguishing|CBrClF2|
|Bromotrifluoromethane|BTM, Freon 13BI|Halon 1301|fire extinguishing|CBrF3|
|Trifluoroiodomethane|Trifluoromethyl iodide|Freon 13T1|organic synthesis|CF3I|
|Iodomethane|Methyl iodide||organic synthesis|CH3I|
Because they have many applications and are easily prepared, halomethanes have been of intense commercial interest.
Dichloromethane is the most important halomethane-based solvent. Its volatility, low flammability, and ability to dissolve a wide range of organic compounds makes this colorless liquid a useful solvent. It is widely used as a paint stripper and a degreaser. In the food industry, it is used to decaffeinate coffee and tea as well as to prepare extracts of hops and other flavorings. Its volatility has led to its use as an aerosol spray propellant and as a blowing agent for polyurethane foams.
One major use of CFCs has been as propellants in aerosol inhalers for drugs used to treat asthma. The conversion of these devices and treatments from CFCs to halocarbons that do not have the same effect on the ozone layer is well under way. The hydrofluoroalkane propellants' ability to solubilize medications and excipients is markedly different from that of CFCs, and as a result considerable reformulation effort has been required (a significant amount of development effort has also been needed to create non-CFC alternatives to CFC-based refrigerants, particularly for applications where the refrigeration mechanism cannot be modified or replaced). CFC propellants in such inhalers have now been banned in the United States.
For fire extinguishing
At high temperatures, halons decompose to release halogen atoms that combine readily with active hydrogen atoms, quenching the flame propagation reaction even when adequate fuel, oxygen, and heat remains. The chemical reaction in a flame proceeds as a free radical chain reaction; by sequestering the radicals which propagate the reaction, halons are able to “poison” the fire at much lower concentrations than are required by fire suppressants using the more traditional methods of cooling, oxygen deprivation, or fuel dilution.
For example, Halon 1301 total flooding systems are typically used at concentrations no higher than 7% v/v in air, and can suppress many fires at 2.9% v/v. By contrast, carbon dioxide fire suppression flood systems are operated from 34% concentration by volume (surface-only combustion of liquid fuels) up to 75% (dust traps). Carbon dioxide can cause severe distress at concentrations of 3 to 6%, and has caused death by respiratory paralysis in a few minutes at 10% concentration. Halon 1301 causes only slight giddiness at its effective concentration of 5%, and even at 15% persons remain conscious but impaired and suffer no long term effects. (Experimental animals have also been exposed to 2% concentrations of Halon 1301 for 30 hours per week for 4 months, with no discernible health effects at all.) Halon 1211 also has low toxicity, although it is more toxic than Halon 1301, and thus considered unsuitable for flooding systems.
However, Halon 1301 fire suppression is not completely non-toxic; very high temperature flame, or contact with red-hot metal, can cause decomposition of Halon 1301 to toxic byproducts. The presence of such byproducts is readily detected because they include hydrobromic acid and hydrofluoric acid, which are intensely irritating. Halons are very effective on Class A (organic solids), B (flammable liquids and gases) and C (electrical) fires, but they are totally unsuitable for Class D (metal) fires, as they will not only produce toxic gas and fail to halt the fire, but in some cases pose a risk of explosion. Halons can be used on Class K (kitchen oils and greases) fires, but offer no advantages over specialised foams.
Halon 1211 is typically used in hand-held extinguishers, in which a stream of liquid halon is directed at a smaller fire by a user. The stream evaporates under reduced pressure, producing strong local cooling, as well as a high concentration of halon in the immediate vicinity of the fire. In this mode, extinguishment is achieved by cooling and oxygen deprivation at the core of the fire, as well as radical quenching over a larger area. After fire suppression, the halon moves away with the surrounding air, leaving no residue.
Halon 1301 is more usually employed in total flooding systems. In these systems, banks of halon cylinders are kept pressurised to about 4 MPa (600 psi) with compressed nitrogen, and a fixed piping network leads to the protected enclosure. On triggering, the entire measured contents of one or more cylinders are discharged into the enclosure in a few seconds, through nozzles designed to ensure uniform mixing throughout the room. The quantity dumped is pre-calculated to achieve the desired concentration, typically 3-7% v/v. This level is maintained for some time, typically with a minimum of ten minutes and sometimes up to a twenty minute ‘soak’ time, to ensure all items have cooled so reignition is unlikely to occur, then the air in the enclosure is purged, generally via a fixed purge system that is activated by the proper authorities. During this time the enclosure may be entered by persons wearing SCBA. (There exists a common myth that this is because halon is highly toxic; in fact it is because it can cause giddiness and mildly impaired perception, and also due to the risk of combustion byproducts.)
Flooding systems may be manually operated or automatically triggered by a VESDA or other automatic detection system. In the latter case, a warning siren and strobe lamp will first be activated for a few seconds to warn personnel to evacuate the area. The rapid discharge of halon and consequent rapid cooling fills the air with fog, and is accompanied by a loud, disorienting noise.
Due to environmental concerns, alternatives are being deployed.
Halon 1301 is also used in the F-16 fighters to prevent the fuel vapors in the fuel tanks from becoming explosive; when the aircraft enters an area with the possibility of unfriendly fire, Halon 1301 is injected into the fuel tanks for one-time use. Due to environmental concerns, trifluoroiodomethane (CF3I) is being considered as an alternative.
Chemical building blocks
Chloromethane and bromomethane are used to introduce methyl groups in organic synthesis and the production of fine chemicals. Chlorodifluoromethane is the main intermediate en route to tetrafluoroethylene, the monomeric precursor to Teflon.

Haloalkanes are diverse in their behavior, making generalizations impossible. Few are acutely dangerous, but many pose risks on prolonged exposure. Some problematic aspects include carcinogenicity (e.g., methyl iodide) and liver damage (e.g., carbon tetrachloride). Under certain combustion conditions, chloromethanes convert to phosgene, which is highly toxic.
References
- ^ a b Günter Siegemund, Werner Schwertfeger, Andrew Feiring, Bruce Smart, Fred Behr, Herward Vogel, Blaine McKusick “Fluorine Compounds, Organic” Ullmann’s Encyclopedia of Industrial Chemistry, Wiley-VCH, Weinheim, 2002. doi:10.1002/14356007.a11_349
- ^ a b Manfred Rossberg, Wilhelm Lendle, Gerhard Pfleiderer, Adolf Tögel, Eberhard-Ludwig Dreher, Ernst Langer, Heinz Rassaerts, Peter Kleinschmidt, Heinz Strack, Richard Cook, Uwe Beck, Karl-August Lipper, Theodore R. Torkelson, Eckhard Löser, Klaus K. Beutel, Trevor Mann “Chlorinated Hydrocarbons” in Ullmann’s Encyclopedia of Industrial Chemistry 2006, Wiley-VCH, Weinheim. doi:10.1002/14356007.a06_233.pub2.
- ^ Gordon W. Gribble (1998). “Naturally Occurring Organohalogen Compounds”. Acc. Chem. Res. 31 (3): 141–152. doi:10.1021/ar9701777.
- ^ John Daintith (2008). Oxford Dictionary of Chemistry. Oxford University Press. ISBN 0199204632.
- ^ ASHRAE Bookstore
- ^ Lerner & K. Lee Lerner, Brenda Wilmoth (2006). “Environmental issues : essential primary sources.”“. Thomson Gale. http://catalog.loc.gov/cgi-bin/Pwebrecon.cgi?v3=1&DB=local&CMD=010a+2006000857&CNT=10+records+per+page. Retrieved 2006-09-11.
- ^ All Things Considered, NPR News, 5:24 p.m., December 11, 2007.
- ^ Office of Environmental Health Hazard Assessment (September 2000). “Dichloromethane“. Public Health Goals for Chemicals in Drinking Water. California Environmental Protection Agency. http://www.oehha.ca.gov/water/phg/pdf/dcm.pdf.
- ^ 3-III-2 HALON 1301 REPLACEMENTS
- ^ Defense Tech Briefs
The lateral line is a system of sense organs found in fish, and not in land vertebrates. It detects movement and vibration in the surrounding water. Modified epithelial cells, known as hair cells, respond to changes around the fish. These turn the changes into electrical impulses which go to their central nervous system.
Lateral lines are used in schooling, predation, and orientation; for example, fish use the system to follow the vortices produced by fleeing prey. The lines themselves are faint and run lengthwise down each side of the fish, from the gill covers to the base of the tail.
In some species, the receptive organs of the lateral line have been modified to function as electroreceptors, which are organs used to detect electrical impulses. Most amphibian larvae and some fully aquatic adult amphibians have systems which work a bit like the lateral line. |
Walk a Math Trail
Still not sure how to get started?
Many adults aren’t as comfortable with math as they would like to be. Before leading kids on a math trail, try one for yourself. Look for ways to find shapes and numbers in your surroundings, and ask yourself questions about what you see.

To make a geometry math trail, start with questions like these, or make up some of your own:
- Find the first letter of your name in your surroundings. (It’s especially fun if it’s not on a sign.)
- Look at the architecture of a building and the different shapes and patterns in its design. Which shapes or patterns give this building its character?
- Find an unusual tiling pattern in a floor, a patio, or anywhere else you can find repeating shapes on a flat surface. Ask questions about how the pattern repeats and fits together.
- Sit near a street and watch a tire of a slow-moving car. How does the valve on the tire move? Can you trace its path?
- Figure out the area or volume of a very large shape in your surroundings. To measure the shape, you can use your own body or whatever else is handy.
Get outside and explore geometry (and other math) all around you.
A math trail is a walk with various stops where you look at math in the world around you, and ask questions about it.

Math trails are a great way to stimulate interest in math—especially among students in middle school or high school, when classroom math often becomes more abstract. A math trail helps students see math, touch it, and investigate it on their own.
Math Trails: Making Math Concrete
In this video, Ron Lancaster, senior lecturer at the University of Toronto, gives a brief overview of how to create your own math trail, and how walking a math trail can help anyone who’s learning math.
Geometry Playground is made possible by the National Science Foundation and the Gordon and Betty Moore Foundation. Any findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Weight Loss: Emotional Eating
Emotional eating is the practice of consuming large quantities of food -- usually "comfort" or junk foods -- in response to feelings instead of hunger. Experts estimate that 75% of overeating is caused by emotions.
Many of us learn that food can bring comfort, at least in the short-term. As a result, we often turn to food to heal emotional problems. Eating becomes a habit preventing us from learning skills that can effectively resolve our emotional distress.
By identifying what triggers our emotional eating, we can substitute more appropriate techniques to manage our emotional problems and take food and weight gain out of the equation.
How to Identify Eating Triggers
Situations and emotions that trigger us to eat fall into five main categories.
- Social. Eating when around other people. For example, excessive eating can result from being encouraged by others to eat; eating to fit in; arguing; or feelings of inadequacy around other people.
- Emotional. Eating in response to boredom, stress, fatigue, tension, depression, anger, anxiety, or loneliness as a way to "fill the void."
- Situational. Eating because the opportunity is there. For example, at a restaurant, seeing an advertisement for a particular food, passing by a bakery. Eating may also be associated with certain activities such as watching TV, going to the movies or a sporting event, etc.
- Thoughts. Eating as a result of negative self-worth or making excuses for eating. For example, scolding oneself for looks or a lack of will power.
- Physiological. Eating in response to physical cues. For example, increased hunger due to skipping meals or eating to cure headaches or other pain.
To identify what triggers excessive eating in you, keep a food diary that records what and when you eat as well as what stressors, thoughts, or emotions you identify as you eat. You should begin to identify patterns to your excessive eating fairly quickly.
How to Stop Emotional Eating
Identifying emotional eating triggers and bad eating habits is the first step; however, this alone is not sufficient to alter eating behavior. Usually, by the time you have identified a pattern, eating in response to emotions or certain situations has become a habit. Now you have to break that habit.
Developing alternatives to eating is the second step. When you start to reach for food in response to an eating trigger, try one of the following activities instead.
- Read a good book or magazine or listen to music.
- Go for a walk or jog.
- Take a bubble bath.
- Do deep breathing exercises.
- Play cards or a board game.
- Talk to a friend.
- Do housework, laundry, or yard work.
- Wash the car.
- Write a letter.
- Or do any other pleasurable or necessary activity until the urge to eat passes.
CRISPR is a gene-editing technology that will change the future of genetics. In the past 3 years it has been used in labs throughout the world. It has the potential to fix point mutations and larger mutations in our genome. Diseases caused by point mutations include cystic fibrosis, sickle cell anemia, and Tay-Sachs disease. More complex conditions such as cancer, HIV, or autism could be cured if RNA-targeted editing is developed further. On March 16th, 2016, it was published that RNA had been successfully targeted for the first time. This is just the beginning of CRISPR.
Listen to hear how CRISPR works and how it was discovered. Get the inside scoop on the current research, ethics, politics, and patents. |
Last week, I had the opportunity to speak with Daniel Notterman and Colter Mitchell about their paper titled, "Social Disadvantage, Genetic Sensitivity and Children's Telomere Length," which was published online April 7 in the Proceedings of the National Academy of Sciences.
What are telomeres? Every chromosome in the human body has two protective caps on the end known as telomeres. As telomeres become shorter, their structural integrity weakens, which causes cells to age faster and die younger.
Multiple studies have shown that the stress caused by things like untreated depression, social isolation, long-term unemployment, and high anxiety can speed up the aging process by shortening the length of telomeres, which can lead to poor health outcomes. I have a hunch that the study of epigenetics and telomere length may be the next wave for better understanding mind-body-environment connections.
Will Telomere Length Be the Future Biomarker of Well-Being?
This study brings researchers closer to understanding the importance of psychological and environmental stress during childhood and how these genetic imprints can be detrimental for long-term health. Using telomeres as a biomarker as the participants of this study grow older will be a valuable tool for assessing the long-term impact of childhood disadvantage and familial stress.
For this study, the researchers wanted to focus on African American boys because past studies have shown that boys may be more sensitive to their environment than girls. The team limited their sample to 40 boys who participated in the Fragile Families study and met the following criteria: they had provided saliva samples at age 9; their mothers self-identified as black or African American; and complete information was provided about their social environments.
Of the final sample, half of the boys were raised in disadvantaged environments, which the researchers characterized by such factors as: low household income, low maternal education, an unstable family structure, and harsh parenting.
In the first unexpected discovery, the researchers found that growing up in a disadvantaged environment was associated with 19 percent shorter telomeres when compared to boys growing up in highly advantaged environments. For boys predisposed to being sensitive to their environment, this negative association was even stronger.
Dopamine and Serotonin Pathways Linked to Environmental Sensitivity
The other significant finding of this study is that there appears to be an association between the social environment and telomere length (TL) which is specifically moderated by genetic variations within the serotonin and dopamine pathways.
Boys with the highest genetic sensitivity to chronic stress had the shortest TL when exposed to disadvantaged environments and the longest TL when exposed to advantaged environments. To their knowledge, the researchers state that this report is the first to document a gene–social environment interaction for telomere length.
The researchers found that boys with genetic sensitivities related to serotonin and dopamine pathways have shorter telomeres after experiencing stressful social environments than the telomeres of boys without these genetic sensitivities.
Lead author Daniel Notterman, Professor of Pediatrics and Biochemistry and Molecular Biology at Penn State College of Medicine, and Visiting Professor of Molecular Biology at Princeton, said the researchers still aren’t exactly sure of the causal effect for the changes in the serotonin and dopamine pathways.
"Our report is the first to examine the interactions between genes and social environments using telomeres as a biomarker," Notterman said. "We also demonstrate the utility of using saliva DNA to measure children's telomere length. This is important because most telomere research uses blood, which is much more difficult to collect than saliva. Using saliva is easier and less expensive, allowing researchers to collect telomere samples at various points in time to see how social environment may be affecting DNA."
In a phone conversation, Notterman clarified to me that measuring the length of telomeres is a research tool that confirms the importance of early interventions to reduce the impacts of stressful childhood environments. There is still a question though if telomere length is the actual cause of the ill-effects or just a biomarker for a more complex chain reaction.
Last week, I also had an opportunity to speak with Dr. Colter Mitchell, a co-author of the paper from the University of Michigan, whose research focuses on the causes and consequences of different types of family environments. Mitchell examines how social context such as neighborhood resources and values influence family processes and how epigenetics can influence well-being and overall health.
His research also includes the development of new methods for integrating the collection and analysis of biological and social data. "Our findings suggest that an individual's genetic architecture moderates the magnitude of the response to external stimuli—but it is the environment that determines the direction," says Mitchell.
Colter Mitchell points out that it is striking to see telomere shortening in children as young as age 9 because you are talking about accelerated aging or stress-mediated wear and tear on your body. These genetic imprints make someone more vulnerable to all kinds of illnesses and diseases ten or twenty years down the road. The health care costs for treating these illnesses in years to come will be astronomical.
Conclusion: We Can Pay Now or We Can Pay More Later
Environmental and psychological stress can alter genes throughout a lifespan. This study makes groundbreaking advances in the awareness that social disparities can shorten the length of telomeres starting at a very young age. The team plans to expand its analysis to approximately 2,500 children and their mothers to see if these preliminary findings hold true.
The most important conclusion is that stressful upbringings can leave imprints on the genes of children. Chronic stress during youth triggers physiological weathering that can lead to premature aging, increased risk of disease, and earlier mortality.
The researchers repeatedly emphasized their surprise that the biological weathering effects of chronic stress can be seen by the age of 9. Colter Mitchell concluded that from a social science and public policy perspective he hopes this study motivates people to make reducing childhood social disadvantages a top priority saying, "We can pay now, or we can pay more later. But we will pay for the effects of early life disadvantage at some point."
Huge thanks to Daniel Notterman and Colter Mitchell for taking time out of your busy schedule to speak with me about your current research. Thank you for coordinating Rose Huber. Much appreciated!
Cancer is the general name for more than 100 diseases characterized by uncontrolled cell growth. Whereas normal cells follow a path of growth, division and death, cancer cells continue to divide when new cells are not needed. Uncontrolled cell growth leads to a mass or growth of tissue that is called a tumor (except in the cancers known as leukemia where cancer interferes with normal blood function by abnormal cell division in the blood stream).
Not all tumors are cancerous. Benign tumors are not cancerous. They can be removed, and in most cases do not come back. Malignant tumors are cancerous. These malignant cancer cells can enter the bloodstream and spread throughout the body invading and destroying healthy tissue. This spreading from one part of the body to another is called metastasis.
Cancer is named for the place where it started. Different types of cancer behave differently, so cancer that has metastasized is still named for its place of origin. For example, if breast cancer spreads to the liver, it is called "metastatic breast cancer," not liver cancer.
Cancer can be attributed to environmental and genetic factors. Environmental factors that contribute to cancer include tobacco, diet and obesity, infections, radiation, stress, lack of physical activity, and environmental pollutants. The majority of cancers are not hereditary. Less than 0.3% of the population are carriers of a genetic mutation which has a large effect on cancer risk. An example would be an inherited mutation in the BRCA1 and BRCA2 genes. Mutations in these genes may indicate a higher risk for development of breast and ovarian cancer.
If a physical exam produces signs or symptoms that cause a physician to suspect cancer, a combination of biopsies, labs, imaging and genetic tests may be ordered.
Cancer is treated with surgery, radiation, chemotherapy, biologic therapies, hormone therapies, or transplant options such as bone marrow transplantation. A team of specialists including medical oncologists, surgeons, radiation oncologists and others will work together to determine what treatment or combination of treatments will be best for you individually.
You wanted to know
A student in Katherine Crawford's fifth-grade class at West Oak Middle School in Mundelein asked, "How are other galaxies discovered?"
Check it outThe Fremont Public Library District in Mundelein suggests these titles on galaxies:
• "Galaxies" by Ruth Owen
• "Night Sky" by Giles Sparrow
• "Awesome Astronomy" by Raman Prinja
• "D.K. Encyclopedia of Space" by Heather Couper
• "Edwin Hubble: Discoverer of Galaxies" by Claire Datnow
By Hope Babowice
Ancient people knew the sky was a vast expanse that contained many thousands of stars.
Anyone with a telescope today can look into the dark night sky and see hundreds of thousands of stars, many planets and even galaxies besides our own, the Milky Way.
The brightness of the stars may make them appear close, but they actually stretch out in an almost unimaginable distance from the Earth.
"Most of these stars are relatively nearby, only a few tens of hundreds light years away, which translates to hundreds or thousands of trillions of miles away," explains Geza Gyuk, director of astronomy at Chicago's Adler Planetarium.
What does the Milky Way look like? Dr. Gyuk said, "It consists of hundreds of billions of stars of all kinds spread out in a spiral disk that measures about 100,000 light years across. It took many years, even centuries, for scientists to gradually understand how the stars and planets were organized and the sheer size of the Milky Way."
Other galaxies are similar in shape and size to the Milky Way, while still others are classified as elliptical or irregular in shape.
The first time someone realized there was more than one galaxy was in the 1920s when cosmologist Edwin Hubble used a 100-inch telescope to identify and define two galaxies beyond the Milky Way -- Andromeda and Triangulum.
Scientists have since developed more powerful telescopes, like the space-based Hubble telescope, and discovered that there are many billions of galaxies -- maybe 200 billion.
A percentage have been photographed and cataloged by scientists. NASA has a database with 4 million names, but scientists are sure that's not all of them.
The discovery of galaxies beyond the Milky Way has led to many more revelations about our universe.
Once astronomers could see the galaxies that surround the Milky Way, they formulated the theory that the universe is expanding, they identified the existence of black holes in the centers of those galaxies, and they discovered places where new stars and galaxies are forming.
Gyuk said scientists use a special cataloging system to assign names or numbers to the galaxies.
"We can tell the difference between a galaxy, a foreground star, or cloud of gas and dust by looking at its shape and color. Stars are tight, compact points of light. Galaxies and clouds are more spread out and fuzzy.
"But a galaxy has a different color than a cloud of gas, so we can tell them apart. So there are still discoveries to be made about galaxies," he said.
Readers are encouraged to help scientists classify the millions of galaxies by logging onto GalaxyZoo at www.galaxyzoo.org.
"You don't need any training or experience and, who knows, you might discover something new and marvelous," Gyuk said.
People of all ages are invited to look far into the distance of the night sky at the Lake County Astronomical Society monthly meetings. The meetings are free and include opportunities to see stars, planets and maybe even galaxies using telescopes. No registration is required.
Locations are at Volo Bog and local libraries throughout Lake County. Visit lcas-astronomy.org for dates and times. |
In World War II and shortly thereafter, piston-powered aircraft engines peaked in power, performance, and complexity, with output exceeding 4000 bhp for large multi-row radial engines. They were ultimately displaced by the jet engine, developed by (among others) Germany's Dr. Hans von Ohain and, separately in the UK, by Sir Frank Whittle. Its principles go back to the "aeolipile" of the ancient Greek scientist Hero, to great thinkers like Leonardo da Vinci, and to the laws of Isaac Newton.

Compared to a piston engine, the gas turbine has fewer parts, and its moving parts rotate in only one direction, without the constant stopping and accelerating that pistons undergo in a reciprocating engine. Thus, a running gas turbine is essentially free of the vibrations normally found in piston engines, which translates into a much longer service life (time between overhauls, TBO) and higher reliability.

Moreover, piston engines in this power class (over 4000 bhp) are so complex that the only way forward was to continue developing the gas turbine in spite of all the difficulties along the way. A gas turbine is also able to extract more energy from a given amount of fuel than a piston engine.
Gas turbines operate on almost the same principle as piston aero engines. They take in air, compress it, and spray fuel into the hot compressed air, where it vaporizes, ignites, and then burns continuously (unlike in a piston engine). The hot exhaust expands quickly and exits the combustion chambers, driving a turbine that in turn rotates the compressor.

When the hot exhaust finally leaves the engine, it still contains enough kinetic energy to produce forward thrust (Newton's third law), propelling the aircraft along at high speed. Inside the engine, only a small portion of the air taken in is used for the combustion of fuel; the remainder is used for cooling and for other applications like cabin pressurization and air conditioning.

Thus the process of fuel combustion goes through almost the same sequence as in a piston engine, with the main difference that power is produced continuously, whereas in a piston engine it is intermittent.

Both piston and jet aircraft work by accelerating air: the piston or turboprop engine drives a propeller, which gives a small acceleration to a large amount of air, while the pure jet gives a large acceleration to a small amount of air.

In a gas turbine, combustion takes place at almost constant pressure with an increase in volume, so the high peak pressures common in piston engines are avoided. This makes it possible to use low-octane fuels and less robust components; to ensure a long life for the engine components, special alloys are used to handle the higher gas temperatures.
For a turbine engine to produce any power, air is compressed, increasing its pressure energy, and heat energy is then added by the combustion of fuel. The cycle in which this takes place is called the Brayton cycle, named after George Brayton, who analyzed engine performance in the United States in the 19th century.
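As a concrete illustration of the Brayton cycle, the ideal cycle's thermal efficiency depends only on the compressor pressure ratio r and the gas's heat-capacity ratio γ: η = 1 − r^(−(γ−1)/γ). The Python sketch below evaluates this idealized formula; real engines fall short of these figures because of component losses that the simple model ignores.

```python
# Minimal sketch: ideal Brayton-cycle thermal efficiency.
# Assumes an ideal gas with constant heat-capacity ratio (gamma ~ 1.4 for air)
# and loss-free compression and expansion.

def brayton_efficiency(pressure_ratio: float, gamma: float = 1.4) -> float:
    """eta = 1 - r**(-(gamma - 1) / gamma) for compressor pressure ratio r."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

for r in (5, 10, 20, 30):
    print(f"pressure ratio {r:2d}: ideal efficiency = {brayton_efficiency(r):.1%}")
```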
During the work cycle of a turbine, an air mass will take in and give out heat, producing changes in the pressure, temperature, and velocity of that air mass. These changes conform to Boyle's and Charles's laws, which together state that the product of the pressure and volume of an air mass is proportional to its absolute temperature: P × V ∝ T.

The above says that when a mass of air heats up and cools off, this results in changes in the pressure, velocity, and temperature of that mass of air. As heat is a form of energy, variations in temperature give an idea of the work done in the engine. This occurs in three main areas of a gas turbine engine: the compressor, the combustion chambers, and the turbine.
As mass air flow is continuous, the gas volume changes only as gas velocity changes.
Also, since the gas turbine is a heat engine, the greater the heat input, the more the gases expand and the more efficient the engine is. This is limited only by what the alloys of the engine can withstand. Cooling air is used to push gas temperatures (and efficiency) beyond what the materials of the combustion chambers could otherwise tolerate.

This cooling air forms thin layers of air over the components, isolating them from the heat and keeping them within their design limits.
Follicular lymphoma is a form of non-Hodgkin lymphoma, a cancer that affects the body's B-lymphocytes, which are white blood cells that attack and destroy infections. The abnormal white blood cells can spread throughout the body into organs, lymph nodes, bone marrow, and other tissue. Typically, tumors will form in these locations.
Researchers have yet to develop a cure for follicular lymphoma. However, many individuals with the condition live a long and active life. Unlike some cancers, follicular lymphoma is not inherited or passed down through families from one generation to another. However, individuals 60 years and older are more likely to develop the condition than younger males and females.

Scientists named this form of non-Hodgkin lymphoma for its appearance under the microscope, where the cellular structures resemble follicles (the nodular structures of lymphoid tissue). It is driven by a BCL-2 gene mutation, a translocation between chromosome 14 and chromosome 18, that produces an abnormally elevated level of BCL2 protein. This excess protein blocks apoptosis, the normal, natural life cycle in which a cell matures, reproduces, and then kills itself.
- What Causes Follicular Lymphoma?
- Who is at Risk for Follicular Lymphoma?
- Common Symptoms
- Diagnosing and Staging Follicular Lymphoma
- Treating the Condition
What Causes Follicular Lymphoma?
On average, individuals 50 years and older acquire follicular lymphoma more often than those at a younger age. Although scientists know there are mutations of the chromosomes and genes associated with follicular lymphoma, these changes have not yet been established as the actual cause. Follicular lymphoma is not contagious and cannot be passed on to other individuals.
Who is at Risk for Follicular Lymphoma?
According to the CDC (Centers for Disease Control and Prevention), more than 72,000 cases involving forms of non-Hodgkin lymphoma were diagnosed in the United States in 2016. Twenty percent of all these cases involved follicular lymphoma. Both males and females seem to be equally at risk of developing the disease, as are most ethnic groups except African-Americans and Asian-Americans.
Individuals who suffer from follicular lymphoma usually do not experience symptoms that are strong enough or obvious enough to be detected by a doctor. Because of its indolent nature, follicular lymphoma is difficult to diagnose. As the disease progresses, it often does not cause significant harmful symptoms until it reaches an advanced stage, which often takes many years.
Most of the detectable symptoms are associated with dysfunctional bone marrow that causes anemia (low red blood cell count), leukopenia (low white blood cell count), or thrombocytopenia (low platelet cell count).
Common detectable follicular lymphoma symptoms include:
- Unexpected weight loss
- Night sweats
- Shortness of breath
- Pain was swelling of lymph nodes in the armpit, stomach, groin or neck area
Diagnosing and Staging Follicular Lymphoma
To ensure an accurate diagnosis the condition, the doctor will perform a comprehensive physical evaluation and examine the reticuloendothelial (immune-response monocytes and macrophages) system in lymph node areas that present abnormal characteristics.
Because many of the symptoms associated with the disease can come and go, the doctor will ask numerous questions to help in their diagnosis including:
- Have you ever detected a lump in your stomach, groin, neck or armpits? When they appeared, were they painful? If the lump disappeared, did it ever reappear?
- Has any other doctor diagnosed you with cancer and did you receive treatment?
- Do you know if you have ever been exposed to any chemicals that cause cancer in your work, home, or social environment?
- Has any doctor ever diagnosed you with rheumatoid arthritis, celiac disease, lupus, or HIV?
- Are you the recipient of an organ transplant?
The doctor will also examine the spleen, liver, and throat and perform other diagnostic tests and evaluations including:
- Complete Blood Count (CBC) – This test can differentiate normal from abnormal blood cells to assist in verifying a diagnosis of follicular lymphoma.
- CT (Computed Tomography) Scan – Obtaining an imaging scan of the pelvis, abdomen, and chest areas can identify pelvic or abdominal adenopathy (enlarged lymph glands) if present. The doctor might also recommend a PET (positron emission tomography) scan that can identify localized diseases.
- Tissue Biopsy – Obtaining a biopsy is crucial for establishing the diagnosis of follicular lymphoma. This can help determine if the disease is extranodal or residing outside the lymphatic system into other areas of the body including organs.
- Bone Marrow Aspiration and Chromosomal Analysis – This procedure removes a small portion of bone marrow through aspiration by a long fine needle usually inserted in the back of the hipbone to detect abnormalities in the patient's chromosome makeup associated with follicular lymphoma.
Staging the Disease
Once the disease has been diagnosed by competent doctors and pathologists, the cancerous condition needs to be staged to ensure the patient is receiving the most appropriate and effective treatment. Staging involves:
- Stage I – At this stage, the disease is located in a single lymph node or in a single lymph node area.
- Stage II – Once the disease progresses, it involves two or more lymph node or areas that form on one side of the diaphragm.
- Stage III – In the beginning advancing stages, the condition has migrated to more than one lymph node or area on both sides of the diaphragm.
- Stage IV – At the most advanced stage, the disease has disseminated (widely spread) and now involves the central nervous system, liver, or bone marrow.
In addition to staging the disease, scientists grade it based on the number of centroblasts in the sample. The higher the number of centroblasts in the test sample, the higher the grade.
- Grade 1 – This grade presents 0-5 centroblasts per high-power field
- Grade 2 – This grade presents 6-15 centroblasts per high-powered field
- Grade 3 – This grade represents more than 15 centroblasts per high-powered field.
Treating the Condition
The type of treatment the doctor will recommend differs greatly based on the progression of the cancer along with other factors. If the follicular lymphoma is progressing slowly, the doctor may choose a "watchful waiting" approach to see if the condition becomes worse over time. Treatment might not be started until the lymph nodes become larger, or the patient develops night sweats or fevers, has unexpected weight loss, or their test results show low blood counts.
However, other treatments could include chemotherapy and radiation as ways to manage the disease more than cure it, because most forms of the disease cannot be cured. That said, the average survival of a patient with follicular lymphoma diagnosed in an advanced stage is approximately twenty years. To help ensure a longer lifespan, the doctor may recommend:
- Radiation Treatments that destroy cancer cells using X-ray energy beams to focus on the affected areas in the body. Radiation has been proven effective at destroying follicular lymphoma, especially in its early stage.
- Monoclonal Antibodies can be an effective treatment because the antibodies act much like the body's normal disease-fighting cells. Rituxan (rituximab) has been shown to be highly effective at destroying lymphoma cells while preventing damage to normal healthy tissue. In addition, this form of treatment produces far fewer side effects compared to other options including chemotherapy.
In addition, there are clinical trials available for individuals with advanced stages of follicular lymphoma. However, participating in a clinical trial in no way guarantees a successful outcome. |
Location: U.S. Virgin Islands
Established: August 2, 1956
Size: 15,135 acres
High green hills dropping down to enchanting turquoise bays, white powdery beaches, coral reefs, and ruins that evoke an era of sugar and a tragic period of slavery all find protection on St. John, one of about a hundred specks of Caribbean land known as the Virgin Islands.
Despite its small size—19 square miles—St. John's wide range of rainfall and exposure give it surprising variety. More than 800 subtropical plant species grow in areas from moist, high-elevation forests to desertlike terrain to mangrove swamps, among them mangoes, soursops, turpentine trees, wild tamarind, century plants, and sea grapes. Around the island live the fringing coral reefs—beautiful, complex, and exceedingly fragile communities of plants and animals, which St. John's famous beaches depend upon.
In 1493 Columbus sighted the large assemblage of islands and cays and named it after St. Ursula's legendary 11,000 virgins. Since then, Spain, France, Holland, England, Denmark, and the United States have controlled various islands at different times. The Danes began colonization in the 17th century, and in 1717 planters arrived on St. John. By mid-century 88 plantations had been established there; slaves stripped the steep hillsides of virgin growth and cultivated the cane. By the time the Danes abolished slavery in 1848, the sugar industry was doomed. A fallow, century-long period known as the "subsistence era" followed.
Fearful that the Germans might capture the islands during World War I, the United States bought St. John, St. Croix, St. Thomas, and about 50 smaller islands from Denmark for $25,000,000. In 1956 conservationist Laurance S. Rockefeller donated more than 5,000 acres for a national park on St. John; in 1962 the park acquired 5,650 undersea acres off the northern and southern coasts. Today, though its boundary includes three-quarters of St. John, the national park owns only slightly more than half the island. Of increasing concern is the escalating pace of development on private inholdings inside its borders. It also feels pressure from the numerous cruise ships that disgorge large numbers of visitors at once, badly straining park resources. Some of the park's trails may be closed for maintenance work; ask at the visitor center.
Did You Know?
Much of St. John's vegetation is second-generation growth; most of the island was clear-cut in the colonial period to make way for sugar-cane production.
Copy for this series includes excerpts from the National Geographic Guide to the National Parks of the United States, Seventh Edition, 2012, and the National Parks articles featured in "Cutting Loose" in National Geographic Traveler. |
Time Signals & Signal Transformation – GATE Study Material in PDF
In these free GATE 2019 notes, we move to the subject of Signals and Systems. Specifically, we will try to understand the basic definitions of different types of signals as well as certain properties of signals. We will also learn about signal transformations such as time shifting, time scaling, and time inversion (time reversal). This GATE material on time signals and signal transformation is important for GATE EC and GATE EE. You can download all of it as a PDF so that your exam preparation is made easy and you can ace your paper.
|Exam Name|GATE (Graduate Aptitude Test in Engineering)|
|Conducting Body|IIT Madras|
|Exam Level|National Level Examination|
|Exam Duration|180 minutes (3 hours)|
Here is an excerpt of the article. You can download the PDF to read the full article –
What is a Signal?
Anything that conveys information is known as a signal. A signal may be a function of one or more independent variables such as time, pressure, distance, or position.
For electrical purposes, the signal is typically a current or voltage that is a function of time as the independent variable.
Signals can be classified into two broad categories. These are
1. Continuous-Time Signals
2. Discrete-Time Signals
A continuous-time signal may be defined as a continuous function of the independent variable. In the case of continuous-time signals, the independent variable is time, so these signals are continuous functions of time. They can also be termed Analog Signals.
For discrete-time signals, the independent variable is discrete, so they are defined only at certain time instants. Signals that have both discrete amplitude and discrete time are known as Digital Signals.
Signal Transformations through Variations of the Independent Variable
A signal can undergo several transformations some of which are:
- Time Shifting
- Time Scaling
- Time Inversion / Time Reversal
Time shifting is a very basic operation that you will constantly come across when handling signals and systems problems. We seek to settle all doubts regarding it here, once and for all.
Suppose we are given a signal x(t): how do we implement time shifting and scaling to obtain the signal x(–αt – β), x(–αt + β), x(αt – β) or x(αt + β), where α and β are both positive quantities? The first thing we want to make clear is that a negative time shift implies a right shift and a positive time shift implies a left shift.
Remember it by thinking of the negative sign as the start of an arrow, – to →, which implies a right shift for a negative time shift. The other case (a positive time shift) would then obviously mean a left shift.
Returning to our original agenda, the next thing to know is that time shifting and scaling can be implemented starting from either the left or the right side. Since we are discussing time shifting, we set α = 1, the factor responsible for scaling.
Suppose we want to shift the origin from (0, 0) to (a, 0) (let a be positive); this is a right shift for the axis, or equivalently a left shift for the signal relative to the axis. Hence x(t) changes to x(t + a), contrary to the x(t – a) we might intuitively expect from a right shift of the axis. Similarly, if we want to shift the origin to (–a, 0), this is a left shift for the axis but a right shift for the signal relative to the axis, so x(t) changes to x(t – a). Misunderstanding this point causes us to believe that shifting signals and shifting axes are two different concepts and to memorize two different sets of rules for them. Seen this way, they are just one thing, which makes the rules easier to remember.
Expansion or compression of a signal with respect to time is known as time scaling. If x(t) is a continuous-time signal, then x(5t) is the version of x(t) compressed by a factor of 5 in time, and x(t/5) is the version of x(t) expanded by a factor of 5 in time.
In general, if we consider x(at), then for a > 1 the signal is compressed by the factor a, and for 0 < a < 1 the signal is expanded by the factor 1/a.
Time Inversion / Time Reversal
The time-inverted signal is denoted by x(–t), which is obtained by replacing the independent variable t by its negative counterpart, –t. So the time-inverted version of x(t) is x(–t). This can also be considered a special case of time scaling with a = –1.
The time-inverted signal can also be obtained by taking the reflection (mirror image) of the signal about the vertical axis.
When a combination of time shifting and time scaling has to be performed on a signal x(t) to realize an expression of the form x(at – b), the natural order is time shifting followed by time scaling. However, time shifting can also be performed after time scaling, with the precaution of dividing the visible shift amount by the scaling factor, as the sketch below demonstrates.
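To make this concrete, here is a minimal Python sketch (not part of the original notes; the sampled rectangular pulse and the `resample` helper are assumptions chosen purely for illustration). It realizes x(at – b) both ways — shift then scale, and scale then shift with the shift amount divided by the scaling factor — and checks that the two orders agree:

```python
import numpy as np

# Sample grid and a test signal: a rectangular pulse on [0, 2)
t = np.linspace(-5, 5, 2001)
x = np.where((t >= 0) & (t < 2), 1.0, 0.0)

def resample(signal, new_t):
    """Evaluate a signal sampled on grid t at new time points (zero outside)."""
    return np.interp(new_t, t, signal, left=0.0, right=0.0)

a, b = 2.0, 1.0  # target: y(t) = x(a*t - b)

# Order 1: shift first -> x(t - b), then scale time -> x(a*t - b)
shifted = resample(x, t - b)
shift_then_scale = resample(shifted, a * t)

# Order 2: scale first -> x(a*t), then shift by b/a -> x(a*(t - b/a)) = x(a*t - b)
scaled = resample(x, a * t)
scale_then_shift = resample(scaled, t - b / a)

# Direct evaluation for comparison; time reversal is just scaling with a = -1
direct = resample(x, a * t - b)
reversed_x = resample(x, -t)

assert np.allclose(shift_then_scale, scale_then_shift)
assert np.allclose(shift_then_scale, direct)
```

With a = 2 and b = 1, the pulse originally occupying [0, 2) lands on [0.5, 1.5) by either route, confirming that the two orders produce the same signal.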
Now we are familiar with signals and signal transformations. In the next article, we will deal with the different kinds of standard signals used.
Time Signals & Signal Transformation in PDF
Did you like this article on Time Signals and Signal Transformation? Let us know in the comments.
For more information on GATE 2019, you can click the links given below
|Time Management Tips for GATE 2019||GATE 2019 Exam Schedule Out|
|GATE Virtual Calculator 2019||PSUs Recruitment Through GATE 2019|
Practice questions for the GATE 2019 exam and boost your preparation. Evaluate your performance and work on your weak areas.
Discuss your doubts with our experts as well as with other GATE aspirants and get them cleared. Our team is there to help you all the time.
The Water Cycle for Kids: Options for homeschool science
USGS Water Science for Schools – A complete guide for studying everything about water including the water cycle. From the chemical make-up of water, to rivers and streams, experiments, water supplies, and water usage, this site contains virtually everything your child will want to know about the water cycle and water supply on Earth.
BBC’s Rivers and Coasts – The water cycle, how rivers and coasts change, and the effect of rivers and coasts on the lives of people. There are printable worksheets and some animated graphics, but the site is primarily linear learning using a “next” feature to page through the informational content. This site is well-suited for elementary or middle school aged children to learn about the water cycle.
Environmental Protection Agency’s Ecosystems – site for kids to explore acid rain, watersheds, preservation, and protection of our natural resources. Additionally, there are links to other interactive sites for exploring ecosystems. While this site isn’t specifically about the water cycle, the information is primarily regarding the importance of clean water for man, animals, plants and the preservation of our world. Also check out the EPA’s water cycle animation.
Hydropower – Electricity from Moving Water – A website about energy resources, both renewable and non-renewable. Hydroelectric power is explained with graphics that help children see concepts. The explanations are thorough, and reluctant readers will benefit by having the content read to them. This site will help your child understand why water and the water cycle are important to people in additional ways.
Tsunami Animations – Students interested in water phenomena may also be interested in tsunamis or tidal waves. PBS’s Savage Earth animation contains text explaining how a shift in the earth’s plates can cause a tsunami. The Office of Naval Research’s Ocean in Motion animation explains how earthquakes, landslides, and volcanoes can all cause large waves. The site provides in-depth information about oceans and waves.
Three areas of the world contain rainforests: Central and South America; central and western Africa; and southeast Asia. Each of these areas is home to dozens of turtles living on land and in water. Though most of these species are opportunistic omnivores, differences in diet are noted between species, and some species have evolved into prey specialists.
The rainforest is home to many species of tortoise, including two popularly kept as pets: red-footed tortoises (Chelonoidis carbonaria) and yellow-footed tortoises (Chelonoidis denticulata). These colorfully footed turtles are omnivores that eat plant material, invertebrates like snails, and carrion found on the forest floor. Fruit is an important component of their diet, and the breeding behavior of male yellow-footed tortoises is known to be affected by fruit abundance.
Semi-Aquatic and Terrestrial Rainforest Turtles
Semi-aquatic and terrestrial rainforest turtles are generally omnivores since rainforest habitats offer a lot of fungi, fruit and invertebrates for food. In contrast to the tortoises of the rainforest, which rely heavily on plant material, some rainforest turtles, like Home’s hinge-back turtle (Kinixys homeana), are noted for being among the most highly carnivorous terrestrial turtles known to science. The tiny Vietnamese leaf turtle (Geoemyda spengleri) is also an omnivore that is especially fond of earthworms and slugs, though they likely eat fallen fruit as well.
Aquatic Rainforest Turtles
Turtles of the rainforest waterways eat a diet similar to aquatic turtles in other regions. Most species, like the Vietnamese pond turtle (Annamemys annamensis), eat an omnivorous diet, including crabs, crayfish, fish and aquatic vegetation. Fly River turtles (Carettochelys insculpta) of New Guinea and Australia inhabit crystal clear waters and consume fish, invertebrates and plant material.
Dietary Specialists of the Rainforest
Some turtles found in the rainforest are dietary specialists. The fully aquatic mata mata (Chelus fimbriata) has a number of unique adaptations that facilitate the capture of fish. Remaining completely still in the muddy waters of the South American rainforest pools, these turtles are covered in small flaps of skin that make the turtle look like an algae-coated rock in the pool. Through a highly developed suction feeding mechanism in the mouth, the turtles quickly suck in entire fish when they venture too close. The giant Amazon River turtle (Podocnemis expansa) is another dietary specialist that attains its large size on a diet of fallen fruit and seeds.
- Tortoise Trust: Tropical Rainforest Tortoises and Turtles: Practical Techniques for the Provision of Adequate Ambient Humidity in Captivity
- Arkive.org: South American Yellow-Footed Tortoise (Chelonoidis denticulata)
- Woodland Park Zoo: Red-Footed Tortoise
- Arkive.org: Home's Hinge-Back Tortoise (Kinixys homeana)
- The Turtle Pond: The Vietnamese Leaf Turtle, Geoemyda spengleri
- National Aquarium: Giant Amazon River Turtle
- The Asian Turtle Consortium: Species Accounts: Annamemys Annamensis: Vietnamese Pond Turtles
- University of North Carolina: Mata Mata Turtle
- Austin's Turtle Page: Fly River Turtle Caresheet
The black mamba is a species of mamba (a group of venomous snakes). Two drops of its venom can kill a human in 20 minutes because the venom is so toxic. It has also been called the shadow of death by Africans for its quickness and very ill temper. Black mambas can exceed 12 feet in length when fully mature.
The snake's scientific name is Dendroaspis polylepis: Dendroaspis meaning "tree asp" and polylepis meaning "many scaled." The name "black mamba" is given to the snake not because of its body color but because of its ink-black mouth. It displays this physical attribute when threatened.
The black mamba's back skin color is olive, brownish, gray, or sometimes khaki. The adult black mamba's length is on average 2.5 meters (8.2 ft), but some specimens have reached lengths of 4.3 to 4.5 meters (14 to 15 ft). Black mambas weigh on average about 1.6 kilograms (3.5 lb). The black mamba is the second longest venomous snake in the world, exceeded in length only by the king cobra. The snake also has an average life span of 11 years in the wild.
The black mamba lives in Africa, occupying the following range: northeast Democratic Republic of the Congo, southwestern Sudan to Ethiopia, Eritrea, Somalia, Kenya, eastern Uganda, Tanzania, southwards to Mozambique, Malawi, Zambia, Zimbabwe and Botswana to KwaZulu-Natal in South Africa, and Namibia; then northeasterly through Angola to southeastern Zaire. The black mamba is not commonly found above altitudes of 1000 metres (3280.8 feet), although its distribution does reach 1800 metres (5905.5 feet) in Kenya and 1650 metres (5413.3 feet) in Zambia. The black mamba was also recorded in 1954 in West Africa in the Dakar region of Senegal. However, this observation, and a subsequent observation that identified a second specimen in the region in 1956, has not been confirmed, and thus the snake's distribution there is inconclusive. The black mamba's distribution contains gaps within the Central African Republic, Chad, Nigeria and Mali. These gaps may lead physicians to misidentify the black mamba and administer an ineffective anti-venom.
The black mamba has adapted to a variety of climates, ranging from savanna and woodlands to rocky slopes, dense forests and even humid swamps of Africa. The grassland and savanna woodland/shrubs that extend through central, eastern and southern Africa are the black mamba's typical habitat. The black mamba prefers more arid environments such as light woodland, rocky outcrops, and semi-arid dry bush country.
Sugarcane fields dominate much of the black mamba's habitat, and the snake's environment is rapidly diminishing. In Swaziland alone, 75% of the population is employed in subsistence farming. Because of agricultural encroachment on the black mamba's habitat, the snake is commonly found in sugarcane fields. The black mamba will climb to the top of the sugarcane to bask in the sun and possibly wait for prey. The majority of human attacks occur in the sugarcane fields, as thousands of workers must plow the fields by hand. This encroachment on the snake's territory contributes to potentially dangerous human contact with these venomous snakes.
The black mamba uses its speed to escape threats, not to hunt prey. It is known to be capable of reaching speeds of around 20 kilometers per hour (12 mph), traveling with up to a third of its body raised off the ground. Over long distances the black mamba travels 11 to 19 kilometers per hour (6.8 to 12 mph), but in short bursts it can reach a speed of 23 kilometers per hour (14 mph), making it the fastest land snake. It is shy and secretive; it always seeks to escape when confronted. When a black mamba is cornered it mimics a cobra by spreading a neck-flap, exposing its black mouth, and hissing. If this attempt to scare away the attacker fails, the black mamba will strike repeatedly, injecting large amounts of venom. The black mamba is a diurnal snake. Although its scientific name seems to be indicative of tree climbing, the black mamba is rarely an arboreal snake.
As stated, the black mamba is diurnal. It is an ambush predator that waits for prey to get close. If the prey attempts to escape, the black mamba will follow up its initial bite with a series of strikes. When hunting, the black mamba has been known to raise a large portion (approximately 48 centimetres or 18 inches) of its body off the ground. The black mamba will release larger prey after biting it, but smaller prey, such as birds or rats, are held onto until the prey's muscles stop moving. Black mambas have been known to prey on bushbabies, bats, and small chickens.
The venom of the black mamba consists mainly of neurotoxins with an LD50 of 0.25 mg/kg. Its bite delivers about 100–120 mg of venom on average; however, it can deliver up to 400 mg. The mortality rate is nearly 100% unless the snakebite victim is promptly treated with antivenom. Black mamba bites can potentially kill a human within 20 minutes, but death usually occurs after 30–60 minutes, sometimes taking up to three hours. (The fatality factor depends on the health, size, age, and psychological state of the human, the penetration of one or both fangs from the snake, the amount of venom injected, the location of the bite, and proximity to major blood vessels. The health of the snake and the interval since it last used its venom mechanism are also important.) Nowadays, there is a polyvalent antivenom produced by SAIMR (South African Institute for Medical Research) to treat all black mamba bites from different localities.
The black mamba’s venom contains dendrotoxin. The toxin disrupts the process of muscle contraction mediated by the sodium-potassium pump. First, the toxin causes the release of neurotransmitters at peripheral synapses. Then, the dendrotoxin causes repetitive depolarization in both motor and sensory neurons. This rapid activation of each neuron leads to epileptic activity. Finally, the dendrotoxin blocks potassium channels, stopping the movement of calcium. Calcium levels are therefore unregulated, leading to muscular paralysis and eventually death. An example of the potency of the venom is seen in records from mice: normally, the death time of a mouse after subcutaneous injection of many toxins is around 7 minutes, but black mamba venom can kill a mouse in 4.5 minutes.
Because of its highly potent venom, its temperament, and its speed, the black mamba is regarded as one of the most dangerous snakes in Africa. However, humans bitten by a black mamba are rare, as the snake would rather avoid confrontation with humans.
If bitten, common symptoms to watch for are rapid onset of dizziness, coughing or difficulty breathing, and erratic heartbeat. In extreme cases, when the victim has received a large amount of venom, death can result within an hour from respiratory or cardiac arrest. The black mamba's venom has also been known to cause paralysis, and death is due to suffocation resulting from paralysis of the respiratory muscles.
The yellow mongoose is just one of the many animals that prey on black mamba eggs. Mongooses are the main predators of the black mamba. They usually prey on young snakes and eggs. Mongooses are notable for their resistance to snake toxins. This resistance is caused by mutations in their nicotinic acetylcholine receptor. These mutations prevent the neurotoxin present in snake venom from binding to the receptor, thus preventing the associated toxicity. Because of the mongoose's resistance to snake venom, adult mambas have trouble fighting them off, although mongooses seldom attack adult snakes, which are too large for the mammals to kill with ease. Cape file snakes are also predators of young black mambas. Wild boars have been known to prey on adult and young black mambas as well as their eggs.
Images: a black mamba at Wilmington's Serpentarium; a close-up of a black mamba's head; a close-up of the snake climbing a branch at London Zoo; a black mamba at the St. Louis Zoo.
Why not give your students important practice using place value while celebrating the start of school or the arrival of Autumn?
In this activity the students will count collections of place value blocks and then color their answer with a corresponding color. It is like a color-by-number, only using place value blocks and a 100s chart.
Place value is a very important concept for primary children to work on. Being able to relate numbers to corresponding images, such as place value blocks, will greatly help them with their addition and subtraction work, especially when they start working with two digit numbers.
This file contains two pages:
1) A 100s chart attached to a sheet of place value images for the students to count.
2) An answer key showing the final image.
The final image your students will create is a ripe, red apple! I also have addition & subtraction mystery pages for this red apple in my store.
This file is also available as 8.5x11 sheets. If you are unable to print 11x17, these would be a much better option. In the 8.5x11 size the 100s chart and place value clue sheet are printed separately. Please visit my store to get that file as well as other addition, subtraction, multiplication, & place value mystery pictures. |
Who Owns this Land?
Michael O'Malley, Associate Professor of History and Art History, George Mason University
As the Civil War continued, more and more Confederate land came under Union control. These occupied lands fell under the Army's jurisdiction, and were governed by Union officers in different ways. In several instances, Union officers confiscated the lands of confederates and distributed them to former slaves. They did this partly to punish rebels, partly to hinder the South economically, and partly because they had come to regard slavery as an immoral theft of the slave's labor. Union officers often had little sympathy for wealthy planters, who they tended to regard as traitors and deadly enemies.
The most notable instances of confiscation took place in New Orleans and in the Sea Islands of South Carolina and Georgia, which the Union had captured in 1861. Special Field Order #15, issued by General William T. Sherman on January 16, 1865, allocated former Confederate land to the Freedmen. As a result, forty thousand Freedmen settled in the Sea Islands in the belief that the federal government was providing them with land.
Meanwhile, in Washington, a debate began about what to do with the former Confederate states. Should they be returned as they had been before the War? Should they be reformed into new States? Who would be in charge—Congress, or the President? In 1863 Lincoln announced a tentative proposal called the "ten percent plan." Under this plan, former states would be readmitted into the Union if ten percent of white voters took an oath of loyalty to the Union. Members of Lincoln's own party objected that this was far too lenient, but were unable to effectively oppose him. In part, they were pacified by the fact that Lincoln had finally endorsed emancipation of the slaves.
After Lincoln's assassination, Andrew Johnson more or less continued in the spirit of the "ten percent plan," pardoning southern whites wholesale. By 1865, former "rebel" leaders had been reelected to Congress, including, for example, Alexander Stephens, who had been Vice President of the Confederacy. In 1865 Johnson signed a proclamation which insisted that all confiscated land should be returned to its former owners, reversing grants of land made by Union Generals in several places, including the Sea Islands. In addition, each of the southern states passed what were called "black codes" —laws designed specifically to limit the freedoms and options of the former slaves.
Lincoln's opponents in his own Party, dubbed the "Radical Republicans," were outraged. They declared the southern states "unreconstructed," refused to seat the newly elected congressmen and senators, and began impeachment proceedings against Andrew Johnson. They also founded the Freedmen's Bureau, a federal agency designed specifically to address the problems, and the rights, of the newly freed people. The Radical Republicans passed the thirteenth, fourteenth, and fifteenth amendments, which abolished slavery and established that citizenship, and the right to vote, could not be limited on the basis of race. They embarked on a program to "Reconstruct" the South.
Most of the leading radicals had been active abolitionists for many years before the war. At their most idealistic, Radicals like Thaddeus Stevens imagined using economic and military force to "break the backs" of the slave holding class and bring about genuine racial equality. Stevens argued repeatedly that the property of former slave owners should be given to their former slaves. This would crush slave owning aristocrats and establish a solid economic basis for African American citizenship. Like Lincoln, he believed that a virtuous democracy should be composed of free, independent small producers and farmers.
African Americans shared this belief. They wanted land, votes, and access to education, things which had been denied them for two centuries. They had a keen sense of what slavery had taken from them, and of the fact that their labor had made the plantations profitable. They argued that they had already paid for plantation land with their sweat and with their service to the Union, and they urged the Federal government to grant them "forty acres and a mule" in recognition of the labor that slavery had stolen from them.
But in the North, political debate increasingly focused less on the Radical plan of land distribution and more on the right to vote. Northern Republicans cooperated with the freed men and women to establish the South's first system of free public schools. From about 1867 through 1870, African Americans experienced a remarkable increase in political power, and elected African American officials at the Federal, State, and local levels.
White southerners often cooperated in this "Radical" rule, especially members of the non slave holding classes (only 25% of southern whites had ever owned slaves). But led by ex-confederates like Nathan Bedford Forrest, who founded the Ku Klux Klan in 1867, white supremacists began a terrorist counterattack against racial equality and African American political gains. They denounced northern "carpetbaggers" who they said had come south "to fatten on our misfortune." Southern and northern newspapers began to recount stories of corruption and mismanagement in the reconstruction state governments.
The generation of abolitionists who led the Radical Republicans--Thaddeus Stevens, Charles Sumner, Wendell Phillips--either died or lost political power. Increasingly, northerners began to lose the will to implement Reconstruction policies. Most had never favored racial equality and now regarded the elevation of former slaves as a mistake. Northerners nervous about land confiscation generally argued that the right to vote would be enough to protect the political and economic rights of African Americans. A financial panic in 1873 made the expenses of military occupation of the South harder to argue for politically. By 1875, Reconstruction was over in all but name. Most African Americans had been reduced to agricultural laborers or sharecroppers rather than landowners. By 1890, African American voting had almost entirely ceased.
Updated April 2004
What is mastery?
We cannot define “mastery” without first defining “proficiency.”
Proficiency, from the Latin verb proficere, means “accomplishment or progress.” Accomplishment in the education setting is generally described as reaching an established performance benchmark. Most commonly, this benchmark represents the minimally-acceptable level of student performance at a particular grade level in a specific discipline. Proficiency represents “good enough for now.” Different states (and even districts) may have established unique benchmarks that students must reach in order to be considered proficient. Using a proficiency model, students must demonstrate that they have attained a skill level that’s on par with a specific benchmark before proceeding to the next grade level.
While “proficiency” comes to us courtesy of the Romans, the French gave us the root for “mastery,” which they described as “intellectual command.” Mastery is of greater depth than proficiency. It connotes knowledge at a much deeper level. It is the point at which students have not only met specific benchmarks but also gained a complete understanding of the content and can consistently demonstrate the skill.
“Although I may have reached the minimum level of proficiency to pass the tests, I haven't completely mastered the skills needed to be truly college and career ready.”
What does mastery look like?
Mastery can be hard to quantify without looking at examples from the world around us.
Example 1: Odell Beckham, Jr., has mastered the one-handed catch. If you’re a football fan, you’ve heard of Beckham and his unique catches. Through hard work, dedication, and determination, he first became proficient in catching a football before mastering the art of catching a football with one hand.
Example 2: Board certification is required to practice medicine. It’s not enough to simply pass courses or dedicate hundreds of hours to residency programs—the final step in certifying doctors requires demonstration of mastery.
Example 3: Jimi Hendrix is described by the Rock and Roll Hall of Fame as “arguably the greatest instrumentalist in rock and roll history.” Even so, when 15-year-old Hendrix first picked up a guitar, he had a long way to go before he mastered it. Of note: Hendrix was naturally left-handed, but to appease his father he learned to play a right-handed guitar. Eventually he learned to play “both-handed” and could simply flip right handed guitars over to play them left-handed. How’s that for mastery?
What would schools look like if mastery, rather than proficiency, were the benchmark?
The issue is that mastery is typically NOT required before students move on to new content in today's classroom. Students "pass" at a 70 percent proficiency level in a subject, but they are advanced with a 30 percent knowledge gap. As students move on to tougher content, those knowledge gaps add up over time. It's difficult to master new content in a given subject area when students start with a lack of essential knowledge. As Sal Khan, founder of Khan Academy, has described, it is like building a house on an incomplete foundation.
Life in the Solar System and Beyond
In Life in the Solar System and Beyond, Professor Jones has written a broad introduction to the subject, addressing important topics such as what life is, the origins of life, and where to look for extraterrestrial life.
The chapters are arranged as follows: Chapter 1 is a broad introduction to the cosmos, with an emphasis on where we might find life. In Chapters 2 and 3 Professor Jones discusses life on Earth, the one place we know to be inhabited. Chapter 4 is a brief tour of the Solar system, leading us in Chapters 5 and 6 to two promising potential habitats, Mars and Europa. In Chapter 7 the author discusses the fate of life in the Solar system, which gives us extra reason to consider life further afield. Chapter 8 focuses on the types of stars that might host habitable planets, and where in the Galaxy these might be concentrated. Chapters 9 and 10 describe the instruments and techniques being employed to discover planets around other stars (exoplanetary systems), and those that will be employed in the near future. Chapter 11 summarizes the known exoplanetary systems, together with an outline of the systems we expect to discover soon, particularly habitable planets. Chapter 12 describes how we will attempt to find life on these planets, and the final chapter brings us to the search for extraterrestrial intelligence, and the question as to whether we are alone. |
The diagram below illustrates the distinction between systematic and random errors. Systematic errors tend to be consistent in magnitude and/or direction. If the magnitude and direction of the error is known, accuracy can be improved by additive or proportional corrections. Additive correction involves adding or subtracting a constant adjustment factor to each measurement; proportional correction involves multiplying the measurement(s) by a constant.
Unlike systematic errors, random errors vary in magnitude and direction. It is possible to calculate the average of a set of measured positions, however, and that average is likely to be more accurate than most of the measurements.
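To illustrate the two correction styles and the averaging of random error (a hypothetical simulation with made-up numbers, not data from any real survey), consider the following Python sketch:

```python
import random

random.seed(42)

true_distance = 100.0   # meters (hypothetical ground truth)
tape_bias = 0.30        # known systematic additive error: tape reads 0.30 m long
scale_factor = 1.001    # known systematic proportional error: 0.1% stretch
random_sd = 0.05        # random error, standard deviation in meters

# Simulate 20 repeated measurements of the same distance
measurements = [
    (true_distance + tape_bias) * scale_factor + random.gauss(0.0, random_sd)
    for _ in range(20)
]

# Systematic corrections: proportional first (divide out the stretch),
# then additive (subtract the known bias)
corrected = [m / scale_factor - tape_bias for m in measurements]

# Random error cannot be corrected measurement-by-measurement,
# but averaging repeated measurements suppresses it
average = sum(corrected) / len(corrected)
print(f"single corrected measurement: {corrected[0]:.3f} m")
print(f"average of 20 measurements:  {average:.3f} m (closer to {true_distance} m)")
```

The order of the corrections here mirrors how the errors were introduced; in practice the appropriate order depends on how the systematic error enters the measurement.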
In the sections that follow, we compare the accuracy and sources of error of two important positioning technologies: land surveying and the Global Positioning System. |
This page provides a web service to compute the reverse complement of a given DNA or RNA sequence, along with a brief explanation. The reverse complement sequence is the partner sequence with which the given sequence would pair with the highest possible affinity.
In double-stranded DNA and RNA sequences, adenine (A) pairs with thymine (T) or with uracil (U), and guanine (G) pairs with cytosine (C). In the complementary pairing sequence, each nucleotide is substituted by its partner. Nucleotide sequences also have direction, and in double-stranded DNA or RNA the paired sequences run in opposite directions (they are antiparallel). Hence the complement sequence must also be reversed.
While in the real world DNA is normally double stranded (coupled with its reverse complement), RNA is usually single stranded. However, RNA does contain short double-stranded parts, largely responsible for the molecule taking its required spatial shape. These parts can be detected with the Nussinov algorithm and visualized using dot-bracket notation.
DNA can also be single stranded. During protein synthesis (at the initial stage, the transcription of genomic information in the nucleus), DNA pairs with RNA. If the molecules are long enough, they will usually stick together even if only part of the nucleotides pair. G and C interact more strongly than A and T, so in some contexts they may be understood as "more reverse complement" of each other.
Certain tasks also require reverse-complementing ambiguity codes, which denote more than one possible nucleotide. Such codes may arise from sequencing errors, indicate that a specified enzyme (such as a restriction enzyme) accepts several alternatives for the given position, or result from a consensus search between different organisms. Normally purine (R – A or G) is complemented into pyrimidine (Y – C or T) and amino (M – A or C) into keto (K – G or T), while strong (S) and weak (W), which differ by the number of hydrogen bonds (three or two), are not swapped (a nucleotide and its complement of course use the same number of bonds to make the complementing pair). Codes that specifically exclude one nucleotide can be complemented into codes that specifically exclude the complementing nucleotide.
See the IUPAC nucleotide ambiguity code table for the typical conversion rules.
This algorithm is relatively easy to implement unless performance is important (the real-world sequences that require processing can be very long). Developers often find the sequence-reversing step an interesting task in itself if it is not done in the straightforward native way.
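As an illustration of the core operation, here is a minimal Python sketch covering the four bases and the IUPAC ambiguity codes discussed above (illustrative code only, not the implementation behind this web service):

```python
# Complement table for the four bases plus IUPAC ambiguity codes.
# Note that S (strong, G/C) and W (weak, A/T) map to themselves.
COMPLEMENT = {
    "A": "T", "T": "A", "U": "A", "G": "C", "C": "G",
    "R": "Y", "Y": "R",   # purine <-> pyrimidine
    "M": "K", "K": "M",   # amino <-> keto
    "S": "S", "W": "W",   # strong and weak are self-complementary
    "B": "V", "V": "B",   # "not A" <-> "not T"
    "D": "H", "H": "D",   # "not C" <-> "not G"
    "N": "N",             # any base
}

def reverse_complement(seq, rna=False):
    """Return the reverse complement of a DNA (or RNA) sequence."""
    comp = "".join(COMPLEMENT[base] for base in seq.upper())
    if rna:
        comp = comp.replace("T", "U")
    return comp[::-1]  # the paired strand runs antiparallel, so reverse it

print(reverse_complement("ATGCGT"))           # -> ACGCAT
print(reverse_complement("AUGGC", rna=True))  # -> GCCAU
```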
Biotechnicians frequently need to perform this operation when designing primers – short DNA sequences that must stick to a known subsequence in the larger DNA they investigate. Reverse complement is also usually implemented (directly or indirectly) in all tools that work with genomic DNA, since it is initially unknown which of the two DNA strands contains the sequence being searched.
April 23, 1999 Physics 208
Print your name and section clearly on all five pages. (If you do not know your section number, write your TA’s name.) Show all work in the space immediately below each problem. Your final answer must be placed in the box provided. Problems will be graded on reasoning and intermediate steps as well as on the final answer. Be sure to include units wherever necessary, and the direction of vectors. Each problem is worth 25 points. In doing the problems, try to be neat. Check your answers to see that they have the correct dimensions (units) and are the right order of magnitude. You are allowed one 8½ x 11" sheet of notes and no other references.
The exam lasts exactly 50 minutes.
(Do not write below)
Problem 1: __________
Problem 2: __________
Problem 3: __________
Problem 4: __________
1. A laser beam is incident on a pair of slits separated by 0.12 mm.
a. What is the slit-to-screen distance if the bright fringes are 5.0 mm apart and the laser light has a wavelength of 633 nm? (9 pts.)
b. What will be the fringe spacing on the screen above with 480-nm light? (8 pts.)
c. If the whole apparatus is immersed in water with an index of refraction of 1.33, what will be the fringe spacing on the screen for the 480-nm light above? (8 pts.)
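A quick numerical check of this problem (not part of the original exam), using the double-slit relation Δy = λL/d and the fact that the wavelength in water shortens to λ/n:

```python
d = 0.12e-3   # slit separation, m
dy = 5.0e-3   # observed fringe spacing, m
lam = 633e-9  # laser wavelength, m

# (a) slit-to-screen distance from dy = lam * L / d
L = dy * d / lam
print(f"(a) L = {L:.2f} m")  # ~0.95 m

# (b) fringe spacing at 480 nm with the same geometry
lam2 = 480e-9
print(f"(b) dy = {lam2 * L / d * 1e3:.1f} mm")  # ~3.8 mm

# (c) in water the wavelength becomes lam2 / n
n = 1.33
print(f"(c) dy = {lam2 / n * L / d * 1e3:.1f} mm")  # ~2.9 mm
```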
2. An advanced civilization has developed a 1000-kg spaceship that goes, with respect to the galaxy, only 50 km/s slower than light.
b. What is the galactic diameter as measured in the ship’s frame of reference? (6 pts.)
c. What is the momentum of the spaceship in the frame of reference of the galaxy? (6 pts.)
d. What work must be done on the spaceship to accelerate it to this speed from rest? (6 pts.)
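A sketch of the quantitative parts (not part of the original exam; part a is missing from this copy, and the ~100,000-light-year galactic diameter used for part b is my assumption):

```python
import math

c = 299_792_458.0   # speed of light, m/s
m = 1000.0          # rest mass of the ship, kg
v = c - 50_000.0    # 50 km/s slower than light

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(f"gamma ~ {gamma:.1f}")  # ~54.8

# (b) length contraction of an assumed 100,000 light-year diameter
D_rest_ly = 100_000.0
print(f"(b) contracted diameter ~ {D_rest_ly / gamma:,.0f} light-years")

# (c) relativistic momentum p = gamma * m * v
print(f"(c) p ~ {gamma * m * v:.3e} kg m/s")

# (d) work-energy theorem: W = (gamma - 1) * m * c^2
print(f"(d) W ~ {(gamma - 1.0) * m * c**2:.3e} J")
```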
3. A hydrogen discharge emits light with a wavelength of 103 nm.
a. What principal quantum numbers are involved in this transition? (7 pts.)
b. What is the energy of the emitted photons (in eV)? (6 pts.)
c. If these photons are incident on a metal with a work function of 4.1 eV, what is the maximum energy of the ejected electrons? (6 pts.)
d. What is the minimum de Broglie wavelength of the electrons ejected above? (6 pts.)
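A numerical walkthrough of this problem (not part of the original exam; constants rounded, with hc ≈ 1240 eV·nm):

```python
import math

hc = 1240.0  # eV * nm
lam = 103.0  # nm

# (b) photon energy E = hc / lam
E_photon = hc / lam  # ~12.0 eV

# (a) hydrogen levels E_n = -13.6/n^2 eV; find the n -> 1 transition that matches
for n in range(2, 6):
    dE = 13.6 * (1 - 1 / n**2)
    print(f"n = {n} -> 1: {dE:.2f} eV")  # n = 3 gives ~12.1 eV, the best match

# (c) photoelectric effect: K_max = E_photon - work function
K_max = E_photon - 4.1  # ~7.9 eV

# (d) minimum de Broglie wavelength lam = h / sqrt(2 m K); in handy units,
# lam [nm] ~ 1.226 / sqrt(K [eV])
lam_min = 1.226 / math.sqrt(K_max)
print(f"E = {E_photon:.2f} eV, K_max = {K_max:.2f} eV, lam_min = {lam_min:.2f} nm")
```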
4. A neutral copper atom contains 29 electrons.
a. How many electrons are in the L shell in the ground state of copper? (5 pts.)
b. What is the orbital angular momentum of an electron in the 2p state? (5 pts.)
c. What is the energy (in eV) of the Kα characteristic x-ray emitted by copper? (5 pts.)
d. What is the wavelength of the x-ray calculated above? (5 pts.)
e. To what quantum state would you expect the outermost (valence) electron of copper in the 4s state to be excited by the absorption of a photon of the lowest possible energy? (5 pts.) |
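A sketch of the quantitative parts of this problem (not part of the original exam; Moseley's law is an approximation, so the Kα figures are rough):

```python
import math

hbar = 1.0545718e-34  # J s
Z = 29                # atomic number of copper

# (b) orbital angular momentum of a p-state (l = 1): L = sqrt(l(l+1)) * hbar
l = 1
L = math.sqrt(l * (l + 1)) * hbar
print(f"(b) L = sqrt(2) * hbar ~ {L:.3e} J s")

# (c) Moseley's law for the K-alpha line: E ~ 13.6 eV * (3/4) * (Z - 1)^2
E_Ka = 13.6 * 0.75 * (Z - 1) ** 2
print(f"(c) E(K-alpha) ~ {E_Ka / 1e3:.1f} keV")  # ~8.0 keV

# (d) wavelength from E = hc / lam, with hc ~ 1240 eV nm
print(f"(d) lam ~ {1240.0 / E_Ka:.3f} nm")  # ~0.155 nm
```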
Chesapeake Bay Field Office works with private landowners to restore bog turtle
The bog turtle is one of North America's smallest turtles with a light brown to ebony carapace, a bright orange, yellow or red blotch on the side of the head and neck, and a yellow plastron with black patches.
The northern population of the bog turtle extends from western Massachusetts to northern Maryland and Delaware. Despite this fairly extensive distribution, the bog turtle is limited to a specific and rare type of wetland.
Saturated, spring-fed wetlands such as bogs, fens, wet meadows, sedge marshes and pastures with thick mucky organic soils provide the habitat these turtles require for feeding, breeding and hibernation. These wetlands are dominated by low grasses and sedges with a mix of shrub species.
The bog turtle was listed as threatened under the Endangered Species Act in 1997 due to excessive collection for the pet trade and loss of the unique wetlands on which they depend. The collection of bog turtles has diminished since its listing, but loss of these rare wetlands still occurs. Development, woody plant succession and encroachment of invasive plants all contribute to loss of bog turtle habitat.
More than 97 percent of bog turtle wetlands occur on private lands, so recovery of this species depends heavily on private landowners. Since 1997, various habitat restoration techniques have been completed at 17 wetlands on private lands in Maryland totaling more than 150 acres.
Through the Coastal and Endangered Species programs, the Chesapeake Bay Field Office is working with private landowners and other partners to protect and restore bog turtles and the wetlands they need. Current activities include:
For more information, contact: |
Go to the main birds page.
Birds lay their eggs and raise their young to coincide with the time when their particular food source is most abundant. Thrushes are able to eat all sorts of insects, slugs and seeds found on the ground and hence are not too limited by peaks and troughs of food. They just need to ensure that there is enough to feed them and their young and that the trees and bushes in which they nest can provide enough cover.
Insect eaters need to wait a little longer, until their prey hatches or emerges. For instance, winter moth caterpillars abound in May, and some species await this event. Seed eaters need to wait for plants to flower and set seed and hence are later still. Some birds of prey feed on young birds, whose populations are highest from midsummer onwards.
The table below indicates how these varying strategies influence the nesting period of birds.
Key to the table: eggs in the nest; nestlings in the nest; fledglings being looked after by their parents.
Citizen science involves people as volunteers who help scientists by collecting information about the environment.
It's as easy as using your eyes and ears to track changes in nature. And, you can learn about the environment while gathering the information. Your information will help scientists and governments monitor and protect ecosystems.
Ecological Monitoring and Assessment Network
The Ecological Monitoring and Assessment Network of Environment Canada is a citizen-science group.
Its NatureWatch programme proposes four subjects for observation:
- You can report on amphibians in your area.
- Frogs and toads are especially sensitive to pollution and other environmental changes. Because of this, they can tell us a lot about the health of wetlands.
- You can report the date on which certain plants flower in your area.
- With the help of that information, scientists can measure the effects of climate change on the flowering in various regions of Canada.
- You can volunteer to observe when ice forms in autumn and thaws in spring.
- Scientists want to know whether climate change is affecting streams and lakes.
- By taking samples of earthworms, you can help scientists to find out how many species of earthworms are in Canada, and where they live.
- The research results help researchers follow ecological changes that may harm the environment.
How can you help protect water resources and ecosystems? Here's a list of major Canadian clubs involved with water. Join one!
Canadian Parks and Wilderness Society
The Council of Canadians—Water Campaign
David Suzuki Foundation
Earth Day Canada
Friends of the Earth Canada
The Nature Conservancy
Sierra Youth Coalition |
Springtail, (order Collembola), any of approximately 6,000 small, primitive, wingless insects that range in length from 1 to 10 mm (0.04 to 0.4 inch). Most species are characterized by a forked appendage (furcula) attached at the end of the abdomen and held in place under tension from the tenaculum, a clasplike structure formed by a pair of appendages. Although the furcula provides a jumping apparatus for the collembolan, enabling it to catapult itself (hence the common name springtail), the usual method of locomotion is crawling. Springtails also have a ventral abdominal, suckerlike tube (collaphore), which secretes a sticky, adhesive substance and also takes up water. The young hatch from spherical eggs and closely resemble the adult. There can be 3 to 12 molts before maturity and up to about 50 molts during the lifetime of a springtail.
The springtail, found in all types of soil and leaf litter throughout the world from Antarctica to the Arctic, is one of the most widely distributed insects. They are among the few species of insects that are permanent residents of Antarctica. Certain springtails known as snow fleas are active at near-freezing temperatures and may appear in large numbers on snow surfaces. Springtails live in soil and on water and feed on decaying vegetable matter, sometimes damaging garden crops and mushrooms. The small (2 mm long), green-coloured lucerne flea (Sminthurus viridis), one of the most common species, is a serious pest to crops in Australia. When necessary, insecticides are used to control springtails. Fossil springtails are among the oldest insect fossils known.
Depending on the classification scheme, springtails may be considered to be true insects (class Insecta) or in a group (class Parainsecta) closely related to the insects. |
A new report from the Global Carbon Project shows the world’s machines are belching more carbon dioxide than ever before. The report, which measures global CO2 emissions, found that gases from all sources jumped by more than 750 million tons during 2013 — a 2.3 percent increase in the dangerous hothouse gas over already extreme 2012 emission levels.
In total, 39.8 billion tons of CO2 hit the atmosphere in 2013, up from about 39.1 billion tons in 2012.
(Global carbon emissions continued along a worst-case track during 2013. Note that estimated temperature increases are for this century only. For context, it took 12,000 years for the world to warm 5 degrees Celsius at the end of the last ice age. Image source: Global Carbon Project.)
On the current track, global CO2 emissions will double in about 30 years. This pace of emissions increase is along the worst-case path projected by the UN’s IPCC — one that will hit 8.5 watts per meter squared of additional heat forcing at the top of the Earth’s atmosphere and greater than 1,000 ppm CO2-equivalent greenhouse gas heat forcing by the end of this century.
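As a back-of-envelope check on that doubling figure (my own arithmetic, not a calculation from the report), a steady 2.3 percent annual growth rate implies a doubling time of ln 2 / ln 1.023, or roughly 30 years:

```python
import math

growth = 0.023  # 2.3% annual increase in global CO2 emissions
doubling_time = math.log(2) / math.log(1 + growth)
print(f"doubling time at {growth:.1%} per year: {doubling_time:.1f} years")  # ~30.5
```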
Such a massive increase from human sources does not include amplifying feedback emissions from global methane or CO2 stores such as those now apparently destabilizing in the Arctic. Such emissions could add an additional 20 to 30 percent or greater heat forcing on top of the human forcing, according to scientific estimates, by the end of this century.
This massive blow would be more than enough to trigger a hothouse extinction event — one that could well rival or exceed the Permian (also known as ‘the great dying’) in its ferocity due to the very rapid pace of the human heat accumulation.
(IPCC impacts graphic taking into account the RCP 8.5 scenario. Image source: IPCC.)
Notably, the pace of emission increase was slightly slower than during 2012, which showed a 2.5% increase over 2011. The lag was due, in part, to slowing economic growth in coal-reliant China. The massive emitter has lately shown trends toward lowering its carbon out-gassing as it half-heartedly pushes for cleaner air and less coal use. The US, on the other hand, showed a jump in carbon emissions as a trend toward greater natural gas usage whip-lashed back toward coal due to higher natural gas prices.
The adoption of renewable energy has slowed global carbon emissions from absolute worst-case levels. However, the pace of renewable adoption and increasing energy efficiency is not yet enough to knock the world off the horrific RCP 8.5 track. Such a switch would require a much stronger commitment from India and China, together with an ever more rapid pace of transition away from fossil fuels for the developed world. To this point, both India and China have ominously opted out of a global climate summit to be held at the UN tomorrow. There, 120 global leaders will push for ways to rapidly reduce carbon emissions. But without buy-in from India and China, such measures may well be overwhelmed by increasing emissions from these very large and increasingly heavily mechanized Asian economies.
(Global CO2 concentrations as measured at the Mauna Loa Observatory. Image source: The Keeling Curve.)
Currently, global CO2 levels are hovering near their annual minimum at just above 395 parts per million after hitting a maximum level near 402 parts per million in May of 2014. At current rates of increase, global CO2 is likely to remain above the 400 parts per million concentration year-round within less than three years.
For context, the last time CO2 levels were this high, global temperatures were 2-3 degrees Celsius hotter than they are today and sea levels were at least 75 feet higher. But since humans emit a number of other powerful greenhouse gasses, the global CO2 measure alone doesn’t take into account the entire picture. If all other human heat-trapping gasses are added in, the global CO2-equivalent heat forcing (CO2e) is around 481 ppm, which is enough to increase temperatures, long-term, by about 3.8 degrees Celsius and to melt more than half of the world’s current ice sheets.
At the current pace of emissions, it will take less than 30 years to lock in a 550 ppm CO2-equivalent value — enough to melt all the ice on Earth and to raise temperatures by between 5 and 6 degrees Celsius.
As such, the need for a rapid transition to renewables, together with a reduction in harmful consumption, could hardly be more urgent. With ever more harmful impacts being locked in with each passing year, the world needed strong global climate policy action yesterday. But action today will be better than waiting another decade or more as the situation continues to worsen.
SAT CRITICAL READING
SENTENCE COMPLETION QUESTIONS
Tips on Handling Sentence Completion Questions
Before You Look at the Answer Choices, Think of a Word That Makes Sense
Your first step in answering a sentence completion question is, without looking at the answer choices, to try to come up with a word that fits in the blank. The word you think of may not be the exact word that appears in any of the answer choices, but it will probably be similar in meaning to the right answer. Then, when you turn to the answer choices, you’ll have an idea of what you’re looking for.
Try going through the sentence substituting the word blank for each missing word. Doing this will give you a feel for what the sentence means.
Unlike her gabby brother Bruce, Bea seldom blanks.
Just from looking at the sentence, you know the answer must be chatters, talks, or a synonym.
At this point, look at the answer choices. If the word you thought of is one of the five choices, select it as your answer. If the word you thought of is not a choice, look for a synonym of that word.
See how the process works in dealing with a more complex sentence.
The psychologist set up the experiment to test the rat’s ____; he wished to see how well the rat adjusted to the changing conditions it had to face.
Did You Notice?
The sentence above is actually two statements linked by a semicolon (;). The punctuation mark is your clue that the two statements support each other.
A semicolon signals you that the second statement develops the idea expressed in the first statement.
The psychologist set up the experiment to test the rat’s adaptability.
He wished to see how well the rat adjusted to the changing conditions it had to face.
Even before you look at the answer choices, you can figure out what the answer should be.
Look at the sentence. A psychologist is trying to test some particular quality or characteristic of a rat. What quality? How do you get the answer?
Note how the part of the sentence following the semicolon (the second clause, in technical terms) is being used to define or clarify what the psychologist is trying to test. He is trying to see how well the rat adjusts. What words does this suggest to you? Either flexibility or adaptability could complete the sentence’s thought.
Here are the five answer choices given:
The answer clearly is adaptability, (E).
Be sure to check out all five answer choices before you make your final choice. Don’t leap at the first word that seems to fit. You are looking for the word that best fits the meaning of the sentence as a whole. In order to be sure you have not been hasty in making your decision, substitute each of the answer choices for the missing word. That way you can satisfy yourself that you have come up with the answer that best fits. |
Guided Reading Survival Guide Part 3
In part 1 and part 2 of my Guided Reading Survival Guide blog series, I explained how teachers must go beyond the basal and provide authentic, high-quality supplemental texts ranging in genre and teach research-based reading strategies using our cast of animal characters.
Guided Reading Survival Guide: Using Hands-On Tools
It's time to put the strategies we discussed in the previous blog into students' hands--literally. Hands-on tools motivate and engage students and make practice fun. Each of our strategy animals has an accompanying hands-on tool. Students associate animals with comfort, safety, and play and when animals are personified, students readily understand and apply the lessons and messages from the animals. Just yesterday, I was doing a fact assessment. I always remind students to double-check their answers, but they often need several reminders to do so. Yesterday I got out my Fiona Fact Fluency Fox puppet and had Fiona remind the kids about double-checking. What do you know? They all double-checked their answers.
Each of the hands-on tools is displayed in the classroom; most are in clear, inexpensive glass jars with the animal label glued to the front. They make a cute display and are easily accessible.
In my classroom: Since students used Quinn the Questioning Quail to use textual evidence to answer questions, I created a set of Quinn's Quills. I purchased fuchsia highlighters from Amazon (to match Quinn's color), printed, laminated, and cut out a set of Quinn's heads, available in the Quinn Questioning Quail unit. I glued the heads to the highlighters. The head looks like it's upside down when the marker is closed, but this protects the head and the topnotch.
Before the lesson, I enlarged and laminated my copy of "What Lives in This Hole?", my guided reading text from Reading A-Z. The larger format allows all students to easily see the text and the lamination allows me to reuse it each year.
During the lesson, I modeled how to answer each question in the I Do section, thinking aloud as I went. I demonstrated how to use Quinn's question mark topnotch to first point to the answer, then highlight it and write the question number next to it. We then practiced the strategy together by answering the questions in the We Do section. Students pointed to the answer with Quinn's topnotch. Before we highlighted, we discussed each student's response to ensure that everyone was on track. Finally, we highlighted the answer.
Students absolutely LOVED Quinn's Quills and asked to use them during whole group reading time as well. Unfortunately, our basals can't be highlighted, but this is another benefit of using Reading A-Z printable books.
Check out the full line of reading and math hands-on tools. You students will LOVE them, too!
Check out tomorrow's blog to learn how to integrate multiple strategies during guided reading time. |
Strato Volcanoes comprise the largest percentage (~60%) of the Earth's individual volcanoes and most are characterized by eruptions of andesite and dacite - lavas that are cooler and more viscous than basalt. These more viscous lavas allow gas pressures to build up to high levels (they are effective "plugs" in the plumbing), therefore these volcanoes often suffer explosive eruptions.
Strato volcanoes are usually about half-half lava and pyroclastic material, and the layering of these products gives them their other common name of composite volcanoes.
Left: This is a schematic diagram of a strato volcano, intended to illustrate the different layers of different materials that comprise them. The purple colors are meant to represent ash layers, either the products of fall-out from big eruption clouds or the products of pyroclastic flows. Notice that these ash layers tend to be thin but widespread. The orange colors represent lava flows, and note that some of them have cinder cones associated with them at the vent. The green colors are meant to represent lava domes, and notice that they do not flow very far. Each eruption, regardless of what it produces, is fed from the magma chamber by a dike. Most dikes come up through the center of the volcano and therefore most eruptions occur from at or near the summit. However, some dikes head off sideways to feed eruptions on the flanks.
Right: This is a pit that has been dug into the ground at Cotopaxi, a big strato volcano near Quito, the capital city of Ecuador. The pit is about 2 meters deep and in it you can clearly see a number of ash layers exposed. It is also easy to see that the layers are different - some are coarse and others are fine, some are dark-colored and others are light-colored.
The lava at strato volcanoes occasionally forms 'a'a, but more commonly it barely flows at all, preferring to pile up in the vent to form volcanic domes. Some strato volcanoes are just a collection of domes piled up on each other. Strato volcanoes are commonly found along subduction-related volcanic arcs, where the magma supply rates are lower than at shield volcanoes. This is the cause of the cooler and more differentiated magma compositions and the reason for the usually long repose periods between eruptions. Examples of strato volcanoes include Mt. St. Helens, Mt. Rainier, Pinatubo, Mt. Fuji, Merapi, Galeras, Cotopaxi, and many others.
Although they are not as explosive as large silicic caldera complexes, strato volcanoes have caused by far the most casualties of any type of volcano. This is for many reasons. First is that there are so many more strato volcanoes than any of the other types. This means that there will also be lots of people who end up living on the flanks of these volcanoes. Additionally, strato volcanoes are steep piles of ash, lava, and domes that are often rained heavily on, shaken by earthquakes, or oversteepened by intruding blobs of magma (or all of these). This makes the likelihood of landslides, avalanches, and mudflows all very high. Occasionally as well, entire flanks of strato volcanoes collapse, in a process that has been termed "sector collapse". Of course the most famous example of this is Mt. St. Helens, the north flank of which failed during the first stages of the big 1980 eruption. Mt. St. Helens was certainly not the only volcano to have suffered an eruption such as this, however. Two other recent examples are Bezymianny (Kamchatka) in 1956, and Unzen (Japan) in 1792. The 1792 Unzen sector collapse dumped a flank of the volcano into a shallow inland sea, generating devastating tsunami that killed almost 15,000 people along the nearby coastlines.
Left: This is a photo of lahar deposits near Santa Maria volcano (Guatemala). This used to be a wide, deep river valley, and you can see the far wall of the valley where the trees are growing. The lahar deposits extend from that far wall to behind where this photo was taken. You can see that between lahar events the river cuts into the lahar deposits, but every time there is another event, it fills up again. The people in the photo give an idea of the size of the stones that a lahar can carry.
Another very common and deadly hazard at most strato volcanoes is the lahar. Lahar is an Indonesian word for a mudflow, and most geologists use the term to mean a mudflow on an active volcano. Sometimes the word is reserved only for mudflows that are directly associated with an ongoing eruption (which are therefore usually hot), but that starts to make things confusing; it is probably simplest to just call any mudflow on a volcano a lahar. Lahars are so dangerous because they move quickly, and oftentimes a small eruption or a relatively small rainstorm can generate a huge lahar. The most recent huge volcanic disaster occurred at a Colombian volcano called Nevado del Ruiz in 1985, and it has been well-documented by numerous post-eruption studies. Nevado del Ruiz is a very tall volcano, and even though it lies only slightly north of the equator it has a permanent snow and ice field on its summit. On November 13, 1985 a relatively small eruption occurred at the summit. Even though only a little bit of ash fell and only small pyroclastic flows were produced, they were able to melt and destabilize a good deal of the summit ice cap, which had already been weakened and fractured by a few months of precursor seismic activity. The melted snow and ice, along with chunks of ice, surged down gullies that started high on the slopes, picking up water, water-saturated sediments, rocks, and vegetation along the way. The eruption occurred just after 9:00 pm, and about two and a half hours later the lahars had traveled the approximately 50 km down river valleys to the town of Armero. The lahar entered Armero at 11:30 pm as a wall of muddy water nearly 40 meters high and roared into the city, producing an eventual thickness of 2-5 meters of mud. Somewhere around 23,000 people were almost instantly killed. The path of destruction almost exactly matches those of similar disasters that occurred in 1595 and 1845. It also almost exactly covered the highest-hazard lahar zone on the volcanic hazard map that had been prepared prior to the 1985 eruption. Unfortunately that map had not yet been distributed by the time of the eruption.
Another place that is starting to get really tired of lahars is Pinatubo, in the Philippines. The 1991 Pinatubo eruption was the second largest of the 20th century (after Katmai in 1912), and it deposited a huge volume of relatively loose pyroclastic material on already-steep and gullied slopes. Additionally, rainfall in the Philippines is very high. The combination of all this unconsolidated material and heavy rainfall has generated probably hundreds of lahars, some of which have been enormous. Timely evacuation meant that only a couple hundred people were killed directly by the 1991 eruption; many times that number have been killed or injured by lahars since. These lahars will continue to be a problem for decades after the big eruption.
They are available over the counter and can be used in children at the recommended doses.
They are best used to prevent motion sickness rather than to treat it.
Although people typically use the word “antihistamine” to describe drugs for treating allergies, doctors and scientists use the term to describe a class of drugs that opposes the activity of histamine receptors in the body. Antihistamines that act on the H1 receptor are used to treat allergic reactions in the nose (e.g., itching, runny nose, and sneezing) as well as for insomnia.
They are sometimes also used to treat motion sickness or vertigo caused by problems with the inner ear.
Some antihistamines may also be helpful in reducing anxiety, inducing sleep, or preventing or treating motion sickness. The authors of the American College of Chest Physicians Updates on Cough Guidelines (2006) recommend that, for cough associated with the common cold, first-generation antihistamine-decongestants are more effective than newer, non-sedating antihistamines. First-generation antihistamines include diphenhydramine (Benadryl), carbinoxamine (Clistin), clemastine (Tavist), chlorpheniramine (Chlor-Trimeton), and brompheniramine (Dimetane). People with severe hypersensitivities additionally require administration of epinephrine, often in the form of an autoinjector (Epi-pen). H1-antihistamines can be administered topically (through the skin, nose, or eyes) or systemically, based on the nature of the allergic condition.
The completely interdependent communities formed by social wasps, such as the baldfaced hornet, are often described as superorganisms. The idea of a superorganism was defined in the early 1930s by the social insect researcher Alfred Emerson, who wrote that “the whole colony behaves as a single animal, the state is selected, not the single individuals; and the various forms behave exactly like the parts of one individual in the course of ordinary selection.” A paper published in 2009 restates that idea in somewhat different terms by defining group adaptationism as “the idea that groups of organisms can be viewed as adaptive units in their own right” (Gardner and Grafen). According to that paper, such systems only occur where the group is either composed of genetically identical individuals or where competition within the group is repressed. In the case of the baldfaced hornet and many other social insects, the former is true.
Queens—fertile females—are the only members of this species that survive the winter. In the spring, a queen will emerge from a hiding place in a hollow tree, under rocks or bark, or inside walls, and set about building a nest. Alone, she gathers cellulose from weathered or rotting wood and mixes it into a paste with her own saliva. She uses the mixture, which dries into a paper-like substance, to shape the outside of a small nest and build several brood cells on the inside. Into these, she lays eggs that will hatch into sterile females genetically identical to herself (workers). After they hatch, she feeds and tends the larvae until they become adults, at which point they assume all the responsibilities of building and protecting the nest, gathering food, and tending to the larvae. The queen does not leave the nest for the rest of her life.
All summer, the queen continues to lay worker eggs, which other workers feed with premasticated insects (particularly other yellow jackets) they hunt, kill, and carry back to the nest. Large nests can sometimes contain 400 individuals. As fall approaches, workers construct a new set of larger brood cells intended for young queens. Larvae raised in those cells receive more food than their sisters raised in smaller cells. However, the only difference between queens and workers is in gene expression—if worker larvae are moved to larger brood cells at an early enough age and receive the extra ministrations intended for young queens, they, too, will develop into fertile queens. At the same time, the queen begins to lay eggs that will develop into males. When they are mature, the young queens and the males leave the nest and mate. Around the same time, the old queen dies, beginning a breakdown of the social order in the nest. By the first frost, all members of the hive will have died except for the new queens.
Despite its name, this is not a true hornet but rather a yellow jacket, misleadingly labelled because of its large size. As well as being the largest of the native yellow jackets, its distinctive white-and-black body sets it apart from the yellow-and-black patterns common in most other species. Queens, typically the largest individuals in the colony, can reach 20 mm in length. Their abdominal pattern differs from that of workers: in addition to the white markings on the fourth, fifth, and sixth abdominal segments that workers display, queens also have white markings on the upper side of the third segment. Males have seven abdominal segments and also display white markings on the final four.
These insects build nests in trees and bushes that can reach 60 cm in height and 45 cm in width. These nests can be found anywhere from one meter off the ground to 18 meters in the air. D. maculata is mainly diurnal, and workers tend to retreat to the nest at nightfall. This individual is a sterile female worker captured from a nest on the station.
- Gardner, A., and A. Grafen. "Capturing the superorganism: a formal theory of group adaptation." Journal of Evolutionary Biology 22.4 (2009): 659-671.
Abstract

Both law and justice convey broad meaning with profound implication. However, with respect to our class, law refers to a set of rules and principles, established by custom, agreement, or authority, that, if broken, subjects a party to criminal punishment or civil liability. Justice is the fair and equitable treatment of all individuals under the law.
The law is a standard to judge by. While it is open to human interpretation and may be shaped by the political philosophy of the interpreter, it differs from justice in that it provides a framework for what constitutes right or wrong conduct. Law is precise and is the mechanism by which society enforces acceptable behavior, while justice is philosophical and circumstantial, influenced by the environment, beliefs, and experiences of those who apply it.
Law

As jurisprudence shows, legal philosophy has many aspects, with formalism, realism, positivism, and naturalism being the most prominent. Legal formalism treats law like math or science: the relevant legal principles are first identified, then applied to the facts of the case, and finally a ruling is deduced that governs the outcome of the dispute.
Formalists such as Ronald Dworkin believed that law is a rational and cohesive system of principles that judges must apply with integrity, and that the application of these principles will produce a right answer in all cases. Legal formalism seeks to keep rules and principles separate from the political and social experience of the judge.
Legal realism, on the other hand, challenges the orthodox view of law as a science: an autonomous system of rules and principles that courts can logically apply in an objective fashion to reach a determinate and apolitical decision. Realists maintain that common-law adjudication is an inherently subjective system that produces inconsistent and incoherent results based on the political,
- Types of Chlorophyll
There are two main types of chlorophyll involved in the process of photosynthesis: chlorophyll-a and chlorophyll-b. Chlorophyll-a is an essential pigment with the formula C55H72O5N4Mg; the formula of chlorophyll-b is C55H70O6N4Mg.
- Structure of Chlorophyll: A molecule of chlorophyll consists of two parts, namely a ring called the 'porphyrin ring', which has magnesium at its centre, and a long hydrocarbon tail called 'phytol'.
There are several side groups attached to the porphyrin ring, and these groups change the properties of the pigment. Chlorophyll-a has a methyl group (-CH3), while chlorophyll-b has an aldehyde group (-CHO).
- Chlorophyll absorbs blue and red light and reflects green light; thus it appears green.
- Carotenoids are the accessory pigments.
a. Carotenes - deep orange in colour (C40H56)
b. Xanthophylls - yellow in colour (C40H56O2).
- Chloroplast Structure: The chloroplast appears as an oval, dark green structure, measuring about 4 - 10 microns in diameter and 1 - 3 microns in thickness. The number of chloroplasts per cell is variable. Each chloroplast is bound by a double unit membrane, called the 'peristromium'. The space between the two membranes is called the 'periplastidial space'.
- Quantasomes: These are small structures present on the thylakoid membranes. They are rich in chlorophyll; each quantasome contains 230 chlorophyll molecules. These are the photosynthetic units.
- Kranz Anatomy: Anatomically, C3 and C4 plants are different; C4 plants show Kranz anatomy. Here bundle sheath cells surround the vascular bundles of the veins, with mesophyll cell layers present on the outer side. The chloroplasts in the bundle sheath cells are arranged towards the centre. Chloroplast dimorphism is observed in C4 plants: the chloroplasts of bundle sheath cells lack grana, while those of mesophyll cells show well developed grana.
- Melvin Calvin traced the entire pathway of carbon during its fixation in the dark reaction. For this, he used the radioactive isotope 14C. In 1961, he was awarded the Nobel Prize.
- Emerson reported the existence of two photosystems.
- Hatch and Slack (1965) formulated the C4 cycle in plants.
- Ruben and Kamen confirmed that water, not CO2, is the source of the oxygen evolved during photosynthesis. For this, they used the heavy isotope 18O.
- Robert Hill postulated the photolysis of water.
- Van Niel observed that H2S, instead of water, is the raw material in photosynthetic bacteria. When he observed that sulphur was released by the breaking up of H2S, he postulated that during photosynthesis in plants, water gets dissociated in the same way.
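To make Van Niel's analogy explicit, the two general equations (in their standard textbook form) can be written side by side; the sulphur released from H2S in the bacteria corresponds to the oxygen released from water in green plants:

```latex
% Photosynthetic sulphur bacteria (Van Niel's observation):
\mathrm{CO_2} + 2\,\mathrm{H_2S} \xrightarrow{\text{light}} (\mathrm{CH_2O}) + \mathrm{H_2O} + 2\,\mathrm{S}
% Green plants, by analogy (water is the hydrogen donor):
\mathrm{CO_2} + 2\,\mathrm{H_2O} \xrightarrow{\text{light}} (\mathrm{CH_2O}) + \mathrm{H_2O} + \mathrm{O_2}
```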
- Blackman found that the rate of photosynthesis varied with temperature at high light intensities. He gave the law of limiting factors.
- The rate of photosynthesis is higher when light energy is received in flashes instead of as continuous light. The flashing light regenerates the ADP and NADP required in the light reaction, while in continuous light the dark reactions are slow and do not regenerate sufficient ADP and NADP. Thus the dark reaction is a rate-limiting step because it is slower than the light reaction.
- Photorespiration: It was earlier believed that the rate of respiration in light and in dark is the same, but it was later found that the rate of respiration is higher in light.
Geometry Section 10.1 Segments and Lines of Circles. A circle is a set of points, in a plane, that are equidistant from a given point.
This given point is called the ______ of the circle. A circle can be named by using the symbol _____ and naming the center of the circle. The circle to the right is _____.
A radius is a segment from the center to any point on the circle.
EXAMPLE: HI
A diameter is a chord that passes through the center of the circle. Its length is 2 times the length of a radius.
Name the Diameter: MC
A secant to a circle is a line which intersects the circle in 2 points. Note: Every secant will contain a chord. Be sure to include the proper symbol for a segment or line when naming chords or secants.
Secants are LINES.
Chords are SEGMENTS.
The word TANGENT means one intersection: a tangent intersects the circle in exactly one point.
Changes In Climate Could Affect Monarch Butterfly Migration
By Jessica Rawden 2013-02-21 17:40:58
As the climate warms, groups of monarch butterflies that migrate south for the winter and north when the climate gets warmer may have to find new stomping and traveling grounds. Don't worry, we don't have a monarch die-off on our hands. The species is not threatened, but if the climate continues to change, the migration patterns will likely have to change as well.
A team at the University of Massachusetts Medical School found that monarch butterflies that migrate from North America to Mexico are able to do so using the position of the sun in the sky. They are helped in determining the time and the sun's location by circadian clocks located in their antennae. According to New Scientist, scientists determined these patterns by removing the antennae from a set of butterflies and seeing whether the experimental group would still be able to tell which direction they needed to fly.
Those butterflies couldn't, which led the scientists to also test how temperatures would affect the butterflies. In the second scenario, butterflies were captured as they flew south. They spent a little under a month hanging out in a warm environment that mimicked the weather usually experienced in the Mexican portion of the migration. When these butterflies were released, they immediately flew north, despite cold weather still being present. This experiment led scientist Steven Reppert and the rest of the team to wonder what will happen as temperatures heat up. One possibility is that the butterflies will no longer have to migrate all the way to Mexico and may be able to move further into Canada. While we're still a ways off from finding out the answer, this look into the lives of monarch butterflies is fascinating nonetheless.
Among the five loon species on Earth, none share as much habitat with people as does the Common Loon. As a result, no other loon species warrants more active protection.
Although they are iconic symbols of wild lakes and ponds, Common Loons, particularly in the United States, are vulnerable to nest disturbance and toxicity from lead sinkers and mercury. VCE's conservation strategy recognizes the need to help people and loons coexist.
Common Loons and Us
Loons nest only a foot or two from shore. This makes nests vulnerable to fluctuating water levels, particularly on reservoirs and lakes with dams. When waters rise, they can flood the nest. Dropping lake levels also present risks. Loons can hardly walk on land. So when a lake level drops away from an existing loon nest, the adults have a tough time sliding on their bellies to return to the nest to incubate or protect eggs or chicks from predators.
The nest is a simple pile of vegetation or a small depression in the soil, in which the female lays one or two eggs. Males and females both incubate the eggs for several hours before switching places. Loons will leave their nest when people get too close, putting eggs or chicks at risk. If they’re disturbed repeatedly, loons can abandon a nest. Once loon chicks are on the water, often within hours of hatching, human disturbance is less catastrophic. Chicks under three weeks of age will often be seen “backriding” on an adult for warmth and protection from predators.
It is nonetheless essential for people to keep a respectful distance and enjoy loons through binoculars. When paddling a kayak or canoe, never pursue loons for a photo or a close look. A loon constantly swimming away from you is a stressed loon.
Male and female loons both tend to their young, feeding insects and minnows at first and larger fish later. Although loons will eat most any fish they catch, perch are a favorite food. A loon’s average dive length is 35-40 seconds. Most fish are swallowed underwater with only the occasional larger fish brought to the surface. In a loon’s gizzard, which contains small stones, powerful enzymes and stomach acids help to digest the fish – bones included.
Common Loons will also eat live bait and lures from anglers. VCE routinely encounters loons tangled in fishing line, and loons that have ingested fishing line and bait. The results can often be fatal. Please “reel in” when loons are diving nearby and avoid using lead fishing gear of any kind.
Preening and Bathing (Extreme Preening)
Loons preen their feathers constantly so that they remain waterproof. Every few hours, a loon will roll over on its side to slide its bill through feathers, “zipping” them up. When the barbules of a feather are tightly locked, the entire feather sheds water, preventing the skin from becoming wet. We often see loons reach back to a gland near their rear that produces an oily powder. With its bill, a loon spreads the oil on its feathers to help them shed water and hold their form. The tail feathers often stick straight up at this time, causing some people from a distance to think they are observing a chick backriding. At the end of preening, loons often flap their wings gently, shedding water and aligning their feathers.
Occasionally a loon will bathe more vigorously, splashing its wings in the water, doing somersaults and plunge dives, and aggressively tending to its feathers with its bill. This behavior is often reported as a loon trying to remove tangled fishing line. But the loon is actually giving itself a thorough bath, trying to remove mites and other parasites. We've termed this process "extreme preening." To a loon, it may feel good – like jumping into a lake on a really hot day. The loon usually finishes an extreme preening session with the more common gentle preening, re-oiling those feathers and zipping them up tight.
Interpreting Loon Calls
- The Mournful “Wail” – An “ooohh ahhhh” is often the sound of loons identifying or calling to each other; it can also signal the first signs of a mild disturbance.
- The Laughing “Tremolo” – A trill or series of trills can be a sign of distress or alarm, and occasionally excitement.
- The Crazy and Wild “Yodel” – This is the male territorial call, usually directed at unwelcome loons. Every male has a distinct yodel and transmits much information through the yodel – from how big the male is to his motivation to defend.
- Hoots and Coos – On a quiet evening you can hear the loon family or group of loons in a “social gathering” talking to each other.
Loons Facts through the Seasons
- April – Loons return soon after “ice-out” to our lakes and ponds to establish territories. Small ponds and lakes of less than 200 acres will have one loon pair, while larger lakes could have several. Intruder loons may target a territory for a “takeover” attempt.
- May-June – Loons build their nests in a protected area, one to two feet from the water’s edge, often in a marsh or on a small island or one of our artificial “nesting rafts.” Nests with water around them are much safer from predators such as raccoons. The female then lays one or two eggs that will be incubated by both parents over 27 to 28 days. On average, loons do not nest until they are six years old.
- June-August – Adults guard their chicks during this period. Males might “yodel” at intruder loons or boaters who come too close. If intruder loons are present, chicks are often “stashed” near shore. The parents move the family to areas with less wind and wave action. Starting in mid-summer, you may observe larger groups of loons in “social gatherings.”
- September-November – The chicks become much more independent during this time. They learn to feed themselves and practice flying for the upcoming migration. Adults undergo a partial molt in the late fall and will start looking like a big chick or a 1-to-2-year-old subadult (gray/white) with less pronounced white spots.
- November-April – Vermont loons migrate to their ocean wintering grounds off the New England coast. Upper Midwest loons head south to the Carolinas or the Gulf of Mexico. Adults have a complete molt in late winter, replacing their flight feathers, which means they have a flightless period before the spring migration. |
To develop an understanding of the disease anthrax, how it is transmitted, and its effects on the body.
Students should extend their study of the healthy functioning of the human body and ways it may be promoted or disrupted by diet, lifestyle, bacteria, and viruses. After studying microbes, students should consider diseases and how they affect body systems.
This lesson focuses on the bacterial disease known as anthrax. Anthrax is primarily a disease of cattle, and there are known cases of people contracting it directly from handling infected animals. In light of recent terrorist attacks on the United States, the threat of anthrax being used for germ warfare has become prominent. There are three ways to contract this potentially deadly disease: through skin contact, ingestion, and inhalation.
In this online lesson, students will research the disease and its impact on human health. By the end of the lesson, students will have gained the knowledge that bacterial diseases may invade and damage different body systems. To further study benchmark concepts, students will study the effects of similar diseases and ways to fight these invaders.
Begin class by having students read the following online articles about the anthrax cases that occurred in 2001.
- 10 things you need to know about anthrax
- Bioterrorism-Related Inhalational Anthrax: The First 10 Cases Reported in the United States
After students have read the articles, discuss these questions:
- What organism do you think causes anthrax?
- What are some of the ways anthrax can be contracted?
- What are some of the symptoms of anthrax?
- What body systems are affected by anthrax?
- Why do you think anthrax is so damaging to the respiratory system?
- Can anything be done to prevent an anthrax infection?
At this point in the lesson, it is not important that students correctly answer all of these questions. Rather, the discussion should be used to help guide the student research that follows.
In this part of the lesson, students will conduct independent Internet research on anthrax. They will use what they have learned to create a “Wanted Poster” for the bacterium that causes anthrax.
Begin by distributing the Wanted Poster student sheet and reviewing the required criteria for the poster. Students will then explore the websites listed on the student sheet on their own. After having gathered the information, students will create their posters for homework and present them in class.
After all of the students have presented their posters, review what students have learned by discussing these questions:
- Why is our nation more concerned now about the possible threat of Anthrax? (Students will likely mention the incidents of contaminated mail that occurred at the end of 2001.)
- What are precautions that our nation is using to protect its citizens from this threat? (Students will likely mention vaccinations and increasing the supplies of antibiotics that can prevent or cure anthrax. They may also talk about some of the new precautions being taken in post offices to prevent the spread of anthrax through the mail.)
- Why is anthrax one of the few microorganisms used in biological weapons? (Because anthrax spores can survive for a long time, because the spores can be spread through the air, and because inhalation anthrax is almost always fatal.)
- How difficult would it be to release anthrax spores into the air? (Anthrax bacteria and spores cannot actively move around. Instead, they move passively, transported by wind, or animal carriers.)
- If anthrax spores are inactive forms of bacteria, how can they make you sick? (Once inside the body and lungs, the spores migrate to the lymph nodes and change to the bacterial form. Then they multiply and produce toxins that cause bleeding and eventually destroy the respiratory system.)
- What are the three different types of Anthrax and how does each type invade and damage the body? (There are three forms of anthrax disease, varying by the route of infection. People can get anthrax through a break in the skin (cutaneous anthrax), by eating inadequately cooked contaminated meat (gastrointestinal anthrax), or by inhaling bacteria or spores. Inhaled anthrax does not typically spread from person to person. Because anthrax spores can live in the soil for many years, animals can get anthrax by grazing or drinking water in contaminated areas. Weaponized anthrax could be used against people in almost any location, and in many different ways. The greatest threat with the most deadly consequences comes from inhaled anthrax.)
- Can anthrax be spread from person to person? (Direct person-to-person spread of anthrax is extremely unlikely.)
- How can anthrax be destroyed? (Anthrax spores can only be destroyed by exposure to steam heat, incineration, or fumigation with a poisonous gas. Active anthrax bacteria can be destroyed by a dilute bleach solution.)
Students should understand that anthrax is an infectious disease caused by a spore-forming bacterium. They should also understand which body systems are affected by anthrax and how it is transmitted. It is also important that students understand that there are a wide variety of microorganisms that cause infectious diseases and that it is important to study and understand these organisms in order to prevent and/or cure these diseases.
You can use the rubric found on the Wanted Poster Rubric teacher sheet as a guide to evaluate students on the presentation of their Wanted Poster project, revising it as needed to suit your class.
Note: There are several resources on the Internet that describe the use of rubrics in the K-12 classroom, a few of which are highlighted here.
To learn more about rubrics in general, see Make Room for Rubrics on the Scholastic site.
For specific examples of rubrics, more information, and links to other resources, check out the following sites:
Finally, you can go to Teacher Rubric Makers on the Teach-nology.com website to create your own rubrics. At this site you can fill out forms to create rubrics suitable for your particular students, and then print them instantly from your computer.
For a historical perspective, the Science NetLinks lesson Sanitation and Human Health can help students learn about how improving sanitation helped stop the spread of bacterial diseases.
To learn more about the ways bacteria invade the body, direct students to the Science NetLinks lesson Microbes 1: What’s Bugging You?
Using the Virtual Biosecurity Center from the Federation of American Scientists, students can conduct further research on other possible germs or chemicals that might be used in weapons. |
Carboxylic acids are important precursor chemicals both in the lab and in living organisms. The twenty common amino acids are a particularly important class of carboxylic acids. Carboxylic acids are generally tart in taste and are thus widely used in the food industry. Foods are often preserved with the addition of sodium benzoate, which is the conjugate base of benzoic acid. Acetic acid is a carboxylic acid that gives vinegar its bite. Formic acid has historically been used to preserve specimens of animal tissue.
Carboxylic acids contain a carbon atom that is attached to one oxygen atom by a double bond, to a hydroxyl group by a single bond, and by one additional bond to an organic group (R-), which is often an alkyl or alkenyl group. They are often written as R-C(=O)OH, R-COOH, or R-CO2H. Carboxylic acids are weak Brønsted-Lowry acids which only partially dissociate in water to produce the hydronium ion [H3O]+ and the conjugate base of the acid, [R-CO2]-. Although small carboxylic acids are soluble in water, larger acids are increasingly insoluble in water but increasingly soluble in non-polar solvents.
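That partial dissociation can be written as an equilibrium in the standard general form, with the acid dissociation constant Ka measuring how far the equilibrium lies to the right (a weak acid has a small Ka):

```latex
\mathrm{R{-}COOH} + \mathrm{H_2O} \;\rightleftharpoons\; \mathrm{R{-}COO^-} + \mathrm{H_3O^+}
\qquad
K_a = \frac{[\mathrm{R{-}COO^-}]\,[\mathrm{H_3O^+}]}{[\mathrm{R{-}COOH}]}
```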
Stability of the conjugate base
The negative charge of the conjugate base is shared between the two electronegative oxygen atoms, and this effect can be visualized as two different resonance structures. The strength of carboxylic acids is affected by the organic group to which the acid is attached. If the R-group is an electron acceptor, it can stabilize the negative charge of the conjugate base either through inductive or resonance effects and therefore increase the acid strength.
Carboxylic acids can be formed by the hydrolysis of a corresponding ester or amide. Thus, ethyl acetate can be hydrolyzed to form acetic acid and ethanol. Conversely, the Fischer esterification reaction can be used to react a carboxylic acid with an alcohol to form an ester. Carboxylic acids can also be reduced to form aldehydes or further reduced to form primary alcohols.
Their tart taste makes many carboxylic acids useful as flavoring agents in the food industry. Citric acid is widely used in fruit punches and candy. |
The tools of Neuroscience
Our understanding of how the brain works has greatly increased in recent years due to advances in brain imaging technology, such as MRI (magnetic resonance imaging) scans, which can track changes in brain structure and function. Imaging can now monitor brain activity while people are engaged in a task, thinking about something, or experiencing a particular emotion, identifying which parts of the brain are activated.
The three parts of the brain (the triune brain)
A simple model for understanding the structure of the brain is to see it as consisting of three parts. At the head of the spinal column is the reptilian brain, the governor of our instinctual responses and our automatic functioning such as breathing and the beating of the heart. Wrapped around the reptilian brain is the limbic system, which contains two almond-shaped structures called the amygdalae. The amygdala is the gateway to the emotional system, responsible for receiving and sending emotional signals. It assesses every single stimulus (every sound, smell, glimpse, piece of information) for its emotional loading and then attaches a feeling to it. If the stimulus is recognised, it is assigned to a pre-existing emotional track; if it is unknown, a new emotional track is created. All of this takes place in nanoseconds (Brown and Brown 2012). Habitual responses take place when emotional tracks get used again and again. For example, if as a child you enjoyed shopping trips and every time you entered the store you encountered the smell of fresh bread, then it is likely that you will attribute a feeling of pleasure to any future incoming stimuli linked to the smell of bread. It is the limbic system that encodes such experiences and attaches the feelings that create meaning. Marketing and advertising are built on the premise that decisions are mostly based on emotional rather than rational responses to stimuli, and so they try to re-create such associations. The third and outer part of the brain is the cognitive brain, the cerebral cortex. This receives and integrates signals from the other parts of the brain to make sense of both the outer and inner world. The cerebral cortex is wrapped around the two amygdalae. The largest part of the cerebral cortex is the neo-cortex, which is responsible for functions such as reasoning, planning, language and conscious thought.
The cerebral cortex and the teenage brain
The cerebral cortex has four lobes. The purpose of the frontal lobe is to inhibit emotions that may cause problems in social interactions, to enable us to stand back, to notice, observe, gain critical distance and consider the consequences of our actions. According to McGilchrist (2010) it also enables us to empathise by attempting to understand rather than just react to people. In adolescence the frontal lobe undergoes significant restructuring, reducing in size, eliminating synaptic connections that no longer suit the environment. These changes may go some way to explaining why adolescents are more self-conscious than adults yet less inhibited around risk taking as well as struggling to weigh up the consequences of actions (Blakemore 2012) .
Growth and change requires relationship and emotion
Neuroscience has discovered that change and development mainly happen through the limbic system, the home of our emotions and feelings. The infant brain arrives with no templates of meaning, only the potential to create meaning. As the infant experiences an event they will experience a primary emotion. It is generally accepted that there are eight of these – the survival emotions of fear, anger, disgust, shame, sadness (FADSS), the emotion of startle/surprise and the two attachment emotions of excitement/joy and love/trust. The relationships around the infant will influence what secondary feelings (a combination of the primary emotions) are attached to the event and therefore the meaning associated with it. “What the infant brain needs to get it working is to be structured and organized by another functioning brain” (Brown and Brown 2012:5). As we grow, the templates are built and habitual ways of experiencing and interpreting the world are formed. These are often referred to as our personal constructs and can be deeply embedded. Therefore whilst the rational part of us may recognise that we ‘should’ change, some rewiring is needed at the limbic level to create the energy and motivation to bring about actual change. Just as relationship was critical in the initial wiring of the brain and construction of meaning, it is also key to any future rewiring. In the field of psychotherapy and counselling, the core conditions of empathy, unconditional regard and congruence have long been seen as essential for learning and growth, a belief that now appears to be supported by neuroscience. A client’s amygdala will pick up cues from the coach that go beyond mere words – the subtle body language and tone created by the coach’s own emotions. The amygdala is highly attuned to signals that elicit fear, the normal response being to protect the self. In contrast, if there are no fear-eliciting signals but ones that elicit the emotion of trust, then the client’s limbic system starts to open up to new possibilities (Brown and Brown 2012). In the business world, strategies for change that rely solely on reason are regarded as insufficient by emergent-change advocates such as Kotter and Cohen (2002). They argue that people’s hearts and minds can be captured by building a vision that appeals to the whole brain, one that can be felt, seen, touched. It appears that, in both individual coaching and organisational change, the key to winning the mind is first to win the heart.
The left/ right brain division
The two halves of the brain exist for a reason, although there is some debate as to what this is. Traditionally the right hemisphere is seen as mostly focusing on the emotional system, imagination and originality, and the left on language, facts and what is known. The whole brain integrates these two elements to make rational sense of the inner and outer world. Whilst there is truth in this division, it has latterly been regarded as a somewhat simplistic explanation. According to McGilchrist (2010), imagination requires both hemispheres, as does reason. Yet there are differences in how they operate: each hemisphere offers us a different version of the world, which we then combine. We use the left hemisphere for a narrow focus and sharp attention to detail. To create such a focus requires a reductionist approach, a simplified version of reality arrived at by taking data out of context and homing in. We use the right hemisphere for a broader kind of alertness and for making connections with the world. This hemisphere takes into account context and implicit meanings, understands metaphor and notices body language. Like our peripheral vision, it is never fully graspable. Western history has moved from a place of balance to one where the left approach is valued over the right, to our own detriment (McGilchrist 2010; Brown and Brown 2012).
How much of our brain do we actually use?
The idea that we only use about 10% of our brains is a widely acknowledged myth. Modern brain scans reveal that most of the brain is active most of the time (Jarrett 2014), with the result that even minor brain damage can have devastating effects.
The idea that by the age of three our brains are fully formed is also a myth. In fact our brains impose no clearly defined biological limit on our learning potential: “our brains are certainly more ‘plastic’ when we are younger, but their connectivity, function and even structure can change dramatically in response to learning throughout our lives” (McGurk 2014:6). “It takes a long time, right up through adolescence, with a wide variety of social influencers adding onto the original parental input, to get the whole system effective” (Brown and Brown 2012:58).
Bounded rationality – the reaching of a satisfactory decision
Most decisions are not made through reason alone, as rational decision making cannot cope with large amounts of data or take into account the complexities of a situation. “Although our minds can take account of a host of different factors, and although we can remember and report doing so, it is seldom more than one or two that we consider at any one time” (Shepard 1967:263, cited in Gigerenzer and Goldstein 1996:664). Therefore the phrase bounded rationality (Simon 1991) is frequently used to acknowledge that most of our decision making is concerned with arriving at a satisfactory and sufficient result rather than with the perfect answer. We often use shortcuts or heuristics: problem-solving methods such as rules of thumb or intuition. Research indicates that these heuristics can be very effective and time-efficient in solving complex real-world problems and, when applied in the right niche, often lead to better decisions than more complex decision-making methods (Banks 2014). Once a decision is made in the emotional part of the brain, the cerebral cortex then makes sense of it, justifying or rationalising the decision.
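To illustrate how a one-or-two-cue heuristic of the kind Gigerenzer and Goldstein studied can reach a decision, here is a minimal "take the best"-style sketch in Python; the cue names and data are invented for the example, not drawn from their studies.

```python
# Illustrative sketch of a simple lexicographic ("take-the-best"-style)
# heuristic: check cues one at a time, in order of assumed validity,
# and decide as soon as one cue discriminates between the options.

def take_the_best(option_a, option_b, cues):
    """Return the option favored by the first cue that discriminates.

    option_a, option_b: dicts mapping cue name -> 0/1 value.
    cues: cue names ordered from most to least valid.
    """
    for cue in cues:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                      # this cue discriminates: stop searching
            return "A" if a > b else "B"
    return "no decision"                # fall back to guessing or slower analysis

# Example: which of two (hypothetical) cities is larger?
cues = ["is_capital", "has_airport", "has_university"]
city_a = {"is_capital": 0, "has_airport": 1, "has_university": 1}
city_b = {"is_capital": 0, "has_airport": 0, "has_university": 1}
print(take_the_best(city_a, city_b, cues))  # -> "A", decided on a single cue
```

The point of the sketch is that the rule never weighs all the evidence: it stops at the first discriminating cue, which is exactly the "seldom more than one or two factors" behaviour described above.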
Intuition and Analysis: the dual process theory
The dual process theory (Evans 2008; Kahneman 2011; Lieberman 2007; Stanovich and West 2000) posits two contrasting but complementary modes of thinking: System 1 – Intuition: slow to learn but fast in operation, effortless, automatic, spontaneous, using shortcuts or heuristics to form judgements, holistic, and may be influenced by emotional responses to an event (gut feelings). System 2 – Analytic: quick to learn but slow in operation, controlled, effortful, consciously directed, using logic and analysis of information to reach a rational decision, free from emotional response.
The ‘two minds’ model (Sadler-Smith 2014:13)
Analytical Mind – Narrow band-width
- Effortful processing
- Step-by-step analysis
- ‘Talks’ in the language of words
- Recent/System 2
- Features in management education and training

Intuitive Mind – Broad band-width
- Automatic processing
- Whole pattern recognition
- ‘Talks’ in the language of feelings
- Ancient/System 1
- Ignored in most management education and training
Banks (2014:10) argues that these two modes should not be seen as competing forces, as “the automatic, unconscious process typically supports the slower, conscious decision-making process”. The intuitive method cuts through vast amounts of information to home in on what is most important, which makes decision making via analysis more manageable. “Choosing between intuitive and analytic modes of thinking is not a dilemma; typically they are both used at the same time” (Banks 2014:11). The key to effective decision making is to become aware of System 1 and System 2 thinking and to use both effectively, for example by using analytical tools to reflect on and check decisions reached by intuition.
According to Gladwell (2005), intuition is a mental process whereby the brain rapidly thin-slices through multiple cues or vast amounts of information to arrive at insights (the ‘ah ha’ or eureka moment). A military general in the field does not have time to weigh up all the options, but makes a snap judgement. This snap judgement is still an informed decision in that the intuitive process takes into account pre-existing knowledge and experience combined with any new cues picked up from the immediate environment. Likewise the artist of any profession has had to undertake initial technical training, learning processes and acquiring knowledge, in order to gain the confidence and informed risk taking required for innovative practice (Schon 1987). It is important to remember that although expertise can provide the raw materials for innovation, it can also lead to tunnel vision or grooved thinking (Sadler-Smith 2014:8). Therefore it is important to continually revisit and challenge the assumptions that underpin our decision making.
The importance of mood
A positive mood state facilitates creative and spontaneous processing (divergent thinking), whilst a negative mood state promotes a more careful approach (convergent thinking) (Bolte et al 2003, cited in Sadler-Smith 2014).
Gender differences and types of intuition
As far as the stereotype of female intuition is concerned, there don’t seem to be substantive differences between men and women in their use of intuition in general; however, it might be the case that women have better developed social intuition (Myers 2004, cited in Sadler-Smith 2014). A number of researchers have argued recently that there may be as many as four different types of intuition:
- expert intuition is linked to decision-making and problem solving
- social intuition is linked to reading other people’s motives and intentions
- moral intuition is linked to the gut feelings which serve as an internal ‘moral compass’
- creative intuition is linked to ideation and connects insight and intuition.
The limits of neuroscience
Neuroscience has provided significant insights into how the highly specialized areas of the brain work. However, no-one yet really knows how the system works as a whole (Brown and Brown 2012).
Banks, A. (2014), Cognition, decision and expertise Part 2 of 3: Neuroscience and learning. CIPD Research insight
Brown, P. and Brown, V. (2012) Neuropsychology for Coaches. Open University Press.
Gladwell, M. (2005). Blink, the power of thinking without thinking. Penguin
Jarrett, C. (2014) Great Myths of the Brain. Wiley Blackwell.
McGurk, J. (2014), Fresh thinking in learning and development, Part 1 of 3: Neuroscience and learning. CIPD Research insight
Sadler-Smith, E. (2014) Insight and Intuition. Part 3 of 3: Neuroscience and learning. CIPD Research insight
Simon, H. (1991) Bounded Rationality and Organizational Learning. Organization Science 2(1): 125-134.
What are digestive enzymes, and why are they so important?

We eat food, but our digestive system doesn’t absorb food, it absorbs nutrients. Food has to be broken down from things like steak and broccoli into its nutrient pieces: amino acids (from proteins), fatty acids and cholesterol (from fats), and simple sugars (from carbohydrates), as well as vitamins, minerals, and a variety of other plant and animal compounds. Digestive enzymes, primarily produced* in the pancreas and small intestine, break down our food into nutrients so that our bodies can absorb them.
*They’re also made in saliva glands and stomach, but we’re not going to focus on those here.
If we don’t have enough digestive enzymes, we can’t break down our food—which means even though we’re eating well, we aren’t absorbing all that good nutrition.
What would cause digestive enzymes to stop working correctly in the body?

First, diseases may prevent proper digestive enzyme production.
- Pancreatic problems, including cystic fibrosis, pancreatic cancer, and acute or chronic pancreatitis.
- Brush border dysfunction, the most severe form of which is long-standing celiac disease, where the brush border is flattened or destroyed. Other diseases like Crohn’s can also cause severe problems.
- Low-grade inflammation in the digestive tract (such as that caused by “food allergies,” intestinal permeability, dysbiosis, parasitic infection, etc.) can lead to deficiencies in digestive enzymes.
- Aging has been associated with decreased digestive function, though I personally wonder if this is a result of aging, or aging badly.
- Low stomach acid—we’ll talk about this more in a future article, but if you have low stomach acid, it’s likely that you won’t have adequate digestive enzymes either.
- Chronic stress. This is the most common reason for digestive enzyme problems. Our body has two modes: sympathetic “fight or flight,” and parasympathetic “rest and digest.” When we’re in “fight or flight” mode, digestion is given a very low priority, which means digestive function (including digestive enzyme output) is dialed down. Chronic stress = constant “fight or flight” mode = impaired digestive enzyme output.
How do we correct a digestive enzyme deficiency?

First, a Whole30 or a Paleo-style diet can help to restore normal digestive function, including digestive enzymes. Dietary interventions work by reducing inflammation in the body and the digestive tract, improving nutrient deficiencies, removing enzyme inhibitors by taking out things like grains and legumes, and fixing gut bacteria.
However, just because you eat Good Food doesn’t automatically mean your digestion will be healthy. In my previous article, I talked about gut bacteria, which may not be in perfect balance with a Paleo diet alone. Improper digestion is another issue that diet alone may not solve.
Managing chronic stress is vitally important to restoring healthy digestive function. Most of us are cramming food in our faces at our desks or while we’re on the go, then we’re off to do the next thing on our list. We live most of our lives in sympathetic mode—and aren’t giving a high priority to properly digesting our food. When we sit down to eat food, we should switch into a parasympathetic mode, and ideally stay in parasympathetic mode for a while afterwards. Think long European meals, followed by a siesta. (Refer to pages 182-185 in It Starts With Food for more specifics.)
Finally, after implementing these healthy dietary and lifestyle practices, digestive enzyme supplementation may be necessary to help your body properly break down your food.
Inspired by a post by Martina Bex and a few other teachers' practices, I adapted a variety of activities for my classroom under the umbrella title "story work choices." The idea is that students interact with a story or reading on their own terms, at their own pace. This is good for a sub day, when you need to meet with students one-on-one, or if you are present but unable to teach as you normally would.
It is important that you do not have students do an activity that is new to them. If you have not taken them through a guided version of any of the activities on this list, you should remove it from the options. Once you have taken students through an activity once or twice successfully, then you can be relatively confident that they can do it more independently. We want to set students up for success, and not cause confusion.
You can project or hand out this summary of the activity and choices. Here is what I use:
Story Work Choices
(12-18 min per activity, please complete 2 activities today):
- 4-frame write-draw (quick sketch, no additional time), with Latin captions from the story
- Ask/answer questions about the story in written Latin (see handout)
- Write a Latin summary of the story, timed write-style (50 words, approx)
- Write a polished English translation of a large chunk of the story
- Map out the story in timeline form, using Latin sentences and small illustrations.
- Write a verum/falsum quiz, 10 statements, with answers.
- Describe or explain important grammar features or new forms that occur in the story.
- Draw a picture dictionary of at least 7 words. Focus on new vocabulary.
- Write a list of Latin words in the story which have English derivatives. Use a dictionary to help you (5 words min).
- (fast finishers) read something new and challenging in Latin, and keep a log of what you read: title and page numbers, new words, forms, grammar, etc.
You might want to specify group options, or make it entirely individual. But even in that case, you can have students collaborate and help each other. In addition to these instructions, I may make some or all of the following materials available to students:
- 4- or 6-frame storyboard pages,
- lined paper, printer paper,
- colored pencils,
- Glue sticks, scissors
Also, I have other textbooks and books on Roman culture and myth on the bookshelf, in addition to my Latin FVR library.
Whenever I assign this project (especially when I'm out), I am always pleasantly surprised: not only do things run smoothly with any sub, but I also find some amazing work that is easy to grade, much of which can be recycled into future activities and assessments.
Why do we need a database for oceanic methane and nitrous oxide?
Methane (CH4) and nitrous oxide (N2O) are trace gases in the atmosphere that contribute significantly to the Earth's greenhouse effect. N2O is, furthermore, becoming the most important substance responsible for ozone depletion in the stratosphere. The ocean contributes moderately to global CH4 emissions (up to 10% of natural emissions) and strongly to N2O emissions (up to 34% of natural emissions).
Oceanic nitrous oxide and methane are produced by natural microbial processes, and the distribution of methane and nitrous oxide in the ocean varies strongly over space and time. Their production and consumption in the ocean are sensitive to environmental changes, such as changes in temperature, oxygen concentration or primary productivity. Climate change may thus affect the oceanic emissions of nitrous oxide and methane.
A compilation of all available measurements into a global database is a useful tool to identify regions with strong emissions, to assess their variability and to quantify the oceanic CH4 and N2O emissions. It also serves as a powerful resource for validation of biogeochemical models.
The MEMENTO database currently contains about 120,000 surface and depth profile measurements of N2O and more than 20,000 measurements for CH4 all over the oceans.
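As a purely hypothetical sketch of how such a compilation can be put to work, the snippet below summarizes surface measurements by latitude band with pandas; the file name and column names (gas, lat, concentration_nmol_per_L) are invented assumptions, not MEMENTO's actual schema.

```python
# Hypothetical sketch: summarizing a measurement database with pandas.
import pandas as pd

df = pd.read_csv("memento_surface_measurements.csv")  # assumed export file

# Keep surface N2O measurements and bin them into 10-degree latitude bands
n2o = df[df["gas"] == "N2O"].copy()
n2o["lat_band"] = (n2o["lat"] // 10) * 10

# Mean surface concentration per band highlights regions with strong emissions
summary = n2o.groupby("lat_band")["concentration_nmol_per_L"].agg(["mean", "count"])
print(summary)
```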
Water sampling on board RC Littorina.
N2O measurements in the laboratory. |
Graphs of Circles Worksheets
How to Find the Equation of a Circle Based on the Diameter and the Center Point - A circle is the set of all points in a plane at a defined distance, known as the radius, from a given point called the center. Apart from the radius and center, a circle includes various other parts in its geometry. A line segment connecting two points on the circle and passing through the center of the circle is known as the diameter of the circle. Let's assume that (x, y) are the coordinates of a point on the circle, the center of the circle is at (h, k), and the radius is r. Using the distance formula to find the equation of the circle:

√((x₂ - x₁)² + (y₂ - y₁)²) = d

Substituting (x₁, y₁) = (h, k), (x₂, y₂) = (x, y), and d = r:

√((x - h)² + (y - k)²) = r

Squaring each side:

(x - h)² + (y - k)² = r²

The equation of the circle with radius r units and center (h, k) is (x - h)² + (y - k)² = r².
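A small sketch of this procedure in code, assuming the circle is specified by the two endpoints of a diameter: the center (h, k) is their midpoint and the radius is half the distance between them (the function name and sample points are invented for the example).

```python
# Given the two endpoints of a diameter, the center is their midpoint,
# the radius is half the distance between them, and the equation is
# (x - h)^2 + (y - k)^2 = r^2.
import math

def circle_from_diameter(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    h, k = (x1 + x2) / 2, (y1 + y2) / 2   # center = midpoint of the diameter
    r = math.dist(p1, p2) / 2             # radius = half the diameter's length
    return h, k, r

h, k, r = circle_from_diameter((1, 2), (7, 10))
print(f"(x - {h})^2 + (y - {k})^2 = {r**2}")  # (x - 4.0)^2 + (y - 6.0)^2 = 25.0
```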
A mathematician organizes a lottery in which the prize is an infinite amount of money. When the winning ticket is drawn, and the jubilant winner comes to claim his prize, the mathematician explains the mode of payment: "1 dollar now, 1/2 dollar next week, 1/3 dollar the week after that... |
Biology is a natural science focused on studying life: living organisms, living systems, cell structure, evolution, genetics, ecosystems, and the environmental conditions that support life. Biologists are responsible for the creation of the taxonomy of living organisms, which helps to classify and define similarities among organisms. One of the latest developments in biological research is synthetic biology, which deals with the development of biofuels, biotechnology, and pharmaceuticals for consumer use. Genetics and bioengineering are currently receiving a large amount of funding to allow scientists to research the human immune system and disease treatment.
New research from biologists suggests that dietary supplements containing macular carotenoids could help improve vision. Lutein is a carotenoid, a pigment found in vegetables, which helps shield the eyes from the blue light radiated by the sun and can act as a "sunscreen" to help prevent damage to the retinas caused by long-term exposure.
Meadow grasses in Finland that live in isolated patches have a higher chance of transmitting fungal pathogens than meadows that reside closer to others. Anna-Liisa Laine and fellow biologists studying plant ecology throughout the archipelago of Finland found that the genetic diversity of meadows residing closer to one another is associated with stronger plant immune systems.
Biochemists have struggled to improve the efficiency of biofuel production from a plant's biomass using chemical treatments. One breakthrough may be available soon, thanks to research by a plant biologist and a biochemist to bioengineer trees containing cells that self-destruct. Biofuels may become a more viable form of green energy as biotechnology improves their efficiency.
Pyrosequencing was once the most efficient way to identify extinct species of marine life; however, new research has produced more efficient ways to identify and categorize extinct species of plankton in the Black Sea. A new genetic data record has been unveiled by biologists using this process.
Biologists are investigating river and stream ecosystems by using one of nature's ecological indicators: clams! Clams are filter feeders, so samples of their shells carry trace evidence of pollutants that enter our waterways, helping to determine where and which pollutants could endanger their ecosystem.
Biology is divided into several subdisciplines: biochemistry, molecular biology, cellular biology, physiology, evolutionary biology and ecology. Each of these pertains to a specific scale of organisms and is broken down further to provide a focus on more specific forms of study. The first recorded use of the term "biology" dates to 1736, by Linnaeus, and it has remained a prevalent scientific field ever since. Despite the relatively recent use of the term, biological studies go back many centuries, although they were often pursued under different names.
Learning & Development
Children's Learning & Development
We aim to ensure that each child:
Our Approach To Learning, Development & Assessment
Learning Through Play
Being active and playing supports young children's learning and development through doing and talking. This is how children learn to think about and understand the world around them. We use the EYFS statutory guidance on education programmes to plan and provide opportunities which will help children to make progress in all areas of learning. This programme is made up of a mixture of activities that children plan and organise for themselves and activities planned and led by practitioners.
Characteristics of effective learning
We understand that all children engage with other people and their environment through the characteristics of effective learning that are described in the Early Years Foundation Stage as:
- Playing and exploring – engagement.
- Active learning – motivation.
- Creating and thinking critically – thinking.
The Early Years Foundation Stage
Provision for the development and learning of children from birth to 5 years is guided by the Early Years Foundation Stage. Our provision reflects the four overarching principles of the Statutory Framework for the Early Years Foundation Stage (DfE 2017):
- A Unique Child – Every child is a unique child who is constantly learning and can be resilient, capable, confident, and self-assured.
- Positive Relationships – Children learn to be strong and independent through positive relationships.
- Enabling Environments – Children learn and develop well in enabling environments, in which their experiences respond to their individual needs and there is a strong partnership between practitioners, parents and carers.
- Learning and Development – Children develop and learn in different ways and at different rates. The framework covers the education and care of all children in early years provision including children with special educational needs and disabilities.
How We Provide For Development & Learning
Children start to learn about the world around them from the moment they are born. The care and education offered by our setting help children to continue to do this by providing all of the children with interesting activities that are appropriate for their age and stage of development. The Areas of Development and Learning comprise:
Prime Areas:
- Personal, social and emotional development: making relationships; self-confidence and self-awareness; managing feelings and behaviour.
- Physical development: moving and handling; health and self-care.
- Communication and language: listening and attention; understanding; speaking.

Specific Areas:
- Literacy: reading; writing.
- Mathematics: numbers; shape, space and measure.
- Understanding the world: people and communities; the world; technology.
- Expressive arts and design: exploring and using media and materials; being imaginative.
Tel Aviv University researchers led the recent discovery of two new planets in remote solar systems within the Milky Way galaxy. They identified the giant planets, named Gaia-1b and Gaia-2b, as part of a study in collaboration with teams from the European Space Agency (ESA) and the agency's Gaia spacecraft.
The development marks the first time that the Gaia spacecraft successfully detected new planets. Gaia is a star-surveying satellite on a mission to chart a 3D map of the Milky Way with unprecedented accuracy comparable to standing on Earth and identifying a 10-shekel coin (roughly the size of a U.S. nickel) on the Moon.
TAU’s Prof. Shay Zucker, Head of the Porter School of the Environment and Earth Sciences, and doctoral student Aviad Panhi from the Raymond and Beverly Sackler School of Physics & Astronomy led the initiative. The findings were published in the scientific journal Astronomy & Astrophysics.
More Discoveries on the Horizon
“The discovery of the two new planets was made in the wake of precise searches, using methods of artificial intelligence,” said Prof. Zucker. “We have also published 40 more candidates we detected by Gaia. The astronomical community will now have to try to corroborate their planetary nature, like we did for the first two candidates.”
The two new planets are referred to as "Hot Jupiters" due to their size and proximity to their host stars: "The measurements we made with the telescope in the U.S. confirmed that these were in fact two giant planets, similar in size to the planet Jupiter in our solar system, and located so close to their suns that they complete an orbit in less than four days, meaning that a single Earth year is comparable to roughly 90 of those planets' years," he adds.
Giant Leaps for Astronomy
There are eight planets in our solar system. Less well known are the hundreds of thousands of other planets in the Milky Way, which contains an untold number of solar systems. Planets in remote solar systems were first discovered in 1995 and have been an ongoing subject of astronomers' research ever since, in hopes of using them to learn more about our own solar system.
To fulfil its mission, Gaia scans the skies while rotating around an axis, tracking the locations of about 2 billion suns, stars at the centre of a solar system, in our galaxy with precision of up to a millionth of a degree. While tracking the location of the stars, Gaia also measures their brightness — an incomparably important feature in observational astronomy, since it relays significant information about the physical characteristics of celestial bodies around them. Changes documented in the brightness of the two remote stars were what led to the discovery. Aviad Panhi explains: “The planets were discovered thanks to the fact that they partially hide their suns every time they complete an orbit, and thus cause a cyclical drop in the intensity of the light reaching us from that distant sun.”
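To make the brightness-dip idea concrete, here is a minimal, self-contained Python sketch, not the team's actual algorithm (which is specially adapted to Gaia's sparse sampling), that simulates a light curve with periodic transit dips and recovers the period by phase-folding candidate periods. All numeric values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated light curve: constant brightness with a 1% dip every
# `true_period` days, plus measurement noise (all made-up values).
true_period, transit_duration, depth = 3.7, 0.15, 0.01
t = np.sort(rng.uniform(0, 60, 800))            # irregular observation times (days)
flux = np.ones_like(t) + rng.normal(0, 0.002, t.size)
flux[(t % true_period) < transit_duration] -= depth

def dip_score(period):
    """Fold the light curve at `period` and return the deepest phase-bin mean."""
    phase = t % period
    edges = np.linspace(0, period, 40)
    idx = np.digitize(phase, edges)
    means = [flux[idx == i].mean() for i in range(1, 40) if np.any(idx == i)]
    return min(means)  # lower = stronger concentrated dip at this period

# Search range chosen to bracket the true period; note that integer
# multiples of the period also fold cleanly and would score well.
periods = np.linspace(2, 5, 1500)
best = periods[np.argmin([dip_score(p) for p in periods])]
print(f"recovered period ~ {best:.2f} days (true: {true_period})")
```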
To confirm that the celestial bodies were in fact planets, the researchers performed tracking measurements with the Large Binocular Telescope, in Arizona, one of the largest telescopes in the world today. The telescope makes it possible to track small fluctuations in a star’s movement which are caused by the presence of an orbiting planet.
The discovery marks another milestone in the scientific contribution of the Gaia spacecraft's mission, which has already been credited with a true revolution in the world of astronomy. Gaia's ability to discover planets via the partial occultation method, which generally requires continuous monitoring over a long period of time, had been doubted until now. The research team charged with this mission developed an algorithm specially adapted to Gaia's characteristics and searched for years for these signals in the spacecraft's cumulative databases.
Signs of Life?
What about the possibility of life on the surface of those remote new planets? “The new planets are very close to their suns, and therefore the temperature there is extremely high, about 1,000 degrees Celsius, so there is zero chance of life developing there,” explains Panhi. Still, he says, “I’m convinced that there are countless others that do have life on them, and it’s reasonable to assume that in the next few years we will discover signs of organic molecules in the atmospheres of remote planets. Most likely we will not get to visit those distant worlds any time soon, but we’re just starting the journey, and it’s very exciting to be part of the search.” |
FDM can also be used to combine multiple signals before final modulation onto a carrier wave. In this case the carrier signals are referred to as subcarriers: an example is stereo FM transmission, where a 38 kHz subcarrier (marked by a 19 kHz pilot tone) carries the left-right difference signal alongside the baseband left-right sum channel, prior to the frequency modulation of the composite signal.
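As an illustration of the subcarrier idea, the Python sketch below places two baseband signals on separate subcarriers and sums them into one composite signal. This is a simplified sketch, not the actual FM stereo multiplex; the frequencies and tones are arbitrary choices for the example.

```python
import numpy as np

fs = 48_000                        # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)     # 50 ms of signal

# Two baseband message signals (made-up test tones)
msg_a = np.sin(2 * np.pi * 300 * t)
msg_b = np.sin(2 * np.pi * 440 * t)

# Put each message on its own subcarrier via amplitude modulation,
# spaced far enough apart that their spectra do not overlap.
sub_a, sub_b = 5_000, 12_000       # subcarrier frequencies (Hz)
composite = (msg_a * np.cos(2 * np.pi * sub_a * t)
             + msg_b * np.cos(2 * np.pi * sub_b * t))

# A receiver recovers one channel by mixing back down and low-pass
# filtering; only the mix-down step for channel A is shown here.
recovered_a = composite * 2 * np.cos(2 * np.pi * sub_a * t)
# (a low-pass filter of `recovered_a` would complete the demodulation)
print(composite.shape, recovered_a.shape)
```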
Where frequency division multiplexing is used to allow multiple users to share a physical communications channel, it is called frequency division multiple access (FDMA).
FDMA is the traditional way of separating radio signals from different transmitters.
The analog of frequency division multiplexing in the optical domain is known as wavelength division multiplexing. |
Graphs Worksheets – New & Engaging
Horizontal and Vertical Graphs
This section covers straight-line graphs, quadratic graphs and cubic graphs. The image below shows the graphs of x = -4 and x = 2. The coordinates are (-4, 4), (-4, 1), (-4, -5) and (2, 3), (2, -2), (2, -6).
The image below shows the graphs of y = 3 and y = -4. The coordinates are (-5, 3), (0, 3), (5, 3) and (-6, -4), (1, -4), (3, -4).
The image below shows the graphs for y = x, y = x², y = x³, y = |x|, y = 1/x, y = √x, y = bˣ and x² + y² = r².
Quadratic, Cubic, Reciprocal, Exponential Graphs
When a straight-line graph is horizontal, the equation of the line has the form y = "a certain number", such as y = 3 or y = 5. When a straight-line graph is vertical, the equation of the line has the form x = "a certain number", such as x = 3 or x = 5. Suggested follow-up topics include: Simultaneous Equations, Quadratic Graphs and Cubic Graphs.
The image below shows the general equation of all quadratic graphs, which is y = ax² + bx + c. The curve is a parabola, which has a "U" shape; the highest power is 2, i.e. the x² term. Cubic graphs are "S" shaped. The general equation of a cubic graph is y = ax³ + bx² + cx + d; the highest power is 3.
Reciprocal graphs have the general equation y = a/x. They have two asymptotes, and their highest power is -1. Exponential graphs have the general equation y = aˣ. They have one asymptote, and the power is the variable x itself.
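For readers who want to reproduce these graph families themselves, here is a small Python sketch using matplotlib; the coefficient values are arbitrary examples, not prescribed by the worksheets.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 400)
fig, axes = plt.subplots(2, 2, figsize=(8, 6))

# Quadratic: y = ax^2 + bx + c ("U"-shaped parabola)
axes[0, 0].plot(x, x**2 - 2)
axes[0, 0].set_title("quadratic: y = x² - 2")

# Cubic: y = ax^3 + bx^2 + cx + d ("S" shaped)
axes[0, 1].plot(x, x**3 - 3 * x)
axes[0, 1].set_title("cubic: y = x³ - 3x")

# Reciprocal: y = a/x (two asymptotes; exclude x = 0, plot as points
# so the two branches are not joined across the asymptote)
xr = x[np.abs(x) > 0.1]
axes[1, 0].plot(xr, 2 / xr, ".")
axes[1, 0].set_title("reciprocal: y = 2/x")

# Exponential: y = a^x (one asymptote, the x-axis)
axes[1, 1].plot(x, 2.0**x)
axes[1, 1].set_title("exponential: y = 2ˣ")

plt.tight_layout()
plt.show()
```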
After humans, the mammals most successful at colonizing North America were the bison that thundered across the Great Plains.
Just when they arrived on the continent from Asia, however, has long been a mystery.
Now genetic and geologic information has helped pinpoint the time of their migration across the Bering Land Bridge. Bison arrived between 135,000 and 195,000 years ago, a new study finds. Humans were believed to have been much more recent travelers, with the first wave of migrants about 20,000 years ago, though that arrival time is facing increasing contention among experts.
After the first bison migrations, the animals multiplied, diversified and became the dominant grazers, displacing mammoths, Pleistocene horses and other mammals that arrived earlier, said Duane Froese of the University of Alberta, lead author of the study.
"They became very successful very quickly," Froese said. "Outside of humans, it's pretty much the most successful mammal invasion into North America."
The study is published in the Proceedings of the National Academy of Sciences.
Its findings rely in large part on DNA extracted from the oldest known bison fossil, a bone found a decade ago near the Gwich'in village of Old Crow in Canada's Yukon territory.
The Yukon fossil was found near the Porcupine River. It was just below a layer of ash that was spread in a massive eruption about 130,000 years ago from a southwestern Alaska volcano that has since been transformed into what is now called the Emmons Lake Volcanic Center.
The ash spread by the eruption is a time marker in the earth that allowed scientists to determine the Yukon bison fossil's age and confirm it as the oldest known bison fossil in North America, Froese said. Radiocarbon dating does not work beyond ages of about 40,000 to 50,000 years, so scientists needed other methods to determine the date, he said.
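The 40,000-to-50,000-year ceiling follows from carbon-14's half-life of about 5,730 years: after enough half-lives, too little of the isotope remains to measure reliably. A quick back-of-the-envelope check in Python (a simple decay calculation, not a description of the dating procedure itself):

```python
# Fraction of original carbon-14 remaining after a given time,
# using the standard half-life of roughly 5,730 years.
HALF_LIFE_YEARS = 5730

def c14_fraction_remaining(age_years: float) -> float:
    return 0.5 ** (age_years / HALF_LIFE_YEARS)

for age in (10_000, 50_000, 130_000):
    print(f"{age:>7} years: {c14_fraction_remaining(age):.6%} of C-14 left")
# At ~50,000 years under 0.3% remains; at the ~130,000-year age of the
# ash layer, effectively none -- hence the need for other dating methods.
```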
Thanks to new technology, DNA from the Yukon fossil was extracted and analyzed, he said. It was compared with that from a slightly younger fossil found near Snowmass, Colorado, and both were compared with dozens of younger bison fossils, including some from Yukon soil above the volcanic ash, where there were "lots and lots of bison all over the place," Froese said.
From that genetic material, the scientists determined that all the bison had a common ancestor 135,000 to 195,000 years ago, a period when the Bering Land Bridge was exposed, he said.
The genetic information also showed that bison migrated over the land bridge in a second wave 21,000 to 45,000 years ago, the study found.
Once bison were in North America, they quickly developed into new genetic forms. The first species on the continent — the species that left the Yukon bone — was the steppe bison, Froese said. The Colorado bison fossil, though only about 10,000 to 20,000 years younger, was of a different but related species, the giant long-horned bison, he said. That is a species known for its huge size, with fossils not found north of the Lower 48 states, he said.
Steppe and giant long-horned bison are Pleistocene species now extinct, but modern bison are their descendants.
The study results answer what has been a nagging question: Why do many assemblages of fossils made up of bones from mammoths, horses, rodents and other animals not include bison?
"It's been kind of a long puzzle in the world of paleontology and paleobiology," Froese said. |