Axons are the part of the specialised neuron that transmits electrical signals from the cell body to the axon terminal and on to the synapse, where the electrical signal is converted into a chemical signal. The network of axons and dendrites allows complex signalling pathways to operate simultaneously, together or independently. Microtubules fill the interior of the axon, with their minus ends at the cell body and their plus ends at the axon terminals; they allow key protein structures, vesicles and mRNAs to be transported towards the synapse. Just beneath the plasma membrane of the axon lie actin filaments, which provide mechanical strength and an additional transport mechanism for the cytoskeleton.
The speed at which the axon transmits electrical impulses depends on the diameter of the axon, the membrane resistance of the axon (how permeable the membrane is to leak currents: the lower the permeability, the faster the transmission) and the presence or absence of a myelin sheath. |
To make their sensors, NASA researchers start by coating a silicon wafer with a metal film like titanium or chromium. Next, the researchers deposit a catalyst of iron and nickel on top of the metal film, patterning the catalyst using conventional lithography. This allows the researchers to determine the location of the nanofibers, which will act as nanoelectrodes. A chemical vapor deposition process is used to grow the nanofibers on the catalyst.
“The proper construction and orientation of the nanoelectrode is critical for its electrochemical properties,” says Meyyappan. “We want to grow the nanofibers in an array like telephone poles on the side of a highway–nicely aligned and vertical.”
The researchers then place silicon dioxide in between the nanofibers so that they do not flap when they come in contact with fluids, like water and blood; this also isolates each nanoelectrode so that there is no cross talk. Excess silicon dioxide and part of the nanofibers are removed using a chemical mechanical polishing process so that only the tips of the carbon nanofibers are sticking out. The researchers can then attach a probe or molecule designed to bind the targeted biomolecule to the end of the nanofiber. The binding of the target to the probe generates an electrical signal.
The sensor is also equipped with conventional microfluidic technology–a series of pipes and valves–that will channel small drops of water over to specific probes on the biosensor side. This allows the researchers to do field testing and avoid the expense of taking the biosensor to the lab, says Meyyappan.
After the sensor is tested in its facilities this summer, Early Warning plans to place the device within an already existing wireless network to monitor the water quality of municipal systems. “The sensor gives us the advantage of having a lab-on-a-chip technology that can test for many different microorganisms in parallel,” says Gordon. “And instead of waiting 48 hours for results, we get notified within 30 minutes if the water is contaminated,” he says.
Such sensors could also be used in homeland security to detect pathogens such as anthrax, to detect viruses in air or food, and for medical diagnostics, says Meyyappan. |
Based on the more than 1,000 images from the MDM Observatory in Arizona, team leader Andrew Gould of Ohio State University and his fellow astronomers calculate that this new planet has roughly 13 times the mass of Earth--making it about the size of Neptune--and orbits its star at about the distance of the asteroid belt in our own solar system. The researchers suspect that it is a cold, rocky world, with temperatures around -330 degrees Fahrenheit--perhaps a gas supergiant in the making that failed to attract or didn't have access to gaseous material.
Although it is only the fifth Neptune-size planet discovered (all in the last decade), this type of object may be fairly common in the Milky Way. Current theory predicts that smaller stars should encourage the formation of smaller planets and, because most stars in this galaxy are red dwarfs, such planets may be more common than the 170 or so Jupiter-size orbs discovered so far. "These icy super-Earths are pretty common--roughly 35 percent of all stars have them," Gould says. "The next step is to push the sensitivity of our detection methods down to reach Earth-mass planets, and microlensing is the best way to get there." A paper detailing the findings was published online earlier this week in Astrophysical Journal Letters. |
Gas chromatography (GC) is a common type of chromatography used to analyze or separate the volatile components of a mixture. The technique can be used to test the purity of a particular substance or to separate the different components of a mixture. In practice, a syringe containing a small amount of sample is injected into the hot injector port of the gas chromatograph. The injector is set to a temperature higher than the boiling points of the components, so the components evaporate into the gas phase inside the injector. The carrier gas (normally helium) then pushes the gaseous components onto the GC column, where separation occurs as the components partition between the mobile phase (the carrier gas) and the stationary phase (a high-boiling liquid coating the column). A metal identification tag on the column states what is inside, the maximum operating temperature, and the column's length and diameter. The column temperature is controlled by a heating element in the column oven. The detector responds to differences in how the components partition between the mobile and stationary phases: molecules reach the detector, ideally, at different times depending on their partitioning, and the amount of each component that generates the signal is proportional to the area of its peak.
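As a minimal illustration of that last point, the sketch below uses made-up peak areas and assumes the detector response is directly proportional to the amount of each component (real quantitation would apply compound-specific response factors):

```python
# Minimal sketch with made-up peak areas: estimating percent composition from a GC trace,
# assuming detector response is proportional to the amount of each component.
peak_areas = {"ethyl acetate": 152.0, "butyl acetate": 348.0}  # integrated areas, arbitrary units

total_area = sum(peak_areas.values())
for component, area in peak_areas.items():
    print(f"{component}: {100.0 * area / total_area:.1f}% of total peak area")
```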
Although gas chromatography has many uses, GC does have certain limitations. It is useful only for the analysis of small amounts of compounds that have vapor pressures high enough to allow them to pass through a GC column, and, like TLC, gas-liquid chromatography doesn't identify compounds unless known standards are available. Coupling GC with a mass spectrometer combines the superb separation capabilities of GC with the superior identification methods of mass spectrometry. GC can also be combined with IR spectroscopy. IR can help indicate whether a reaction has gone to completion: if only the functional groups of the product appear in the IR spectrum, the reaction has likely gone to completion. The same information can be seen in the GC analysis, where the presence of peaks that do not correlate with the standards may be due to an incomplete reaction or impurities in the sample.
The basic parts of a GC machine are as follows:
- Source of high-pressure pure carrier gas
- Flow controller
- Heated injection port
- Column and column oven
- Recording device
A small hypodermic syringe is used to inject the sample through a sealed rubber septum or gasket into the stream of carrier gas in the heated injection port. The sample vaporizes immediately and the carrier gas sweeps it into the column. The column is enclosed in an oven whose temperature can be regulated. After the sample's components are separated by the column, they pass into a detector, where they produce electronic signals that can be amplified and recorded.
1. Wash syringe with acetone by filling it completely and pushing it out into a waste paper towel.
~Possible errors in gas chromatography can result from improper rinsing of the syringe. The syringe should be rinsed twice with acetone and once or twice with the sample; improper rinsing can produce unknown peaks and alter the analysis of the sample. This error can be easily avoided. About 1 microliter of sample is needed.
2. Pull some sample into the syringe. Air bubbles should be removed by quickly moving the plunger up and down while in the sample.
3. Turn on the chart recorder, set the chart speed in cm/min, use the zero control to set the baseline about 1 cm from the bottom of the chart paper, then start the chart.
4. Inject the sample into either column A or column B, pushing the needle completely into the injector until it can no longer be seen; then pull the syringe out of the port.
5. Mark the initial injection time on the chart. ~The sample should be injected at exactly the same time as the 'start' button is pressed. Otherwise, take note of how long after injection the recording started; if this is not accounted for, retention times will be off in the calculations (see the retention-time sketch after this list).
6. Clean the syringe immediately. The syringe should be rinsed with acetone after every sample and before any other sample is injected.
7. Record current (in milliamperes), temperature (in Celsius).
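As a quick illustration of why the chart speed (step 3) and the injection timing (step 5) matter, here is a minimal sketch with made-up numbers for converting a distance measured on the chart into a retention time:

```python
# Minimal sketch with made-up numbers: converting chart distance to retention time.
chart_speed_cm_per_min = 1.0   # chart speed set on the recorder in step 3
peak_distance_cm = 5.2         # distance from the injection mark to the peak apex
start_delay_min = 0.0          # how long after injection the recorder was started (step 5)

retention_time_min = peak_distance_cm / chart_speed_cm_per_min + start_delay_min
print(f"Retention time: {retention_time_min:.1f} min")
```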
Notes on Injection:
1. The injection site, the silver disk, is very hot.
2. The needle will pass through a rubber septum, so there will be some resistance. Some machines have a metal plate near the septum; if the resistance feels like metal, the needle should be pulled out and tried again. When done correctly, the needle is completely inserted into the injection point.
3. Quick injection is needed for good results.
4. Take out the needle immediately after injection. |
II. Compare and Contrast Other Artists With Hans Hofmann
Students will learn about two artists who were featured in the documentary: Red Grooms and Frank Stella. They also will learn about Stuart Davis, whose color theory may be contrasted with Hofmann’s.
A. Red Grooms: A Narrative Artist Incorporates Abstract Expressionism
Students will be able to: understand the work of Red Grooms as it relates to Hans Hofmann, with whom he studied briefly; understand Grooms' work in terms of his reaction to Abstract Expressionism; use art journals to reflect on their process and plan artwork; and create three-dimensional cardboard collages that are narratives of daily life.
This lesson may be spread over 5 to 10 50-minute class sessions; it may end with the creation of models as an end in itself, or models may be viewed as prototypes for larger pieces.
Preparation: One or two weeks before the lesson:
Explore www.pbs.org/hanshofmann/legacy_001.html for information on the students of Hofmann
and www.artcyclopedia.com for links to museums that have works by Red Grooms.
Direct students to spend some time each day sketching and/or photographing scenes that represent impressions of people and places in their neighborhoods. All work should be kept in art journals.
Preparation: The day of the lesson:
Assemble the following materials: tempera paint, brushes, cardboard, poster board, glue, masking tape
Note: Use the following to inform the conversation you have with your students:
Red Grooms was born in 1937 in Nashville, Tennessee. He became interested in art as a child in elementary school. After high school, he attended several art schools. Intrigued by Abstract Expressionism, at the age of 20 he attended Hans Hofmann’s Provincetown art school. Grooms soon felt the pull to create narrative art that reflected different character types in their respective settings. Although Grooms’ work is representational, the painterly quality of the Abstract Expressionists is evident in his work.
Have students view the segment of Hans Hofmann: Artist/Teacher, Teacher/Artist that refers to Hofmann’s summer school in Provincetown. Note that at the age of 20, Red Grooms attended this school.
Show images of Red Grooms and Mimi Gross' City of Chicago (1967) and Ruckus Manhattan (1975–76).
As a class, discuss Red Grooms' style, emphasizing his incorporation of flat figures in a three-dimensional space, the playfulness and wit of his pieces (identify how Grooms creates this), popular subject matter as theme, texturally rich and painterly interpretation of everyday life, and expressionistic use of paint.
- Knowing that Red Grooms was a student of Hans Hofmann, what surprises you about his work?
- What connections can you make between Hofmann’s and Grooms’ work (application of paint in a liberal, energetic manner; use of vibrant colors; expressive quality of work)?
Direct students in small groups to share the sketches and photographs in their journals, and to discuss which image each will develop into artwork in the style of Red Grooms.
In art journals, students sketch their ideas for transforming their two-dimensional images into three-dimensional cardboard collages.
Students create small three-dimensional models of their glimpses into everyday life.
Before creating larger pieces, direct students to write in art journals, reflecting on their work. What do they want to communicate? Does any part need to be altered to adapt to the entire piece? What colors will be used? How will paint be applied?
Students construct larger pieces.
Assessment may take the form of: one-on-one discussion with teacher, self-evaluation, peer evaluation, a display of students' final projects accompanied by the sketch and art journal entries that led up to the final work. (This display would also serve as an extension of the lesson.)
B. Frank Stella: Two Interpretations of Abstract Art
Students will be able to: understand the work of Frank Stella as it relates to that of Hans Hofmann; understand Stella's work in terms of Abstract Expressionism; use art journals to reflect on their process and to plan artwork; create art based on Stella's earlier architectural, flat, minimalist canvases or on his later three-dimensional assemblages.
This lesson may be spread over 4 to 5 50-minute class sessions
Preparation: One or two weeks before the lesson:
Explore: www.pbs.org/hofmann/frank_stella_001.html for Frank Stella’s biography and Interview Excerpts.
Go to: www.artcyclopedia.com for links to museums that have works by Frank Stella.
Print out images of both Stella’s linear, hard-edge two-dimensional work and his later three-dimensional constructions.
Direct students to spend some time each day sketching both linear and rounded Abstract compositions.
Preparation: The day of the lesson:
Log on to www.pbs.org/hanshofmann/texturexploration_001.html and play with the interactive feature “Texturexploration,” allowing viewers to magnify sections of a Hofmann painting for a detailed view of Hofmann’s thick, painterly application of oils. You can later contrast this with Stella’s early hard-edge, flat, smooth application of ordinary house paint on canvas.
Assemble the following materials:
For two-dimensional work: tempera paint, brushes, watercolor paper, templates for creating hard-edge forms.
For three-dimensional work: base: cardboard, adhesive, collage material (mesh, thin wire, construction paper,etc.).
Note: Use the following information to inform the conversation you have with your students:
Although Frank Stella never took a class with Hans Hofmann (he moved to New York the same year that Hofmann retired from teaching to concentrate on painting), he cites Hofmann as a major influence, and even wrote an article naming Hofmann an “Artist of the Century.”
Stella’s work falls into two distinct categories. In the late 1950’s and ‘60’s Stella’s goal was to reinterpret decorative painting in terms of abstraction. In the process, he redefined both terms. Stella reduced his compositions to pure shapes, lines, and colors and pushed these compositions beyond the traditional frame. The paintings are large, engulfing the viewer.
In the 1970’s his art took a sharp turn. This radical change itself is instructive for students. In his desire to make abstraction more “alive”, his work literally pops out from the walls in chunks of swirling, multi-colored forms. These constructions are richly layered and textured, made out of aluminum, metal tubing and wire mesh. They are infused with a vitality not seen in Stella’s earlier works.
By the mid 1980’s and ‘90’s, the work grew so large and looming as to challenge its category of “painting.” Many of the huge works are not unlike stage sets in their monumental presence.
Show images of Frank Stella’s work.
Knowing that Frank Stella was influenced by Hans Hofmann, what surprises you about his work?
What connections can you make between Hofmann’s and Stella’s work? Both are abstract, but how do they differ? Keep in mind that Hofmann emphasized working from nature, while Stella introduced minimalism as a way to reduce art to a concept. Which of Hofmann’s concepts do you think Stella incorporated? How are the two styles different? (Hofmann worked two-dimensionally, yet there is a greater similarity between the two artists when a comparison is made between Stella’s three-dimensional work and Hofmann’s paintings: application of paint in a rich, energetic manner, use of vibrant colors, and expressive quality of work.)
Direct students in small groups to share the abstract sketches in their journals. Which do they feel are the most successful and why? Can they visualize the more rounded compositions as three-dimensional pieces?
In their art journals, students sketch their ideas for transforming sketches into either two-dimensional paintings or three-dimensional cardboard collages. Encourage students, if they wish, to adapt Stella’s technique to their original sketches.
Direct students to write in art journals, reflecting on their work. What do they want to communicate?
Will that message be better expressed as a two or a three-dimensional piece? How will paint be applied?
Students create either two-dimensional or three-dimensional works.
Assessment may take the form of: one-on-one discussion with teacher, self evaluation, peer evaluation, a display of students’ final projects accompanied by the sketch and art journal entries that led up to the final work. This display would also serve as an extension of the lesson.
C. Stuart Davis and His Conversations with America
Students will be able to understand the color theory of Stuart Davis and apply it to their own work, compare Davis’ color theory to Hofmann’s “push-pull”, and use their art journals as sites to reflect upon the artistic process and plan their work.
Time Required: This lesson may be spread over 4-5 50-minute class sessions
Explore the following websites for information about Stuart Davis and images of his artwork:
Select a variety of jazz instrumentals to play in class.
One week before the lesson, ask students to begin collecting photographs from magazines and newspapers reflecting their ideas of the “American Scene”; establish a place where class can store the images.
Have art materials available: construction paper (cut into fourths or sixths), gluesticks, scissors, crayons, color pencils, tempera paints, brushes.
Note: Use the following to inform the conversation you have with your students. Lead them to discover elements in his work, rather than be told what they are:
Stuart Davis’ paintings are dialogues between the artist and the contemporary American Scene. He admired, among many other things in the United States, the urban environment, jazz, and modern technology. He conveyed the dynamic energy of contemporary life through abstract shapes and vivid colors. Davis believed that three-dimensional space could be shown on a two-dimensional surface by the way in which color forms were placed in relationship to each other; colors recede and advance depending on their position. Much of Davis’ work does not have a single focal point, giving the surface an all-over design.
- Students break into small groups. Distribute “American Scene” images collected by students and direct students to come up with words or short phrases that describe their images.
- With a jazz recording playing, ask an individual or a group of students to display the image and read their lists aloud. Encourage improvisation.
- Introduce students to the work of Stuart Davis with works such as Report from Rockport (in the Metropolitan Museum of Art) and Hot Still-Scape for Six Colors – 7th Avenue Style (in the Boston Museum of Fine Art).
- Ask students to discuss in their smaller groups the similarities and differences between Davis and Hofmann. Encourage the use of words such as color, tone, line, form, space, abstraction, pattern.
(Remind students of Hofmann’s interest in creating a sense of depth in his work; how is this accomplished by Davis?)
- Bring class together to share observations. Note that the artist’s work has a cohesive, planned quality that often was achieved through the reworking of numerous sketches.
- Direct students to select 5 different colored pieces of construction paper and experiment with placement of colors. What happens when one color is placed next to another; which recedes and which advances? All observations should be noted in students’ art journals.
- Explain the steps students will take to create an abstract work incorporating Stuart Davis’ process:
- using pencil, sketch a simple design in your art journals (it should have the effect of a coloring book image with no shading)
- using a copier, make 3 reproductions of the work
- with color pencils or crayons, experiment with different color arrangements to get the desired effect of receding and advancing forms (Davis often used a limited palette of 5 colors: 2 primary, 1 secondary, black, and white)
- copy the design to a larger paper and use tempera paints to complete
- Students write in art journals, reflecting on the process and the outcome. Work is shared with peers.
- Students repeat this process, now directed to create a design that reflects elements of the American Scene that they each identified at the start of the project.
- Students write in art journals reflecting on the process and the outcome. Work is shared with peers.
Assessment may take the form of: one-on-one discussion with teacher, self evaluation, peer evaluation, a display of students’ final projects accompanied by the sketches and art journal entries that led up to the final work. The list of words and phrases describing the American scene also should be included.
The museum exhibition is entitled Hofmann/Davis: Masters of Color. The location of the museum is your school. Using reproductions of the work of Stuart Davis and Hans Hofmann, student work based on the study of their color theories, and student research, create a museum installation complete with wall text, brochures and guided tours. |
The skull is a bony structure in the head of most vertebrates (in particular, craniates) that supports the structures of the face and forms a protective cavity for the brain. The skull is composed of two parts: the cranium and the mandible. The skull forms the anterior-most portion of the skeleton and is a product of encephalization, housing the brain, many sensory structures (eyes, ears, nasal cavity), and the feeding system.
Functions of the skull include protection of the brain, fixing the distance between the eyes to allow stereoscopic vision, and fixing the position of the ears to help the brain use auditory cues to judge direction and distance of sounds. In some animals, the skull also has a defensive function (e.g. horned ungulates); the frontal bone is where horns are mounted.
The skull is made of a number of fused flat bones.
The skull of fishes is formed from a series of only loosely connected bones. Lampreys and sharks possess only a cartilaginous endocranium, with both the upper and lower jaws being separate elements. Bony fishes have additional dermal bone, forming a more or less coherent skull roof in lungfish and holostean fish. The lower jaw defines a chin.
The simpler structure is found in jawless fish, in which the cranium is normally represented by a trough-like basket of cartilaginous elements only partially enclosing the brain, and associated with the capsules for the inner ears and the single nostril. Distinctively, these fish have no jaws.
Cartilaginous fish, such as sharks and rays, also have simple, and presumably primitive, skull structures. The cranium is a single structure forming a case around the brain, enclosing the lower surface and the sides, but always at least partially open at the top as a large fontanelle. The most anterior part of the cranium includes a forward plate of cartilage, the rostrum, and capsules to enclose the olfactory organs. Behind these are the orbits, and then an additional pair of capsules enclosing the structure of the inner ear. Finally, the skull tapers towards the rear, where the foramen magnum lies immediately above a single condyle, articulating with the first vertebra. There are, in addition, at various points throughout the cranium, smaller foramina for the cranial nerves. The jaws consist of separate hoops of cartilage, almost always distinct from the cranium proper.
In ray-finned fishes, there has also been considerable modification from the primitive pattern. The roof of the skull is generally well formed, and although the exact relationship of its bones to those of tetrapods is unclear, they are usually given similar names for convenience. Other elements of the skull, however, may be reduced; there is little cheek region behind the enlarged orbits, and little, if any bone in between them. The upper jaw is often formed largely from the premaxilla, with the maxilla itself located further back, and an additional bone, the symplectic, linking the jaw to the rest of the cranium.
Although the skulls of fossil lobe-finned fish resemble those of the early tetrapods, the same cannot be said of those of the living lungfishes. The skull roof is not fully formed, and consists of multiple, somewhat irregularly shaped bones with no direct relationship to those of tetrapods. The upper jaw is formed from the pterygoids and vomers alone, all of which bear teeth. Much of the skull is formed from cartilage, and its overall structure is reduced.
The skulls of the earliest tetrapods closely resembled those of their ancestors amongst the lobe-finned fishes. The skull roof is formed of a series of plate-like bones, including the maxilla, frontals, parietals, and lacrimals, among others. It overlies the endocranium, corresponding to the cartilaginous skull in sharks and rays. The various separate bones that compose the temporal bone of humans are also part of the skull roof series. A further plate composed of four pairs of bones forms the roof of the mouth; these include the vomer and palatine bones. The base of the cranium is formed from a ring of bones surrounding the foramen magnum and a median bone lying further forward; these are homologous with the occipital bone and parts of the sphenoid in mammals. Finally, the lower jaw is composed of multiple bones, only the most anterior of which (the dentary) is homologous with the mammalian mandible.
In living tetrapods, a great many of the original bones have either disappeared, or fused into one another in various arrangements.
Living amphibians typically have greatly reduced skulls, with many of the bones either absent or wholly or partly replaced by cartilage. In mammals and birds, in particular, modifications of the skull occurred to allow for the expansion of the brain. The fusion between the various bones is especially notable in birds, in which the individual structures may be difficult to identify.
The fenestrae (from Latin, meaning windows) are openings in the skull.
The temporal fenestrae are anatomical features of the skulls of several types of amniotes, characterised by bilaterally symmetrical holes (fenestrae) in the temporal bone. Depending on the lineage of a given animal, two, one, or no pairs of temporal fenestrae may be present, above or below the postorbital and squamosal bones. The upper temporal fenestrae are also known as the supratemporal fenestrae, and the lower temporal fenestrae are also known as the infratemporal fenestrae. The presence and morphology of the temporal fenestra are critical for taxonomic classification of the synapsids, of which mammals are part.
Physiological speculation associates the temporal fenestrae with a rise in metabolic rates and an increase in jaw musculature. The earlier amniotes of the Carboniferous did not have temporal fenestrae, but two more advanced lines did: the synapsids (mammal-like reptiles) and the diapsids (most reptiles and, later, birds). As time progressed, the temporal fenestrae of diapsids and synapsids became larger and more modified, allowing stronger bites and larger jaw muscles. Dinosaurs, which are sauropsids, have large advanced openings, and in their descendants, the birds, the temporal fenestrae have been modified further. Mammals, which are synapsids, possess no fenestral openings in the skull, as the trait has been modified over time. They do, though, still have the temporal orbit (which resembles an opening) and the temporal muscles; it is a hole in the head situated to the rear of the orbit, behind the eye.
There are four types of amniote skull, classified by the number and location of their temporal fenestrae. These are:
- Anapsida – no openings
- Synapsida – one low opening (beneath the postorbital and squamosal bones)
- Euryapsida – one high opening (above the postorbital and squamosal bones); euryapsids actually evolved from a diapsid configuration, losing their lower temporal fenestra.
- Diapsida – two openings
In humans, as in other mammals, the aforementioned division of the skull into the cranium and mandible is not usually followed. Instead, for the purposes of describing their anatomy and enumerating their bones, mammalian and human skulls are divided differently: They are deemed to consist of two categorical parts, the neurocranium and the viscerocranium. The neurocranium (or braincase) is a protective vault surrounding the brain. The viscerocranium (also splanchnocranium or facial skeleton) is formed by the bones supporting the face. Both parts have different embryological origins.
The jugal is a skull bone found in most reptiles, amphibians, and birds. In mammals, the jugal is often called the malar or zygomatic.
The prefrontal bone is a bone separating the lacrimal and frontal bones in many tetrapod skulls.
- Chondrocranium, a primitive cartilaginous skeletal structure
- Pericranium, a membrane that lines the outer surface of the cranium
- The hyoid bone and the ossicles are joined together with synarthroses, but despite their location, they are not normally considered skull bones.
- Dept of Anth Skull Module
- Skull Anatomy Tutorial.
- Bird Skull Collection Bird skull database with very large collection of skulls (Agricultural University of Wageningen) |
For many centuries, the same few materials have served as the basic structural supports for buildings: concrete, wood, and steel. However, new research has created the potential for structural support systems manufactured from fiber-reinforced polymer (FRP) composites.
An FRP structure uses a combination of high-performance polymer resins, carbon and glass reinforcement fibers, and a foam core to create a highly stable, yet still flexible, structural support system that is inexpensive and highly useful. Such structures have been used successfully in the marine, renewable energy, and aerospace markets for some 40 years with favorable results.
Now, FRP structures are available for use in architectural and civil structures. There are many benefits to using FRP, including the ability to form unique shapes, the freedom to use structural elements as design features, and a simpler way to create curved forms. FRP is also resistant to structural damage, corrosion, fire, and environmental damage. The cost of FRP is also lower than that of some materials, such as steel, while the strength of the material is just as high.
Because of these benefits, the structure requires less maintenance, which cuts down on maintenance time and expense. Buildings that use the foam structural cores will find that the chances of the structural support catching fire are much lower, and the structure is impervious to flooding. FRP is not invincible, as it can still be damaged by earthquakes and other shifts in the ground, but the benefits of the material far outweigh any downsides.
Foam fabricating is the manufacturing of a lightweight, versatile, polymer-based material. The material, such as plastic or polyurethane, is frothed up while in a molten state and then cooled, which fills the material with countless little bubbles, giving it an appearance similar to a sponge.
Foam fabricating is a broad term used to describe types of foam, applications and uses of those types and the products that can be formed from foam materials. Foam is simply a substance formed that consists of a number of air bubbles trapped in a liquid or a solid. Foam fabricators classify their product by two categories: open-cell foam and closed-cell foam. Closed-cell foam contains foam cells which are sealed, or "closed" and separate from one another. This foam is very dense and has high compressive strength. Because the cells are not broken in this foam and gas and liquid molecules do not freely travel from cell to cell, the cells expand when exposed to heated gas. The expansion fills the material, making closed cell foam an excellent heat insulator, like spray foam. Open-cell foams are lightweight, spongy, soft foams in which the cell walls or surfaces of the bubbles are broken and air fills all of the spaces in the material. This makes the foam soft and weak; as a result, open-cell foams are commonly used for foam padding and foam cushions. In addition, open cell foams are effective as sound barriers, having about twice the sound resistance as closed cell foam. The medical, construction, automotive, electronics and furniture industries all use foam products.
Foam fabricating services, such as foam cutting, work with many types of foam for numerous applications. The most common foam used is polyurethane foam, which is resilient closed-cell foam that biodegrades in direct and indirect sunlight. Typical applications for urethane foam include surgical scrubbers, x-ray positioning pads, EKG pads, insulation foam, protective foam padding, flexible foam seating and custom insulated containers. Polyethylene foam is a closed-cell, expanded, extruded, flexible plastic foam with predictable shock absorbing qualities. Used mainly as a protective packaging material, polyethylene foam is used to wrap products such as computer components, frozen foods, furniture, signs, sporting goods and clothing. Ethafoam is polyethylene foam that offers excellent shock absorption qualities and is often used for blocking, cushioning and bracing protection in material handling and shipping. Polyether foam is low-cost polyurethane foam that provides good cushioning and has acoustic and packaging properties. PVC foam is closed-cell vinyl foam that is pliable and soft and used in gaskets to prevent water transmission. Expanded polystyrene (EPS) is being used in some surprising applications: as padding foam in bedroom slippers, filter foam in air conditioning units, insulation foam, oil rigs, weather balloons and satellites.
Foam can be made from a variety of materials including plastic, low density elastomers and rubber. Typically formed from polymers, foam is made by mixing a number of chemicals and adding a gassing agent. The addition of the gassing agent causes the material to expand and form a foam strip. The foam is comprised of numerous gas bubbles trapped in the material. After foam is formed, a variety of foam fabricating services can be performed, including several types of foam cutting processes. Die cutting is a common foam fabricating process, in which different shapes are cut out of foam strips, blocks or sheets. Water jet cutting uses a fine stream of water under ultra high pressure to perform the same function as die cutting, but offers extremely close tolerances that die cutting cannot achieve. Hot wire cutting utilizes a heated wire in order to form smooth, straight cuts in the foam. There are also several types of foam forming processes. A popular foam forming process is thermal-forming, in which bulk foam materials are heated in order to produce machining shapes like foam sheets. Another forming process, foam felting, produces denser foam materials by means of compressing and curing thick, soft foam materials.
Foam fabricating services produce a large amount of scrap. The first major source of scrap is foam production itself, along with foam die cutting processes: scrap is produced during the startup and shutdown of the production line, when manufacturing runs are changed over, and when foam blocks are cut and shaped into the desired end product. The second major source of foam scrap is foam products that have reached the end of their useful life. Scrap foam is often shredded and rebonded and then used for such products as carpet padding and filler for pillows and furniture. Foam scrap can also be burnt in order to reduce waste bulk; however, this was more popular in past decades. Although the burning of foam is considered to be non-toxic by U.S. government agencies, as a result of growing environmental concerns and more stringent carbon dioxide emission regulations, many foam manufacturers have turned to recycling as a waste handling method. Recycling offers manufacturers the ability to recoup a return on investment that would be lost by burning the scrap. Although the use of recyclable foam materials is growing, the process of collecting the foam, separating out the contaminants and then shipping the foam economically can be time consuming and costly for those foam manufacturers wanting to recycle.
- A material utilized to alter the properties, processing or final use of a base polymer. The quantity of additive is usually articulated in terms of parts per hundred of the total resin in the polymer formulation.
- The quantity of air that can flow through a two foot by two foot by one foot foam sample with a five inch water pressure differential. Air flow is expressed in cubic feet per minute.
- Voids in molded foam parts that are the result of the entrapment of air pockets occurring during mold fill out. Air traps are characterized by shiny, smooth surfaces.
- A category of compounds that catalyze reactions in polyurethane foam production.
- Foam containing electrically conductive material in order to prohibit static electricity buildup or to promote static discharge. Anti-static flexible polyurethane foam is used mainly for packaging electronic components.
- An additive that supplements the main blowing agent water in the production of foam and could create softer or lighter foam.
- A test technique that measures the surface resilience of flexible polyurethane foam by dropping a steel ball of a specified mass from a certain height onto the foam sample. The ball rebound value is the ball rebound height as a percentage of the height of the fall.
- Large, irregular cells found beneath the surface of the skin of a molded foam part.
- The method of foaming flexible polyurethane in production. Blowing happens when toluene diisocyanate and water react to create CO2.
- The blending of two or more components into a composite. Foam is typically attached to other foam grades or polyester fiber.
- The contouring or shaping of flexible polyurethane foam pieces by the removal of foam with abrasives.
- A section of foam cut from a constantly produced slab stock kind of foam.
- The hollow space left behind in the structure of polyurethane foam encased by polymer membranes or the polymer skeleton after blowing is finished.
- Flexible polyurethane foams produced without using chlorofluorocarbons as auxiliary blowing agents.
- A process in which high-resiliency foam is produced. Pouring is carried out without heat and foam is cured at or near room temperature.
- An additive that will decrease the ability of flexible polyurethane foam to ignite or make it burn more slowly.
- Also known as compression load deflection (CLD), it is a calculation of the load-bearing capability of a foam.
- A process involving special cutting equipment to create a foam sheet with dimples.
- The capability of a flexible polyurethane foam to return to its natural state from the pinched results of die cutting.
- A process in which the mold lid is closed and locked in molded foam production and the foaming mixture is injected through ports in the lid of the mold.
- The cutting of foam with a specialized saw into patterns from a foam block, creating a custom foam part.
- The inner area of foam, away from the outer skin.
- A procedure, typically mechanical- or vacuum-assisted, in which the closed cells of a high resilience slab stock or molded foam are opened.
- Foam with low resiliency that does not quickly regain its original shape after deformation.
- A method in which the shape of the foam is altered from its original state through compression or heat.
- The cutting out of parts from foam using a process that is similar to stamping out the part. It is good for long duration runs of cut parts that necessitate uniformity in size.
- The boring of holes into a foam to enhance air flow, provide for greater ease of button application in tufted design and to make the foam feel softer.
- Polymers that, when undergoing deformation, resist and recover in a way similar to that of natural rubber.
- Also called "flame bonding" it is the process of bonding flexible foam to a fabric, film or other material by melting the surface of the foam with a flame source and quickly pressing it to the material before the foam resolidifies.
- A kind of polyurethane foam created with a combination of polymer or graft polyols. This foam is not as uniform in its cell structure in comparison to conventional products, which enhances the comfort, support, resilience and bounce of the foam.
- The cutting of foam using high-temperature wires instead of a saw blade. Hot wire cutting is generally used for cutting intricate parts.
- A quick way to refer to the group of diisocyanates that are one of the two primary ingredients in the chemical process from which polyurethane foam is produced.
- A method of bonding layers of foam together in a simple composite. Laminating could be attained with adhesives or with heat processes, such as flame lamination.
- Method of cutting thin sheets from a foam cylinder.
- The higher-density exterior surface of foam, typically resulting from the foam surface cooling at a higher rate than the core.
- Flexible polyurethane foam produced by the constant pouring of mixed liquids onto a conveyor, which creates a continuous loaf of foam.
- Method a foam cutter uses for cutting sheets from a rectangular foam block.
- Significant hollow spaces that inadvertently form in foam structures. Voids are typically the result of inaccurate mold filling or inadequate moldability. |
Health authorities are on the alert for dengue fever and have issued some advice to mitigate the spread of the potentially deadly disease. In September, the Ministry of Health said laboratory tests have confirmed 180 cases of dengue fever in Jamaica since the start of 2012 out of 472 suspected cases.
WHAT IS DENGUE?
Dengue is a viral infection found in tropical and sub-tropical regions. It is spread through the Aedes aegypti mosquito when the female bites an infected person and then bites other people.
THE FACTS OF DENGUE
- Dengue fever is caused by a virus transmitted by the Aedes aegypti mosquito
- One adult Aedes aegypti mosquito will live for a month and will lay around 300 or more eggs during its lifetime
- They lay their eggs in water around the house and breed in densely populated areas
- The global incidence of dengue has grown dramatically in recent decades. About half of the world's population is now at risk.
- Over 2.5 billion people – over 40% of the world's population – are now at risk from dengue
- WHO currently estimates there may be 50–100 million dengue infections worldwide every year.
- Severe dengue is a leading cause of serious illness and death among children in some Asian and Latin American countries.
- There is no specific treatment for dengue/ severe dengue, but early detection and access to proper medical care lowers fatality rates below 1%.
- Severe dengue (previously known as Dengue Haemorrhagic Fever) was first recognized in the 1950s during dengue epidemics in the Philippines and Thailand.
- Severe dengue affects most Asian and Latin American countries and has become a leading cause of hospitalization and death among children in these regions.
- There are four distinct serotypes of the virus that cause dengue. Recovery from infection by one provides lifelong immunity against that particular serotype. However, subsequent infections by other serotypes increase the risk of developing severe dengue.
- The Aedes aegypti mosquito lives in urban habitats and breeds mostly in man-made containers.
- Unlike other mosquitoes, Aedes aegypti is a daytime feeder; its peak biting periods are early in the morning and in the evening before dusk.
- Female Aedes aegypti bites multiple people during each feeding period.
- Persistent high fever (40°C/ 104°F)
- Severe headache with pain behind the eyes
- Rash on chest and arms
- Loss of appetite and no taste in the mouth
- Nausea and vomiting
- Muscle and joint pain
- Symptoms usually last from two to seven days
- Severe dengue is a potentially deadly complication due to plasma leaking, fluid accumulation, respiratory distress, severe bleeding, or organ impairment.
IF YOU CATCH DENGUE…
- If you suspect that you may have dengue fever, visit your doctor in order to have the diagnosis confirmed and a report made to the Ministry of Health
- Dengue fever usually resolves on its own within a week. Most people who have dengue will have only very mild symptoms
- If you are diagnosed with dengue or suspect that you have it, avoid aspirin, as aspirin thins the blood and can lead to bleeding.
- Acetaminophen or regular pain killers can be used to help with the pain
- Drink lots of fluids and rest
DENGUE PREVENTION TIPS
- Repair leaky pipes and outdoor taps
- Keep grass cut short and shrubbery well trimmed. Adult mosquitoes tend to hide in shrubbery
- Remove all possible areas that allow mosquitoes to breed
- Keep house plants in damp soil instead of water
- Keep flower pot saucers dry and avoid over-watering potted plants
- Empty and scrub flower vases twice weekly. Mosquito eggs can live for months in an empty vase and hatch once they become wet. Scrubbing the insides will remove any attached egg
- Keep refrigerator drawers dry
- Punch holes in the bottom of tins before placing them in the garbage
- Cover trash containers to keep rain water out
- Properly cover drums, barrels, tanks, buckets and any other containers used to store water
- Search in and around your home for anything that might hold stagnant water such as an empty flower pot, coconut shells, tyres, etc. and dispose of them
- Commercially available insecticides can be used to spray mosquitoes
- Mosquito bites can be prevented by using repellent that contains DEET.
- Sleep under a mosquito net
- Close windows before it gets dark
- Open windows and doors during fogging
'Dengue and severe dengue: Fact Sheet'. World Health Organisation. http://www.who.int/mediacentre/factsheets/fs117/en/. Accessed September 11, 2012 |
A default route of a computer that is participating in computer networking is the packet forwarding rule (route) taking effect when no other route can be determined for a given Internet Protocol (IP) destination address. All packets for destinations not established in the routing table are sent via the default route. This route generally points to another router, which treats the packet the same way: If a route matches, the packet is forwarded accordingly, otherwise the packet is forwarded to the default route of that router. The process repeats until a packet is delivered to the destination. Each router traversal counts as one hop in the distance calculation for the transmission path.
The route evaluation process in each router uses the longest prefix match method to obtain the most specific route. The network with the longest subnet mask that matches the destination IP address is the next-hop network gateway.
The default route in Internet Protocol Version 4 (IPv4) is designated as the zero address 0.0.0.0/0 in CIDR notation, often called the quad-zero route. The subnet mask is given as /0, which effectively matches all networks and is the shortest match possible. A route lookup that does not match any other route falls back to this route. Similarly, in IPv6, the default route is specified by ::/0.
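As an illustration of longest-prefix-match lookup with a quad-zero default route, here is a minimal Python sketch; the routes and next-hop addresses are made-up examples, not a real router implementation:

```python
# Minimal sketch: longest-prefix-match route lookup with a default route as the fallback.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.1.0.0/16"): "10.1.0.1",    # a broad internal route
    ipaddress.ip_network("10.1.5.0/24"): "10.1.5.254",  # more specific, preferred when it matches
    ipaddress.ip_network("0.0.0.0/0"): "192.0.2.1",     # default route (quad-zero, /0 mask)
}

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]  # /0 always matches
    best = max(matches, key=lambda net: net.prefixlen)       # longest prefix wins
    return routing_table[best]

print(next_hop("10.1.5.20"))  # 10.1.5.254 (most specific match)
print(next_hop("10.1.9.9"))   # 10.1.0.1
print(next_hop("8.8.8.8"))    # 192.0.2.1 (falls back to the default route)
```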
In the highest-level segment of a network, administrators generally point the default route for a given host towards the router that has a connection to a network service provider. Therefore, packets with destinations outside the organization's local area network, typically destinations on the Internet or a wide area network, are forwarded to the router with the connection to that provider.
- "RFC1519: Classless Inter-Domain Routing (CIDR): an Address Assignment and Aggregation Strategy". IETF (Internet Engineering Task Force). "Note that the degenerate route 0.0.0.0 mask 0.0.0.0 is used as a default route and MUST be accepted by all implementations." |
In World War II, American POWs (prisoners of war) received maps, compasses and real money hidden in Monopoly games.
The idea came from John Waddington Ltd., the company that manufactured Monopoly board games in England, and the British Secret Service. Knowing that they could send POWs board games along with essential items like clothing, they used several fake charities to send Monopoly games to prisoner camps in Nazi-occupied areas. They hid things like maps printed on silk cloth, compasses, and money inside the games. The manufacturer and the Secret Service also used various codes on the game boxes to make sure that the correct maps were sent to the correct areas. It wasn't only the American POWs who benefited from the service, but also POWs of other Allied powers. It is believed quite a few POWs made their escape with the help of this assistance.
Board games weren't the only things the Allied powers used to send assistance to prisoners of war during World War II, however. Special escape kits were also hidden in items like pens, cigarette tins and playing cards.
Psychometrics: Validity. Concerned with what the test measures and how well it does so. Slide 2: How well is the test measuring the domain of knowledge that it is supposed to measure?
It measures systematic errors--something specific about the test is missing.
This is unlike reliability, which measures random errors. Example of Systematic Error: the factor of language within a test.
The test is supposed to measure coordination, but there is a big language component because the person has to read and understand the instructions. Therefore, the systematic error is that language is being tested along with coordination. Slide 4: Sometimes this happens with educational tests where the professor uses unusually difficult language, thereby reducing the validity of the test.
To improve the validity of a test, you try to reduce the systematic errors. Slide 5: Note that you can have good reliability and poor validity.
For example, a test can measure something consistently, but not be accurate.
However, it does no good to have a test that is reliable, but with poor validity. Slide 6: Conversely, you don’t want a test with good validity and poor reliability.
This type of test would measure a given trait, but could not be counted upon to measure it consistently. Types of Validity: The types of validity studies done depend on the purpose of the test
The four types of validity are:
face, content, criterion-related, and construct. Content Validity: This type is mostly concerned with achievement tests
It is used specifically when a test is attempting to measure a defined domain
Content validity indicates whether or not the test items adequately represent that domain. Slide 9: Content validation begins during the development of a test.
Usually a Table of Specifications is built in order to ensure that the entire domain is represented by the items in a test. Table of Specifications: Miller Assessment for Preschoolers. Slide 11: The table of specifications compares subtests or specific items to the behavioral domain being tested.
In the previous slide we see 3 behavioral domains and four subtests.
The x’s in the boxes tell you that those domains are being tested.
In the end, all domains should be represented in the test. Slide 12: Another method of content validation is by using experts in the field
The test is sent out to experts who review the test and the domains to be evaluated.
This is used in conjunction with the table of specifications. When Is It Appropriate to Do Content Validation? Appropriate for:
Tests related to occupation: employment, classification, job tasks.
(no specific domain of knowledge to be tested). Criterion-Related Validity: Indicates the effectiveness of a test in predicting an individual's performance on specific activities.
Performance is checked against a criterion
Criterion: A direct and independent measure of what the test is designed to predict.
Example: for a test of vocational aptitude, the criterion might be job performance. Slide 15: There are two types of criterion-related validity: concurrent and predictive. These two types are differentiated by the time period between the test and the criterion.
Concurrent: Short time period between test and criterion.
Predictive: Long time period between test and criterion. Concurrent Validity. Example: A test is developed to identify individuals with tactile hypersensitivity.
Using concurrent validity, the goal would be to see how well the test can identify who has tactile hypersensitivity.
Two groups of subjects are needed: one group known to have tactile hypersensitivity, one group known to have normal tactile function. Slide 17: The test is then given to both groups of people.
If the test has concurrent validity, it will accurately identify those who have tactile hypersensitivity and those who are normal.
Look for a high classification rate: perhaps 90% or so. Slide 18: Another method of concurrent validity:
One group of subjects takes the newly developed test, and also takes a test that is established in the field.
The results on these two tests are compared (a minimal sketch of such a comparison follows).
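This sketch uses hypothetical scores and a simple Pearson correlation; it is one plausible way to do the comparison, not a prescribed procedure:

```python
# Minimal sketch with hypothetical scores: comparing a newly developed test against an
# established test given to the same subjects, one simple look at concurrent validity.
from scipy.stats import pearsonr

new_test    = [12, 15, 9, 20, 17, 11, 14, 18]   # scores on the new test
established = [10, 16, 8, 21, 15, 12, 13, 19]   # same subjects' scores on the established test

r, p = pearsonr(new_test, established)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
# A reasonably high r suggests the new test orders subjects much like the established one.
```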
Hopefully the new test will be as accurate as the old test. Example Using the MAP and DDST. What Does This All Mean? You can see that about 70% of the kids were classified as normal by both tests.
However, 22% that were classified as normal by the DDST were classified as questionable by the MAP.
3% of the kids who were questionable on the DDST were normal on the MAP. Is the MAP a Better Test Than the DDST? It appears that the MAP identified 24% more kids who potentially had problems (22% in the yellow, questionable category and 2% in the red, abnormal category).
However, predictive studies are needed when these kids reach school age--to see if the MAP was accurate. Predictive Validity: Similar to concurrent validity, but with a longer time period between testing and measurement of the criterion.
Need a group of people who can be studied long term. This is called a longitudinal study.
Everyone in the group is given the test and scores are tabulated. Slide 23: Then you wait a period of time--usually months or years, but it could be shorter depending upon what is being measured.
After a period of time, the criterion is measured. In the example used previously, after time do the children in the group develop tactile hypersensitivity? Slide 24: The test that was given previously is then compared to their hypersensitivity status.
Did the test accurately predict which kids would develop hypersensitivity?
If so, then the test has predictive validity. Slide 25: Predictive validity takes a long time to establish, and a test may have predictive validity studies going on for years.
In the meantime, concurrent validity studies help establish that the test does indeed measure a specific criterion. Construct Validity : Construct Validity Construct: an unobservable trait that is known to exist.
Examples: IQ, motivation, self-esteem, motor planning, anxiety
How can we be sure a test measures these constructs if we can’t directly measure them? Slide 27: Ways to assess construct validity include:
Correlations with other tests
Convergent and discriminant validation Developmental Changes : Developmental Changes Employs the idea of age differentiation
Useful for any test that is developmental in nature.
Since abilities increase with age (during childhood), it is logical that test scores will also increase with age
This is one measure of construct validity, but is not conclusive. Slide 29: In other words, determining construct validity by developmental changes alone is not sufficient.
There needs to also be other measures of construct validity. Correlations with Other Tests : Correlations with Other Tests Correlations between the new test and established tests helps to establish construct validity.
The idea is that if the correlation is relatively high, the new test measures the same traits as the established test.
Want moderate as opposed to very high correlations (if correlations are very high, you have to wonder whether the new test is really necessary). Factor Analysis : Factor Analysis FA is a multivariate statistical technique which is used to group multiple variables into a few factors.
In doing FA you hope to find clusters of variables that can be identified as new factors. Example: Factor Analysis : Example: Factor Analysis In the standardization of the Miller Assessment for Preschoolers (MAP) over 1000 children were tested using that assessment.
The FA study looked at the interrelationships between the various subtests and came up with 6 primary factors. Factor Analysis of the MAP : Factor Analysis of the MAP Convergent and Discriminant Validation : Convergent and Discriminant Validation The idea is that a test should correlate highly with other similar tests, and
The test should correlate low with tests that are very dissimilar. Example : Example A newly developed test of motor coordination should correlate highly with other tests of motor coordination.
It should also have low correlations with tests that measure attitudes. |
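To make the convergent and discriminant idea concrete, here is a minimal sketch (my own illustration, not part of the original slides) of the correlations a test developer would inspect; the score arrays are made-up placeholder data.

```python
import numpy as np

# Hypothetical scores for 10 children on three measures (placeholder data).
new_motor_test = np.array([12, 15, 9, 20, 14, 11, 18, 16, 10, 13])
established_motor_test = np.array([30, 34, 25, 42, 33, 27, 40, 36, 26, 31])
attitude_scale = np.array([7, 3, 9, 5, 6, 8, 4, 5, 7, 6])

# Convergent validity: the new motor test should correlate highly
# with another motor-coordination measure.
r_convergent = np.corrcoef(new_motor_test, established_motor_test)[0, 1]

# Discriminant validity: it should correlate weakly with an unrelated
# construct such as attitudes.
r_discriminant = np.corrcoef(new_motor_test, attitude_scale)[0, 1]

print(f"convergent r = {r_convergent:.2f}, discriminant r = {r_discriminant:.2f}")
```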
Definition of SCREWWORM
: a blowfly (Cochliomyia hominivorax) of the warmer parts of America whose larva develops in sores or wounds or in the nostrils of mammals including humans with serious or sometimes fatal results; especially: its larva
: any of several flies other than the screwworm and especially their larvae which parasitize the flesh of mammals
: either of two dipteran flies of the genus Cochliomyia: a: one (C. hominivorax) of the warmer parts of America whose larva develops in sores or wounds or in the nostrils of mammals including humans with serious or sometimes fatal results; especially: its larva b: secondary screwworm
: a dipteran fly of the genus Chrysomyia (C. bezziana) that causes myiasis in the Old World
Any of several North and South American blowfly species named for the screwlike appearance of the larva's body, which is ringed with small spines. Screwworms attack livestock and other animals, including humans. The true screwworm (Cochliomyia hominivorax) and the secondary screwworm (Callitroga macellaria) develop in decaying flesh; they may also attack healthy tissue. Each female deposits 200–400 eggs near an open wound. The larvae burrow into the tissue and, when mature, drop to the ground to pupate. Severe infestations (myiasis) may kill the affected animal. |
At the beginning of 1965, the U.S. seemed on the cusp of a golden age. Although Americans had been shocked by the assassination in 1963 of President Kennedy, they exuded a sense of consensus and optimism that showed no signs of abating. Indeed, political liberalism and interracial civil rights activism made it appear as if 1965 would find America more progressive and unified than it had ever been before. In January 1965, President Lyndon Johnson proclaimed that the country had "no irreconcilable conflicts."
Johnson, who was an extraordinarily skillful manager of Congress, succeeded in securing an avalanche of Great Society legislation in 1965, including Medicare, immigration reform, and a powerful Voting Rights Act. But as esteemed historian James T. Patterson reveals in The Eve of Destruction, that sense of harmony dissipated over the course of the year. As Patterson shows, 1965 marked the birth of the tumultuous era we now know as "The Sixties," when American society and culture underwent a major transformation. Turmoil erupted in the American South early in the year, when police attacked civil rights demonstrators in Selma, Alabama. Many black leaders, outraged, began to lose faith in nonviolent and interracial strategies of protest. Meanwhile, the U.S. rushed into a deadly war in Vietnam, inciting rebelliousness at home. On August 11th, five days after Johnson signed the Voting Rights Act, racial violence exploded in the Watts area of Los Angeles. The six days of looting and arson that followed shocked many Americans and cooled their enthusiasm for the president's remaining initiatives. As the national mood darkened, the country became deeply divided. By the end of 1965, a conservative resurgence was beginning to redefine the political scene even as developments in popular music were enlivening the Left.
In The Eve of Destruction, Patterson traces the events of this transformative year, showing how they dramatically reshaped the nation and reset the course of American life.
Powered by human energy, non-motorised transport is the cleanest form of transport, with no emissions. NMT is also the most space-efficient form of transport. For example, new bike rack designs show up to 15 bikes can fit in a single parking space for cars.
Being able to walk or cycle to reach destinations makes the city environment more enjoyable and raises the standard of living. For example, pedestrianisation projects can reduce pollution, increase economic activity and create vibrant public spaces. The majority of urban trips are short distance, making them ideal for walking and cycling. On a bike, the travel range can increase significantly.
Cities must move from a car-dependent to multi-modal transport system in transitioning to a green economy. When sidewalks and cycling paths are integrated into public transport networks, more people walk, cycle or commute instead of using a private motor vehicle. This is the kind of modal shift highlighted by the International Energy Agency (IEA) as one of the three main sources of greenhouse gas reductions in transport.
Walking and cycling road infrastructure encourages people to either stay out or get out of cars. It also promotes more healthy lifestyles.
Although walking is still the most widely used mode of transport, road infrastructure has not been built for people on foot, let alone cyclists. Since the advent of the motor vehicle, road development has continuously pushed the majority of people onto less and less space. The result is that those who can afford it travel by car, while those who cannot must compete for road space with high-speed motor traffic, often risking their very lives.
Designating road space for pedestrians and cyclists in proportion to the demand for NMT is crucial. It is also one of the most cost-effective actions for saving hundreds of thousands of lives. For example, the top two countermeasures for improving safety in Nairobi, Kenya, recommended by the International Road Assessment Programme (iRAP) are pedestrian crossings and sidewalks.
Safe road infrastructure for all users is emphasized in the global action plan for the UN Decade of Action for Road Safety, 2011-2020, spearheaded by the Make Roads Safe Campaign.
The lack of NMT facilities is one of the top reasons why pedestrians and cyclists make up a disproportionate amount of the 1.2 million who die in road crashes each year.
Cyclists need less than a third of the road space that is used by a car, while a pedestrian only needs a sixth of that space. More people using non-motorised transport means that limited land space is optimized for maximal accessibility.
NMT facilities, such as cycling lanes connected in a network, mean better accessibility for the whole society, especially for vulnerable groups such as the urban poor. Up to 25% of urban household income can be spent on public transport costs. Either to save these high costs or because it cannot be afforded, the urban poor often walk for hours to reach their school or place of employment.
For everyone, well-designed sidewalks and crosswalks are necessary to ensure journeys can be made as safely and quickly as possible. A city-wide cycling network can not only lower household transport expenditures but also increase the travel range. Many cities are developing public bike share systems that provide people with both a bicycle and the necessary road infrastructure to use the bicycle to meet their mobility needs.
Congestion is a major headache for cities worldwide. More people using NMT means less congestion since there are fewer cars on the road.
Taken together, such investments contribute towards sustainable development by promoting all three pillars of environmental, social and economic sustainability in the context of urban road transport. Developing systematic road investment policies for NMT is important for poverty reduction and achieving the Millennium Development Goals (MDGs), both directly and indirectly.
Massive savings are possible by reducing the high economic costs of urban air pollution (5% of GDP in developing countries in healthcare costs alone) and poor road safety (up to 100 million USD a year in low- and middle-income countries, where 90% of road crashes occur). Massive savings are also possible by reducing transport expenditures at both the household and national level, for example by reducing the demand for fuel imports.
Also, for millions, especially the urban poor, the bicycle, or handcart, is a tool to earn a livelihood, such as delivery services or taxis. When these people can go about their work on safe and convenient road infrastructure, the result is upward economic mobility.
Investing in walking and cycling road infrastructure is a win-win-win situation: reducing harmful air pollutants & climate emissions, improving road safety and increasing accessibility for all. |
Help For Aspergers Students Who Are Bullied
What do you know about the bullying of Aspergers children in schools? Here are the facts:
1. Although there is no consistent evidence that bullying overall is increasing, one area of growing concern is cyber-bullying, especially among older children.
2. Being bullied at school typically has negative effects on the physical and psychological well-being of those kids who are frequently and severely targeted.
3. Bullying can be categorized as physical, verbal and gestural.
4. Bullying has been reported as occurring in every school and kindergarten or day-care environment in which it has been investigated.
5. Aspergers kids typically report being bullied less often as they get older, although being victimized tends to increase when they enter secondary school.
6. Gender differences have been found indicating that Aspergers boys are bullied physically more often than Aspergers girls. Female bullies are generally more often involved in indirect forms of aggression (e.g., excluding others, rumor spreading, manipulating of situations to hurt those they do not like).
7. There are differences in the nature and frequency of victimization reported by Aspergers kids according to age. Generally, bullying among younger kids is proportionately more physical; with older kids, indirect and more subtle forms of bullying tend to occur more often.
Bullying usually has three common features:
• it is a deliberate, hurtful behavior
• it is difficult for those being bullied to defend themselves
• it is repeated
There are three main types of bullying:
• indirect / emotional; spreading nasty stories, excluding from groups
• physical; hitting, kicking, taking belongings
• verbal; name-calling, insulting, racist remarks
Kids who bully:
• Are often attention seekers.
• Bully because they believe they are popular and have the support of the others.
• Find out how the teacher reacts to minor transgressions of the rules and wait to see if the ‘victim’ will complain.
• If there are no consequences to the bad behavior, if the victim does not complain, and if the peer group silently or even actively colludes, the bully will continue with the behavior.
• Keep bullying because they incorrectly think the behavior is exciting and makes them popular.
• Will establish their power base by testing the response of the less powerful members of the group, watching how they react when small things happen.
Kids who are bullied:
• Are desperate to ‘fit in’.
• Blame themselves and believe it is their own fault.
• Don’t have the support of the teacher or classmates who find them unappealing.
• Rarely seek help.
• Lack the confidence to seek help.
• Often have poor social skills.
Bullying commonly begins when an Aspergers youngster is (a) ‘picked on’ by another youngster or by a group of kids, (b) is unable to resist, and (c) lacks the support of others. It will continue if the kids doing the bullying have little or no sympathy for the peer they are hurting, and especially if they are getting some pleasure out of what they are doing – and if nobody stops them.
Bullying takes place mostly outside the school building at free play, recess or lunchtime. It may also happen on the way to or from the school, and especially on the school bus if there is not adequate supervision.
Bullying may sometimes occur in the classroom. Here it is usually of a more subtle, non-physical kind (e.g., cruel teasing, making faces at someone, repeatedly making unkind and sarcastic comments).
If the bullying is severe and prolonged, and the targeted youngster is unable to overcome the problem or get help, the following can happen:
• For years to come, the youngster may distrust others and find it impossible to make friends.
• He or she may lose friends and become isolated.
• School work may suffer.
• The youngster may become seriously depressed, disturbed or ill.
• The youngster may lose confidence and self-esteem.
• The youngster may refuse to go to preschool or school.
• The youngster may seek revenge, and in extreme cases, may use a weapon to get even.
How Parents Can Help—
1. Don't talk to the parents of the bullies. Parents become defensive when their youngster is accused of bullying, and the conversation will generally not be a productive one. Let the school administrators manage the communication with the parents.
2. Explore with the Aspergers youngster what leads up to the bullying. Very occasionally a youngster may be provoking others by annoying or irritating them, and can learn not to do so.
3. Find out what has been happening and how the youngster has been reacting and feeling.
4. Children are almost always reluctant to have a parent intervene, because they fear the social stigma of having their mothers/fathers fight their battles. However, it is up to you to intervene on your youngster's behalf with school administrators to ensure your youngster's physical and emotional well-being.
5. It never helps to say it’s the youngster’s problem and that he or she must simply stand up to the bullies, whatever the situation. Sometimes this course of action is impractical, especially if a group is involved. Nor does it help the youngster to be over-protective, for example, by saying: ‘Never mind. I will look after you. You don’t have to go to school’.
6. Maintain open communication with your kids. Talk to them every day about details small and large. How did their classes go? What do they have for homework that night? Who'd they sit with at lunch? Who'd they play with at recess? Listen carefully and be responsive to show interest. Your children will know if you're distracted or just going through the motions, so pay attention.
7. Make a realistic assessment of the seriousness of the bullying and plan accordingly.
8. Be observant and notice changes in mood and behavior. For instance, an Aspergers youngster may cry more easily, become irritable or experience difficulty sleeping. Younger kids may find it difficult to explain what is wrong. Talking it over with a youngster’s teacher may lead to a better understanding of what is happening. Simply listening sympathetically helps. Such support can reduce the pain and misery.
9. Some children in middle school or junior high would actually rather endure the bullying than have a parent intervene on their behalf just to avoid the social stigma of having mom or dad fight their battles. Leaving your youngster on his own to deal with bullying could result in a decline in academic performance, depression and, in extreme cases, suicide. You are the parent. Support your youngster lovingly, but do take the bully by the horns.
10. Sometimes it is wise to discuss with the youngster what places it might be best to avoid, and, on occasions, whom to stay close to in threatening situations.
11. Suggest to the youngster things to do when he or she is picked on. Sometimes by acting assertively or not over-reacting, the bullying can be stopped. It is always much better if kids, with a bit of good advice, can do something to help themselves.
12. Take complaints seriously, whether they be stories of physical bullying or verbal or psychological bullying. If your youngster is telling you about problems she has at school, you can bet that there is plenty that she hasn't told you about. By the time a youngster reveals her pain to you, the bullying has almost always been going on for a prolonged period.
How the School Can Help—
Early intervention and effective discipline and boundaries truly are the best way to stop bullying, but mothers/fathers of the victims cannot change the bully’s home environment. Some things can be done at the school level, however. Here are some tips for teachers:
1. Get the kid’s parents involved in a bullying program. If parents of the bullies and the victims are not aware of what is going on at school, then the whole bullying program will not be effective. Stopping bullying in school takes teamwork and concentrated effort on everyone’s part. Bullying also should be discussed during parent-teacher conferences and PTA meetings. Parental awareness is key.
2. Hand out questionnaires to all children and educators and discuss if bullying is occurring. Define exactly what constitutes bullying at school. The questionnaire is a wonderful tool that allows the school to see how widespread bullying is and what forms it is taking. It is a good way to start to address the problem.
3. In the classroom setting, all educators should work with the children on bullying. Oftentimes even the teacher is being bullied in the classroom, and a program should be set up that implements teaching about bullying. Kids understand modeling behaviors and role-play, and acting out bullying situations is a very effective tool. Have children role-play a bullying situation. Rules that involve bullying behaviors should be clearly posted. Schools also could ask local mental health professionals to speak to children about bullying behaviors and how it directly affects the victims.
4. Most school programs that address bullying use a multi-faceted approach to the problem. This usually involves counseling of some sort, either by peers, a school counselor, educators, or the principal.
5. Schools need to make sure there is enough adult supervision at school to lessen and prevent bullying.
Question: Hi. I go to the 8th grade. I have Aspergers and get picked on a lot. I have been bullied since kindergarten. How can I get the other kids to leave me alone?
Answer: Here’s what you do if someone is picking on you:
1. As much as you can, avoid the bullies. You can't go into hiding or skip class, of course. But if you can take a different route and avoid him, do it.
2. Don't hit, kick, or push back to deal with the bullies. Fighting back just satisfies them – and it's dangerous too. Someone could get hurt. You're also likely to get in trouble. It's best to stay with safe people and get help from an adult.
3. It’s very important to tell an adult. Find someone you trust and go and tell them what is happening to you. Teachers at school can all help to stop the bully. Sometimes bullies stop as soon as a teacher finds out because they're afraid that they will be punished. Bullying is wrong and it helps if everyone who gets bullied or sees someone being bullied speaks up.
4. Try your best to ignore the bullies. Pretend you don't hear them and walk away quickly to a safe place. Bullies want a big reaction to their teasing and meanness. Acting as if you don't notice and don't care is like giving no reaction at all, and this just might stop a bully's behavior.
5. Try distracting yourself (counting backwards from 100, spelling the word 'turtle' backwards, etc.) to keep your mind occupied until you are out of the situation and somewhere safe where you can show your feelings.
6. Pretend to feel really brave and confident. Tell the bully "No! Stop it!" in a loud voice. Then walk away, or run if you have to.
7. Two is better than one if you're trying to avoid being bullied. Make a plan to walk with a friend or two on the way to school or recess or lunch or wherever you think you might meet the bully.
8. When you're scared of another person, you're probably not feeling very brave. But sometimes just acting brave is enough to stop a bully. How does a brave person look and act? Stand tall and you'll send the message: "Don't mess with me."
9. Kids also can stand up for each other by telling a bully to stop teasing or scaring someone else, and then walk away together. If a bully wants you to do something that you don't want to do, say "no!" and walk away. If you do what a bully says to do, they will likely keep bullying you. Bullies tend to bully kids who don't stick up for themselves.
10. Feel good about yourself. A lot of kids get bullied. It doesn’t just happen to you.
The symptoms of C. difficile in children include watery diarrhea, nausea, loss of appetite, abdominal pain and fever, according to AboutKidsHealth. WebMD notes that in severe cases, the colon may become inflamed, blood or pus may be present in the stool, the child may become dehydrated and weight loss may occur.
AboutKidsHealth explains that C. difficile is a type of bacteria that resides in the digestive tract and produces toxins. Normally the "good" bacteria keep the C. diff bacteria under control, but when a child takes antibiotics, an imbalance in bacteria may occur that allows overgrowth of C. difficile. Because C. difficile bacteria are also found in the stool, infection can occur when someone comes in contact with the stool of an infected person and then touches her mouth.
C. difficile often resolves without treatment once a patient stops taking antibiotics and the balance of the bacteria in the digestive tract returns to normal. In some cases, medications are prescribed. If the patient becomes dehydrated, administration of intravenous fluids may be required. To keep a child from becoming infected with C. diff, WebMD recommends limiting the use of antibiotics whenever possible and avoiding the use of proton pump inhibitors, which are stomach acid-reducing medications. Also keep the child away from those who are infected with C. diff, clean contaminated surfaces thoroughly and have the child wash hands regularly.
Excess weight and obesity are growing problems that contribute to escalating rates of chronic conditions such as diabetes and cardiovascular disease. Globally, around 37% of adults were either overweight or obese in 2013. Of ten countries that have the highest rates of overweight and obese adults in the world, nine are in the Pacific region.
However, what is not widely known is that some of Australia’s less developed neighbours also have very high rates of child undernutrition. These countries have the “double burden” of overnutrition in adults and undernutrition in children.
Child undernutrition is defined by stunted linear growth: low height for a child’s age. By the age of two years, it is usually irreversible. Stunting leads to poor cognitive development, weak educational outcomes and reduced employment opportunities.
Improving nutrition is likely to have strong economic benefits for a developing country. Indeed, one study found that a child’s height-for-age index at two years of age was the best predictor of human capital.
Globally, an estimated 165 million children under five years of age suffered from stunting in 2011. Undernutrition caused just over three million child deaths in the same year – approximately 45% of all deaths in children under five.
While the highest rates of stunting are in South Asia and sub-Saharan Africa, some countries in Southeast Asia and the Pacific have very high rates, including Timor-Leste (50%), Papua New Guinea (44%), Indonesia (39%), Solomon Islands (33%), and Kiribati (33%). While there have been steady reductions in the prevalence of undernutrition in most of Asia over the past two decades, there has been almost no change in the Pacific region since 1990.
The causes of undernutrition are complex and have been divided into two groups: immediate and underlying. In addition, basic factors such as income poverty, low educational status, and cultural beliefs (such as food taboos during pregnancy and after childbirth) contribute.
Immediate causes include inadequate food intake; infectious diseases; inappropriate caring practices, such as failure to exclusively breastfeed until six months of age (as recommended by the World Health Organisation) and to give infants nutritionally diverse complementary feeding after six months; and ineffective treatment of undernutrition.
Underlying causes include lack of access to clean water and latrines; inadequate access by families to food all year round; low agricultural productivity; and gender inequalities that lead to women consuming less nutritious food (such as red meat) than men within the same household.
There is broad global consensus on the types of interventions that most effectively address child undernutrition. The influential medical journal The Lancet has published two series on maternal and child undernutrition (in 2008 and 2013) which have been a catalyst for greater attention to the issue. The scholarly articles in these series have provided the body of evidence for the “first 1,000 days” approach, which targets interventions in the period from early pregnancy to a child’s second birthday.
A major goal of this approach is to break the inter-generational nature of undernutrition whereby an undernourished woman gives birth to a low-birthweight baby who is then at high risk of stunted linear growth and – if female – may grow up to give birth to a low birthweight baby and so on. Adolescent pregnancy is a high risk for perpetuating this cycle.
Efforts to prevent child undernutrition require a multipronged collaboration between a number of development sectors: health, agriculture, water and sanitation, education, women’s empowerment, and family planning.
The global commitment to nutrition is stronger than it has ever been. In 2013, 51 countries, businesses and civil society groups signed an agreement at the Nutrition for Growth Summit in London to make nutrition one of the world’s top development priorities.
Australia was one of more than 20 donor countries at the summit and subsequently joined the Scaling Up Nutrition (SUN) movement. Established in 2010, the SUN movement aims to unite governments, civil society, the United Nations, donors, researchers, businesses and citizens in a worldwide collective effort to end undernutrition.
Australia’s Department of Foreign Affairs and Trade’s independent review into Australian aid’s contribution to promote nutrition, published in February 2015, found Australia’s investment in nutrition nearly doubled from 2010 to 2012. However, despite a stunting rate in excess of 40% in Papua New Guinea, in the years 2010 and 2012 combined, Australia only allocated 0.1% of total PNG development aid to nutrition.
Back to the obesity epidemic in Pacific Islander adults. A number of studies have found that adults who had a low birthweight or were undernourished as young children are more likely to experience high blood pressure and obesity, and associated chronic diseases including diabetes and heart disease.
Therefore, investments now to reduce high rates of child undernutrition in Pacific countries may have long-term benefits in adulthood. Although Australia’s aid budget has suffered severe cuts in recent years, Foreign Minister Julie Bishop has highlighted the need for innovation in the aid program and allocated funds to a development innovation hub known as innovationXchange.
There could be no better target for innovation than to explore effective ways to reduce the double burden of child malnutrition and adult obesity in our less developed neighbours. This would contribute to significant economic and health benefits in the region. |
A laboratory-model Hall-effect spacecraft thruster was developed that utilizes bismuth as the propellant. Xenon was used in most prior Hall-effect thrusters. Bismuth is an attractive alternative because it has a larger atomic mass, a larger electron-impact-ionization cross-section, and is cheaper and more plentiful.
The design of this thruster includes multiple temperature-control zones and other features that reduce parasitic power losses. Liquid bismuth (which melts at a temperature of 271°C) is supplied by a temperature-controlled reservoir to a vaporizer. The vaporizer exhausts to an anode/gas distributor inside a discharge channel that consists of a metal chamber upstream of ceramic exit rings. In the channel, bismuth ions are produced through an electron impact ionization process and accelerated as in other Hall-effect thrusters. The discharge region is heated by the discharge and an auxiliary anode heater, which is required to prevent bismuth condensation at low power levels and at thruster start-up. A xenon discharge is also used for preheating the discharge channel, but an anode heater could provide enough power to start the bismuth discharge directly.
This work was done by James Szabo, Charles Gasdaska, Vlad Hruby, and Mike Robin of Busek Co., Inc. for Marshall Space Flight Center. For further information, contact Sammy Nabors, MSFC Commercialization Assistance Lead, at [email protected]. Refer to MFS-32440-1.
Heat, heat and heat. What does that mean to you? Well, in this article I will tell you all about heat and particles.
There are several ways heat energy can be transferred; the most basic are conduction, convection and radiation. The energy stored in the movement of particles is called thermal energy.
Before we start on heat, we have to learn about particles and how they pass heat from one to another. Picture a row of particles in a solid with a heat source underneath: the source passes its heat to the particle at the bottom, that particle gains energy, and the heat is passed on from particle to particle. Imagine this solid is ice. Once the particles heat up, the solid turns into a liquid; when they reach a certain temperature they break their bonds, and they keep heating up until they reach boiling point and turn into a gas. Once they are a gas, the particles are free from one another and have the most energy.
Thermal Energy and Temperature
Conduction! Heat energy can move from particle to particle by conduction. Metals are good conductors of heat, but non-metals and gases are usually poor conductors of heat. Poor conductors of heat are called insulators. Heat energy is conducted from the hot end of an object to the cold end. (Reference: BBC Bitesize)
Convection
Liquids and gases are fluids. The particles in these fluids can move from place to place. Convection occurs when particles with a lot of heat energy in a liquid or gas move and take the place of particles with less heat energy. Heat energy is transferred from hot places to cooler places by convection.
Liquids and gases expand when they are heated. This is because the particles in liquids and gases move faster when they are heated than they do when they are cold. As a result, the particles take up more volume. This is because the gap between particles widens, while the particles themselves stay the same size.
The liquid or gas in hot areas is less dense than the liquid or gas in cold areas, so it rises into the cold areas. The denser cold liquid or gas falls into the warm areas. In this way, convection currents that transfer heat from place to place are set up.
from BBC Bitesize
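For readers who like to experiment, here is a small toy model (my own sketch, not from BBC Bitesize) of how conduction passes energy from particle to particle along a row; the starting temperatures and sharing rate are arbitrary.

```python
# A toy model of conduction: a row of "particles" share heat with their
# neighbours each step, so heat spreads from the hot end to the cold end.
temps = [100.0] + [20.0] * 9   # one hot particle at one end, the rest cold
rate = 0.2                     # fraction of the difference shared per step

for step in range(50):
    new_temps = temps[:]
    for i in range(len(temps)):
        left = temps[i - 1] if i > 0 else temps[i]
        right = temps[i + 1] if i < len(temps) - 1 else temps[i]
        # Each particle drifts toward the average of its neighbours.
        new_temps[i] = temps[i] + rate * ((left + right) / 2 - temps[i])
    temps = new_temps

print([round(t, 1) for t in temps])  # the temperatures even out along the row
```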
In computer security, alphanumeric shellcode is shellcode that consists of, or assembles itself on execution into, entirely alphanumeric ASCII or Unicode characters such as 0-9, A-Z and a-z. This type of encoding was created by hackers to hide working machine code inside what appears to be text. This can be useful to avoid detection of the code and to allow the code to pass through filters that scrub non-alphanumeric characters from strings (in part, such filters were a response to non-alphanumeric shellcode exploits). A similar type of encoding is called printable code and uses all printable characters (0-9, A-Z, a-z, !@#%^&*() etc...). It has been shown that it is possible to create shellcode that looks like normal text in English.
Writing alphanumeric or printable code requires good understanding of the instruction set architecture of the machine(s) on which the code is to be executed. It has been demonstrated that it is possible to write alphanumeric code that is executable on more than one machine. |
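As a rough illustration of the kind of filter such encodings are designed to survive, here is a hedged sketch (my own example) of a naive sanitizer that strips non-alphanumeric characters; anything built entirely from the characters it allows passes through unchanged. The example payload string is arbitrary and is not working shellcode.

```python
import string

ALLOWED = set(string.ascii_letters + string.digits)

def scrub(data: str) -> str:
    """Drop every character outside 0-9, A-Z, a-z (a naive input filter)."""
    return "".join(ch for ch in data if ch in ALLOWED)

# Ordinary binary payloads are mangled by the filter...
print(scrub("\x90\x90\x31\xc0PYIIIIII"))   # non-alphanumeric bytes disappear
# ...but a string made only of letters and digits passes through untouched.
payload = "PYIIIIIIIIIIQZVTX30VX4AP0A3HH0A00ABAABTAAQ2AB2BB0BBXP8ACJJI"
assert scrub(payload) == payload
```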
Listed below are the Disciplinary Core Ideas (DCI) for Physical Science and bullet points for their specific grade band progression.
PS1.A: Structure and Properties of Matter
- Different properties are suited to different purposes.
- A great variety of objects can be built up from a small set of pieces.
- Different kinds of matter exist and many of them can be either solid or liquid, depending on temperature. Matter can be described and classified by its observable properties.
- Matter of any type can be subdivided into particles that are too small to see, but even then the matter still exists and can be detected by other means. A model showing that gases are made from matter particles that are too small to see and are moving freely around in space can explain many observations, including the inflation and shape of a balloon and the effects of air on larger particles or objects.
- The amount (weight) of matter is conserved when it changes form, even in transitions in which it seems to vanish.
- Measurements of a variety of properties can be used to identify materials. (Boundary: At this grade level, mass and weight are not distinguished, and no attempt is made to define the unseen particles or explain the atomic-scale mechanism of evaporation and condensation.)
- Substances are made from different types of atoms, which combine with one another in various ways. Atoms form molecules that range in size from two to thousands of atoms.
- Each pure substance has characteristic physical and chemical properties (for any bulk quantity under given conditions) that can be used to identify it.
- Gases and liquids are made of molecules or inert atoms that are moving about relative to each other.
- In a liquid, the molecules are constantly in contact with others; in a gas, they are widely spaced except when they happen to collide. In a solid, atoms are closely spaced and may vibrate in position but do not change relative locations.
- Solids may be formed from molecules, or they may be extended structures with repeating subunits (e.g., crystals).
- The changes of state that occur with variations in temperature or pressure can be described and predicted using these models of matter.
- Each atom has a charged substructure consisting of a nucleus, which is made of protons and neutrons, surrounded by electrons.
- The periodic table orders elements horizontally by the number of protons in the atom’s nucleus and places those with similar chemical properties in columns. The repeating patterns of this table reflect patterns of outer electron states.
- The structure and interactions of matter at the bulk scale are determined by electrical forces within and between atoms.
- Stable forms of matter are those in which the electric and magnetic field energy is minimized. A stable molecule has less energy than the same set of atoms separated; one must provide at least this energy in order to take the molecule apart.
Electronic Circuits and Design - a potpourri of basic electronic circuits, circuit ideas, and formulae for anyone undertaking electronic circuit design
http://www.radio-electronics.com/info/circuits/index.php
The exact configuration of an electronic circuit is not always easy to remember, and even then there are associated electronic circuit design formulae to calculate the various circuit values. This section of the Radio-Electronics.Com website contains information about basic electronic circuits and building blocks, along with the relevant formulae, to provide a unique reference on the web for anyone undertaking electronic circuit design. This section is organized by the chief component in the circuit. Thus a filter using an operational amplifier would come under the operational amplifier section, a transistor radio frequency amplifier would come under the transistors section, and a PIN diode attenuator would be found in the diodes section.
Resistor circuits
Resistors are the most widely used components in electronic circuits. Although very simple in concept, they are key to the operation of many circuits. They can be used in a variety of ways to produce the required results.
- Resistors in parallel
- Resistor attenuator circuits
Resistor capacitor (RC) circuits
RC or resistor capacitor circuits are used in a number of applications and may be used to provide simple frequency dependent circuits.
- Twin T notch filter
LC filter circuits
Using inductors and capacitors, a whole variety of filters can be designed and made. These include low pass, high pass and band pass filters.
- A basic filter overview
- Low pass LC filter
- High pass LC filter
- Band pass LC filter
Diode circuits
The diode is one of the most elementary semiconductor devices. It essentially allows current through the device in one direction. Using this facet of the diode there are many uses, but there are also other facets of its nature that enable it to be used in other applications as well.
- Simple PIN diode attenuator and switch
- Constant impedance pin diode attenuator
- Power supply current limiter
- Diode voltage multiplier
- Single balanced diode mixer
- Double balanced diode mixer
Transistor circuits
- Two transistor amplifier circuit with feedback
- Transistor active high pass filter
- Transistor current limiter for power supplies
SCR, Diac and Triac Circuits
- SCR over-voltage crowbar circuit
Operational amplifier circuits
Operational amplifiers are one of the main building blocks used in analogue electronics these days. They are not only easy to use, but they are plentiful, cheap and offer a very high level of performance.
- Operational amplifier basics
- Inverting amplifier
- High input impedance inverting amplifier
- Non-inverting amplifier
- High pass filter
- Low pass filter
- Band pass filter
- Variable gain amplifier
- Fixed frequency notch filter
- Twin T notch filter with variable Q
- Multi-vibrator oscillator
- Bi-stable multi-vibrator
- Comparator
- Schmitt trigger
Digital logic circuits
Logic circuits consisting of building blocks including AND, OR, NAND and NOR gates form the basis of today's digital circuitry that is used widely in electronics. Triggers, bi-stables, flip flops, etc. are also widely used and can be made up from the basic building blocks.
- Logic truth table
- Hints and tips on designing and laying out digital or logic circuits
- Using inverters to create other functions
- A divide by two frequency divider using a D-type flip-flop
- An R S flip flop using two logic gates
- An edge triggered R S flip flop using two D types
- An electronically controlled inverter using an exclusive OR gate
Electrostatic Discharge ESD
Electro Static Discharge (ESD) is important for anyone involved with electronics. Even small discharges that would go unnoticed in everyday life can cause large amounts of damage to electronic circuits. Find out all about it and how to ensure electronic circuits are not affected in our three page tutorial.
- Electrostatic Discharge ESD (3 pages)
Resistor attenuator circuits - for use in radio frequency circuits including receivers and transmitters, etc.
Attenuator circuits are used in a variety of radio frequency circuit design applications. The attenuators reduce the level of the signal, and this can be used to ensure that the correct radio signal level enters another circuit block such as a mixer or amplifier so that it is not overloaded. As such, attenuators are widely used by radio frequency circuit designers. While it is possible to buy ready made attenuators, it is also easy to make attenuators for many applications. Here a simple resistor network can be used to make attenuators that provide levels of attenuation up to figures of 60 dB and at frequencies of 1 GHz and more, provided that care is taken with the construction and the choice of components. One important feature that is required for radio frequency applications is that the characteristic impedance should be maintained. In other words, the impedance looking into and out of the attenuator should be matched to the characteristic impedance of the system.
T and Pi networks
There are two basic formats that can be used for resistive attenuators: the T and pi networks. Often there is little to choose between them and the choice is often down to the preference of the designer. As the name suggests, the "T" section attenuator is in the shape of the letter T, with two resistors in the signal line and one in the centre to ground.
T section attenuator (circuit diagram)
The two resistor values can be calculated very easily knowing the ratio of the input and output voltages, Vin and Vout respectively, and the characteristic impedance Ro. The pi section attenuator is in the form of the Greek letter pi and has one in-line resistor and a resistor to ground at the input and the output.
Pi section attenuator (circuit diagram)
Similarly the values for the pi section attenuator can be calculated.
Practical aspects
It is generally good practice not to attempt to achieve any more than a maximum of 20 dB attenuation in any one attenuator section. Even this is possibly a little high. It is therefore common practice to cascade several sections. When this is done the adjoining resistors can be combined. In the case of the T section attenuator this simply means the two series resistors can be added together. For the pi section attenuators there are parallel resistors. When making large value attenuators, great care must be taken to prevent the signal leaking past the attenuator and reaching the output. This can result from capacitive or inductive coupling and poor earth arrangements. To overcome these problems a good earth connection and careful layout, keeping the output and input away from one another, are required. It may also be necessary to place a screen between the different sections.
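The page above does not spell out the pad equations, so the following sketch uses the standard constant-impedance formulas for matched T and pi resistive pads (an assumption on my part, though the values it produces agree with the 50 ohm table that follows).

```python
def t_pad(atten_db: float, z0: float = 50.0):
    """Series and shunt resistors for a matched T attenuator (standard pad formulas)."""
    k = 10 ** (atten_db / 20.0)          # voltage ratio Vin/Vout
    r_series = z0 * (k - 1) / (k + 1)    # each series arm
    r_shunt = 2 * z0 * k / (k**2 - 1)    # centre leg to ground
    return r_series, r_shunt

def pi_pad(atten_db: float, z0: float = 50.0):
    """Shunt and series resistors for a matched pi attenuator (standard pad formulas)."""
    k = 10 ** (atten_db / 20.0)
    r_shunt = z0 * (k + 1) / (k - 1)     # each leg to ground
    r_series = z0 * (k**2 - 1) / (2 * k) # in-line resistor
    return r_shunt, r_series

# 10 dB, 50 ohm example: roughly (26.0, 35.1) and (96.2, 71.2),
# which agree with the 10 dB row of the table below.
print(t_pad(10), pi_pad(10))
```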
Using these attenuators a surprisingly good frequency response can be obtained. Non-inductive resistors are required to ensure the best performance, and using good printed circuit board techniques and surface mount resistors, good performance at frequencies in excess of 1 GHz is easy to achieve.
Table of resistor values for 50 ohm attenuators (resistor designations refer to the diagrams above):
Loss (dB)  R1    R2    R3    R4
1          2.9   433   870   5.8
2          5.7   215   436   11.6
3          8.5   142   292   17.6
4          11.3  105   221   23.8
5          14.0  82.2  179   30.4
6          16.6  66.9  151   37.3
7          19.1  55.8  131   44.8
8          21.5  47.3  116   52.8
9          23.8  40.6  105   61.6
10         26.0  35.1  96.2  71.2
11         28.0  30.6  89.2  81.7
12         29.9  26.8  83.5  93.2
13         31.7  23.6  78.8  106
14         33.4  20.8  74.9  120
15         34.9  18.4  71.6  136
16         36.3  16.3  68.8  154
17         37.6  14.4  66.5  173
18         38.8  12.8  64.4  195
19         39.9  11.4  62.6  220
20         40.9  10.1  61.1  248
Twin T notch filter - design and circuit considerations for a resistor capacitor (RC) twin T notch filter
The twin T circuit is very useful as a notch filter. Here the twin T provides a large degree of rejection at a particular frequency. This notch filter can be useful in rejecting unwanted signals that are on a particular frequency. One example may be to filter out unwanted mains hum at 50 or 60 Hz that may be entering a circuit. The response provided by the filter consists of a low level of attenuation away from the notch frequency. As signals move closer to the notch frequency, the level of attenuation rises, giving the typical notch filter response. In theory, at the notch frequency the level of attenuation provided by the twin T notch filter is infinite.
RC - Resistor Capacitor Twin T Notch Filter (circuit diagram)
The circuit for the twin T notch filter is shown above and can be seen to consist of three resistors and three capacitors. It operates by phase shifting the signals in the different legs and adding them at the output. At the notch frequency, the signals passing through each leg are 180 degrees out of phase and cancel out. In theory this provides a complete null of the signal. However in practice close tolerance components are required to achieve a good null. In common with other RC circuits, the RC twin T notch filter circuit has what may be termed a soft cut-off. The response of the notch circuit falls away slowly and affects a wide band of frequencies either side of the cut-off frequency. However very close to the cut-off frequency the response falls away very quickly, assuming that close tolerance components have been used. Calculation of the values for the circuit is very straightforward:
fc = 1 / (2 pi R C)
Where:
fc = cut off frequency in Hertz
pi = 3.142
R and C are the values of the resistors and capacitors as in the circuit
Filters overview - an overview of the types of filter and the various design considerations and parameters
Filters of all types are required in a variety of applications from audio to RF and across the whole spectrum of frequencies. As such, filters form an important element within a variety of scenarios, enabling the required frequencies to be passed through the circuit while rejecting those that are not needed. The ideal filter, whether it is a low pass, high pass, or band pass filter, will exhibit no loss within the pass band, i.e. the frequencies below the cut off frequency. Then above this frequency, in what is termed the stop band, the filter will reject all signals.
In reality it is not possible to achieve the perfect filter: there is always some loss within the pass band, and it is not possible to achieve infinite rejection in the stop band. Also there is a transition between the pass band and the stop band, where the response curve falls away, with the level of rejection rising as the frequency moves from the pass band to the stop band.
Filter types
There are four types of filter that can be defined: low pass, high pass, band pass and band reject filters. As the names indicate, a low pass filter only allows frequencies below what is termed the cut off frequency through. This can also be thought of as a high reject filter as it rejects high frequencies. Similarly a high pass filter only allows signals through above the cut off frequency and rejects those below the cut off frequency. A band pass filter allows frequencies through within a given pass band. Finally the band reject filter rejects signals within a certain band. It can be particularly useful for rejecting a particular unwanted signal or set of signals falling within a given bandwidth.
Types of filter (diagram)
Filter frequencies
A filter allows signals through in what is termed the pass band. This is the band of frequencies below the cut off frequency for the filter. The cut off frequency of the filter is defined as the point at which the output level from the filter falls to 50% (-3 dB) of the in band level, assuming a constant input level. The cut off frequency is sometimes referred to as the half power or -3 dB frequency. The stop band of the filter is essentially the band of frequencies that is rejected by the filter. It is taken as starting at the point where the filter reaches its required level of rejection.
Filter classifications
Filters can be designed to meet a variety of requirements. Although using the same basic circuit configurations, the circuit values differ when the circuit is designed to meet different criteria. In band ripple, fastest transition to the ultimate roll off, and highest out of band rejection are some of the criteria that result in different circuit values. These different filters are given names, each one being optimized for a different element of performance.
Butterworth: This type of filter provides the maximum in band flatness.
Bessel: This filter provides the optimum in-band phase response and therefore also provides the best step response.
Chebychev: This filter provides fast roll off after the cut off frequency is reached. However this is at the expense of in band ripple. The more in band ripple that can be tolerated, the faster the roll off.
Elliptical: This has significant levels of in band and out of band ripple, and as expected, the higher the degree of ripple that can be tolerated, the steeper it reaches its ultimate roll off.
LC low pass filter - the design considerations and formulae (formulas) for an LC (inductor capacitor) low pass filter
Low pass filters are used in a wide number of applications. Particularly in radio frequency applications, low pass filters are made in their LC form using inductors and capacitors. Typically they may be used to filter out unwanted signals that may be present in a band above the wanted pass band. In this way, this form of filter only accepts signals below the cut-off frequency. Low pass filters using LC components, i.e. inductors and capacitors, are arranged in either a pi or T network. For the pi section filter, each section has one series component and either side a component to ground.
The T network low pass filter has one component to ground and either side there is a series in-line component. In the case of a low pass filter the series component or components are inductors, whereas the components to ground are capacitors.
LC Pi and T section low pass filters (diagram)
There is a variety of different filter variants that can be used dependent upon the requirements in terms of in band ripple, rate at which final roll off is achieved, etc. The type used here is the constant-k, and this produces some manageable equations:
L = Zo / (pi x Fc) Henries
C = 1 / (Zo x pi x Fc) Farads
Fc = 1 / (pi x square root(L x C)) Hz
Where:
Zo = characteristic impedance in ohms
C = Capacitance in Farads
L = Inductance in Henries
Fc = Cut off frequency in Hertz
Further details
In order to provide a greater slope or roll off, it is possible to cascade several low pass filter sections. When this is done the filter elements from adjacent sections may be combined. For example, if two T section filters are cascaded and each T section has a 1 uH inductor in each leg of the T, these may be combined in the adjoining sections and a 2 uH inductor used. The choice of components for any filter, and in this case for a low pass filter, is important. Close tolerance components should be used to ensure that the required performance is obtained. It is also necessary to check on the temperature stability to ensure that the filter components do not vary significantly with temperature, thereby altering the performance. Care must be taken with the layout of the filter. This should be undertaken not just for the pass band frequencies, but more importantly for the frequencies in the stop band that may be well in excess of the cut off frequency of the low pass filter. Capacitive and inductive coupling are the main elements that cause the filter performance to be degraded. Accordingly the input and output of the filter should be kept apart. Short leads and tracks should be used, and components from adjacent filter sections should be spaced apart. Screens should be used where required, and good quality connectors and coaxial cable used at the input and output if applicable.
LC high pass filter - the design considerations and formulae (formulas) for an LC (inductor capacitor) high pass filter
High pass filters are used in a wide number of applications and particularly in radio frequency applications. For the radio frequency filter applications, the high pass filters are made from inductors and capacitors rather than using other techniques such as active filters using operational amplifiers, whose applications are normally in the audio range. High pass filters using LC components, i.e. inductors and capacitors, are arranged in either a pi or T network. As suggested by its name, the pi network has one series component, and either side of it there is a component to ground. Similarly the T network high pass filter has one component to ground and either side there is a series in-line component. In the case of a high pass filter the series component or components are capacitors, whereas the components to ground are inductors. In this way these filters pass the high frequency signals and reject the low frequency signals. These filters may be used in applications where there are unwanted signals in a band of frequencies below the cut-off frequency and it is necessary to pass the wanted signals in a band above the cut-off frequency of the filter.
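Before moving on to the high pass equations below, here is a short sketch (my own) that simply evaluates the constant-k low pass equations given above for a chosen cut-off frequency and characteristic impedance.

```python
import math

def constant_k_low_pass(fc_hz: float, z0: float = 50.0):
    """Constant-k low pass values: L = Zo/(pi*Fc), C = 1/(Zo*pi*Fc)."""
    inductance = z0 / (math.pi * fc_hz)       # Henries
    capacitance = 1 / (z0 * math.pi * fc_hz)  # Farads
    return inductance, capacitance

# Example: a 10 MHz cut-off in a 50 ohm system.
L, C = constant_k_low_pass(10e6)
print(f"L = {L*1e9:.1f} nH, C = {C*1e12:.1f} pF")
# Cross-check with Fc = 1/(pi*sqrt(L*C)):
print(f"Fc check = {1/(math.pi*math.sqrt(L*C))/1e6:.2f} MHz")
```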
LC Pi and T section high pass filters (diagram)
There is a variety of different filter variants that can be used dependent upon the requirements in terms of in band ripple, rate at which final roll off is achieved, etc. The type used here is the constant-k, and this produces some manageable equations:
L = Zo / (4 x pi x Fc) Henries
C = 1 / (4 x Zo x pi x Fc) Farads
Fc = 1 / (4 x pi x square root(L x C)) Hz
Where:
Zo = characteristic impedance in ohms
C = Capacitance in Farads
L = Inductance in Henries
Fc = Cut off frequency in Hertz
Further details
In order to provide a greater slope or roll off in the high pass filter, it is possible to cascade several filter sections. When this is done the filter elements from adjacent sections may be combined. For example, if two T section filters are cascaded and each T section has a 1 uH inductor in each leg of the T, these may be combined in the adjoining sections and a 2 uH inductor used. The choice of components for any filter, and in this case for a high pass filter, is important. Close tolerance components should be used to ensure that the required performance is obtained. It is also necessary to check on the temperature stability to ensure that the filter components do not vary significantly with temperature, thereby altering the performance. Care must be taken with the layout of the filter, especially when the filter is used for high frequencies. Capacitive and inductive coupling are the main elements that cause the filter performance to be degraded. Accordingly the input and output of the filter should be kept apart. Short leads and tracks should be used, and components from adjacent filter sections should be spaced apart. Screens should be used where required, and good quality connectors and coaxial cable used at the input and output if applicable.
LC band pass filter - the design considerations and formulae (formulas) for an LC (inductor capacitor) band pass filter
Band pass filters using LC components, i.e. inductors and capacitors, are used in a number of radio frequency applications. These filters enable a band of frequencies to be passed through the filter, while those in the stop band of the band pass filter are rejected. These filters are typically used where a small band of frequencies needs to be passed through the filter and all others rejected by the filter. Like the high pass filters and the low pass filters, there are two topologies that are used for these filters, namely the Pi and the T configurations. Rather than having a single element in each leg of the filter as in the case of the low pass and high pass filters, the band pass filter has a resonant circuit in each leg. These resonant circuits are either series or parallel tuned LC circuits.
LC Pi and T section band pass filters (diagram)
The equations below provide the values for the capacitors and inductors for a constant-k filter. As the filter is a band pass filter there are two cut off frequencies: one at the low edge of the pass band and the other at the top edge of the pass band.
L1 = Zo / (pi (f2 - f1)) Henries
L2 = Zo (f2 - f1) / (4 pi f2 f1) Henries
C1 = (f2 - f1) / (4 pi f2 f1 Zo) Farads
C2 = 1 / (pi Zo (f2 - f1)) Farads
Where:
Zo = characteristic impedance in ohms
C1 and C2 = Capacitance in Farads
L1 and L2 = Inductance in Henries
f1 and f2 = Cut off frequencies in Hertz
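The high pass and band pass equations above translate just as directly into code; the sketch below (my own) simply restates them for convenience.

```python
import math

def constant_k_high_pass(fc_hz: float, z0: float = 50.0):
    """High pass values as given above: L = Zo/(4*pi*Fc), C = 1/(4*Zo*pi*Fc)."""
    return z0 / (4 * math.pi * fc_hz), 1 / (4 * z0 * math.pi * fc_hz)

def constant_k_band_pass(f1_hz: float, f2_hz: float, z0: float = 50.0):
    """Band pass constant-k values as given above: returns (L1, L2, C1, C2)."""
    bw = f2_hz - f1_hz
    l1 = z0 / (math.pi * bw)
    l2 = z0 * bw / (4 * math.pi * f1_hz * f2_hz)
    c1 = bw / (4 * math.pi * f1_hz * f2_hz * z0)
    c2 = 1 / (math.pi * z0 * bw)
    return l1, l2, c1, c2

print(constant_k_high_pass(10e6))        # 10 MHz high pass in a 50 ohm system
print(constant_k_band_pass(9e6, 11e6))   # 9-11 MHz band pass in a 50 ohm system
```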
In the case of a band pass filter it is even more important as the circuit comprises six components rather than just three. As a result of this, close tolerance components should be used to ensure that the required performance is obtained. It is also necessary to check on the temperature stability to ensure that the filter components do not vary significantly with temperature, thereby altering the performance. Care must be taken with the layout of the filter, especially when the filter is used for high frequencies. Capacitive and inductive coupling are the main elements that cause the filter performance to be degraded. Accordingly the input and output of the filter should be kept apart. Short leads and tracks should be used, Components from adjacent filter sections should be spaced apart. Screens used where required, and good quality connectors and coaxial cable used at the input and output if applicable. Simple PIN diode switch - PIN diode attenuator and switch circuit using a single PIN diode For applications where the ultimate performance is not required a single PIN diode can be used. The circuit shown only requires a few components and is very simple to implement. Nevertheless it is able to act as a switch for radio frequency or RF applications and is adequate for many applications. When a positive potential is applied to the control point current, this forward biases the diode and as a result the radio frequency signal is able to pass through the circuit. When a negative bias is applied to the circuit, the diode become reverse biased and is effectively switched off. Under these conditions the depletion layer in the diode becomes wide and does not allow signal to pass. Simple PIN diode attenuator and switch Although in theory any diode could be used in this position, PIN diodes have a number of advantages as switches. In the first place they are more linear than ordinary PN junction diodes. This means that in their action as a radio frequency switch they do not create as many spurious products. Secondly when reverse biased and switched off, the depletion layer is wider than with an ordinary diode and this provides for greater isolation when switching. PIN diode attenuator - a constant impedance attenuator design for radio frequency or RF circuit design applications Electronically controllable PIN diode attenuators are often used in radio frequency or RF circuit designs. It is often necessary to be able to control the level of a radio frequency signal using a control voltage. It is possible to achieve this using a PIN diode attenuator circuit. Some circuits do not offer a constant impedance, whereas this PIN diode attenuator gives a satisfactory match. The PIN diode variable attenuator is used to give attenuation over a range of about 20 dB and can be used in 50 ohm systems. The inductor L1 along with the capacitors C4 and C5 are included to prevent signal leakage from D1 to D2 that would impair the performance of the circuit. The maximum attenuation is achieved when Vin is at a minimum. At this point current from the supply V+ turns the diodes D1 and D2 on effectively shorting the signal to ground. D3 is then reverse biased. When Vin is increased the diodes D1 and D2 become reverse biased, and D3 becomes forward biased, allowing the signal to pass through the circuit. 
PIN diode variable attenuator Typical values for the circuit might be: +V : 5 volts; Vin : 0 - 6 volts; D1 to D3 HP5082-3080 PIN diodes; R1 2k2; R2 : 1k; R3 2k7; L1 is self resonant above the operating frequency, but sufficient to give isolation between the diodes D1 and D2. These values are only a starting point for an experimental design, and are only provided as such. The circuit may not be suitable in all instances. Choice of diode Although in theory any diode could be used in this position, PIN diodes have a number of advantages as switches. In the first place they are more linear than ordinary PN junction diodes. This means that in their action as a radio frequency switch they do not create as many spurious products and additionally as an attenuator they have a more useful curve. Secondly when reverse biased and switched off, the depletion layer is wider than with an ordinary diode and this provides for greater isolation when switching. Power supply current limiter - a simple circuit for a power supply current limiter using two diodes and a resistor In any power supply there is always the risk that the output will experience a short circuit. Accordingly it is necessary to protect the power supply from damage under these circumstances. There are a number of circuits that can be used for power supply protection, but one of the simplest circuits uses just two diodes and an additional resistor. The circuit for the power supply current limiter uses a sense resistor placed in series with the emitter of the output pass transistor. Two diodes placed between the output of the circuit and the base of the pass transistor provide the current limiting action. When the circuit is operating within its normal operating range a small voltage exists across the series resistor. This voltage plus the base emitter voltage of the transistor is less than the two diode junction drops needed to turn on the two diodes to allow them to conduct current. However as the current increases so does the voltage across the resistor. When it equals the turn on voltage for a diode the voltage across the resistor plus the base emitter junction drop for the transistor equals two diode drops, and as a result this voltage appears across the two diodes, which start to conduct. This starts to pull the voltage on the base of the transistor down, thereby limiting the current that can be drawn. Basic power supply current limiting circuit The circuit of this diode current limiter for a power supply is particularly simple. The value of the series resistor can be calculated so that the voltage across it rises to 0.6 volts (the turn on voltage for a silicon diode) when the maximum current is reached. However it is always best to ensure that there is some margin in hand by limiting the current from the simple power supply regulator before the absolute maximum level is reached. Using in other circuits The same simple diode form of current limiting may be incorporated into power supply circuits that use feedback to sense the actual output voltage and provide a more accurately regulated output. If the output voltage sense point is taken after the series current sensing resistor, then the voltage drop across this can be corrected at the output. Power supply with feedback and current limiting This circuit gives far better regulation than the straight emitter follower regulator. 
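For the diode current limiter described above, the series sense resistor is simply chosen so that around 0.6 volts is dropped across it at the intended limit current. A minimal sketch with an assumed 1 A limit is shown below; the exact figures will depend on the diodes and pass transistor actually used, and some margin should be allowed.

def current_limit_sense_resistor(i_limit, v_turn_on=0.6):
    # Resistor that drops the diode turn-on voltage at the chosen limit current
    return v_turn_on / i_limit

# Assumed example: limit the supply at roughly 1 A
r_sense = current_limit_sense_resistor(1.0)
print(f"R_sense = {r_sense:.2f} ohms")
print(f"Dissipation at the limit = {0.6 * 1.0:.2f} W")  # choose a suitably rated part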
Also voltage drops in the series current limit sense resistor can be accounted for, provided that there is sufficient voltage drop across the series pass transistor in the power supply circuit. Finally the output voltage can be adjusted to give the required value using the variable resistor.
Summary
The diode form of current limiting can be incorporated into a power supply circuit very easily. Additionally it is cheap and convenient. However if superior performance is needed then a transistorized form of current limit may be used. This gives a sharper limiting that is more suitable for more exacting power supply requirements.
Diode voltage multiplier - a circuit using diodes that multiplies the incoming voltage
Within a power supply or other rectifier circuit it is possible to configure the diodes in such a way that they double, triple or more, the level of the incoming voltage. This type of voltage multiplier circuit finds uses in many applications where a low current, high voltage source is required. Although there are some variations on the basic circuit, the circuits shown here use a single winding on the transformer, one side of which can be grounded. Alternatively another AC source can be used. In this configuration the circuit is particularly convenient as the AC source does not need to be isolated from ground.
Diode voltage doubler circuit
In this voltage doubler circuit the first diode rectifies the signal and its output is equal to the peak voltage from the transformer rectified as a half wave rectifier. An AC signal via the capacitor also reaches the second diode, and in view of the DC block provided by the capacitor this causes the output from the second diode to sit on top of the first one. In this way the output from the circuit is twice the peak voltage of the transformer, less the diode drops. Variations of the basic circuit and concept are available to provide a voltage multiplier function of almost any factor. Applying the same principle of sitting one rectifier on top of another and using capacitive coupling enables a form of ladder network to be built up.
The voltage multiplier circuits are very useful. However they are normally suitable only for low current applications. As the voltage multiplication increases the losses increase. The source resistance tends to rise, and loading becomes an issue. For each diode in the chain there is the usual diode drop (normally 0.6 volts for a silicon diode), but the reactance of the capacitors can become significant, especially when mains frequencies of 50 or 60 Hz are used. High voltage, high value capacitors can be expensive and large, and this may place practical constraints on how large they can be made.
Diode single balanced mixer - a circuit of a diode single balanced mixer and its typical applications for radio frequency, RF circuits
Mixers are widely used for radio frequency or RF applications. The mixers used in this arena multiply the two signals entering the circuit together. (Note - audio mixers add signals together.) The multiplier type mixers used in radio frequency applications are formed using non-linear devices. As a result the two signals entering the circuit are multiplied together - the output at any given time is proportional to the product of the levels of the two signals entering the circuit at that instant. This gives rise to signals at frequencies equal to the sum and the difference of the frequencies of the two signals entering the circuit. One of the simpler mixer circuits is based around two diodes.
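As a simple numerical reminder of the mixing products just described, the fragment below lists the sum and difference frequencies for an assumed pair of inputs; the figures are arbitrary and chosen only so that the difference falls at the traditional 455 kHz intermediate frequency.

f_signal = 7.000e6   # assumed signal input in Hz
f_lo = 7.455e6       # assumed local oscillator in Hz

print(f"sum = {(f_signal + f_lo) / 1e3:.0f} kHz")
print(f"difference = {abs(f_lo - f_signal) / 1e3:.0f} kHz")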
This type of circuit, known as a single balanced diode mixer, provides rejection of the input signals at the output as a result of the fact that the two inputs are balanced. The circuit is only singly balanced and as a result it does not give isolation between the two input ports. This means that the signal from the local oscillator may leak onto the signal input line and this may give rise to inter-modulation distortion. However for many applications this circuit operates quite satisfactorily. Where this may be a problem then a double balanced mixer should be used.
The circuit of a diode single balanced mixer
The circuit has a typical conversion loss, i.e. the difference between the signal input and the output, of around 8 dB, although this depends upon the components used and the construction. The diodes should be as nearly matched as possible, and the transformer should be closely balanced for optimum rejection of the input signals at the output. Where the input signals are widely spaced in frequency, it is possible to utilize a variation of the basic single balanced diode mixer to good effect. The circuit, which is shown below, may be used in a variety of applications, for example where an audio signal needs to be modulated onto a radio frequency, RF, carrier. In the circuit the two signals are combined using C1 as a high pass filter, and the combination of RFC and C2 as a low pass filter. In this way the leakage between the two input ports is minimized. A further refinement is that a balance control is incorporated into the balanced mixer circuit. This is used to ensure optimum balance. For example when used for modulating an RF carrier, it can be used to minimize the level of the carrier at the output, thereby ensuring only the two sidebands are produced.
The circuit of a diode single balanced mixer with a balance control
Although this form of the single balanced diode mixer circuit does require a few more components, the performance is improved as the variable resistor enables much better balance to be achieved, and additionally there is some form of isolation between the two inputs.
Double balanced diode mixer - a circuit of a double balanced diode mixer and its typical applications for radio frequency, RF circuits
Radio frequency mixers such as the double balanced diode mixer are used, not for adding signals together as in an audio mixer, but rather for multiplying them together. When this occurs the output is a multiplication of the two input signals, and signals at new frequencies equal to the sum and difference frequencies are produced. Being a double balanced mixer, this type of mixer suppresses the two input signals at the output. In this way only the sum and difference frequencies are seen. Additionally the balancing also isolates the two inputs from one another. This prevents the signals from one input entering the output circuitry of the other and the resultant possibility of inter-modulation.
The circuit of a double balanced diode mixer
Typical performance figures for the circuit are that isolation between ports is around 25 dB, and the conversion loss, i.e. the difference between the signal input and output levels, is around 8 dB. Using typical diodes, the input level to the mixer on the local oscillator port is around 1 volt RMS or 13 dBm into 50 ohms. The isolation between the various ports is maximized if the coils are accurately matched so that a good balance is achieved. Additionally the diodes must also be matched.
Often they need to be specially selected to ensure that their properties closely match each other. In order to obtain the optimum performance the source impedances for the two input signals and the load impedance for the output should be matched to the required impedance. It is for this reason that small attenuators are often placed in the lines of the mixer. These are typically 3 dB, and although they do reduce the signal level they improve the overall performance of the mixer. These mixers may be constructed, but for many commercial pieces of equipment they are purchased in a manufactured form. These devices can have the required level of development and as a result their performance can be optimized. Although they are often not cheap to buy, their performance is often worth the additional expense.
Simple two transistor amplifier - a simple design for a two transistor amplifier with feedback
This electronic circuit design shows a simple two transistor amplifier with feedback. It offers a reasonably high input impedance while providing a low output impedance. It is an ideal transistor amplifier circuit for applications where a higher level of gain is required than that which would be provided by a single transistor stage.
Two transistor amplifier circuit with feedback
Av = (R4 + R5) / R4
The resistors R1 and R2 are chosen to set the base of TR1 to around the mid point. If some current limiting is required then it is possible to place a resistor between the emitter of TR2 and the supply.
Transistor high pass filter - a simple one transistor circuit to provide an active high pass filter
It is sometimes convenient to design a simple active high pass filter using one transistor. The transistor filter circuit given below provides a two pole filter with unity gain. Using just a single transistor, this filter is convenient to place in a larger circuit because it contains few components and does not occupy too much space. The active high pass transistor circuit is quite straightforward, using just a total of four resistors, two capacitors and a single transistor. The operating conditions for the transistor are set up in the normal way. R2 and R3 are used to set up the bias point for the base of the transistor. The resistor Re is the emitter resistor and sets the current for the transistor. The filter components are included in negative feedback from the output of the circuit to the input. The components that form the active filter network consist of C1, C2, R1 and the combination of R2 and R3 in parallel, assuming that the input resistance of the emitter follower circuit is very high and can be ignored.
Transistor active high pass filter circuit
C1 = 2 C2
R1 = R2 x R3 / (R2 + R3)
This is for values where the effect of the emitter follower transistor itself within the high pass filter circuit can be ignored, i.e.:
Re (B+1) >> R2 x R3 / (R2 + R3)
fo = 1.414 / (4 pi R1 C2)
Where:
B = the forward current gain of the transistor
fo = the cut-off frequency of the high pass filter
pi = the Greek letter pi and is equal to approximately 3.142
The equations for determining the component values provide a Butterworth response, i.e. maximum flatness within the pass-band at the expense of achieving the ultimate roll off as quickly as possible.
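A small sketch applying these relationships is given below. The resistor and capacitor values are assumptions chosen purely for illustration, and the emitter follower condition Re (B+1) >> R2 x R3 / (R2 + R3) should still be checked for the transistor actually used.

import math

def transistor_high_pass(r2, r3, c2):
    # R1 is R2 and R3 in parallel; C1 = 2 x C2 is assumed, as in the text above
    r1 = (r2 * r3) / (r2 + r3)
    fo = 1.414 / (4 * math.pi * r1 * c2)
    return r1, fo

# Assumed example values: R2 = 100k, R3 = 47k, C2 = 10 nF (so C1 = 20 nF)
r1, fo = transistor_high_pass(100e3, 47e3, 10e-9)
print(f"R1 (R2 in parallel with R3) = {r1 / 1e3:.1f} k ohms")
print(f"Cut-off frequency fo = {fo:.0f} Hz")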
This has been chosen because this form of filter suits most applications and the mathematics works out easily Over-voltage crowbar circuit - an over voltage crowbar protection circuit using a silicon controlled rectifier or SCR Power supplies are normally reliable, but if they fail then they can cause significant damage to the circuitry they supply on some occasions. The SCR over-voltage crowbar protection circuit described provides a very simple but effective method of protecting against the certain types of power supply failure. In most analogue power supply arrangements a control voltage is fed into a series regulating device such as a transistor. This controls the current and hence the output voltage. Typically the input voltage to this may be well in excess of the output voltage. If the series regulator transistor in the power supply fails and goes short circuit, then the full input voltage will appear on the circuitry that is being supplied and significant damage may result. To overcome this SCR over voltage crowbar circuits are widely used. These over-voltage protection circuits are easy to design, simple to construct and may prevent significant levels of damage in the unlikely event of a power supply failure. By looking at the voltages involved it is very easy to see why the inclusion of over-voltage protection is so important. A typical supply may provide 5 volts stabilized to logic circuitry. To provide sufficient input voltage to give adequate stabilization, ripple rejection and the like, the input to the power supply regulator may be in the region of 10 to 15 volts. Even 10 volts would be sufficient to destroy many chips used today, particularly the more expensive and complicated ones. Accordingly preventing this is of great importance. Circuit Most good bench power supplies include a form of over-voltage protection, but for those power supplies or for other applications where over voltage protection is required, a simple over voltage crowbar circuit can be built. It uses just four components: a silicon controlled rectifier or SCR, a zener diode, a resistor and a capacitor. SCR over-voltage crowbar circuit The SCR over voltage crowbar or protection circuit is connected between the output of the power supply and ground. The zener diode voltage is chosen to be slightly above that of the output rail. Typically a 5 volt rail may run with a 6.2 volt zener diode. When the zener diode voltage is reached, current will flow through the zener and trigger the silicon controlled rectifier or thyristor. This will then provide a short circuit to ground, thereby protecting the circuitry that is being supplied form any damage. As a silicon controlled rectifier, SCR, or thyristor is able to carry a relatively high current - even quite average devices can conduct five amps and short current peaks of may be 50 and more amps, cheap devices can provide a very good level of protection for small cost. Also voltage across the SCR will be low, typically only a volt when it has fired and as a result the heat sinking is not a problem. However it is necessary to ensure that the power supply has some form of current limiting. Often a fuse is ideal because the SCR will be able to clamp the voltage for long enough for it to blow. The small resistor, often around 100 ohms from the gate of the thyristor or SCR to ground is required so that the zener can supply a reasonable current when it turns on. It also clamps the gate voltage at ground potential until the zener turns on. 
The capacitor is present to ensure that short spikes to not trigger the circuit. Some optimization may be required in choosing the correct value although 0.1 microfarads is a good starting point. Limitations Although this power supply over-voltage protection circuit is widely used, it does have some limitations. Most of these are associated with the zener diode. The zener diode is not adjustable, and these diodes come with at best a 5% tolerance. In addition to this the firing voltage must be sufficiently far above the nominal power supply output voltage to ensure that any spikes that may appear on the line do not fire the circuit. When taking into account all the tolerances and margins the guaranteed voltage at which the circuit may fire can be 20 - 40% above the nominal dependent upon the voltage of the power supply. The lower the voltage the greater the margins needed. Often on a 5 volt supply there can be difficulty designing it so that the over-voltage crowbar fires below 7 volts where damage may be caused to circuits being protected. It is also necessary to ensure that there is some means of limiting the current should the over- voltage crowbar circuit fire. If not then further damage may be caused to the power supply itself. Often a fuse may be employed in the circuit. In some circuits a fuse is introduced prior to the series regulator transistor, and the SCR anode connected to the junction node where the output of the fuse is connected to the input of the series regulator. This ensures that the fuse will blow swiftly. Despite its drawbacks this is still a very useful circuit which can be used in a variety of areas. Operational amplifier basics - Overview of the operational amplifier or op-amp as a circuit building block Operational amplifiers are one of the workhorses of the analogue electronics scene. They are virtually the ideal amplifier, providing a combination of a very high gain, a very high input impedance and a very low output impedance. The input to the operational amplifier has differential inputs, and these enable the operational amplifier circuit to be used in an enormous variety of circuits. The circuit symbol for an operational amplifier consists simply of a triangle as shown below. The two inputs are designated by "+" and "-" symbols, and the output of the operational amplifier is at the opposite end of the triangle. Inputs from the "+" input appear at the output in the same phase, whereas signals present at the "-" input appear at the output inverted or 180 degrees out of phase. This gives rise to the names for the inputs. The "+" input is known as the non-inverting input, while the "-" input is the inverting input of the operational amplifier. Operational amplifier circuit symbol Often the power supply rails for the operational amplifier are not shown in circuit diagrams and there is no connection for a ground line. The power rails for the operational amplifier are assumed to be connected. The power for the operational amplifier is generally supplied as a positive rail and also a negative rail. Often voltages of +15V and -15 V are used, although this will vary according to the application and the actual chip used. The gain of the operational amplifier is very high. Figures for the levels of gain provided by an operational amplifier on its own are very high. Typically they may be upwards of 10 000. 
While these levels of gain may be too high for use on their own, the application of feedback around the operational amplifier enables the circuit to be used in a wide variety of applications, from very flat amplifiers, to filters, oscillators, switches, and much more.
Open loop gain
The gain of an operational amplifier is exceedingly high. Normally feedback is applied around the op-amp so that the gain of the overall circuit is defined and kept to a figure which is more usable. However the very high level of gain of the op-amp enables considerable levels of feedback to be applied to enable the required performance to be achieved. When measured, the open loop gain of an operational amplifier falls very rapidly with increasing frequency. Typically an op-amp may have an open loop gain of around 10^5, but this usually starts to fall very quickly. For the famous 741 operational amplifier, it starts to fall at a frequency of only 10 Hz.
Slew rate
With very high gains the operational amplifiers have what is termed compensation capacitance to prevent oscillation. This capacitance combined with the limited drive currents mean that the output of the amplifier is only able to change at a limited rate, even when a large or rapid change occurs at the input. This maximum speed is known as the slew rate. A typical general purpose device may have a slew rate of 10 V / microsecond. This means that when a large step change is placed on the input, the device would be able to provide an output 10 volt change in one microsecond. The figures for slew rate change are dependent upon the type of operational amplifier being used. Low power op-amps may only have a slew rate of a volt per microsecond, whereas there are fast operational amplifiers capable of providing slew rates of 1000 V / microsecond. The slew rate can introduce distortion onto a signal by limiting the frequency of a large signal that can be accommodated. It is possible to find the maximum frequency or voltage that can be accommodated. A sine wave with a frequency of f Hertz and amplitude V volts requires an operational amplifier with a slew rate of 2 x pi x f x V volts per second.
Offset null
One of the minor problems with operational amplifiers is that they have a small offset. Normally this is small, but it is quoted in the datasheets for the particular operational amplifier in question. It is possible to null this using an external potentiometer connected to the three offset null pins.
Inverting operational amplifier circuit - the use of an operational amplifier or op-amp in an inverting amplifier or virtual earth circuit
Operational amplifiers can be used in a wide variety of circuit configurations. One of the most widely used is the inverting amplifier configuration. It offers many advantages, being very simple to use and requiring just the operational amplifier integrated circuit and a few other components.
Basic circuit
The basic circuit for the inverting operational amplifier circuit is shown below. It consists of a resistor from the input terminal to the inverting input of the circuit, and another resistor connected from the output to the inverting input of the op-amp. The non-inverting input is connected to ground.
Basic inverting operational amplifier circuit
In this circuit the non-inverting input of the operational amplifier is connected to ground.
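As a quick numerical check of the slew rate relationship given a little earlier, the sketch below works out the highest frequency at which a full amplitude sine wave can be reproduced without slew limiting. The 0.5 V per microsecond figure is an assumption typical of an older general purpose device, and the output swing is likewise assumed.

import math

def full_power_bandwidth(slew_rate, v_peak):
    # Highest sine wave frequency before slew limiting: f = slew rate / (2 pi V)
    return slew_rate / (2 * math.pi * v_peak)

slew_rate = 0.5e6   # assumed 0.5 V per microsecond, expressed in volts per second
v_peak = 10         # assumed 10 volt peak output swing
print(f"Maximum full power frequency = {full_power_bandwidth(slew_rate, v_peak) / 1e3:.1f} kHz")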
As the gain of the operational amplifier itself is very high and the output from the amplifier is a matter of a few volts, this means that the difference between the two input terminals is exceedingly small and can be ignored. As the non-inverting input of the operational amplifier is held at ground potential this means that the inverting input must be virtually at earth potential (i.e. a virtual earth). As the input to the op-amp draws no current this means that the current flowing in the resistors R1 and R2 is the same. Using ohms law Vout /R2 = -Vin/R1. Hence the voltage gain of the circuit Av can be taken as: Av = - R2 / R1 As an example, an amplifier requiring a gain of ten could be built by making R2 47 k ohms and R1 4.7 k ohms. Input impedance It is often necessary to know the input impedance of a circuit. A circuit with a low input impedance may load the output of the previous circuit and may give rise to effects such as changing the frequency response if the coupling capacitors are not large. It is very simple to determine the input impedance of an inverting operational amplifier circuit. It is simply the value of the input resistor R1. This is because the inverting input is at earth potential (i.e. a virtual earth) and this means that the resistor is connected between the input and earth. High impedance inverting op amp circuit - a high input impedance version of the inverting operational amplifier or op-amp circuit The standard inverting amplifier configuration is widely used with operational amplifier integrated circuits. It has many advantages: being simple to construct; it offers the possibility of summation or mixing (in the audio sense) of several signals; and of course it inverts the signal which can be important in some instances. However the circuit does have some drawbacks which can be important on some occasions. The main drawback is its input impedance. To show how this can be important it is necessary to look at the circuit and take some examples. The basic circuit for the inverting operational amplifier circuit is shown below. It consists of a resistor from the input terminal to the inverting input of the circuit, and another resistor connected from the output to the inverting input of the op-amp. The non inverting input is connected to ground. Basic inverting operational amplifier circuit The gain for the amplifier can be calculated from the formula: Av = - R2 / R1 If a high gain of, for example 100, is required this means that the ratio of R2 : R1 is 100. It is good practice to keep the resistors in op amp circuits within reasonable bounds. In view of this the maximum value for R2 should be 1 M Ohm. This means that the input resistor and hence the input resistance to the amplifier circuit as a whole is 10 k Ohm. In some instances this may not be sufficiently high. To overcome this problem it is possible to modify the circuit, and add a couple of extra resistors. The feedback resistor R2 serves to limit the amount of feedback. The higher it is the less feedback, and hence the higher the gain. By adding a couple of additional resistors across the output to act as a potential divider and taking the resistor R2 from the centre point, the level of feedback can be reduced. The circuit for this configuration is shown below: High input impedance inverting operational amplifier circuit The gain for this amplifier can be calculated from the formula: Av = - R2 (R3 + R4) / (R1 x R4) Again the input resistance is equal to R1, but this can be made higher for the same gain. 
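The gain and input impedance relationships for the two inverting configurations can be compared with a short sketch. The resistor values below are assumptions for illustration only; they show a gain magnitude of around 100 being obtained while the input resistance stays at 100 k ohms.

def inverting_gain(r1, r2):
    # Standard inverting amplifier: Av = -R2 / R1, input impedance = R1
    return -r2 / r1, r1

def high_z_inverting_gain(r1, r2, r3, r4):
    # High input impedance variant: Av = -R2 (R3 + R4) / (R1 x R4), input impedance = R1
    return -(r2 * (r3 + r4)) / (r1 * r4), r1

av, z_in = inverting_gain(4.7e3, 47e3)
print(f"Standard circuit: Av = {av:.0f}, input impedance = {z_in / 1e3:.1f} k ohms")

# Assumed values: R1 = 100k, R2 = 470k, with R3 = 20k and R4 = 1k forming the output divider
av, z_in = high_z_inverting_gain(100e3, 470e3, 20e3, 1e3)
print(f"High impedance circuit: Av = {av:.0f}, input impedance = {z_in / 1e3:.0f} k ohms")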
Reminder It is worth mentioning at this point that for high levels of gain, the gain bandwidth product of the basic op amp itself may become a problem. With levels of gain of 100, the bandwidth of some operational amplifier ICs may only be around 3 kHz. Check the data sheet for the given chip being used before settling on the level of gain. Non-inverting operational amplifier circuit - the use of an operational amplifier or op-amp in a non-inverting amplifier circuit Operational amplifiers can be used in two basic configurations to create amplifier circuits. One is the inverting amplifier where the output is the inverse or 180 degrees out of phase with the input, and the other is the non-inverting amplifier where the output is in the same sense or in phase with the input. Both operational amplifier circuits are widely used and they find applications in different areas. When an operational amplifier or op-amp is used as a non-inverting amplifier it only requires a few additional components to create a working amplifier circuit. Basic circuit The basic non-inverting operational amplifier circuit is shown below. In this circuit the signal is applied to the non-inverting input of the op-amp. However the feedback is taken from the output of the op-amp via a resistor to the inverting input of the operational amplifier where another resistor is taken to ground. It is the value of these two resistors that govern the gain of the operational amplifier circuit. Basic non-inverting operational amplifier circuit The gain of the non-inverting circuit for the operational amplifier is easy to determine. The calculation hinges around the fact that the voltage at both inputs is the same. This arises from the fact that the gain of the amplifier is exceedingly high. If the output of the circuit remains within the supply rails of the amplifier, then the output voltage divided by the gain means that there is virtually no difference between the two inputs. As the input to the op-amp draws no current this means that the current flowing in the resistors R1 and R2 is the same. The voltage at the inverting input is formed from a potential divider consisting of R1 and R2, and as the voltage at both inputs is the same, the voltage at the inverting input must be the same as that at the non-inverting input. This means that Vin = Vout x R1 / (R1 + R2) Hence the voltage gain of the circuit Av can be taken as: Av = 1 + R2 / R1 As an example, an amplifier requiring a gain of eleven could be built by making R2 47 k ohms and R1 4.7 k ohms. Input impedance It is often necessary to know the input impedance of a circuit. The input impedance of this operational amplifier circuit is very high, and may typically be well in excess of 10^7 ohms. For most circuit applications this can be completely ignored. This is a significant difference to the inverting configuration of an operational amplifier circuit which provided only a relatively low impedance dependent upon the value of the input resistor. AC coupling In most cases it is possible to DC couple the circuit. However in this case it is necessary to ensure that the non-inverting has a DC path to earth for the very small input current that is needed. This can be achieved by inserting a high value resistor, R3 in the diagram, to ground as shown below. The value of this may typically be 100 k ohms or more. If this resistor is not inserted the output of the operational amplifier will be driven into one of the voltage rails. 
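A corresponding check of the non-inverting gain expression, using the resistor values quoted above, is shown below purely as an illustration.

def non_inverting_gain(r1, r2):
    # Av = 1 + R2 / R1 for the non-inverting configuration
    return 1 + r2 / r1

# Values quoted in the text above: R2 = 47k and R1 = 4.7k for a gain of eleven
print(f"Av = {non_inverting_gain(4.7e3, 47e3):.1f}")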
Basic non-inverting operational amplifier circuit with capacitor coupled input
When inserting a resistor in this manner it should be remembered that the capacitor-resistor combination forms a high pass filter with a cut-off frequency. The cut-off point occurs at the frequency where the capacitive reactance is equal to the resistance.
Operational amplifier high pass filter - a summary of operational amplifier or op-amp active high pass filter circuitry
Operational amplifiers lend themselves to being used for active filter circuits, including a high pass filter circuit. Using a few components they are able to provide high levels of performance. The simplest high pass filter circuit using an operational amplifier can be achieved by placing a capacitor in series with one of the resistors in the amplifier circuit as shown. The capacitor reactance increases as the frequency falls, and as a result this forms a CR high pass filter providing a roll off of 6 dB per octave below the break point. The cut-off frequency or break point of the filter can be calculated very easily by working out the frequency at which the reactance of the capacitor equals the resistance of the resistor. This can be achieved using the formula:
Xc = 1 / (2 pi f C)
where:
Xc is the capacitive reactance in ohms
pi is the Greek letter pi and equal to 3.142
f is the frequency in Hertz
C is the capacitance in Farads
Operational amplifier circuits with low frequency roll off
Two pole high pass filter
Although it is possible to design a wide variety of filters with different levels of gain and different roll off patterns using operational amplifiers, the filter described on this page will give a good sure-fire solution. It offers unity gain and a Butterworth response (the flattest response in band, but not the fastest to achieve ultimate roll off out of band).
Operational amplifier two pole high pass filter
Simple sure-fire design with Butterworth response and unity gain. The calculations for the circuit values are very straightforward for the Butterworth response and unity gain scenario. The correct level of damping is required for the circuit and the ratio of the resistor values determines this. When choosing the values, ensure that the resistor values fall in the region between 10 k ohms and 100 k ohms. This is advisable because the output impedance of the circuit rises with increasing frequency and values outside this region may affect the performance.
Operational amplifier band pass filter - a sure-fire operational amplifier or op-amp active band pass filter circuit
The design of band pass filters can become very involved even when using operational amplifiers. However it is possible to simplify the design equations while still being able to retain an acceptable level of performance of the operational amplifier filter for many applications.
Circuit of the operational amplifier active band pass filter
As only one operational amplifier is used in the filter circuit, the gain should be limited to five or less, and the Q to less than ten. In order to improve the shape factor of the operational amplifier filter one or more stages can be cascaded. A final point to note is that high stability and tolerance components should be used for both the resistors and the capacitors. In this way the expected performance of the operational amplifier filter will be obtained.
Op-amp variable gain amplifier - a variable gain circuit using an operational amplifier
A useful variable gain and sign amplifier can be constructed using a single operational amplifier.
The circuit uses a single operational amplifier, two resistors and a variable resistor.
Variable gain operational amplifier circuit
Using this circuit the gain can be calculated from the formula below. In this the variable "a" represents the proportion of the travel of the potentiometer, and it varies between "0" and "1". It is also worth noting that the input impedance is practically independent of the position of the potentiometer, and hence of the gain.
Op amp notch filter - the circuit and design considerations for a notch filter using an operational amplifier, four resistors and two capacitors
This operational amplifier notch filter circuit is simple yet effective, providing a notch on a specific fixed frequency. It can be used to notch out or remove a particular unwanted frequency. Having a fixed frequency, this operational amplifier, op amp, notch filter circuit may find applications such as removing fixed frequency interference like mains hum from audio circuits.
Active operational amplifier notch filter circuit
The circuit is quite straightforward to build. It employs both negative and positive feedback around the operational amplifier chip and in this way it is able to provide a high degree of performance. Calculation of the values for the circuit is very straightforward. The formula to calculate the resistor and capacitor values for the notch filter circuit is:
fnotch = 1 / (2 pi R C)
R = R3 = R4
C = C1 = C2
Where:
fnotch = centre frequency of the notch in Hertz
pi = 3.142
R and C are the values of the resistors and capacitors in Ohms and Farads
When building the circuit, high tolerance components must be used to obtain the best performance. Typically they should be 1% or better. A notch depth of 45 dB can be obtained using 1% components, although in theory it is possible for the notch to be of the order of 60 dB using ideal components. R1 and R2 should be matched to within 0.5% or they may be trimmed using parallel resistors. A further item to ensure the optimum operation of the circuit is to ensure that the source impedance is less than about 100 ohms. Additionally the load impedance should be greater than about 2 M Ohms. The circuit is often used to remove unwanted hum from circuits. Values for a 50 Hz notch would be: C1, C2 = 47 nF, R1, R2 = 10 k, R3, R4 = 68 k.
Op amp twin T notch filter - the circuit and design considerations for a twin T notch filter with variable Q using an operational amplifier
The twin T notch filter is a simple circuit that can provide a good level of rejection at the "notch" frequency. The simple RC notch filter can be placed within an operational amplifier circuit to provide an active filter. In the circuit shown below, the level of Q of the notch filter can be varied.
Active twin T notch filter circuit with variable Q
Calculation of the values for the circuit is very straightforward. The formula is the same as that used for the passive version of the twin T notch filter.
fc = 1 / (2 pi R C)
Where:
fc = centre frequency of the notch in Hertz
pi = 3.142
R and C are the values of the resistors and capacitors as in the circuit
The notch filter circuit can be very useful, and the adjustment facility for the Q can also be very handy. The main drawback of the notch filter circuit is that as the level of Q is increased, the depth of the null reduces. Despite this the notch filter circuit can be successfully incorporated into many circuit applications.
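The notch formula above is easily checked numerically. The sketch below confirms the 50 Hz component values quoted for the op amp notch filter; the same expression applies to the twin T version.

import math

def notch_frequency(r, c):
    # fnotch = 1 / (2 pi R C) with R = R3 = R4 and C = C1 = C2
    return 1 / (2 * math.pi * r * c)

# Values quoted above for a 50 Hz notch: R3 = R4 = 68k and C1 = C2 = 47 nF
print(f"fnotch = {notch_frequency(68e3, 47e-9):.1f} Hz")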
Operational amplifier multi-vibrator - a simple multi-vibrator oscillator circuit using a single op amp
It is possible to construct a very simple multi-vibrator oscillator circuit using an operational amplifier. The circuit can be used in a variety of applications where a simple square wave oscillator circuit is required. The circuit comprises two sections. The feedback to the capacitor is provided by the resistor R1, whereas hysteresis is provided by the two resistors R2 and R3.
Operational amplifier multi-vibrator oscillator
The time period for the oscillation is provided by the formula:
T = 2 C R1 loge (1 + 2 R2 / R3)
Although many multi-vibrator circuits may be provided using simple logic gates, this circuit has the advantage that it can be used to provide an oscillator that will generate a much higher output than that which could come from a logic circuit running from a 5 volt supply. In addition to this the multi-vibrator oscillator circuit is very simple, requiring just one operational amplifier (op amp), three resistors, and a single capacitor.
Operational amplifier bi-stable multi-vibrator - a circuit for a bi-stable multi-vibrator using an operational amplifier, op amp
It is easy to use an operational amplifier as a bi-stable multi-vibrator. An incoming waveform is converted into short pulses and these are used to trigger the operational amplifier to change between its two saturation states. To prevent small levels of noise triggering the circuit, hysteresis is introduced into the circuit, the level being dependent upon the application required. The operational amplifier bi-stable multi-vibrator uses just five components: the operational amplifier, a capacitor and three resistors.
Bi-stable multi-vibrator operational amplifier circuit
The bi-stable circuit has two stable states. These are the positive and negative saturation voltages of the operational amplifier operating with the given supply voltages. The circuit can then be switched between them by applying pulses. A negative going pulse will switch the circuit into the positive saturation voltage, and a positive going pulse will switch it into the negative state.
Waveforms for the bi-stable multi-vibrator operational amplifier circuit
It is very easy to calculate the points at which the circuit will trigger. The positive going pulses need to be greater than Vo-Sat through the potential divider, i.e. Vo-Sat x R3 / (R2 + R3), and similarly the negative going pulses will need to be greater than Vo+Sat through the potential divider, i.e. Vo+Sat x R3 / (R2 + R3). If they are not sufficiently large then the bi-stable will not change state.
Operational amplifier comparator - a simple comparator circuit using a single op amp
Comparator circuits find a number of applications in electronics. As the name implies they are used to compare two voltages. When one is higher than the other the comparator circuit output is in one state, and when the input conditions are reversed, then the comparator output switches. These circuits find many uses as detectors. They are often used to sense voltages. For example they could have a reference voltage on one input, and a voltage that is being detected on another. While the detected voltage is above the reference, the output of the comparator will be in one state. If the detected voltage falls below the reference then it will change the state of the comparator, and this could be used to flag the condition. This is but one example of many for which comparators can be used.
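Returning briefly to the two formulas above, the short sketch below evaluates the multi-vibrator period and the bi-stable trigger level for a set of assumed component values; the figures are purely illustrative.

import math

def multivibrator_period(c, r1, r2, r3):
    # T = 2 C R1 loge(1 + 2 R2 / R3) for the op amp multi-vibrator described above
    return 2 * c * r1 * math.log(1 + 2 * r2 / r3)

def bistable_trigger_level(v_sat, r2, r3):
    # Pulse level needed to switch the bi-stable: Vsat x R3 / (R2 + R3)
    return v_sat * r3 / (r2 + r3)

# Assumed values: C = 100 nF, R1 = 100k, R2 = R3 = 10k
t = multivibrator_period(100e-9, 100e3, 10e3, 10e3)
print(f"Multi-vibrator: T = {t * 1e3:.1f} ms, f = {1 / t:.0f} Hz")

# Assumed values: 13 V saturation voltage, R2 = 10k, R3 = 4.7k
print(f"Bi-stable trigger level = {bistable_trigger_level(13, 10e3, 4.7e3):.1f} V")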
In operation the op amp goes into positive or negative saturation dependent upon the input voltages. As the gain of the operational amplifier will generally exceed 100 000, the output will run into saturation when the inputs are only fractions of a milli-volt apart. Although op amps are widely used as comparators, special comparator chips are often used. These integrated circuits offer very fast switching times, well above those offered by most op-amps that are intended for more linear applications. Typical slew rates are in the region of several thousand volts per microsecond, although more often figures of propagation delay are quoted. A typical comparator circuit will have one of the inputs held at a given voltage. This may often be a potential divider from a supply or reference source. The other input is taken to the point to be sensed.
Circuit for a basic operational amplifier comparator
There are a number of points to remember when using comparator circuits. As there is no feedback the two inputs to the circuit will be at different voltages. Accordingly it is necessary to ensure that the maximum differential input is not exceeded. Again as a result of the lack of feedback the load will change. In particular, as the circuit changes state there will be a small increase in the input current. For most circuits this will not be a problem, but if the source impedance is high it may lead to a few unusual responses. The main problem with this circuit is that near the changeover point, even small amounts of noise will cause the output to switch back and forth. Thus near the changeover point there may be several transitions at the output and this may give rise to problems elsewhere in the overall circuit. The solution to this is to use a Schmitt trigger as described on another page.
Operational amplifier Schmitt trigger - a simple circuit using an op amp to produce a Schmitt trigger to remove multiple transitions on slow input signals
Although the simple comparator circuit using either an ordinary operational amplifier (op-amp) or a special comparator chip is often adequate, if the input waveform is slow or has noise on it, then there is the possibility that the output will switch back and forth several times during the switch over phase as only small levels of noise on the input will cause the output to change. This may not be a problem in some circumstances, but if the output from the operational amplifier comparator is being fed into fast logic circuitry, then it can often give rise to problems. The problem can be solved very easily by adding some positive feedback to the operational amplifier or comparator circuit. This is provided by the addition of R3 in the circuit below and the circuit is known as a Schmitt trigger.
Operational amplifier Schmitt trigger circuit
The effect of the new resistor (R3) is to give the circuit different switching thresholds dependent upon the output state of the comparator or operational amplifier. When the output of the comparator is high, this voltage is fed back to the non-inverting input of the operational amplifier or comparator. As a result the switching threshold becomes higher. When the output is switched in the opposite sense, the switching threshold is lowered. This gives the circuit what is termed hysteresis. The positive feedback applied within the circuit ensures that there is effectively a higher gain and hence the switching is faster. This is particularly useful when the input waveform may be slow.
However a speed up capacitor can be applied within the Schmitt trigger circuit to increase the switching speed still further. By placing a capacitor across the positive feedback resistor R3, the gain can be increased during the changeover, making the switching even faster. This capacitor, known as a speed up capacitor may be anywhere between 10 and 100 pF dependent upon the circuit. It is quite easy to calculate the resistors needed in the Schmitt trigger circuit. The centre voltage about which the circuit should switch is determined by the potential divider chain consisting of R1 and R2. This should be chosen first. Then the feedback resistor R3 can be calculated. This will provide a level of hysteresis that is equal to the output swing of the circuit reduced by the potential divide formed as a result of R3 and the parallel combination of R1 and R2. Logic gate truth table - used for AND, NAND, OR, NOR and exclusive OR functions in electronic logic gate circuits Logic circuits form the very basis of digital electronics. Circuits including the AND, NAND, OR, NOR and exclusive OR gates or circuits form the building blocks on which much of digital electronics is based. The various types of electronic logic gates that can be used have outputs that depend upon the states of the two (or more) inputs to the logic gate. The two main types are AND and OR gates, although there are logic gates such as exclusive OR gates and simple inverters. For the explanations below, the logic gates have been assumed to have two inputs. While two input gates are the most common, many gates that possess more than two inputs are used. The logic in the explanations below can be expanded to cover these multiple input gates, although for simplicity the explanations have been simplified to cover two input cases. AND and NAND gates An AND gate has an output that is a logical "1" or high when a "1" is present at both inputs. In other words if a logic gate has inputs A and B, then the output to the circuit will be a logical "1" when A AND B are at level "1". For all other combinations of input the output will be at "0". A NAND gate is simply an AND gate with its output inverted. In other words the output is at level "0" when A AND B are at "1". For all other states the output is at level "1". OR and NOR gates For an electronic OR gates the output is at "1" when the input at either A or B is at logical "1". In other words only one of the inputs has to be at "1" for the output to be set to "1". The output remains at "1" even if both inputs are at "1". The output only goes to "0" if no inputs are at "1". In just the same way that a NAND gate is an AND gate with the output inverted, so too the NOR gate is an OR gate with its output inverted. Its output goes to "0" when either A OR B is at logical "1". For all other input states the output of the NOR gate goes to "1". Exclusive OR One other form of OR gate that is often used is known as an exclusive OR gate. As the name suggests it is a form of OR gate, but rather than providing a "1" at the output for a variety of input conditions as in the case of a normal OR gate, the exclusive OR gate only provides a "1" when one of its inputs is at "1", and not both (or more than one in the case of a gate with more than two inputs). Inverter The final form of gate, if indeed it could be categorized as a gate is the inverter. As the name suggests this circuit simply inverts the state of the input signal. 
For an input of "0" it provides an output of "1" and for an input of "1", it provides an output of "0". Although very simple in its operation, these circuits are often of great use, and accordingly they are quite widely used.
Logic gate truth table
A B | AND NAND OR NOR Ex OR
0 0 |  0    1   0   1    0
1 0 |  0    1   1   0    1
0 1 |  0    1   1   0    1
1 1 |  1    0   1   0    0
Digital circuit tips - guidance and hints and tips on using digital logic circuits
Digital logic circuits are widely used in today's electronics. These circuits are used for a very wide variety of applications, from simple logic circuits consisting of a few logic gates, through to complicated microprocessor based systems. Whatever the form of digital logic circuit, there are a number of precautions that should be observed when designing, and also when undertaking the circuit board layout. If the circuit is correctly designed and constructed then problems in the performance can be avoided.
Decoupling
One of the main points to ensure is that the power rails are adequately decoupled. As the logic circuits switch very fast, switching spikes appear on the rails and these can in turn appear on the outputs of other circuits. In turn this can cause other circuits to "fire" when they would not normally be intended to do so. To prevent this happening all chips should be decoupled. In the first instance there should be a large capacitor at the input to the board, and then each chip should be individually decoupled using a smaller capacitor. The value of the capacitor will depend upon the type of logic being used. The speed and current consumption will govern the size of capacitor required, but typically a 22 nF capacitor may be used. For chips running with very low values of current a smaller capacitor may be acceptable, but be aware that even low current logic families tend to switch very fast these days and this can place large voltage spikes onto the rails. Some manufacturing companies suggest in their codes of practice that a proportion of the chips should be decoupled. While this may be perfectly acceptable, the safest route is to decouple each chip.
Earthing
The ground lines in a logic circuit are of great importance. By providing an effective ground line, problems such as ringing, spikes and noise can be reduced. In many printed circuit boards a ground plane is used. This may be the second side of a double sided board, or in some cases an internal layer in a multilayer board. By having a complete, or nearly complete, layer in the board, it is possible to take any decoupling or earth points to the plane using the shortest possible leads. This reduces the inductance and makes the connection more effective. With the sharp edges, and the inherent high frequencies that are present, these techniques are important and can improve the performance. For the simpler circuits that may be made using pin and wire techniques, good practice is still as important, if not more so. Earth loops should be avoided, and earth wires should be as thick as reasonably possible. A little planning prior to constructing the circuit can enable the leads to be kept as short as possible.
General layout
The layout of a digital logic board can have a significant effect on its performance. With edges of waveforms being very fast, the frequencies that are contained within the waveforms are particularly high. Accordingly leads must be kept as short as reasonably possible if the circuit is to be able to perform correctly.
Indeed many high end printed circuit board layout packages contain software that simulates the effects of the leads in the layout. These software packages can be particularly helpful when board or system complexity dictates that lead lengths greater than those that would normally be needed are required to enable the overall system to be realized. However for many instances this level of simulation is not required, and lead lengths can be kept short. Unused inputs In many circuits using logic ICs, inputs may be left open. This can cause problems. Even though they normally float high, i.e. go to the "1" state, it is wise not to leave them open. Ideally inputs to gates should be taken to ground, or if they need a logical "1" at the input they should be taken to the rail, preferably though a resistor. In many designs, spare gates may be available on the board. The input gates to these circuits should not be left floating as they have been known to switch and cause additional spikes on the rails, etc. It is best practice to take the inputs of these gates to ground. In this way any possibility of them switching in a spurious manner will be removed. Summary At first sight digital logic circuits may not appear to need all the care and attention given to a radio frequency (RF) circuit, but the speed of some of the edges on the waveform transitions mean that very high frequencies are contained within them. To ensure that the optimum performance is obtained, good layout is essential. Obeying a few simple rules can often ensure that the circuit operates correctly Logic NAND / NOR Conversions - using inverters to enable logic NAND / AND gates and NOR / OR gates to provide alternative functions It often happens on a logic circuit board that an NAND gate and a few inverters may be available, whereas in reality an NOR function is required. If this occurs then all is not lost. It is still possible to create an OR function from an AND / NAND gate and inverters, or an AND gate from a NOR / OR function. The diagram below gives some of the conversions. As an example it can be seen that a NOR gate is the same as an AND gate with two inverters on the input. It is then possible to add inverters to create the function that is required. AND Gate and OR Gate Equivalents These simple conversions can be used to save adding additional logic circuits into circuits. By using chips with spare gates, it is often possible to save adding additional chips, and thereby save cost and board space. D-type frequency divider - using a logic D-type flip flop electronic circuit to provide a frequency division of two The D-type logic flip flop is a very versatile circuit. It can be used in many areas where an edge triggered circuit is needed. In one application this logic or digital circuit provides a very easy method of dividing an incoming pulse train by a factor of two. The divide by two circuit employs one logic d-type element. Simply by entering the pulse train into the clock circuit, and connecting the Qbar output to the D input, the output can then be taken from the Q connection on the D-type. D-type frequency divided by two circuits The circuit operates in a simple way. The incoming pulse train acts as a clock for the device, and the data that is on the D input is then clocked through to the output. To see exactly how the circuit works it is worth examining what happens at each stage of the waveforms shown below. Take the situation when the Q output is a level '1'. This means that the Qbar output will be at '0'. 
This data is clocked through to the Q output on the next positive-going edge of the incoming pulse train on the clock input. At this point the output changes from a '1' to a '0'. At the next positive-going clock pulse, the data on the Q-bar output is again clocked through. As it is now a '1' (the opposite of the Q output), this is transferred to the output, and the output again changes state.

D-type frequency divide-by-two circuit

It can be seen that the output of the circuit only changes state on the positive-going edges of the incoming clock stream. Each positive edge occurs once every cycle, but as the output of the D-type requires two changes to complete a cycle, the output from the D-type circuit changes at half the rate of the incoming pulse train. In other words it has been divided by two.

There are some precautions when using this type of circuit. The first is that the pulse train should have sharp edges. If the rising edges are insufficiently sharp then there may be problems with the circuit operating as it should. If this is the case, the problem can easily be overcome by placing an inverter before the clock input. This has the effect of sharpening the edges on the incoming signal.

R-S flip flop circuit - two logic or digital circuits for an R-S flip flop, one using NAND gates and the other using NOR gates

R-S flip flops find uses in many applications in logic or digital electronic circuitry. They provide a simple switching function whereby a pulse on one input line of the flip flop sets the circuit in one state. Further pulses on this line have no effect until the R-S flip flop is reset. This is accomplished by a pulse on the other input line. In this way the R-S flip flop is toggled between two states by pulses on different lines. Although chips are available with R-S functions in them, it is often easier to create an R-S flip flop from spare gates that may already be available on the board, or on a breadboard circuit using a chip that may be to hand. To make an R-S flip flop simply requires either two NAND gates or two NOR gates.

Using two NAND gates, an active-low R-S flip flop is produced. In other words, low-going pulses activate the flip flop. As can be seen from the circuit below, the two incoming lines are applied, one to each gate. The other input of each NAND gate is taken from the output of the other NAND gate. It can be seen from the waveform diagram that a low-going pulse on input A of the flip flop forces the outputs to change, C going high and D going low. A low-going pulse on input B then changes the state, with C going low and D going high.

An R-S flip flop using two NAND gates
The waveforms for an R-S flip flop

The circuit for the NOR version is exceedingly similar and performs the same basic function. However, using the NOR gate version of the R-S flip flop, the circuit is an active-high variant. In other words, the input signals need to go high to produce a change on the output. This may determine the choice of integrated circuit that is used. Although the NAND gate version is probably more widely used, there are many instances where the NOR gate circuit is of value.

An R-S flip flop using two NOR gates
The waveforms for the NOR gate R-S flip flop

These circuits are widely used in many electronic logic circuit applications. They are also contained within many integrated circuits, where they form a basic building block. As such the R-S flip flop is an exceedingly popular circuit.
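To make the behaviour of the NAND-gate R-S flip flop concrete, here is a short Python sketch that simulates the cross-coupled pair of gates. It is purely illustrative and not part of the original circuit description: the nand and rs_latch helpers are invented for this example, and the input sequence simply replays the set / hold / reset behaviour shown in the waveform diagrams.

```python
# Minimal sketch (illustrative only) of the active-low NAND-gate R-S flip flop.
# Pulling S or R low sets or resets the latch; holding both at 1 keeps the state.

def nand(a: int, b: int) -> int:
    """2-input NAND gate."""
    return 0 if (a and b) else 1

def rs_latch(s: int, r: int, q: int, q_bar: int):
    """Settle the cross-coupled NAND latch for active-low inputs s and r."""
    for _ in range(2):                      # two passes let the feedback settle
        q, q_bar = nand(s, q_bar), nand(r, q)
    return q, q_bar

q, q_bar = 0, 1                             # arbitrary starting state
for s, r in [(0, 1), (1, 1), (1, 0), (1, 1)]:
    q, q_bar = rs_latch(s, r, q, q_bar)
    print(f"S={s} R={r} -> Q={q} Qbar={q_bar}")
# The low pulse on S sets Q=1, the state holds while S=R=1,
# and the low pulse on R clears Q back to 0.
```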
One useful application for a simple R-S flip flop is as a switch de-bounce circuit. When any mechanical switch makes or breaks contact, the connection will make and break several times before the full connection is made or broken. While for many applications this may not be a problem, it is when the switch interfaces to logic circuitry. Here a series of pulses will pass into the circuit, each one being captured and forming a pulse. Dependent upon the circuit, this may appear as a series of pulses, falsely triggering circuits ahead of time.

An R-S flip flop used as a de-bounce circuit

It is possible to overcome this problem using a simple R-S flip flop. By connecting the switch as shown below, the flip flop will change on the first sign of contact being made. Further pulses will not alter the output of the circuit. Only when the switch is turned over to the other position will the circuit revert to the other state. In this way a simple two-gate circuit can save the problems of de-bouncing the switch in other ways.

Edge triggered flip flop - the circuit for an edge triggered R-S flip flop using two D-types

While the simpler R-S flip flop using two electronic logic gates is quite adequate for most purposes, there are instances where an edge-triggered one may be needed. For these instances, this circuit provides a simple and effective manner of implementing this electronic circuit function.

Edge-triggered R-S flip flop

When there is a low-to-high transition on the set input to the circuit on CK1, this sets the Q1 output high. A low-to-high transition on CK2 then sets Q1 low. This type of circuit may have a number of applications. One could be as a phase detector in a phase locked loop. The two signals will be seeking either to set or to reset the circuit, and the length of time that Q is high will be dependent upon the phase difference between the two signals.

Electronically programmable inverter - a simple circuit enabling an invert / non-invert function to be switched using an exclusive OR gate

This electronic circuit is particularly elegant in its simplicity. Using a single exclusive OR gate, it provides the ability to either invert or not invert a logic signal. From the truth table for the exclusive OR function it can be seen that when there is a low on one input to the exclusive OR gate, the signal on the other input is passed through the circuit and not inverted. When the signal on one input is a high, then the signal on the other is inverted at the output.

Exclusive OR truth table:

A B | Output
0 0 |   0
0 1 |   1
1 0 |   1
1 1 |   0

As seen from the electronic circuit below, it consists of an exclusive OR gate, a pull-up resistor and a switch. The control line could come from an external source such as another gate. In this case there would be no need for the pull-up resistor, and the circuit would simply consist of the single gate held within the integrated circuit.

Programmable inverter using an exclusive OR gate

Electro Static Discharge (ESD) tutorial - a tutorial or summary about the basics of Electro Static Discharge, ESD

This tutorial is in three pages which address the different topics:
- ESD and how it arises
- The sensitivity of electronics to ESD
- Overcoming ESD

Electro Static Discharge, or ESD, awareness is particularly important for anyone associated with electronics. As integrated circuits become more compact and feature sizes shrink, active devices as well as some passive devices are becoming more prone to damage by the levels of static that exist.
To combat its effects, industry is spending many millions of pounds to prevent damage to electronic components from the effects of static. Anti-static areas using protective anti-static workbenches, as well as measures for ensuring people are not carrying static, are all used. Using what are termed ESD PAs, or Electrostatic Discharge Protected Areas, the destructive effects of static on electronic equipment during manufacture can be virtually removed.

Although awareness has grown considerably in recent years, the problem has existed for a long time. It came to light in a major way with the introduction of the first MOSFET devices. In view of the very high gate impedances involved, it was found that they were easily damaged. Originally it was thought that only devices such as MOSFETs were at risk, but studies soon revealed that far more damage was being done than had originally been imagined. The problem also became more acute as feature sizes on ICs dropped and they became more prone to damage.

What is static?

Static electricity is a natural phenomenon which occurs as part of everyday life. Its effects can often be felt when touching a metal door handle having walked across a nylon carpet. Another effect can be seen when hair stands up after it has been combed. The most dramatic effect is lightning. Here the scale is many orders of magnitude greater than that seen in and around the home. Colossal power is dissipated in every strike, and its effects can be heard for many miles around.

Static is created when there is movement. When objects rub together there is friction, and this causes the surfaces to interact. An excess of electrons appears on one surface while there is a deficiency on the other. The surface with the excess of electrons becomes negatively charged, whereas the surface with the deficit becomes positively charged. These charges will try to flow and neutralize the charge difference. However, as many substances exhibit a very high resistance, these charges can remain in place for a very long time.

Tribo-electric series

The size of the charge which is generated is determined by a variety of factors. One is obviously the conductivity of the two materials, and whether the charge between them can leak away. However, one of the major influences is the materials themselves and their position in what is called the tribo-electric series. The position in this series of the two materials rubbing against one another governs the size of the charge and the relative polarities. The further apart they are in the series, the greater the charge. The material that is higher up the series will receive the positive charge, whereas the one lower in the series will receive the negative charge. Materials such as human hair, skin and other natural fibers are higher up the series and tend to receive positive charges, whereas man-made fibers, together with materials like polythene, PVC and even silicon, are towards the negative end. This means that when combing hair with a man-made plastic comb, the hair will receive a positive charge and the comb will become negative.

The series runs roughly as follows, from positive charge to negative charge: skin, hair, wool, silk, paper, cotton, wood, rubber, rayon, polyester, polythene, PVC, Teflon.

Practical examples

One of the most commonly visible examples of generating charge is walking across a room. Even this everyday occurrence can generate some surprisingly high voltages. Walking on an ordinary vinyl floor might generate a voltage in the region of 10 kV.
Walking on a nylon carpet is much worse, with voltages in the region of 30 kV to be expected. Other actions can also generate very high voltages. For example, moving a polythene bag can generate voltages of around 10 kV. These voltages seem very high, but they usually pass unnoticed. The smallest discharge that can be felt is around 5 kV, and even then a discharge of this magnitude may only be felt on occasions. The reason is that even though the resulting peak currents may be very high, they only last for a very short time and the body does not detect them.

Effects on electronics

With most electronic ICs and components being designed to operate at voltages of 5 V or less, it is hardly surprising that these discharges can cause some damage. The next page in this tutorial looks more closely at the discharges and the way in which they cause damage to electronics. The final page looks at ways of protecting against static discharges of this nature.

Electro Static Discharge (ESD) tutorial - a tutorial or summary about the basics of Electro Static Discharge, ESD, and the effects it has on electronic components and electronic circuits

This tutorial is in three pages which address the different aspects of ESD:
- ESD and how it arises
- The sensitivity of electronics to ESD
- Overcoming ESD

ESD can have disastrous effects on electronic components. With ICs operating from supply voltages of 5 V and less these days, and with feature sizes measured in fractions of a micron, the static charges that go unnoticed in everyday life can easily destroy a chip. Worse still, these effects may not destroy the chip instantly, but leave a defect waiting to cause a problem later in the life of the equipment. In view of their sensitivity to static, most semiconductor devices today are treated as static sensitive devices (SSDs). To prevent damage they must be handled in anti-static areas, often called ESDPAs (Electrostatic Discharge Protected Areas). Within these areas a variety of precautions are taken to ensure that static is dissipated and that the static sensitive devices do not experience any static discharges. Benches with dissipative surfaces, anti-static flooring, wrist straps for operators and many more items all form part of these anti-static areas.

Sensitivity

Some electronic devices are more sensitive to ESD than others. However, to put the problem in perspective it is worth relating the levels of static to supply voltages. One would not consider applying a voltage of even fifty volts to a logic device, yet static voltages of several kilovolts are often applied to them by careless handling. The most sensitive devices are generally those which include FETs. These devices have very high impedances which do not allow the charge to dissipate in a controlled fashion. However, this does not mean that bipolar devices are immune from damage. Standard CMOS chips can be damaged by static voltages of as little as 250 V. These include the 74HC and 74HCT logic families, which are widely used in many designs as "glue logic" because of their lower current consumption. However, many of the newer microprocessors and LSI chips use very much smaller feature sizes and cannot withstand anything like these voltages, making them very sensitive to ESD. Many new devices would be destroyed by operating them with a supply voltage of 5 V, and they are correspondingly more susceptible to damage from ESD. Logic devices are not the only devices requiring anti-static precautions to be taken.
GaAs FETs, which are used for RF applications, are very susceptible to damage and can be destroyed by static voltages as low as 100 V. Other forms of discrete FETs are also affected by ESD. MOSFETs, which are again often used for many RF applications, are very sensitive. Even ordinary bipolar transistors can be damaged by potentials of around 500 V. This is particularly true of the newer transistors, which are likely to have much smaller internal geometries to give higher operating frequencies. This is only a broad indication of a very few of the ESD susceptibility levels. However, it indicates that all semiconductor devices should be treated as static sensitive devices (SSDs).

It is not only semiconductor devices that are treated as SSDs these days. In some areas even passive components are starting to be treated as static sensitive. With the ever-quickening trend towards miniaturization, individual components are becoming much smaller. This makes them more sensitive to the effects of damage from ESD. A large discharge through a very small component may cause overheating, or breakdown in the component.

Discharge mechanisms

The way in which an electrostatic discharge takes place is dependent on a large number of variables, most of which are difficult to quantify. The level of static which is built up varies according to the materials involved, the humidity of the day, and even the size of the person. Each person represents a capacitor on which charge is held. The average person represents a capacitor of about 300 pF, but this will vary greatly from one person to the next. The way in which the discharge takes place also varies. Often the charge will be dissipated very quickly: typically in less than a hundred nanoseconds. During this time the peak current can rise to as much as twenty or thirty amps. The peak current and the time for the discharge are dependent upon a wide variety of factors. However, if a metal object is used, like a pair of tweezers or thin-nosed pliers, the current peak is higher and is reached in a shorter time than if the discharge takes place through a finger. This is because the metal provides a much lower resistance path for the discharge. However, whatever the means of the discharge, the same amount of charge will be dissipated.

Failure mechanisms

The way in which ICs fail as a result of ESD also varies, and it depends upon a number of factors, from the way in which the charge is dissipated to the topology within the IC. One of the most obvious ways in which an IC can fail as a result of ESD occurs when the static charge, present as a very high voltage, gives rise to a high peak current causing burn-out. Even though the current passes for a very short time, the minute sizes within ICs can mean that the small interconnecting link wires or the devices in the chip itself can be fused by the amount of heat dissipated. In some instances the connection or component may not be completely destroyed; instead it may only be partly destroyed. When this happens the device will continue to operate and may have no detectable reduction in its performance. At other times there may be a slight degradation in operation. This is particularly true of analogue devices, where small fragments of material from the area of damage can spread over the surface of the chip. These may bridge, or partially bridge, other components in the chip, causing the performance to be altered or degraded.
When damage has been caused to the device but it still remains operational, the defect leaves it with what is termed a 'latent defect', which may lead to a failure later in its life. Subsequent current surges resulting from turning the equipment on, or even from normal operation, may stress the defect and cause it to fail. This may also be brought about by vibration in some cases.

Latent damage caused inside an IC by ESD

These latent defects are particularly damaging because they are likely to lead to failures later in the life of the equipment, thereby reducing its reliability. In fact, manufacturing plants with poor anti-static protection are likely to produce low-reliability equipment as a result. It is estimated that for every device which suffers instant damage, at least ten are affected by latent damage and will fail at a later date.

Another way in which static can cause failure is when the voltage itself causes breakdown within the IC. It is quite possible for the voltage to break down an insulating oxide layer, leaving the IC permanently damaged. Again this can destroy the chip immediately, or leave a partly damaged area with a latent failure.

Charge can also be transferred to electronic components in other ways and cause damage, either through voltage breakdown or by causing current to flow in the device. This may occur because a highly charged item will tend to induce an opposite charge in any article near it. Plastic drinks cups are very susceptible to carrying high static voltages, and if they are placed on a work surface next to a sensitive piece of electronics they can induce a charge which may lead to damage.

Investigations

Although it is not easy to determine the cause of destruction of a device, some specialist laboratories have the means of making these investigations. They accomplish this by removing the top of the IC to reveal the silicon chip beneath. This is inspected using a microscope to reveal the area of damage. These investigations are relatively costly. They are not normally undertaken for routine failures; instead they are only undertaken when it is necessary to determine the exact cause of the failure.

Protection

With ICs being so easily prone to damage, it is necessary to treat all semiconductor devices, and often many passive devices, as static sensitive devices (SSDs). They should only be handled in special anti-static ESDPAs. The next page in this tutorial summarizes some of the methods and techniques that can be used.

Electro-Static Discharge (ESD) tutorial - a tutorial or summary about the basics of Electrostatic Discharge, ESD, and the ways in which electronic components and circuits can be protected from its effects

This tutorial is in three pages which address the different aspects of ESD:
- ESD and how it arises
- The sensitivity of electronics to ESD
- Overcoming ESD

There are many ways in which the effects of ESD can be overcome. A variety of methods are employed, including anti-static or static dissipative workbenches, anti-static or static dissipative containers, and static dissipative protection for operators. To provide the best protection the problem must be addressed from several angles:

- An area which is static free (anti-static) must be created. These areas are often known as electrostatic discharge protected areas (ESDPAs), and they must be used whenever SSDs or boards containing SSDs are to be handled.
- Any static sensitive devices, or boards containing them, must be stored in conditions where they are not subjected to the effects of static.
- Any boards using SSDs should be designed so that the effects of a discharge into the board are reduced to acceptable levels.
- Finally, any people who come into contact with electronic components or assemblies should be made aware of the effects of static discharges.

The decision about the number of measures to employ can be difficult because it is not always easy to determine the cause of any failures. Additionally, it may take many years for some of the failures to occur. However, if sufficient measures are taken then the risk of damage from ESD can be reduced to a sufficiently low level.

Work areas

To avoid static build-up in the area where electronic components and boards are being handled, the bench surfaces should be able to remove any static build-up which occurs. If there is an existing work bench then it is possible to buy a carbon-impregnated rubber mat to place on the bench. These anti-static mats are relatively cheap and are very cost effective. If a new bench is being installed then special static dissipative surfaces can be used. The level of conductivity of the surface is important: if its resistance is too low it may affect the operation of any board or assembly placed upon it, and any charge on a board placed onto it will be removed too quickly, which can itself cause damage. Accordingly, the volume conductivity of the material used should fall into the static dissipative category.

Another essential element in combating static build-up on people is to use wrist straps. These ensure that any charge built up on a person working on the equipment is safely dissipated. The strap consists of two sections: the band itself, which is worn around the wrist, and the lead connecting it to earth, which incorporates a large value resistor, normally in excess of 1 MΩ. This is included for two reasons: the first is safety, and the second is again to ensure that any static is removed in a controlled fashion. The straps should be regularly tested to ensure they have not become open circuit. Without a test of this nature a faulty strap could go undetected for many months. Many companies insist that every strap in use is tested every day. In this way any defects can be discovered before they cause too much damage.

Wrist straps, connections to workbench tops and any other points are normally connected together using a special junction box. These junction boxes usually have resistors of 1 MΩ for each of the contacts. These are joined and then taken to earth. Often a special mains plug with a connection to only the earth pin is used. These special plugs are usually yellow and have two plastic pins for the live and neutral, and a metal pin for the earth. In this way it is only possible to connect to earth.

Flooring in an electrostatic protected area, or anti-static area, also needs to be considered. Flooring made out of acrylic materials is likely to generate very high levels of static. For example, acrylic carpets in the home are particularly bad, whereas natural fibers like wool are much better. Even nylon is not as bad as an acrylic floor. For an electronic production area there is a wide variety of static dissipative coverings which can be installed if required to overcome any problems that might be caused. If static dissipative flooring is to be used then conductive footwear must be worn.
There is no point in having static dissipative flooring if people's shoes act as excellent insulators. Most people will want to wear their normal shoes and not have to wear 'regulation' footwear, as this is not likely to be as comfortable. The solution is to use a heel strap which fits over part of the shoe. This provides an acceptable path to earth past the shoe.

Clothing is another element that must be considered. Clothes of wool, cotton or even polyester-cotton are normally not a problem. However, some synthetic clothes can develop very high levels of static of their own, even if the person wearing them is grounded by the use of a wrist strap. Acrylic ties are particularly notable. They can collect high levels of static charge, and this can be passed to nearby components and electronic boards, causing damage. To overcome this type of problem, special static dissipative overalls can be worn. These normally have a relatively high conductivity to contain any static fields which might be generated.

Finally, chair coverings should also be investigated. They should not be of a type that generates high levels of static. In some instances they may need to be dissipative and connected to ground. It is possible to obtain special seat coverings for existing chairs, or completely new chairs. Choices can be made dependent upon the state of the chairs and the budget available.

Another approach that can be taken to help control static and ESD is to control the humidity. In dry periods of the year, especially winter, when the level of water vapor held in the air drops, the possibility of ESD rises. By introducing some humidity the levels of static can be reduced. Although not one of the most commonly used methods of ESD control, there are several types of humidifier which can be installed. Some fit into heating systems whereas others are separate units. Ideally a minimum humidity figure of 50% can be used as an aiming point; much above this, high humidity levels can lead to other problems.

Storage

Not only do work environments need to have ESD control measures introduced, but so too do the storage media. Whenever an electronic component or assembly is transported or stored it should be placed in suitable packing to ensure that it is not damaged. Dissipative bags for boards, dissipative tubes, and special dissipative containers for components are now commonplace in the electronics industry. Often the storage bags have a pink or grey tint to them. The older black conductive bags are used less, as they may dissipate the charge too quickly. Another problem was that they tended to discharge any on-board batteries more quickly than intended.

Soldering irons

There is a wide variety of soldering irons available on the market today. Many are quite suitable for work with static sensitive devices. The main requirement is that the bit used for soldering should be earthed. In general it is recommended that the resistance to earth should be less than five ohms. Any irons which are thermostatically controlled should ideally use a zero-voltage switching system. This prevents large spikes, caused by the switching of the thermostat, from appearing at the tip of the iron and causing damage to the equipment.
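As a rough worked example of the figures quoted in this tutorial (a body capacitance of about 300 pF and walking-generated voltages of 5-30 kV), the Python sketch below estimates the stored energy and the peak discharge current. The 1.5 kΩ series resistance is an assumed value of the kind used in human-body-model calculations, not a figure from the text; a metal tool presents a much lower resistance and therefore gives a higher, faster current peak, as noted earlier.

```python
# Rough human-body ESD estimate using the figures quoted above.
# C_BODY comes from the text; R_BODY is an assumed human-body-model value.

C_BODY = 300e-12     # body capacitance, farads (~300 pF)
R_BODY = 1.5e3       # assumed series resistance of the body, ohms

for v in (5e3, 10e3, 30e3):                 # typical walking-generated voltages
    energy_mj = 0.5 * C_BODY * v**2 * 1e3   # E = 1/2 C V^2, in millijoules
    i_peak = v / R_BODY                     # peak current through R_BODY
    tau_ns = R_BODY * C_BODY * 1e9          # RC time constant, nanoseconds
    print(f"{v/1e3:>4.0f} kV: {energy_mj:6.2f} mJ stored, "
          f"~{i_peak:4.1f} A peak, RC ~ {tau_ns:.0f} ns")
```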
The thirteen colonies of British North America that eventually formed the United States of America can be loosely grouped into four regions: New England, the Middle Colonies, the Chesapeake, and the Lower South. Each of these regions started differently, and they followed divergent paths of development over the course of more than a century of British settlement; yet they shared enough in common to join together against British rule in 1776.
New England was characterized from its earliest days by the religious motivation of most settlers. The Pilgrims who settled at Plymouth in 1620 were followed by a large group of Puritans in 1630. While religiously distinct from each other, the Pilgrims and Puritans had each left England because of religious persecution from conservative Anglicans, and each hoped to find a safe haven where they could worship without restrictions. The strictly moral societies founded in New England were intended to shine as beacons to the rest of the world, showing how life should be lived. The everyday lives of settlers revolved around religious worship and moral behavior, and while normal economic activities were understood to be necessary they were not intended to be the main focus of settlers’ lives.
A majority of the settlers who arrived in New England before 1642 came in family groups, and many came as community groups as well. The communities they re-formed in America were immediately demographically self-sustaining, and were often modeled on villages and towns left behind in England. Consequently, New England settlements often closely resembled English ones in ways that settlements elsewhere in the thirteen colonies did not. Migration during the English Civil War almost dried up completely, and when migration restarted after 1660, the increase in more commercially minded settlers began to alter the fundamental structure of New England society.
The Middle Colonies of New York, New Jersey, and Pennsylvania were all "Restoration Colonies," so-called because they came under English control after the Restoration of Charles II (1630-1685). New York was conquered from the Dutch in 1664, and although many Dutch settlers remained, large numbers of English and Scottish migrants arrived to alter the ethnic makeup of the colony. Pennsylvania probably bore more resemblance to the New England colonies than the rest of the Middle Colonies because it was founded by Quaker William Penn (1644-1718) as a religious haven. However, in contrast to most New England colonies, Penn adopted a policy of religious toleration, and his colony quickly attracted migrants from all over western Europe, particularly from Germany. The climate of Pennsylvania made it ideal farming country, and corn became its main staple product.
The Chesapeake was the earliest region colonized by the English. From the initial settlement at Jamestown the English spread very slowly around the tidewater of Chesapeake Bay, partly because of hostile local Native-American tribes, but also because the young men who constituted most of the settlers in Virginia before 1618 were not interested in forming stable communities. Instead, from 1612 onward they grew tobacco, which they knew would bring riches, but which also brought instability. The tobacco plant exhausted the soil and therefore virgin land was constantly needed to continue production. The quest for more land to bring under cultivation brought the English into further conflict with local tribes, and it was partly responsible for provoking the devastating Indian attacks of 1622 and 1644.
The Thirteen Colonies. Great Britain's thirteen colonies in North America later formed the first thirteen states in the new United States.
Although the English appetite for tobacco remained undiminished, oversupply of the crop meant that prices after 1620 were not high enough to sustain the get-rich-quick mentality that had prevailed between 1615 and 1620. As the Virginia Company began to transport more women to the colony after 1618, the society became more demographically stable, though still heavily reliant on inward migration to maintain its population levels.
In 1632 Maryland was created out of northern Virginia, and although the colonists shared with those farther south a desire to make money from tobacco, many were Catholic. The proprietor of Maryland, Lord Baltimore (Cecil Calvert, 1605?-1675), was a leading English Catholic and saw Maryland, like the Puritans saw New England and Penn saw Pennsylvania, as a religious refuge for those who shared his faith. Consequently, many of those in positions of authority and influence in Maryland were Catholic, something that caused friction among residents who were not Catholic. As a result, Lord Baltimore approved the passage of the Toleration Act of 1649, guaranteeing religious freedoms to the population.
The popularity of tobacco cultivation in the Chesapeake necessitated a regular supply of labor. Initially this demand was met by indentured servants who served for a period of seven years in return for passage to the New World and a promise of free land once the indenture was complete. Although the system of indentured labor was not perfect it served the colony well enough and was generally preferred to the alternative— slave labor—until the 1680s. This was mainly an economic decision; slaves were more expensive than indentured servants. And because significant numbers of servants did not live to see their freedom, the additional investment required for slave labor was simply not worth it. However, as death rates fell in Virginia and the ready supply of white indentured servants to the Chesapeake began to decline, slave labor became a more attractive alternative.
While there had been Africans in Virginia since before 1620, their status as slaves was not fixed and at least some Africans obtained their freedom and began farming. However, by 1660 discriminatory laws began to appear on the Virginia statute book, and after 1680, when the number of enslaved Africans began to rise quickly, they entered a full-fledged slave society that increasingly defined the Chesapeake colonies.
The Lower South colonies consisted of the Carolinas, first settled in 1670, and Georgia, not settled until 1733. Since the climate of the Carolinas was known to be conducive to plantation-style agriculture and many of the proprietors were also directors of the Royal African Company, slaves followed hard on the heels of the first white settlers. Finding large numbers of white settlers proved difficult, and the earliest migrants to Carolina were English and Scottish dissenters and a large group of Barbadian Anglicans who brought their slaves with them. The tidal waters around Charles-Town were ideally suited to rice cultivation, the techniques of which were most likely taught to planters by Africans, and large plantations growing the staple quickly became the norm. The numbers of workers required for rice cultivation were large, and as early as 1708 the coastal regions of Carolina had a black majority population.
The trustees of Georgia initially intended their colony to be both a buffer between South Carolina and Spanish Florida and a haven for persecuted European Protestants, and believing that slavery would not be conducive to either of these aims, they prohibited it in 1735. However, the colony languished economically, failing to keep settlers who could see the wealth on offer in neighboring South Carolina, and eventually the trustees were forced to back down and permit slavery from 1750. Georgia quickly became a plantation colony like South Carolina.
The significant differences that existed between these four regions lessened during the eighteenth century but never entirely disappeared. The society of New England became more heterogeneous and less moralistic due to increased migration of non-Puritans. Chesapeake society gradually stabilized as death rates fell, and by 1700 the population was demographically self-sustaining. Significant events began to have an impact throughout the colonies, creating a shared American colonial history. The Glorious Revolution of 1688 affected New England, New York, Maryland, and South Carolina as colonials successfully struggled against what they believed were the pro-French absolutist tendencies of James II (1633-1701) and his followers in America. In the eighteenth century the pan-colonial religious revivals collectively known as the Great Awakening made household names of evangelists such as George Whitefield (1714-1770). Continued migration brought hundreds of thousands of new settlers to the colonies, not only from England but increasingly from Ireland, Scotland, France, and Germany. The dispersal of these settlers in America, together with half a million enslaved Africans, made the colonial population a truly diverse one.
While it is difficult to speak of a common colonial culture, given the diverse experiences of Boston merchants, Pennsylvanian farmers, and Georgian planters, most shared a belief in traditional English freedoms, such as the rule of law and constitutional government. When these freedoms were thought to be threatened by actions of the British Parliament in the 1760s and 1770s, most colonists were quick to find common cause as Americans against British tyranny, though significant loyalist sentiment lingered in New York, South Carolina, and Georgia. |
Most children seem to have a congenital aversion to vegetables — possibly because some vegetables are bitter tasting, such as Brussels sprouts, broccoli, and asparagus.
Researchers explain that flavor is the primary dimension by which young children determine food acceptance.
According to researchers, “Children are not merely miniature adults because sensory systems mature postnatally and their responses to certain tastes differ markedly from adults. Among these differences are heightened preferences for sweet-tasting and greater rejection of bitter-tasting foods.”
A study published in 2005 by researchers at Monell Chemical Senses Center in Philadelphia suggests that a gene may be responsible for children’s aversion to bitter flavors.
The study included 143 children and their mothers; over 79 percent of the children had one or two copies of the bitter-sensitive gene present.
“The presence of the bitter-sensitive gene made a bigger impact on the children’s food preferences than their mothers. The mothers’ tastes in foods seemed to be influenced more by race and ethnic type foods than their genetic makeup.”
That’s why 80 percent of children don’t like vegetables.
Author and journalist Linda Larsen who has worked for Pillsbury points out that most children need to actually see a new food four to five times before they’ll even try it, so it’s important to keep introducing fruits and veggies.
According to a study by Professor Mildred Horodynski of Michigan State University's College of Nursing, a mother's eating habits have a huge impact on whether her child consumes enough fruits and vegetables.
The research results indicated toddlers were less likely to consume fruits and vegetables four or more times a week if their mothers did not consume that amount or if their mothers viewed their children as picky eaters.
Horodynski claims previous research revealed early repeated exposure to different types of foods is required — up to 15 exposures may be needed before it can be determined if a child likes or dislikes a food.
Healthy Eating From a Kid’s Point Of View
Megan Sproba, a 13-year-old seventh-grade student at Brabham Middle School in Willis, Texas, says the way parents serve and prepare vegetables plays a major role in developing a positive attitude toward vegetable eating.
“I know from past experience that eating veggies is not fun. They taste as if it has no flavor, as if they need something to help force it down. I like my broccoli cooked, not under and not over, but just enough so that it is easy to chew. I like my Brussel sprouts baked in the oven to where they are crisp and brown, but not burnt. I especially like my asparagus cut up and soft with a cheesy sauce.
“One thing I do not like is when someone tries to hide a veggie in some other food. My mom used to make me eat tacos with zucchini in them and it was really gross.”
Linda Larsen’s Fruit and Vegetable Tips For Kids
1. Try to focus on the sweeter ‘good for you’ foods, like strawberries, mandarin oranges, cherries, tomatoes, sweet peas, and corn.
2. You can add some finely chopped fruits to gelatin salads, add some pureed sweet peas to guacamole, and serve tiny vegetables, like baby carrots and baby corn, with appetizer dips.
3. I like using pureed fruits in desserts – even though they’re desserts, you are getting some nutrition into your kids.
4. Finely chop carrots and mix them into spaghetti sauce.
5. Make some fruit breads – banana bread, pear bread, and apple bread are all good.
6. You can also finely mince vegetables and add them to hamburger patties or turkey burgers.
7. Puree corn and stir that into corn muffin batter, or make apple cake or pumpkin cheesecake.
8. Also start making casseroles. You can start out with just pasta, cheese, and sauce, but then gradually add more finely chopped vegetables to the sauce. You can get some minced veggies into tuna or chicken salad as well. |
Which of the sociological perspectives of social stratification is most relevant to the experience of societies in the English speaking Caribbean? Use the findings of empirical studies conducted in the region to illustrate your answer.
Haralambos, Holborn and Heald (2004) define social stratification as a particular form of social inequality: it refers to the presence of distinct social groups which are ranked one above the other in terms of factors such as prestige and wealth. The sociological perspective of social stratification most relevant to the English-speaking Caribbean is the class system, or class distinction. The stratification systems of the Caribbean have been found to be influenced by slavery, indentureship, education and the settlement patterns of the Europeans during slavery and after the emancipation of the slaves (Course Material). The social structure of the Caribbean has been greatly influenced by the impact of colonialism and its attendant factors. However, the decade of the 1960s marked the end of the colonial era for the English-speaking islands and coastlands of the Caribbean region; the two most populous territories, Jamaica and Trinidad and Tobago, became independent in 1962.
Additionally, before emancipation, Caribbean society featured three main strata: a white upper stratum of plantation owners and managers; a brown middle stratum of skilled and semi-skilled workers, traders and small groups of persons who owned and operated businesses (Course Material); and, lastly, a lower stratum of mostly black, manual, unskilled workers in both the rural and urban areas (Course Material). However, even though Caribbean society featured these three main strata, studies conducted in the English-speaking Caribbean suggest that stratification patterns were largely determined by a changing class structure that was shifting and expanding mostly in the middle stratum.
C.L.R.James (1962), in his writing cited... |
The walnut tree genus, known botanically as juglans, contains at least 17 species and hybrid cultivars of walnut trees. Walnut trees are fine hardwood trees related to the hickory genus and are grown for their nuts and timber wood. All species fall into one of three types of walnut tree: those grown for their fruit or nuts, those grown for timber, and those grown for both their fruit and timber. Waste from the nut and timber harvest industries is also commercially refined and sold for secondary product purposes, including walnut oil and indelible dyes made with the skin of the black walnut. Walnut trees are also widely grown as specimen or ornamental trees due to their large patrician profile, elegant trunks and attractive broad canopy.
Common Walnut Trees
Walnut trees that are not hybrid or black walnut species are grouped together as common walnut, and this is the largest species group in the genus. Common walnut trees are grown for their fruit, their timber or as ornamentals. Common walnut species include the Japanese walnut, Bolivian walnut, Southern California walnut, butternut walnut, Northern California walnut, West Indian walnut, Arizona walnut, Manchurian walnut, little walnut, Stewart's little walnut, Andean walnut and the English walnut.
Black Walnut Trees
Black walnut trees are grown for their fruit as well as their timber wood and are distinguished by the dark, nearly indelible ink secreted from their outer skins when the peel is breached. The main species of black walnut is Juglans nigra.
Hybrid Walnut Trees
Hybrid walnut trees are mainly grown for their timber harvest and have been bred for speedier growth and dense wood with a tight grain. Hybrid walnut tree species include juglans x bixbyi Rehder, juglans x intermedia Carrière, and juglans x quadrangulata Carrière Rehder. |
Seasonal Influenza (Flu)
Influenza is a viral infection that is most common from October to February. Symptoms include fever, cough, sore throat, runny or stuffy nose, headache, muscle aches and extreme fatigue. Most people who get an influenza infection recover fully within 1-2 weeks. However, some people develop serious, life-threatening complications such as pneumonia. Many people use the word "flu" to describe any illness that features a cold and fever, even if it isn't the flu.
Everyone can get the flu, but influenza poses a serious health hazard to the elderly. Each winter, in the United States alone, influenza typically takes the lives of 36,000 Americans, most of whom are older than 65. Pneumonia that results from influenza is the sixth leading cause of death in the United States.
Yes, influenza can be prevented: an influenza vaccine (flu shot) is available, as well as a pneumococcal vaccine, which prevents serious infections, including some forms of pneumonia. Both are covered by Medicare.
However, in 1995 only 58% of those eligible received the vaccine. More older Americans are receiving the shots now, but only 40% of older African Americans and only half of older Hispanics were vaccinated last year. Influenza vaccinations must be given annually because the virus changes from year to year, and new vaccines are manufactured annually. The Centers for Disease Control and Prevention works with partners in Asia to monitor annual strains of the influenza virus that appear first in Asia. A vaccine is developed for worldwide use, to prevent or minimize illness from the strain that is considered most likely to spread throughout the world. Vaccine development is based on the best scientific evidence available to predict which strain will become the most likely to spread.
Many older adults avoid being vaccinated because they mistakenly believe they can't afford it. Some fear influenza vaccine can cause the flu, a fear that should be put to rest. Someone who develops a flu-like illness shortly after vaccination could have one of many other infections, including the common cold virus or a flu virus they were exposed to before receiving the vaccine. The vaccine takes about 10 to 14 days to begin its full protection. It does not protect against other illnesses that may be referred to as "flu."
- People should receive influenza vaccinations between October and mid-November each year to prevent influenza and life-threatening complications such as pneumonia.
- The elderly are at increased risk for influenza and its complications.
- Flu shots do not cause the flu and they are affordable. Medicare covers flu shots for the elderly and free flu shots are available at many hospitals and public health clinics.
It is recommended that other high risk groups get the flu shot annually. These groups include:
- People with underlying chronic medical conditions, especially chronic lung diseases (including asthma) or heart ailments;
- People with compromised immune systems (from AIDS, HIV or chemotherapy);
- Children six months to eighteen years old with a long history of aspirin use (catching influenza could result in Reye's Syndrome); and
- Women who expect to be more than three months pregnant during flu season.
- A 70-year-old grandmother flies across the country to visit her family. After landing, she has a scratchy throat, coughs a little, and is tired and maybe even a little feverish, but she chalks it up to the long flight. At her daughter's house, she lies down. The daughter checks on her a bit later, and finds her burning with fever, with an intense headache and body aches. She is coughing and congested. The daughter calls a doctor who says the symptoms sound like influenza infection. Has the mother had a flu shot? No. The mother and daughter immediately go to the hospital emergency room where the diagnosis is acute bronchitis infection and pneumonia. The mother is hospitalized, antibiotics are administered, but her condition declines. She dies three days later.
- Ed, a semi-retired smoker, has chronic bronchitis. His secretary comes in with some flu-type symptoms that are only beginning to show—she is highly contagious, but doesn't know it. She ultimately infects several people in the office by sneezing, coughing, etc. Ed did not have a flu shot, and within a couple of days becomes seriously ill with pneumonia. He is hospitalized, and his condition declines. However, after several days of antibiotics to treat secondary infections, and plenty of rest, he eventually recovers. Although well enough to return home, it is a while before he has enough energy to return to work.
- A nurse at the local assisted living facility posts a notice on the hallway bulletin board alerting residents that flu shots and pneumonia shots will be given the following Monday. The nurse reminds residents who are walking past the sign that they need an annual flu shot, but that they don't need an annual pneumonia shot if they had one before. She answers questions and signs them up for the vaccines.
VI. Reading Comprehension (10%; 2 points per question, 5 questions in total)

Many cultures around the world often give animals human personalities. For example, Chinese usually think that foxes are tricky and that donkeys are stupid. In literature, this is a special technique that writers use and is called "personification." Animals are also given "personalities" in the West. For instance, wolves are supposed to be evil and dangerous. In many fairy tales, wolves are usually the bad guys and they eat the other characters in the story. The English idiom "a wolf in sheep's clothing" comes from a fairy tale and refers to a dangerous or bad person who pretends to be nice and harmless. In the fairy tale, the wolf kills a sheep and then skins it. He then wears the skin in order to get close to other sheep so that he can eat them. In the fairy tale, the wolf could think, skin a sheep, and trick sheep like a human. The writer gives the wolf human qualities, so that he acts like an evil man.

55. Which of the following about the wolf in the fairy tale is FALSE?
(A) He is dangerous. (B) He eats sheep. (C) He is harmless. (D) He skins sheep. |
Simple vs. Compound Interest -- Spreadsheeting the Difference
In this Spreadsheets Across the Curriculum activity, students are guided step-by-step to build a spreadsheet that compares the future value of an investment that grows exponentially (compound interest) to the future value of the same investment that grows linearly (simple interest). The students calculate the year-to-year succession of future values, plot them on XY-graphs against time, and fit trend lines. The module emphasizes the differences in the future values many years out from the initial investment. Spreadsheet difficulty level is elementary. This module can be the students' first experience in building a spreadsheet to perform a systematic calculation.
- Understand the difference between simple and compound interest.
- Use Excel functions to do the same calculations easily.
- Plot the results for each on a scatter diagram and add a trend line/curve to each.
- Use the spreadsheet to forward model.
- Gain experience with both the simple and compound interest formulas.
- Compare the difference in growth between simple and compound interest for several different interest rates.
- Determine which interest method is used for common financial products (e.g., loans, savings).
- Format cells in an Excel worksheet.
- Estimate values from a graph.
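Although the module itself is built in Excel, the same comparison can be sketched in a few lines of code. The snippet below is an illustrative stand-in for the spreadsheet, not part of the module: it tabulates the future value of a principal P under simple interest, FV = P(1 + rt), and under annual compounding, FV = P(1 + r)^t, using made-up values for P and r.

```python
# Stand-in for the module's spreadsheet: future value under simple vs
# annually compounded interest. P, r and the year range are illustrative.

P = 1000.0    # initial investment
r = 0.05      # annual interest rate (5%)

print(f"{'year':>4} {'simple':>10} {'compound':>10}")
for t in range(0, 31, 5):
    fv_simple = P * (1 + r * t)       # linear growth
    fv_compound = P * (1 + r) ** t    # exponential growth
    print(f"{t:>4} {fv_simple:>10.2f} {fv_compound:>10.2f}")
# After 30 years the compounded balance (about 4321.94) is well ahead of
# the simple-interest balance (2500.00), which is the point of the module.
```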
Context for Use
Description and Teaching Materials
SSAC2005.HG1621.GTF1.1-student (PowerPoint 265kB Nov20 06)
The module is a PowerPoint presentation with embedded spreadsheets. If the embedded spreadsheets are not visible, save the PowerPoint file to disk and open it from there.
This PowerPoint file is the student version of the module. An instructor version is available by request. The instructor version includes the completed spreadsheet. Send your request to Len Vacher ([email protected]) by filling out and submitting the Instructor Module Request Form. |
The quest to understand the attitudes people have towards plants gets a statistical boost from biologists Jana Fancovicova and Pavol Prokop in Development and Initial Psychometric Assessment of the Plant Attitudes Questionnaire. This study marks the first attempt to systematically evaluate the attitudes students have towards plants (Fancovicova and Prokop, 2010).
Fancovicova and Prokop (2010) used their new assessment tool in a study to determine the following:
- Do students from families who maintain a garden exhibit a more positive attitude towards plants?
- Do females have more positive attitudes towards plants than males?
Attitudes towards plants are the focus of their assessment tool and research because attitudes affect behavior and changes in behavior are necessary for humans to take responsibility for their role in the loss of plant biodiversity (Fancovicova and Prokop, 2010).
The Plant Attitude Scale (PAS) they created contains 45 Likert-style questions addressing student attitudes about the importance of plants, interest in plants, plant use in society and the costs and benefits of urban trees. The structure and reliability of the PAS was assessed using statistical analysis. The attitudes of 310 Slovakian students were analyzed. Students age 10-15 years were surveyed specifically because this age group has been found to be “important in the development of children’s cognitive abilities and their ecological awareness of the role of animals in their natural habitats” (Fancovicova and Prokop, 2010) and the authors assumed this was also true regarding this age group’s awareness of plants. Student participation was on a volunteer basis and dependent upon whether or not a teacher wanted to take the time to distribute the PAS to his/her students.
Fancovicova and Prokop (2010) found that student attitudes towards plants was neutral overall. Children who came from families who maintained a garden had a more positive attitude towards plants than their counterparts. While more positive, the difference in attitudes was statistically significant only with respect to Interest in plants. These results are consistent with the results found in other studies about student interest in plants. The authors also found there was no significant difference with respect to interest level between male and female students.
These findings, as well as additional observations, are discussed in detail in Fancovicova and Prokop (2010). Overall results suggest students do not value plants and that educational programs aimed at increasing student appreciation towards plants are important and necessary (Fancovicova and Prokop, 2010). Fancovicova and Prokop (2010) make several suggestions for future research using the sound assessment tool they created. Suggestions include assessing teacher attitudes towards urban trees, assessing the effectiveness of gardening activities in schools, and assessing the effectiveness of outdoor education programs.
Fancovicova, Jana and Pavol Prokop. 2010. Development and initial psychometric assessment of the Plant Attitude Questionnaire. Journal of Science Education and Technology. Volume 19: 415-421. |
How Watt Balances Work
A watt balance is an exquisitely accurate weighing machine. Like any balance, it is designed to equalize one force with another: In this case, the weight of a test mass is exactly offset by a force produced when an electrical current is run through a coil of wire immersed in a surrounding magnetic field.
The surrounding field is provided by a large permanent magnet system or an electromagnet. The moveable coil, once electrified, becomes an electromagnet with a field strength proportional to the amount of current it conducts. When the coil's field interacts with the surrounding magnetic field, an upward force is exerted on the coil. The magnitude of that force is controlled by adjusting the current.
The instrument is called a "watt" balance because it makes measurements of both current and voltage in the coil, the product of which is expressed in watts, the SI unit of power. That product equals the mechanical power of the test mass in motion.
Current and voltage are measured in two separate stages, or modes, of operation.
In "weighing" or "force" mode, a test mass is placed on a pan that is attached to the coil. It exerts a downward force -- its weight -- which is equal to its mass (m) times the local gravitational field (g). The current applied to the coil is then adjusted until the upward force on the coil precisely balances the downward force of the weight. When the system reaches equilibrium, the current is recorded.
At this point, it might seem that the job is finished. After all, the force (F) on the coil – which equals the weight of the mass – can be calculated with a simple equation that dates from the 19th century: F = IBL, where I is the current, B is the magnetic field strength, and L is the length of the wire in the coil. However, as a practical matter, the product BL is extremely hard to measure directly to the necessary accuracy.
Fortunately, physics provides a way around that problem via yet another relationship revealed in the mid-19th century: induction. Michael Faraday discovered that a voltage is induced in a conductor when it travels through a magnetic field, and that the voltage is exactly proportional to the field strength and the velocity. So if the velocity is constant, the induced voltage is a sensitive measure of the field strength.
That phenomenon is the basis for the watt balance's second stage, called "velocity" or "calibration" mode. For this operation, the test mass is removed and the applied current through the coil is shut off. The coil is then moved through the surrounding field at a carefully controlled constant velocity. The resulting induced voltage is measured.
Again, an uncomplicated formula governs the magnitude of the induced voltage: V = νBL, where B and L are the very same field strength and wire length as in weighing mode, and ν is the velocity. When this equation is combined with the one above for force on the coil, the problematic B and L cancel out, leaving IV = mgν. (That is, electrical power equals mechanical power,* both expressed in watts.) Or, solving for mass, m = IV/gν.
Everything on the right side of that equation can be determined to extraordinary precision: The current and voltage by using quantum-electrical effects that are measurable on laboratory instruments; the local gravitational field by using an ultra-sensitive, on-site device called an absolute gravimeter; and the velocity by tracking the coil's motion with laser interferometry, which operates at the scale of the wavelength of the laser light.
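As a concreteness check, the closing relation m = IV/gν can be evaluated directly. The sketch below is only illustrative: the function name and all numerical values are made up for the example, not measurements from any actual watt balance.

```python
# Illustrative sketch of the watt balance's closing relation m = IV / (g * v).
# All numbers are made-up placeholders, not data from a real instrument.

def mass_from_watt_balance(current_A, voltage_V, g_local, coil_velocity):
    """Mass (kg) implied by the weighing-mode current, the velocity-mode
    voltage, local gravity (m/s^2) and the coil velocity (m/s)."""
    return (current_A * voltage_V) / (g_local * coil_velocity)

I = 0.010        # A, current that balanced the weight in force mode
V = 1.96133      # V, voltage induced in velocity mode
g = 9.80665      # m/s^2, local gravitational acceleration
v = 0.002        # m/s, constant coil velocity

print(mass_from_watt_balance(I, V, g, v))  # -> 1.0 (kg)
```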
Where is the Planck constant, h, in all this? It comes into play in the way current and voltage are measured using two different quantum-electrical physical constants. Both constants are defined in terms of h and the charge of the electron, e. Those are very small quantities. Yet both are manifest in measurable macroscopic phenomena.
Current in a watt balance is measured by way of a resistor in the circuit. Resistance can be determined to about 1 part per billion using the von Klitzing constant, which describes how resistance is quantized in a phenomenon called the quantum Hall effect. Voltage is measured using the Josephson effect (and its associated Josephson constant), which relates voltage to frequency in a superconducting circuit, with measurement uncertainties in the range of 1 part in 10 billion. The Josephson effect is the world's de facto standard for quantifying voltage.
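To make the connection to h explicit, here is a schematic sketch of the two quantum-electrical relations just described. The constants K_J = 2e/h and R_K = h/e² are the standard definitions of the Josephson and von Klitzing constants; the function names, junction count and drive frequency are illustrative assumptions, not a description of any laboratory's actual setup.

```python
# Sketch of how the electrical measurements trace back to the Planck constant h.
h = 6.62607015e-34   # Planck constant, J*s (exact in the revised SI)
e = 1.602176634e-19  # elementary charge, C (exact in the revised SI)

K_J = 2 * e / h      # Josephson constant, ~4.836e14 Hz/V
R_K = h / e**2       # von Klitzing constant, ~25812.807 ohms

def josephson_voltage(n_junctions, drive_freq_hz):
    """Voltage across an array of Josephson junctions driven at a known frequency."""
    return n_junctions * drive_freq_hz / K_J

def quantum_hall_resistance(plateau_index):
    """Resistance of the i-th quantum Hall plateau, used to convert a voltage to a current."""
    return R_K / plateau_index

# Illustrative numbers only: ~250,000 junctions driven at 20 GHz.
V_ref = josephson_voltage(250_000, 20e9)              # ~10.3 V
I_ref = V_ref / quantum_hall_resistance(2)            # ~0.8 mA
print(V_ref, I_ref)
```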
Because of these connections to the Planck constant, a watt balance can measure h when the mass is exactly known (as in the case of a 1 kg standard), or it can measure an unknown mass if h is exactly known. The impending redefinition of the kilogram will assign a specific fixed value to h, allowing watt balances to measure mass without recourse to the IPK or any physical object.
* Actually, neither IV nor mgν is measured directly in either operational mode. The watt balance thus makes a virtual comparison between electrical and mechanical power. |
As in humans, a fish's ears are important for survival, playing a major role in sensing the animal's acceleration and orientation. The ear structures in fish, known as otoliths, are composed of minerals. The Scripps team assumed that, since acidic waters weaken and dissolve shells made of minerals, the otoliths of fish living in waters with high levels of CO2 should grow very slowly, possibly even resulting in smaller otoliths. Fish ears are inside their bodies, so the change would not be noticeable by simply looking at the fish.
Checkley and his research team began their experiment by incubating the eggs of white sea bass in seawater that contained more than six times the normal amount of carbon dioxide. When the fish were between seven and eight days old, the scientists measured their otoliths and were met with quite a surprise. Contrary to the researchers' assumptions, the young fish had otoliths that were 15 to 17 percent larger than normal. They repeated the experiment, only to receive the same results. The Scripps team then performed another experiment by reducing the carbon dioxide in the water to roughly 3.5 times the normal level. In contrast to the first experiment's results, the otoliths in these fish were only 7 to 9 percent larger than normal.
The Scripps team published their conclusions in the June 26th issue of Science. The researchers report that such abnormal otolith growth could leave the animals disoriented and impair their ability to survive in their normal environment. They also found that the fish's body sizes did not grow proportionally to their ear bones.
Checkley has stated that this finding raises further questions that he and his research team hope to explore: specifically, how the additional carbon dioxide in the water affects the otoliths, and whether similar otolith deformities are occurring in fish other than the white sea bass. Checkley also said that they will look at whether having larger ear bones affects a fish's survival and behavior.
“If fish can do just fine or better with larger otoliths, then there’s no great concern. But fish have evolved to have their bodies the way they are. The assumption is that if you tweak them in a certain way it’s going to change the dynamics of how the otoliths helps the fish stay upright, navigate and survive… At this point one doesn’t know what the effects are in terms of anything damaging to the behavior or the survival of the fish with larger otoliths. The assumption is that anything that departs significantly from normality is an abnormality and abnormalities at least have the potential for having deleterious effects.”
To read the study:
Checkley, D., Dickson, A., Takahashi, M., Radich, J., Eisenkolb, N., & Asch, R. (2009). Elevated CO2 Enhances Otolith Growth in Young Fish Science, 324 (5935), 1683-1683 DOI: 10.1126/science.1169806 |
Sea star invaders are local
One of the greatest biological threats to tropical coral reefs can be a population outbreak of crown-of-thorns sea stars. Outbreaks can consume live corals over large areas, a change that can promote algal growth, alter reef fish populations and reduce the aesthetic value of coral reefs. Despite more than 30 years of research, the triggers and spread of crown-of-thorns outbreaks are not fully understood.
Human impacts such as urbanization, runoff and fishing have been correlated with outbreaks, but some outbreaks continue to occur in the absence of known anthropogenic triggers. Waves of a spreading outbreak that move southward along the Great Barrier Reef are termed secondary outbreaks because they are thought to be seeded by dispersing larvae from a primary outbreak upstream.
This secondary outbreak hypothesis has been widely accepted, but a team of scientists from the University of Hawaiʻi at Mānoa's Hawaiʻi Institute of Marine Biology, the Joint Institute for Marine and Atmospheric Research and Rutgers University demonstrated that, unlike on the Great Barrier Reef, crown-of-thorns larvae are not moving en masse among central Pacific archipelagos. In fact, crown-of-thorns outbreaks came from local populations.
On a finer scale, genetic differences were detected among reefs around islands and even between lagoon and forereef habitats indicating that the larvae of this species are not routinely reaching their full dispersal potential, and are not fueling outbreaks at distant sites.
Mānoa's Christopher E. Bird, Derek J. Skillings, Molly A. Timmers and Robert J. Toonen proved that crown-of-thorns outbreaks are not some rogue population that expands and ravages across central Pacific reefs. Instead, they hypothesize that nutrient inputs and favorable climatic and ecological conditions likely fuel outbreaks of local populations.
“The genetic differences found among crown-of-thorns populations clearly indicate that outbreaks are not spreading from the Hawaiian Archipelago to elsewhere. Furthermore, the similarity between outbreak and non-outbreak crown-of-thorns populations within each archipelago indicates that outbreaks are a local phenomenon,” explained Toonen. “Our recommendation to managers is to seriously consider the role that environmental conditions and local nutrient inputs play in driving crown-of-thorns outbreaks,” said Toonen.
ESTIMATING ANIMAL ABUNDANCE
Brian L. Pierce, Roel R. Lopez, and Nova J. Silvy
Department of Wildlife and Fisheries Sciences, Texas A&M University, College Station, TX 77845

Prologue
► Leopold (1933) stated that “measurement of the stock on hand” was the essential first step in any wildlife management effort.

Introduction
► Herein we provide an overview of factors that should be considered before choosing a method to estimate population abundance, the pros and cons of using various methods, and available computer software, so that the reader may make informed decisions based upon their particular needs.

Statistical Definitions
► Population: A group of animals of the same species occupying a given area (study area) at a given time.
► Absolute abundance: Number of individuals.
► Relative abundance: Number of individuals within a population at 1 place and/or time period, relative to the number of individuals in a different place and/or time period.
► Population density: Number of individuals per unit area.
► Relative density: Density within 1 place and/or time period, relative to the density in another place and/or time period.
► Population trend: Change in numbers of individuals over time.
► Census: A total count of an animal population.
► Census method: The method (i.e., spotlight count) used to obtain data for an estimate of population abundance.
► Population estimate: A numerical approximation of total population size.
► Population estimator: A mathematical formula used to compute a population estimate calculated from data collected from a sampled animal population.
► Closed population: A sampled population where births, deaths, emigration, and immigration do not occur during the sampling period.
► Open population: A sampled population that is not closed.
► Population index: A statistic that is assumed to be related to population size.
► Detection probability: The probability that an individual animal within a sampled population is detected. Synonyms include observability, sightability, catchability, detectability, or probability of detection.
► Parameter: An attribute (i.e., percent females) of a population. If you know the parameters of the population, you do not need statistics.
► Statistic: An attribute (i.e., percent females) from a sample taken from the population.
► Frequency of occurrence: Observed number of an attribute relative to total possible number of that attribute (e.g., individual was observed on 4 of 5 spotlight counts).
► Accuracy: A measure of bias error, or how close a statistic (i.e., a population estimate) taken from a sample is to the population parameter (i.e., actual abundance).
► Bias: The difference between an estimate of population abundance and the true population size. However, without knowledge of the true population size, bias is unknown.
► Mean estimate: The average of repeated sample population estimates, usually taken over a short time period.
► Precision: A measure of the variation in estimates obtained from repeated samples.
► Range: Difference between lowest and highest estimates.
► Variance: Sum of the squared deviations of each of the n sample measurements from the mean, divided by n−1.
► Standard deviation: Positive square root of the variance.
► 95% confidence interval: Probability that a given estimate will fall within ±2 standard errors of the mean.
► Central Limit Theorem: The distribution of sample means calculated from an infinite number of successive random samples will be approximately normally distributed as the sample size (n) becomes larger, irrespective of the shape of the population distribution.

Precision Versus Accuracy
Measures of statistical precision include: range, variance, standard deviation, standard error, and confidence interval. In the real world of population estimation, one does not ever know where the bull's eye lies; therefore, one can only measure precision of the estimates.

Survey Design: 20 Questions
► Have I reviewed the relevant literature on the species and/or method?
► Do I need an estimate of density or will an index of relative abundance suffice?
► What methods are available which meet these criteria?
► What is the extent of the survey area?
► Are there any limitations as to where I can sample?
► What are the experimental units from which samples will be drawn?
► How much precision is desired?
► If comparing areas or time periods, how small a difference must be detected?
► Given the precision or difference to be detected, how much replication is required?
► How much replication can I afford?
► What is the distribution of the species to be surveyed?
► How will the sample units be distributed?
► Will sample units be drawn with or without replacement?
► Do I have the equipment and infrastructure necessary?
► Do I have sufficient funds to conduct the proposed survey?
► Is that money better spent on answering another question?
► Do I have the time required to complete the estimate?
► Do I have the expertise to collect and analyze the data, or is it available elsewhere?
► Are there other biologists and biometricians that can provide an independent review?
► Will I need a pilot study to answer any of the above questions?

Survey Design
► Because of the limits of time and costs, a survey of the entire study area of interest is usually not possible.
► An experimental design is devised to select a portion of the study area to be sampled (experimental units).
► Proper experimental design helps to minimize the effects of uncontrolled variation, allowing you to obtain unbiased estimates of abundance and experimental error (variation between experimental units treated alike).
► Survey extent: The spatial and temporal extent of the area over which inference is to be made.
► Experimental design: Plan used to select the portion of the study area to be sampled, the number of replications, and the randomization rule used to assign treatments to experimental units.
► Experimental unit: The smallest entity to which a treatment can be randomly assigned. Experimental units are homogenous and should be representative of the population or treatment to which inference is to be applied.
► Sample unit: The entity from which measurements are obtained.
► Treatment: Manipulative (applied by the experimenter), mensurative (categories of time or space), or organismal (natural categories, such as age class or sex) division of experimental units.
► Replication: Repeated sampling from each experimental unit.
► Randomization rule: Used to ensure an unbiased assignment of treatments to experimental units.
► Simple random sampling: Each sample unit has an equal probability of being selected.
► Stratified random sampling: Employed when there are implicit differences in experimental material (prior to treatment) that must be accounted for in the analysis. Experimental units are categorized accordingly, and sample units selected randomly from within these categories.
► Systematic sampling: Employed to ensure uniform coverage of the area under investigation.

Survey Design: Non-probabilistic Sampling
► Convenience sampling
► Selective sampling
► Regardless of cause or reason, non-probabilistic sampling designs are likely to yield biased estimates with levels of precision that are not representative of the area of inference.

Survey Design: Sampling Intensity
► Sampling intensity: A concept that encompasses desired precision, statistical power, and the amount of variability among the sample units.
► Significance level: The odds that the observed result is due to chance.
► Power: The odds that you will observe a treatment effect when it occurs.
► Effect size: The difference between treatments (e.g., in number of animals seen) relative to the noise in measurements.
► Variation in the response variable: The sample variance or standard deviation are often used to estimate variability in the parameter of interest (e.g., population mean).
► Sample size: The number of samples (n) required to obtain the desired precision in an estimate, or the desired power in a hypothesis test.

Method Considerations
► Methods can be broadly categorized as either census methods or estimates derived from sampling, and further subdivided by complete or incomplete detection within samples.
► The combination of method and survey design then, in turn, dictates how samples may be combined to estimate population means and variances.
► Problems arise when animal distributions are clumped, or when the distribution of samples correlates with the underlying distribution of animals to be sampled.
► Most survey methods do not observe all individuals within the population, and detection probability may vary over space and/or time.

Methods: Indices
► A density index can be defined as any measure that correlates with density.
► Indices differ from population estimation methods in that only counts are obtained, with little or no attempt to correct for incomplete detection or variable detection probability.
► Most indices collect frequency (number of individual animals or animal sign) information along transects, at quadrats, or points.
► Indices are relatively inexpensive, and can be used to compare animal numbers between treatment and control areas (e.g., disked with non-disked areas) or to compare the same area over time, with the assumption that nothing changes except the relative abundance of the animal being studied.

Census
► Total count of an entire population.
► The data obtained are not a sample, but an enumeration of the whole population (i.e., no variability is present because you counted them all).
► Rare situations occur where this is useful. In most cases complete counts cannot be made with certainty, and if animals are missed there are no means to detect bias nor assess the precision of the sample.
► In most cases data resulting from these methods should be viewed skeptically.

Sample Counts: Fixed Area
► Total counts on limited sample areas may be possible.
► The sample units must be suitably sized relative to the organism being considered, to ensure a complete count is obtained.
► The area being counted is “fixed” in terms of the length and width of each sample prior to the start of the survey.
► The mean density from all sample plots is then extrapolated to the entire study area, giving an estimate of average density and/or population abundance for the area of inference.
► This basic sampling method has been modified to use sample units of various shape (quadrats, strips, plots, etc.) and size, depending on circumstance and target species.

Sample Counts: Plotless
► While methods of fixed area counts are common in both plant and animal sampling, they suffer from boundary effects, where a decision must be made to determine whether or not to include an observation on a plot boundary.
► Plant biologists developed several 'plotless' methods to estimate density and abundance that alleviate these problems and are relatively easy to apply, so long as the target remains in place or can be measured before it moves.
► They are ideally suited for sampling many animal cues such as active nests, burrows, or other animal activities that remain stationary, and they have the added advantage of being fast and easy to apply.
► Plotless methods work well when the target species is random or uniformly distributed, but many have problems when the target species is clumped or severely clumped in distribution.

Sample Counts: Estimating Area
► Considerable attention was given to the conduct of sample counts during the 1930s–1970s, focusing on methods that would allow an accurate estimate of density to be obtained from counts without preset ½ strip widths.
► The basic solution had several forms, but each attempted to determine either the sample area congruent to the area over which complete counts were obtained, or the effective distance for the survey method (distance at which the number of animals counted beyond that distance is equal to the number of animals missed within that distance).
► In essence, these methods estimate the area (size of the plot) over which the counts occur. The sum of animals counted over the total estimated area provides an estimate of density.
► Hahn (1949)
► King (Leopold 1933, Buckland et al. 2001)
► Hayne (1949)

Incomplete Counts
► The preceding methods for estimating population size either reduced the survey area to assure complete detection, or attempted to correct the survey area to allow for unbounded counts.
► The strategy was to either standardize or estimate the survey parameters necessary to obtain accurate estimates without direct evaluation of detection probability.
► The incomplete count methods use the opposite strategy, where the focus is to either estimate detection probability directly, or to collect ancillary data to indirectly model detection probability.
► Double sampling (Jolly 1969, Jolly 1969, Eberhardt and Simmons 1987, Pollock and Kendall 1987, Estes and Jameson 1988, Prenzlow and Lovvorn 1996, Anthony et al. 1999, Bart and Earnst 2002, and Laake et al. 2011)
► Double observer (Caughley 1974, Magnusson et al. 1978, Cook and Jacobson 1979, Grier et al. 1981, Caughley and Grice 1982, Pollock and Kendall 1987, Graham and Bell 1989, Nichols et al. 2000, and Laake et al. 2011)
► Marked sample (Chapman 1951, Packard et al. 1985, Samuel 1987, White 1996, Bartmann et al. 1987, White and Garrott 1990, Neal et al. 1993)
► Time of detection (Seber 1982, Ralph et al. 1995, Farnsworth et al. 2002, and Alldredge et al. 2007)
► Modern distance sampling (Burnham and Anderson 1984, Buckland et al. 2001)

Exploitative (Removal) Methods
► Removal methods of population estimation are old and have been analyzed by numerous investigators. These methods are attractive because often someone other than the investigator, such as hunters, can collect the removal data.
► Catch-per-unit-effort involves developing a linear regression of the number of animals removed each day on the cumulative total number of animals removed prior to that day (Leslie and Davis 1939). Animals can be removed without exploitation (catch & release, photographed, etc.).
► Change-in-ratio can be used on any 2 classes of animals as long as harvest varies between the classes (Kelker 1940).
► These methods assume a closed population, and that all removals are known. Estimates are imprecise and work well only when a large portion of the population is removed (exploitatively or non-exploitatively).
► Catch-per-unit-effort (e.g., catch/day) is based on the premise that as more animals are removed from a population, fewer are available to be “caught”, and catch per day will decline. The regression equation is not a typical regression because the catch/day and the cumulative removals depend on the same removals. This lack of independence makes calculation of variances and 95% CI difficult.

Marked–Resight
► We prefer the term marked–resight because animals do not have to be captured to be marked, nor do they need to be recaptured to determine if they are marked.
► There is only 1 assumption to marked–resight methods, and that is that the proportion of marked to non-marked animals in a sample is the same as it is in the population. All other purported assumptions are just violations of this assumption.
► However, the percentage of marked animals in the population will affect the accuracy of the estimates, and we recommend at least 25% of the population be marked.
► Estimators: Known Number Alive, Lincoln-Petersen Estimator, Schnabel Estimator, Schumacher-Eschmeyer Estimator, Jolly-Seber Estimator.

Computer Software
► Program Capture
► Program MARK
► Program DISTANCE
► Program R

SUMMARY
Before conducting a survey to estimate population abundance, understand what information is needed, for what purpose the information will be used, how precise an estimate is needed, and the time and cost needed to conduct the survey. The key to deriving population abundance estimates is to select a method that fits a particular situation. If necessary, techniques can be adapted to meet a particular need. Generally, a biometrician familiar with population estimation literature should be consulted. However, most biometricians consider a method “best” when it has greater precision than another method, but remember that most of these methods have never been tested for accuracy under field conditions. Great precision does not mean accuracy.
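As one concrete illustration of the marked–resight estimators listed above, the sketch below implements the Lincoln-Petersen estimator and Chapman's (1951) bias-corrected form. It is a minimal teaching example, not code from this chapter or from any of the software packages named above, and the input numbers are invented.

```python
# Minimal sketch of two marked-resight estimators (not from the chapter).
# M = animals marked, C = animals observed in the resight sample,
# R = marked animals seen in that sample.
def lincoln_petersen(M, C, R):
    if R == 0:
        raise ValueError("No marked animals resighted; the estimator is undefined.")
    return M * C / R

def chapman(M, C, R):
    # Chapman's (1951) bias-corrected form of the Lincoln-Petersen estimator.
    return (M + 1) * (C + 1) / (R + 1) - 1

# Invented example: 50 animals marked, 60 observed on a later survey, 15 of them marked.
print(lincoln_petersen(50, 60, 15))  # 200.0
print(chapman(50, 60, 15))           # ~193.4
```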
Rulers Of Britain
BOUDICCA, QUEEN OF THE ICENI
Britain has a long and fascinating history. While little is known about ancient Britons before the invasion of Julius Caesar in 55 BC, the ancient Britons continue to inspire our imagination through archaeological finds and historical sites such as Stonehenge in Wiltshire. At the time of the Roman invasion, the Britons lived in tribes, some of the most famous tribes being the Iceni; Silures; Ordovices; Brigantes; Trinovantes and Catuvellauni. These tribes all had their own leaders and the tribes did not always live in peace with each other. This made it easier for the Romans to invade. One of the most famous ancient Britons is Queen Boudicca, leader of the Iceni tribe following the death of her husband Prasutagus. In 61 AD she led a rebellion against the Romans and successfully destroyed many Roman settlements, including Londinium (modern London), which was burnt to the ground. Eventually Boudicca's army was defeated and she was killed (legend says by her own hand), but her rebellion was a mighty blow that shook the Roman Empire. The Romans continued to occupy Britain until around 400 AD.
The end of the Roman Occupation began what is known as The Dark Ages. During this time, the Britons were invaded by Anglo-Saxons, Vikings and then Normans. It was during The Dark Ages that the legendary Briton, King Arthur, is believed to have lived. As a result of all these invasions, the Britons moved westward, settling in Wales and the west of what became England.
BATTLE OF HASTINGS 1066
Following the Norman Conquest in 1066, the English kings tried to conquer Scotland and Wales. The Scots managed to maintain their independence, but the last Prince of Wales (Llewellyn the Last) was defeated and killed by Edward I's invading forces in 1282. Edward I made his son and heir (the future King Edward II) Prince of Wales, a title still held by the heir to the throne today. For his campaigns against the Scots, Edward I earned the title Hammer of the Scots.
For many years the Welsh also resisted English rule, which is why Edward I needed to build so many castle fortresses in Wales, but in the reign of Henry VIII Wales was officially united with England. The deposition of King Richard III by the Welsh Henry Tudor at the Battle of Bosworth (1485) was seen not only as a triumph of the Welsh, but of the Britons. It was believed that the Welsh were the last of the true Britons, so Henry Tudor's victory at Bosworth was seen as a final victory of the Britons after generations of invasion and oppression. The Tudors, including Elizabeth I, made much of their Welsh heritage.
When Queen Elizabeth I died without an heir in 1603, the crown passed to Henry VII's great-great-grandson, King James VI of Scotland, son of Mary, Queen of Scots. Now England, Scotland and Wales all had the same King. Even though Scotland was not formally united with England until 1707, Great Britain had been born.
There is archaeological evidence of human settlement and land use in the area which later became Oxfordshire beginning in the early Neolithic (4,000-3,500 BC). The previous periods, the Palaeolithic and Mesolithic, are known only from scattered finds, mostly stone tools. The Neolithic record suggests clearance of woodland, planting of crops and tending of animals, as well as ritual practice. The evidence is largely invisible to the eye, uncovered by excavation, often ahead of development, and particularly in the better-preserved conditions of the Thames floodplain. Types of site characteristic of this period are:
· causewayed enclosures (that at Abingdon yielding some of the earliest evidence, from c.3,500-3,600 BC)
· cursuses, large-scale earthworks, with parallel banks and ditches
· henges, circular ditch and bank enclosing pits or timber structures, with three known in Oxfordshire
· stone circles, including the Rollright Stones of c.2,000BC
· long barrows, chambered burial mounds, particularly in the Cotswolds and Berkshire Downs
· house sites. Evidence is rare but discoveries at Yarnton are thought to date from the early Neolithic
From the Bronze Age we find continuing evidence of settlement and farming, and also of burial and trade. Characteristic are:
· small cemeteries surrounded by a ring ditch and sometimes covered by a round barrow
· major, long-distance routeways through Oxfordshire, notably the Ridgeway and Icknield Way
· The White Horse hill figure at Uffington, now dated to c.1,000BC
A list of major Neolithic and Bronze Age sites in Oxfordshire can be viewed here and the full Solent Thames Research Framework - Neolithic and Early Bronze Age Oxfordshire can be viewed at STRF Neolithic and early bronze age.
Iron Age Oxfordshire
Iron Age Oxfordshire saw significant changes. In the early Iron Age (c.800-300 BC) settlements became larger, and wooden, thatched roundhouses between 8 and 15 metres in diameter became the standard domestic structure. Wheat and barley were grown, and cattle and sheep kept. The development of settlements in the valleys of the Thames and its tributaries is best known, but numerous cropmark sites in northern Oxfordshire confirm settlement there also. A rare Middle Iron Age cemetery has been excavated at Yarnton. Hillforts are also characteristic of Iron Age Oxfordshire, with only one, Swalcliffe, showing signs of settlement. The Late Iron Age saw major changes. Roundhouses and storage pits are no longer used, pottery types change, and coinage (with the first written words) appears. Flooding forced the abandonment of many settlements on the floodplain and lower terraces. Large ditches and ramparts around settlements, called oppida, appear at this time. So did large dyke systems, including Aves Ditch and the north Oxfordshire Grim's Ditch, which covers some 8,800 hectares just east of the river Evenlode and is of unknown significance.
Blyde National Park To Be Declared In September
Update on the Blyde National Park.
Did You Know? The Blyde River Canyon is the third largest canyon in the world, after the Grand Canyon in America and the Fish River Canyon in Namibia.
- It is up to 1,000 metres deep and five kilometres wide.
- About 500,000 people visit the Blyde River Canyon area every year.
- The canyon forms part of the Wolkberg centre of endemism, and many of the plants growing in the canyon grow nowhere else on earth. The area is thought to contain the richest combination of plants and animals in southern Africa.
- Some of the plants on the escarpment are related to the fynbos in the Cape.
- The sub-tropical mist belt forests that occur on the mountain are some of the best specimens in the whole of South Africa.
- Mariepskop alone hosts more than 1,400 plant species, and the greater area has some 2,000 species - more than the entire Kruger National Park. Of these, 163 species are threatened with extinction and listed in the Red Data plant book.
- The planned national park will have representatives of 75 percent of all bird species, 80 percent of all raptor species and 72 percent of all the mammals found in South Africa.
- There are more than 100 different mammal species in the park, including more than 20 types of carnivores, such as leopard, caracal and serval.
- There are at least 335 bird species in the park
- 94 reptile species and 34 amphibian species occur there, including some of the rarest frogs in the country.
- Many of the 33 fish species that occur in the river are on the verge of extinction
- More than 200 species of butterfly and almost 70 species of spider have been recorded in the area.
- Alien plants pose one of the biggest threats to the area's biodiversity.
- The Sand River, which rises in the national park, supplies water to no less than 420,000 people, and is a tributary of the Sabie River that runs through Kruger. |
What is a neural network?
Notice: This is strictly the text from the attachment. There are pictures that will help to illustrate the ideas in this document.
At its core, a neural network is designed to be a computer simulation of the human brain. It is represented by a labeled, directed graph structure, where nodes perform some simple computations. Each node represents a neuron, and each directed edge represents a weighted connection between two nodes.
A very simple example of a neural network is the following AND-gate.
In this example, nodes x and y are the inputs to node z. w_1 and w_2 are the weights on the connections. In order for this to operate as an AND-gate, we need to set w_1 and w_2 to 1. This means that the input from x will be multiplied by the weight w_1, and the input from y will be multiplied by the weight w_2. We also need to define the operation in node z. If we say that z calculates the product of its inputs, that is, z calculates (x*w_1)(y*w_2), we will have an AND-gate. Why is this true? We can examine the truth ...
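To make the description concrete, here is a minimal sketch of that AND-gate node in Python. The function name is my own, but the computation — multiplying each input by its weight and then multiplying the weighted inputs together — follows the description above.

```python
# Minimal sketch of the AND-gate node described above: with w1 = w2 = 1,
# node z outputs the product of its weighted inputs, which is 1 only
# when both binary inputs are 1.
def and_gate(x, y, w1=1, w2=1):
    return (x * w1) * (y * w2)

for x in (0, 1):
    for y in (0, 1):
        print(f"x={x}, y={y} -> z={and_gate(x, y)}")
# Only the (1, 1) row yields 1 -- the truth table of AND.
```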
This job explores neural networks. |
Chemistry Homework
Read pp. 132-141
Problems p. 141 #1,2,3,5,10,12,13,14
What is ionization energy?
A: (p. 133) Ionization energy is the energy required to remove an electron from an atom or ion.
Note: Figure 16 on p. 133 shows the ionization of lithium. The picture below shows the ionization of sodium. Both reactions require some amount of ionization energy.
2. Why is measuring the size of an atom difficult?
A: (p. 135) An atom’s size depends on the volume (amount of space) the electrons take up around the nucleus. The electron cloud has no clear edge. Also, the size of the electron cloud can change based on the chemical and physical environment.
Notes: Figure 19 on p. 135 shows how atomic radius is measured.
3. What can you tell about an atom that has high electronegativity?
A: (p. 137) Atoms with high electronegativity pull electrons towards them more than atoms with low electronegativity.
Notes: Figure 22 and 23 show the trends for electronegativity. In the picture below, the Oxygen (O) is more electronegative (pulls harder on electrons) than Hydrogen (H).
5. What periodic trends exist for ionization energy?
A: (p. 133-134) Ionization energy decreases as you move down a group. Ionization energy increases as you move across a period from left to right.
Notes: Figure 17 and 18 show the ionization energy trends. Compare the chart below to Figure 17. Do they show the same trend?
10. What periodic trends exist for electronegativity?
A: (p. 137-138) Electronegativity decreases as you move down a group. Electronegativity increases as you move across a period from left to right.
Notes: Figures 22 and 23 show the electronegativity trends. Compare the chart below to Figure 22. Do they show the same trend? Compare to the ionization energy trend.
12. Explain why the noble gases have high ionization energies.
A: (p. 134) The noble gases have high ionization energies because their outer electron shell (energy level) is full. This is a stable state and it takes a lot of energy to make an atom unstable. Also, in comparison to other atoms in the same period, noble gases have the maximum number of protons pulling on electrons that are a similar distance away from the nucleus.
Notes: The second paragraph on p. 134 explains this idea.
13. What do you think happens to the size of an atom when the atom loses an electron? Explain.
A: (p. 136, 139) The atom's radius decreases. The size of the atom gets smaller. When an electron is taken away the atom becomes a positively charged ion. There are more protons than electrons. This uneven positive charge pulls the remaining negatively charged electrons closer to the nucleus. Also, if the electron that was taken was the only one in the outer energy level, there is a bigger decrease in size because there are fewer energy levels.
Notes: To figure this out, combine the discussion about decreasing atomic radius going across a period with the discussion about ionic radius. Figure 16 also helps!
14. With the exception of the noble gases, why is an element with a high ionization energy likely to have a high electron affinity?
A: (p. 133, 134, 139) Atoms with high ionization energies are very attracted to their electrons. They want to keep them. It takes a lot of energy to pull them away. Similarly, atoms with high electron affinity are attracted to electrons. They also want electrons. This is not true for noble gases because noble gases just want to stay the way they are.
Notes: To figure this out, combine the reasoning behind ionization energy and electron affinity. |
In a universe where processes are often measured in millions and billions of years, the Hubble Space Telescope witnessed something extraordinary over the course of just two decades. The Stingray nebula went from bright in 1996 to faded in 2016, like it had been left hanging on a cosmic drying line.
Stingray, more formally known as Hen 3-1357, was hailed as the youngest known planetary nebula when it was first noticed. The nebula formed during the star’s end of life when it ejected glowing gases that gave it a marine-animal-like shape.
What’s so wild about the nebula is the radical makeover it has gone through in such a short amount of time. “Changes like this have never been captured at this clarity before,” NASA said in a statement on Thursday, calling it “a rare look at a rapidly fading shroud of gas around an aging star.”
NASA and the European Space Agency (ESA) jointly operate Hubble. Astronomers are taking notice of what both agencies described as “unprecedented” changes. The nebula had been emitting lots of nitrogen (red), hydrogen (blue) and oxygen (green), which gave it its distinct shape and glow in the original image.
“This is very, very dramatic, and very weird,” said Hubble team member Martín Guerrero of the Instituto de Astrofísica de Andalucía in Spain. “What we’re witnessing is a nebula’s evolution in real time.”
The culprit is likely the central star inside the nebula, which experienced a rapid rise in heat followed by a cooling phase. It seems Hubble got lucky with taking the images when it captured a before-and-after view of the nebula’s wild swing. At this rate, NASA estimates it may be barely detectable within just a few decades.
NASA’s Hubble Twitter account hopped on the “How it started/How it’s going” meme with the before and after images of the nebula.
Despite being in orbit for an impressive 30 years, Hubble continues to feed us incredible cosmic discoveries.
Central tendency is a term in descriptive statistics which gives an indication of the typical score in a data set. The three most common measures used for this are the mean, the median (not to be confused with Medium) and the mode. However, there are other measures of central tendency.
Average is an often-used term that may refer to any measure of central tendency, though in casual conversation it is generally assumed to refer to the mean.
The arithmetic mean is easily calculated by summing up all scores in the sample and then dividing by the number of scores in that sample. The mean for the sample 5, 6, 9, 2 would therefore be calculated as follows:
(5 + 6 + 9 + 2)/4 = 22/4 = 5.5
The mathematical formula can be expressed as:
mean = (x1 + x2 + ... + xn)/n = (Σxi)/n
where n is the total number of samples.
The above is the arithmetic mean; there are also other means, such as the geometric mean and the harmonic mean, but usually the arithmetic mean is meant if the type of mean is not specified.
Geometric mean: All the values are multiplied and then the nth root is taken (where n is the total number of scores). Useful in some geometric (heh) contexts; for example, the area of an ellipse is equal to that of a circle whose radius is the geometric mean of the ellipse's semi-major and semi-minor axes. Has the neat property of being equal to e raised to the arithmetic mean of the natural logarithms of the values being averaged. (The same principle holds for other bases, so for example it could be defined as ten raised to the arithmetic mean of the common logarithms.) The mathematical formula can be expressed as:
geometric mean = (x1 × x2 × ... × xn)^(1/n)
Harmonic mean: The mean obtained by taking the reciprocal of the arithmetic mean of the reciprocals of a set of (nonzero) numbers. One of the most memorable uses of the harmonic mean is in physics, where the equivalent resistance of a set of resistors in parallel is the harmonic mean of their resistances divided by the number of resistors. The same principle applies to capacitors placed in series. The mathematical formula can be expressed as:
harmonic mean = n/(1/x1 + 1/x2 + ... + 1/xn)
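For reference, Python's standard library computes all three means directly (statistics.geometric_mean needs Python 3.8 or later). The short sketch below applies them to the sample used earlier.

```python
# Arithmetic, geometric and harmonic means of the sample used above.
# Requires Python 3.8+ for statistics.geometric_mean.
import statistics

data = [5, 6, 9, 2]
print(statistics.mean(data))            # 5.5
print(statistics.geometric_mean(data))  # (5*6*9*2) ** 0.25 ≈ 4.82
print(statistics.harmonic_mean(data))   # 4 / (1/5 + 1/6 + 1/9 + 1/2) ≈ 4.09
```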
- Weighted Mean: If the values have different "weights" (not in the physical sense), the most appropriate way to calculate an average is to calculate a weighted mean. This is done by multiplying every value ai by its corresponding weight wi (with i being the index or rank of the data set), summing up all the products (i.e. Σaiwi = a1w1 + a2w2 + a3w3 + ... + aiwi) and then dividing by the sum of the weights (i.e. Σwi = w1 + w2 + w3 + ... + wi). Ergo, the final formula:
Weighted mean = (Σaiwi)/(Σwi)
A short code sketch of this calculation appears after this list.
For example, a school test has 4 subjects with different weights (in parentheses), going from 0 (minimum) to 10 (maximum): Mathematics (3), physics (2), chemistry (1) and biology (1). Student A got the following grades, respectively:
6, 4, 8, 9. The weighted mean therefore is:
Weighted mean = (6*3 + 4*2 + 8*1 + 9*1)/(3 + 2 + 1 + 1) = (18 + 8 + 8 + 9)/(7) = 43/7 ≈ 6.14
It can also be used to combine measurements from classes with different sizes, where the weight corresponds to the relative size of each class.
- Truncated Mean (or trimmed mean): this is similar to the arithmetic mean, but outliers are discarded by dropping some part of the probability distribution. If you discard the lowest 25% and highest 25%, this is known as the interquartile mean. It is used in the ISU Judging System for figure-skating, where the highest and lowest scores are dropped and the rest are averaged.
- Mid-range: the arithmetic mean of the highest and lowest value, this is very vulnerable to outliers and not much use in most circumstances.
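Here is the promised sketch of the weighted mean from the school-grades example. The helper function is hypothetical, but the arithmetic mirrors the calculation above.

```python
# Weighted mean of the grades 6, 4, 8, 9 with subject weights 3, 2, 1, 1.
def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

print(weighted_mean([6, 4, 8, 9], [3, 2, 1, 1]))  # 43/7 ≈ 6.14
```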
The median is defined as the value that lies in the middle of the sample when that data set is ordered (by rank).
It is calculated by ranking all the scores in order and taking the one in the middle. When there is an even number, conventionally, the mean of the two in the middle is taken.
For example, take the already ordered data set [2, 12, 12, 19, 19, 20, 20, 20, 25]. It has 9 data points (n = 9; n is the number of "observations" in that data set), and the value in the middle is the fifth rank, which is 19.
As another example, the data set [2, 12, 12, 19, 19, 20, 20, 20, 25, 25] has an even number of observations (n = 10), so the median lies between 19 (fifth rank) and 20 (sixth rank), the two scores in the middle. Therefore, it is the mean of those two, which is 19.5.
The mode is simply the most frequently occurring value in the data set.
For example, the mode of the sample
[3, 3, 2, 1, 4, 3] would be 3 (since it appears thrice). If there is a second number that occurs just as frequently within the sample, then it can be described as having two modes (bimodal). However, some sources will identify such a sample as having no mode.
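The same standard-library module handles the median and mode examples above (statistics.multimode, which lists every most-frequent value, needs Python 3.8 or later).

```python
import statistics

print(statistics.median([2, 12, 12, 19, 19, 20, 20, 20, 25]))      # 19
print(statistics.median([2, 12, 12, 19, 19, 20, 20, 20, 25, 25]))  # 19.5
print(statistics.mode([3, 3, 2, 1, 4, 3]))                          # 3
print(statistics.multimode([3, 3, 2, 1, 4, 3]))                     # [3]
```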
The three methods can be useful in different ways, and how they relate can give information about your statistical sample. The mean is the most intuitive measure of the concept of "average", while the median is most useful for breaking samples into groups (e.g., quartiles). For evenly distributed samples, or symmetrically distributed samples, the median and mean (and usually the mode) should match. The difference between them is an indicator of how skewed the data is. For instance, in economics, income is not evenly distributed (not by a long shot), nor even symmetrically distributed, and the mean value is easily shifted by those high earners at the top — for this reason the median is most often used.
- C. P. Dancey & J. Reidy (2002) Statistics without maths for psychology, 37
- About.com - Median, mean, and mode
- See the Wikipedia article on Truncated mean.
- See the Wikipedia article on Mid-range.
- See the Wikipedia article on median household income.
- See the Wikipedia article on mean household income. |
Banksia kingii
Fossil range: Late Pleistocene
Genus: Banksia
Subgenus: Banksia
Section: Banksia
Series: Salicinae
Species: B. kingii
Binomial name: Banksia kingii Jordan & Hill
Banksia kingii is an extinct species of tree or shrub in the plant genus "Banksia". It is known only from fossil leaves and fruiting "cones" found in Late Pleistocene sediment at Melaleuca Inlet in western Tasmania. These were discovered by Deny King in the workings of his tin mine. The leaves and fruiting cones were discovered at different locations, and since the sediment had been removed during mining, the stratigraphy of the fossils is unknown. The sediment from which they were recovered was alluvial, consisting of large, well-rounded fragments of quartz and schist.
The fossil leaves are about 12 centimetres long and one centimetre wide and very thick and robust. They clearly belong to genus "Banksia", section "Banksia", series "Salicinae", but not to any of the extant species in that series. The leaves of "B. plagiocarpa" (Dallachy's Banksia) are similar in form, shape and robustness, but differ strongly in structure. Leaves of "B. saxicola" (Grampians Banksia) are structurally the most similar to "B. kingii", but have a different shape. There also appear to be some affinities with "B. marginata" (Silver Banksia) and "B. canei" (Mountain Banksia), but insufficient to warrant the fossil's ascription to those species. The fossils are therefore considered representative of a new species, "B. kingii".
The fossil fruiting structures are cylindrical, about 6 centimetres high and 4½ centimetres wide. The structure had lost its old flower parts. It appears to be most closely related to "B. saxicola" and "B. canei", with some similarities to "B. marginata". The taxonomic situation therefore appears highly similar for both leaves and fruiting structures, and so the fruiting structures are ascribed to "B. kingii" despite the absence of any direct connection to the fossil leaves.
The species is believed to represent an extinct lineage. It is possible that it is an ancestor of "B. marginata", although "B. marginata" must have speciated well before the extinction of "B. kingii", given how widely it is now distributed. Extinction of "B. kingii" probably occurred in the late Quaternary, and may have been caused by the climatic and physical disruption of glaciation, or by increased fire frequency due to human activity.
A formal description of "B. kingii" was published in 1991 by Gregory J. Jordan and Robert S. Hill, who named the species in honour of its discoverer, Deny King. Hence the species' full name is "Banksia kingii" Jordan & Hill. The holotype and a number of other specimens are stored in the Department of Plant Science at the University of Tasmania.
Jordan, Gregory J. and Robert S. Hill. 1991. "Two New Banksia Species from Pleistocene Sediments in Western Tasmania." Australian Systematic Botany 4(3): 499–511. doi:10.1071/SB9910499. Available at http://www.publish.csiro.au/?act=view_file&file_id=SB9910499.pdf (accessed 2006-08-28).
Doctors carry stethoscopes around their necks for many reasons: to look good, to get onto a ward if they have forgotten their ID badge, to remind themselves that they did at one point achieve a degree in medicine. The main reason, however, is to listen to people's hearts.
Your heart exists so you can get oxygen to parts of your body. It is in fact two pumps. One pumps the blood that's come back from the rest of the body to the lungs so it can load up on oxygen, and then back into the other pump. The second pump then pumps the blood that is now full of oxygen to the rest of your body so that it can use it.
Both the pumps are next to each other and both contract at the same time and work in the same way. They each consist of two chambers, one called an atrium and one called a ventricle, separated by a simple valve that is controlled by pressure - if there is more pressure in the atrium than in the ventricle, the valve will be open; if there is more pressure in the ventricle, then the valve will be closed.
Step One, the Heart Fills up with Blood - Diastole
Blood flows into the atria from the rest of the body and from the lungs, pushes open the valves and fills the ventricles up.
Step Two, the Heart Contracts - Systole
The atria contract first pushing any blood in there into the ventricles, then very soon afterwards the ventricles contract. This increases the pressure in the ventricles so the valves close and the blood is pushed into the body and the lungs via the main arteries.
When the heart has stopped contracting and starts filling up again, there is less pressure in the ventricles than in the main blood vessels leading out from the heart. To stop the blood from flowing back into the ventricles, valves (at the beginning of the main blood vessels leading to the lungs and to the rest of the body) close.
The Heart Has Four Valves
When a doctor is listening to your heart they are listening to the sounds that the valves make when they shut, and they check that the blood is not making any sound flowing around the valves.
The two valves that separate the atria and ventricles are called the Mitral valve in the left atrium and the Tricuspid valve in the right one. The valves that stop blood flowing back into the heart are called the aortic valve and the pulmonary valve.
You Can Feel Your Heart
You can feel your heart beating by feeling your chest in the space beneath your fifth rib, about as far as halfway along your collar bone. In medical jargon, you're feeling a beat in the 5th intercostal space in the mid-clavicular line. This is where the bottom tip of your heart, called the apex, is closest to the skin, and the beat you feel there is known as the apex beat.
You can usually only feel the left side of your heart as this is the side that is largest - it has to be large to pump blood around the whole body. If the right side of the heart is enlarged then it can sometimes be felt just to the left of the breastbone.
If you can feel the heart beating abnormally strongly through the chest, it is called a 'heave'.
Sounds Your Heart Makes
Two things can be ascertained from listening to the heart: firstly, whether the valves are opening and closing at the correct times, and secondly, whether they are opening and closing properly. You can't tell if the heart muscle is beating correctly or if the heart is pumping enough blood out.
The First Heart Sound
This is made by the Tricuspid and Mitral valves closing, at the start of systole. As the entire heart usually contracts at once, both heart valves usually close at once, making a single sound.
Sometimes the first heart sound is louder than usual. This could be because the person is thinner than normal or because the heart is working harder than normal. It is also loud if one of the valves is stiffened so it isn't closing properly when contraction starts and is forced together by the contraction; this is known as Mitral stenosis.
Sometimes the first heart sound is softer than usual, which could be because one of the valves has become very stiffened and does not close at all. Alternately, it could be because there is fat between the heart and the stethoscope, or because there is fluid around the heart.
The Second Heart Sound
This is the sound of the aortic and pulmonary valves closing. When you take a deep breath in, the pressure inside your chest decreases. This means that air is drawn into your lungs; it also means that venous blood (returning from the rest of the body) is drawn into the right ventricle faster than it normally would be. As there is more blood to get rid of in the right ventricle than the left, it takes a little longer to remove it. This means that the aortic valve will close before the pulmonary valve, so the second heart sound is heard as two sounds; this is known as a split sound. Usually the sounds are so close together that the two cannot be distinguished as separate sounds, though a split second heart sound is normal in children and young adults.
If the heart sound is split while breathing both in and out, this may be due to a defect in the wall between the two atria, commonly known as a hole in the heart. If it is split during breathing out but not during breathing in, then this is because the aortic valve is closing late due to the left side of the heart contracting later. This could be because the nerves supplying that part of the heart are damaged (known as left bundle branch block). When breathing in, the pulmonary valve is also closing later so it is not as noticeable.
If the second heart sound is split all the time, but the split sounds are slightly further apart during breathing in, then it could be due to right bundle branch block. This means that the nerves supplying the right side of the heart are damaged, so the pulmonary valve closes later than usual.
The second heart sound can be louder than normal if your blood pressure is high, because the left ventricle is working harder as it has a higher blood pressure to pump the blood against. If the aortic valve is hardened then it is softer because the valve isn't closing properly. It is also soft if the heart is failing, because a failing heart doesn't work as hard as a healthy one.
Abnormal Heart Sounds
The Third Heart Sound
This is the sound of blood rushing into the heart as soon as the Mitral and Tricuspid valves are open. It's usually heard just after the second heart sound. Some people say that it sounds like the 'tuc' in Kentucky. This is normal in children, and young or pregnant people. However in those over 40 it should be regarded as abnormal. It can be a sign of the heart being under stress, because too much fluid has gone into it, or of heart failure. It can be due to overload on the right side of the heart or the left side or both.
The Fourth Heart Sound
This is the sound of the atrium pumping blood into the ventricle just before it contracts; it is usually heard just before the first heart sound. Some people say that it sounds like the 'esse' in Tennessee. The presence of a fourth heart sound indicates that it is hard work for the atrium to pump blood into the ventricle, which means that the ventricle is abnormally stiff.
An Opening Snap
If the Mitral or Tricuspid valve is stiff then it can make a noise as it opens; this is known as an 'opening snap'.
Heart sounds are the sound of the heart itself beating; heart murmurs are the sound of the blood flowing through the heart. They sound a lot like a 'whoosh'. Sometimes they're normal, and sometimes they're a sign of heart disease. Normal murmurs are usually at the start of systole, which is when the blood is pushed hardest, and usually they're quite soft.
Murmurs are described according to when and where they are heard, and how loud they are. They can be systolic or diastolic and are graded on a scale of 1 - 5 with 5 being the loudest. Very loud murmurs can be felt through the chest wall and are known as 'thrills'.
Ejection Systolic Murmurs are Often Caused by Aortic Stenosis
An 'ejection systolic murmur' is heard in the middle of systole, distinct from the initial first heart sound. This is because it's a problem with the aortic and pulmonary valves, not with the Mitral and Tricuspid valves (which cause the first heart sound). It's caused by stenosis, ie, one of the valves is not opening properly.
Aortic stenosis causes a murmur that can also be heard over the clavicles and the neck. This is because the aorta divides into blood vessels that run up the neck and under the clavicle. The murmur of aortic stenosis is heard best when the patient is sitting up and leaning forward, because that brings the heart close to the chest wall. To distinguish it from a split second heart sound you need to ask the patient to take a deep breath and hold it.
Pan Systolic Murmurs are Often Caused by Mitral Regurgitation
These are heard as part of the first heart sound. This is because they are due to problems with the Mitral and Tricuspid valves, which cause the first heart sound. Blood flows into the ventricle and then, because the valve doesn't close properly, leaks back through it during contraction, causing a murmur.
In the UK, Mitral valve disease is very common because it is caused by high blood pressure: the Mitral valve is under the most stress of any valve in the heart, as it has to deal with the greatest pressure differences. The valve can also be damaged by a heart attack that has killed the papillary muscles, which normally stop it from swinging back too far into the atrium.
Murmurs Early in Diastole Can be Caused by Aortic Regurgitation
In aortic regurgitation, the blood goes out of the left ventricle, the aortic valve closes, causing the second heart sound, and then some blood leaks back through, causing a murmur. Often aortic stenosis and regurgitation occur on the same valve - this is known as mixed aortic valve disease.
A Murmur in the Middle of Diastole is Caused by Mitral Stenosis
During diastole the blood flows into the heart, through the open Mitral valve. If the Mitral valve is stiff, then blood makes a noise as it flows past it.
If there is an artificial heart valve, it makes a distinctive metal sound. Often this is so loud it can be heard without a stethoscope and is sometimes mistaken for the ticking of a clock.
If a doctor suspects that a valve is working abnormally then the next step will be to carry out an echocardiogram: an ultrasound scan of the heart similar to the one that pregnant women have. This can show the exact movement of the valve, and it can also show the amount of blood that the heart pumps out in each beat.
A relational database management system (RDBMS) is a collection of programs and capabilities that enable IT teams and others to create, update, administer and otherwise interact with a relational database. RDBMSes store data in the form of tables, with most commercial relational database management systems using Structured Query Language (SQL) to access the database. However, since SQL was invented after the initial development of the relational model, it is not necessary for RDBMS use.
The RDBMS is the most popular database system among organizations across the world. It provides a dependable method of storing and retrieving large amounts of data while offering a combination of system performance and ease of implementation.
RDBMS vs. DBMS
In general, databases store sets of data that can be queried for use in other applications. A database management system supports the development, administration and use of database platforms.
An RDBMS is a type of database management system (DBMS) that stores data in a row-based table structure which connects related data elements. An RDBMS includes functions that maintain the security, accuracy, integrity and consistency of the data. This is different than the file storage used in a DBMS.
Other differences between database management systems and relational database management systems include:
- Number of allowed users. While a DBMS can only accept one user at a time, an RDBMS can operate with multiple users.
- Hardware and software requirements. A DBMS needs less software and hardware than an RDBMS.
- Amount of data. RDBMSes can handle any amount of data, from small to large, while a DBMS can only manage small amounts.
- Database structure. In a DBMS, data is kept in a hierarchical form, whereas an RDBMS utilizes a table where the headers are used as column names and the rows contain the corresponding values.
- ACID implementation. DBMSes do not use the atomicity, consistency, isolation and durability (ACID) model for storing data. On the other hand, RDBMSes base the structure of their data on the ACID model to ensure consistency.
- Distributed databases. While an RDBMS offers complete support for distributed databases, a DBMS will not provide support.
- Types of programs managed. While an RDBMS helps manage the relationships between its incorporated tables of data, a DBMS focuses on maintaining databases that are present within the computer network and system hard disks.
- Support of database normalization. An RDBMS can be normalized, but a DBMS cannot.
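As a loose illustration of what normalization means in practice -- the tables and columns below are invented for the example, not taken from any particular product -- a single wide table that repeats customer details on every order can be split so that each fact is stored only once:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database for the sketch

# Denormalized: the customer's name and city are repeated on every order row.
conn.execute("""
    CREATE TABLE orders_flat (
        order_id      INTEGER PRIMARY KEY,
        customer_name TEXT,
        customer_city TEXT,
        item          TEXT
    )
""")

# Normalized: customer details live in one table; orders reference them by key.
conn.execute("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT,
        city        TEXT
    )
""")
conn.execute("""
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id),
        item        TEXT
    )
""")

# A join reassembles the original wide view without storing anything twice.
rows = conn.execute("""
    SELECT o.order_id, c.name, c.city, o.item
    FROM orders o JOIN customers c ON c.customer_id = o.customer_id
""").fetchall()
```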
Features of relational database management systems
The capabilities an RDBMS layers on top of the basic relational database are so intrinsic to operations that it is hard to dissociate the two in practice.
The most basic RDBMS functions are related to create, read, update and delete operations -- collectively known as CRUD. They form the foundation of a well-organized system that promotes consistent treatment of data.
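As a rough sketch of how CRUD maps onto SQL -- shown here through Python's built-in sqlite3 module, with a made-up table used purely for illustration rather than any real schema -- the four operations look like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory database, purely for illustration
cur = conn.cursor()
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# Create: insert a new row.
cur.execute("INSERT INTO employees (name, dept) VALUES (?, ?)", ("Ada", "Engineering"))

# Read: query rows back.
cur.execute("SELECT id, name, dept FROM employees WHERE dept = ?", ("Engineering",))
print(cur.fetchall())

# Update: change an existing row.
cur.execute("UPDATE employees SET dept = ? WHERE name = ?", ("Research", "Ada"))

# Delete: remove the row.
cur.execute("DELETE FROM employees WHERE name = ?", ("Ada",))

conn.commit()
conn.close()
```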
The RDBMS typically provides data dictionaries and metadata collections that are useful in data handling. These programmatically support well-defined data structures and relationships. Data storage management is a common capability of the RDBMS, and this has come to be defined by data objects that range from binary large object -- or blob -- strings to stored procedures. Data objects like this extend the scope of basic relational database operations and can be handled in a variety of ways in different RDBMSes.
The most common means of data access for the RDBMS is SQL. Its main language components comprise data manipulation language and data definition language statements. Extensions are available for development efforts that pair SQL use with common programming languages, such as the Common Business-Oriented Language (COBOL), Java and .NET.
RDBMSes use complex algorithms that support multiple concurrent user access to the database while maintaining data integrity. Security management, which enforces policy-based access, is yet another overlay service that the RDBMS provides for the basic database as it is used in enterprise settings.
RDBMSes support the work of database administrators (DBAs) who must manage and monitor database activity. Utilities help automate data loading and database backup. RDBMSes manage log files that track system performance based on selected operational parameters. This enables measurement of database usage, capacity and performance, particularly query performance. RDBMSes provide graphical interfaces that help DBAs visualize database activity.
While not limited solely to the RDBMS, ACID compliance is an attribute of relational technology that has proved important in enterprise computing. These capabilities have particularly suited RDBMSes for handling business transactions.
As RDBMSes have matured, they have achieved increasingly higher levels of query optimization, and they have become key parts of reporting, analytics and data warehousing applications for businesses as well. RDBMSes are intrinsic to operations of a variety of enterprise applications and are at the center of most master data management systems.
How RDBMS works
As mentioned before, an RDBMS will store data in the form of tables. Each system will have varying numbers of tables, with each table possessing its own primary key. The primary key is used to uniquely identify each row in that table.
Within the table are rows and columns. The rows are known as records or horizontal entities; they contain the information for the individual entry. The columns are known as vertical entities and possess information about the specific field.
When creating these tables, the RDBMS can enforce the following constraints:
- Primary keys -- this identifies each row in the table. One table can only contain one primary key. The key must be unique and without null values.
- Foreign keys -- this is used to link two tables. The foreign key is kept in one table and refers to the primary key associated with another table.
- Not null -- this ensures that every column does not have a null value, such as an empty cell.
- Check -- this confirms that each entry in a column or row satisfies a precise condition; uniqueness of values, by contrast, is enforced with a separate unique constraint.
- Data integrity -- the integrity of the data must be confirmed before the data is created.
Assuring the integrity of data includes several specific tests, including entity, domain, referential and user-defined integrity. Entity integrity confirms that the rows are not duplicated in the table. Domain integrity makes sure that data is entered into the table based on specific conditions, such as file format or range of values. Referential integrity ensures that a row that is referenced by a foreign key in another table cannot be deleted while that reference exists. Finally, user-defined integrity confirms that the table will satisfy all user-defined conditions.
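A minimal sketch of how these constraints might be declared -- the tables and rules here are hypothetical, and exact syntax varies between products -- could look like this with Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when asked to

# Entity integrity: the primary key uniquely identifies each row; NOT NULL and UNIQUE
# guard the department name (domain integrity).
conn.execute("""
    CREATE TABLE departments (
        dept_id INTEGER PRIMARY KEY,
        name    TEXT NOT NULL UNIQUE
    )
""")

# NOT NULL and CHECK enforce domain integrity; FOREIGN KEY enforces referential integrity.
conn.execute("""
    CREATE TABLE employees (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        salary  REAL CHECK (salary > 0),
        dept_id INTEGER NOT NULL REFERENCES departments(dept_id)
    )
""")

conn.execute("INSERT INTO departments (dept_id, name) VALUES (1, 'Engineering')")
conn.execute("INSERT INTO employees (name, salary, dept_id) VALUES ('Ada', 5000.0, 1)")

# Deleting a department that an employee still references is blocked.
try:
    conn.execute("DELETE FROM departments WHERE dept_id = 1")
except sqlite3.IntegrityError as err:
    print("Blocked by referential integrity:", err)
```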
Advantages of relational database management system
The use of an RDBMS can be beneficial to most organizations; the systematic view of raw data helps companies better understand and act on that information while enhancing the decision-making process. The use of tables to store data also improves the security of information stored in the databases. Users are able to customize access and set barriers to limit the content that is made available. This feature makes the RDBMS particularly useful to companies in which the manager decides what data is provided to employees and customers.
Furthermore, RDBMSes make it easy to add new data to the system or alter existing tables while ensuring consistency with the previously available content.
Other advantages of the RDBMS include:
- Flexibility -- updating data is more efficient since the changes only need to be made in one place.
- Maintenance -- database administrators can easily maintain, control and update data in the database. Backups also become easier since automation tools included in the RDBMS automate these tasks.
- Data structure -- the table format used in RDBMSes is easy to understand and provides an organized, structured way to match entries by running queries.
On the other hand, relational database management systems do not come without their disadvantages. For example, in order to implement an RDBMS, special software must be purchased. This introduces an additional cost for execution. Once the software is obtained, the setup process can be tedious, since large volumes of existing content may need to be transferred into the RDBMS tables. This process may require the additional help of a programmer or a team of data entry specialists. Special attention must be paid to the data during entry to ensure sensitive information is not placed into the wrong hands.
Some other drawbacks of the RDBMS include the character limit placed on certain fields in the tables and the inability to fully understand new forms of data -- such as complex numbers, designs and images.
Furthermore, while isolated databases can be created using an RDBMS, the process requires large chunks of information to be separated from each other. Connecting these large amounts of data to form the isolated database can be very complicated.
Uses of RDBMS
Relational database management systems are frequently used in disciplines such as manufacturing, human resources and banking. The system is also useful for airlines that need to store ticket service and passenger documentation information as well as universities maintaining student databases.
Some examples of specific RDBMS products include IBM Db2, Oracle Database, MySQL, Microsoft SQL Server and PostgreSQL.
RDBMS product history
Many vying relational database management systems arose as news spread in the early 1970s of the relational data model. This and related methods were originally theorized by IBM researcher E.F. Codd, who proposed a database schema, or logical organization, that was not directly associated with physical organization, as was common at the time.
Codd's work was based around a concept of data normalization, which saved file space on storage disk drives at a time when such machinery could be prohibitively expensive for businesses.
File systems and database management systems preceded what could be called the RDBMS era. Such systems ran primarily on mainframe computers. While RDBMSes also ran on mainframes -- IBM's DB2 being a notable example -- much of their ascendance in the enterprise was in UNIX midrange computer deployments. The RDBMS was a linchpin in the distributed architecture of client-server computing, which connected pools of stand-alone personal computers to file and database servers.
Numerous RDBMSes arose along with the use of client-server computing. Among the competitors were Oracle, Ingres, Informix, Sybase, Unify, Progress and others. Over time, three RDBMSes came to dominate in commercial implementations. Oracle, IBM's DB2 and Microsoft's SQL Server, which was based on a design originally licensed from Sybase, found considerable favor throughout the client-server computing era, despite repeated challenges by competing technologies.
As the 20th century drew to an end, lower-cost, open source versions of RDBMSes began to find use, particularly in web applications.
Eventually, as distributed computing took greater hold, and as cloud architecture became more prominently employed, RDBMSes met competition in the form of NoSQL systems. Such systems were often specifically designed for massive distribution and high scalability in the cloud, sometimes forgoing SQL-style full consistency for so-called eventual consistency of data. But, even in the most diverse and complex cloud systems, the need for some guaranteed data consistency requires RDBMSes to appear in some way, shape or form. Moreover, versions of RDBMSes have been significantly restructured for cloud parallelization and replication. |
- Explain to your counselor the precautions that must be followed for the safe use and operation of a potter’s tools, equipment, and other materials.
- Do the following:
- Explain the properties and ingredients of a good clay body for the following:
- Making sculpture
- Throwing on the wheel
- Tell how three different kinds of potter's wheels work.
- Make two drawings of pottery forms, each on an 8 1/2 by 11 inch sheet of paper. One must be a historical pottery style. The other must be of your own design.
- Explain the meaning of the following pottery terms: bat, wedging, throwing, leather hard, bone dry, greenware, bisque, terra-cotta, grog, slip, score, earthenware, stoneware, porcelain, pyrometric cone, and glaze.
- Do the following. Each piece is to be painted, glazed, or otherwise decorated by you:
- Make a slab pot, a coil pot, and a pinch pot.
- Make a human or animal figurine or decorative sculpture.
- Throw a functional form on a potter's wheel.
- Help to fire a kiln.
- Explain the scope of the ceramic industry in the United States. Tell some things made other than craft pottery.
- With your parent's permission and your counselor's approval, do ONE of the following:
- Visit the kiln yard at a local college or other craft school. Learn how the different kinds of kilns work, including low-fire electric, gas or propane high-fire, wood or salt/soda, and raku.
- Visit a museum, art exhibit, art gallery, artists' co-op, or artist's studio that features pottery. After your visit, share with your counselor what you have learned.
- Using resources from the library, magazines, the Internet (with your parent's permission), and other outlets, learn about the historical and cultural importance of pottery. Share what you discover with your counselor.
- Find out about career opportunities in pottery. Pick one and find out about the education, training, and experience required for this profession. Discuss this with your counselor, and explain why this profession might interest you.
BSA Advancement ID#:
Scoutbook ID#: 91
Requirements last updated in: 2008
Pamphlet Publication Number: 35934
Pamphlet Stock (SKU) Number: 35934
Pamphlet Revision Date: 2014
Page updated on: May 08, 2022 |
FOODS AND NUTRITION are essential for maintaining good health and in preventing diseases. Although food occupies the first position in the hierarchical needs of man, ignorance of many basic facts relating to foods and nutrition is still widespread. Good nutrition is a function of both economics and education. Economics, because money is required to buy food, and education, because that helps buy the right food!
After the two macronutrients we saw in the previous issues, viz., carbohydrates and proteins, we will now look at some important aspects of the third macronutrient…
Lipids/fats are compounds that are insoluble in water but are soluble in organic solvents such as ether and chloroform. Lipids that are important to our discussion include fats and oils (triglycerides or triacylglycerols), fatty acids, phospholipids, and cholesterol. Fats are an essential part of our body, accounting for a sixth of our body weight. The cells and tissues of our body have fat as an integral part of them. The vital organs (brain, heart, liver) are protected by a sheath of fat and water, which holds them in place and prevents injury. The nerves are also protected by fat. A layer of fat beneath the skin acts as insulation against cold. The fat around the joints acts as a lubricant and allows smooth and easy movements. Thus fat is a crucial part of the body composition.
Coming to food….
Fats are an integral part of our diet, whether we are on a low-fat diet or a normal diet. Food fats include solid fats, liquid oils and related compounds such as fat-soluble vitamins and cholesterol. Fats are enjoyed in the diet for their flavor, palatability, texture, and aroma. Fats also carry the fat-soluble vitamins A, D, E, and K. Sources of fats and oils may be animal, vegetable, or marine, and they may be further processed industrially. Fats appear solid at room temperature, whereas oils are liquid at room temperature.
In the middle of the last century, fats were expensive and a meal containing large amounts of fat was called a 'rich meal'. Persons consuming such meals were thought to be healthy. But with the improvement in the methods of production and their availability, there has been an indiscriminate increase in fat intake in society, leading to overweight and obesity, which has turned out to be a major cause of concern in the urban population. The weight increase in an individual discourages movement and increases pressure on circulation, respiration and the skeletal frame. Hence it is recognized as a risk factor for several chronic ailments. It is easier to control visible fats in the diet than those which are hidden. For example, one can monitor the use of butter, ghee and oil used directly. Invisible fats include the cream in milk and dahi, nuts used in preparation, egg yolk, and the oil used in seasoning vegetables, dal and salads. Even toned milk contains 3 per cent fat. Invisible fat contributes 10 or more per cent of total energy in the diet.
Uses of fat in the diet:
Fats are energy-giving foods; they provide 9 Kcal/g, which is more than twice the 4 Kcal/g provided by carbohydrates.
Fats serve as a vehicle for fat-soluble vitamins like vitamins A, D, E and K and carotenes and thereby promote their absorption.
They are also sources of essential polyunsaturated fatty acids. It is necessary to have adequate and good quality fat in the diet with sufficient polyunsaturated fatty acids in proper proportions for meeting the requirements of essential fatty acids.
The type and quantity of fat in the daily diet influences the level of cholesterol and triglycerides in the blood.
Diets should include adequate amounts of fat, particularly in the case of infants and children, to provide concentrated energy, since their energy needs per kg of body weight are about twice those of adults.
Adults need to be cautioned to restrict their intake of saturated fat (butter, ghee and hydrogenated fats) and cholesterol (red meat, eggs, organ meat). Excess of these substances could lead to obesity, diabetes and cardiovascular disease.
Sources of Fat:
It is the sources of fat that concern most people. Fat sources are broadly classified as
- Plant Sources
- Animal Sources
Nuts and oilseeds are the sources of plant fats. We have groundnut oil, sesame oil, sunflower seed oil, mustard oil and so on. Each of these sources varies in its composition and hence in its properties as well; the taste each gives to the food in which it is used varies accordingly.
Animal sources of fat are butter, ghee, cod liver oil, and the like. Fat from lamb and chicken is also a source of fat for the body.
Recommended Daily intake of fats:
For a normal healthy adult, fat can contribute 20 – 35% of total calorie intake. A normal healthy adult with less physical activity will require about 1800 to 2000 Kcal of energy per day, of which 40 to 65 g of fat is the recommended intake.
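As a rough check of that range (a back-of-the-envelope calculation using the 9 Kcal/g figure above, not dietary advice): 20% of 1800 Kcal is 360 Kcal, and 360 ÷ 9 = 40 g of fat, while 30% of 2000 Kcal is 600 Kcal, or about 67 g, which is roughly the 40 to 65 g range quoted above.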
However, the point is not to generalize at 40 g/day; which fats to take, and which to avoid or take in smaller quantities, should be an important aspect of our diet.
In the coming issues we will see the different types of fats and their effect on our body!
……….. to be continued……
Article writer Dr Manomani Seenivasan
#Foodfacts #Healthyfood #Foodessential #Proteinsrichfood #SourceofFat |
- 1 The Basics of Air Compressors
- 2 What Drives the Air Pressure?
- 3 The Mechanics of Compressed Air
- 4 Other Characteristics of Air Compressors
- 5 New Technologies and Air Compressors
- 6 Keynote Takeaways
Air compressors work in a simple way: air is stored under pressure, and when it's needed, it's released, much like air escaping from a balloon.
The atmospheric air drawn into an air compressor is stored under pressure in the tank as potential energy. Once you need it, that potential energy converts into usable energy.
Read on, and find out all you need to know about how an air compressor works, including its inner functioning and the different types of compressors you might encounter.
The Basics of Air Compressors
If you ask the question ‘how does an air compressor work,’ it probably means that you want to know more about compressed air, the types of compressors, and much more. However, you first need to understand the basics.
When you see an air compressor working, it's functioning through one of two methods of air displacement. To get compressed air, the inner parts of the machine move the air, which becomes pressurized and is stored until it's used.
Most of these machines are positive displacement compressors, which means they work in a very specific way. The air moves to a compression chamber that opens and closes, and the process keeps occurring until the structure gets compressed air. Once that happens, it moves to a storage tank and stays there, waiting to be used.
A piston air compressor, a rotary-screw one, and a scroll-type compressor all use positive displacement to function.
Nonpositive or Dynamic Displacement
The process is slightly different when air compression occurs through nonpositive (dynamic) displacement. In this case, the air compressor uses rotating blades to build the air pressure. Thus, the blades' motion is what drives the process.
Unlike most piston compressors and rotary-screw air compressors, the ones that work through nonpositive displacement are often inside big commercial or industrial machines that need constant pressure and large volume flow rates.
What Drives the Air Pressure?
Air compressors often function with direct-drive or belt-drive systems, which are very different from one another. In each case, the electric motor's power is transmitted differently, which is why understanding their characteristics is so important.
In a belt-drive system, the belt turns when the motor does, which activates the pump. When you look at each specific type of air compressor, you might notice that this is the most common arrangement because it's immensely accessible.
Since the machine can adjust the belts depending on the air demands, belt-drive systems are very frequently what drive most air compressors you might encounter.
Even though belts are very common among different air compressors, in a system that uses a direct drive, the motor attaches directly to the crankshaft. This means very few maintenance requirements, which makes it ideal for small designs.
In some cases, air compressors use a direct-drive system because it’s convenient for small models. At the same time, they are not as versatile as compressors using a belt-drive system, but they don’t use a storage tank before providing the pressurized air, which means they can be much more efficient on occasion.
The Mechanics of Compressed Air
An air compressor works due to very specific mechanics, and you must understand them if you want to know all the details about its functionality.
The first factor you must keep in mind is that there are different types of air compressors. Consequently, each of them works differently. Even so, all of them need an electric motor, a pump that’s in charge of the compressed air, an inlet/outlet feature to let the air in and out, and a tank (although not in all cases).
When air compressors begin to work, they draw air in, and their internal components reduce its volume, which drives the pressure up as the air enters the tank. Once the tank reaches its set pressure, the duty cycle comes to an end and the compressor shuts off; as the stored air is used and the pressure drops again, the cycle restarts. A positive displacement air compressor completes this process using three different mechanisms: scrolls, screws, and pistons.
Piston Air Compressors
A reciprocating air compressor (also known as a piston compressor) compresses air by working very similarly to a car engine. The crankshaft's connecting rod drives the piston up and down inside the cylinder, drawing air into the chamber and then increasing its pressure.
After that, the compressed air moves into the tank, and the piston then lets more air into the chamber to repeat the cycle. The full compression cycle occurs in one or two stages, as follows:
In single-stage functioning, the compressor compresses the air in just one stroke. Thus, the crankshaft rotates completely and drives the piston through a full motion.
Although most piston compressors can complete single-stage functioning with just one piston, some newer models have multiple ones to divide the job and work at lower RPMs and decibels.
In two-stage functioning, the machine uses one piston to compress the air in the first stroke. Then, it moves the air to a second piston, which doubles the air pressure before it is stored in the tank.
Many people know about piston air compressors because they are remarkably loud. Since the internal components rub together and create friction, when they’re working, you might hear some noise. Nonetheless, due to advances in technology, now even a single-stage air compressor can compress air using up to four pistons.
Every time an air compressor uses several pistons, they divide the work. Consequently, they can be quieter, but they also have a longer lifespan, which is an added benefit.
Sometimes, compressors don’t use pistons. Instead, they squeeze the air between two screws that never touch. Those are called rotary-screw compressors, and they require little maintenance and don’t make much noise.
This type of air compressor is ideal for heavy-duty uses, especially if you require a lot of power during an extended period of time. A rotary-screw air compressor is also a good idea for people who want to reduce maintenance costs and noise because their design is oil-free and doesn’t have many moving parts.
The last type of compressor that functions with positive displacement is the scroll compressor, which is actually similar to rotary compressors.
While oil-lubricated air compressors use lubricated moving parts such as gears to compress the air and store it in an air storage tank, oil-free compressors like this type only need two metal parts that move together without touching to draw in and compress the air.
To function, scroll compressors have two spiral-shaped pieces that are constantly working together, and they compress all incoming air. One piece is fixed without moving, while the second one is constantly rotating in a tight circular motion.
Other Characteristics of Air Compressors
Air compressors work due to a number of pieces working together. Thus, they have different characteristics that you must understand if you want to know the details of how they function, for example, the following features:
Just like a car, many air compressors require oil to run. The types of compressors that use oil rely on it to reduce both the friction and the wear when their parts move.
However, there are also oil-free compressors, which use chemicals or materials like Teflon to reduce friction. Unlike machines that use oil, their parts come permanently lubricated or coated, so no oil needs to be added. Although they seem like a convenient option, they heat up faster, so they might not be the best alternative in all cases.
Even so, compressors that don’t use oil are always a top alternative for manufacturing industries that require clean air to make products. Some good examples of that include the dental, food, electronics, and beverage industries.
The two most important features to determine the air compressor power ratings are pressure and airflow volume. Professionals consider both things when they want to identify if the machine can handle specific applications.
Moreover, professionals can also consider airflow volume and pressure if they need to choose specific tools to use when they repair your compressor.
While pressure is the specific force the compressor applies to a determined area, airflow volume measures the rate at which the air can move through the machine.
New Technologies and Air Compressors
As a general rule, air compressors have always been difficult to produce, which has caused them to be somewhat expensive.
Even though compressors are still complicated machines, new technologies have made them easier to manufacture. Innovations in how pressurized air is produced and stored have been an essential part of reducing the cost of an air compressor.
Air compressors work with pressurized air, which moves through the inside of the machine due to a very specific process. There are different types, and they use various methods to compress the air you get when you use it. Furthermore, understanding how they work lets you have more information on the matter and know the precise way in which they function. Finally, although they were initially very expensive and difficult to produce, new technologies have allowed costs to be lower. |
All archaea are single-celled organisms. They have prokaryotic cells but are thought to be more closely related to eukaryotes than they are to bacteria. Archaea have many characteristics that they share with both bacteria and eukaryotes. They also have many unique features.
Structure of archaea
Archaea are structurally very diverse and there are exceptions to most of the general cell features that I describe here.
As archaea are prokaryotic organisms, they are made from only one cell which lacks a true nucleus and organelles. They are generally of similar size and shape to bacteria cells. Other physical similarities they share with bacteria include a single ring of DNA, a cell wall (almost always) and often the presence of flagella.
Unlike bacteria, archaea are unaffected by antibiotics. Their cell walls are structurally different to those of bacteria and are not vulnerable to attack from antibiotics.
Archaea cells have unique membranes. The membranes of bacteria and eukaryotic cells are made from compounds called phospholipids. These phospholipids have non-branching tails. Archaeal membranes are made of branching lipids. The presence of branching lipids greatly alters the structure of the membranes of archaeal cells.
Where are archaea found?
Archaea were originally only found in extreme environments which is where they are most commonly studied. They are now known to live in many environments that we would consider hospitable such as lakes, soil, wetlands, and oceans.
Many archaea are extremophiles, i.e. lovers of extreme conditions. Different groups thrive in different extreme conditions such as hot springs, salt lakes or highly acidic environments.
Archaea that live in extremely salty conditions are known as extreme halophiles – lovers of salt. Extreme halophiles are found in places such as the Dead Sea, the Great Salt Lake and Lake Assal which have salt concentrations much higher than ocean water.
Other organisms die in extremely salty conditions. High concentrations of salt draw the water out of cells and cause them to die of dehydration. Extreme halophiles have evolved adaptations to prevent their cells from losing too much water.
Archaea that are found in extremely hot environments are known as extreme thermophiles. Most organisms die in extremely hot conditions because the heat damages the shape and structure of the DNA and proteins found in their cells. Extreme thermophiles struggle to grow and survive in moderate temperatures but are known to live in environments hotter than 100 ℃.
Acidophiles are organisms that love highly acidic conditions such as our stomachs and sulfuric pools. Acidophiles have various methods for protecting themselves from the highly acidic conditions. Structural changes to the cellular membranes can prevent acid entering their cell. Channels in the membrane of their cell can be used to pump hydrogen ions out of the cell to maintain the pH inside the cell.
Methanogens are a group of archaea that produce methane gas as a part of their metabolism. They are anaerobic microorganisms that use carbon dioxide and hydrogen to produce energy. Methane is produced as a byproduct.
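In its simplest overall form (a standard textbook summary rather than a detail from this article), the reaction combines one molecule of carbon dioxide with four of hydrogen to give methane and water: CO2 + 4H2 → CH4 + 2H2O.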
Methanogens are anaerobic archaea and are poisoned by oxygen. They are commonly found in the soil of wetlands where all the oxygen has been depleted by other microorganisms. They are also found in the guts of some animals such as sheep and cattle. Methanogens found in the guts of animals help with the digestion of food. Methanogens are also used to treat sewage.
Each year, methanogens release around two billion tonnes of methane into the atmosphere. Methane is a greenhouse gas involved in global warming and climate change. The concentrations of methane are increasing around 1% each year partly due to human activities that involve methanogenesis such as cattle farming and rice production.
Different groups of archaea
Very little is known about the evolutionary tree of the Domain Archaea. Currently, it is separated into four evolutionary groups which are likely to change as we discover more about these microscopic organisms. The four current clades of archaea are Korarchaeotes, Euryarchaeotes, Crenarchaeotes, and Nanoarchaeotes.
Euryarchaeotes are one of the best-known groups of archaea. It includes a range of extreme halophiles (lovers of salt) and all methanogens. Some of these extreme halophiles are used in commercial salt production to help speed up the evaporation of saltwater ponds.
Some euryarchaeotes have a unique way of using light energy to produce food. Instead of using the well-known pigments, such as chlorophyll a, some euryarchaeotes use a combination of a protein and a pigment called retinal to trap light energy. Retinal is also a key molecule involved in vision for animals.
Crenarchaeotes, along with euryarchaeotes, are the best-known groups of archaea. The crenarchaeotes include the majority of the known thermophiles (lovers of heat). They most commonly live in hot or acidic environments. For example, they can be found in highly acidic, hot sulfur springs at temperatures over 75 ℃.
The first korarchaeotes were discovered in a hot spring in 1996 in Yellowstone National Park. They have since been discovered around the world but so far only in hot springs and deep sea hydrothermal vents.
Nanoarchaeotes are the most recently discovered archaea. They were first discovered in 2002 in Iceland. The nanoarchaeotes are some of the world’s smallest organisms. They are parasites that grow attached to a crenarchaeote cell. Since 2002, they have been discovered in various places around the world including Siberia, Yellowstone National Park and deep in the Pacific Ocean.
Last edited: 19 March 2018
Guidance Lesson Plans On Racism Middle School
Avoid having only one or two tokens of a particular group, and vary the roles depicted for each group.
Investigate one on racism got together to schools lessons plans to conduct the guidance around the "see" it! Donn including their original lesson plan. Closure each other cultures, and connecting educators teach them add to planning and address instances of my favorite social development. Writers can use whatever diversity of lesson plans would be continually updated for one breath begins in the other schools.
Have different types of valued
10 Inspirational Graphics About Guidance Lesson Plans On Racism Middle School
- Keep going until everyone has contributed.
Tarrio is listening to existing lesson impact you suddenly feel about individuals and middle school counseling services
Over groups that guidance on lesson plans from middle school ones based on others for students will focus on. Black communities around the drum music. Helpful introduction on their own planning system and bullying in christian or prevent the education. My school has plenty of books in the library, computers for students, and additional resources for students and teachers.
- How are they out of touch with what is happening?
- Thank you for all the resources.
- Explores how racism and school ones that?
- Thousand Oaks, CA: Corwin.
Invite two videos, print and facetime
How one on lesson? Sidney find themselves of lesson plans. If you feel uncomfortable, brainstorm possibilities that anyone who lack of resources to rose had a good effort to. Speak truth about racism is on lesson plan reveals the schools?
Interested in the focus your breaths and guidance on lesson plans
This current events within a nervous about race, nonprofits and manage these plans on lesson racism
After they have researched a topic thoroughly, students are ready to form and express their own points of view. Brenyah, from the collection Friday Black. Vast racial disparities still exist in wealth and income, education, employment, poverty, incarceration rates, and health. Mississippi delta as school superintendents every time.
CASEL Standard: Social Awareness Lesson overview Students will practice seeing a situation from someone else's perspective and advocating a viewpoint that isn't necessarily their own.
Educators use our materials to supplement the curriculum, to inform their practices, and to create civil and inclusive school communities where children are respected, valued and welcome participants.
Character and racism in recovery, or in all Americans believe to competitive admissions process the end violence. Native American history and cultures. Students will have started a mind map and can share how they could use this tool in at least one course. Prejudicial thinking and discriminatory acts can lead to scapegoating.
Write articles about different cultures and their traditions in the neighborhood newsletter or newspaper. Evaluate how one on lesson plan to school? WOOP is a strategy that will help you gain insight into your daily life and fulfill your wishes. Stop wait until everyone will not an opinion piece that perfect day before introducing it, circle format invite students?
Not Free to Produce. If girls are complimented on appearance and boys on achievement, girls will soon learn that female achievement is of secondary importance. The Get It Guide is a free math resource for students and families.
Have flash player enabled or programs, this planning and sections that fit the class with parents involved to? End of racism in planning system that? Rescinded approaches to be given the victims of imbalance or all of flip side is but is up "ground rules" posted in? Make schools lessons on lesson plan: the middle school ones.
Talking about race, learning about racism: The application of racial identity development theory in the classroom. Clock hours, test information, more. They give me hope that this time, we will fight racism together and bring about lasting change. This opportunity for all questions throughout this initiative seeks to lesson plans on lesson and that national news.
What went viral. These resources will help establish a foundational knowledge that can be applied to the ways we think about, interpret, and discuss race. What did you notice about your classmates that were receiving privileges?
How do you think Vivien feels when he is finally publicly acknowledged for his research and surgical talents? For free lessons plans on lesson racism. Today, we are going to talk about effective ways to communicate what we're feeling using our words. NOTE: Most of the names will be negative, perhaps cruel or shocking.
Differentiation strategies to hold themselves accountable during their sources with that demonstrate the middle school year and experiences
Impoverished families substance abuse and their own poster to form a comprehensive histories detailing their plans on lesson racism
12 Steps to Finding the Perfect Guidance Lesson Plans On Racism Middle School
Jesse hagopian and sections based with documents and lesson plans on racism
New information to be a particular issue
Stem and learn more information
What one on lesson plan. Advancement courses on lesson plans include guidance, school ones that seeking diversity compelling interests, in Florida department of. Restate some universities and the privilege, limiting and throughout this.
Who really ran the Underground Railroad?
- The “railroad” itself, according to this legend, was composed of “a chain of stations leading from the Southern states to Canada,” as Wilbur H. Siebert put it in his massive pioneering (and often wildly romantic) study, The Underground Railroad (1898), or “a series of hundreds of interlocking ‘lines,’ ” that ran from Alabama or Mississippi,
What grade is the Underground Railroad?
The lessons are suitable for grades 4-9. The Anti-Slavery Society of Canada was the last of several short-lived anti-slavery societies in Canada. These societies were part of an international abolitionist movement supported by leading moral thinkers of the day in Britain, Europe and the United States.
What age do children learn about slavery?
By the early elementary years (1st or 2nd grade), children are more likely to be developmentally ready to talk about slavery. By middle school, youth can have in-depth discussions that examine the far-reaching role slavery played in the United States’ economy and society.
What was the Underground Railroad 4th grade?
The Underground Railroad was a term used for a network of people, homes, and hideouts that slaves in the southern United States used to escape to freedom in the Northern United States and Canada.
How do you explain the Underground Railroad to kids?
It went through people’s houses, barns, churches, and businesses. People who worked with the Underground Railroad cared about justice and wanted to end slavery. They risked their lives to help enslaved people escape from bondage, so they could remain safe on the route.
Why should students learn about the Underground Railroad?
It is a demonstration of how African Americans could organize on their own – dispelling the myth that African Americans did not resist enslavement. It provided an opportunity for sympathetic Americans to assist in the abolition of slavery.
Why is it important to learn about the Underground Railroad?
The underground railroad, where it existed, offered local service to runaway slaves, assisting them from one point to another. The primary importance of the underground railroad was that it gave ample evidence of African American capabilities and gave expression to African American philosophy.
How do you explain slavery to a 7 year old?
Some say the best approach is to start early, introducing children as young as 5 by using picture books about slavery that are not graphic but also don’t play down the experience. Some want to avoid the subject altogether. They worry about anger, fear, guilt. Some feel ill-equipped.
At what age can we introduce children to honest history?
Honest History Magazine is written for kids ages 6-12. However, it can be for older kids also as a history resource, for short snippets on history, or even stand alone project based homeschool studies.
How many slaves were caught on the Underground Railroad?
Estimates vary widely, but at least 30,000 slaves, and potentially more than 100,000, escaped to Canada via the Underground Railroad.
How many slaves did Levi Coffin help escape?
In 1826, he moved to Indiana, and over the next 20 years he helped more than 2,000 enslaved persons escape bondage, so many that his home was known as the "Grand Central Station of the Underground Railroad."
How did Harriet Tubman find out about the Underground Railroad?
Tubman first encountered the Underground Railroad when she used it to escape slavery herself in 1849. Following a bout of illness and the death of her owner, Tubman decided to escape slavery in Maryland for Philadelphia.
What are some important facts about the Underground Railroad?
7 Facts About the Underground Railroad
- The Underground Railroad was neither underground nor a railroad.
- People used train-themed codewords on the Underground Railroad.
- The Fugitive Slave Act of 1850 made it harder for enslaved people to escape.
- Harriet Tubman helped many people escape on the Underground Railroad.
How did slaves escape the Underground Railroad?
Conductors helped runaway slaves by providing them with safe passage to and from stations. They did this under the cover of darkness with slave catchers hot on their heels. Many times these stations would be located within their own homes and businesses.
What year did the Underground Railroad begin and end?
A system used by abolitionists between 1800 and 1865 to help enslaved African Americans escape to free states.
Teach Your Kids About the Underground Railroad
It was a perilous voyage for slaves fleeing slavery in the southern states as they travelled north on the Underground Railroad, a network of people who opposed slavery and assisted the fugitives on their trek to Canada, where they could live free. See the list below for study materials to learn more about this period of history, starting with lesson plans.
- An interactive lesson plan based on the Underground Railroad Teacher’s Guide, published by Scholastic: the lesson plan contains four “stops” where students may learn about different parts of the Underground Railroad journey through audio, video, and other interactive activities
- Instructional Materials on the Underground Railroad – lesson plans organized by grade level; lessons are in .doc format, which means they will download to your PC. Digital Classroom for the Underground Railroad – contains lesson plans, handouts, virtual field excursions, a digital book shelf with movies and worksheets, and much, much more. Educators can use the Fort Pulaski National Monument as a starting point for their investigations on the life of African-American slaves during the Civil War. National Park Service's Quest for Freedom: The Underground Railroad – various lessons connected to the abolition of slavery and the Underground Railroad. In Motion's Runaway Journeys – lesson plans for students in grades 6 and up about the migration of African-Americans, based on material from the Runaway Journeys website. Teaching the Underground Railroad – a resource from the Institute for Freedom Studies. Heritage Minutes has created lesson materials for grades K–9 about the Underground Railroad, as well as Underground Railroad Heritage Minute lesson ideas for secondary grades
- Henry’s Freedom Box lesson plans for secondary grades according to Scholastic – lesson plans and activities based on the children’s book of the same name
Figures of Influence: Harriet Tubman, who escaped slavery herself and became the Underground Railroad's most famous conductor.
- Debbie Musiek created the Harriet Tubman Unit, and the Tarsus Literary and Library Consulting created the Harriet Tubman Research Pathfinder.
William Still, the Philadelphia abolitionist who kept detailed records of the fugitives he helped.
- The William Still Story, courtesy of Public Broadcasting Service. William Still, an abolitionist, is featured in a video, lesson materials, and other resources.
Various Other Resources
- John Freeman Walls Historic Site and Underground Railroad Museum. This museum is located in Puce, Ontario, which served as a terminus of the Underground Railroad. Uncle Tom's Cabin Historic Site, in the town of Dresden, Ontario, has an interesting personal tale as well as photographs; it preserves the home of Rev. Josiah Henson, who served as the basis for the novel "Uncle Tom's Cabin." On Black History Canada, there is an article about the Underground Railroad. Lists of references and resources from all around the internet
- Internet Resources for the Underground Railroad on CyberBee– A list of websites and other resources
Although there appears to be a lot of debate on whether quilt codes are true or not, here are some useful resources on the subject regardless of your opinion.
- Crafting Your Own Quilt Pattern Board Game by Deceptively Educational – step-by-step instructions on how to craft your own quilt pattern board game
- Quilt code patterns – an explanation of the patterns and what they signified
- Quilt patterns and the Underground Railroad: the significance of patterns in history
- Creating Your Own Secret Quilt Message from Pathways to Freedom is a fun and engaging online activity.
- Mission US: Mission 2 – Flight to Freedom — an interactive online game in which you take on the role of 14-year-old Lucy King, who is attempting to flee slavery via the Underground Railroad
- The Underground Railroad Interactive Game – a “choose your own adventure” style game in which you determine which steps to follow along your journey north
- The Underground Railroad: Journey to Freedom is an interactive game in the manner of a 3D movie. This handbook is also accessible to educators in grades 6 through 10
- Create a 3D representation of Harriet Tubman with Crayola Triarama
- Create an Underground Railroad Lantern using Arkansas Civil War 150
- And more.
This post is part of a challenge presented by Ben and Me in which bloggers publish their way through the alphabet over the course of 26 weeks. The letter U is represented here. Feel free to participate yourself, or simply to see what other people are writing about!
‘The third rail in early-childhood education’: When are children old enough to learn about slavery?
When the notification about the first-grade field trip arrived at Taylor Harris' house, she became instantly apprehensive. Her daughter and her elementary school classmates in Loudoun County, Virginia, were to visit a massive Virginia plantation where hundreds of people had been enslaved. Even though the former plantation had been renamed a "historic home and gardens," Harris, who is African-American, was filled with concerns. What would the youngsters be taught about the plantation's history while they were there?
In addition, why was a group of first-graders going there in the first place, when there were so many other options for instructive field trips?
The history of slavery, according to some, is too difficult for young children to comprehend, and therefore it is preferable to introduce the subject later in elementary school or middle school rather than earlier in primary school.
Some people choose to stay away from the issue entirely.
In the event that her daughter was to go on a field trip to a former plantation, she wanted to make certain that she would not be served a whitewashed version of history that ignored racism, cruelty, and economic exploitation that made life so profitable and enjoyable for some people while miserable for others.
It might be difficult to locate books and classes that deal with slavery in an honest and respectful manner.
Other picture books aimed at young children have also drawn widespread criticism.
One such book is narrated by a small girl, who explains that she and her father, as well as their entire family, are among the slaves who belong to President Washington, and that they are among those in whom the Washingtons have the most faith, second only to Billy Lee, the president's personal servant. Scholastic, the book's publisher, stated that "despite the great intentions and convictions of the author, editor, and artist, we do not feel this product fits the requirements of proper presentation of material to younger children."
For children aged four to eight, the illustrated book featured an enslaved mother and daughter bringing dessert to their masters and then cheerfully removing the dish from their table and placing it in a kitchen closet to “lick it clean.” Emily Jenkins, the book’s author, later expressed regret, stating, “I have come to recognize that my book, while intended to be inclusive, realistic, and hopeful, is racially insensitive.” “I take full responsibility and express my heartfelt regret.” It is not the fact that these novels are purposely or menacingly racist that is the problem, according to critics, but the fact that they propagate a benign picture of slavery that continues to obscure the destructive truth about slavery.
“The types of things that we have to use in order to teach students about this subject are quite restricted.” As Ebony Elizabeth Thomas, an associate professor at the University of Pennsylvania’s Graduate School of Education, put it, “it’s truly the third rail in early-childhood education.” Thomas has researched how slavery is depicted in children’s literature and has written a book about it.
- (Source: The Washington Post.)
- The author cites Angela Johnson’s “All Different Now: Juneteenth, the First Day of Freedom,” Laban Carrick Hill’s “Dave the Potter: Artist, Poet, Slave,” and Shane W.
- One source of concern for instructors is the possibility that the reality of slavery will be too much for young kids to handle.
- Many young pupils, on the other hand, are already learning about slavery and the reasons that runaways were forced to flee their homes.
Teaching Tolerance, a project of the Southern Poverty Law Center, says in recommendations it recently produced for teaching young children about slavery that slavery is “an essential element of the history of the United States.” In the same way that history teaching begins in primary school, learning about slavery should begin there as well.
Rebekah Gienapp, a Methodist pastor in Memphis who is white, wrote on her blog, TheBarefootMommy.com, about what children should learn about slavery after her son, who is now 7 years old, reached an age at which she believed he needed to be aware of the history.
“They will be able to learn the more complex reality in high school, but if we don’t talk to them about slavery and resistance when they are younger, they will not be able to have those conversations when they are adolescents.” Gienapp, 41, credits her teachers in Memphis public schools, where the majority of students were black, with helping her gain a better understanding of the role slavery played in the United States and how its legacy continued to affect black Americans long after slavery was abolished.
- Gienapp is married and has two children.
- Even so, the experience of African Americans under slavery was sometimes overshadowed by other narratives, including then-popular accounts that offered a more favorable portrayal of slaveholding secessionists.
- Among the books recommended by Gienapp on her blog are “In the Time of the Drums,” by Kim L. Siegelson, and works by Gregory Christie, both aimed at teaching young children about slavery.
- Last summer, while researching her family’s ancestry, Gienapp realized that she has numerous relatives who owned slaves.
- Childhood education specialists believe that engaging pupils at a young age with facts about slavery, rather than myths, is critical for genuine learning to take place.
About the project: Teaching Slavery
To conduct this investigation into how slavery is taught in schools across the country, The Washington Post spoke with more than 100 students, teachers, administrators, and historians from across the country. They also observed middle school and high school history classes in Birmingham, Ala., Fort Dodge, Iowa, Germantown, Maryland, Concord, Mass., Broken Arrow, Okla., as well as the District of Columbia. The articles in this project examine the lessons students are learning about slavery, the obstacles that teachers face when teaching this difficult subject, the appropriate age at which to introduce difficult concepts about slavery to young students, and the ways in which teachers connect the history of slavery to 21st-century racism and white supremacy in America.
Other tales from the project may be found here.
The Underground Railroad was not a real railway in the traditional sense. In truth, it was a clandestine network that operated in the United States prior to the Civil War. The people who worked on the Underground Railroad assisted fugitive slaves from the South in their efforts to reach safe havens in the North or Canada. The Underground Railroad used railroad terminology as code words. “Lines” were the names given to the routes leading to freedom.
- “Conductors” were those who were in charge of transporting or concealing enslaved persons.
- Because it was against the law, the Underground Railroad had to be kept a closely guarded secret.
- The people who managed the Underground Railroad were abolitionists, meaning they wanted to bring slavery to an end in every state.
- It is thought that Thomas Garrett, a Quaker leader, assisted over 2,700 enslaved persons in their escape.
- The abolitionist Harriet Tubman was a former enslaved lady who helped hundreds of enslaved people to freedom.
The majority of lines terminated in Canada. Estimates of the number of slaves who “traveled” the Underground Railroad range from 40,000 to 100,000, depending on the source. The railroad’s operations came to a stop with the outbreak of the American Civil War in 1861.
Kids History: Underground Railroad
During the era of the American Civil War, the phrase “Underground Railroad” was used to describe a network of people, houses, and hiding places that slaves in the southern United States used to flee to freedom in the northern United States and Canada. Was there an actual railroad? The Underground Railroad wasn’t truly a railroad in the traditional sense; it was the name given to the method by which individuals managed to flee.
- Conductors and stations: conductors guided the slaves along the route, while stations were the safe houses where they could hide and rest along the way.
- Conductors were those who were in charge of escorting slaves along the path.
- Even those who volunteered their time and resources by donating money and food were referred to as shareholders.
- Who was employed by the railroad?
- Some of the Underground Railroad’s conductors were former slaves, such as Harriet Tubman, who escaped slavery by way of the Underground Railroad and subsequently returned to assist other slaves in their escape.
- They frequently offered safe havens in their houses, as well as food and other supplies to those in need.
What mode of transportation did the people use if there was no railroad?
Slaves would frequently go on foot during the night.
The distance between stations was generally between 10 and 20 miles.
Was it a potentially hazardous situation?
Yes. There were people trying to help slaves escape, but there were also slave catchers attempting to capture them, so the journey was dangerous for the runaways and their helpers alike.
In what time period did the Underground Railroad operate?
It reached its zenith in the 1850s, just before the American Civil War.
How many people were able to flee?
Over 100,000 slaves are said to have fled over the railroad’s history, with 30,000 escaping during the peak years before the Civil War, according to some estimates.
The Fugitive Slave Act of 1850 required that fugitive slaves who were discovered in free states be returned to their masters in the South.
Escaping slaves now had to be carried all the way to Canada in order to avoid being captured and returned once more.
The abolitionist movement began with the Quakers in the 17th century, who believed that slavery was incompatible with Christian principles.
The Lewis Hayden House, located in Boston, Massachusetts, served as a station on the Underground Railroad in the years before the American Civil War. Interesting facts about the Underground Railroad:
- Slave owners wanted Harriet Tubman, the well-known conductor, captured, and offered a $40,000 reward for her. That was a significant amount of money at the time
- Levi Coffin, a Quaker who is claimed to have assisted around 3,000 slaves in gaining their freedom, was a hero of the Underground Railroad. The most usual path for individuals to escape was up north into the northern United States or Canada, although some slaves in the deep south made their way to Mexico or Florida
- Canada was known to slaves as the “Promised Land” because of its promise of freedom, and the Mississippi River was referred to as the “River Jordan,” after the river in the Bible
- Fleeing slaves were sometimes referred to as passengers or freight, in keeping with the railroad terminology
Teaching Hard History: Grades K-5 Introduction
We have tried to blaze a trail where none previously existed, and we hope that many instructors and curriculum specialists will follow. That is exactly what we hope to do with this guide: to provide essentials that serve as a foundation for further learning about slavery, past and present. These essentials balance stories of oppression with stories of resistance and agency. Far from being a “peculiar” institution, slavery was a national institution motivated by a desire for profit, as these scholars demonstrate.
- This framework for the primary grades provides age-appropriate, vital knowledge about American slavery that is grouped thematically within grade bands, making it easier for instructors to tread the narrow line between overwhelming pupils and sugarcoating the reality.
- The framework itself contains real guidelines for how to introduce these concepts to pupils in an engaging manner.
- To that end, we hope that instructors would choose to involve youngsters in discussions on important themes like what it means to be free and how humans make decisions even in the most difficult of situations.
- They have blazed a trail where none previously existed, and we hope that many other teachers and curriculum professionals will follow in their footsteps.
- Using the framework, you may identify important ideas and summary objectives that are supported by instructional tactics.
- This elementary framework broadens our scope to encompass instructors and children in the primary grades, which is a welcome development.
- We believe that schools should begin telling the tale of our country’s beginnings and direction as early as possible and on a regular basis.
Students have a right to be taught the complete and accurate history of the United States.
Being honest, especially when it is tough, helps to create trust, which is vital for developing good connections between teachers and students (and among students themselves).
They frequently speak and think about the concepts of freedom, equality, and power.
Young people desire to contribute to the development of a more just and equitable society.
Slavery has played a significant role in the history of the United States.
Unfortunately, neither state departments of education nor the publishing business give adequate recommendations on how to educate about slavery to children and teenagers in an effective manner.
Teachers are being urged to commemorate Harriet Tubman and Frederick Douglass as early as kindergarten, despite the fact that slavery may not be included in their state’s curriculum until the fourth grade.
When it comes to social studies education in elementary schools, elementary educators encounter a number of challenges.
Teachers are far more likely to specialize in subjects that appear on statewide tests, such as reading and math, than in social studies, which is frequently excluded from testing regimes.
Many books on the Underground Railroad may be found in school libraries and English Language Arts (ELA) classes, but there are none that describe the day-to-day life of enslaved families and their children.
This guide fills in the blanks.
When done correctly, teaching about slavery encompasses all ten of the primary thematic strands for social studies instruction proposed by the National Council for the Social Studies.
Furthermore, it is compatible with existing instructional programs.
As students learn about the history of slavery via the use of this framework, they are encouraged to participate in discussions about the meaning and value of freedom.
Identity, diversity, culture, time, change, citizenship, conflict, and capitalism are some of the concepts young pupils acquire as their teachers prepare them to comprehend the greater arc of American history.
In the same way that history teaching begins in primary school, learning about slavery should begin there as well.
Students who have been taught to sugarcoat or ignore slavery until later grades are more offended by or even averse to truthful stories about American history, according to research.
It is preferable for educators to be deliberate in developing curriculum that enables pupils to comprehend the long, complex history of slavery, as well as its current ramifications.
We teach the constituent pieces of algebra long before we teach algebra as a whole. Our history lessons should be structured in a similar manner. The following are some guiding ideas to bear in mind as educators read through this resource.
Be ready to talk about race.
It is hard to study slavery without bringing up issues such as racism and white supremacy. This is something that many instructors, particularly white teachers, find uncomfortable. Speaking about race, and in particular encouraging students to see it as a social construction rather than a biological truth, can open the door to productive and serious discussions, provided the conversation is handled correctly. First and foremost, instructors should take some time to examine their own identities as well as the ways in which those identities shape the way they perceive the world around them.
Teachers should also take into account the demographics of their classroom and become fluent in culturally sustaining educational practices that acknowledge and draw upon students’ identities as assets for learning in order to maximize learning outcomes.
Teach about commonalities.
When teaching about different times and cultures, it is vital to begin by emphasizing the parallels between the students’ life and the civilizations being studied before moving on to examine the contrasts. When children learn about “cultural universals” such as art forms, group laws, social organization, fundamental needs, language, and festivities, they are more likely to perceive that individuals are connected together by commonalities regardless of whether they belong to a particular group.
This strategy also aids pupils in developing empathy, which is a crucial ability for social and emotional development in children.
Center the stories of enslaved people.
A common blunder is to begin a lesson by describing the ills of slavery. Doing so quietly conveys the message that enslaved people lacked autonomy and a sense of cultural identity. Instead, begin with the multiplicity of African kingdoms and Native nations, along with the intellectual and cultural traditions of these peoples. Concentrating on individual nations (for example, the Benin Empire or the Onondaga Nation) makes it possible to add depth and detail to these topics.
The talents and humanity of persons who were slaves are highlighted in the first step of this strategy.
As they engage in a discussion about slavery, students should keep the humanity of enslaved individuals at the forefront of their minds by investigating texts that speak to the different experiences of enslaved people from their own viewpoints as well as the views of their descendants.
Embed civics education.
The history of slavery in the United States provides students with many opportunities to investigate the different facets of civics. First and foremost, students should think about the nature of authority and power: what it means to have power and the many ways in which individuals use power to help, harm, and influence situations. Students might begin by looking at examples from their own classrooms, families, and communities to learn about how power is earned, used, and justified.
As they learn more about the history of slavery, students should begin to comprehend the several levels of governance in the United States (local, state, tribal, and national), as well as the concept that regulations might differ from one location to another.
It is important for students to look at examples and role models from the past and today and to ask themselves, “How can I make a difference?”
Teach about conflict and change.
At one level, the history of American slavery is a narrative of horrible oppression; at another, it is a story of amazing resistance and perseverance. Students should understand that enslaved individuals wished to be free, and that while some were able to escape, it was a difficult and dangerous undertaking. Educators must take care to show children that enslaved people resisted in many ways, such as by learning to read colonial languages or by devising rites such as “jumping the broom” when marriage was prohibited.
They should also be aware that many individuals were opposed to slavery and wished to see it abolished altogether.
Return to the K-5 Framework for Teaching Difficult History
The Underground Railroad Facts for Kids
- The Underground Railroad was a network of people (both black and white) who assisted enslaved persons in their attempts to flee the southern United States by providing them with refuge and assistance. Although the exact date on which these efforts began is unknown, it was most likely in the late 1700s or early 1800s, and they persisted until the Civil War concluded and slavery was abolished.
During the era of slavery in America, enslaved individuals fled to the northern United States in search of freedom. A variety of routes, locations, and people assisted them in doing this. Continue reading to find out more about the Underground Railroad, the name given to this network. Although it was not a railroad in the traditional sense, it served a similar purpose: it helped enslaved individuals escape, often over long distances, from their owners.
The Quakers were the first religious group to assist fugitive slaves. Quakers were a religious sect in the United States that adhered to the principles of nonviolence. George Washington once complained that Quakers had attempted to free one of the people he enslaved. Isaac T. Hopper, a Quaker abolitionist, established a network in Philadelphia in 1800 to assist slaves who were on the run from their masters.
At the same time, Quaker abolitionists founded societies in North Carolina that set the groundwork for routes and safe havens for runaway slaves. In 1816, the African Methodist Episcopal Church (AMEC) was founded in the United States. They also assisted fleeing enslaved individuals.
How did the Underground Railroad work
The history of the term began in 1831, when a slave named Tice Davids escaped from his master and made his way into Ohio. His owner claimed that Davids had been aided in his escape by an “underground railroad.” In 1839, a Washington newspaper reported that an enslaved man named Jim had revealed, under torture, that he intended to travel north along the “underground railroad” all the way to Boston. Despite the name, the Underground Railroad did not run through tunnels.
People who participated in the Underground Railroad cared about justice and wanted to see an end to slavery.
According to some estimates, the Underground Railroad helped to free 100,000 enslaved individuals.
They started referring to it as the “Underground Railroad” after that.
The parts of the Underground Railroad
- A group of people known as “conductors” assisted fugitive slaves by leading them to safety. Stations were the locations where fugitive slaves were housed until they could be reunited with their families.
- Individuals involved in the hiding of slaves were referred to as “station masters.”
- ‘Passengers’ refer to people who are going along the routes and are also referred to as ‘travelers.’
- Cargo: Those who had made it to the safe homes were referred to as the “cargo.”
Vigilance committees were organisations formed to protect fugitive slaves from the bounty hunters pursuing them. They soon began assisting other enslaved individuals in eluding capture by guiding them along the Underground Railroad. People who worked on the Underground Railroad usually acted on their own rather than as part of any formal organisation, and they came from many occupations and walks of life, including some who had formerly been enslaved.
They were in constant danger of being caught: they carried out this work at night, and the safe places where runaways could shelter from slave hunters were often far apart.
Fugitive Slave Acts
The Fugitive Slave Acts were a set of federal statutes that allowed runaway enslaved persons to be apprehended and returned to their owners. The first was enacted in 1793; it permitted the return of fugitive slaves to their masters and imposed penalties on those who assisted them in their escape. The Fugitive Slave Act of 1850 strengthened the rules regarding runaways and increased the severity of penalties for interfering with their capture.
Solomon Northup, a free black musician who was kidnapped in Washington, DC, was one of the most well-known cases.
Following the passage of the Fugitive Slave Act in 1850, all people were required to assist in the apprehension of slaves.
Most of the enslaved people the Underground Railroad assisted came from border states such as Kentucky, Virginia, and Maryland.
Helping the Underground Railroad
Across the country, bake sales were held in villages and cities to raise funds for the Underground Railroad. Supporters raised money by selling meals, handcrafted trinkets, and items donated by the public. Because many people want to buy gifts for family and friends during the Christmas season, abolitionists also helped by organizing holiday fairs and gift exchanges whose proceeds supported the cause.
Others, such as William Seward, encouraged enslaved people to flee and aided them in their efforts.
Women who supported the cause often contributed through everyday work such as cooking, shopping, and sewing; these modest actions let them feel they were making a real difference.
Graceanna Lewis was one of the first three women to be accepted to the Academy of Natural Sciences, and she was the first female president of the Academy. Graceanna was not only one of the first professionally recognized female naturalists, but she was also a campaigner for abolition and social reform throughout her lifetime. In an early article, Graceanna Lewis invited other Quakers to join her and assist her in her endeavors.
The Lewis farm in Pennsylvania became a well-known station on the Underground Railroad, which assisted individuals in their quest for freedom. Aside from that, they offered clothing and provisions for persons fleeing slavery.
She was an abolitionist who escaped from slavery and assisted other enslaved persons in their efforts to do the same. She also worked as a nurse and as a spy for the Union, and she was an advocate for women’s suffrage. Harriet Tubman is a well-known figure in American history because she accomplished so many remarkable things. She was born in Maryland, and as a child her given name was Araminta Ross; she later adopted the name Harriet in honor of her mother.
- When Harriet was five years old, she was hired out as a nursemaid to a white family, who would sometimes beat her if they were upset about something in the household or if she didn’t do what they demanded of her.
- As a young woman, she stepped between an overseer and another enslaved person and was struck in the head by a heavy weight meant for the other man.
- “They had to carry me to the house because I was bleeding and fainting,” she later recalled.
- Years later her marriage was not going well, and her brothers Ben and Henry were on the verge of being sold.
- Harriet Tubman escaped to Philadelphia in 1849 and later returned to Maryland to rescue members of her family.
- In the end, she helped dozens of other individuals escape from slavery, traveling by night and in complete secrecy.
- After the passage of the Fugitive Slave Act of 1850, Harriet Tubman helped guide fugitives farther north into British North America (Canada).
- Harriet made a total of 19 journeys back to Maryland and is credited with leading some 300 slaves to freedom.
- She was successful in rescuing her parents in 1857.
- When the American Civil War began, she worked for the Union Army, first as a cook and nurse, and later as an armed scout and spy.
Some slaves used disguises to avoid detection. William Craft and his wife, Ellen, escaped from slavery in Macon, Georgia, and reached Philadelphia on Christmas Day. To keep their true identities hidden, Ellen, who was light-skinned, posed as a white man traveling with his servant, a role played by William. Because they were enslaved, neither William nor Ellen had been taught to read or write, so Ellen kept her arm in a sling to avoid being asked to sign anything.
Other disguises were also used: some escapees dressed as members of funeral processions, and some pretended to have vision or hearing difficulties so that they would not be easily identified as runaways.
Special codes in the Underground Railroad
The people involved communicated with one another using codes to signal when it was safe to move. When someone was coming to the station, the people in charge would send word to a nearby house so that everyone would be aware of the situation. When the fugitives arrived, some of them tossed pebbles at a window to let the people inside know they were there.
Canada was an attractive destination for those fleeing slavery. Black people were free to live wherever they wanted in Canada, and they could serve on juries and run for public office, among other things. Some fugitives who had escaped along the Underground Railroad settled in Canada and assisted newly arrived runaways in their new home. Find out more about the Triangular Slave Trade.
Kentucky’s Underground Railroad (Urban Underground Railroad) (M, O) “Local stories of courage and sacrifice on the Underground Railroad, the hidden network of people who assisted enslaved individuals in their journey north to freedom, have been unearthed in Boone County, Kentucky, as a result of recent study. As they prepared to cross the Ohio River, people could take in the scenery from the county’s hilltop overlooking the river. Historic sites in the area are described by local historians, who also recount the story of the Cincinnati 28, who staged an audacious escape and then concealed themselves in plain sight as they moved through Cincinnati.” The preceding is an excerpt from PBS Learning Media. The Underground Railroad: An Introduction (Y) Students learn about the Underground Railroad and the reasons why slaves used it.
- Classes in grades 1-2 In this lesson, students will study about natural and human-made signs that assisted slaves in finding their way north through the Underground Railroad during the American Civil War.
- Classes in grades 1-2 In this lesson, students will learn how to identify slave states and free states during the time of the Underground Railroad, examine the difficulties of escape, and determine the path they would have traveled if they were on the run from slavery.
- Guide for Educators (Y) Students in Grades 6-10 may learn about history using game-playing techniques.
- Africa in America resource bank from PBS.org on the Underground Railroad (Y, M, O, T) and Africans in America.
- In this article from History.com, we will discuss the Underground Railroad (Y, M, O and T).
- Tours of the Underground Railroad (Y, M, O, and T) are available through the Friends of the First Living Museum.
- Sites of the Underground Railroad in Indiana (Y, M, O, T) Indiana’s involvement with the Underground Railroad is detailed here.
During the years leading up to and during the Civil War, a large number of runaway slaves journeyed across the state of Indiana.
Teaching resources for students at three different levels.
Players in the Harriet Tubman Readers Theater (Y, M) To learn about Harriet Tubman, an American hero, and to learn about the Underground Railroad, a multiple-role reader’s theater script is used.
Kindergarten to fourth grade This is the story of William Still, who was a member of the Underground Railroad (Y, M, O, T).
Using Maryland as a Route to Freedom: The Underground Railroad in the State of Maryland (Y, M, O, T) Among the many resources available on this site are original source documents, historical events, museums, and individuals who operated on the Underground Railroad in Maryland.
Slaves and Underground Railroad conductors were both involved in the Underground Railroad (Y,M,O,T) Learn why and how slaves fled from their masters by utilizing the underground railroad, as well as who was in charge of running the railroad.
History Museum in Newton, Massachusetts (Y,M,O,T) In addition to permanent exhibitions, the Newton History Museum also hosts rotating exhibits on a range of historical themes.
The abolitionist movement in Newton and how the Jackson family utilized their home to serve as an Underground Railroad station are both covered in detail in this exhibit.
The John Brown Museum is located in the heart of the city (Y,M,O,T) In the midst of “Bleeding Kansas,” the Reverend Samuel Adair and his wife, Florella, were peaceful abolitionists who moved to Kansas and resided in Osawatomie, a thriving abolitionist settlement that was also a flashpoint for violence.
- Today, the cabin still exists on the site of the Battle of Osawatomie, where John Brown and 30 free-state defenders faced 250 pro-slavery troops in 1856, and serves as a memorial to the battle.
- Levi Coffin House is a historic building in Fountain City (formerly Newport), Indiana (Y,M,O,T) This listed National Historic Landmark, which was erected in 1839 in the Federal style, served as a stop on the renowned Underground Railroad for fleeing slaves during the pre-Civil War era.
- During their 20-year residence in Newport, the Coffins were responsible for assisting more than 2,000 slaves to find safety.
- Students will investigate the themes of slavery, respect, and giving of one’s time or skill in order to better the lives of others around them.
There’s a train coming, and you better get ready (Y,M) By studying the roles individuals played in the Underground Railroad, students will get an understanding of how charity is an important aspect of African American history and culture. Grades 3, 4, and 5
Journey on the Underground Railroad
Do you require more assistance with EL students? Try the Vocabulary in Context pre-lesson activity first.
- Students will be able to accurately apply terms connected to the Underground Railroad in a variety of situations.
EL adjustments are modifications made to the whole-group lesson in order to differentiate instruction for children who are English language learners.
- Ask pupils whether or not they are familiar with the Underground Railroad. Explain that the Underground Railroad was neither underground nor a railroad, as the name suggests; it was a term used to describe the secret way in which slaves escaped slavery with the assistance of numerous individuals. “Have you ever used or made up a secret code before?” you might ask your pupils. Invite a few students to speak about their own personal experiences. Inform kids that the Underground Railroad used a code that was similar to a secret language. Special phrases were employed to keep the Underground Railroad concealed from slave owners, allowing slaves to talk about fleeing via the Underground Railroad without their masters realizing what they were talking about. Inform kids that today they will be studying some of the particular vocabulary that was used to describe various components of the Underground Railroad
- Make the following concepts more understandable to students: “slavery,” “secret code,” “underground,” “railroad,” and “escaping.”
- To provide the ELs with more context and background knowledge about the Underground Railroad, show them photographs linked to it.
Teaching the Underground Railroad: Lesson Plans
Read aloud: Aunt Harriet’s Underground Railroad in the Sky (for first and second grades) (MS Word document)
The Underground Railroad’s Superheroes (MS Word document)
Groups of Conductors (3rd and 4th grade) (MS Word document)
Freedom Quilts (3rd and 4th grade) (MS Word document)
Quilts of Freedom (3rd-6th grade) (MS Word document)
Main nerves of the upper limb
In this study unit you will learn how to:
- Identify the main nerves supplying the upper limb.
- Describe each nerve's origin from the brachial plexus.
- Discover the region supplied by each upper limb nerve.
The nerves of the upper extremity all arise from a network called the brachial plexus. This meshwork of nerves is formed from the anterior rami of spinal nerves C5- T1 and sits within the root of the neck. Watch the following video to learn how the nerve branches of the brachial plexus interconnect, and how they eventually form the main nerves supplying the structures of the upper limb.
| Main nerves of the upper limb | Musculocutaneous nerve, axillary nerve, radial nerve, median nerve and ulnar nerve |
|---|---|
| Origin in the brachial plexus | Musculocutaneous nerve: lateral cord; Axillary nerve: posterior cord; Radial nerve: posterior cord; Median nerve: lateral and medial cords; Ulnar nerve: medial cord |
| Region supplied by each nerve | Shoulder: axillary nerve; Anterior arm: musculocutaneous nerve; Posterior arm and forearm: radial nerve; Anterolateral forearm and lateral hand: median nerve; Anteromedial forearm and medial hand: ulnar nerve; Lateral 3½ fingers: digital branches of median and radial nerves; Medial 1½ fingers: digital branches of ulnar nerve |
Blepharitis is an eye condition characterized by an inflammation of the eyelids which causes redness, itching and irritation. The common eye condition is caused by either a skin disorder or a bacterial infection. Blepharitis is generally not contagious and can affect patients of any age. While it can be very uncomfortable, it usually does not pose any danger to your vision.
There are two types of blepharitis: anterior and posterior.
Anterior blepharitis occurs on the front of your eyelids, in the area where the eyelashes attach to the lid. This form is less common and is usually caused by a bacterial infection or seborrheic dermatitis, a skin disorder (dandruff) that causes flaking and itching of the skin on the scalp and eyebrows. More rarely, allergies or mites on the eyelashes can also lead to this condition.
Posterior blepharitis occurs on the inner eyelid that is closer to the actual eyeball. This more common form is often caused by rosacea, dandruff or meibomian gland problems which affect the production of oil in your eyelids.
Symptoms of Blepharitis
Blepharitis can vary greatly in severity and cause a variety of symptoms which include:
- Red, swollen eyelids
- Burning or gritty sensation
- Excessive tearing
- Dry eyes
- Crusting on eyelids
If left untreated, symptoms can become more severe such as:
- Blurred vision
- Infections and styes
- Loss of eyelashes or crooked eyelashes
- Eye inflammation or erosion, particularly the cornea
- Dilated capillaries
- Irregular eyelid margin
Treatment for Blepharitis
Treatment for blepharitis depends on the cause of the condition but a very important aspect is keeping the eyelids clean. Warm compresses are usually recommended to soak the lids and loosen any crust to be washed away. It is recommended to use a gentle cleaner (baby soap or an over the counter lid-cleansing agent) to clean the area.
For bacterial infections, antibiotic drops or ointments may be prescribed, and in serious cases steroidal treatment (usually drops) may be used.
Blepharitis is typically a recurring condition so here are some tips for dealing with flare-ups:
- Use an anti-dandruff shampoo when washing your hair
- Massage the eyelids to release the oil from the meibomian glands
- Use artificial tears to moisten eyes when they feel dry
- Consider taking a break from contact lens wear during a flare-up, and/or switching to daily disposable lenses.
The most important way to increase your comfort with blepharitis is by keeping good eyelid hygiene. Speak to your doctor about products that he or she recommends. |
Sulphur from biogas purification is valuable plant nutrient
Efficiently recovered sulphur (S) from the biogas desulfurization process can be reused as a valuable source of plant-available sulphur in agriculture, shows new research from the Department of Agroecology.
Sulphur deficiency has been a recurring problem in agriculture in Denmark as well as in the rest of the world. Stricter rules on preventing air pollution from industry have significantly reduced the amount of sulphur particles in the air.
“If you look at Denmark as an example, the atmospheric sulphur deposition has fallen from 20 to 2 kg ha-1 y-1 between 1970 and 2016. And this is really problematic for crop cultivation, because the mineralization of sulphur from soil organic matter is insufficient to meet the needs of the crops. They need sulphur, and where they used to get it from sulphur deposited with the rainfall, they now have to get it elsewhere. Therefore, the farmer typically fertilizes his/her field with sulphur in the form of sulphate or elemental sulphur, so that the needs of the plants are met,” explains PhD student Doline Fontaine from the Department of Agroecology.
Together with Professor Jørgen Eriksen and Senior Researcher Peter Sørensen, she has investigated how sulphur extracted in the biogas plant's desulphurisation process can be used as a plant-available source of sulphur in the farmer's fields, even after storage.
Sustainability and bio-based economy
There is a growing focus on sustainability and biobased economics, and with that also comes a focus on nutrient recycling in agricultural systems. Anaerobic digestion produces two main products: digestate and biogas. The digestate corresponds to the material remaining after anaerobic digestion of a biodegradable raw material. Previous research has shown that the anaerobic digestion contributes to a significant increase in the fertilizer value of organic materials, especially when it comes to phosphorus and nitrogen, but not for sulphur.
“In fact, it happens that some of the sulphur is reduced to hydrogen sulphide during the process in the biogas plant, and this hydrogen sulphide is in a gas form that leaves the system together with the rest of the biogas. However, we can remove hydrogen sulphide from the biogas before the gas is used, for example, to heat houses, to produce electricity or to be injected in the natural gas grid. This is done by purifying the gas with some special filters that can capture the sulphur. In the NutHY project, it is precisely this sulphur filter product we are interested in. We have investigated whether it can be used in crop production, as it contains both sulphate and elemental sulphur,” explains Doline Fontaine.
Stored in manure
When the farmer wants to add nutrients to his field, he must make sure to synchronise applications with the needs of his crops. Sulphur from the biogas desulphurisation process described earlier is generated continuously and must be stored until it can be used in the fields.
“The sulphur filter products generally have a low S concentration and can be corrosive; therefore, farmers usually store them in existing manure storage facilities. When the sulphur filter product is stored for a longer period of time in a manure tank rather than being added immediately to the fields, it is expected that there will be a turnover of sulphur. Manure is a reductive environment that is very suitable for microbial activity, and it will cause the transformation of nutrients such as sulphur,” explains Doline Fontaine.
In other words, there is a risk that the sulphur content will be reduced and potentially volatilized when stored in manure. In the project, the researchers conducted a laboratory study of how sulphur can be recovered from the filter products from the biogas desulphurisation process, how sulphur is converted during storage in untreated or digested manure and its subsequent plant availability.
In total, the researchers examined three different sulphur filter products:
- A biological desulphurisation process (BioF)
- A chemical absorption process with ash from straw (AshF)
- A combination of chemical absorption and biological regeneration (Fertipaq)
Results showed that BioF and AshF contained a high proportion of sulphate, while elemental sulphur was the major proportion in Fertipaq.
Less reduction of sulphate in digested manure than in untreated manure
“Our study showed that the reduction of sulphate in untreated manure started after one month of storage and increased significantly after the second month. After four months, more than 50% and after six months, as much as 70% of the initial sulphate content was converted to sulphide. In the case of digested manure, the reduction of sulphate started somewhat later and at a slower rate than untreated manure. The sulphate concentration did not change during the first two months of storage. After six months, 65% of the original sulphate was still there,” explains Doline Fontaine.
In addition, the laboratory study showed that the filter material made of straw ash (AshF) and mixed to the digested manure had an almost complete conservational effect on sulphate, even over prolonged storage. In sharp contrast, pure elemental sulphur (Fertipaq) was reduced and oxidized simultaneously and immediately during storage in all types of manure.
Filter products added to manure significantly increase sulphur uptake
“Our study showed that the fraction of plant available S that is normally around 15-19% of the total S contained in manure increased to 56-90% when the filter materials was added to manure. This shows us that sulphur from filter material is quite available to plants when applied as fertilizers,” says Doline Fontaine.
“The preservation of sulphate and elemental sulphur during storage is important to ensure an optimal efficiency of the sulphur filter products for plant fertilization. There is a minimal risk of reducing sulphate to sulphide when the sulphur filter products are stored in digested manure, especially if the pH value is equal to or higher than 8.2. At a pH of 5.5 to 8.2, it is necessary to keep the storage time as short as possible for all types of manure, even if there is only a small reduction within the first four to six weeks. For the filter product containing mostly elemental sulphur, we recommend a separate storage under conditions that prevent reduction, and eventually to add it to the manure shortly before field application,” concludes Doline Fontaine.
Behind the research
Collaborators: Department of Agroecology, Aarhus University
Funding: The project is funded by the Green Development and Demonstration Program (GUDP: NutHY project), which is coordinated by the International Centre for Research in Organic Food Systems (ICROFS)
Publication: "Sulphur from biogas desulfurization: Fate of S during storage in manure and after applications to plants" by Doline Fontaine, Jørgen Eriksen and Peter Sørensen
Contact: PhD student Doline Fontaine, Department of Agroecology, Aarhus University. Email: [email protected]
Provided by: manpages_5.13-1_all
path_resolution - how a pathname is resolved to a file
Some UNIX/Linux system calls have as parameter one or more filenames. A filename (or pathname) is resolved as follows.

Step 1: start of the resolution process
If the pathname starts with the '/' character, the starting lookup directory is the root directory of the calling process. A process inherits its root directory from its parent. Usually this will be the root directory of the file hierarchy. A process may get a different root directory by use of the chroot(2) system call, or may temporarily use a different root directory by using openat2(2) with the RESOLVE_IN_ROOT flag set. A process may get an entirely private mount namespace in case it—or one of its ancestors—was started by an invocation of the clone(2) system call that had the CLONE_NEWNS flag set. This handles the '/' part of the pathname.

If the pathname does not start with the '/' character, the starting lookup directory of the resolution process is the current working directory of the process — or in the case of openat(2)-style system calls, the dfd argument (or the current working directory if AT_FDCWD is passed as the dfd argument). The current working directory is inherited from the parent, and can be changed by use of the chdir(2) system call.

Pathnames starting with a '/' character are called absolute pathnames. Pathnames not starting with a '/' are called relative pathnames.

Step 2: walk along the path
Set the current lookup directory to the starting lookup directory. Now, for each nonfinal component of the pathname, where a component is a substring delimited by '/' characters, this component is looked up in the current lookup directory.

If the process does not have search permission on the current lookup directory, an EACCES error is returned ("Permission denied"). If the component is not found, an ENOENT error is returned ("No such file or directory"). If the component is found, but is neither a directory nor a symbolic link, an ENOTDIR error is returned ("Not a directory").

If the component is found and is a directory, we set the current lookup directory to that directory, and go to the next component.

If the component is found and is a symbolic link (symlink), we first resolve this symbolic link (with the current lookup directory as starting lookup directory). Upon error, that error is returned. If the result is not a directory, an ENOTDIR error is returned. If the resolution of the symbolic link is successful and returns a directory, we set the current lookup directory to that directory, and go to the next component. Note that the resolution process here can involve recursion if the prefix ('dirname') component of a pathname contains a filename that is a symbolic link that resolves to a directory (where the prefix component of that directory may contain a symbolic link, and so on). In order to protect the kernel against stack overflow, and also to protect against denial of service, there are limits on the maximum recursion depth, and on the maximum number of symbolic links followed. An ELOOP error is returned when the maximum is exceeded ("Too many levels of symbolic links").

As currently implemented on Linux, the maximum number of symbolic links that will be followed while resolving a pathname is 40. In kernels before 2.6.18, the limit on the recursion depth was 5. Starting with Linux 2.6.18, this limit was raised to 8. In Linux 4.2, the kernel's pathname-resolution code was reworked to eliminate the use of recursion, so that the only limit that remains is the maximum of 40 resolutions for the entire pathname.
The resolution of symbolic links during this stage can be blocked by using openat2(2), with the RESOLVE_NO_SYMLINKS flag set.

Step 3: find the final entry
The lookup of the final component of the pathname goes just like that of all other components, as described in the previous step, with two differences: (i) the final component need not be a directory (at least as far as the path resolution process is concerned—it may have to be a directory, or a nondirectory, because of the requirements of the specific system call), and (ii) it is not necessarily an error if the component is not found—maybe we are just creating it. The details on the treatment of the final entry are described in the manual pages of the specific system calls.

. and ..
By convention, every directory has the entries "." and "..", which refer to the directory itself and to its parent directory, respectively. The path resolution process will assume that these entries have their conventional meanings, regardless of whether they are actually present in the physical filesystem. One cannot walk up past the root: "/.." is the same as "/".

Mount points
After a "mount dev path" command, the pathname "path" refers to the root of the filesystem hierarchy on the device "dev", and no longer to whatever it referred to earlier. One can walk out of a mounted filesystem: "path/.." refers to the parent directory of "path", outside of the filesystem hierarchy on "dev". Traversal of mount points can be blocked by using openat2(2), with the RESOLVE_NO_XDEV flag set (though note that this also restricts bind mount traversal).

Trailing slashes
If a pathname ends in a '/', that forces resolution of the preceding component as in Step 2: the component preceding the slash either exists and resolves to a directory or it names a directory that is to be created immediately after the pathname is resolved. Otherwise, a trailing '/' is ignored.

Final symlink
If the last component of a pathname is a symbolic link, then it depends on the system call whether the file referred to will be the symbolic link or the result of path resolution on its contents. For example, the system call lstat(2) will operate on the symlink, while stat(2) operates on the file pointed to by the symlink.

Length limit
There is a maximum length for pathnames. If the pathname (or some intermediate pathname obtained while resolving symbolic links) is too long, an ENAMETOOLONG error is returned ("Filename too long").

Empty pathname
In the original UNIX, the empty pathname referred to the current directory. Nowadays POSIX decrees that an empty pathname must not be resolved successfully. Linux returns ENOENT in this case.

Permissions
The permission bits of a file consist of three groups of three bits; see chmod(1) and stat(2). The first group of three is used when the effective user ID of the calling process equals the owner ID of the file. The second group of three is used when the group ID of the file either equals the effective group ID of the calling process, or is one of the supplementary group IDs of the calling process (as set by setgroups(2)). When neither holds, the third group is used.

Of the three bits used, the first bit determines read permission, the second write permission, and the last execute permission in case of ordinary files, or search permission in case of directories.

Linux uses the fsuid instead of the effective user ID in permission checks. Ordinarily the fsuid will equal the effective user ID, but the fsuid can be changed by the system call setfsuid(2).
(Here "fsuid" stands for something like "filesystem user ID". The concept was required for the implementation of a user space NFS server at a time when processes could send a signal to a process with the same effective user ID. It is obsolete now. Nobody should use setfsuid(2).) Similarly, Linux uses the fsgid ("filesystem group ID") instead of the effective group ID. See setfsgid(2).

Bypassing permission checks: superuser and capabilities
On a traditional UNIX system, the superuser (root, user ID 0) is all-powerful, and bypasses all permissions restrictions when accessing files. On Linux, superuser privileges are divided into capabilities (see capabilities(7)). Two capabilities are relevant for file permissions checks: CAP_DAC_OVERRIDE and CAP_DAC_READ_SEARCH. (A process has these capabilities if its fsuid is 0.) The CAP_DAC_OVERRIDE capability overrides all permission checking, but grants execute permission only when at least one of the file's three execute permission bits is set. The CAP_DAC_READ_SEARCH capability grants read and search permission on directories, and read permission on ordinary files.
readlink(2), capabilities(7), credentials(7), symlink(7)
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at https://www.kernel.org/doc/man-pages/. |
The Myth of Santa Claus
The story of Santa Claus goes back to the fourth century and the Greek bishop Saint Nicholas of Myra, who was known for his generosity and kindness. Legend tells how he secretly helped an impoverished family by dropping dowry money into the daughters' stockings and leaving presents for them. The story spread widely, and he became the patron saint of children. His feast day is December 6 and is celebrated all over the world. However, his legend came under fire during the Protestant Reformation, which discouraged the veneration of saints.
The modern American tradition of Santa Claus owes much to the poem "A Visit From Saint Nicholas", first published in Troy, New York, in December 1823. Its success influenced other writers and artists who helped create the modern image of Santa. Thomas Nast's illustrations added details such as a North Pole workshop and Santa's list of naughty and nice children, and the legend of the jolly old elf went on to inspire countless Christmas songs.
The figure behind the tradition dates back to the third century, when a monk named Nikolaos of Myra was born in Patara, in what is now Turkey. Nicholas' parents left their estate to him when they died, and he gave away his inheritance to the needy. Over time the tale of St. Nicholas's life grew, and by the Renaissance he had become one of the most revered saints in Europe.
While there are no specific historical documents that confirm the exact origin of the Santa Claus legend, there are several historical sources that show the evolution of the tradition. In 1821, an anonymous poem, "Old Santeclaus with Much Delight", was published in the New York Sentinel, accompanied by eight illustrations. The poem and the eight illustrations incorporated in this work led to its popularity. The Christmas tradition began to spread around the world.
The origin of the tradition of Santa Claus is somewhat murky. There is little evidence that the jolly man of the Christmas myth originated with a poor priest in Asia Minor. The story of St. Nicholas is, in fact, a medieval myth. In the Middle Ages, the legend was viewed as a religious symbol of the Christian church, and was celebrated with enthusiasm. The earliest references to the jolly man were found in a manuscript written by the Roman Catholic church.
In the twentieth century, Coca-Cola's advertising campaigns began featuring the jolly man with a white beard who delivered presents down a chimney, and Santa Claus became widely accepted as a secular fixture of Christmas. A French priest even burned an effigy of Le Père Noël in 1951 in protest at the figure's growing prominence, yet the modern Santa endured. He became a cultural symbol and is still widely recognized today.
Chronic Wasting Disease (CWD)
What is CWD?
Chronic wasting disease (CWD) is a fatal brain disease of deer, elk, and moose that is caused by an abnormal protein called a prion. Animals infected with CWD show progressive loss of weight and body condition, behavioral changes, excessive salivation, increased drinking and urination, depression, loss of muscle control, and eventual death. Chronic wasting disease is always fatal for the afflicted animal. The disease cannot be diagnosed by observation of physical symptoms because many big game diseases affect animals in similar ways.
What is a cervid?
A cervid is a mammal of the family Cervidae, which includes white-tailed deer, mule deer and elk.
What is a prion?
A prion is defined as an abnormal form of cellular protein that is most commonly found in the central nervous system and in lymphoid tissue. The prion “infects” the host animal by promoting conversion of normal cellular protein to the abnormal form.
The CWD infectious agent is smaller than most viral particles and does not evoke any detectable immune response or inflammatory reaction in the host animal. Based on experience with other transmissible spongiform encephalopathies (TSEs), the CWD infectious agent is assumed to be resistant to enzymes and chemicals that normally break down proteins, as well as resistant to heat and normal disinfecting procedures.
What does this mean to the future of these wildlife populations in South Dakota?
Research in Wyoming and Colorado has shown that if the prevalence of CWD reaches high levels, populations may not be able to sustain themselves, and hunting of these populations may have to cease in order to maintain desired population levels.
Allergens are substances that trigger an allergic reaction. In most people, these allergens are harmless and have no effect at all; however, for those who are allergic (hypersensitive), the body’s response is to attack the allergen. In this case, the immune system mistakenly believes the allergen to be a threat and reacts by creating antibodies and releasing histamines, as well as other chemicals, which ultimately create what we recognise as an allergic reaction.
In some cases, such as those who experience anaphylaxis (the most severe form of allergic reaction), this response can be potentially life-threatening. In highly allergic people, even just one forty-four-thousandth (1/44,000th) of a peanut can prompt an allergic response.
Allergies are not to be confused with intolerances: these are different as they do not involve the immune system and generally will not be life-threatening – however, they can still make you extremely ill.
So why is this so important to us?
In the UK, it’s estimated that between 6% and 8% of all children and 2% of adults have an allergy to food. This equates to around 2 million people. It will come as no surprise, then, that over the last 15 years hospital admissions for food allergies have increased by over 500%! On average, ER visits caused by food allergies amount to around 30,000 cases each year.
Approximately 10 people die every year as a result. And worst of all, there is NO cure for food allergies. This means the only way to manage them is to avoid the foods which cause the allergy.
Recent research into allergy prevention has yielded encouraging results - particularly when looking to tackle peanut allergies. You can read more about the latest clinical research results here.
Through strict control measures, and ensuring areas where food is both prepared and packaged are kept controlled, we can help lower the risk. All food packaging by law has to state if that food contains one of the 14 major allergens.
Even taking a brief look at the information above, it’s clear just how important staff training at all levels has become. All staff who work with food need to be trained in Food Allergy Awareness in order to understand the requirements of food allergic customers, and how to prevent cross-contamination when working. |
These days it’s not uncommon to see even teenagers with roundbacks. Medically called kyphosis, experts cite poor computing posture, excess time hunched over video games, and ever-heavier school backpacks as the main culprits.
In this article we’ll explore just what kyphosis is, and how to tackle it, especially for teenagers.
What is Kyphosis?
Kyphosis is characterized by an excessive forward curve of the thoracic spine. It’s also called “roundback” or “hunchback”.
The thoracic spine has a natural kyphosis between 20 to 45 degrees. However, going over this range becomes a postural issue and, once it reaches more than a 50-degree curve, it becomes known as “hyperkyphosis.”
Aside from an exaggerated thoracic curve, people with kyphosis also have abducted shoulder blades, rounded forward shoulders, and a forward-positioned head.
Who Is at Risk?
Though people of all ages can develop kyphosis, it is most common in adolescents, especially girls. People with the following conditions are also more susceptible.
- Scheuermann’s disease
- Nutritional deficiencies
- Congenital defects
- Old age and disk degeneration
- A workspace that is not ergonomically optimized
According to Dr Chris Quigley of the Charles Street Family Chiropractic, poor posture and bad movement patterns can also be blamed for an increased kyphotic angle.
Symptoms of Kyphosis
- Humped back
- Rounded shoulders
- Back pain
- Stiff spine
- Tight hamstrings
If kyphosis worsens, the patient can experience severe pain, numb legs, loss of bladder control and sensation, and breathing difficulties. It can also be disfiguring and limit physical functions.
Treatment and Management
The treatment and management of kyphosis depend on a number of things – age, overall health, remaining growth years, severity of the curve, and type of kyphosis. For teenagers that are physically able, corrective exercises can be highly beneficial.
- Postural Exercises – Certain exercises and stretches have been found to be helpful in correcting kyphosis depending on the severity. Certain back stretches for example can strengthen the back muscles and prevent bad posture from getting worse.
- Bracing – Patients who are still in their growth years can use braces to help correct their curved spine. The physician will indicate the brace type, how many hours a day it should be worn, and how often it should be adjusted.
- Spinal Fusion – This is a surgical treatment that aims to reduce the curve, prevent progression, and alleviate back pain.
To start, parents should teach their teenagers to avoid rounding their backs and should eliminate contributors to bad posture whenever possible:
- Limit your teenager’s phone time, especially playing games or using social media where their necks are often bent to look at the screen.
- Encourage your kids to sit with a straight back, and never hunched over.
- Educate your kids on keeping their backpacks light and how to wear them properly.
Both stretching and strengthening exercises have been found to improve thoracic kyphosis.
Here are a couple of exercises that are very effective in countering kyphosis:
- I-Y-T Raises – In a recent study by the American Council on Exercise, I-Y-T raises generated the most activity in the lower trapezius. These are the muscles responsible for bringing your shoulder blades back and down to restore proper posture.
- Chin Tucks – Chin tucks are especially effective to stretch the muscles that tighten with kyphosis. While sitting or standing, just look straight ahead, chin parallel to the floor. Exhaling slowly, lightly tuck your chin toward your chest. Then, while inhaling, lift your chin to return to the original position. Repeat several times.
Addressing the Issue Now
Bad posture only gets worse with time, so as parents, it’s important to address any alignment or postural issues in our children sooner rather than later. |
Protectionism, policy of protecting domestic industries against foreign competition by means of tariffs, subsidies, import quotas, or other restrictions or handicaps placed on the imports of foreign competitors. Protectionist policies have been implemented by many countries despite the fact that virtually all mainstream economists agree that the world economy generally benefits from free trade.
Government-levied tariffs are the chief protectionist measures. They raise the price of imported articles, making them more expensive (and therefore less attractive) than domestic products. Protective tariffs have historically been employed to stimulate industries in countries beset by recession or depression. Protectionism may be helpful to emergent industries in developing nations. It can also serve as a means of fostering self-sufficiency in defense industries. Import quotas offer another means of protectionism. These quotas set an absolute limit on the amount of certain goods that can be imported into a country and tend to be more effective than protective tariffs, which do not always dissuade consumers who are willing to pay a higher price for an imported good.
Throughout history wars and economic depressions (or recessions) have led to increases in protectionism, while peace and prosperity have tended to encourage free trade. The European monarchies favoured protectionist policies in the 17th and 18th centuries in an attempt to increase trade and build their domestic economies at the expense of other nations; these policies, now discredited, became known as mercantilism. Great Britain began to abandon its protective tariffs in the first half of the 19th century after it had achieved industrial preeminence in Europe. Britain’s spurning of protectionism in favour of free trade was symbolized by its repeal in 1846 of the Corn Laws and other duties on imported grain. Protectionist policies in Europe were relatively mild in the second half of the 19th century, although France, Germany, and several other countries were compelled at times to impose customs duties as a means of sheltering their growing industrial sectors from British competition. By 1913, however, customs duties were low throughout the Western world, and import quotas were hardly ever used. It was the damage and dislocation caused by World War I that inspired a continual raising of customs barriers in Europe in the 1920s. During the Great Depression of the 1930s, record levels of unemployment engendered an epidemic of protectionist measures. World trade shrank drastically as a result.
The United States had a long history as a protectionist country, with its tariffs reaching their high points in the 1820s and during the Great Depression. Under the Smoot-Hawley Tariff Act (1930), the average tariff on imported goods was raised by roughly 20 percent. The country’s protectionist policies changed toward the middle of the 20th century, and in 1947 the United States was one of 23 nations to sign reciprocal trade agreements in the form of the General Agreement on Tariffs and Trade (GATT). That agreement, amended in 1994, was replaced in 1995 by the World Trade Organization (WTO) in Geneva. Through WTO negotiations, most of the world’s major trading nations have substantially reduced their customs tariffs.
The reciprocal trade agreements typically limit protectionist measures instead of eliminating them entirely, however, and calls for protectionism are still heard when industries in various countries suffer economic hardship or job losses believed to be aggravated by foreign competition.
Scientists and physicians developed a special formula called the Body Mass Index (BMI). BMI indicates how closely a person's body proportions correspond to the biological norm.
BMI is calculated by the following formula: BMI = m / h², where m is body mass in kilograms and h is height in meters.
In order to calculate your body mass index, you must specify the height and weight in corresponding fields.
The results are interpreted as follows:
- If BMI is less than 16, the body has a large deficit of body mass;
- From 16 to 18.5 indicates a body mass deficit (underweight);
- From 18.5 to 25, the person has normal weight;
- From 25 to 30 indicates excess weight (overweight);
- A BMI of 30 or above indicates obesity.
Moreover, scientists distinguish three degrees of obesity. In particular:
1. Body mass index of 30-35 indicates first degree of obesity.
2. Body mass index of 35-40 indicates second degree of obesity.
3. If body mass index is over 40, third-degree obesity, i.e. morbid obesity (morbid: relating to disease or pathology), is declared.
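As a quick illustration, the formula and the ranges listed above can be combined into a few lines of code. This is only a sketch of the classification given here; the function name is an assumption made for the example.

```python
def bmi_category(weight_kg, height_m):
    """Compute BMI = m / h^2 and map it to the ranges listed above."""
    bmi = weight_kg / height_m ** 2
    if bmi < 16:
        label = "large deficit of body mass"
    elif bmi < 18.5:
        label = "body mass deficit (underweight)"
    elif bmi < 25:
        label = "normal weight"
    elif bmi < 30:
        label = "excess weight (overweight)"
    elif bmi < 35:
        label = "obesity, first degree"
    elif bmi < 40:
        label = "obesity, second degree"
    else:
        label = "obesity, third degree (morbid)"
    return round(bmi, 1), label

# Example: 95 kg at 1.75 m
print(bmi_category(95, 1.75))  # (31.0, 'obesity, first degree')
```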
Obesity is an accumulation of excess fat in the body that threatens health. According to the World Health Organization, worldwide obesity has doubled since 1980.
However, many people forget that a body mass deficit (when a person’s weight is below the recommended norm – dangerous, excessive thinness) is as threatening to health as excess weight.
We wish you good health!
Q: What is a clinical trial?
A clinical trial is a research study to answer specific questions about vaccines, new therapies or new ways of using known treatments. Clinical trials (also called medical research and research studies) are used to determine whether new drugs or treatments are both safe and effective. Carefully conducted clinical trials are the safest and most reliable way to find treatments that work in people.
Q: What is a protocol?
A protocol is the study plan that all clinical trials follow. The plan is carefully followed to safeguard the health of the participants as well as to answer specific research questions. A protocol describes what types of people may participate in the trial; the schedule of tests, procedures, medications, and dosages; and the length of the study. While in a clinical trial, participants following a protocol are seen regularly by the research staff to monitor their health and to determine the safety and effectiveness of their treatment.
Q: What are the different types of clinical trials?
-Treatment trials test new treatments, new combinations of drugs, or new approaches to surgery or radiation therapy.
-Prevention trials look for better ways to prevent disease in people who have never had the disease or to prevent a disease from returning. These approaches may include medicines, vitamins, vaccines, minerals, or lifestyle changes.
-Screening trials test the best way to detect certain diseases or health conditions.
Q: What are the phases of clinical trials?
Clinical trials are conducted in phases. The trials at each phase have a different purpose and help scientists answer questions:
In Phase I trials, researchers test a new drug or treatment in a small group of people (20-80) for the first time to evaluate its safety, determine a safe dosage range, and identify side effects.
In Phase II trials, the study drug or treatment is given to a larger group of people (100-300) to see if it is effective and to evaluate its safety.
In Phase III trials, the study drug or treatment is given to large groups of people (1000-3000) to confirm its effectiveness, monitor side effects, compare it to commonly used treatments, and to collect information that will allow the drug or treatment to be used safely.
In Phase IV trials, post marketing studies delineate additional information including the drug's risks, benefits, and optimal use.
Q: Who can participate in a clinical trial?
Using inclusion/exclusion criteria is an important principle of medical research that helps to produce reliable results. The factors that allow someone to participate in a clinical trial are called "inclusion criteria" and those that disallow someone from participating are called "exclusion criteria".
These criteria are based on such factors as age, gender, the type and stage of a disease, previous treatment history, and other medical conditions. Before joining a clinical trial, a participant must qualify for the study.
Some research studies seek participants with illnesses or conditions to be studied in the clinical trial, while others need healthy participants. It is important to note that inclusion and exclusion criteria are not used to reject people personally. Instead, the criteria are used to identify appropriate participants, assemble a group of people with similar characteristics, and keep them safe.
The criteria help ensure that researchers will be able to answer the questions they plan to study.
Q: What happens during a clinical trial?
The clinical trial process depends on the kind of trial being conducted. The clinical trial team includes doctors and nurses as well as social workers and other health care professionals.
They check the health of the participant at the beginning of the trial, give specific instructions for participating in the trial, monitor the participant carefully during the trial, and stay in touch after the trial is completed.
Some clinical trials involve more tests and doctor's visits than the participant would normally have for an illness or condition. For all types of trials, the participant works with a research team. Clinical trial participation is most successful when the protocol is carefully followed and there is frequent contact with the research staff.
Q: What is informed consent?
Informed consent is the process of learning the key facts about a clinical trial before deciding whether or not to participate. It is also a continuing process throughout the study to provide information for participants.
To help someone decide whether or not to participate, the doctors and nurses involved in the trial explain the details of the study. Then the research team provides an informed consent document that includes details about the study, such as its purpose, duration, required procedures, and key contacts. Risks and potential benefits are explained in the informed consent document.
The participant then decides whether or not to participate in the study and indicates his or her decision by signing the document or declining to sign. Informed consent is not a binding contract; the participant may withdraw from the trial at any time.
Q: What should people consider before participating in a trial?
Participants or subjects should know as much as possible about the clinical trial. They should feel comfortable asking the members of the healthcare team questions about the study, the care provided while in a trial, and the cost of the trial.
The following questions might be helpful for the participant to discuss with the health care team. Some of the answers to these questions are found in the informed consent document.
- What is the purpose of the study?
- Who is going to be in the study?
- Why do researchers believe the new treatment being tested may be effective?
- Has it been tested before?
- What kinds of tests and treatments are involved?
- How do the possible risks, side effects, and benefits in the study compare with my current treatment?
- How might this trial affect my daily life?
- How long will the trial last?
- Will hospitalization be required?
- What type of long-term follow-up care is required?
- Who will pay for the treatment?
- Will I be reimbursed for other expenses?
- How will I know that the treatment is working?
- Will results of the trial be provided to me?
- Who will be in charge of my care?
Q: What are the benefits and risks of participating in a clinical trial?
Well-designed and well-executed clinical trials are the best approach for eligible participants to:
Play an active role in their own health care.
Gain access to new research treatments before they are widely available.
Obtain expert medical care at leading health care facilities during the trial.
Help others by contributing to medical research.
There are risks to clinical trials.
There may be unpleasant, serious, or even life-threatening side effects of treatment. The treatment may not be effective for the participant.
The protocol may require more of their time and attention than would a non-protocol treatment, including trips to the study site, more treatments, hospital stays or complex dosage requirements.
Q: Can a participant leave a clinical trial after it has begun?
Yes. A participant can leave a clinical trial at any time. When withdrawing from a trial, the participant should let the research team know about it and the reasons for leaving the study.
*Source: U.S. National Library of Medicine |
This module empowers students with body system lessons that are proven to increase student learning and application of material, help make difficult content easier to learn, and can be immediately implemented into medical skills and terminology courses…and they are affordable!
This module contains many fun and successful teaching tricks, having students build the Cardiovascular System, the Gross Respiratory Organs, and the Microscopic Structures of the Lungs. Students can 'see' the inter-relationship of these two systems as they build them on their flat board. Many times textbooks only show the systems independently, so this activity puts your students ahead. The activity also opens great windows for applying pathology concepts.
Students will love learning through this activity and will gain confidence in their ability to apply knowledge to these two major body systems.
This module contains a time frame, lesson plans, a supply list, classroom organization, and step-by-step building instructions with pictures.
The following is a list of the table of contents:
Lesson planning, Supplies, & Clay Technique….pg. 3-6
Objectives and Goals: .......................pg. 7
I. Diaphragm Construction...............pg. 8-11
II. Building the Cardiovascular System ..pg. 12-42
III. Building the Respiratory System:
Gross and Microscopic......pg. 43-73
Grading and Honoring Your Students' Work.....pg. 74-75 |
Whole number generators
Whole number generators are “machines” that output a whole number – an integer – for each natural number 1, 2, 3, … that is input to the machine. A whole number generator may have a simple rule such as
output(n) = output(n-1)+output(n-2)
or might be far more mysterious in its operation of generating output numbers. Whole number generators produce sequences of whole numbers that can be found in many areas of mathematics and science. Their study is fun, fascinating, and illuminating. |
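As an illustration, the simple rule above can be turned into a small program. The sketch below assumes starting values of 1 and 1 (the text does not specify them), which makes the machine produce the familiar Fibonacci sequence.

```python
def recurrence_generator(first=1, second=1):
    """Yield whole numbers following output(n) = output(n-1) + output(n-2)."""
    a, b = first, second
    yield a
    yield b
    while True:
        a, b = b, a + b
        yield b

# Print the first ten outputs of the machine.
from itertools import islice
print(list(islice(recurrence_generator(), 10)))
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

Changing the starting values or the rule itself gives a different whole number generator.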
OK, this page basically explains the whole of chemistry. This doesn’t mean you can get away with just revising this one page though….sorry about that.
So here is the big secret of chemistry: ELECTRONS OCCUPY “SHELLS” AROUND THE NUCLEUS.
There’s a few more rules to it than that though:
- Electrons occupy “shells” or “energy levels” around the nucleus.
- The lowest levels are the ones closest to the nucleus and these are the ones that are always filled first.
- Only a certain number of electrons are allowed in each level:
1st Shell – 2, 2nd Shell – 8, 3rd Shell – 8
- Atoms like their shells to be filled. If they have a full outer level, like the noble gases, they won’t react….if they don’t have a full outer level though, they will react with other atoms to try to fill it.
Working Out Electronic Structures
You need to be able to work out the electronic structures – i.e. how many electrons are in each level – for the first 20 elements.
It’s really easy, just use the following method:
- Look at the periodic table to see the number of protons an element has (usually the number in the bottom left)
- The number of electrons is equal to this number of protons.
- Follow the rules above about the number of electrons in each shell
- Write your answer as numbers with commas between them – e.g. 2,8,4
So let’s have a look for Chlorine. See if you can work it out, then click below to see if you were right.
Answer: If we look at the periodic table we see chlorine has 17 protons, so it must have 17 electrons. This means its electronic structure is 2,8,7.
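The same method can also be written as a short program. This is only a sketch of the simplified 2, 8, 8 shell-filling rule used at this level, so it is valid for roughly the first 20 elements; the function name is an assumption.

```python
def electronic_structure(atomic_number):
    """Fill shells in order using the simplified 2, 8, 8 rule (first 20 elements)."""
    capacities = [2, 8, 8, 2]  # the fourth shell only needs to hold 2 up to calcium
    shells = []
    remaining = atomic_number  # electrons equal protons in a neutral atom
    for capacity in capacities:
        if remaining <= 0:
            break
        filled = min(remaining, capacity)
        shells.append(filled)
        remaining -= filled
    return shells

print(electronic_structure(17))  # Chlorine -> [2, 8, 7]
print(electronic_structure(20))  # Calcium  -> [2, 8, 8, 2]
```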
NOTE: If asked to draw a diagram just do something like this: |
Third Grade Curriculum
Students will be focusing on what defines a Catholic. We will be studying about God as the Father, Son, and Holy Spirit. They will use the text, Bible readings, prayers, discussion, and activities to help us on our journey of living out the words we read and speak.
This area focuses on the various aspects of communication: reading, writing, spelling, listening, and speaking. It involves daily experiences that help the students develop impressions of quality literature, a chance to interact with the material through classroom discussion, drama, music, art, writing, and finally the opportunity of self-expression during which time the children will apply their above experiences into their real life situations.
In the beginning, students will review addition and subtraction facts and then go on to adding and subtracting two and three digit numbers with and without renaming. As the year progresses, we will work with money, telling time, fractions, multiplication, and division. It is important to memorize the basic facts by practicing at home. It would be very helpful if you could purchase or make flash cards to use in practicing.
We will be studying units about Landforms, Natural Resources, Animal Habitats, the Solar System, the Human Digestive System, States of Matter, and Heat and Sound. Students will be involved in independent and group projects.
Students will explore geography, history, multiculturalism, citizenship, economics, government, thinking skills, and current events around the world. |
By the mid-seventeenth century, the British actively sought ways to expand their overseas empire. To achieve this goal, they needed a strong navy and a healthy commercial network. The navy helped protect British merchants at home and in the colonies; meanwhile, duties on commerce funded much of the navy’s rapid growth. As these military and commercial interests melded together, the government developed policies based on the theory of mercantilism to meet the needs of the empire. By the early eighteenth century, the British worked out a system that enlarged the prestige and power of the empire as well as provided benefits to many people in the mother country and the colonies. The system also helped set the foundations for the American Revolution.
6.2.1: Developing a Commercial Empire
During the 1650s, Parliament thought more about the commercial interests of England. Merchants in and out of the government sought ways to extend English control over the carrying trade, or shipping, to the New World while also improving their own financial situation. To undercut the Dutch monopoly, Parliament passed the Navigation Act of 1651. The measure required all goods going to and from the colonies to be transported on English or colonial ships. In theory, it closed colonial ports to foreign ships, but Parliament neglected to include a strong enforcement provision in the act. Therefore, the colonists routinely smuggled in goods from the Dutch and the French. After the Restoration of 1660, Charles II examined the commercial potential of the empire. Merchants and manufacturers continued to support the expansion of trade, but so too did many of the king’s loyal supporters. Oliver Cromwell’s rule left many royalists, including the king, in dire financial situations. Thus, economic motives pushed Charles II to implement policies based on the theory of mercantilism.
The Mercantilist System
Generally, mercantilism sought to strengthen a nation at the expense of its competitors by increasing its wealth, population, and shipping capabilities. In some ways, mercantilism was the ultimate expression of national greed. A country could increase its wealth by accumulating gold and silver. Short of resorting to piracy to steal such precious metals, a country needed a favorable trade balance. In England, this effort led the government to encourage domestic manufacturing. To enlarge the merchant marine, the government sought to monopolize the carrying trade between the mother country and the colonies. With a monopoly, British shippers would need more ships and trained sailors, both of which the navy could use in times of war. Finally, population increases at home and in the colonies helped to provide more consumers for manufactured goods; some of the growth came from natural increase while some came from immigration.
In the mercantilist system, colonies played an important role in developing a successful empire; consequently, most European nations sought New World colonies in the seventeenth century. Colonies provided the raw materials to fuel industrial growth. In British North America, most settlers chose to farm because of the availability of fertile land. Initially, they did so out of necessity. The distance to England, coupled with the smaller size of ships in the seventeenth century, meant the colonists needed to provide for themselves. For much of the colonial period, however, they continued to farm because, under mercantilism, it could be quite profitable. At the same time, they engaged in some manufacturing for local markets; they did not compete directly with the industries developing in England. Most of their finished goods such as flour or iron required only slight changes from their raw state and aided colonists in growing more raw materials. Over time, regional differences developed in the colonial economies that stemmed from the availability of land and labor.
In the New England colonies, most farmers grew for self-sufficiency rather than for the market because of the long winters and the rocky soil. However, the region engaged in whaling and fishing for the export market. It also became a leader in shipbuilding. In the middle colonies, most farmers grew grains such as wheat, rye, oats, barley, buckwheat, and corn. They also grew a wide variety of vegetables, flax, and hemp. Additionally, they raised livestock. By the mid-eighteenth century, the region also led the colonies in iron manufacturing. In the Chesapeake colonies, most colonists remained committed to tobacco production. However, they also raised wheat, corn, flax, hemp, and apples to help offset bad tobacco harvests. In the southern colonies, North Carolina turned to its forests for export goods, which yielded the tar, pitch, and timber necessary for shipbuilding. Besides these naval stores, interior settlers ran pottery shops and tanneries. The shorter winters in South Carolina and Georgia allowed colonists to export rice, indigo, and salt pork often to the Caribbean colonies, goods which they exchanged for slaves. The southern colonies also actively participated in the deerskin trade.
Extending Imperial Control
Knowing colonies served a vital role in the success of any empire, the British set out to expand their presence in the New World during the Restoration period. Through proprietary arrangements, Charles II closed the gap between the New England and Chesapeake colonies as well as extended the crown’s control south of Virginia by the early 1680s. By eliminating the Dutch from North America, the British paved the way for increasing their volume of trade with their North American and Caribbean colonies. To further that goal, the government proposed a series of trade laws to improve the British position vis-à-vis their imperial rivals.
First, Parliament passed the Navigation Act of 1660. The measure reiterated the provisions of the 1651 act, which restricted all shipping in the empire to English and colonial vessels. It also added a provision listing several “enumerated articles” that could only be traded within the empire. These goods included sugar, tobacco, cotton, wool, and indigo. Theoretically, the restrictions helped make England more self-sufficient and increased the crown’s tax revenue. Second, Parliament approved the Staple Act of 1663. It placed restrictions on foreign goods imported into the colonies by requiring merchants to ship through an English port. The act made the colonies more dependent on the mother country because England became their staple, or market, for all foreign goods. Finally, Parliament voted in favor of the Plantation Duty Act of 1673. Designed to cut down on smuggling, the act established provisions to collect customs duties in colonial ports before the goods shipped to other colonial ports. Under the measure, the British government stationed customs collectors in the colonies for the first time. These agents reported to their superiors in England, not to the colonial governor or assembly.
The Glorious Revolution, when William and Mary came to power, brought about new mercantilist policies for three reasons. First, the government wanted to quell the unrest in the colonies caused by James II’s efforts to consolidate royal control. William and Mary hoped to find a solution that would meet both the economic and political needs of English merchants and colonial planters. Second, lax enforcement of the Navigation Acts during King William’s War (1689-1697) increased smuggling and privateering, which put the economic health of the empire at risk. Third, after the adoption of the English Constitution, Parliament determined the empire’s fiscal policy. Dominated by wealthy landowners and merchants, the House of Commons wanted to assure political and economic strength. Thus, Parliament, with the crown’s approval, took measures to strengthen the trade restrictions on the colonies.
Parliament passed the Navigation Act of 1696 and the Trade Act of 1696. The Navigation Act sought to shore up previous acts by closing the loopholes that contributed to lax enforcement. In order to improve the collection of duties in the colonies, the law granted royal officials in the colonies the right to seek writs of assistance to search for and to seize illegal goods. The Trade Act created the Board of Trade, an administrative agency, to replace the more informal Lords of Trade created under Charles II. British merchants wanted a stronger body to develop and supervise commerce, since the Lords of Trade failed to devote enough attention to the colonies. William and Mary approved the change largely because, like many merchants, they believed stronger control over colonial development would have a positive effect on the British economy.
In 1697, the Board of Trade recommended the creation of Vice Admiralty Courts in the colonies. By using these courts, the Board denied colonists accused of violating the Navigation Acts the right to a jury trial because most colonial juries would not convict people accused of smuggling. The Board also recommended several other measures to restrict colonial industry and trade. For example, the Woolens Act of 1699 prevented colonists from producing wool goods for export; the Hat Act of 1732 did the same for hats. The most controversial of these measures was the Molasses Act of 1733, which raised the duties on rum, molasses, and sugar imported into the colonies from foreign countries. In time, most merchants realized that the duties on molasses did more to harm than help trade. Seeing as the act largely defied the logic of mercantilism, Robert Walpole, the king’s chief minister from 1720 to 1742, chose not to enforce the measure. His decision led to a period of “salutary neglect,” where government officials largely ignored economic development in the colonies. In instances where the British government chose to enforce its economic policies, many colonists simply evaded the law by smuggling. In the years leading up to the American Revolution, some merchants—especially those in Boston—found the Dutch, French, and Spanish more than willing to help them evade British trade laws. While certainly not the only reason for tensions between the colonists and the crown in the mid-eighteenth century, the decision to enforce the Navigation Acts and add additional regulations caused problems.
Trade and the Consumer Culture
While many colonists objected in principle to trade restrictions imposed by Parliament in the seventeenth and eighteenth century, few had reason to complain about the positive economic benefits of being part of the British Empire. Policymakers designed the Navigation Acts to increase trade relationships between the mother country and her colonies. If the policies significantly harmed colonial economies, they became pointless because colonists would not buy British goods. Imbedded into the trade acts were benefits for the colonists. First, the colonies had a monopoly over the enumerated articles. No one in England, for instance, could grow tobacco or indigo. Second, the colonists received rebates on goods imported from England, so they tended to pay lower prices for finished products. Third, the colonists did not need to worry about piracy because they fell under the protection of the Royal Navy.
Greed and self-interest underscored the theory of mercantilism at the national and the personal level. British merchants clearly had a stake in seeing imperial commerce thrive, but so too did the colonial farmers and shippers. With the exception of the Puritans, most people migrating to North America wanted to improve their economic position. American colonists, according to historian T.H. Breen, “obeyed the Navigation Acts because it was convenient and profitable for them to do so, not because they were coerced.” In the eighteenth century, economic growth, coupled with lower tax rates in British North America, provided the colonists with not only a decent standard of living but also more disposable income. Most colonists wanted very much to participate in the consumer revolution happening in Europe. In other words, they wanted to purchase consumer goods considered luxuries in the seventeenth century such as table and bed linens, ceramic cups and saucers, pewter cutlery, and manufactured cloth and clothing.
Throughout the eighteenth century, the demand for imported consumer items grew in the North American colonies. The more raw materials the colonists exported, the more necessities and luxury items they could purchase on credit. British and colonial merchants also worked to fuel demand for goods by advertising in the growing number of colonial newspapers. Likewise, hundreds of peddlers spread trade goods from colonial seaports to the interior. Despite the self-sufficient farmer’s image carrying a great deal of weight in popular memory of colonial America, the colonists never achieved the means to take care of all of their own needs. So, they imported basic necessities and niceties.
The fluid nature of colonial society meant that the elite wanted to set the standards for polite society, marked especially by the rise of a tea culture, as a means to distance themselves from the lower classes. They used their ability to purchase luxury items as a way to display their status. At the same time, the lower sorts used their disposable income to erase the line between the elites and the commoners. Colonial women took a leading role in the consumer revolution. They had a good deal to gain from importing household items because they would no longer have to produce them in the home and could use those goods to mark their families’ place in American society.
Over time, the large number of imports helped to deepen the connection between the mother country and the colonies, and in some respect, helped to build a common identity among the colonies because everywhere people purchased the same goods. The consumer culture effectively created material uniformity. Moreover, the expanding coastal and overland trade brought colonists of different backgrounds into greater contact with one another. It gave them added opportunities to exchange ideas and experiences, even though they remained largely unaware of the importance of such connections as they continued to see themselves as New Yorkers, Virginians, and Carolinians, not Americans. T.H. Breen concluded that “the road to Americanization ran through Anglicization.” In other words, the colonists had to become more integrated in the British Empire before they could develop a common cultural identity as Americans.
6.2.2: Developing a Political System
Throughout the colonial period, the British struggled to determine how much authority to exert over the colonies. As England settled the New World, expedience usually determined the political system of each colony. As such, three models of government emerged: the royal colony, the proprietary colony, and the corporate colony. In each system, a governor shared power with a legislature usually composed of an upper house appointed by the governor and a lower house elected by the property-holding men. The chief difference between the models came in the selection of the governor. In the royal colonies, the crown appointed the governor. In the proprietary colonies, the proprietor chose the governor with the crown’s approval. In the corporate colonies, the voters selected the governor and did not need the crown’s approval. By the late seventeenth century, to further the goals of mercantilism, the crown and Parliament looked for ways to achieve greater control while also balancing the expectations of the colonies.
Initially the British administration of the colonies was somewhat haphazard, which explained why the different models of government emerged. However, the monarchy needed to find an arrangement to administer the colonies that would benefit all interested parties so as to successfully use the colonies to promote the economic development of the mother country. In the 1650s, Parliament began to tinker with the administrative system when they passed the Navigation Act of 1651 but largely left the colonies to govern themselves. During the Restoration period, Charles II and James II attempted to assert greater control over the colonies. They reorganized the existing colonies as royal colonies and created new proprietary colonies subject to greater royal authority.
The unrest caused by the creation of the Dominion of New England, whereby James II eliminated the vestiges of self-government by creating one administrative unit to oversee the northern colonies, suggested the mother country needed a new governmental policy. During the reign of William and Mary, the British finally found a working arrangement to manage its colonies that pleased merchants and colonists; the government retained some of the previous policies when it came to trade issues in an effort to bind the colonies more closely with the mother country. Thus, Parliament passed a revised Navigation Act and created the Board of Trade. At the same time, William and Mary restored the colonial assemblies, which their predecessor had disbanded. This compromise met the needs of both the colonies and the empire. Under the system, says historian Oliver Chitwood, “neither liberty nor security would be sacrificed” because “each province was to rotate on its own axis, but all of them were to revolve around England as the center of the imperial system.” The compromise would only work so long as the mother country could keep the colonies in line. Sentimental attachment to England helped in this effort, but so too did economic self-interest on the part of the colonies and the threat of force on the part of the mother country.
After the Glorious Revolution, Parliament held more power over matters of taxation and expenditures. However, the monarchy still largely supervised the colonies. Over the course of the eighteenth century, several different administrative bodies had their hand in colonial affairs. The Privy Council, the king’s official advisers, took the lead in colonial matters such as making royal appointments, issuing orders to governors, disallowing colonial laws in violation of English law, and hearing appeals from the colonial courts. Through a variety of secretaries, subcommittees, and boards, the Privy Council handled these tasks. The Treasury Board, which oversaw the empire’s money, was responsible for enforcing all trade restrictions and collecting all customs duties. The Admiralty supervised the Royal Navy that protected trade to and from the colonies. Further, the High Court of Admiralty, or its subsidiary Vice Admiralty Courts, tried cases relating to violations of the Navigations Acts. Finally, the Board of Trade advised the monarchy and Parliament on most colonial matters relating to commerce, industry, and government. Although the Board of Trade could not make any laws or official policies, the Privy Council frequently accepted its recommendations about appointments, laws passed by the colonial assemblies, and complaints made by the assemblies.
The system of colonial administration set up in the late seventeenth century provided for British oversight and local autonomy regardless of whether the colonies were royal, proprietary, or corporate. By the mid-eighteenth century, the royal colonies included New Hampshire, Massachusetts, New York, New Jersey, Virginia, North Carolina, South Carolina, and Georgia. The proprietary colonies included Pennsylvania, Delaware, and Maryland. The corporate colonies included Connecticut and Rhode Island. Each colony developed governmental structures that resembled the structure of the British government with the king, his council, and Parliament in the form of the governor, the upper house, and the lower house. A colonial agent, who represented the colonies’ interests in London, also aided the governor and the assembly. Moreover, each colony had a judiciary modeled on the British system with justices of the peace, county courts, and circuit courts. Finally, in each colony the county or the township dominated local politics. The county system prevailed in the southern and middle colonies, while the township system prevailed in the northern colonies. Both took responsibility for issues such as local taxation, defense, public health, and probate.
Governors, who served at the pleasure of the king or the proprietor, functioned as the chief royal officials in the colonies. They had the power to do what the king did at home without seeking prior approval from Parliament. In the eighteenth century, the Board of Trade drafted the governors’ orders for most of the colonies. These instructions underscored the mercantilist system in that they guided the governor to promote legislation to benefit the mother country while also seeking to improve the general welfare of the colony. Once in office, the governor became the commander of the colonial militia. He also held the power to decide when the assembly would meet and when it would disband and to approve or to veto all legislation passed by the assembly. Furthermore, the governor sent all official communication to London, which included sending colonial laws for approval by the crown. Finally, he appointed all judges, magistrates, and other officials, and he made recommendations to the crown or the proprietor regarding the composition of his advisory council. The governor’s council had three functions: it advised the governor on all executive decisions, it acted as the upper house of the legislature, and in conjunction with the governor, it served as the highest appeals court in the colony.
The colonial assemblies had the power to initiate legislation. More importantly, they controlled the budget because they voted on all taxes and expenditures, including colonial officials’ salaries and defense appropriations. Members were immune from arrest during assembly sessions and could speak freely and openly in those meetings. Finally, the assemblies had the right to petition the monarchy for the redress of grievances. By modern standards, the colonial assemblies were far from democratic. Nevertheless, more men could vote in America than in England because of the wider distribution of land ownership. At the local level, the county or township administrators supervised the election of the assembly. Those chosen increasingly believed they had the obligation to represent the local entity that elected them. This idea of direct representation differed from the British system, where Parliament supported the concept of indirect or virtual representation. Members believed they represented the whole empire, not just the region they hailed from.
As in England, during the eighteenth century the power of the assembly in the colonies grew in relation to the governor, meaning the colonists expected lax enforcement of royal dictates as well as control over most colonial matters. At the same time that Parliament adopted a policy of salutary neglect when it came to trade, the crown allowed the colonies greater political control over their affairs. This habit of self-government stemmed from two factors. First, the distance between the mother country and her colonies mitigated the ability to keep tight control over colonial affairs. Colonial assemblies often made decisions because the time lag in communication between the two continents simply made waiting on answers from London infeasible. Moreover, in the eighteenth century the crown often found itself distracted by other problems such as the wars with France and Spain. Second, more men met the property qualifications to vote in the colonies, therefore felt a more direct connection to their government. As such, the well-to-do who served in the assemblies needed to be more responsive to the needs of their constituents to stay in office. Like their counterparts in Britain, colonial leaders engaged in patronage where they awarded commissions, judgeships, and land grants to their supporters. In turn, most colonists put greater faith in their assemblies than in their governors because the colonists helped elect or appoint members to serve in those assemblies. As historian Jack P. Greene points out, “coherent, effective, acknowledged, and authoritative political elites” dominated local politics. They possessed “considerable social and economic power, extensive political experience, confidence in their capacity to govern, and…broad public support.”
To maximize the interests of their fellow colonists, the assemblies frequently used the power granted by their colonial charters to put pressure on the governor. On several occasions, the assemblies made official complaints about their governors’ power to determine when and for how long they could meet. When the monarchy refused to address the problem, the assemblies used their power to control the budget. Should a governor veto legislation the assembly favored, it slowed and sometimes stopped the appropriation of funds for the governor’s salary or defense measures. In the 1720s and 1730s, the governors in New York, Massachusetts, and New Hampshire went without pay for several years. According to historian Alan Taylor, the colonists also “could effectively play…dirty politics.” They sometimes resorted to rumors and gossip to undermine the authority of their governor and force his recall by officials in London. In the 1700s, New Yorkers exposed the then governor, Lord Cornbury, as a cross-dresser, and British officials soon removed him from office. Many governors tried to use their powers to grant land or bestow patronage to counter the power of the assembly, but their efforts rarely worked.
In the colonies, political tension was common because the assemblies constantly looked for ways to expand their power and responsibility over colonial affairs. Meanwhile when new governors arrived from England, they looked to shuffle the local power structure to win colonists over to their policies. In the end, most governors accepted the assemblies’ demands in order to retain their position, thus perpetuating the idea of self-government in the colonies. Many colonists believed they lived under the most enlightened form of government in Europe. Like their counterparts in England, the colonists believed the Bill of Rights protected their liberties. In the eighteenth century, the colonists concluded that they were free to protest against objectionable policies and laws emanating from Parliament because they were British citizens. Moreover, they expected the balance of power to remain in their favor since the governors often came around to their position.
6.2.3: Before You Move On...
During the seventeenth and eighteenth centuries, the British sought to expand their empire. Using the theory of mercantilism, they set up an economic and political system designed to benefit the mother country and her colonies. Through the passage of the Navigation Acts and the creation of the Board of Trade, the government sought to increase the nation’s wealth through commercial ties with the New World. The colonies provided raw materials for British industry and, in turn, purchased finished goods produced in the mother country. To further their economic goals, the monarchy also sought to extend greater political control over the colonies. Colonial resistance to James II’s policies prompted William and Mary, as well as their successors, to blend royal control with representative assemblies. The large volume of trade brought benefits to most people involved in the system and thereby increased Britain’s power over its European rivals. However, lax enforcement of many of the regulations, plus the growing power of the colonial assemblies, planted seeds of discontent that boiled over in the 1760s.
The Navigation Acts specified enumerated goods that
a. colonists could not export.
b. colonists could manufacture the same goods as produced in Britain.
c. colonists could only ship within the British Empire.
d. colonists could only trade to other colonists.
Most colonists in eighteenth-century North America were largely self-sufficient, so they did not need to import consumer goods from Britain.
Colonial governors possessed the right to veto legislation passed by the colonial assemblies.
During the eighteenth century, colonial assemblies
a. lost their power to appropriate taxes.
b. were appointed by the king.
c. included both men and women.
d. expanded their power and influence. |
Volcanic eruptions that occur in the tropics can cause El Niño events, warming periods in the Pacific Ocean with dramatic global impacts on the climate, according to new research.
Enormous eruptions trigger El Niño events by pumping millions of tons of sulfur dioxide into the stratosphere, which form a sulfuric acid cloud, reflecting solar radiation and reducing the average global surface temperature, according to the study.
The study used sophisticated climate model simulations to show that El Niño tends to peak during the year after large volcanic eruptions like the one at Mount Pinatubo in the Philippines in 1991.
“We can’t predict volcanic eruptions, but when the next one happens, we’ll be able to do a much better job predicting the next several seasons, and before Pinatubo we really had no idea,” says Alan Robock, coauthor of the study and a professor in the environmental sciences department at Rutgers University-New Brunswick.
“All we need is one number—how much sulfur dioxide goes into the stratosphere—and you can measure it with satellites the day after an eruption.”
The El Niño Southern Oscillation (ENSO) is nature’s leading mode of periodic climate variability. It features sea surface temperature anomalies in the central and eastern Pacific. ENSO events (consisting of El Niño or La Niña, a cooling period) unfold every three to seven years and usually peak at the end of the calendar year, causing worldwide impacts on the climate by altering atmospheric circulation, the study notes.
Strong El Niño events and wind shear typically suppress the development of hurricanes in the Atlantic Ocean, the National Oceanic and Atmospheric Administration says. But they can also lead to elevated sea levels and potentially damaging cold season nor’easters along the East Coast, among many other impacts.
Sea surface temperature data since 1882 document large El Niño-like patterns following four out of five big eruptions: Santa María (Guatemala) in October, 1902; Mount Agung (Indonesia) in March, 1963; El Chichón (Mexico) in April, 1982; and Pinatubo in June, 1991.
The study focuses on the Mount Pinatubo eruption because it is the largest and best-documented tropical eruption of the modern observational era. It ejected about 20 million tons of sulfur dioxide, Robock says.
Cooling in tropical Africa after volcanic eruptions weakens the West African monsoon, and drives westerly wind anomalies near the equator over the western Pacific, the study says. The anomalies are amplified by air-sea interactions in the Pacific, favoring an El Niño-like response.
Climate model simulations show that Pinatubo-like eruptions tend to shorten La Niñas, lengthen El Niños and lead to unusual warming during neutral periods, the study says.
If there were a big volcanic eruption tomorrow, Robock says, he could make predictions for seasonal temperatures, precipitation, and the appearance of El Niño next winter.
“If you’re a farmer and you’re in a part of the world where El Niño or the lack of one determines how much rainfall you will get, you could make plans ahead of time for what crops to grow, based on the prediction for precipitation,” he says.
The journal Nature Communications has published the research online.
Source: Rutgers University |
Mexican general and statesman, President (1877–80; 1884–1911). He led a military coup in 1876 and was elected President the following year. During his second term of office he introduced a highly centralized government, backed by loyal mestizos and landowners, which removed powers from rural workers and American Indians. Díaz promoted the development of Mexico's infrastructure and industry, using foreign capital and engineers to build railways, bridges, and mines. Eventually the poor performance of Mexico's economy and the rise of a democratic movement under Francisco Madero (1873–1913) contributed to Díaz's forced resignation and exile in 1911.
The glowing face of the July full Moon will pass through the centre of Earth’s darkest shadow on the night of Friday July 27 and Saturday 28.
This celestial alignment will mark the start of the total lunar eclipse and the emergence of the so-called Blood Moon.
The last time a lunar eclipse appeared was six months ago on the night of January 31, when the Super Blue Blood Moon graced the skies.
Unlike the full Moon and new Moon phases, which arrive each month like clockwork, lunar eclipses only occur once or twice a year on average.
So why do the normal phases of the Moon seem so regular while lunar eclipses appear to be out of sync?
A total lunar eclipse happens when the Sun, the Earth and the Moon align perfectly, allowing the Earth's satellite to dip into the planet's shadow.
But both the Earth and the Moon revolve around the Sun and each other on slightly different planes, making this necessary alignment an irregular occurrence.
When compared with the Earth's orbit around the Sun, the Moon's trajectory is slightly tilted relative to the ecliptic plane.
Eclipse 2018: The Blood Moon typically occurs once every six months or so
This means the Moon is sometimes above the plane and sometimes below it, and only rarely are the Sun, the Earth and the Moon precisely aligned.
The Moon's orbit around the Earth is tilted about 5.15 degrees in relation to the plane on which the Earth circles the Sun.
If both planes were perfectly aligned with one another, we would have a solar and a lunar eclipse every single month.
Because of the tilt, the Moon’s path across the stars also appears to vary from month to month.
Astronomer Graham Jones, of Ten Sentences, explained: “Since the Moon orbits Earth once a month, why don’t we have 12 or 13 eclipses every year?
“I organise solar eclipse workshops for students, and this question has proven thought-provoking.
“The easy answer is that the Moon’s orbit around the Earth is tilted, by five degrees, to the plane of Earth’s orbit around the Sun.
“As a result, from our viewpoint on Earth, the Moon normally passes either above or below the Sun each month at new Moon.”
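For readers who want to see the geometry in numbers, here is a minimal Python sketch of the argument above. It treats the Moon's ecliptic latitude as a simple function of its angular distance from the nearest orbital node and checks it against an eclipse limit. The 5.15-degree tilt is the value quoted above, but the 0.9-degree eclipse limit and the function names are illustrative assumptions, not figures from NASA or the astronomers quoted here.

```python
import math

# Illustrative values: the tilt is quoted in the article above; the
# eclipse limit is an assumed round number for a total lunar eclipse.
ORBIT_TILT_DEG = 5.15
ECLIPSE_LIMIT_DEG = 0.9

def moon_ecliptic_latitude(angle_from_node_deg: float) -> float:
    """Approximate the Moon's ecliptic latitude (degrees) from its
    angular distance along the orbit from the nearest node."""
    return math.degrees(
        math.asin(math.sin(math.radians(ORBIT_TILT_DEG)) *
                  math.sin(math.radians(angle_from_node_deg))))

def total_eclipse_possible(angle_from_node_deg: float) -> bool:
    """A total lunar eclipse is only possible when the full Moon sits
    close enough to the ecliptic, i.e. near one of its two nodes."""
    return abs(moon_ecliptic_latitude(angle_from_node_deg)) <= ECLIPSE_LIMIT_DEG

for angle in (0, 5, 10, 20, 45, 90):
    lat = moon_ecliptic_latitude(angle)
    print(f"{angle:2d} deg from node -> latitude {lat:4.2f} deg, "
          f"eclipse possible: {total_eclipse_possible(angle)}")
```

Running the sketch shows that the full Moon has to fall within roughly ten degrees of a node for its latitude to stay inside that limit, which is why the alignment only comes around a couple of times a year rather than every month.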
Eclipse 2018: The full moon takes on a red colour in the Earth's shadow
Eclipse 2018: Astronomers expect the Blood Moon to last one hour and 43 minutes
According to NASA’s Goddard Space Flight Center, the process is similar to that of the seasons.
Just as winter and summer arrive every six months, so, on average, does a lunar eclipse.
There are however some exceptions to this rule.
Lunar eclipse data calculated by NASA’s Goddard shows the next total lunar eclipse falls on the night of January 21, 2019, but there will not be another total eclipse until May 26, 2021. |
"I have a dream that one day this nation will rise up and live out the true meaning of its creed: 'We hold these truths to be self-evident, that all men are created equal.'"One of the most important events in American history, the Civil Rights Movement brought about progress towards racial equality under the law, after America largely spent the hundred years after the Civil War ignoring the fact that blacks and other minorities were still being treated like second class citizens with little to no rights in many parts of the country. The Southern states, despite losing the The American Civil War and the abolition of slavery, had found numerous loopholes to keep blacks down: "Jim Crow" laws were drafted following the end of reconstruction in many Southern states, while the hypocritical and inherently flawed concept of "Separate but Equal" segregation denied minorities in the South basic rights. Black people even found it difficult to vote, despite having the right to, since states could (and did) impose literacy tests, which were often rigged by having questions that were either impossible to answer or deliberately ambiguous, poll taxes, or even making people guess the number of jellybeans inside a jar.note Note that discrimination was not exclusive to African Americans; Hispanics, Natives, Asians, Jews, Irish, and others were oppressed in varying ways as well, to say nothing about the LGBT community who had to had to hide their true natures hidden for fear of their well being and often their lives. The date when the civil rights movement started is not definitive and is still debated among historians; some credit the formation of the National Association for the Advancement of Colored People (NAACP) in 1909, a few point to Franklin D. Roosevelt ending racial discrimination in the federal government, others say when Harry Truman forcibly integrated the US Army during his presidency, and others point to the role of Soviet and Maoist funding, cultural contacts, moral support (even after the Sino-Soviet Split and border wars in 1960 it remained an issue they could agree on) and agitation on the behalf of African-Americans and Africans in general in the U.N. and off-the-books. Most often though, two moments in the 1950s stand out as the turning points which brought the movement together as far as catalysts go. The first one was Brown v. Board of Education of Topeka, a 1954 Supreme Court ruling that struck down the controversial 1896 Plessy v. Fergeson Supreme Court ruling which legalized segregation. Brown was a 9-0 ruling that basically called out the utter hypocrisy of segregation by way of pointing out that "separate but equal" was essentially code for "white people get nice things, but black people get barely functioning, barely usable versions of what white people take for granted." Famously, Chief Justice Earl Warren's ruling stated "Separate educational facilities are inherently unequal." The second catalyst was a moment towards the end of 1955, when a woman by the name of Rosa Parks refused to give up her seat to a white person, as was demanded by standard bus policy at the time in the city of Montgomery, Alabama and was arrested, gaining national attention and giving civil rights groups a chance to unify behind a symbol. Contrary to popular belief, the act was not an accidental act of protest;note Parks was an activist affiliated with the NAACP and was selected to test the segregation laws in court. 
Additionally, she was not the first person to resist the segregated bus policies either, but she was the one the NAACP decided to use to draw attention to the issue. After Rosa's publicized arrest, a young minister named Martin Luther King Jr., along with local NAACP head E. D. Nixon, decided to use her arrest as the rallying cry to unite and mobilize the black community of the south to end bus discrimination via a mass boycott of the offending bus company. It was a long struggle, but King and the movement prevailed against the municipal government's frantic attempts to frustrate them and acts of violence by both local and out-of-town thugs trying to intimidate them. Meanwhile, the north had similar incidents, such as in 1957 when the African American family of Bill and Daisy Meyers attempted to move into Levittown, Pennsylvania, one of the famed suburban projects created by William Levitt to be model communities, for whites only, that is. Although they and their supporters wanted no trouble, their very presence revealed that there was a lot of foul bigotry in them Little Boxes made out of Ticky-Tacky. Thus, their summer was a living hell, with angry mobs, destructive riots, and systematic racist harassment, aided and abetted by indifferent local police, which finally prompted the state authorities to step in to stop it. Throughout it all, the Meyers and their friends stuck it out to become heroes who impressed Martin Luther King and Jackie Robinson among others; Daisy was not called "The Rosa Parks of the North" for nothing. These acts of heroism helped inspire similarly styled boycotts and "sit-ins", which preached non-violent confrontation (inspired by Mahatma Gandhi's famous series of non-violent protests which helped win India its own independence) with the status quo of the south, a factor that made for a great deal of televised theater as peaceful black protesters and white sympathizers often found themselves being beaten or hosed down with fire hoses by local police departments, who thuggishly enforced the racist status quo. Martin Luther King Jr. became the most notable leader of the civil rights movement, and became the first president of the Southern Christian Leadership Conference upon its founding in 1957. The SCLC, along with the NAACP and ACLU, was at the head of the fight, using the boycott and non-violent protests to make their point. Their work would have such great success and influence that, by the 1960s, the anti-war movement (for the most part) adopted the same non-violent approach as the civil rights movement. King's Crowning Moment of Awesome can be said to have come on August 28, 1963, when he gave his "I Have a Dream" speech at the Lincoln Memorial in Washington, D.C., which for many summed up the importance of the movement and the future it was striving to achieve. Other people found their own glory. The Freedom Riders, for example, tested out a favorable Supreme Court decision on intercity bus stations to challenge segregation in the face of vicious resistance. That resistance included outright terrorism, such as the infamous bombing of the 16th Street Baptist Church in Birmingham, Alabama, that killed four little girls, in the hopes of cowing African-Americans into submission. In 1964, the activists took it up yet another level as they dared to enter the lion's mouth in Mississippi, the most virulently segregated state of them all, with "Freedom Summer".
In that summer of Mississippi Burning, idealistic northern college students, following the lead of the local activist leadership, took on the racist establishment with education, while their enemies were so afraid that they loaded up on cops and even a tank to stop them. Unfortunately for the bigots, they were stunned to see that the more they frantically tried to intimidate and kill their "uppity" opponents, the more they shot their own cause in the foot, driving national sympathy towards their non-violent enemies, who refused to be cowed. In the end, they learned to their horror that their foes would go down in history as heroes, while they would be remembered as violent, reactionary bullies savagely fighting to defend a social order that history has condemned as evil. Despite the significance of this movement, evidence of it was hardly seen in popular culture until later in The '60s. The mainstream media largely ignored the movement until the late 1950s, when the struggle and the police violence against members of the movement began to be filmed, serving as ready-made fodder for the growing television news genre. This interest was sparked in part by FCC head Newton Minow's embarrassing "Vast Wasteland" speech excoriating TV's vapidity, which the TV networks were determined to prove wrong. To the networks, the civil rights movement was perfect material to present quickly: it was dramatic, given the violence protesters were enduring; the sides were easy to distinguish, with the good guys predominantly black and peaceful and the villains white authorities acting like crazed brutes; and the story's theme, the state of human rights in America, was a national issue no one was going to dispute as important to discuss. For their part, Martin Luther King Jr. and company realized this media situation themselves and proved quick studies in media savvy, working the reporters well and taking advantage of the spectacle their enemies kept providing. In fact, King would select certain southern cities, gambling that the authorities there were such bullying racist knuckleheads that they would create dramatic footage of themselves going berserk at peaceful protesters, footage no one could ignore. As it happened, most municipal figures like Commissioner of Public Safety "Bull" Connor in Birmingham, Alabama, got suckered into that trap with bloody crackdowns that were condemned around the world. Politically, President Dwight D. Eisenhower was infamously silent on the matter in public, though in private he supported desegregation and even authorized the use of the 101st Airborne to enforce desegregation in Arkansas, a state whose governor (Orval Faubus, not George Wallace as most people think) tried to use the National Guard to prevent black students from attending white schools. Both John F. Kennedy and Lyndon Johnson were initially apprehensive about the movement; much of the Democratic Party's power base was in the South, and neither wanted to alienate those supporters. However, King and company, using the tactics described above, were able to force the issue to the point where the White House had to act. Furthermore, all this happened during The Cold War, and the USA became painfully aware that it was hardly going to be able to claim moral superiority over the Communist Bloc when this racist brutality was being exposed around the world.
So those Presidents managed to move the issue through Congress not just with intense lobbying, of which Johnson, a proud Southerner himself, was a master, but also by helping King and company organize major events like the 1963 March on Washington. This culminated in the 1965 Voting Rights Act, which made illegal the "Jim Crow" trickery that kept minorities from being able to vote. Harry Truman also publicly supported equal rights for African Americans (famously saying "My forebears were Confederates... but my very stomach turned over when I had learned that Negro soldiers, just back from overseas, were being dumped out of Army trucks in Mississippi and beaten."), but Congress practically ignored his proposals. Nonetheless, he desegregated the armed forces, as that was in his control. However, the movement still had much to do and, by the end of the 1960s, had major problems. Martin Luther King Jr. was assassinated in 1968, and radical "Black Power" leaders and groups such as Eldridge Cleaver, the Black Panther Party, Malcolm X, Elijah Muhammad, and the Nation of Islam began to attract angry black recruits who had lost patience with King's non-violent philosophy. Unlike the mainstream Civil Rights Movement, Black Power rejected integration, feeling that white society was corrupt and decadent, and declared that black people should voluntarily segregate themselves from white society. The rise of Black Power at the end of the '60s, combined with a series of race riots in Los Angeles, Detroit and elsewhere, ultimately provided a flawed excuse for white backlash against civil rights and led to the election of Richard Nixon on a platform of "law and order". This ended up backfiring, as Nixon desegregated more schools than any president before him and implemented the first significant federal affirmative action program. As of this writing, the Civil Rights Movement is still within living memory, and many of the participants on both sides are still alive, with the deceased ones like Martin Luther King Jr. and Rosa Parks viewed as great leaders and heroes of American history. Those who were on the racist side are often, today, deeply ashamed of their former attitudes (Hazel Massery is one example). Others are finally being prosecuted for their crimes (Edgar Ray Killen, one of the men who organized the mob that killed three civil rights workers in Mississippi during the Freedom Summer of 1964, is one example). And racism still exists in many forms (beyond the overt cross-burning and men in white hoods), but these days the civil rights movement is fractured and has no clear leader. However, to the further consternation of the forces of social privilege and unjust dominance, the African-Americans' crusade proved to be just the beginning. Other oppressed communities in North America were inspired by the Civil Rights Movement to rise up and demand their own equal rights and a fair shake in their society as well, such as women, Native Americans, the LGBT community, Asians, religious minorities and the disabled. With these communities striving with their own eyes on the prize, they all contributed to a great combined social phenomenon that would be called The Rights Revolution, one that would challenge and redefine justice around the world.
In the months leading up to the 2008 presidential election, many looked at the election as the ultimate litmus test of whether the civil rights movement had succeeded, since the chance for Americans to elect an African-American to the Presidency would be the clearest way to see if the movement's successes had any impact upon the generations who came afterwards. Needless to say, Barack Obama's election proved that the movement did indeed bring progress: in a scant 53 years, America had gone from needing a law to let black people vote at all to a majority of Americans freely casting their votes for a black man (a man whose parents could not have legally married in several states at the time of his birth) as the President of the United States, and even re-electing him. It also proved, however, that there was still more work to be done. The Civil Rights Movement itself is also being introduced to the modern generation through the recent debates on whether marriage between homosexuals ought to be legal. In addition to the debate being, essentially, a civil-rights issue to begin with (that label is historically associated with anti-racism measures, and it's too late to change the name now), many commentators are drawing the easily made comparisons between the arguments used against giving gay people rights under the law and those once used against non-European people, many of them unflattering to boot.
—Martin Luther King Jr. (1929-1968)
Depictions in fiction and the arts:
Film
- Mississippi Burning
- Malcolm X
- The Butler
- Remember the Titans
- Crazy In Alabama
- In the Heat of the Night (made while it was still current)
- The Help, based on a book and set in 1962, focuses on the lives of two African American maids and their white friend.
- Alluded to in Back to the Future when Marty declares that a certain black busboy will someday be mayor and the response is, "a colored mayor, that'll be the day!" Obviously, the Civil Rights Movement is what happened between 1955 and 1985 to make that possible.
- The Long Walk Home - Film starring Sissy Spacek, Whoopi Goldberg and Dwight Schultz about the Montgomery Bus Boycott (1955-1956) with civil rights at the core of it.
- In The Dark Tower series, Susannah was a Civil Rights activist.
- In The Full Matilda, David is in the Black Panther Party and gets shot at a protest.
- Noughts & Crosses depicts the Civil Rights movement, depicting both peaceful and violent acts of protest- while existing in an Alternate Universe where it is the white (the 'naughts') discriminated against by the blacks (the 'crosses').
- Like many other great historical moments, the Movement is turned on its head by The Onion in Our Dumb Century, especially in the "transcript" of King's renowned speech, "I Had A Really Weird Dream Last Night."
- The Selma Massacre is an Alternate History story that begins with the titular march being gunned down, and the entire movement takes an extremely violent turn.
- "Strange Fruit", written by schoolteacher Abel Meeropol and most famous in Billie Holiday's version, is one of the most famous civil rights anthems, explicitly protesting the practice of lynching. It was named as the song of the century by Time'' magazine.
- Nina Simone's "Mississippi Goddam" is regarded as a central song of the civil rights movement, explicitly responding to the murder of Medgar Evers and the September 1963 bombing of a black Baptist church in Birmingham, Alabama. Several other songs of hers also have pro-civil rights themes.
- Bob Dylan wrote several songs explicitly in support of the civil rights movement. "Blowin' in the Wind" is probably the most famous of them (Civil rights activist Mavis Staples expressed astonishment that a young white man could write a song that so eloquently captured the frustrations and aspirations of black people), although other songs such as "The Lonesome Death of Hattie Carroll" address the subject even more explicitly.
- Sam Cooke's "A Change Is Gonna Come", which he wrote after hearing "Blowin' in the Wind" and being ashamed that he hadn't yet written anything addressing the subject, also became a civil rights anthem.
- "We Shall Overcome", though it has a long history that predates the civil rights movement, was adopted by the movement as an anthem. It has perhaps become most associated with the folk singer Pete Seeger.
- The Beatles' "Blackbird" has been interpreted as a pro-civil rights song, and its author Paul McCartney has confirmed that this is one intended interpretation.
- The Rolling Stones' "Sweet Black Angel", one of the band's few overtly political songs, was written in support of civil rights leader Angela Davis.
- Robert "Granddad" Freeman of The Boondocks had an involvement in the movement. He still held a grudge against Rosa Parks for "stealing his thunder" (he was sitting next to her on that bus and likewise refused to give up his seat, but the bus driver was only offended by Rosa's unwillingness to move, not his), and once showed up late to a march because he knew they would bring out the firehoses and figured he'd bring a raincoat. A Whole Episode Flashback in Season 4 shows that he was one of the Freedom Riders, but his participation was completely involuntary. |
Techniques for Teachers – from folks who have been there!
Starting your career in teaching is exhilarating – and scary. For the first time, you'll be responsible for guiding the progress of a class full of young minds. Working with children with learning disabilities or ADHD can be particularly daunting. To help new teachers avoid feeling overwhelmed, we've put together this 'cheat sheet' of techniques from veteran teachers. Consider this article your portable teaching mentor.
Techniques for Kids with LD or ADHD
Focus on strengths.
I make a real effort to focus on a student's strengths. In class we build on strengths and self-esteem by creating a lot of "hands-on" projects. For example, when I am teaching fractions, we build the concepts visually with blocks and pictures to help students understand. We try different strategies. If one does not work, another will. I have a very strong background in the Slingerland adaptations of the Orton-Gillingham simultaneous multi-sensory approach for children with specific language difficulties.
Structure class time for children with ADHD.
Children with ADHD are often distractible and impulsive. Although initially they might want to use lots of materials at the same time or to mark up a series of sheets of paper, quickly dismissing each in turn, we help them to focus upon a single project at a time. This might involve reviewing with the child what he wishes to accomplish that day. Once the priority is set by the child, an art therapist can help him to maintain focus on that project. We might begin with short sessions, especially for younger children, and lengthen the time as they develop their ability to attend to a task. Projects usually become more complex as the children begin to feel more secure about using art to express their feelings.
In my work, I have found that children who have learning disabilities often respond best to three-dimensional materials that allow them to "construct" art in a very hands-on manner. So, in my art therapy room, I have a range of building materials such as large chunks of Styrofoam, wood, spools, cardboard tubes, sheets of foam core, and, of course, clay.
–Audrey DiMaria, art teacher
Make the classroom safe and productive for students with emotional disorders.
Teachers need to know how to juggle! It is essential that special educators working with emotionally disabled students learn to take into account each child's personality, each child's history, and each child's potential every day. Academic expectations and behavioral expectations do not always run neck and neck – sometimes the Social Studies textbook must be bookmarked and closed, and we must ask, 'Are we bickering? It's important to address this now,' or 'OK, you are unhappy about something. Let's figure out if this is a good time to talk about it.'
Some of the procedures at the center where I work are markedly different from those in general education. For example, in my county special education teachers who work with emotionally disabled students are professionally trained to escort and/or restrain children to prevent them from injuring themselves and others. In addition to being prepared to teach a wide range of academic levels, teachers who instruct children with emotional disorders must have a great deal of background knowledge regarding mental health issues and medications.
- Teachers need to establish a bond with students. This important tip is first on my list, but it certainly doesn't happen right away. Teachers sometimes form opinions of students they have never met after reading their files or talking with previous teachers. Like most kids, children with emotional disabilities love pizza, play too many video games, and wonder if people like them. Keep your voice low, pat an arm, and nod your head. Give them the right to their emotions by saying, 'Yes, you are mad. I might be too if that happened to me. Let's figure out how to handle this.' Praise and encourage class members when they are understanding and supportive.
- Teachers need to model acceptable behavior during cooperative learning activities. Sit with a small group of five at a table and say, 'I purposely gave this team two bottles of glue. Can anyone tell me why? What if you had this feeling that you just had to have the glue right now?' Children with learning disabilities, just like students with emotional disabilities, often experience interpersonal problems and need to anticipate and practice appropriate voice volume, facial and body gestures, and verbal responses.
- Carefully consider the physical environment of your classroom throughout the year. Sit in each seat. What or who can the student see? Who is he sitting next to? Do you want him to be able to have a sweeping view of the classroom or do you want him to have a seat that limits where he can look?
- Do you plan to offer tangible reinforcements, such as stickers, pencils, or candy? Although consequences should generally be delivered without delay, rewards can be immediate, short-term, and long-term depending on the age level and dynamics of your classroom. For example: Place an index card on each desk and offer an immediate reward. (e.g. 'Great job not calling out, Diane and Joseph! Put two stickers on your index card. Finish the rest of the lesson and you can have five minutes at the Talk Table!'). Make room for some extra time on Friday and offer a short-term reward. (e.g. 'Those of you who earned 60 bonus marks by keeping your desks organized may have 10 minutes to play outside.') Talk with other teachers and plan some long-term rewards. (e.g. 'Class, our pizza party is next week. Students who have turned in 35 out of 42 assignments may attend.') Be sure to model how to handle both praise and disappointment. Hold morning or afternoon meetings that praise behavior and remind your students of upcoming events.
- Did your class have a horrible day on Monday? On Tuesday, greet your students at the door with open palms and a smile. The student who was banging his chair, crumpling his class-work, crying, and calling his classmates bad names on Monday is really hoping that you can find it in your heart to start over Tuesday morning. However, make sure to follow through on your consequences, even at the risk of a little more upset. Your students will appreciate and find comfort in your strong guidance.
Make kids feel special.
Give them opportunities to develop a balance between working on their weaknesses and expanding their strengths. Take the example of a student who writes a report and cuts a music CD on the computer to go along with it. That student, whose learning disability impacts his writing, starts to feel secure. He starts to take risks academically and feels braver about other areas of his life.
Build literacy skills for teens with dyslexia.
Find books with high interest topics and consider where the kids came from. My students are sophisticated urban kids. Our challenge is to suggest or give them books with subject matter they can relate to.
Although abstract thinking is stressed at these ages, start with the concrete and relate it to their lives. Then guide them to the semiabstract, then the abstract. Choose from a variety of reading comprehension strategies and encourage book discussions.
Use a research-based method for teaching reading.
What special teaching styles do you use with children who have difficulty reading? Do you use pictures or a special method of instruction? Are there activities that you use to help a child's self esteem?
I use the Texas Scottish Rite Dyslexia Training Program. This is a multi-sensory, systematic, structured language based approach to teaching reading, writing, and spelling, based on Orton Gillingham techniques. When students experience reading success, it boosts their self-esteem. Our classroom is a very positive place for these children and we share plenty of smiles and very few problems.
Catch them doing well.
I am a strong supporter of positive, cooperative discipline using corrective, preventive and supportive strategies. I believe that students may not always remember what you teach them, but they do remember how you treat them.
I have developed reward systems to encourage positive behavior. I use both group and individual plans.
Have your students quiz you.
Play 'Stump the Teacher!' Occasionally, give the students a chance to test you on a particular lesson. You can make it an individual assignment or a team assignment. If they can make up a question or a small quiz and you can work it, it shows that they understand the process of analysis and can create a problem to test it. |
Composite satellite image of the Mediterranean Sea
The Mediterranean Sea is a sea connected to the Atlantic Ocean surrounded by the Mediterranean region and almost completely enclosed by land: on the north by Anatolia and Europe, on the south by North Africa, and on the east by the Levant. The sea is technically a part of the Atlantic Ocean, although it is usually identified as a completely separate body of water.
The name Mediterranean is derived from the Latin mediterraneus, meaning "inland" or "in the middle of the earth" (from medius, "middle" and terra, "earth"). It covers an approximate area of 2.5 million km² (965,000 sq mi), but its connection to the Atlantic (the Strait of Gibraltar) is only 14 km (8.7 mi) wide. In oceanography, it is sometimes called the Eurafrican Mediterranean Sea or the European Mediterranean Sea to distinguish it from mediterranean seas elsewhere.
The Mediterranean Sea has an average depth of 1,500 m (4,900 ft) and the deepest recorded point is 5,267 m (17,280 ft) in the Calypso Deep in the Ionian Sea.
It was an important route for merchants and travelers of ancient times that allowed for trade and cultural exchange between emergent peoples of the region: the Mesopotamian, Egyptian, Phoenician, Carthaginian, Iberian, Greek, Macedonian, Illyrian, Thracian, Levantine, Gallic, Roman, Albanian, Armenian, Arabic, Berber, Jewish, Slavic and Turkish cultures. The history of the Mediterranean region is crucial to understanding the origins and development of many modern societies. "For the three quarters of the globe, the Mediterranean Sea is similarly the uniting element and the centre of World History."
The term Mediterranean derives from the Latin word mediterraneus, meaning "in the middle of earth" or "between lands" (medius, "middle, between" + terra, "land, earth"). This is on account of the sea's intermediary position between the continents of Africa and Europe. The Greek name Mesogeios (Μεσόγειος) is similarly from μέσο, "middle", + γη, "land, earth".
The Mediterranean Sea has been known by a number of alternative names throughout human history. For example, the Romans commonly called it Mare Nostrum (Latin, "Our Sea"). Occasionally it was known as Mare Internum (Sallust, Jug. 17).
In the Bible, it was primarily known as the "Great Sea" (Num. 34:6,7; Josh. 1:4, 9:1, 15:47; Ezek. 47:10,15,20), or simply "The Sea" (1 Kings 5:9; comp. 1 Macc. 14:34, 15:11); however, it has also been called the "Hinder Sea", due to its location on the west coast of the Holy Land, and therefore behind a person facing the east, as referenced in the Old Testament, sometimes translated as "Western Sea", (Deut. 11:24; Joel 2:20). Another name was the "Sea of the Philistines" (Exod. 23:31), from the people occupying a large portion of its shores near the Israelites.
In Modern Hebrew, it has been called HaYyam HaTtikhon (הַיָּם הַתִּיכוֹן), "the middle sea", a literal adaptation of the German equivalent Mittelmeer. In Turkish, it is known as Akdeniz, "the white sea". In modern Arabic, it is known as al-Baḥr al-Abyaḍ al-Mutawassiṭ (البحر الأبيض المتوسط), "the White Middle Sea," while in Islamic and older Arabic literature, it was referenced as Baḥr al-Rūm (بحر الروم), or "the Roman/Byzantine Sea."
As a sea around which some of the most ancient human civilizations were arranged, it has had a major influence on the history and ways of life of these cultures. It provided a way of trade, colonization and war, and was the basis of life (via fishing and the gathering of other seafood) for numerous communities throughout the ages.
The combination of similarly shared climate, geology and access to a common sea has led to numerous historical and cultural connections between the ancient and modern societies around the Mediterranean.
A satellite image taken from the side of the Strait of Gibraltar. At left, Europe: at right, Africa.
The Dardanelles; the north side is the Gelibolu Peninsula (Europe), the south side is Asia
The Mediterranean Sea is connected to the Atlantic Ocean by the Strait of Gibraltar on the west and to the Sea of Marmara and the Black Sea, by the Dardanelles and the Bosporus respectively, on the east. The Sea of Marmara is often considered a part of the Mediterranean Sea, whereas the Black Sea is generally not. The 163 km (101 mi) long man-made Suez Canal in the southeast connects the Mediterranean Sea to the Red Sea.
Large islands in the Mediterranean include Cyprus, Crete, Euboea, Rhodes, Lesbos, Chios, Kefalonia, Corfu, Naxos and Andros in the eastern Mediterranean; Sardinia, Corsica, Sicily, Cres, Krk, Brač, Hvar, Pag, Korčula and Malta in the central Mediterranean; and Ibiza, Majorca and Minorca (the Balearic Islands) in the western Mediterranean.
The climate is a typical Mediterranean climate with hot, dry summers and mild, rainy winters. Crops of the region include olives, grapes, oranges, tangerines, and cork.
The International Hydrographic Organization defines the limits of the Mediterranean Sea as follows:
The Mediterranean Sea is bounded by the coasts of Europe, Africa and Asia, from the Strait of Gibraltar on the West to the entrances to the Dardanelles and the Suez Canal on the East.
It is divided into two deep basins as follows:
Western Basin:
On the West. A line joining the extremities of Cape Trafalgar (Spain) and Cape Spartel (Africa).
On the Northeast. The West Coast of Italy. In the Strait of Messina a line joining the North extreme of Cape Paci (15°42'E) with Cape Peloro, the East extreme of the Island of Sicily. The North Coast of Sicily.
On the East. A line joining Cape Lilibeo, the Western point of Sicily (37°47′N 12°22′E), through the Adventure Bank to Cape Bon (Tunisia).
Eastern Basin:
On the West. The Northeastern and Eastern limits of the Western Basin.
On the Northeast. A line joining Kum Kale (26°11'E) and Cape Helles, the Western entrance to the Dardanelles.
On the Southeast. The entrance to the Suez Canal.
On the East. The coasts of Syria and Palestine.
(It should be noted that the coast referred to as belonging to Palestine in this document dating to 1953 has been within the internationally recognized borders of the country known as Israel since 1948. Of the territories administered by the Palestinian Authority, only the Gaza Strip has a sea coast.)
Predominant currents for June
Being nearly landlocked affects the Mediterranean Sea's properties; for instance, tides are very limited as a result of the narrow connection with the Atlantic Ocean. The Mediterranean is characterized and immediately recognized by its deep blue color.
Evaporation greatly exceeds precipitation and river runoff in the Mediterranean, a fact that is central to the water circulation within the basin. Evaporation is especially high in its eastern half, causing the water level to decrease and salinity to increase eastward. This pressure gradient pushes relatively cool, low-salinity water from the Atlantic across the basin; it warms and becomes saltier as it travels east, then sinks in the region of the Levant and circulates westward, to spill over the Strait of Gibraltar. Thus, seawater flow is eastward in the Strait's surface waters, and westward below; once in the Atlantic, this chemically distinct "Mediterranean Intermediate Water" can persist thousands of kilometers away from its source.
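The circulation described above is, at heart, a water and salt budget, and a back-of-the-envelope version can be written down. The Python sketch below uses a classic steady-state (Knudsen-type) balance; the salinities and the net freshwater deficit are assumed round numbers chosen only for illustration and do not come from this article.

```python
# Minimal steady-state water/salt budget for a semi-enclosed sea.
# All numeric inputs are illustrative assumptions, not measured values.

def gibraltar_exchange(net_evaporation_sv: float,
                       salinity_in: float,
                       salinity_out: float) -> tuple[float, float]:
    """Return (inflow, outflow) in Sverdrups for a basin that loses
    `net_evaporation_sv` of fresh water and exchanges water through a
    single strait, assuming steady volume and salt content:
        inflow = outflow + net_evaporation
        inflow * salinity_in = outflow * salinity_out
    """
    inflow = net_evaporation_sv * salinity_out / (salinity_out - salinity_in)
    outflow = inflow - net_evaporation_sv
    return inflow, outflow

if __name__ == "__main__":
    # Assumed round numbers: Atlantic surface water ~36.2 psu entering,
    # Mediterranean water ~38.4 psu leaving, net freshwater deficit ~0.05 Sv.
    q_in, q_out = gibraltar_exchange(0.05, 36.2, 38.4)
    print(f"Inflow  ~{q_in:.2f} Sv eastward at the surface")
    print(f"Outflow ~{q_out:.2f} Sv westward at depth")
```

With those assumed inputs the exchange comes out a little under one Sverdrup in each direction, showing how a relatively small evaporative loss can force a much larger two-layer flow through the narrow strait.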
Map of the Mediterranean Sea
Twenty-one modern states have a coastline on the Mediterranean Sea. They are:
* Europe (from west to east): Spain, France, Monaco, Italy, Malta, Slovenia, Croatia, Bosnia and Herzegovina, Montenegro, Albania, Greece and Turkey (East Thrace)
* Asia (from north to south): Turkey (Anatolia), Cyprus, Syria, Lebanon, Israel, Egypt (the Sinai Peninsula)
* Africa (from east to west): Egypt, Libya, Tunisia, Algeria and Morocco
Turkey and Egypt are transcontinental countries. The southernmost islands of Italy, the Pelagie islands, are geologically part of the African continent.
Several other territories also border the Mediterranean Sea (from west to east):
* The British overseas territory of Gibraltar
* The Spanish enclaves of Ceuta and Melilla and nearby islands
* The British sovereign base area of Akrotiri and Dhekelia
* The Gaza Strip of the Palestinian Territories
Andorra, Jordan, Portugal, San Marino, the Vatican City, Macedonia and Serbia, although they do not border the sea, are often considered Mediterranean countries in a wider sense due to their Mediterranean climate, fauna and flora, and/or their cultural affinity with other Mediterranean countries.
Capital cities of sovereign countries and major cities with populations larger than 200,000 people bordering the Mediterranean Sea are: Málaga, Cartagena, Alicante, Valencia, Palma, Barcelona, Marseille, Nice, Monaco, Genoa, Rome, Naples, Palermo, Catania, Messina, Valletta, Taranto, Bari, Venice, Trieste, Split, Durrës, Patras, Athens, Thessaloniki, Istanbul, Izmir, Antalya, Mersin, Tarsus, Adana, Lattakia, Tripoli (Lebanon), Beirut, Haifa, Tel Aviv, Ashdod, Gaza, Port Said, Damietta, Alexandria, Benghazi, Tripoli (Libya), Sfax, Tunis, Annaba, Algiers and Oran.
According to the International Hydrographic Organization (IHO), the Mediterranean Sea is subdivided into a number of smaller waterbodies, each with their own designation (from west to east):
* the Strait of Gibraltar;
* the Alboran Sea, between Spain and Morocco;
* the Balearic Sea, between mainland Spain and its Balearic Islands;
* the Ligurian Sea between Corsica and Liguria (Italy);
* the Tyrrhenian Sea enclosed by Sardinia, Italian peninsula and Sicily;
* the Ionian Sea between Italy, Albania and Greece;
* the Adriatic Sea between Italy, Slovenia, Croatia, Bosnia and Herzegovina, Montenegro and Albania;
* the Aegean Sea between Greece and Turkey.
Although not recognized by the IHO treaties, there are some other seas whose names have been in common use from the ancient times, or in the present:
* the Catalan Sea, between Iberian Peninsula and Balearic Islands, as a part of the Balearic Sea
* the Sea of Sardinia, between Sardinia and Balearic Islands, as a part of the Balearic Sea
* the Sea of Sicily between Sicily and Tunisia,
* the Libyan Sea between Libya and Crete,
* In the Aegean Sea,
o the Thracian Sea in its north,
o the Myrtoan Sea between the Cyclades and the Peloponnese,
o the Sea of Crete north of Crete
* the Cilician Sea between Turkey and Cyprus
Many of these smaller seas feature in local myth and folklore and derive their names from these associations. In addition to the seas, a number of gulfs and straits are also recognised:
* the Saint George Bay in Beirut, Lebanon
* the Ras Ibn Hani cape in Latakia, Syria
* the Ras al-Bassit cape in northern Syria.
* the Minet el-Beida ("White Harbor") bay near ancient Ugarit, Syria
* the Strait of Gibraltar, connects the Atlantic Ocean to the Mediterranean Sea and separates Spain from Morocco
* the Bay of Gibraltar, at the southern end of the Iberian Peninsula
* the Gulf of Corinth, an enclosed sea between the Ionian Sea and the Corinth Canal
* the Pagasetic Gulf, the gulf of Volos, south of the Thermaic Gulf, formed by the Mount Pelion peninsula
* the Saronic Gulf, the gulf of Athens, between the Corinth Canal and the Myrtoan Sea
* the Thermaic Gulf, the gulf of Thessaloniki, located in the northern Greek region of Macedonia
* the Kvarner Gulf, Croatia
* the Gulf of Lion, south of France
* the Gulf of Valencia, east of Spain
* the Strait of Messina, between Sicily and the toe of Italy
* the Gulf of Genoa, northwestern Italy
* the Gulf of Venice, northeastern Italy
* the Gulf of Trieste, northeastern Italy
* the Gulf of Taranto, southern Italy
* the Gulf of Salerno, southwestern Italy
* the Gulf of Gaeta, southwestern Italy
* the Gulf of Squillace, southern Italy
* the Strait of Otranto, between Italy and Albania
* the Gulf of Haifa, between Haifa and Akko, Israel
* the Gulf of Sidra, on the Libyan coast between Tripolitania (western Libya) and Cyrenaica (eastern Libya)
* the Strait of Sicily, between Sicily and Tunisia
* the Corsica Channel, between Corsica and Italy
* the Strait of Bonifacio, between Sardinia and Corsica
* the Gulf of İskenderun, between İskenderun and Adana (Turkey)
* the Gulf of Antalya, between west and east shores of Antalya (Turkey)
* the Bay of Kotor, in south-western Montenegro and south-eastern Croatia
* the Malta Channel, between Sicily and Malta
The geologic history of the Mediterranean is complex. It was involved in the tectonic break-up and then collision of the African and Eurasian plates. The Messinian Salinity Crisis occurred in the late Miocene (12 million years ago to 5 million years ago) when the Mediterranean dried up. Geologically the Mediterranean is underlain by oceanic crust.
The Mediterranean Sea has an average depth of 1,500 m (4,900 ft) and the deepest recorded point is 5,267 m (17,280 ft) in the Calypso Deep in the Ionian Sea. The coastline extends for 46,000 km (29,000 mi). A shallow submarine ridge (the Strait of Sicily) between the island of Sicily and the coast of Tunisia divides the sea in two main subregions (which in turn are divided into subdivisions), the Western Mediterranean and the Eastern Mediterranean. The Western Mediterranean covers an area of about 0.85 million km² (0.33 million mi²) and the Eastern Mediterranean about 1.65 million km² (0.64 million mi²).
The geodynamic evolution of the Mediterranean Sea was governed by the convergence of the European and African plates. This process was driven by the differential spreading along the Atlantic ridge, which led to the closure of the Tethys Ocean and eventually to the Alpine orogenesis. However, the Mediterranean also hosts wide extensional basins and migrating tectonic arcs, in response to its land-locked configuration.
According to a report published by Nature in 2009, scientists think that the Mediterranean Sea was mostly filled during a time period of less than two years, in a major flood (the Zanclean flood) that happened approximately 5.33 million years ago, in which water poured in from the Atlantic Ocean and through the Strait of Gibraltar, at a rate three times the current flow of the Amazon River.
In middle Miocene times, the collision between the Arabian microplate and Eurasia led to the separation between the Tethys and the Indian oceans. This process resulted in profound changes in the oceanic circulation patterns, which shifted global climates towards colder conditions. The Hellenic arc, which has a land-locked configuration, underwent a widespread extension for the last 20 Myr due to a slab roll-back process. In addition, the Hellenic Arc experienced a rapid rotation phase during the Pleistocene, with a counterclockwise component in its eastern portion and a clockwise trend in the western segment.
The opening of small oceanic basins of the central Mediterranean follows a trench migration and back-arc opening process that occurred during the last 30 Myr. This phase was characterized by the counterclockwise rotation of the Corsica-Sardinia block, which lasted until the Langhian (ca. 16 Ma), and was in turn followed by a slab detachment along the northern African margin. Subsequently, a shift of this active extensional deformation led to the opening of the Tyrrhenian basin.
Cabo de San Antonio.
From Mesozoic to Tertiary times, during the convergence between Africa and Iberia, the Betic-Rif mountain belts developed. Tectonic models for their evolution include rapid motion of the Alboran microplate, a subduction zone, and radial extensional collapse caused by convective removal of the lithospheric mantle. The development of these intramontane Betic and Rif basins led to the onset of two marine gateways, which were progressively closed during the late Miocene by an interplay of tectonic and glacio-eustatic processes.
Its semi-enclosed configuration makes the oceanic gateways critical in controlling circulation and environmental evolution in the Mediterranean Sea. Water circulation patterns are driven by a number of interactive factors, such as climate and bathymetry, which can lead to precipitation of evaporites. During late Miocene times, a so-called "Messinian Salinity Crisis" (MSC hereafter) occurred, which was triggered by the closure of the Atlantic gateway. Evaporites accumulated in the Red Sea Basin (late Miocene), in the Carpathian foredeep (middle Miocene) and in the whole Mediterranean area (Messinian). An accurate age estimate of the MSC (5.96 Ma) has recently been achieved astronomically; furthermore, this event seems to have occurred synchronously. The beginning of the MSC is supposed to have been of tectonic origin; however, an astronomical control (eccentricity) might also have been involved. In the Mediterranean basin, diatomites are regularly found underneath the evaporite deposits, thus suggesting (albeit not clearly so far) a connection between their geneses.
The present-day Atlantic gateway, i.e. the Strait of Gibraltar, finds its origin in the early Pliocene. However, two other connections between the Atlantic Ocean and the Mediterranean Sea existed in the past: the Betic Corridor (southern Spain) and the Rifian Corridor (northern Morocco). The former closed during Tortonian times, causing a "Tortonian Salinity Crisis" well before the MSC; the latter closed about 6 Ma, allowing exchanges of mammal fauna between Africa and Europe. Nowadays, evaporation exceeds the combined input of river water and precipitation, so that salinity in the Mediterranean is higher than in the Atlantic. These conditions result in the outflow of warm, saline Mediterranean deep water across Gibraltar, which is in turn counterbalanced by an inflow of a less saline surface current of cold oceanic water.
The Mediterranean was once thought to be the remnant of the Tethys Ocean. It is now known to be a structurally younger ocean basin known as Neotethys. The Neotethys formed during the Late Triassic and Early Jurassic rifting of the African and Eurasian plates.
Because of its latitudinal position and its land-locked configuration, the Mediterranean is especially sensitive to astronomically induced climatic variations, which are well documented in its sedimentary record. Since the Mediterranean receives aeolian dust from the Sahara during dry periods, whereas riverine detrital input prevails during wet ones, the Mediterranean marine sapropel-bearing sequences provide high-resolution climatic information. These data have been employed in reconstructing astronomically calibrated time scales for the last 9 Ma of the Earth's history, helping to constrain the timing of past geomagnetic reversals. Furthermore, the exceptional accuracy of these paleoclimatic records has improved our knowledge of the Earth's orbital variations in the past.
Ecology and biota
Turqueta beach, in the Spanish island of Menorca.
As a result of the drying of the sea during the Messinian Salinity Crisis, the marine biota of the Mediterranean are derived primarily from the Atlantic Ocean. The North Atlantic is considerably colder and more nutrient-rich than the Mediterranean, and the marine life of the Mediterranean has had to adapt to its differing conditions in the five million years since the basin was reflooded.
The Alboran Sea is a transition zone between the two seas, containing a mix of Mediterranean and Atlantic species. The Alboran Sea has the largest population of Bottlenose Dolphins in the western Mediterranean, is home to the last population of harbour porpoises in the Mediterranean, and is the most important feeding ground for Loggerhead Sea Turtles in Europe. The Alboran Sea also hosts important commercial fisheries, including sardines and swordfish. In 2003, the World Wildlife Fund raised concerns about the widespread drift-net fishing endangering populations of dolphins, turtles, and other marine animals.
The opening of the Suez Canal in 1869 created the first salt-water passage between the Mediterranean and Red Sea. The Red Sea is higher than the Eastern Mediterranean, so the canal serves as a tidal strait that pours Red Sea water into the Mediterranean. The Bitter Lakes, which are hyper-saline natural lakes that form part of the canal, blocked the migration of Red Sea species into the Mediterranean for many decades, but as the salinity of the lakes gradually equalized with that of the Red Sea, the barrier to migration was removed, and plants and animals from the Red Sea have begun to colonize the Eastern Mediterranean. The Red Sea is generally saltier and more nutrient-poor than the Atlantic, so the Red Sea species have advantages over Atlantic species in the salty and nutrient-poor Eastern Mediterranean. Accordingly, Red Sea species invade the Mediterranean biota, and not vice versa; this phenomenon is known as the Lessepsian migration (after Ferdinand de Lesseps, the French engineer) or Erythrean invasion. The construction of the Aswan High Dam across the Nile River in the 1960s reduced the inflow of freshwater and nutrient-rich silt from the Nile into the Eastern Mediterranean, making conditions there even more like the Red Sea and worsening the impact of the invasive species.
Invasive species have become a major component of the Mediterranean ecosystem and have serious impacts on the Mediterranean ecology, endangering many local and endemic Mediterranean species. A first look at some groups of exotic species shows that more than 70% of the non-indigenous decapods and about 63% of the exotic fishes occurring in the Mediterranean are of Indo-Pacific origin, introduced into the Mediterranean through the Suez Canal. This makes the Canal the primary pathway for the arrival of "alien" species into the Mediterranean. The impacts of some lessepsian species have proven to be considerable, mainly in the Levantine basin of the Mediterranean, where they are replacing native species and becoming a "familiar sight".
According to the International Union for Conservation of Nature definition, as well as the Convention on Biological Diversity (CBD) and Ramsar Convention terminologies, they are alien species, as they are non-native (non-indigenous) to the Mediterranean Sea and are outside their normal area of distribution, which is the Indo-Pacific region. When these species succeed in establishing populations in the Mediterranean Sea, and compete with and begin to replace native species, they are "alien invasive species", as they are an agent of change and a threat to the native biodiversity. Depending on their impact, Lessepsian migrants are either alien or alien invasive species. In the context of the CBD, "introduction" refers to the movement by human agency, indirect or direct, of an alien species outside of its natural range (past or present). The Suez Canal, being an artificial (man-made) canal, is a human agency. Lessepsian migrants are therefore "introduced" species (indirect, and unintentional). Whatever wording is chosen, they represent a threat to the native Mediterranean biodiversity, because they are non-indigenous to this sea. In recent years, the Egyptian government's announcement of its intention to deepen and widen the canal has raised concerns from marine biologists, who fear that such an act will only worsen the invasion of Red Sea species into the Mediterranean by facilitating the crossing of the canal for yet more species.
Arrival of new tropical Atlantic species
In recent decades, the arrival of exotic species from the tropical Atlantic has become a noticeable feature. Whether this reflects an expansion of the natural range of these species, which now enter the Mediterranean through the Strait of Gibraltar because of a warming trend in the water caused by global warming; an extension of maritime traffic; or simply the result of more intense scientific investigation, is still an open question. While not as intense as the Lessepsian movement, the process deserves to be studied and monitored.
Europe may be less threatened by sea-level rise than many developing country regions. However, coastal ecosystems do appear to be threatened, especially enclosed seas such as the Baltic, the Mediterranean and the Black Sea. These seas have only small and primarily east-west orientated movement corridors, which may restrict northward displacement of organisms in these areas. Sea level rise for the next century (2100) could be between 30 cm (12 in) and 100 cm (39 in) and temperature shifts of a mere 0.05-0.1°C in the deep sea are sufficient to induce significant changes in species richness and functional diversity.
Pollution in this region has been extremely high in recent years. The United Nations Environment Programme has estimated that 650,000,000 t (720,000,000 short tons) of sewage, 129,000 t (142,000 short tons) of mineral oil, 60,000 t (66,000 short tons) of mercury, 3,800 t (4,200 short tons) of lead and 36,000 t (40,000 short tons) of phosphates are dumped into the Mediterranean each year. The Barcelona Convention aims to 'reduce pollution in the Mediterranean Sea and protect and improve the marine environment in the area, thereby contributing to its sustainable development.' Many marine species have been almost wiped out because of the sea's pollution. One of them is the Mediterranean Monk Seal which is considered to be among the world's most endangered marine mammals.
The Mediterranean is also plagued by marine debris. A 1994 study of the seabed using trawl nets around the coasts of Spain, France and Italy reported a particularly high mean concentration of debris; an average of 1,935 items per km². Plastic debris accounted for 76%, of which 94% was plastic bags.
Some of the world’s busiest shipping routes are in the Mediterranean Sea. It is estimated that approximately 220,000 merchant vessels of more than 100 tonnes cross the Mediterranean Sea each year — about one third of the world’s total merchant shipping. These ships often carry hazardous cargo, which if lost would result in severe damage to the marine environment.
The discharge of chemical tank washings and oily wastes also represent a significant source of marine pollution. The Mediterranean Sea constitutes 0.7% of the global water surface and yet receives seventeen percent of global marine oil pollution. It is estimated that every year between 100,000 t (98,000 long tons) and 150,000 t (150,000 long tons) of crude oil are deliberately released into the sea from shipping activities.
Approximately 370,000,000 t (360,000,000 long tons) of oil are transported annually in the Mediterranean Sea (more than 20% of the world total), with around 250-300 oil tankers crossing the Sea every day. Accidental oil spills happen frequently with an average of 10 spills per year. A major oil spill could occur at any time in any part of the Mediterranean.
With a unique combination of pleasant climate, beautiful coastline, rich history and diverse culture the Mediterranean region is the most popular tourist destination in the world — attracting approximately one third of the world’s international tourists.
Tourism is one of the most important sources of income for many Mediterranean countries. It also supports small communities in coastal areas and islands by providing alternative sources of income far from urban centres. However, tourism has also played a major role in the degradation of the coastal and marine environment. Rapid development has been encouraged by Mediterranean governments to support the large numbers of tourists visiting the region each year, but this has caused serious disturbance to marine habitats, as well as erosion and pollution, in many places along the Mediterranean coasts.
Tourism often concentrates in areas of high natural wealth, causing a serious threat to the habitats of endangered Mediterranean species such as sea turtles and monk seals. It is ironic that tourism in this region is destroying the foundations of its own existence. And it is inevitable that the tourists will leave the Mediterranean as it becomes more depleted of its natural beauty.
Fish stock levels in the Mediterranean Sea are alarmingly low. The European Environment Agency says that over 65% of all fish stocks in the region are outside safe biological limits, and the United Nations Food and Agriculture Organisation warns that some of the most important fisheries — such as albacore and bluefin tuna, hake, marlin, swordfish, red mullet and sea bream — are threatened.
There are clear indications that catch size and quality have declined, often dramatically, and in many areas larger and longer-lived species have disappeared entirely from commercial catches.
Large open water fish like tuna have been a shared fisheries resource for thousands of years but the stocks are now dangerously low. In 1999, Greenpeace published a report revealing that the amount of bluefin tuna in the Mediterranean had decreased by over 80% in the previous 20 years and government scientists warn that without immediate action the stock will collapse.
Aquaculture in western Greece
Aquaculture is expanding rapidly — often without proper environmental assessment — and currently accounts for 30% of the fish protein consumed worldwide. The industry claims that farmed seafood lessens the pressure on wild fish stocks, yet many of the farmed species are carnivorous, consuming up to five times their weight in wild fish.
Mediterranean coastal areas are already over exposed to human influence, with pristine areas becoming ever scarcer. The aquaculture sector adds to this pressure, requiring areas of high water quality to set up farms. The installation of fish farms close to vulnerable and important habitats such as seagrass meadows is particularly concerning.
Aquaculture production in the Mediterranean also threatens biodiversity through the introduction of new species to the region, the impact of the farms' organic and chemical effluents on the surrounding environment and coastal habitat destruction.
Light and raindrops work together to create a rainbow, but why is it curved? Here are some things to remember before you start, or just skip down to some rainbow physics, or skip to the explanation as to why rainbows are curved, below.
Things to remember …
First, look for a rainbow when the sun is behind you, and there are raindrops falling in front of you.
Second, know that – when making the rainbow – sunlight is emerging from many raindrops at once. A rainbow isn’t a flat two-dimensional image on the dome of sky. It’s more like a mosaic, composed of many separate bits … in three dimensions. More about the three-dimensional quality of rainbows below. Just know that your eye sees rainbows as flat for the same reason we see the sun and moon as flat disks, because, when we look in the sky, there are no visual cues to tell us otherwise.
Third, rainbows are more than half circles. They’re really whole circles. You’ll never see a circle rainbow from Earth’s surface because your horizon gets in the way. But, up high, people in airplanes sometimes do see them. Check out the photo below.
Ready for some rainbow physics? When making a rainbow, sunlight shining into each individual raindrop is refracted, or split into its component colors. And the light is also reflected, so that those various colors come bouncing back.
One key to rainbows is that the light leaves the collection of raindrops in front of you at an angle. In making a rainbow, the angle is between 40 and 42 degrees, depending on the color (wavelength) of the light. Physicsclassroom.com explained it this way:
The circle (or half-circle) results because there are a collection of suspended droplets in the atmosphere that are capable of concentrating the dispersed light at angles of deviation of 40-42 degrees relative to the original path of light from the sun.
These droplets actually form a circular arc, with each droplet within the arc dispersing light and reflecting it back towards the observer.
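As a rough check on where those 40-42 degree figures come from, the short Python sketch below (my own illustration, not code from EarthSky or Physicsclassroom.com) traces a ray through a single spherical droplet using Snell's law, with one internal reflection, and scans over angles of incidence to find the largest angle at which light emerges relative to the point opposite the sun. The refractive indices for red and violet light are approximate values.

import math

# Minimal sketch: estimate the rainbow angle for one internal reflection in a
# spherical water droplet. n is the refractive index of water, roughly 1.333
# for red light and 1.343 for violet light.

def rainbow_angle(n):
    """Return the maximum angle (degrees) between the antisolar point and the
    emerging ray, scanning over angles of incidence in 0.01-degree steps."""
    best = 0.0
    for k in range(1, 9000):
        i = math.radians(k / 100.0)          # angle of incidence
        r = math.asin(math.sin(i) / n)       # Snell's law: sin i = n sin r
        angle = math.degrees(4 * r - 2 * i)  # angle from the antisolar point
        best = max(best, angle)
    return best

print(round(rainbow_angle(1.333), 1))  # ~42.1 degrees (red)
print(round(rainbow_angle(1.343), 1))  # ~40.6 degrees (violet)

Running this reproduces the familiar range: about 42 degrees for red light and about 40.6 degrees for violet.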
So why are rainbows curved? To understand the curvature of rainbows, you’ll need to switch your mind to its three-dimensional-thinking mode. Cecil Adams of the newspaper column The Straight Dope explained it this way:
We’re used to thinking of rainbows as basically two-dimensional, but that’s an illusion caused by a lack of distance cues. The cloud of water droplets that produces the rainbow is obviously spread out in three dimensions.
The geometry of reflection, however, is such that all the droplets that reflect the rainbow’s light toward you lie in a cone with your eyes at the tip.
It takes an intuitive leap to see why this should be so, but let’s give it a crack. Water droplets reflect sunlight (or any light) at an angle of between 40 and 42 degrees, depending on the wavelength…
The sun is low and behind you. All the sunbeams head in, strike the cloud of water droplets ahead of you and bounce back at an angle of [approximately] 40 degrees.
Naturally the beams can bounce 40 degrees any which way — up, down, and sideways.
But the only ones you see are the ones that lie on a cone with a side-to-axis angle of 40 degrees and your eyes at the tip.
I also asked Les Cowley of the great website Atmospheric Optics. He’s a world-class expert in sky optics and EarthSky’s go-to guy for all daytime sky phenomena. Les told me:
The cone explanation is sound and also my preferred one at Atmospheric Optics.
Rainbows don’t exist! They are nowhere in space. You cannot touch them or drive around them. They are a collection of rays from glinting raindrops that happen to reach our eyes. Raindrops glint rainbow rays at an angle of 42 degrees from the point directly opposite the sun. All the drops glinting the rainbow are on the surface of a cone with its point at your eye. They can be near and far. Other drops not on the cone also glint sunlight into rainbow colours but their rays do not reach our eyes. We only see those on the cone. When you look down the cone you see a circle. So rainbows are circles!
To get technical, here is how rainbow rays form.
And this one (it took pain to write it) explains why we see rainbows and halos.
Phew! Thanks from EarthSky to all the smart and knowledgeable people who contributed to this explanation!
Bottom line: Why are rainbows curved? When you look at a rainbow, you’re not seeing a flat two-dimensional image on the dome of sky. You’re seeing light along a three-dimensional cone, with your eyes at the tip. The circle of a rainbow – or arc, assuming the bottom part of the circle is hidden by your horizon – appears as a flat, two-dimensional section of that cone.
Guinea Worm Disease Frequently Asked Questions (FAQs)
What is dracunculiasis?
Dracunculiasis, also known as Guinea worm disease (GWD), is an infection caused by the parasite Dracunculus medinensis. A parasite is an organism that feeds off of another to survive. GWD is spread by drinking water containing Guinea worm larvae. Larvae are immature forms of the worm. GWD affects poor communities in remote parts of Africa that do not have safe water to drink. GWD is considered by global health officials to be a neglected tropical disease (NTD) – the first parasitic disease slated to be eradicated.
Many federal, private, and international agencies are helping the countries that still have local GWD cases to eradicate this disease. During 2015, only four countries had local GWD cases: Chad, Ethiopia, Mali, and South Sudan. Cases have gone from 3.5 million per year in 1986 to 22 in 2015.
How does Guinea worm disease spread?
People become infected with Guinea worm by drinking water from ponds and other stagnant water containing tiny "water fleas" that carry the Guinea worm larvae. The larvae are eaten by the water fleas that live in these water sources.
Once ingested, the larvae are released from the copepods (water fleas) in the stomach and penetrate the wall of the digestive tract, passing into the body cavity. During the next 10-14 months, the female larvae grow into full-size adults. These adults are 60-100 centimeters (2-3 feet) long and as wide as a cooked spaghetti noodle.
When the adult female worm is ready to come out, it creates a blister on the skin anywhere on the body, but usually on the legs and feet. This blister causes a very painful burning feeling and it bursts within 24-72 hours. Immersing the affected body part into water helps relieve the pain. It also causes the Guinea worm to come out of the wound and release a milky white liquid into the water that contains millions of immature larvae. This contaminates the water supply and starts the cycle over again. For several days, the female worm can release more larvae whenever it comes in contact with water.
What are the signs and symptoms of Guinea worm disease?
People do not usually have symptoms until about one year after they become infected. A few days to hours before the worm comes out of the skin, the person may develop a fever, swelling, and pain in the area. More than 90% of the worms come out of the legs and feet, but worms can appear on other body parts too.
People in remote rural communities who have Guinea worm disease often do not have access to health care. When the adult female worm comes out of the skin, it can be very painful, slow, and disabling. Often, the wound caused by the worm develops a secondary bacterial infection. This makes the pain worse and can increase the time an infected person is unable to function to weeks or even months. Sometimes, permanent damage occurs if a person’s joints are infected and become locked.
What is the treatment for Guinea worm disease?
There is no drug to treat Guinea worm disease and no vaccine to prevent infection. Once part of the worm begins to come out of the wound, the rest of the worm can only be pulled out a few centimeters each day by winding it around a piece of gauze or a small stick. Sometimes the whole worm can be pulled out within a few days, but this process usually takes weeks. Medicine, such as aspirin or ibuprofen, can help reduce pain and swelling. Antibiotic ointment can help prevent secondary bacterial infections. The worm can also be surgically removed by a trained doctor in a medical facility before a blister forms.
Where is Guinea worm disease found?
Only four countries reported local Guinea worm disease in 2015. These countries were Chad, Ethiopia, Mali, and South Sudan. Many other African countries and all of Asia are now free of GWD. As of January 2015, the World Health Organization had certified 198 countries, territories, and areas, representing 186 WHO Member States as being free of GWD transmission.
Who is at risk for infection?
Anyone who drinks pond and other stagnant water contaminated by persons with GWD is at risk for infection. People who live in villages where GWD is common are at greatest risk.
Is Guinea worm disease a serious illness?
Yes. The disease causes preventable suffering for infected people and is a financial and social burden for affected communities. Adult female worms come out of the skin slowly and cause great pain and disability. Parents with active Guinea worm disease might not be able to care for their children. The worm often comes out of the skin during planting and harvesting season. Therefore, people might also be prevented from working in their fields and tending their animals. This can lead to financial problems for the entire family. Children may be required to work the fields or tend animals in place of their sick parents. This can keep them from attending school. Therefore, GWD is both a disease of poverty and also a cause of poverty because of the disability it causes.
Is a person immune to Guinea worm disease once he or she has it?
No. No one is immune to Guinea worm disease. People in affected villages can suffer year after year.
How can Guinea worm disease be prevented?
Guinea worm disease can be prevented by avoiding drinking unsafe water. Teaching people to follow these simple control tactics can completely prevent the spread of the disease:
- Drink only water from protected sources (such as from boreholes or hand-dug wells) that are free from contamination.
- Prevent people with swellings and wounds from entering ponds and other water used for drinking.
- Always filter drinking water from unsafe sources, using a cloth filter or a pipe filter, to remove the tiny "water fleas" that carry the Guinea worm larvae.
- Treat unsafe drinking water sources with an approved larvicide, such as ABATE®*. This will kill the tiny "water fleas."
- Provide communities with new safe sources of drinking water and repair broken safe water sources (e.g., hand-pumps) if possible.
This information is not meant to be used for self-diagnosis or as a substitute for consultation with a health care provider. If you have any questions about the parasites described above or think that you may have a parasitic infection, consult a health care provider.
*Use of trade names is for identification only and does not imply endorsement by the Public Health Service or by the U.S. Department of Health and Human Services.
A common analogy for explaining gradient descent goes like the following: a person is stuck in the mountains during heavy fog, and must navigate their way down. The natural way they will approach this is to look at the slope of the visible ground around them and slowly work their way down the mountain by following the downward slope.
This captures the essence of gradient descent, but the analogy always ends up breaking down when we scale to a high-dimensional space where we have very little idea what the actual geometry of that space is. In the end, though, it's often not a practical concern, because gradient descent seems to work pretty well.
But the important question is: how well does gradient descent perform on the actual earth?
Defining a cost function and weights
In a typical machine learning model, gradient descent is used to find weights that minimize our cost function, which is usually some representation of the errors made by the model over a number of predictions. We don't have a model predicting anything in this case and therefore no “errors”, so adapting this to traveling around the earth requires some stretching of the usual machine learning context.
In our earth-bound traveling algorithm, our goal is going to be to find sea level from wherever our starting conditions are. That is, we will define our “weights” to be a latitude and longitude and we will define our “cost function” to be the current height from sea level. Put another way, we are asking gradient descent to optimize our latitude and longitude values such that the height from sea level is minimized. Unfortunately, we don’t have a mathematical function that describes the entire earth’s geography so we will calculate our cost values using a raster elevation dataset available from NASA:
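(A minimal sketch of that cost function in Python follows; it is not the project's original code. The `elevation(lat, lon)` helper is a hypothetical placeholder standing in for whatever library actually reads the NASA raster.)

def elevation(lat, lon):
    """Placeholder: return height above sea level in metres at (lat, lon).
    In the real project this would be a lookup into the NASA elevation raster."""
    raise NotImplementedError("hook up a raster elevation lookup here")

def cost(lat, lon):
    # Anything at or below sea level counts as the optimum (cost of 0).
    return max(elevation(lat, lon), 0.0)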
Gradient descent works by looking at the gradient of the cost function with respect to each variable it is optimizing for, and adjusting the variables such that they produce a lower cost function. This is easy when your cost function is a mathematical metric like mean squared error, but as we mentioned our “cost function” is a database lookup so there isn’t anything to take the derivative of.
Luckily, we can approximate the gradient the same way the human explorer in our analogy would: by looking around. The gradient is equivalent to the slope, so we will estimate the slope by taking a point slightly above our current location and a point slightly below it (in each dimension), then dividing the difference in height by the distance between them to get our estimated derivative. This should work reasonably well:
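(What follows is a sketch of that idea under the same assumptions as above, using a central-difference estimate of the slope and a plain descent step; it is not the author's actual code, and the step sizes are placeholders rather than tuned values.)

def gradient(lat, lon, h=0.001):
    # Central differences in each dimension, using the cost() lookup above.
    d_lat = (cost(lat + h, lon) - cost(lat - h, lon)) / (2 * h)
    d_lon = (cost(lat, lon + h) - cost(lat, lon - h)) / (2 * h)
    return d_lat, d_lon

def gradient_descent(lat, lon, steps=1000, alpha=0.0001):
    for _ in range(steps):
        d_lat, d_lon = gradient(lat, lon)
        lat -= alpha * d_lat       # move downhill in each dimension
        lon -= alpha * d_lon
        if cost(lat, lon) == 0.0:  # reached sea level
            break
    return lat, lon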
Great! You’ll notice that this differs from most implementations of gradient descent in that there are no X or y variables passed into this function. Our cost function doesn’t require calculating the error of any predictions, so we only need the variables we are optimizing here. Let’s try running this on Mount Olympus in Washington:
Hm, it seems to get stuck! This happens testing in most other locations as well. It turns out that the earth is filled with local minima, and gradient descent has a huge amount of difficulty finding the global minimum when starting in a local area that’s even slightly away from the ocean.
Vanilla gradient descent isn’t our only tool in the box, so we will try momentum optimization. Momentum is inspired by real physics, which makes its application to gradient descent on real geometry an attractive idea. Unfortunately, even if we place a very large boulder at the top of Mount Olympus and let it go, it’s unlikely to have enough momentum to reach the ocean, so we’ll have to use some unrealistic (in physical terms) values of gamma here:
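(Again, a hedged sketch rather than the original code; gamma and the learning rate below are placeholder values chosen for illustration. The velocity term accumulates past gradients so the "boulder" can roll through small dips instead of stopping in the first local minimum it meets.)

def momentum_descent(lat, lon, steps=5000, alpha=0.0001, gamma=0.99):
    v_lat = v_lon = 0.0
    for _ in range(steps):
        d_lat, d_lon = gradient(lat, lon)
        v_lat = gamma * v_lat + alpha * d_lat  # accumulate velocity
        v_lon = gamma * v_lon + alpha * d_lon
        lat -= v_lat
        lon -= v_lon
        if cost(lat, lon) == 0.0:              # reached sea level
            break
    return lat, lon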
With some variable tweaking, gradient descent should have a better chance of finding the ocean:
We have success! It’s interesting watching the behaviour of the optimizer: it seems to fall into a valley and “roll off” each of the sides on its way down the mountain, which agrees with our intuition of how an object with extremely high momentum should behave physically.
The earth should actually be a very easy function to optimize. Since the earth is primarily covered by oceans, more than two thirds of possible valid inputs for this function return the optimal cost function value. However, the earth is plagued with local minima and non-convex geography.
Because of this, I think it provides a lot of interesting opportunities for exploring how machine learning optimization methods perform on tangible and understandable local geometries. It seems to perform pretty well on Mount Olympus, so let’s call this analogy “confirmed”!
If you have thoughts on this, let me know on twitter!
The code for the project is available here. |
Here's how to make the critical writing exercise fun for your child.
By Team ParentCircle
Of the various types of writing exercises, critical writing tasks enable your child to apply logic, reason out, examine critically, analyse, evaluate and form judgements. All these sub-skills will go a long way in teaching her essential life skills. Here are some interesting and thought-provoking activities that draw from everyday life around us. You can encourage your child to take up these activities or even engage in them jointly with her. They will help sharpen her critical thinking skills.
Books, movies and TV shows present slices of life in compact packages that can be unpacked and analysed easily. They make for great prompts for critical writing activities.
• Watch or read with your child and jointly or separately write comments about the piece. Rather than the report or review format used in schools, use a simple response format like, “What I loved or hated about this book/ movie/ show was …because ….” This allows you to give your opinions while asking you to give a reason. It makes you think about your preferences.
• Read more than one article/watch more than one show or movie about the same subject and analyse the differences in perspective. This is an important skill because it takes the child beyond analysis and into assimilation. It is important to learn how to get the best from several world views and form your own view of the world which is more than the sum of its parts.
We ourselves are always the most fascinating subjects! There are many ways you can have your child reflect on reasons and motivations.
• When they ask for permission to do something, have them write a short piece on why they should be allowed to do whatever it is they want to do.
• When they do not want to do something you want them to do, ask them to write a short piece on why they should not be required to do the same.
• When you disagree about something, each of you write out your arguments and see if you can compromise.
What books, movies and shows have packaged for us is really all around us in our everyday lives. Your child’s real questions are about the world around him/her. You can use that curiosity to spark critical thinking by getting her to write about what she sees.
Here are some effective prompts to help you encourage critical writing:
• Why should I ….
• Why shouldn’t I …
• Why do you suppose he/she….
• Ethical dilemma – who is right?
• Should he/she have ….
• What would happen if ….
• You could also see it as ….
Engaging in writing critically with your child will also make you think critically about things that you have taken for granted for a long time. Don’t be surprised if you find yourself changing some of those long-held opinions and ideas! As always, have fun with your child. If it is not fun it is not going to happen often!
We all need energy to grow, stay alive, keep warm and be active. Energy is provided by the carbohydrate, protein and fat in the food and drinks we consume. It is also provided by alcohol. Different food and drinks provide different amounts of energy.
The amount of energy (measured in units of calories or kilojoules) a food contains per gram is known as its energy density.
- Foods with fewer calories per gram such as fruits, vegetables, soups, lean protein- and carbohydrate-rich foods have a relatively low energy density.
- Foods with a high fat and/or low water content such as chocolate, fried snacks, nuts and crackers have a relatively higher energy density.
Having a diet with a low energy density overall can help to control calorie intake while helping to avoid feeling too hungry.
Carbohydrate is the most important source of energy for the body because it is the main fuel for both your muscles and brain. Sources of carbohydrate include starchy foods, e.g. bread, rice, potatoes, pasta, pulses and breakfast cereals.
Different people need different amounts of energy. This depends on your basal metabolic rate (BMR), which measures the amount of energy you use to maintain the basic functions of the body, as well as your level of activity.
Some activities use more energy than others. The more active you are, the more energy your body uses up. Being physically active can increase your muscle mass, which means you will actually be using more energy all the time, even when you are resting.
Your weight depends on the balance between how much energy you consume from food and drinks, and how much energy you use up by being active. When you eat or drink more energy than you use up, you put on weight; if you consume less energy from your diet than you expend, you lose weight; but if you eat and drink the same amount of energy as you use up, you are in energy balance and your weight remains the same.
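As a purely illustrative sketch in Python (the numbers below are invented for the example, not dietary guidance), energy balance works like a simple subtraction of expenditure from intake:

# Illustrative only: daily energy balance as intake minus expenditure.
intake_kcal = 2500        # energy consumed from food and drinks in a day
expenditure_kcal = 2200   # BMR plus energy used in physical activity

balance = intake_kcal - expenditure_kcal
if balance > 0:
    print(f"Surplus of {balance} kcal: sustained over time, this leads to weight gain")
elif balance < 0:
    print(f"Deficit of {-balance} kcal: sustained over time, this leads to weight loss")
else:
    print("Energy balance: weight stays the same")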
In the UK, the majority of adults are either overweight or obese, which means that many of us are consuming more energy than we need from food and drinks and need to try to reduce our energy intake in order to move towards a healthy weight. |
There are an estimated 2.74 million honeybee colonies kept by beekeepers in the United States. It’s a number that, for much of the last decade, has been the subject of much consternation each spring, when researchers announce how many colonies were lost—died off—over the past winter. The numbers have been high, and persistently so, with total annual losses hitting 45 percent in 2012–13, a particularly bad year. This has led to some new terms: colony collapse disorder, pollinators, and neonicotinoids, to name a few. With a third of fruit and vegetable crops depending on pollinators, bees regularly dying in droves, and pesticide use increasing, it can seem like honeybees are the only things keeping us fed—and that they might soon go away altogether.
The relationship among bees, other pollinators, and the food supply is indeed a vital one, but it is far more complex than pitting one bee—Apis mellifera, the European honeybee—against one class of pesticides, such as neonics. Take the new report released Monday by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (which functions much like a United Nations for biodiversity issues) looking at the plight of pollinators and how their decline puts the global food supply at risk.
One of the major takeaways from the study has nothing to do with honeybees but rather the 20,000 wild bee species found around the globe, which also contribute significantly to pollinating food crops. According to the report, more than 40 percent of invertebrate pollinators—wild bees and butterflies in particular—are facing extinction. Similarly, 16.5 percent of vertebrate pollinators (bats, birds, and the like) are threatened by extinction.
While honeybee declines are worrisome, the species itself is by no means at risk of being wiped out. The annual loss stats may be stark, but beekeepers hedge against the die-offs, producing additional hives to replace those that don’t make it through the colder months. A less commonly reported stat in stories about bee losses is the overall population of managed honeybees in the United States, which has remained more or less stable in recent years. The same cannot be said for wild bees, of which there are 4,000 species in the U.S.
“Habitat loss, pesticide usage, bee disease, and climate change are the major threats on wild bee populations,” Insu Koh, a postdoctoral researcher at the University of Vermont’s Gund Institute for Ecological Economics, wrote in an email. (Koh was not involved in the new assessment.) “Our recent national study projected wild bee declines in 23 percent of the contiguous U.S. between 2008 and 2013.”
The study Koh worked on, published by Proceedings of the National Academy of Sciences in December, is one of the latest to focus on native bees and their unwitting contribution to the global harvest that relies on pollinators—estimated in the new IPBES assessment to be worth as much at $577 billion. While bumblebees and butterflies are showing up in places like almond orchards and watermelon fields for the flowers, they have a significant effect. A 2013 study of wild pollinators found that native bees, butterflies, and other nonhoneybee species were twice as effective as their domesticated counterparts at pollinating more than 40 crops. Not only did these wild species do better “work,” but the managed hives didn’t replace the labor of native species—it only added to what the wild pollinators accomplished.
The new assessment states that the role of pesticides like neonics in pollinator losses remains unclear—but less so for wild bees. According to a press release, “A pioneering study conducted in farm fields showed that one neonicotinoid insecticide had a negative effect on wild bees, but the effect on managed honeybees was less clear.” Another threat, said IPBES, is the erosion of indigenous farming practices, which, in some regions, predate contemporary obsessions with “sustainability” by centuries, if not longer.
However, contemporary farmers and agricultural researchers are considering many options that could help make farmland a more welcome place to pollinators—honeybees, butterflies, and native bee species alike. Be it native hedgerows planted alongside orchards in California or strips of native prairie interspersed with row crops such as corn and soy in the Midwest, there are ways of making a farmed landscape look more like home to the ag industry’s tiny, vital workforce—but few, if any, have gone from being experimental to the status quo.
“Pollinators are important contributors to world food production and nutritional security,” Vera Lúcia Imperatriz-Fonseca, who cochaired the assessment, said in the statement. “Their health is directly linked to our own well-being.” |
The size of a brain region involved with memory and stress might affect how a person reacts to trauma. A new study suggests that people who have suffered persistent post-traumatic stress disorder were born with a smaller brain area called the hippocampus.
Previous studies have shown that patients with post-traumatic stress disorder (PTSD) have smaller than normal hippocampal regions. The hippocampus is believed to be the location where memory is first processed before being transferred to another region for long term storage. It is crucial to learning.
Because stress can damage the hippocampus, most researchers concluded that PTSD causes the shrinkage found in people suffering the disorder. But until now, no one had looked at brain size before and after PTSD developed to see if this is always true.
The new research led by U.S. government psychologist Mark Gilbertson does this and challenges the conventional view that shrinkage follows trauma. "There does seem to be some evidence that having a smaller hippocampus prior to the trauma may make individuals more vulnerable to develop stronger, more persistent fear responses," says Mr. Gilbertson.
Mr. Gilbertson and colleagues used a brain scanning technique called magnetic resonance imaging to examine 40 pairs of identical male twins. In each set one brother was a Vietnam War combat veteran while the other had stayed home. As expected, the hippocampus was smaller than normal in the combat veterans who suffered PTSD, a condition with symptoms that include flashbacks, nightmares, and emotional problems.
But to the researchers' surprise, this brain region was also smaller in the twins who had not gone to war. "What this suggested to us was that this is a pre-existing condition," says Mr. Gilbertson. "This suggested there wasn't any exposure effect. In other words, it wasn't a neurotoxic effect of being in combat that produced the smaller hippocampal volume."
The study also shows that the worst cases of post-traumatic stress disorder were in the combat veterans with the smallest hippocampuses. The results indicate that a smaller hippocampus may predispose a person to PTSD and conversely a larger hippocampus could protect against it.
An editorial in the journal Nature Neuroscience, the magazine where the study appears, warns that much more research is necessary before a definite link between PTSD and hippocampus size can be made. If it can be, could hippocampal volume help predict which individuals might be vulnerable to stressful situations -- say, soldiers or police officers?
Mr. Gilbertson says such measures might not be effective for this purpose yet because the stress response is complex and involves more than the hippocampus. "At this point, the utility is mostly in terms of getting a better understanding of what structure or function in the brain may underlie vulnerability to stress disorders so that we can develop more effective prevention or treatment strategies in the future." |
CORVALLIS, Ore. Healthy streams with vibrant ecosystems play a critical role in removing excess nitrogen caused by human activities, according to a major new national study published this week in Nature.
The research, by a team of 31 aquatic scientists across the United States, was the first to document just how much nitrogen that rivers and streams can filter through tiny organisms or release into the atmosphere through a process called denitrification.
"The study clearly points out the importance of maintaining healthy river systems and native riparian areas," said Stan Gregory, a stream ecologist in the Department of Fisheries and Wildlife at Oregon State University and a co-author of the study. "It also demonstrates the importance of retaining complex stream channels that give organisms the time to filter out nitrogen instead of releasing it downstream."
The scientists conducted experiments in 72 streams across the United States and Puerto Rico that spanned a diversity of land uses, including urban, agricultural and forested areas. They discovered that roughly 40 to 60 percent of nitrogen was taken up by the river system within 500 meters of the source where it entered the river if that ecosystem was healthy.
Tiny organisms such as algae, fungi and bacteria that may live on rocks, pieces of wood, leaves or streambeds can take up, or absorb, on average about half of the nitrogen that humans currently put into the sampled river sites, according to Sherri Johnson, a research ecologist with the U.S. Forest Service and a courtesy professor of fisheries and wildlife at OSU.
"Streams are amazingly active places, though we don't always see the activity," Johnson said. "When you have a healthy riparian zone, with lots of native plants and a natural channel, the stream has more of an opportunity to absorb the nitrogen we put into the system instead of sending it downriver."
The study is important, scientists say, because it provides some of the best evidence of the extent to which healthy rivers and streams can help prevent eutrophication the excessive growth of algae and aquatic plants fueled by too much nitrogen. Eutrophication has been linked to harmful algal blooms and oxygen depletion in such places as the Gulf of Mexico, where the Mississippi River empties its nitrogen-rich waters, adversely affecting fishing and shrimp industries.
In their study, the scientists added small amounts of an uncommon, non-radioactive isotope of nitrogen N-15 to streams as a nitrate, which is the most prevalent form of nitrogen pollution, Gregory said. By adding the isotope, they were able to measure how far downstream the nitrate traveled, and analyze what processes removed it from the water.
In addition to the 40 to 60 percent taken up by tiny organisms, the researchers found denitrification accounted for about 19 percent of the nitrogen uptake across all the sites. Denitrification takes place through an anaerobic metabolic process that converts the nitrogen to a harmless gas and releases it into the atmosphere.
Slower moving streams with little oxygen have higher rates of denitrification, though they have other pitfalls, including increased risk to fish and humans because of the microbial stew they foster, Gregory pointed out.
"The overall amount of denitrification by streams and rivers was lower than what many scientists had anticipated," he said. "We had hoped it would be higher. That makes it even more essential to maintain healthy riparian zones so the organisms have the opportunity to process the nitrogen."
Oregon had even lower levels of denitrification than the national average. Johnson said the combination of high-gradient streams, oxygenated water and porous stream beds is not conducive to the denitrification process.
"A lot of streams in Oregon have subsurface water flowing beneath the streambed through the gravel," she pointed out. "This hyporheic flow intermixes with the river water and limits the anaerobic processes. It also underscores the importance of maintaining healthy in-stream communities so the nitrogen is taken up by the ecosystem in other ways."
Gregory says too many river systems have lost their natural channels to human activities and have essentially become pipelines for drainage. The original, braided channels many rivers had were complex, played a major role in slowing and filtering the river water, and provided natural habitat for native and migrating fish.
Past studies by Gregory and others have pointed out how these pipeline river channels harm fish and their eggs during floods. The new study suggests that these pipelines also limit the potential of the river to absorb nitrogen that humans add to the system through a variety of activities.
The Oregon studies focused on Oak Creek basin in Corvallis, the Calapooia River near Albany, and the McKenzie River near Eugene. Each study basin looked at the streams in forested, agricultural and urban areas.
Introduction to the American Revolution
Lesson 11 of 11
Objective: SWBAT evaluate the advantages of a digital medium by jotting down ideas presented about the repeating themes of the American Revolution
The American Revolution was the first modern revolt, marking the first time in history that people fought for their independence in the name of certain universal principles, constitutional rights, and popular sovereignty. To introduce the second part of this unit to students, I will ask them to create a strong argumentative statement about Americans during the revolutionary time period. Students have learned in history classes that this period encompasses the identity of FREEDOM. It wasn’t just individuals who fought to establish their identity during this time; laws, documents, and power also worked to establish freedoms for all.
To hook students in this lesson, they will be given three minutes to develop statements about the Revolutionary War. I want students to use their creative minds to develop a statement from the perspective of people or onlookers during this time. I could have easily given students examples of what to write; however, this would have blocked the creativity students could have used in this activity. Students will share their statements with a shoulder buddy. Possible statements from students might include:
1. Tension between the colonists led to a revolt to declare independence from Britain
2. Known for its infamous Boston Tea Party Revolt
3. Origin of Declaration of Independence
Students will watch a 6-minute clip on the events of the American Revolution. As students watch the video, they will jot down ideas on a sticky note about nouns (people, places, things, and ideas) repeated in the video. Students will share out what they discovered from this visual representation of events.
Then students will take this information and create a left-sided possibility activity about their understanding of the American Revolution from scenes, images, and notes taken in their notebooks. Look at the Student work on American Revolution and student work examples of AR to see how students processed the information from the video!
The remaining part of this lesson allows students to understand the sense of pride that is represented by our American flag. Our national anthem gives off the pride our nation felt during the past time of war, victories, and defeat. Students will work in pairs to read the Star-Spangled-Banner-History. This activity requires students to answer pre and post-reading questions about the significance of the flag's position and how it serves as a symbol of VICTORY for America.
Students will take 8 vocabulary words from the poem, The Star-Spangled Banner, and create ice cream cones locating the part of speech, definition, synonym, and affix of each word. The words students will find from the poem include
bombarded, invaded, negotiated, perilous, gallantly, haughty, vauntingly, desolation
The words listed above are just sample words that students can use to create their cones. See the BEAUTIFUL examples (Ice Cream Cone 1,Ice Cream Cone 2 and,Ice Cream Cone 3 ) of how students completed this task. One thing to note is that not all vocabulary words will have an affix (ending added to the main word). However, all of the other scoops can be filled out with the use of a reference book such as a dictionary. |
Valley networks provide compelling evidence that past geologic processes on Mars were different than those seen today. The generally accepted paradigm is that these features formed from groundwater circulation, which may have been driven by differential heating induced by magmatic intrusions, impact melt, or a higher primordial heat flux. Although such mechanisms may not require climatic conditions any different than today's, they fail to explain the large amount of recharge necessary for maintaining valley network systems, the spatial patterns of erosion, or how water became initially situated in the Martian regolith. In addition, there are no clear surface manifestations of any geothermal systems (e.g., mineral deposits or phreatic explosion craters). Finally, these models do not explain the style and amount of crater degradation. To the contrary, analyses of degraded crater morphometry indicate modification occurred from creep induced by rain splash combined with surface runoff and erosion; the former process appears to have continued late into Martian history. A critical analysis of the morphology and drainage density of valley networks based on Mars Global Surveyor data shows that these features are, in fact, entirely consistent with rainfall and surface runoff. The necessity for a cold, dry early Mars has been predicated on debatable astronomical and climatic arguments. A warm, wet early climate capable of supporting rainfall and surface runoff is the most plausible scenario for explaining the entire suite of geologic features in the Martian cratered highlands. |
Yew (Taxus baccata)
Yew is an evergreen conifer native to the UK, Europe and North Africa.
Common name: yew
Scientific name: Taxus baccata
UK provenance: native
Interesting fact: Taxus baccata can reach 400 to 600 years of age. Ten yew trees in Britain are believed to predate the 10th century.
What does yew look like?
Overview: mature trees can grow to 20m. The bark is reddish-brown with purple tones, and peeling. The yew is probably the most long-lived tree in northern Europe.
Leaves: straight, small needles with a pointed tip, and coloured dark green above and green-grey below. They grow in two rows on either side of each twig.
Flowers: yew is dioecious, meaning that male and female flowers grow on separate trees. These are visible in March and April. Male flowers are insignificant white-yellow globe-like structures. Female flowers are bud-like and scaly, and green when young but becoming brown and acorn-like with age.
Fruits: unlike many other conifers, the common yew does not actually bear its seeds in a cone. Instead, each seed is enclosed in a red, fleshy, berry-like structure known as an aril which is open at the tip.
The foliage and seed coat of yew contain a cocktail of highly toxic alkaloids. The aril (fleshy red part) is not toxic and is a special favourite of blackbirds, which act as efficient seed dispersers. Some birds, such as greenfinches, even manage to remove the toxic seed coat to get at the nutritious embryo.
Look out for: the needle-like leaves grow in two rows along a twig. Underneath, the needles each have a raised central vein.
Could be confused with: Irish yew (Taxus baccata 'fastigiata') or other planted yew species.
Identified in winter by: it is an evergreen so its features are present year round.
Where to find yew
It is commonly found growing in southern England and often forms the understory in beech woodland. It is often used as a hedging plant and has long been planted in churchyards.
Value to wildlife
Yew hedges in particular are incredibly dense, offering protection and nesting opportunities for many birds. The UK’s smallest birds - the goldcrest and firecrest - nest in broadleaf woodland with a yew understorey.
The fruit is eaten by birds such as the blackbird, mistle thrush, song thrush and fieldfare, and small mammals such as squirrels and dormice. The leaves are eaten by caterpillars of the satin beauty moth.
Mythology and symbolism
Yew trees have long been associated with churchyards and there are at least 500 churchyards in England which contain yew trees older than the building itself. It is not clear why, but it has been suggested that yew trees were planted on the graves of plague victims to protect and purify the dead, but also that graveyards were inaccessible to cows, which would die if they ate the leaves.
Yew trees were used as symbols of immortality, but also seen as omens of doom. For many centuries it was the custom for yew branches to be carried on Palm Sunday and at funerals. In Ireland it was said that the yew was ‘the coffin of the vine’, as wine barrels were made of yew staves.
How we use yew
Yew timber is rich orange-brown in colour, closely grained and incredibly strong and durable (hence why old trees can remain standing with hollow trunks). Traditionally the wood was used in turnery and to make long bows and tool handles. One of the world's oldest surviving wooden artifacts is a yew spear head, found in 1911 at Clacton-on-sea, in Essex, UK. It is estimated to be about 450,000 years old.
Yew is a popular hedging and topiary plant. Anti-cancer compounds are harvested from the foliage of Taxus baccata and used in modern medicine.
Medicinal uses and toxicity
Yew trees contain the highly poisonous taxane alkaloids that have been developed as anti-cancer drugs. Eating just a few leaves can make a small child severely ill and fatalities have occurred. All parts of the tree are poisonous, with the exception of the bright red arils. The black seeds inside them should not be eaten as they contain poisonous alkaloids.
Yew has a reputation for being indestructible, but it may be susceptible to root rot. |
If there's one thing life is good at, it's adapting to new environments, but that's not always a good thing. Bacteria are fast adapting to antibiotics, rendering drugs less and less effective and threatening to cast us back into the dark ages of medicine. Now a research project backed by the European Union is trying to turn that same process to our advantage, with an "evolution machine" that directs the evolution of bacteria by making changes to their environment, guiding them to produce molecules that could one day lead to new drugs.
The evolution machine, dubbed EVOPROG, was developed as part of the European Union's Future and Emerging Technologies (FET) initiative. The system contains bioreactors full of mixed bacteria species and bacteriophages – viruses that attack bacteria and change their DNA, which in this case makes them produce molecules with certain functions.
According to the researchers, EVOPROG functions like an analog computer: To make specific molecules users feed in certain chemicals, cell lines, or other raw materials, and the machine makes environmental changes to the bioreactor to guide the phages and bacteria to produce those molecules.
"Just like computers are programmed to carry out arithmetic or logical operations based on variables for performing different tasks, the new device can be programmed to carry out logical operations inside living cells using DNA as a variable to generate new molecules," says Alfonso Jaramillo, coordinator of the EVOPROG project.
Phages are some of the fastest-evolving organisms, so to make EVOPROG work the scientists engineered them to function like biological versions of the "IF/THEN" statements that are key to programming. Here they've been designed so that "IF" a certain change is artificially introduced into their environment, "THEN" they attach themselves to a certain type of bacteria (which have also been carefully engineered) to produce molecules. Like natural evolution, the most effective phages are then selected for and allowed to replicate.
"The phages interact with certain types of engineered bacteria, according to the types of molecules wanted, and replicate only if the mutated proteins they produce are compatible with the genes and chemicals present in those bacteria," says Jaramillo. "With our approach we can find new antimicrobial molecules by using evolution to kill bacteria. For example, if a given antibiotic molecule enters the bacterial cell and is immediately pumped out of the cell, thus not killing it, we can induce a selection of the DNA content of phages and drive them to block that bacterial pump so that the antibiotic will remain in the cell and kill it."
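As a loose software analogy only, and not anything resembling EVOPROG's actual implementation, the select-and-replicate logic described above can be sketched as a toy evolutionary loop in Python; the fitness function, population, and mutation size here are all invented for illustration:

import random

# Toy model of one directed-evolution experiment: candidates are scored
# against a target condition, the best half "replicate" with small mutations,
# and the population drifts toward higher fitness over many rounds.

def evolve(fitness, population, rounds=100, mutation=0.1):
    for _ in range(rounds):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: len(scored) // 2]                 # selection step
        offspring = [x + random.gauss(0, mutation) for x in survivors]
        population = survivors + offspring                     # replication + mutation
    return max(population, key=fitness)

# Example: evolve a number toward 42, a stand-in for "produces the wanted molecule".
best = evolve(lambda x: -abs(x - 42), [random.uniform(0, 100) for _ in range(20)])
print(round(best, 2))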
According to the team, EVOPROG's directed evolution process takes about two weeks to produce the desired molecules, which would allow scientists to experiment much faster than other development methods. And the system could not only help researchers identify new drug targets, but once a recipe has been nailed down it can be shared with other teams to allow them to produce the drug in their own lab. In that same spirit, the bioreactors themselves are 3D printed, so different designs can also be reproduced accurately and inexpensively.
"EVOPROG is an exciting approach to address the present bottleneck of engineering synthetic biological systems," says Baojun Wang, a biomedical engineer who was not involved with the project. "Not only does it have the potential to significantly expand the library of biological building blocks and shorten development time, but it could also lead to programmable, efficient phage therapeutics that may treat the fast-changing human bacterial pathogens in the future."
- convert.cpp: code illustrating
conversions to/from strings using <sstream> (that is,
- A common C++ problem is how to append a character to an STL string.
Here's code that does it:
char c = 'y';
string text = "howd", c_as_str(1, c);
text += c_as_str;
assert(text == "howdy");
This uses a constructor for string that takes an integer and a character
and returns a string containing that number of copies of the character.
- cmdline.cpp: A simple program to echo
command-line arguments. This will give you hints about how to process
these arguments. Main is actually invoked with three arguments, but only
the first two are useful for processing the command line. (The third
argument is a pointer to the environment settings; see the man page for
execve for more details.) Note that argv contains the
command itself complete with any path the user used to invoke the command.
To run this example on Unix (aka Linux), type
g++ cmdline.cpp -o cmdline
./cmdline this -is a -list of args.
The response will be
Arg 0: `./cmdline'
Arg 1: `this'
Arg 2: `-is'
Arg 3: `a'
Arg 4: `-list'
Arg 5: `of'
Arg 6: `args.'
Warning: Visual Studio sometimes defaults to creating 'tmain' instead of
'main'. It seems they should be equivalent, but sometimes this causes
problems. Change 'tmain' to 'main' in Visual Studio.
- getline.cpp: sample code reading all
input into a string variable. The old-fashioned way to read input is one
line at a time. The new way is to use the getline() function with
'\0' as the third argument to read all of input in one fell swoop.
The new way is more efficient and certainly simpler, though the amount of
input that can be read is limited by the size of the string variable. If
one may need to handle more than 4 GB of input, one could simply loop until
the end-of-file. To compile and run the program on itself, type:
g++ -o getline getline.cpp
The response will be
Read 382 characters.
- curdate.cpp: getting date and time
in C++ using standard libraries. See man pages about localtime (or help)
for more information.
- Testing C++ Programs - a complete example using
static test methods.
- issue-tracker, Windows/Unix version: code
illustrating connecting to a MySQL database using C++ from both
Windows and Unix/Linux.
- Issue tracking using MySQL: A sample
project for tracking issues in a MySQL database. This illustrates
connecting to a MySQL database from standard C++ programs. This
version connects only from Unix; for a version that connects from either
Windows or Unix see here.
- Socket-based server: this illustrates a
simple way to make a C++-based server on Linux systems.
- simple-sendmail.cpp: a
program which illustrates using the Unix mail command to send mail.
Compile it by
g++ -o simple-sendmail simple-sendmail.cpp
- Lazy Lists and Template Computations in C++: A
pretty bizarre application of templates in C++. Good for fun, probably
not very useful if you're trying to solve a real problem.
- local wxWidgets information |
Inner cell mass
Blastocyst with an inner cell mass and trophoblast.
Latin: embryoblastus; massa cellularis interna; pluriblastus senior
Gives rise to: epiblast, hypoblast
In early embryogenesis of most eutherian mammals, the inner cell mass (abbreviated ICM and also known as the embryoblast or pluriblast, the latter term being applicable to all mammals) is the mass of cells inside the primordial embryo that will eventually give rise to the definitive structures of the fetus. This structure forms in the earliest steps of development, before implantation into the endometrium of the uterus has occurred. The ICM lies within the blastocoele (more correctly termed "blastocyst cavity," as it is not strictly homologous to the blastocoele of anamniote vertebrates) and is entirely surrounded by the single layer of cells called trophoblast.
The physical and functional separation of the inner cell mass from the trophectoderm (TE) is a special feature of mammalian development and is the first cell lineage specification in these embryos. Following fertilization in the oviduct, the mammalian embryo undergoes relatively slow rounds of cleavage to produce an eight-cell morula. Each cell of the morula, called a blastomere, increases surface contact with its neighbors in a process called compaction. This results in a polarization of the cells within the morula, and further cleavage yields a blastocyst of roughly 32 cells. Generally, about 12 internal cells comprise the new inner cell mass and 20–24 cells comprise the surrounding trophectoderm.
The ICM and the TE will generate distinctly different cell types as implantation starts and embryogenesis continues. Trophectoderm cells form extraembryonic tissues, which act in a supporting role for the embryo proper. Furthermore, these cells pump fluid into the interior of the blastocyst, causing the formation of a polarized blastocyst with the ICM attached to the trophectoderm at one end (see figure). This difference in cellular localization causes the ICM cells exposed to the fluid cavity to adopt a primitive endoderm (or hypoblast) fate, while the remaining cells adopt a primitive ectoderm (or epiblast) fate. The hypoblast contributes to extraembryonic membranes and the epiblast will give rise to the ultimate embryo proper as well as some extraembryonic tissues.
Regulation of cellular specification
Since segregation of pluripotent cells of the inner cell mass from the remainder of the blastocyst is integral to mammalian development, considerable research has been performed to elucidate the corresponding cellular and molecular mechanisms of this process. There is primary interest in which transcription factors and signaling molecules direct blastomere asymmetric divisions leading to what are known as inside and outside cells and thus cell lineage specification. However, due to the variability and regulative nature of mammalian embryos, experimental evidence for establishing these early fates remains incomplete.
At the transcription level, the transcription factors Oct4, Nanog, Cdx2, and Tead4 have all been implicated in establishing and reinforcing the specification of the ICM and the TE in early mouse embryos.
- Oct4: Oct4 is expressed in the ICM and participates in maintaining its pluripotency, a role that has been recapitulated in ICM-derived mouse embryonic stem cells. Oct4 genetic knockout cells both in vivo and in culture display TE morphological characteristics. It has been shown that one transcriptional target of Oct4 is the Fgf4 gene. This gene normally encodes a ligand secreted by the ICM, which induces proliferation in the adjacent polar TE.
- Nanog: Nanog is also expressed in the ICM and participates in maintaining its pluripotency. In contrast with Oct4, studies of Nanog-null mice do not show the reversion of the ICM to a TE-like morphology, but demonstrate that loss of Nanog prevents the ICM from generating primitive endoderm.
- Cdx2: Cdx2 is strongly expressed in the TE and is required for maintaining its specification. Knockout mice for the Cdx2 gene undergo compaction, but lose the TE epithelial integrity during the late blastocyst stage. Furthermore, Oct4 expression is subsequently raised in these TE cells, indicating Cdx2 plays a role in suppressing Oct4 in this cell lineage. Moreover, embryonic stem cells can be generated from Cdx2-null mice, demonstrating that Cdx2 is not essential for ICM specification.
- Tead4: Like Cdx2, Tead4 is required for TE function, although the transcription factor is expressed ubiquitously. Tead4-null mice similarly undergo compaction, but fail to generate the blastocoel cavity. Like Cdx2-null embryos, the Tead4-null embryos can yield embryonic stem cells, indicating that Tead4 is dispensable for ICM specification. Recent work has shown that Tead4 may help to upregulate Cdx2 in the TE and its transcriptional activity depends on the coactivator Yap. Yap’s nuclear localization in outside cells allows it to contribute to TE specificity, whereas inside cells sequester Yap in the cytoplasm through a phosphorylation event.
Together these transcription factors function in a positive feedback loop that strengthens the ICM to TE cellular allocation. Initial polarization of blastomeres occurs at the 8-16 cell stage. An apical-basolateral polarity is visible through the visualization of apical markers such as Par3, Par6, and aPKC as well as the basal marker E-Cadherin. The establishment of such a polarity during compaction is thought to generate an environmental identity for inside and outside cells of the embryo. Consequently, stochastic expression of the above transcription factors is amplified into a feedback loop that specifies outside cells to a TE fate and inside cells to an ICM fate. In the model, an apical environment turns on Cdx2, which upregulates its own expression through a downstream transcription factor, Elf5. In concert with a third transcription factor, Eomes, these genes act to suppress pluripotency genes like Oct4 and Nanog in the outside cells. Thus, TE becomes specified and differentiates. Inside cells, however, do not turn on the Cdx2 gene; they express high levels of Oct4, Nanog, and Sox2. These genes suppress Cdx2, and the inside cells maintain pluripotency, generating the ICM and eventually the rest of the embryo proper.
Although this dichotomy of genetic interactions is clearly required to divide the blastomeres of the mouse embryo into both the ICM and TE identities, the initiation of these feedback loops remains under debate. Whether they are established stochastically or through an even earlier asymmetry is unclear, and current research seeks to identify earlier markers of asymmetry. For example, some research correlates the first two cleavages during embryogenesis with respect to the prospective animal and vegetal poles with ultimate specification. The asymmetric division of epigenetic information during these first two cleavages, and the orientation and order in which they occur, may contribute to a cell's position either inside or outside the morula.
Blastomeres isolated from the ICM of mammalian embryos and grown in culture are known as embryonic stem (ES) cells. These pluripotent cells, when grown in carefully coordinated media, can give rise to all three germ layers (ectoderm, endoderm, and mesoderm) of the adult body. For example, the cytokine LIF (leukemia inhibitory factor) is required for mouse ES cells to be maintained in vitro. Blastomeres are dissociated from an isolated ICM in an early blastocyst, and their transcriptional code governed by Oct4, Sox2, and Nanog helps maintain an undifferentiated state.
One benefit of the regulative nature of mammalian embryonic development is the ability to manipulate blastomeres of the ICM to generate knockout mice. In mouse, mutations in a gene of interest can be introduced retrovirally into cultured ES cells, and these can be reintroduced into the ICM of an intact embryo. The result is a chimeric mouse, which develops with a portion of its cells containing the ES cell genome. The aim of such a procedure is to incorporate the mutated gene into the germ line of the mouse such that its progeny will be missing one or both alleles of the gene of interest. Geneticists widely take advantage of this ICM manipulation technique in studying the function of genes in the mammalian system.
Upper Gastrointestinal Endoscopy
An upper gastrointestinal (or GI) endoscopy is a test that allows your doctor to look at the inside of your esophagus, stomach, and the first part of your small intestine, called the duodenum. The esophagus is the tube that carries food to your stomach. The doctor uses a thin, lighted tube that bends. It is called an endoscope, or scope.
The doctor puts the tip of the scope in your mouth and gently moves it down your throat. The scope is a flexible video camera. The doctor looks at a monitor (like a TV set or a computer screen) as he or she moves the scope. A doctor may do this procedure to look for ulcers, tumors, infection, or bleeding. It also can be used to look for signs of acid backing up into your esophagus. This is called gastroesophageal reflux disease, or GERD. The doctor can use the scope to take a sample of tissue for study (a biopsy). The doctor also can use the scope to take out growths or stop bleeding.
Why It Is Done
An upper GI endoscopy may be done to:
- Find what's causing you to vomit blood.
- Find the cause of symptoms, such as upper belly pain or bloating, trouble swallowing (dysphagia), vomiting, or unexplained weight loss.
- Find the cause of an infection, such as Helicobacter pylori (H. pylori).
- Find problems in the upper gastrointestinal (GI) tract. These problems can include:
- Inflammation of the esophagus (esophagitis) or the stomach (gastritis) or intestines (Crohn's disease).
- Gastroesophageal reflux disease (GERD).
- Celiac disease.
- A narrowing (stricture) of the esophagus.
- Enlarged and swollen veins in the esophagus or stomach. (These veins are called varices.)
- Barrett's esophagus, a condition that increases the risk for esophageal cancer.
- Hiatal hernia.
- Check the healing of stomach ulcers.
- Look at the inside of the stomach and upper small intestine (duodenum) after surgery.
- Look for a blockage in the opening between the stomach and duodenum.
Endoscopy may also be done to:
- Check for an injury to the esophagus in an emergency. (For example, this may be done if the person has swallowed poison.)
- Collect tissue samples (biopsy) to be looked at in the lab.
- Remove growths (polyps) from inside the esophagus, stomach, or small intestine.
- Treat upper GI bleeding that may be causing anemia.
- Remove foreign objects that have been swallowed or food that is stuck.
- Treat a narrow area of the esophagus.
- Treat Barrett's esophagus.
How To Prepare
Procedures can be stressful. This information will help you understand what you can expect. And it will help you safely prepare for your procedure.
Preparing for the procedure
- Do not eat or drink anything for 6 to 8 hours before the test. An empty stomach helps your doctor see your stomach clearly during the test. It also reduces your chances of vomiting. If you vomit, there is a small risk that the vomit could enter your lungs. (This is called aspiration.) If the test is done in an emergency, a tube may be inserted through your nose or mouth to empty your stomach.
- Do not take sucralfate (Carafate) or antacids on the day of the test. These medicines can make it hard for your doctor to see your upper GI tract.
- If your doctor tells you to, stop taking iron supplements 7 to 14 days before the test.
- Be sure you have someone to take you home. Anesthesia and pain medicine will make it unsafe for you to drive or get home on your own.
- Understand exactly what procedure is planned, along with the risks, benefits, and other options.
- Tell your doctor ALL the medicines, vitamins, supplements, and herbal remedies you take. Some may increase the risk of problems during your procedure. Your doctor will tell you if you should stop taking any of them before the procedure and how soon to do it.
- If you take aspirin or some other blood thinner, ask your doctor if you should stop taking it before your procedure. Make sure that you understand exactly what your doctor wants you to do. These medicines increase the risk of bleeding.
- Make sure your doctor and the hospital have a copy of your advance directive. If you don't have one, you may want to prepare one. It lets others know your health care wishes. It's a good thing to have before any type of surgery or procedure.
How It Is Done
How is an upper GI endoscopy done?
Before the test
Before the test, you will put on a hospital gown. If you are wearing dentures, jewelry, contact lenses, or glasses, remove them. For your own comfort, empty your bladder before the test.
Blood tests may be done to check for a low blood count or clotting problems. Your throat may be numbed with an anesthetic spray, gargle, or lozenge. This is to relax your gag reflex and make it easier to insert the endoscope into your throat.
During the test
You may get a pain medicine and a sedative through an intravenous (IV) line in your arm or hand. These medicines reduce pain and will make you feel relaxed and drowsy during the test. You may not remember much about the actual test.
You will be asked to lie on your left side with your head bent slightly forward. A mouth guard may be placed in your mouth to protect your teeth from the endoscope (scope). Then the lubricated tip of the scope will be guided into your mouth. Your doctor may gently press your tongue out of the way. You may be asked to swallow to help move the tube along. The scope is no thicker than many foods you swallow. It will not cause problems with breathing.
After the scope is in your esophagus, your head will be tilted upright. This makes it easier for the scope to slide down your esophagus. During the procedure, try not to swallow unless you are asked to. Someone may remove the saliva from your mouth with a suction device. Or you can allow the saliva to drain from the side of your mouth.
Your doctor will look through an eyepiece or watch a screen while he or she slowly moves the endoscope. The doctor will check the walls of your esophagus, stomach, and duodenum. Air or water may be injected through the scope to help clear a path for the scope or to clear its lens. Suction may be applied to remove air or secretions.
A camera attached to the scope takes pictures. The doctor may also insert tiny tools such as forceps, clips, and swabs through the scope to collect tissue samples (biopsy), remove growths, or stop bleeding.
To make it easier for your doctor to see different parts of your upper GI tract, someone may change your position or apply gentle pressure to your belly. After the exam is done, the scope is slowly pulled out.
After the test
You will feel groggy after the test until the medicine wears off. This usually takes a few hours. Many people report that they remember very little of the test because of the sedative given before and during the test.
If your throat was numbed before the test, don't eat or drink until your throat is no longer numb and your gag reflex has returned to normal.
How long the test takes
The test usually takes 30 to 45 minutes. But it may take longer, depending on what is found and what is done during the test.
How It Feels
You may notice a brief, sharp pain when the intravenous (IV) needle is placed in a vein in your arm. The local anesthetic sprayed into your throat usually tastes slightly bitter. It will make your tongue and throat feel numb and swollen. Some people report that they feel as if they can't breathe at times because of the tube in their throat. But this is a false sensation caused by the anesthetic. There is always plenty of breathing space around the tube in your mouth and throat. Remember to relax and take slow, deep breaths.
During the test, you may feel very drowsy and relaxed from the sedative and pain medicines. You may have some gagging, nausea, bloating, or mild cramping in your belly as the tube is moved. If you have pain, alert your doctor with an agreed-upon signal or a tap on the arm. Even though you won't be able to talk during the procedure, you can still communicate.
The suction machine that's used to remove secretions may be noisy, but it doesn't cause pain. The removal of biopsy samples is also painless.
Risks
Problems, or complications, are rare. There is a slight risk that your esophagus, stomach, or upper small intestine will get a small tear in it. If this happens, you may need surgery to fix it. There is also a slight chance of infection after the test.
Bleeding may also happen from the test or if a tissue sample (biopsy) is taken. But the bleeding usually stops on its own without treatment. If you vomit during the test and some of the vomit enters your lungs, aspiration pneumonia is a possible risk.
An irregular heartbeat may happen during the test. But it almost always goes away on its own without treatment.
The risk of problems is higher in people who have serious heart disease. It's also higher in older adults and people who are frail or physically weakened. Talk to your doctor about your specific risks.
Results
Your doctor may be able to talk to you about some of the findings right after your endoscopy. But the medicines you get to help relax you may impair your memory, so your doctor may wait until they fully wear off. It may take 2 to 4 days for some results. Tests for certain infections may take several weeks.
Normal:
- The esophagus, stomach, and upper small intestine (duodenum) look normal.
Abnormal:
- Inflammation or irritation is found in the esophagus, stomach, or small intestine.
- Bleeding, an ulcer, a tumor, a tear, or dilated veins are found.
- A hiatal hernia is found.
- A too-narrow section (stricture) is found in the esophagus.
- A foreign object is found in the esophagus, stomach, or small intestine.
A biopsy sample may be taken to:
- Find the cause of inflammation.
- Find out if tumors or ulcers contain cancer cells.
- Identify a type of bacteria called H. pylori that can cause ulcers or a fungus such as candida that sometimes causes infectious esophagitis.
Many conditions can affect the results of this test. Your doctor will discuss your results with you in relation to your symptoms and past health.
Current as of: September 8, 2021
Author: Healthwise Staff
E. Gregory Thompson MD - Internal Medicine
Adam Husney MD - Family Medicine
Jerome B. Simon MD, FRCPC, FACP - Gastroenterology
The Koreans, descended from Tungusic tribal peoples, are a distinct racial and cultural group. According to Korean legend, Tangun established Old Choson in NW Korea in 2333 BC, and the Korean calendar enumerates the years from this date. Chinese sources assert that Ki-tze (Kija), a Shang dynasty refugee, founded a colony at Pyongyang in 1122 BC, but the first Korean ruler recorded in contemporaneous records is Wiman, possibly a Chinese invader who overthrew Old Choson and established his rule in N Korea in 194 BC. Chinese forces subsequently conquered (c.100 BC) the eastern half of the peninsula. Lolang, near modern Pyongyang, was the chief center of Chinese rule.
Koguryo, a native Korean kingdom, arose in the north on both sides of the Yalu River by the 1st cent. AD; tradition says it was founded in 37 BC. By the 4th cent. AD it had conquered Lolang, and at its height under King Kwanggaet'o (r.391–413) occupied much of what is now Korea and NE China. In the 6th and 7th cent. the kingdom resisted several Chinese invasions. Meanwhile in the south, two main kingdoms emerged, Paekche (traditionally founded 18 BC, but significant beginning c.AD 250) in the west and Silla (traditionally founded 57 BC, but significant beginning c.AD 350) in the east. After forming an alliance with T'ang China, Silla conquered Paekche and Koguryo by 668, and then expelled the Chinese and unified much of the peninsula. Remnants of Koguryo formed the kingdom of Parhae (north of the Taedong River and largely in E Manchuria), which lasted until 926.
Under Silla's rule, Korea prospered and the arts flourished; Buddhism, which had entered Korea in the 4th cent., became dominant in this period. In 935 the Silla dynasty, which had been in decline for a century, was overthrown by Wang Kon, who had established (918) the Koryo dynasty (the name was selected as an abbreviated form of Koguryo and is the source of the name Korea). During the Koryo period, literature was cultivated, and although Buddhism remained the state religion, Confucianism—introduced from China during the Silla years and adapted to Korean customs—controlled the pattern of government. A coup in 1170 led to a period of military rule. In 1231, Mongol forces invaded from China, initiating a war that was waged intermittently for some 30 years. Peace came when Koryo accepted Mongol suzerainty, and a long period of Koryo-Mongol alliance followed. In 1392, Yi Songgye, a general who favored the Ming dynasty (which had replaced the Mongols in China), seized the throne and established the Choson dynasty.
The Choson (or Yi) dynasty, which was to rule until 1910, built a new capital at Hanseong (Seoul) and established Confucianism as the official religion. Early in the dynasty (15th cent.) printing with movable metal type, which had been developed two centuries earlier, became widely used, and the Korean alphabet was developed. The 1592 invasion by the Japanese shogun Hideyoshi was driven back by Choson and Ming forces, but only after six years of great devastation and suffering. Manchu invasions in the first half of the 17th cent. resulted in Korea being made (1637) a tributary state of the Manchu dynasty. Subsequent factional strife gave way, in the 18th cent., to economic prosperity and a cultural and intellectual renaissance. Korea limited its foreign contacts during this period and later resisted, longer than China or Japan, trade with the West, which led to its being called the Hermit Kingdom.
In 1876, Japan forced a commercial treaty with Korea, and to offset the Japanese influence, trade agreements were also concluded (1880s) with the United States and European nations. Japan's control was tightened after the First Sino-Japanese War (1894–95) and the Russo-Japanese War (1904–5), when Japanese troops moved through Korea to attack Manchuria. These troops were never withdrawn, and in 1905 Japan declared a virtual protectorate over Korea and in 1910 formally annexed the country. The Japanese instituted vast social and economic changes, building modern industries and railroads, but their rule (1910–45) was harsh and exploitative. Sporadic Korean attempts to overthrow the Japanese were unsuccessful, and after 1919 a provisional Korean government, under Syngman Rhee, was established at Shanghai, China.
In World War II, at the Cairo Conference (1943), the United States, Great Britain, and China promised Korea independence. At the end of the war Korea was arbitrarily divided into two zones as a temporary expedient; Soviet troops were north and Americans south of the line of lat. 38°N. The Soviet Union thwarted UN efforts to hold elections and reunite the country under one government. When relations between the Soviet Union and the United States worsened, trade between the two zones ceased; great economic hardship resulted, since the regions were economically interdependent, industry and trade being concentrated in the North and agriculture in the South.
In 1948 two separate regimes were formally established—the Republic of Korea in the South, and the Democratic People's Republic under Communist rule in the North. By mid-1949 all Soviet and American troops were withdrawn, and two rival Korean governments were in operation, each eager to unify the country under its own rule. In June, 1950, the North Korean army launched a surprise attack against South Korea, initiating the Korean War, and with it, severe hardship, loss of life, and enormous devastation.
After the war the boundary was stabilized along a line running from the Han estuary generally northeast across the 38th parallel to a point south of Kosong (Kuum-ni), with a no-man's land or demilitarized zone (DMZ), 1.24 mi (2 km) wide and occupying a total of 487 sq mi (1,261 sq km), on either side of the boundary. The western border in the ocean, though, was not defined, and fighting has occasionally occurred at sea. Throughout the 1950s and 60s an uneasy truce prevailed; thousands of soldiers were poised on each side of the demilitarized zone, and there were occasional shooting incidents. In 1971 negotiations between North and South Korea provided the first hope for peaceful reunification of the peninsula; in Nov., 1972, an agreement was reached for the establishment of joint machinery to work toward unification.
The countries met several times during the 1980s to discuss reunification, and in 1990 there were three meetings between the prime ministers of North and South Korea. These talks have yielded some results, such as the exchange of family visits organized in 1989. The problems blocking complete reunification, however, continue to be substantial. Two incidents of terrorism against South Korea were widely attributed to North Korea: a 1983 bombing that killed several members of the South Korean government, and the 1987 destruction of a South Korean airliner over the Thailand-Myanmar border. In 1996, North Korea said it would cease to recognize the demilitarized zone between the two Koreas, and North Korean troops made incursions into the zone. In 1999 a North Korean torpedo boat was sunk by a South Korean vessel in South Korean waters following a gun battle, and another deadly naval confrontation followed a North Korean incursion in 2002.
In early 2000, however, the North engaged in talks with a number of Western nations, seeking diplomatic relations, and South and North agreed to a presidential summit in Pyongyang. The historic and cordial meeting produced an accord that called for working toward reunification (though without specifying how) and for permitting visits between families long divided as a result of the war. Given the emotional appeal of reunification, it is likely that the North-South dialogue will continue, despite the problems involved; however, the tensions that developed in late 2002 have, for the time being, derailed any significant further reunification talks. Economic contacts continued to expand, however, and South Korea became a significant trade partner for the North. The North also received substantial aid from the South.
In 2007 a rail crossing through the DMZ was symbolically reopened when two trains made test runs on the rebuilt track; regular rail service, over a short line, began late in the year. A second North-South presidential summit in Pyongyang occurred in Aug., 2007; both leaders called for negotiations on a permanent peace treaty to replace the armistice that ended the Korean War. Relations between the two nations subsequently soured, as a result of the election (2007) of Lee Myung Bak as president of South Korea and the sinking (2010) of a South Korean naval vessel by the North. Most joint projects came to an end, and trade between the two nations greatly decreased by 2010. Kim Jong Un's succession in the North in 2011 further worsened relations, which were increasingly strained by the North's ongoing development of missile and nuclear technology. Many U.S. troops still remain in the South, though their numbers have decreased since the 1960s and the number of U.S. bases has been greatly reduced.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Learning a new alphabet, syllabary or other writing system can be tricky. The difficulty of this task depends on the complexity of the writing system you're trying to learn. Below are some tips on how to go about this task.
Learn the letters or symbols a few at a time rather than all in one go. Pay particular attention to letters with a similar appearance and to 'false friends': letters which look like ones you already know but are not the same. For example, in Russian the following letters look like English letters but are pronounced differently: B = [v], H = [n], C = [s] and P = [r]. So now you should have no difficulty deciphering the Russian word PECTOPAH - it means restaurant and can be transliterated as RESTORAN.
Try to associate the shapes of letters with familiar objects: some letters may look like letters or numerals in your own alphabet, others may remind you of animals, objects or people. Books which teach children how to read use these techniques and will be useful if you can get hold of them.
Practice writing the letters as often as possible. Learning the standard way to form the letters: i.e. the shape, direction and order of strokes, will help you to memorise them. If possible, take a calligraphy class, especially if you're studying Chinese or Japanese. This will enable you to improve your handwriting and to read other people's handwriting.
Practice writing things in the new alphabet then transliterating them into your own alphabet. Then try transliterating them back into the new alphabet. Also try writing your own language in the new alphabet.
You could also practice your writing by keeping a diary and/or by writing to a penpal who speaks the language. You can find penpals interested in learning languages and helping others to learn their languages at: www.mylanguageexchange.com.
Practice reading texts written in the new alphabet as often as possible. Even if you don't know all the letters or symbols, you will be able to make out some of the words and to guess some of the others. Look out for the names of people and places and for loan-words from your own language as these tend to be relatively easy to spot and decipher.
At first you'll probably find that you have to sound out letters individually before you can decipher the words. Eventually you'll be able to recognise words by their shapes and will only need to sound out the letters of unfamiliar words. You probably went through the same process when learning to read your native language.
Label things around your home or office in the new alphabet with translations in your own language. This will increase your exposure to the new alphabet and help you to recognise key words and phrases.
Try reading aloud whatever material you get hold of. There are links to online newspapers and magazines in many different languages on relevant pages of this site. If you know a native speaker who is willing to help, ask him or her to read things aloud. Then try to do the same and ask your friend to correct your mistakes.
Scientists have for the first time grown living models of the earliest stages of human embryos in the laboratory. The research aims to examine the early developmental events that can cause miscarriages and birth defects. The scientists constructed the models to match the cell types, biochemical activity, and overall structure of real embryos. The study was carried out at two centers, Monash University in Australia and the University of Texas in the US.
The international limit for growing human embryos in the laboratory is set at 14 days for ethical reasons. The scientists kept this limit in mind while working on living models like blastoids. The research team previously reported that the initial stage of embryonic development takes around 5-10 days.
The team grew the blastoids from stem cells obtained in one of two ways: by reprogramming adult cells or by extracting stem cells from embryos.
In some cases, the blastoids mimicked the behavior of implantation into the uterus, while in others the new cells attached to the culture dish, where they could transform into a placenta. The blastoids contained around 100 cells that were beginning to separate from each other into cell types that would later produce different tissues.
5 Axis CNC Machining
5 axis machining is a subtractive manufacturing process that cuts raw material down to the desired finished product, using tools that mill or turn the material along 5 different axes. Our 5 axis CNC (computer numerical control) machines use multiple angles and subtractive processes to form intricately designed parts or pieces for the most delicate industries. The use of multiple axes, on both the tool and the block of material, is controlled by a sophisticated computer. This process efficiently produces the most complex and geometric parts through cutting tools that rotate as the material rotates on a holding device.
The development of the first CNC machine began in the 1940s when John T. Parsons used a numerical system to build helicopter rotors. From there, a team of researchers at MIT used punched tape to encode the instructions needed to precisely produce the desired parts. Computer-Assisted Manufacturing (CAM) came into the picture in 1959, when manufacturers could draw a design on a computer to feed a milling machine the exact coordinates and instructions on how to create a piece from a block. MIT gets the credit for the first fully computer-controlled milling machine!
Why 5 Axis Machining?
5 axis machining is when precision meets speed and motion. Imagine a cutting tool that can freely handle machining from all angles without operators having to constantly move and shift material. 5 axis machining eliminates manual rotation as the machine does the work precisely, by itself. That leads to better accuracy, higher production rates, and greater ability to manufacture larger, more complex parts, all in a faster time than other CNC machines.
3 axis machining, in comparison, is best for mostly flat products and pieces. Blocks are fixed onto a machine bed, while operators have to reposition the blocks for unconventional shapes and deeper cavities. This leaves room for slight variances or rough edges introduced by manual handling, and it means machinists take longer to accurately finish a product. 5 axis machining offers more precise, smooth surface finishes and dimensional stability for complex shapes and parts.
What are the 5 Axis in Machining?
Axes are designated letters that determine the direction a cutting tool must go, which for 3 axis and 5 axis CNC machines are…
- X axis (left to right)
- Y axis (top to bottom/up and down)
- Z axis (front to back/in and out)
- A axis (rotation around X axis)
- B axis (rotation around the Y axis)
Whereas the 3 axis machines depend on X, Y, and Z axes to compute the level of detail it needs, 5 axis machines use those 3 axes plus the A axis and B axis to rotate the tool and/or the block of material.
Types of 5 Axis CNC Machining
There are three types of CNC machining strategically involving a 5 axis CNC machine: indexed machining, continuous machining, and mill-turn machining.
Indexed CNC Machining
With an indexed CNC machine, also known as 3+2 axis machining, cutting tools only move continuously along three axes, while the table that holds the block of material will swivel in two rotational axes. 4 axis machining, acting as 3+1 machining, is where the spindle tool moves on 3 axes (X, Y, and Z axes) and a table holding the material moves and rotates on the fourth axis, the A axis. The rotation makes the 4 axis CNC machine produce faster than a 3 axis, but still involves some manpower to reposition the material. Indexed machining is still more precise and efficient than a 3-axis machine, but not as fast as a tool on 5 axes because the tool intermittently touches the block piece.
Continuous CNC Machining
A continuous 5 axis machine lets the cutting tool move on all five axes to ensure that the tip of the tool maintains the same point on the block. The cutting tool maintains engagement with the material at all 5 machining angles even when the holding table moves, maximizing use of the entire workstation with much less manpower involved.
Mill-Turn CNC Machining
Mill-turn machining is when a block has the continuous ability to rotate while the cutting tool mills the block from all five rotational axes, becoming a hybrid machine that is by far the most efficient type of CNC machining.
CNC Turning and CNC Milling
5 axis CNC machining consists of milling and turning processes. A CNC milling machine uses rotating precision tools on various axes while holding the block of raw material in place. The milling machine receives the code to remove material from the solid block on the five axes whether or not the block moves with the cutting tool. The milling process can handle more complex detail in a part or product whereas turning is optimal for producing round or cylindrical shapes.
There are multiple ways that a milling machine can cut a block of material, but the two most distinct are conventional and climbing. Conventional milling is when the tool cuts in the opposite direction of the block. Cutting in the opposite direction causes the friction that quickly shaves down the block. Climb milling involves the tool rotating with the direction of the block’s feed, thinly cutting the block in less wear and tear than the conventional method.
While CNC milling machines rely on high precision cutting tools, CNC turning relies on the involvement of rotating the potential part on various axes to feed the cutting tool at high speeds. Turning centers have taken the old lathe and applied CNC technologies to churn out parts more efficiently. The turning tool is typically mounted or fixed so that it can produce deep cavities. The turning tool performs several different operations to rotate material into the desired part – turning, parting, chamfering, knurling, drilling, facing, and boring.
- Turning removes material from a block along a linear axis relative to the workstation, typically cutting down the diameter of a cylindrical part.
- Parting uses a lathe tool to cut a piece into two at specific machining angles.
- Chamfering eliminates the sharp edges from a corner of a piece.
- Knurling creates patterns of straight, angled, or crossed lines on parts.
- Drilling removes material from the inside of a block.
- Facing cuts a flat surface that is perpendicular to the block's rotational axis.
- Boring enlarges holes that have already been drilled to create even higher precision.
Proform Manufacturing’s 5-Axis Machining Capabilities
Proform American Manufacturing prides itself in manufacturing the best in precision parts. We are experts at providing any manufacturing needs, from small batch medical parts to mass production parts to large construction weldments. We use the latest in CAD and CAM technology with 5 axis machining and some of the largest variety of equipment necessary to fulfill any job. We have over 30 cutting machines in-house that create parts out of hard and soft metal, from aluminum to carbon steel. Our team is skilled in a multitude of fabrication services from start to finish.
Contact Us to learn more about our Precision 5 Axis Machining or other Manufacturing Capabilities.
The discovery of new medications to treat psychiatric disorders requires the ability to identify promising new chemical compounds and weed out less effective ones. This can be accomplished by setting up screening procedures that predict the effectiveness of different compounds in patients.
Brain system malfunctions can result in various psychiatric disorders, such as depression. Physical symptoms, including difficulty sleeping and loss of appetite, probably result from excessive production of stress hormones.
Since the 1950s, animal models have been central to the discovery of drug treatments for such serious disorders as depression, anxiety disorders, and schizophrenia. The most efficient and useful screening procedures involved testing medications with rats and mice and observing their effects on specific, well-characterized behaviors. Tests of anxiety and anxiolytic drugs are examples of the successful use of screening procedures.
Here's how it works: Rats and mice avoid predators, such as birds of prey, by favoring dark, enclosed spaces over open ones. However, these animals must occasionally venture into light, uncovered spaces to obtain food. The balance between how much time a rat or mouse spends in dark versus light spaces has been used to identify many anxiolytic drugs. While there are many specific versions of such tests, the major theme is that a drug's ability to increase the proportion of time a rodent is willing to spend in the open is a good predictor of relief of anxiety symptoms in human patients.
Despite progress in developing treatments for many psychiatric disorders, our understanding of what goes wrong in the brain remains incomplete. One important set of clues is that disorders such as autism, schizophrenia, and bipolar disorder are highly influenced by genes. The genetic contribution to these illnesses is complicated, with many genes contributing to risk along with environmental factors.
In a small number of families affected by autism or schizophrenia, however, a single genetic mutation may cause the disorder. Animal models are important in helping scientists understand how these genetically based disease risks produce symptoms. It is now possible to replace a mouse gene with the equivalent human gene associated with disease. For example, in some cases in which autism appears to be caused by a single gene, the human gene has been inserted inside a group of experimental mice, replacing their original mouse gene. Interestingly, the mice exhibit behaviors highly reminiscent of human autism. Depending on the precise gene that has been tested, the mice with the human gene may avoid social interactions with other mice, engage in repetitive behaviors, or both.
Scientists then use such mice to search for causes for the brain abnormalities. The ultimate goal is not only to understand the disease processes, but also to gain clues for the development of new, effective treatments. It is still early days in human gene identification and in the creation of genetic mouse models of brain disorders, but this approach is showing enormous promise.
One of the challenges of brain research is that this highly complex organ contains thousands of distinct types of neurons, which give rise to many complex circuits. A new technique that has powerful implications for understanding brain disorders permits us to manipulate individual types of cells and circuits. This technology, called optogenetics, involves the insertion of genes into particular neurons in the brains of mice. These genes cause the neurons to fire in response to specific colors of light, providing information about the cellular changes that arise as a result. This new technology has already contributed to the analysis of many circuits that play a role in normal thought and emotions, as well as in brain disorders. |
The biggest medical breakthrough of the past few decades is almost certainly CRISPR, the groundbreaking gene-editing tech that enables scientists to add or remove genes with incredible precision. It has the potential to cure disease, create designer babies, and even exterminate entire species.
But one possible use for CRISPR is easier to overlook, and that is its potential for storing information in DNA.
DNA is the best storage mechanism in the world. The entire blueprint for our species is stored inside the center of almost every single one of our cells, and the complete collection of all the world's written works can be encoded onto only a few pounds of the stuff. Plus, DNA is durable and can last a very long time with little degradation.
CRISPR makes it easy to edit existing DNA or add new DNA anywhere in the genome. This makes it ideal for writing new information, and a group of Harvard researchers did just that, encoding an image and a short video clip using the technology.
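As a purely illustrative aside, and not the encoding used in the Harvard work, digital data is often described as mapping onto DNA at two bits per base (A, C, G, T). The sketch below shows that generic convention; the function name and sample string are made up for illustration.
#include <iostream>
#include <string>
using namespace std;

// Map every 2-bit pair of each byte onto one of the four bases (00->A, 01->C, 10->G, 11->T).
string encode_to_bases(const string& data) {
    const char bases[4] = {'A', 'C', 'G', 'T'};
    string dna;
    for (unsigned char byte : data)
        for (int shift = 6; shift >= 0; shift -= 2)
            dna += bases[(byte >> shift) & 0x3];
    return dna;
}

int main() {
    cout << encode_to_bases("Hi") << endl;   // two bytes become an 8-base sequence
    return 0;
}
Reversing the mapping recovers the original bytes, which is what makes the molecule usable as a storage medium.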
Storing data on DNA is not a new idea, but using CRISPR to do it could dramatically simplify the process. More importantly, CRISPR lets scientists store data on the DNA inside living organisms, instead of just disembodied DNA floating inside a tube.
This opens up brand new opportunities, not just in data storage but also in exploration and discovery. If we can use DNA to store information, we could also use it to record information, meaning we can use CRISPR to turn cells into tiny records of what they're doing and what's happening to them. Using CRISPR, we can better understand how different cells and organisms operate or develop. |
The programs in this language are called scripts. They can be written right in a web page’s HTML and run automatically as the page loads.
Scripts are provided and executed as plain text. They don’t need special preparation or compilation to run.
Different engines have different “codenames”. For example:
- V8 – in Chrome, Opera and Edge.
- SpiderMonkey – in Firefox.
The terms above are good to remember because they are used in developer articles on the internet. We’ll use them too. For instance, if “a feature X is supported by V8”, then it probably works in Chrome, Opera and Edge.
Engines are complicated. But the basics are easy.
- The engine (embedded if it’s a browser) reads (“parses”) the script.
- Then it converts (“compiles”) the script to machine code.
- And then the machine code runs, pretty fast.
The engine applies optimizations at each step of the process. It even watches the compiled script as it runs, analyzes the data that flows through it, and further optimizes the machine code based on that knowledge.
- Add new HTML to the page, change the existing content, modify styles.
- React to user actions, run on mouse clicks, pointer movements, key presses.
- Send requests over the network to remote servers, download and upload files (so-called AJAX and COMET technologies).
- Get and set cookies, ask questions to the visitor, show messages.
- Remember the data on the client-side (“local storage”).
Examples of such restrictions include:
Modern browsers allow it to work with files, but the access is limited and only provided if the user does certain actions, like "dropping" a file into a browser window or selecting it via an <input> tag.
This limitation is, again, for the user's safety. A page from http://anysite.com which a user has opened must not be able to access another browser tab with the URL http://gmail.com, for example, and steal information from there.
- Full integration with HTML/CSS.
- Simple things are done simply.
- Supported by all major browsers and enabled by default.
That’s to be expected, because projects and requirements are different for everyone.
Modern tools make the transpilation very fast and transparent, actually allowing developers to code in another language and auto-converting it “under the hood”.
Examples of such languages:
- TypeScript is concentrated on adding “strict data typing” to simplify the development and support of complex systems. It is developed by Microsoft.
- Flow also adds data typing, but in a different way. Developed by Facebook.
- Kotlin is a modern, concise and safe programming language that can target the browser or Node.
1. Describe the contents and positions of the things that make up our solar system. What are the observed patterns (motion, composition, etc.) of objects in our solar system? What are the observed exceptions to these patterns? Explain the current theory of how our solar system formed, highlighting how it explains both the patterns and exceptions.
2. Describe what a journey into the Sun might be like. The journey should begin as the travellers are approaching the Sun, and conclude when they reach the core. Include everything that would be encountered, including all the zones within the Sun, and describe how the nature of the different zones would affect the trip.
3. Describe how a star forms, from its earliest ingredients until its arrival on the main sequence. Include details on the forces at play, and what happens at each phase of development. Also include details of what happens in fusion reactions.
4. Contrast the lives and deaths of low-mass stars versus high-mass stars. Detail the differences in the stages of death, and what is left at the end. Include how the stars move on the H-R diagram as their lives come to an end.
How to measure the distance
You can measure the distance an object has moved by its displacement. Displacement is the difference in position of an object from its starting point, that is, the straight-line change in position; it equals the total distance traveled only when the object moves in a straight line without backtracking. To measure the displacement of an object, you will need a ruler or measuring tape. Place the ruler or measuring tape at the object's starting point, and then measure the length from the starting point to the ending point. This will give you the displacement of the object.
Use a standard ruler or measuring tape
To measure distance, you will need a standard ruler or measuring tape. Place the ruler or tape measure at the starting point, then measure to the end point. Make sure to use the same units of measurement (inches, centimeters, etc.) throughout your measuring.
Use a GPS system
There are a number of ways to measure distance, but one of the most common is by using a GPS system. A GPS system uses satellites to determine the exact location of an object on Earth, and by tracking the movement of the object, it can calculate how far it has moved.
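As a hedged illustration of the arithmetic involved (not tied to any particular GPS device or product), the distance between two latitude/longitude fixes is commonly computed with the haversine formula, assuming a spherical Earth with a mean radius of 6371 km:
#include <cmath>
#include <iostream>

// Great-circle distance in kilometres between two lat/lon points given in degrees.
double haversine_km(double lat1, double lon1, double lat2, double lon2) {
    const double kPi = 3.14159265358979323846;
    const double toRad = kPi / 180.0;
    double dLat = (lat2 - lat1) * toRad;
    double dLon = (lon2 - lon1) * toRad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
               std::cos(lat1 * toRad) * std::cos(lat2 * toRad) *
               std::sin(dLon / 2) * std::sin(dLon / 2);
    return 6371.0 * 2.0 * std::atan2(std::sqrt(a), std::sqrt(1.0 - a));
}

int main() {
    // Two fixes a short walk apart; the result is the straight-line ground distance.
    std::cout << haversine_km(40.7128, -74.0060, 40.7306, -73.9866) << " km\n";
    return 0;
}
Summing such segment distances over successive fixes gives the total distance the object has moved along its track.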
How to calculate distance
You can calculate how far an object has moved if you know the object's starting position and ending position. To find the distance, subtract the starting position from the ending position. If you also know the amount of time the move took, dividing the distance by that time gives the object's average speed rather than the distance itself.
Use the formula: d = rt
d = rt
d = distance
r = rate
t = time
distance (d) is equal to rate (r) multiplied by time (t).
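A tiny worked example of the formula, with made-up numbers:
#include <iostream>

int main() {
    double rate = 60.0;              // speed in kilometres per hour
    double time = 1.5;               // time in hours
    double distance = rate * time;   // d = rt
    std::cout << "Distance moved: " << distance << " km" << std::endl;   // prints 90 km
    return 0;
}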
Use an online distance calculator
There are a number of ways to calculate distance, but one of the most convenient is to use an online distance calculator. Simply enter the starting point and ending point of your journey, and the calculator will do the rest.
Of course, you can also calculate distance manually, using a map and a ruler or measuring tape. However, this can be time-consuming and may not be quite as accurate as using a distance calculator.
How to estimate distance
You can use the average speed of an object to calculate how far it has moved. You will need the total time that the object was in motion and the average speed of the object. You can also use the distance formula, which is distance = rate x time.
Use landmarks to estimate distance
To estimate distance, use landmarks as reference points. For example, if you're looking at a mountain in the distance, you can use the trees in the foreground to judge how far away the mountain is. To do this, first find a landmark that is a known distance from you, such as a tree. Then judge how many multiples of that known distance would fit between you and the object. Multiplying the known distance by that number gives the object's estimated distance.
Use your pace to estimate distance
When hiking or running, you can use your pace to estimate distance. Your pace is the number of steps you take in a minute. To find your pace, count the number of times your left foot hits the ground in one minute and double it. If you counted 30 times, your pace is about 60 steps per minute. Multiplying your pace by your average step length and by the time you have been moving gives an estimate of the distance covered. You can use this method to estimate the distance of a known trail or route, or to find your way back to a starting point.
How to convert units of measurement
In order to convert the units of measurement, you need to know the formula for the conversion. For example, to convert from kilometers to miles, you need to know that 1 kilometer is equal to 0.62137 miles. To convert from centimeters to inches, you need to know that 1 centimeter is equal to 0.3937 inches.
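A short sketch that applies the two conversion factors quoted above (the factors come from the text; the sample values are made up):
#include <iostream>

int main() {
    const double kMilesPerKilometre = 0.62137;    // 1 km = 0.62137 mi, as quoted above
    const double kInchesPerCentimetre = 0.3937;   // 1 cm = 0.3937 in, as quoted above

    double km = 5.0, cm = 30.0;
    std::cout << km << " km = " << km * kMilesPerKilometre << " miles\n";
    std::cout << cm << " cm = " << cm * kInchesPerCentimetre << " inches\n";
    return 0;
}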
Use an online converter
There are many tools available online to help you convert between units of measurement. For example, the Wolfram Alpha website has a large database of unit conversions that can be used for free.
To use an online converter, simply enter the value you wish to convert, select the units of measurement, and click the “Convert” button. The converted value will appear in the output field.
It is important to note that not all online converters are created equal. Some converters may use outdated or incorrect conversion factors, so it is always best to check the results against a reputable source before relying on them.
Use a calculator
To convert units of measurement, you will need a calculator.
First, decide what unit of measurement you will be using. For example, if you are trying to convert kilometers to miles, you will need to use the unit of kilometers.
Next, determine the conversion factor between the two units of measurement. In the case of kilometers to miles, the conversion factor is 0.621371192.
Once you have the conversion factor, simply multiply it by the number of units you are trying to convert. So, if you wanted to convert 5 kilometers to miles, you would multiply 5 by 0.621371192 to get 3.10685596 miles.
The history of the telescope can be traced to before the invention of the earliest known telescope, which appeared in 1608 in the Netherlands, when a patent was submitted by Hans Lippershey, an eyeglass maker. Although Lippershey did not receive his patent, news of the invention soon spread across Europe. The design of these early refracting telescopes consisted of a convex objective lens and a concave eyepiece. Galileo improved on this design the following year and applied it to astronomy. In 1611, Johannes Kepler described how a far more useful telescope could be made with a convex objective lens and a convex eyepiece lens. By 1655, astronomers such as Christiaan Huygens were building powerful but unwieldy Keplerian telescopes with compound eyepieces.
Isaac Newton is credited with building the first reflector in 1668 with a design that incorporated a small flat diagonal mirror to reflect the light to an eyepiece mounted on the side of the telescope. Laurent Cassegrain in 1672 described the design of a reflector with a small convex secondary mirror to reflect light through a central hole in the main mirror.
The achromatic lens, which greatly reduced color aberrations in objective lenses and allowed for shorter and more functional telescopes, first appeared in a 1733 telescope made by Chester Moore Hall, who did not publicize it. John Dollond learned of Hall's invention and began producing telescopes using it in commercial quantities, starting in 1758.
Important developments in reflecting telescopes were John Hadley's production of larger paraboloidal mirrors in 1721; the process of silvering glass mirrors introduced by Léon Foucault in 1857; and the adoption of long-lasting aluminized coatings on reflector mirrors in 1932. The Ritchey–Chrétien variant of the Cassegrain reflector was invented around 1910, but not widely adopted until after 1950; many modern telescopes, including the Hubble Space Telescope, use this design, which gives a wider field of view than a classic Cassegrain.
During the period 1850–1900, reflectors suffered from problems with speculum metal mirrors, and a considerable number of "Great Refractors" were built from 60 cm to 1 metre aperture, culminating in the Yerkes Observatory refractor in 1897; however, starting from the early 1900s a series of ever-larger reflectors with glass mirrors were built, including the Mount Wilson 60-inch (1.5 metre), the 100-inch (2.5 metre) Hooker Telescope (1917) and the 200-inch (5 metre) Hale telescope (1948); essentially all major research telescopes since 1900 have been reflectors. A number of 4-metre class (160 inch) telescopes were built on superior higher altitude sites including Hawaii and the Chilean desert in the 1975–1985 era. The development of the computer-controlled alt-azimuth mount in the 1970s and active optics in the 1980s enabled a new generation of even larger telescopes, starting with the 10-metre (400 inch) Keck telescopes in 1993/1996, and a number of 8-metre telescopes including the ESO Very Large Telescope, Gemini Observatory and Subaru Telescope.
The era of radio telescopes (along with radio astronomy) was born with Karl Guthe Jansky's serendipitous discovery of an astronomical radio source in 1931. Many types of telescopes were developed in the 20th century for a wide range of wavelengths from radio to gamma-rays. The development of space observatories after 1960 allowed access to several bands impossible to observe from the ground, including X-rays and longer wavelength infrared bands.
See also: History of optics.
Objects resembling lenses date back 4000 years, although it is unknown whether they were used for their optical properties or just as decoration. Greek accounts of the optical properties of water-filled spheres (5th century BC) were followed by many centuries of writings on optics, including those of Ptolemy (2nd century), who wrote in his Optics about the properties of light including reflection, refraction, and color, and later Ibn Sahl (10th century) and Ibn al-Haytham (11th century).
Actual use of lenses dates back to the widespread manufacture and use of eyeglasses in Northern Italy beginning in the late 13th century. The invention of the use of concave lenses to correct near-sightedness is ascribed to Nicholas of Cusa in 1451.
The first record of a telescope comes from the Netherlands in 1608. It is in a patent filed by Middelburg spectacle-maker Hans Lippershey with the States General of the Netherlands on 2 October 1608 for his instrument "for seeing things far away as if they were nearby". A few weeks later another Dutch instrument-maker, Jacob Metius also applied for a patent. The States General did not award a patent since the knowledge of the device already seemed to be ubiquitous but the Dutch government awarded Lippershey with a contract for copies of his design.
The original Dutch telescopes were composed of a convex and a concave lens—telescopes that are constructed this way do not invert the image. Lippershey's original design had only 3x magnification. Telescopes seem to have been made in the Netherlands in considerable numbers soon after this date of "invention", and rapidly found their way all over Europe.
In 1655 Dutch diplomat William de Boreel tried to solve the mystery of who invented the telescope. He had a local magistrate in Middelburg follow up on Boreel's childhood and early adult recollections of a spectacle maker named "Hans" whom he remembered as the inventor of the telescope. The magistrate was contacted by a then unknown claimant, Middelburg spectacle maker Johannes Zachariassen, who testified that his father, Zacharias Janssen, invented the telescope and the microscope as early as 1590. This testimony seemed convincing to Boreel, who now recollected that Zacharias and his father, Hans Martens, must have been the people he remembered. Boreel's conclusion that Zacharias Janssen invented the telescope a little ahead of another spectacle maker, Hans Lippershey, was adopted by Pierre Borel in his 1656 book De vero telescopii inventore. Discrepancies in Boreel's investigation and Zachariassen's testimony (including Zachariassen misrepresenting his date of birth and role in the invention) have led some historians to consider this claim dubious. The "Janssen" claim would nevertheless continue over the years, added to by Zacharias Snijder, who in 1841 presented four iron tubes with lenses in them claimed to be 1590 examples of Janssen's telescope, and by historian Cornelis de Waard's 1906 claim that the man who tried to sell a broken telescope to astronomer Simon Marius at the 1608 Frankfurt Book Fair must have been Janssen.
In 1682, in the minutes of the Royal Society in London, Robert Hooke noted that Thomas Digges' 1571 Pantometria (a book on measurement, partially based on his father Leonard Digges' notes and observations) seemed to support an English claim to the invention of the telescope, describing Leonard as having a far-seeing glass in the mid 1500s based on an idea by Roger Bacon. Thomas described it as "by proportional Glasses duly situate in convenient angles, not only discovered things far off, read letters, numbered pieces of money with the very coin and superscription thereof, cast by some of his friends of purpose upon downs in open fields, but also seven miles off declared what hath been done at that instant in private places." Comments on the use of proportional or "perspective glass" are also made in the writings of John Dee (1575) and William Bourne (1585). Bourne was asked in 1580 to investigate the Digges device by Queen Elizabeth I's chief advisor Lord Burghley. Bourne's is the best description of it, and from his writing it seems to have consisted of peering into a large curved mirror that reflected the image produced by a large lens. The idea of an "Elizabethan telescope" has been expanded over the years, with astronomer and historian Colin Ronan concluding in the 1990s that this reflecting/refracting telescope was built by Leonard Digges between 1540 and 1559. This "backwards" reflecting telescope would have been unwieldy: it needed very large mirrors and lenses to work, the observer had to stand backwards to look at an upside-down view, and Bourne noted it had a very narrow field of view, making it unsuitable for military purposes. The optical performance required to see the details of coins lying about in fields, or private activities seven miles away, seems to be far beyond the technology of the time, and it may be that the "perspective glass" being described was a far simpler idea, originating with Bacon, of using a single lens held in front of the eye to magnify a distant view.
Translations of the notebooks of Leonardo da Vinci and Girolamo Fracastoro show both using water-filled crystals or a combination of lenses to magnify the Moon, although the descriptions are too sketchy to determine whether they were arranged like a telescope.
A 1959 research paper by Simon de Guilleuma claimed that evidence he had uncovered pointed to the French born spectacle maker Juan Roget (died before 1624) as another possible builder of an early telescope that predated Hans Lippershey's patent application.
Lippershey's application for a patent was mentioned at the end of a diplomatic report on an embassy to Holland from the Kingdom of Siam sent by the Siamese king Ekathotsarot: Ambassades du Roy de Siam envoyé à l'Excellence du Prince Maurice, arrivé à La Haye le 10 Septemb. 1608 (Embassy of the King of Siam sent to his Excellency Prince Maurice, arrived at The Hague on 10 September 1608). This report was issued in October 1608 and distributed across Europe, leading to experiments by other scientists, such as the Italian Paolo Sarpi, who received the report in November, and the English mathematician and astronomer Thomas Harriot, who used a six-powered telescope by the summer of 1609 to observe features on the moon.
The Italian polymath Galileo Galilei was in Venice in June 1609 and there heard of the "Dutch perspective glass", a military spyglass, by means of which distant objects appeared nearer and larger. Galileo states that he solved the problem of the construction of a telescope the first night after his return to Padua from Venice and made his first telescope the next day by using a convex objective lens in one extremity of a leaden tube and a concave eyepiece lens in the other end, an arrangement that came to be called a Galilean telescope. A few days afterwards, having succeeded in making a better telescope than the first, he took it to Venice where he communicated the details of his invention to the public and presented the instrument itself to the doge Leonardo Donato, who was sitting in full council. The senate in return settled him for life in his lectureship at Padua and doubled his salary.
Galileo devoted his time to improving the telescope, producing instruments of increased power. His first telescope had a 3x magnification, but he soon made instruments which magnified 8x and finally one nearly a meter long with a 37 mm objective (which he would stop down to 16 mm or 12 mm) and a 23x magnification. With this last instrument he began a series of astronomical observations in October or November 1609, discovering the satellites of Jupiter, hills and valleys on the Moon, the phases of Venus, and spots on the Sun (observed using the projection method rather than direct observation). Galileo noted that the revolution of the satellites of Jupiter, the phases of Venus, the rotation of the Sun, and the tilted path its spots followed for part of the year all pointed to the validity of the Sun-centered Copernican system over other Earth-centered systems such as the one proposed by Ptolemy.
Galileo's instrument was the first to be given the name "telescope". The name was invented by the Greek poet/theologian Giovanni Demisiani at a banquet held on April 14, 1611 by Prince Federico Cesi to make Galileo Galilei a member of the Accademia dei Lincei. The word was created from the Greek tele = 'far' and skopein = 'to look or see'; teleskopos = 'far-seeing'.
Johannes Kepler first explained the theory and some of the practical advantages of a telescope constructed of two convex lenses in his Dioptrice (1611). The first person who actually constructed a telescope of this form was the Jesuit Christoph Scheiner, who gives a description of it in his Rosa Ursina (1630).
William Gascoigne was the first to grasp a chief advantage of the form of telescope suggested by Kepler: that a small material object could be placed at the common focal plane of the objective and the eyepiece. This led to his invention of the micrometer, and to his application of telescopic sights to precision astronomical instruments. It was not until about the middle of the 17th century that Kepler's telescope came into general use: not so much because of the advantages pointed out by Gascoigne, but because its field of view was much larger than that of the Galilean telescope.
The first powerful telescopes of Keplerian construction were made by Christiaan Huygens after much labor, in which his brother assisted him. With one of these, which had an objective 2.24 inches in diameter and a 12-foot focal length, he discovered the brightest of Saturn's satellites (Titan) in 1655; in 1659 he published his "Systema Saturnium", which for the first time gave a true explanation of Saturn's ring, founded on observations made with the same instrument.
The sharpness of the image in Kepler's telescope was limited by the chromatic aberration introduced by the non-uniform refractive properties of the objective lens. The only way to overcome this limitation at high magnifying powers was to create objectives with very long focal lengths. Giovanni Cassini discovered Saturn's fifth satellite (Rhea) in 1672 with a telescope 35 feet long. Astronomers such as Johannes Hevelius constructed telescopes with focal lengths as long as 150 feet. Besides having very long tubes, these telescopes needed scaffolding or long masts and cranes to hold them up. Their value as research tools was minimal, since the telescope's frame "tube" flexed and vibrated in the slightest breeze and sometimes collapsed altogether.
See main article: Aerial telescope. In some of the very long refracting telescopes constructed after 1675, no tube was employed at all. The objective was mounted on a swiveling ball-joint on top of a pole, tree, or any available tall structure and aimed by means of string or a connecting rod. The eyepiece was handheld or mounted on a stand at the focus, and the image was found by trial and error. These were consequently termed aerial telescopes, and have been attributed to Christiaan Huygens and his brother Constantijn Huygens, Jr., although it is not clear that they invented it. Christiaan Huygens and his brother made objectives up to 8.5 inches in diameter and 210 feet in focal length, and others such as Adrien Auzout made telescopes with focal lengths up to 600 feet. Telescopes of such great length were naturally difficult to use and must have taxed to the utmost the skill and patience of the observers. Aerial telescopes were employed by several other astronomers: Cassini discovered Saturn's third and fourth satellites in 1684 with aerial telescope objectives made by Giuseppe Campani with focal lengths of 100 feet and more.
See also: Reflecting telescope.
The ability of a curved mirror to form an image may have been known since the time of Euclid and had been extensively studied by Alhazen in the 11th century. Galileo, Giovanni Francesco Sagredo, and others, spurred on by their knowledge that curved mirrors had similar properties to lenses, discussed the idea of building a telescope using a mirror as the image forming objective. Niccolò Zucchi, an Italian Jesuit astronomer and physicist, wrote in his book Optica philosophia of 1652 that he tried replacing the lens of a refracting telescope with a bronze concave mirror in 1616. Zucchi tried looking into the mirror with a hand held concave lens but did not get a satisfactory image, possibly due to the poor quality of the mirror, the angle it was tilted at, or the fact that his head partially obstructed the image.
In 1636 Marin Mersenne proposed a telescope consisting of a paraboloidal primary mirror and a paraboloidal secondary mirror bouncing the image through a hole in the primary, solving the problem of viewing the image. James Gregory went into further detail in his book Optica Promota (1663), pointing out that a reflecting telescope with a mirror shaped like part of a conic section would correct spherical aberration as well as the chromatic aberration seen in refractors. The design he came up with bears his name: the "Gregorian telescope". But by his own confession Gregory had no practical skill, and he could find no optician capable of realizing his ideas; after some fruitless attempts he was obliged to abandon all hope of bringing his telescope into practical use.
In 1666 Isaac Newton, based on his theories of refraction and color, perceived that the faults of the refracting telescope were due more to a lens's varying refraction of light of different colors than to a lens's imperfect shape. He concluded that light could not be refracted through a lens without causing chromatic aberrations, although he incorrectly concluded from some rough experiments that all refracting substances would diverge the prismatic colors in a constant proportion to their mean refraction. From these experiments Newton concluded that no improvement could be made in the refracting telescope. Newton's experiments with mirrors showed that they did not suffer from the chromatic errors of lenses: for all colors of light, the angle of reflection in a mirror is equal to the angle of incidence. As a proof of his theories, Newton set out to build a reflecting telescope. He completed his first telescope in 1668, and it is the earliest known functional reflecting telescope. After much experiment, he chose an alloy (speculum metal) of tin and copper as the most suitable material for his objective mirror. He later devised means for grinding and polishing such mirrors, but chose a spherical shape for his mirror instead of a parabola to simplify construction. He added to his reflector what is the hallmark of the design of a "Newtonian telescope", a secondary "diagonal" mirror near the primary mirror's focus to reflect the image at a 90° angle to an eyepiece mounted on the side of the telescope. This unique addition allowed the image to be viewed with minimal obstruction of the objective mirror. He also made the tube, mount, and fittings himself. Newton's first compact reflecting telescope had a mirror diameter of 1.3 inches and a focal ratio of f/5. With it he found that he could see the four Galilean moons of Jupiter and the crescent phase of the planet Venus. Encouraged by this success, he made a second telescope with a magnifying power of 38x, which he presented to the Royal Society of London in December 1672. This type of telescope is still called a Newtonian telescope.
A third form of reflecting telescope, the "Cassegrain reflector" was devised in 1672 by Laurent Cassegrain. The telescope had a small convex hyperboloidal secondary mirror placed near the prime focus to reflect light through a central hole in the main mirror.
No further practical advance appears to have been made in the design or construction of reflecting telescopes for another 50 years, until John Hadley (best known as the inventor of the octant) developed ways to make precision aspheric and parabolic speculum metal mirrors. In 1721 he showed the first parabolic Newtonian reflector to the Royal Society, with a speculum metal objective mirror about 6 inches in diameter. The instrument was examined by James Pound and James Bradley. After remarking that Newton's telescope had lain neglected for fifty years, they stated that Hadley had sufficiently shown that the invention did not consist in bare theory. They compared its performance with that of a 7.5-inch diameter aerial telescope originally presented to the Royal Society by Constantijn Huygens, Jr. and found that Hadley's reflector "will bear such a charge as to make it magnify the object as many times as the latter with its due charge", and that it represented objects as distinct, though not altogether so clear and bright.
Bradley and Samuel Molyneux, having been instructed by Hadley in his methods of polishing speculum metal, succeeded in producing large reflecting telescopes of their own, one of which had a focal length of 8 feet. These methods of fabricating mirrors were passed on by Molyneux to two London opticians, Scarlet and Hearn, who started a business manufacturing telescopes.
The British mathematician and optician James Short began experimenting with building telescopes based on Gregory's designs in the 1730s. He first tried making his mirrors out of glass, as suggested by Gregory, but he later switched to speculum metal mirrors, creating Gregorian telescopes with the original designer's parabolic and elliptic figures. Short then adopted telescope-making as his profession, which he practised first in Edinburgh and afterward in London. All of Short's telescopes were of the Gregorian form. Short died in London in 1768, having made a considerable fortune selling telescopes.
Since speculum metal mirror secondaries or diagonal mirrors greatly reduced the light that reached the eyepiece, several reflecting telescope designers tried to do away with them. In 1762 Mikhail Lomonosov presented a reflecting telescope before the Russian Academy of Sciences forum. It had its primary mirror tilted at four degrees to the telescope's axis so the image could be viewed via an eyepiece mounted at the front of the telescope tube without the observer's head blocking the incoming light. This innovation was not published until 1827, so this type came to be called the Herschelian telescope, after a similar design by William Herschel.
About the year 1774 William Herschel (then a teacher of music in Bath, England) began to occupy his leisure hours with the construction of reflector telescope mirrors, and finally devoted himself entirely to their construction and use in astronomical research. In 1778, he selected a reflector mirror (the best of some 400 telescope mirrors which he had made) and with it built a telescope of 7-foot focal length. Using this telescope, he made his early brilliant astronomical discoveries. In 1783, Herschel completed a reflector of approximately 18 inches in diameter and 20-foot focal length. He observed the heavens with this telescope for some twenty years, replacing the mirror several times. In 1789 Herschel finished building his largest reflecting telescope, with a mirror of 49 inches and a focal length of 40 feet (commonly known as his 40-foot telescope), at his new home, Observatory House in Slough, England. To cut down on the light loss from the poor reflectivity of the speculum mirrors of that day, Herschel eliminated the small diagonal mirror from his design and tilted his primary mirror so he could view the formed image directly. This design has come to be called the Herschelian telescope. He discovered Saturn's sixth known moon, Enceladus, the first night he used it (August 28, 1789), and on September 17, its seventh known moon, Mimas. This telescope remained the world's largest for over 50 years. However, this large scope was difficult to handle and thus less used than his favorite 18.7-inch reflector.
All of these larger reflectors suffered from the poor reflectivity and fast-tarnishing nature of their speculum metal mirrors. This meant they needed more than one mirror per telescope, since mirrors had to be frequently removed and re-polished. This was time-consuming, since the polishing process could change the curve of the mirror, so it usually had to be "re-figured" to the correct shape.
See also: Achromatic lens.
From the time of the invention of the first refracting telescopes it was generally supposed that chromatic errors seen in lenses simply arose from errors in the spherical figure of their surfaces. Opticians tried to construct lenses of varying forms of curvature to correct these errors. Isaac Newton discovered in 1666 that chromatic colors actually arose from the uneven refraction of light as it passed through the glass medium. This led opticians to experiment with lenses constructed of more than one type of glass in an attempt to cancel the errors produced by each type of glass. It was hoped that this would create an "achromatic lens", a lens that would focus all colors to a single point and produce instruments of much shorter focal length.
The first person who succeeded in making a practical achromatic refracting telescope was Chester Moore Hall of Essex, England. He argued that the different humours of the human eye refract rays of light to produce an image on the retina which is free from color, and he reasoned that it might be possible to produce a like result by combining lenses composed of different refracting media. After devoting some time to the inquiry he found that by combining two lenses formed of different kinds of glass, he could make an achromatic lens in which the effects of the unequal refraction of two colors of light (red and blue) were corrected. In 1733, he succeeded in constructing telescope lenses which exhibited much reduced chromatic aberration. One of his instruments had an objective with a relatively short focal length of 20 inches.
Hall was a man of independent means and seems to have been careless of fame; at least he took no trouble to communicate his invention to the world. At a trial in Westminster Hall about the patent rights granted to John Dollond (Watkin v. Dollond), Hall was admitted to be the first inventor of the achromatic telescope. However, it was ruled by Lord Mansfield that it was not the original inventor who ought to profit from such invention, but the one who brought it forth for the benefit of mankind.
In 1747, Leonhard Euler sent to the Prussian Academy of Sciences a paper in which he tried to prove the possibility of correcting both the chromatic and the spherical aberration of a lens. Like Gregory and Hall, he argued that since the various humours of the human eye were so combined as to produce a perfect image, it should be possible by suitable combinations of lenses of different refracting media to construct a perfect telescope objective. Adopting a hypothetical law of the dispersion of differently colored rays of light, he proved analytically the possibility of constructing an achromatic objective composed of lenses of glass and water.
All of Euler's efforts to produce an actual objective of this construction were fruitless—a failure which he attributed solely to the difficulty of procuring lenses that worked precisely to the requisite curves. John Dollond agreed with the accuracy of Euler's analysis, but disputed his hypothesis on the grounds that it was purely a theoretical assumption: that the theory was opposed to the results of Newton's experiments on the refraction of light, and that it was impossible to determine a physical law from analytical reasoning alone.
In 1754, Euler sent to the Berlin Academy a further paper in which starting from the hypothesis that light consists of vibrations excited in an elastic fluid by luminous bodies—and that the difference of color of light is due to the greater or lesser frequency of these vibrations in a given time— he deduced his previous results. He did not doubt the accuracy of Newton's experiments quoted by Dollond.
Dollond did not reply to this, but soon afterwards he received an abstract of a paper by the Swedish mathematician and astronomer, Samuel Klingenstierna, which led him to doubt the accuracy of the results deduced by Newton on the dispersion of refracted light. Klingenstierna showed from purely geometrical considerations (fully appreciated by Dollond) that the results of Newton's experiments could not be brought into harmony with other universally accepted facts of refraction.
As a practical man, Dollond at once put his doubts to the test of experiment: he confirmed the conclusions of Klingenstierna, discovered a difference far beyond his hopes in the refractive qualities of different kinds of glass with respect to the divergence of colors, and was thus rapidly led to the construction of lenses in which first the chromatic aberration—and afterwards—the spherical aberration were corrected.
Dollond was aware of the conditions necessary for the attainment of achromatism in refracting telescopes, but relied on the accuracy of experiments made by Newton. His writings show that, but for his reliance on those experiments, he would have arrived sooner at a discovery for which his mind was fully prepared. Dollond's paper recounts the successive steps by which he arrived at his discovery independently of Hall's earlier invention, and the logical processes by which these steps were suggested to his mind.
In 1765 Peter Dollond (son of John Dollond) introduced the triple objective, which consisted of a combination of two convex lenses of crown glass with a concave flint lens between them. He made many telescopes of this kind.
The difficulty of procuring disks of glass (especially of flint glass) of suitable purity and homogeneity limited the diameter and light gathering power of the lenses found in the achromatic telescope. It was in vain that the French Academy of Sciences offered prizes for large perfect disks of optical flint glass.
The difficulties with the impractical metal mirrors of reflecting telescopes led to the construction of large refracting telescopes. By 1866 refracting telescopes had reached 18 inches in aperture, with many larger "Great refractors" being built in the mid to late 19th century. In 1897, the refractor reached its maximum practical limit in a research telescope with the construction of the Yerkes Observatory's 40-inch refractor (although a larger refractor, the Great Paris Exhibition Telescope of 1900 with an objective of 49.2 inches diameter, was temporarily exhibited at the Paris 1900 Exposition). No larger refractors could be built because of gravity's effect on the lens. Since a lens can only be held in place by its edge, the center of a large lens sags due to gravity, distorting the image it produces.
In 1856–57, Karl August von Steinheil and Léon Foucault introduced a process of depositing a layer of silver on glass telescope mirrors. The silver layer was not only much more reflective and longer lasting than the finish on speculum mirrors, it had the advantage of being able to be removed and re-deposited without changing the shape of the glass substrate. Towards the end of the 19th century very large silver on glass mirror reflecting telescopes were built.
The beginning of the 20th century saw construction of the first of the "modern" large research reflectors, designed for precision photographic imaging and located at remote high-altitude clear-sky locations, such as the 60-inch Hale telescope of 1908 and the 100-inch Hooker telescope of 1917, both located at Mount Wilson Observatory. These and other telescopes of this size had to have provisions to allow for the removal of their main mirrors for re-silvering every few months. John Donavan Strong, a young physicist at the California Institute of Technology, developed a technique for coating a mirror with a much longer-lasting aluminum coating using thermal vacuum evaporation. In 1932, he became the first person to "aluminize" a mirror; three years later the 60-inch and 100-inch telescopes became the first large astronomical telescopes to have their mirrors aluminized. 1948 saw the completion of the 200-inch Hale reflector at Mount Palomar, which was the largest telescope in the world until the completion of the massive 605 cm (238 inch) BTA-6 in Russia twenty-seven years later. The Hale reflector introduced several technical innovations used in future telescopes, including hydrostatic bearings for very low friction, the Serrurier truss for equal deflections of the two mirrors as the tube sags under gravity, and the use of Pyrex low-expansion glass for the mirrors. The arrival of substantially larger telescopes had to await the introduction of methods other than the rigidity of glass to maintain the proper shape of the mirror.
The 1980s saw the introduction of two new technologies for building larger telescopes and improving image quality, known as active optics and adaptive optics. In active optics, an image analyser senses the aberrations of a star image a few times per minute, and a computer adjusts many support forces on the primary mirror and the location of the secondary mirror to maintain the optics in optimal shape and alignment. This is too slow to correct for atmospheric blurring effects, but enables the use of thin single mirrors up to 8 m diameter, or even larger segmented mirrors. This method was pioneered by the ESO New Technology Telescope in the late 1980s.
The 1990s saw a new generation of giant telescopes appear using active optics, beginning with the construction of the first of the two 10 m Keck telescopes in 1993. Other giant telescopes built since then include the two Gemini telescopes, the four separate telescopes of the Very Large Telescope, and the Large Binocular Telescope.
Adaptive optics uses a similar principle, but applies corrections several hundred times per second to compensate for the effects of rapidly changing optical distortion due to the motion of turbulence in the Earth's atmosphere. Adaptive optics works by measuring the distortions in a wavefront and then compensating for them with rapid changes of actuators applied to a small deformable mirror or with a liquid crystal array filter. AO was first envisioned by Horace W. Babcock in 1953, but did not come into common usage in astronomical telescopes until advances in computer and detector technology during the 1990s made it possible to calculate the compensation needed in real time. In adaptive optics, the high-speed corrections needed mean that a fairly bright star is needed very close to the target of interest (or an artificial star is created by a laser). Also, with a single star or laser the corrections are only effective over a very narrow field (tens of arcseconds), and current systems operating on several 8–10 m telescopes work mainly in near-infrared wavelengths for single-object observations.
Developments of adaptive optics include systems with multiple lasers over a wider corrected field, and systems working at kilohertz rates for good correction at visible wavelengths; these were in progress but not yet in routine operation as of 2015.
The twentieth century saw the construction of telescopes which could produce images using wavelengths other than visible light starting in 1931 when Karl Jansky discovered astronomical objects gave off radio emissions; this prompted a new era of observational astronomy after World War II, with telescopes being developed for other parts of the electromagnetic spectrum from radio to gamma-rays.
Radio astronomy began in 1931 when Karl Jansky discovered that the Milky Way was a source of radio emission while doing research on terrestrial static with a directional antenna. Building on Jansky's work, Grote Reber built a more sophisticated purpose-built radio telescope in 1937, with a 31.4-foot dish; using this, he discovered various unexplained radio sources in the sky. Interest in radio astronomy grew after the Second World War when much larger dishes were built, including the 250-foot Jodrell Bank telescope (1957), the 300-foot Green Bank Telescope (1962), and the 100 m Effelsberg telescope (1971). The huge 1000-foot Arecibo telescope (1963) was so large that it was fixed into a natural depression in the ground; the central antenna could be steered to allow the telescope to study objects up to twenty degrees from the zenith. However, not every radio telescope is of the dish type. For example, the Mills Cross Telescope (1954) was an early example of an array which used two perpendicular lines of antennae 1500 feet in length to survey the sky.
The highest-frequency radio waves are known as microwaves, and this band has been an important area of astronomy ever since the discovery of the cosmic microwave background radiation in 1964. Many ground-based radio telescopes can study microwaves. Short-wavelength microwaves are best studied from space because water vapor (even at high altitudes) strongly weakens the signal. The Cosmic Background Explorer (1989) revolutionized the study of the microwave background radiation.
Because radio telescopes have low resolution, they were the first instruments to use interferometry, which allows two or more widely separated instruments to simultaneously observe the same source. Very long baseline interferometry extended the technique over thousands of kilometers and allowed resolutions down to a few milli-arcseconds.
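As a rough illustration (a standard diffraction estimate with example numbers that are not from the text above), the finest angle an interferometer can resolve is roughly the observing wavelength divided by the baseline:

$$\theta \approx \frac{\lambda}{B}, \qquad \text{e.g. } \lambda = 1\ \text{cm},\ B = 2000\ \text{km} \;\Rightarrow\; \theta \approx 5\times10^{-9}\ \text{rad} \approx 1\ \text{milliarcsecond}.$$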
A telescope like the Large Millimeter Telescope (active since 2006) observes at wavelengths from about 0.85 mm upward, bridging the gap between far-infrared/submillimeter telescopes and longer-wavelength radio telescopes, including the microwave band from about 1 mm to 1000 mm in wavelength.
See also: Infrared telescope, Infrared astronomy and Far-infrared astronomy. Although most infrared radiation is absorbed by the atmosphere, infrared astronomy at certain wavelengths can be conducted on high mountains where there is little absorption by atmospheric water vapor. Ever since suitable detectors became available, most optical telescopes at high altitudes have been able to image at infrared wavelengths. Some telescopes, such as the 3.8 m UKIRT and the 3 m IRTF, both on Mauna Kea, are dedicated infrared telescopes. The launch of the IRAS satellite in 1983 revolutionized infrared astronomy from space. This reflecting telescope, which had a 60 cm mirror, operated for nine months until its supply of coolant (liquid helium) ran out. It surveyed the entire sky, detecting 245,000 infrared sources—more than 100 times the number previously known.
See also: Ultraviolet astronomy. Although optical telescopes can image the near ultraviolet, the ozone layer in the stratosphere absorbs ultraviolet radiation shorter than 300 nm, so most ultraviolet astronomy is conducted with satellites. Ultraviolet telescopes resemble optical telescopes, but conventional aluminium-coated mirrors cannot be used and alternative coatings such as magnesium fluoride or lithium fluoride are used instead. The Orbiting Solar Observatory satellite carried out observations in the ultraviolet as early as 1962. The International Ultraviolet Explorer (1978) systematically surveyed the sky for eighteen years, using a 45 cm (18 in) aperture telescope with two spectroscopes. Extreme-ultraviolet astronomy (10–100 nm) is a discipline in its own right and involves many of the techniques of X-ray astronomy; the Extreme Ultraviolet Explorer (1992) was a satellite operating at these wavelengths.
See also: X-ray telescope and X-ray astronomy. X-rays from space do not reach the Earth's surface, so X-ray astronomy has to be conducted above the Earth's atmosphere. The first X-ray experiments were conducted on sub-orbital rocket flights, which enabled the first detection of X-rays from the Sun (1948) and of the first galactic X-ray sources: Scorpius X-1 (June 1962) and the Crab Nebula (October 1962). Since then, X-ray telescopes (Wolter telescopes) have been built using nested grazing-incidence mirrors which deflect X-rays to a detector. Some of the OAO satellites conducted X-ray astronomy in the late 1960s, but the first dedicated X-ray satellite was Uhuru (1970), which discovered 300 sources. More recent X-ray satellites include EXOSAT (1983), ROSAT (1990), Chandra (1999), and XMM-Newton (1999).
See also: Gamma-ray astronomy. Gamma rays are absorbed high in the Earth's atmosphere so most gamma-ray astronomy is conducted with satellites. Gamma-ray telescopes use scintillation counters, spark chambers and more recently, solid-state detectors. The angular resolution of these devices is typically very poor. There were balloon-borne experiments in the early 1960s, but gamma-ray astronomy really began with the launch of the OSO 3 satellite in 1967; the first dedicated gamma-ray satellites were SAS B (1972) and Cos B (1975). The Compton Gamma Ray Observatory (1991) was a big improvement on previous surveys. Very high-energy gamma-rays (above 200 GeV) can be detected from the ground via the Cerenkov radiation produced by the passage of the gamma-rays in the Earth's atmosphere. Several Cerenkov imaging telescopes have been built around the world including: the HEGRA (1987), STACEE (2001), HESS (2003), and MAGIC (2004).
See also: Astronomical interferometry.
In 1868, Fizeau noted that the purpose of the arrangement of mirrors or glass lenses in a conventional telescope was simply to provide an approximation to a Fourier transform of the optical wave field entering the telescope. As this mathematical transformation was well understood and could be performed mathematically on paper, he noted that by using an array of small instruments it would be possible to measure the diameter of a star with the same precision as a single telescope as large as the whole array, a technique which later became known as astronomical interferometry. It was not until 1891 that Albert A. Michelson successfully used this technique for the measurement of astronomical angular diameters: the diameters of Jupiter's satellites (Michelson 1891). Thirty years later, a direct interferometric measurement of a stellar diameter was finally realized by Michelson and Francis G. Pease (1921), using their 20 ft (6.1 m) interferometer mounted on the 100-inch Hooker Telescope on Mount Wilson.
The next major development came in 1946 when Ryle and Vonberg (Ryle and Vonberg 1946) located a number of new cosmic radio sources by constructing a radio analogue of the Michelson interferometer. The signals from two radio antennas were added electronically to produce interference. Ryle and Vonberg's telescope used the rotation of the Earth to scan the sky in one dimension. With the development of larger arrays and of computers which could rapidly perform the necessary Fourier transforms, the first aperture synthesis imaging instruments were soon developed which could obtain high resolution images without the need of a giant parabolic reflector to perform the Fourier transform. This technique is now used in most radio astronomy observations. Radio astronomers soon developed the mathematical methods to perform aperture synthesis Fourier imaging using much larger arrays of telescopes —often spread across more than one continent. In the 1980s, the aperture synthesis technique was extended to visible light as well as infrared astronomy, providing the first very high resolution optical and infrared images of nearby stars.
In 1995 this imaging technique was demonstrated on an array of separate optical telescopes for the first time, allowing a further improvement in resolution and even higher-resolution imaging of stellar surfaces. The same techniques have since been applied at a number of other astronomical telescope arrays, including the Navy Prototype Optical Interferometer, the CHARA array, and the IOTA array. In 2008, Max Tegmark and Matias Zaldarriaga proposed a "Fast Fourier Transform Telescope" design in which the lenses and mirrors could be dispensed with altogether once computers become fast enough to perform all the necessary transforms.
If chemists built cars, they’d fill a factory with car parts, set it on fire, and sift from the ashes pieces that now looked vaguely car-like.
When you’re dealing with car parts the size of atoms, this is a perfectly reasonable process. Yet chemists yearn for ways to reduce the waste and make reactions far more precise.
Chemical engineering has taken a step forward, with researchers from the University of Santiago de Compostela in Spain, the University of Regensburg in Germany, and IBM Research Europe forcing a single molecule to undergo a series of transformations with a tiny nudge of voltage.
Ordinarily, chemists gain precision over reactions by tweaking parameters such as the pH, adding or removing available proton donors to manage the way molecules might share or swap electrons to form their bonds.
“By these means, however, the reaction conditions are altered to such a degree that the basic mechanisms governing selectivity often remain elusive,” the researchers note in their report, published in the journal Science.
In other words, the complexity of forces at work pushing and pulling across a large organic molecule can make it hard to get a precise measure on what’s occurring at each and every bond.
The team started with a substance called 5,6,11,12-tetrachlorotetracene (with the formula C18H8Cl4) – a carbon-based molecule that looks like a row of four honeycomb cells flanked by four chlorine atoms hovering around like hungry bees.
Sticking a thin layer of the material to a cold, salt-crusted piece of copper, the researchers drove the chlorine-bees away, leaving a handful of excitable carbon atoms holding onto unpaired electrons in a range of related structures.
Two of those electrons in some of the structures happily reconnected with each other, reconfiguring the molecule’s general honeycomb shape. The second pair were also keen to pair up not just with each other, but with any other available electron that might buzz their way.
Ordinarily, this wobbly structure would be short-lived as the remaining electrons married up with each other as well. But the researchers found this particular system wasn’t an ordinary one.
With a gentle push of voltage from an atom-sized cattle prod, they showed they could force a single molecule to connect that second pair of electrons in such a fashion that the four cells were pulled out of alignment in what’s known as a bent alkyne.
Shaken a little less vigorously, those electrons paired up differently, distorting the structure in a completely different fashion into what’s known as a cyclobutadiene ring.
Each product was then reformed back into the original state with a pulse of electrons, ready to flip again at a moment’s prompting.
By forcing a single molecule to contort into different shapes, or isomers, using precise voltages and currents, the researchers could gain insight into the behaviors of its electrons and the stability and preferred configurations of organic compounds.
From there it could be possible to whittle down the search for catalysts that could push a large-scale reaction of countless molecules in one direction, making the reaction more specific.
Previous studies have used similar methods to visualize the reconfigurations of individual molecules, and even to manipulate individual steps of a chemical reaction. This work adds new methods for tweaking the very bonds of molecules to form isomers that ordinarily wouldn’t be so simple to swap around.
Not only does research like this help make chemistry more precise, it provides engineers with sharp new tools to manufacture machines on a nanoscale, warping carbon-frameworks into exotic shapes that wouldn’t be possible with ordinary chemistry.
This research was published in Science. |
Most four- and five-year-olds can sing the alphabet song and print their names, but few can actually read. So, what does it take to push these kids to accomplish this cognitive milestone? A majority of parents and teachers alike think the answer to this question is lots of practice naming letters and sounds out loud. But, reading practice isn’t the whole story or perhaps even the most important part. Practice printing letters turns out to be imperative to reading success. When the body figures out how to write letters, the mind follows suit in terms of being able to recognize them.
Case in point, a few years ago, neuroscientist Karen James found that preschool children who took part in a one-month long reading program where they practiced printing words improved more in their letter recognition than kids who did the same reading program but practiced naming (rather than writing) the words instead. Letter recognition isn’t enhanced as much by reading letters as it is by printing them.
Learn more about James’s research at Psychology Today: https://www.psychologytoday.com/us/blog/choke/201304/montessori-had-it-right-we-learn-doing |
Common Core Math Vocabulary
Are you as smart as a fourth grader? Test your basic Common Core Math vocabulary and find out!
Addition for Kindergarten - Let's Match!
Find the answer!
Fun with Addition and Subtraction
What operation will you use for the number problems?
Commas, Elements, Math, and more!
Play with your friends. Have fun learning how to use a comma and some other things.
Number Spellings - Large Numbers
Choose the correct written number alternative
Antarctica: Telling Time
Let's Match the Number to the Correct Spelling.
Let's test your memory. Can you match the word with the number?
Virginia and Math Jeopardy
Answer questions about Va 6c and Math Patterns
Summer Review: Math
Summer school review
Types of Data Knowledge Check
Students will assess their knowledge on the types of data. |