Goal 13: Climate Action
Mercy Ndabene & Nicole Reyes
How can we identify the barriers that prevent people from acting on climate change?
Climate change is the significant change in regional to global climate patterns. Rising atmospheric carbon dioxide is a major driver of climate change, and human activities contribute to rising carbon dioxide levels. There are eight identifiable barriers that prevent people from living a sustainable life: unawareness, denial, information overload, personal stress, financial stress, beliefs, a sense of powerlessness and lifestyle choices.
In order to overcome these barriers, people need to notice the problem of climate change and see it as an emergency. By knowing what to do, deciding how to act and having a sense of responsibility, individuals and businesses will start to feel empowered to make more sustainable choices.
Sustainability has always been a platform of mine. Addressing this problem has given me an opportunity to educate and entice the masses to take action. If I can inspire even one person to take steps towards living a more sustainable life, that matters.
I chose this topic because I feel there’s an undeniable need to raise awareness about tackling climate change. I think it’s important that we recognize the barriers that prevent us from acting on climate change so that we can overcome them and live a more sustainable life. I believe self-awareness is key and recognizing barriers can be the first step. |
Coastlines of the southern Baltic Sea
An astronaut on the International Space Station (ISS) took this panorama looking aft of the spacecraft (backwards along the orbital path) as the Sun was setting over the North Sea. Seen from the ISS, the Sun's reflection point moves quickly across the landscape, momentarily lighting up water bodies.
In this fleeting view from 15 June 2014, the coast of southern Norway is outlined near the horizon. The brightest reflection highlights the narrow sea passage known as the Skagerrak, revealing the thin tip of Denmark. Numerous small lakes in southern Sweden appear at image centre, and scattered clouds cast complex shadows on the southern Baltic Sea. The sweeping curves of the sand spit on the Polish coast and the long barrier islands on the Russian coast appear in the foreground, at the edge of the Sun's reflection disc.
Credit: NASA Earth Observatory |
It started with an 1,800-year-old shirt. Archaeologist Lars Holger Pilø had watched his colleagues discover the ancient wool tunic that had emerged from a melting ice patch on Lomseggen, a mountain in southern Norway. Now Pilø wondered what else was out there. As the rest of the team packed up the precious find, he and another archaeologist wandered away from the group, tracing the edge of the melting ice shrouded in mountain fog.
As he peered into the gloom, Pilø soon realized he was looking at a field of objects that hadn’t seen the light of day for hundreds of years. Broken sleds, tools, and other traces of daily life going back nearly 2,000 years lay strewn across the surface of the Lendbreen ice patch, which was melting rapidly due to global warming.
“It dawned on us that we had found something really special,” says Pilø, who leads the Glacier Archaeology Program in Oppland, Norway. “We sort of hit the motherlode.”
Dating from around 300 to 1500 A.D., the artifacts tell the story of a mountain pass that served as a vital travel corridor for settlers and farmers moving between permanent winter settlements along the Otta River in southern Norway and higher-elevation summer farms farther south. And as they traveled across the rough terrain, these bygone travelers left behind everything from horseshoes to kitchen tools to items of clothing. As snow collected over the centuries, those forgotten objects were preserved in what eventually became the Lendbreen ice patch.
Ice patches are located at high elevations, but they aren’t the same as their larger cousins, glaciers. Objects frozen in glaciers are eventually pulverized inside the moving mass of ice. But ice patches, which do not move, preserve artifacts in place—and in excellent condition—until the ice melts.
Pilø, the first author on the Antiquity paper, and his colleagues have radiocarbon dated 60 of the 1,000 Lendbreen artifacts so far, revealing that human activity on the pass began around 300 A.D., during a time when good climate conditions led to a population boom in the area. Travel during the Viking Age peaked around the year 1000, and, owing to economic and climatic changes, had begun to decline even before the Black Death swept through Norway in the 1340s.
Mystery artifacts with modern explanations
The objects found at Lendbreen include everyday items such as sleds, a rare complete third-century wool tunic, a mitten and shoes, and a whisk. One of Pilø’s favorite finds was a mystery until it went on display at the local museum and an elderly woman offered an explanation: The small, turned piece of wood was likely used as a bit to prevent a goat kid or lamb from suckling its mother so people could use the milk for themselves, she explained.
The woman, who had lived on a summer farm during the 1930s, said that her family used bits fashioned from hard juniper wood that looked almost identical to the 11th-century artifact. The thousand-year-old bit turned out to be made from juniper, too.
It also appears that the Lendbreen pass wasn’t just a local pathway for farmers moving back and forth between seasonal pastures. Pilø’s team discovered multiple cairns—stacked rock formations designed to help people who were unfamiliar with the terrain find and navigate the pass on longer journeys throughout Scandinavia. The presence of cairns, along with the discovery of horseshoes (and even a horse snowshoe), is “very convincing” evidence the ice patch was used as a busy travel artery for almost 1,000 years, says Pilø, making the Norwegian site the first such pass discovered in Northern Europe.
Albert Hafner, a glacier archaeologist at the University of Bern who was not involved with the current research, agrees. “I think the arguments are quite clear,” he says. In 2003, Hafner discovered hundreds of artifacts dating as far back as 4800 B.C. at Schnidejoch, an ice patch in the Swiss Alps that was also used as a mountain passage. “It’s quite interesting to have a similar site in Scandinavia,” says Hafner.
The current paper focuses on objects uncovered by 2015, which leaves hundreds more artifacts to date and describe—as well as unanswered questions about why the pass was abandoned by travelers. “The decline starts before the [Black Death] pandemic, but we don’t have a good explanation for that,” says Pilø. But the peak years of the pass line up with a time of increased trade and urbanization in the area—prosperous days that explain the need for a quick way to get through the mountains.
Work at Lendbreen ended in 2019, and Pilø is now in search of other objects being revealed by massive melting throughout Norway's fragile ice patches. Artifacts "are basically being stored in a giant prehistoric deep freezer," Pilø says. "They have not aged. I sometimes jokingly say that the ice is a time machine, but it's not only a joke. It transports the artifacts to our times."
But for that to happen, the ice must melt. And with Norway’s cryosphere already slipping away in the face of climate change and a series of punishingly hot summers, every seemingly miraculous find is bittersweet.
“We try to focus on the work when we operate, but it just keeps coming up,” says Pilø. “It’s not a job you can do without a great sense of foreboding.” |
In an effort to harness the power of ocean waves, engineers designed and built a floating "power buoy" that measures 8 feet across, with a metal plate 10 feet wide and 18 feet long suspended beneath it. The buoy uses the upward and downward motion of waves, working against the relatively stationary metal plate, to move a hydraulic piston, resulting in electricity.
MBARI engineer Andy Hamilton looks out his office window in Moss Landing and points at the waves crashing on the beach below. “Pretty impressive, aren’t they? You’d think there’d be a way to make use of all that energy.” Since 2009, Hamilton has led a team of engineers trying to do just that. Their goal is not to replace the hulking power plant that overlooks Moss Landing Harbor, but to provide a more generous supply of electricity for oceanographic instruments in Monterey Bay.
Hamilton's "power buoy" project is funded by the Defense Advanced Research Projects Agency (DARPA), which sponsors research into revolutionary new technologies that might one day be used by the U.S. military. The project started with a three-month grant to assess the availability of wave power around the world and to evaluate DARPA's previous attempts at generating electrical power from the waves.
Hamilton’s initial research and calculations showed that DARPA’s previous efforts had been too timid—their small prototype buoys were never able to take advantage of the full energy of the waves. So Hamilton proposed to “go big” (but not as big as commercial wave-power projects).
He spent another nine months using computer models to test different buoy designs under a variety of simulated wave conditions. In the end, he came up with a buoy that was 2.5 meters (8 feet) across. Hanging in the water below this buoy is a massive metal plate 3 meters (10 feet) wide and 5.5 meters (18 feet) long.
Because most wave motion occurs at the sea surface, the buoy rises and falls with the waves, but the plate, 30 meters (100 feet) down, remains relatively stationary. Between the surface buoy and the metal plate is a large hydraulic cylinder with a piston inside. As the buoy rises and falls, it pushes and pulls on this piston. This forces hydraulic fluid through a hydraulic motor, which in turn runs an electrical generator.
Engineering in the real world
This sounds simple in concept, but as is often the case, things become much trickier when you try to build a real device that will work in the real ocean. Fortunately, Hamilton recruited a team of resourceful engineers to work on the project. Mechanical engineer François Cazenave has worked full time on this project for the past 18 months. Other team members include mechanical engineer Jon Erickson, electrical engineer Paul McGill, and software engineer Wayne Radochonski.
One of the first challenges the team faced was figuring out the best way to convert the vertical motion of the waves into rotary motion that could power a generator. First they bought a custom-made generator from an outside company, but that turned out to be too inefficient to be useful. Going back to the drawing board, the team designed their own system using an off-the-shelf hydraulic motor similar to those used on earth-moving equipment and on MBARI’s underwater robots. These hydraulic motors use moving hydraulic fluid to drive a rotating shaft with up to 95 percent efficiency.
Another challenge that Hamilton’s team faced was designing a mechanism that would return the piston to its starting point after a wave had passed. Hamilton initially envisioned using a large metal spring for this purpose, but the metal spring turned out to be much too heavy. So the team redesigned the system to incorporate a pneumatic spring—a chamber filled with nitrogen gas, and mounted at one end of the piston. As the piston moves with the waves, it compresses or decompresses the nitrogen gas in the chamber. After the wave passes, the gas in the chamber returns to its original pressure, forcing the piston back to the middle of its stroke.
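To get a feel for how such a gas spring pushes back, here is a minimal sketch based on Boyle's law for slow (roughly isothermal) compression. The precharge pressure, piston area, and chamber volume are made-up placeholder values, not MBARI's actual figures.

```python
def gas_spring_force(x_m, precharge_pa=1.0e6, piston_area_m2=0.01, chamber_vol_m3=0.05):
    """Net restoring force (N) from a sealed nitrogen chamber when the piston has
    moved x_m metres into it (negative x_m means the gas has expanded instead).
    Assumes slow, isothermal compression, so Boyle's law applies: P * V = constant."""
    gas_volume = chamber_vol_m3 - piston_area_m2 * x_m      # volume left for the gas
    pressure = precharge_pa * chamber_vol_m3 / gas_volume   # Boyle's law
    # Positive when compressed (pushes the piston back out), negative when expanded
    # (pulls it back in) -- either way the piston is driven toward mid-stroke (x_m = 0).
    return (pressure - precharge_pa) * piston_area_m2

print(gas_spring_force(0.1))   # about 204 N of push-back with these placeholder values
```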
In yet another challenge, during the first field trials of the new piston, the cable attached to the metal plate swung from side to side, damaging the seals of the hydraulic cylinder and causing it to leak. The team added a long metal tube as a guide to make sure that the cable pulled in line with the axis of the piston. In a later deployment, metal shards in the hydraulic fluid caused additional damage. Eventually the engineers rebuilt the system, adding a fluid reservoir/pressure compensator and an in-line filter to keep the hydraulic fluid clean.
Over the past year, the power buoy has been deployed in Monterey Bay about half a dozen times. With each deployment, the team added new features and refinements. By late 2011, the buoy was generating up to 400 watts of power, more than twice what MBARI’s existing moorings can produce using wind generators and solar panels. Hamilton says, “Remarkably, the system is behaving very similar to our models. We were aiming to produce about 500 watts of power, so we’ve almost hit our target.”
The next challenge: making useable electricity
Although we tend to think of open-ocean waves as long, gentle rollers, anyone who has been at sea knows that wave motion is often erratic and unpredictable. So is the voltage produced by the generator on the power buoy. As each wave passes, the generator first speeds up, then slows down again, generating electricity at anywhere from 0 to 500 volts (AC).
In late spring of 2012, the team will test a new version of the system that incorporates power-conditioning hardware and software to change this fluctuating voltage into a steady 24 volts (DC) useable for scientific instruments. Initially, this current will be used to charge batteries on the buoy. Any excess current will be run through a series of resistors that dissipate the energy as heat.
During the upcoming deployment, the team will also test new hardware and software that could dramatically increase the efficiency of the power buoy. Surprisingly, this involves adding resistance to the system—essentially making the piston harder to push at certain times in the wave cycle.
The new software continuously and automatically changes the resistance of the system to adjust the load on the generator and optimize the speed at which the generator turns. If the generator is spinning too fast, the software increases the resistance of the system to generate more power. If the generator is moving too slowly, the software reduces the resistance of the system, allowing the generator to speed up.
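That logic maps naturally onto a simple proportional feedback loop. The sketch below is only an illustration of the idea: the `generator` object, its method names, and the gain and set-point values are invented for the example, since the article does not describe MBARI's software at this level of detail.

```python
import time

TARGET_RPM = 1500.0   # hypothetical optimum generator speed
GAIN = 0.02           # hypothetical proportional gain (load units per rpm of error)
STEP_S = 0.1          # control-loop period in seconds

def run_control_loop(generator):
    """Continuously adjust the electrical load (the 'resistance' the piston feels)
    so the generator stays near the speed at which it produces power most efficiently."""
    while True:
        error = generator.read_rpm() - TARGET_RPM        # positive means spinning too fast
        # More load slows the generator and extracts more power; less load lets it speed up.
        new_load = generator.get_load() + GAIN * error
        generator.set_load(max(0.0, new_load))
        time.sleep(STEP_S)
```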
The dream: an underwater charging station
As if the challenge of extracting energy from the waves wasn’t enough, the power buoy project includes another ambitious element—an underwater charging station for undersea robots. Autonomous underwater vehicles (AUVs) are undersea robots that are programmed at the surface and then travel beneath the waves, collecting information as they go. However, many AUVs can only run for a day or two before their batteries need to be recharged.
Working in tandem with Hamilton’s team, a group of MBARI engineers led by Jim Bellingham and Brett Hobson have been working on an underwater “docking station” that will eventually hang down beneath the power buoy. When an AUV senses that its batteries need charging, it will automatically home in on the docking station, plug itself in, download its data, recharge its batteries, then head out for its next mission. Hamilton says, “This is the ultimate dream for the power buoy. We’re working on the pieces of it now.”
So far, the AUV-docking team has built a non-powered docking system and has been developing software for MBARI's long-range AUV so that it can automatically find and park itself inside this "mock dock." The team is also working on the electronics for the actual charging dock. In field trials, the AUV has entered the "mock dock," but cannot yet do this consistently.
The future of wave power
Hamilton's development grant for the power buoy runs out in Fall 2012. By that time, he hopes to demonstrate the system's usefulness and attract additional funding for the project. According to Hamilton, "A lot depends on finding a 'science driver'—a specific science project that would benefit from this extra power we're producing." He adds, "We're not trying to compete with wind and solar generators. We're complementing them. As with most renewable energy sources, it's better to have some redundancy."
Although big storm waves carry a lot of power, it turns out that the bulk of the wave energy in Monterey Bay is provided by moderate sized waves that occur relatively frequently. Cazenave explains, “The power buoy must be able to survive storm waves, but it is most efficient in the most common types of waves—say two meters high. There is more energy in bigger waves, which can reach four to five meters around here, but they are not so common.”
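A standard deep-water approximation (a textbook formula, not something quoted in the article) makes Cazenave's point quantitative: the power carried per metre of wave crest grows with the square of the wave height,

$$ P \;\approx\; \frac{\rho g^2}{64\pi}\, H^2 T \;\approx\; 0.49\, H^2 T \ \text{kW/m}, $$

so, assuming a typical period of T = 8 s, a 2 m wave delivers roughly 0.49 × 4 × 8 ≈ 16 kW per metre of crest, while a 4 m storm wave delivers about four times that. The big waves carry far more energy, but because they are rare, the frequent moderate waves dominate the long-term total.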
The vastness and changeability of the ocean will always hamper our efforts to understand this global environment. One of the key challenges for oceanographers over the next decade will be how to power complex instruments that remain in the open ocean for long periods of time. MBARI’s wave-power buoy is part of the solution to this problem.
Images: Kim Fulton-Bennett 2012 MBARI; François Cazenave 2011 MBARI |
Scientists are constantly trying to determine how water could have once flowed on the Red Planet. A new theory suggests that a greenhouse gas effect could have created favorable conditions for water on Mars nearly 4 billion years ago. Scientists are suggesting that the combination of molecular hydrogen, carbon dioxide, and water could have created a greenhouse effect that would have produced temperatures warm enough to allow for the existence of liquid water. They say that this water would have formed the Nanedi Valles, which is basically the Martian Grand Canyon. There are several cold model theories that attempt to explain the formation of Martian valleys but the researchers in this study say they hope that their new warm model will get others to reconsider their positions.
[ Read the Article: Liquid Water On Mars May Have Resulted From Hydrogen-Caused Greenhouse Effect ] |
Hip replacements can relieve the terrible pain caused by arthritis. The surgery involves removing diseased bone and cartilage from the hip, and then placing a prosthetic socket and ball that recreates movement in the hip. Some prostheses have a surface coating so that the remaining bone can grow into the implant. Rebuilding the hip wasn't always so easy, though. Early attempts to fashion a prosthesis involved pig bladders, gold and glass, but these materials proved too weak or incompatible with the body.
British surgeon John Charnley is credited with developing the techniques and materials used in the first effective total hip replacement surgery in the late 1950s and early 1960s. His techniques were considered crazy by his peers, but Charnley persevered in his research with polyethylene prostheses. His methods would revolutionize the field of hip replacement, and they also had an impact on this list's next item. |
|Latin||Musculi externi bulbi oculi|
|Origin||Annulus of Zinn, maxillary and sphenoid bones|
|Insertion||Tarsal plate of upper eyelid, eye|
|Artery||Ophthalmic artery, lacrimal artery, infraorbital artery, anterior ciliary arteries|
|Vein||Superior and inferior orbital veins|
|Nerve||Oculomotor, trochlear and abducens nerves|
|Anatomical terms of muscle|
The extraocular muscles are the six muscles that control movement of the eye (there are four in bovines) and one muscle that controls eyelid elevation (levator palpebrae). The actions of the six muscles responsible for eye movement depend on the position of the eye at the time of muscle contraction.
Four of the extraocular muscles control the movement of the eye in the four cardinal directions: up, down, left and right. The remaining two muscles control the adjustments involved in counteracting head movement; for instance this can be observed by looking into one's own eyes in a mirror while moving one's head.
Since only a small part of the eye called the fovea provides sharp vision, the eye must move to follow a target. Eye movements must be precise and fast. This is seen in scenarios like reading, where the reader must shift gaze constantly. Although under voluntary control, most eye movement is accomplished without conscious effort. Precisely how the integration between voluntary and involuntary control of the eye occurs is a subject of continuing research. However, it is known that the vestibulo-ocular reflex plays an important role in the involuntary movement of the eye.
The extraocular muscles are supplied mainly by branches of the ophthalmic artery. This is done either directly or indirectly, as in the lateral rectus muscle, via the lacrimal artery, a main branch of the ophthalmic artery. Additional branches of the ophthalmic artery include the ciliary arteries, which branch into the anterior ciliary arteries. Each rectus muscle receives blood from two anterior ciliary arteries, except for the lateral rectus muscle, which receives blood from only one. The exact number and arrangement of these ciliary arteries may vary. Branches of the infraorbital artery supply the inferior rectus and inferior oblique muscles.
Below is a table of the extraocular muscles and the cranial nerve that supplies each of them; their origins, insertions, and actions are described in the sections that follow.
Mnemonic for simplified actions:
1. Obliques Abduct whereas Recti Adduct (except LR)
2. Superiors Intort whereas Inferiors Extort
3. Recti act according to their names whereas Obliques act opposite to their names.
|Muscle||Nerve|
|Superior rectus muscle||Oculomotor nerve (CN III)|
|Inferior rectus muscle||Oculomotor nerve (CN III)|
|Medial rectus muscle||Oculomotor nerve (CN III)|
|Inferior oblique muscle||Oculomotor nerve (CN III)|
|Levator palpebrae superioris muscle||Oculomotor nerve (CN III)|
|Superior oblique muscle||Trochlear nerve (CN IV)|
|Lateral rectus muscle||Abducens nerve (CN VI)|
|Retractor bulbi muscle||Abducens nerve (CN VI)|
The nuclei or bodies of these nerves are found in the brain stem. The nuclei of the abducens and oculomotor nerves are connected. This is important in coordinating the motion of the lateral rectus in one eye and the medial rectus in the other. In two antagonistic muscles of one eye, such as the lateral and medial recti, contraction of one leads to inhibition of the other. Muscles show small degrees of activity even when resting, keeping the muscles taut. This "tonic" activity is brought on by discharges of the motor nerve to the muscle.
Origins and insertions
Five of the extraocular muscles have their origin in the back of the orbit in a fibrous ring called the annulus of Zinn: the four rectus muscles and the superior oblique muscle. The four rectus muscles attach directly to the front half of the eye (anterior to the eye's equator), and are named after their straight paths. Note that medial and lateral are relative terms. Medial indicates near the midline, and lateral describes a position away from the midline. Thus, the medial rectus is the muscle closest to the nose. The superior and inferior recti do not pull straight back on the eye, because both muscles also pull slightly medially. This posterior medial angle causes the eye to roll with contraction of either the superior rectus or inferior rectus muscles. The extent of rolling in the recti is less than the oblique, and opposite from it.
The superior oblique muscle originates at the back of the orbit (a little closer to the medial rectus, though medial to it), getting rounder as it courses forward to a rigid, cartilaginous pulley, called the trochlea, on the upper, nasal wall of the orbit. The muscle becomes tendinous about 10 mm before it passes through the pulley, turns sharply across the orbit, and inserts on the lateral, posterior part of the globe. Thus, the superior oblique travels posteriorly for the last part of its path, going over the top of the eye. Due to its unique path, the superior oblique, when activated, pulls the eye downward and medially.
The last muscle is the inferior oblique, which originates at the lower front of the nasal orbital wall, and passes under the lateral rectus to insert on the lateral, posterior part of the globe. Thus, the inferior oblique pulls the eye upward and laterally.
The extraocular muscles develop along with Tenon's capsule (part of the ligaments) and the fatty tissue of the eye socket (orbit). There are three centers of growth that are important in the development of the eye, and each is associated with a nerve. Hence the subsequent nerve supply (innervation) of the eye muscles is from three cranial nerves. The development of the extraocular muscles is dependent on the normal development of the eye socket, while the formation of the ligament is fully independent.
Coordination of movement between both eyes
Intermediate directions are controlled by simultaneous actions of multiple muscles. When one shifts the gaze horizontally, one eye will move laterally (toward the side) and the other will move medially (toward the midline). This may be neurally coordinated by the central nervous system, to make the eyes move together and almost involuntarily. This is a key factor in the study of squint, namely, the inability of the eyes to be directed to one point.
There are two main kinds of movement: conjugate movement (the eyes move in the same direction) and disjunctive (opposite directions). The former is typical when shifting gaze right or left, the latter is convergence of the two eyes on a near object. Disjunction can be performed voluntarily, but is usually triggered by the nearness of the target object. A "see-saw" movement, namely, one eye looking up and the other down, is possible, but not voluntarily; this effect is brought on by putting a prism in front of one eye, so the relevant image is apparently displaced. To avoid double vision from non-corresponding points, the eye with the prism must move up or down, following the image passing through the prism. Likewise conjugate torsion (rolling) on the anteroposterior axis (from the front to the back) can occur naturally, such as when one tips one's head to one shoulder; the torsion, in the opposite direction, keeps the image vertical.
The muscles show little inertia - a shutdown of one muscle is not due to checking of the antagonist, so the motion is not ballistic.
The initial clinical examination of the extraocular muscles is done by examining the movement of the globe of the eye through the six cardinal eye movements. When the eye is turned in (nasally) and horizontally, the function of the medial rectus muscle is being tested. When it is turned out (temporally) and horizontally, the function of the lateral rectus muscle is tested. When turning the eye down and out, the inferior rectus is contracting. Turning the eye up and out relies on the superior rectus. Paradoxically, turning the eye up and in uses the inferior oblique muscle, and turning it down and in uses the superior oblique.
All of these six movements can be tested by drawing a large "H" in the air with a finger or other object in front of a patient's face and having them follow the tip of the finger or object with their eyes without moving their head. Having them focus on the object as it is moved in toward their face in the midline will test convergence, or the eyes' ability to turn inward simultaneously to focus on a near object. |
Past And Present: A Major Landmark In The Fight For Civil Rights
This year marks the 60th anniversary of the celebrated civil rights case, Brown v. Board of Education. However, on May 3, 1954, two weeks before the Brown ruling, the Supreme Court delivered another important decision in the American Civil Rights movement.
In Hernandez v. Texas, the court declared that the 14th Amendment’s right to equal protection extended to all racial and ethnic groups. In 1951, Texas convicted an agricultural worker named Pedro Hernandez of murdering Joe Espinosa.
In the appeal, Hernandez’s lawyer, Gus Garcia, argued that Hernandez’s 6th Amendment right to an impartial jury and his 14th Amendment right to equal protection had been violated because Texas’ jury selection system did not allow Mexican Americans to participate. Thus, Hernandez, a Mexican American, had been convicted by an all-white jury.
In 1868, the 14th Amendment had been written to secure citizenship rights for newly freed slaves. While racial segregation had limited access to these rights for African Americans, the Texas state court declared that these protections had never applied to Hernandez because the law classified him as white. Garcia argued that, while the law recognized Hernandez as white, and thus entitled him to the same rights and privileges of other white citizens, throughout the Southwest, Mexican Americans faced similar Jim Crow laws to African Americans, and, thus, required similar protections.
The US Supreme Court agreed. It ruled unanimously that Hernandez be retried by an impartial jury and declared that the 14th Amendment afforded all racial and ethnic groups in the United States the same right to equal protection. |
So this is a molecular model of 2-methylundecanal. For most of the time the eleven carbon atoms, in the main chain, sit in this zig-zag orientation. We have the aldehyde functional group at the end of the chain and on the carbon atom next door to it we have the methyl substituent. Now there are a number of different ways of making this particular molecule and we are going to look at just one of these, which involves adding the methyl group directly to the eleven-carbon chain. This synthetic approach relies on the fact that undecanal is a readily available starting material and the two hydrogens adjacent to the carbonyl group are the most acidic.
On deprotonation using a base an anion is formed called an enolate ion - this anion is stabilised by spreading the negative charge from carbon onto the highly electronegative oxygen atom. The enolate ion then reacts with bromomethane in a nucleophilic substitution reaction to introduce the methyl group selectively at the 2-position of the chain. Interestingly, 2-methylundecanal contains a chiral or asymmetric centre. The carbon atom at the 2-position is bonded to four different substituents. Consequently, there are two different ways of arranging the four groups - the methyl group can point up and the hydrogen down, or vice versa.
If you flip one of these structures we can see that the two compounds are mirror images of one another - we call these enantiomers and they are non-superimposable mirror images. Our synthetic approach to 2-methylundecanal produces an equal mixture of enantiomers called a racemate, or a racemic mixture. This is because the enolate ion is planar and 50% of the time bromomethane approaches it from the top face and 50% of the time it approaches it from the bottom face. At York we are researching how to selectively make just a single enantiomer from this type of reaction. Why, you ask? Well enantiomers can have different biological properties, including different aromas. So why can different enantiomers have different smells?
To answer this we need to explore current theories of smell. Surprisingly, the mechanics of how we smell things and recognise odours still aren't fully understood. Of the two theories, vibration theory is more controversial and is based on every substance generating a specific vibration frequency that the nose interprets as a distinct smell. This is like a swipe card - the code in the magnetic band, or the vibrations, can trigger the process. The more widely accepted theory is the shape theory, otherwise known as the lock and key model. Here part of an odour molecule, the key, docks within a receptor in the upper part of our nose, the lock.
This chemical interaction is converted into an electrical signal that travels to the olfactory bulb in the brain which interprets it as a smell. The shape theory explains why some enantiomers can smell differently - the enantiomers fit into different receptors in our nose, like our left and right hands fitting into different gloves. But there are some question marks hanging over the shape theory. For example, similar shaped and sized molecules can smell very differently. Ethanol has a pleasant smell, try sniffing vodka, whereas ethanethiol has an over-powering garlic or skunk-like odour.
So the shape theory does not answer all of the questions and more research is needed to shed light on how biological systems recognise chemical messages and how the human brain makes sense of the nerve signals it receives. The ultimate aim is rational fragrance design, which is the ability to design a fragrance molecule based on accurate predictions of how different features of its structure contribute to its aroma.
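As an aside for readers who like to check this sort of thing computationally, the chiral centre described in the screencast can be spotted with a few lines of Python, assuming the open-source RDKit cheminformatics toolkit is installed; the SMILES string for 2-methylundecanal is written out by hand here.

```python
from rdkit import Chem

# 2-methylundecanal: an eleven-carbon aldehyde chain with a methyl branch at the 2-position
mol = Chem.MolFromSmiles("O=CC(C)CCCCCCCCC")

# includeUnassigned=True reports stereocentres even when no configuration is specified,
# which matches the racemic mixture produced by the enolate alkylation described above.
centres = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
print(centres)   # one potential stereocentre: the carbon alpha to the aldehyde group
```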
Theories of smell
In the screencast, we mention the two proposed theories for the mechanics of scent recognition: vibration theory and shape theory (or the lock and key model). The mechanics behind both of these theories still aren't fully understood and there are unanswered questions surrounding both.
How does vibrational theory work?
Our noses have olfactory receptors that are used to distinguish different scents. The vibrational theory explanation for how this occurs is that atoms are joined together by bonds that vibrate at specific frequencies; these vibration frequencies must be turned into electrical signals and delivered to the brain. The ability to distinguish between different scents arises from the activation of specific olfactory pathways by the specific vibration energies of different bonds within different odour molecules.
Therefore, while the lock and key model proposes that molecules with similar structures will smell the same, the vibrational model states that molecules with bonds that have similar vibration frequencies will have the same scent.
A recent study
Recent studies examining the extent of isotopomer discrimination in honeybees suggest that shape theory might not be enough to fully explain the bees' ability to distinguish between odour compounds. Isotopomers (also known as isotopic isomers) are versions of the same molecule with identical numbers of each element and isotope, but with the isotopes in different positions; isotopomer discrimination is the ability to register differences between such odour molecules.
The study used undeuterated and deuterated versions of the same molecules. One of these odour molecules was acetophenone, compared in its undeuterated form (containing only H atoms) and its fully deuterated form (with all H atoms replaced by D atoms).
Deuterated molecules are molecules where some, or all, of the hydrogen atoms (H) in the compound are replaced with deuterium (D); which is a stable isotope of hydrogen. The mass of deuterium is approximately double that of hydrogen, which leads to the deuterated and undeuterated molecules having almost identical shapes but significant differences in the stretching frequencies of the C–H and C–D bonds. This results in the C–H and C–D bonds having different specific vibration energies, suggesting that the undeuterated and deuterated molecules will activate different olfactory pathways to the brain, hence registering as different scents.
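A rough harmonic-oscillator estimate (a textbook calculation added here for illustration, not part of the honeybee study) shows how large that frequency shift is. Treating the bond as two masses on a spring, the stretching frequency scales with the inverse square root of the reduced mass μ, while swapping H (mass 1) for D (mass 2) barely changes the force constant k:

$$ \nu \propto \sqrt{\frac{k}{\mu}}, \qquad \frac{\nu_{\mathrm{C\text{-}H}}}{\nu_{\mathrm{C\text{-}D}}} \approx \sqrt{\frac{\mu_{\mathrm{C\text{-}D}}}{\mu_{\mathrm{C\text{-}H}}}} = \sqrt{\frac{(12 \times 2)/14}{(12 \times 1)/13}} \approx 1.36 $$

so a C–H stretch near 3000 cm⁻¹ falls to roughly 2200 cm⁻¹ on deuteration.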
This was found to be the case when the scientists studied the effects of deuterated and fully undeuterated versions of acetophenone on the activation pathways in honeybees. The analysis determined that there were differences in pathway activation when deuterated molecules of acetophenone were used compared to the undeuterated version; leading to the conclusion that the lock and key model of olfaction might not be able to explain the observed distinction between deuterated and undeuterated acetophenone odour molecules by honeybees.
This study doesn't provide conclusive proof that vibrational theory is the mechanism of odourant reception, as the deuteration ultimately affects more than the vibrational spectrum of the odour molecules. However, it does provide a basis for the assumption that the vibrations of molecules play a part in the odourant-receptor interactions. The true mechanism of scent reception remains elusive, but this study does suggest that vibration theory is not to be sniffed at!
Cracking the olfactory code
In 2015, a $15 million project, sponsored by the National Science Foundation and the White House Brain Initiative, called Cracking the Olfactory Code was initiated. Scientists hope to unravel how smell (the oldest guidance system in the world) works. Then, the team aims to teach robots how to smell!
Keen sense of smell
So, who has the keenest sense of smell, dogs or humans? Recent research suggests that our sense of smell rivals that of dogs. For example, we are more sensitive than dogs to amyl acetate (pentyl ethanoate), CH3CO2(CH2)4CH3, found in bananas. This is likely because identifying ripe fruit was more important to our own ancestors and irrelevant to those of dogs.
© University of York |
In garden design, a tint is simply a colour to which white has been added, making it appear lighter. When dealing with paint on the exterior of your house, or on garden furniture, you can easily experiment with this by mixing white with your chosen colour.
In theory a tint will begin with a fully saturated colour and end up as white, as you add more and more white to the mix.
A shade is achieved by adding black to a colour in order to make it less light than it already is. Again when using paint you can easily experiment by beginning with a fully saturated colour and adding black to make it darker.
In theory shade will begin with a fully saturated colour and end up being black as you work through the process of adding more and more black to the mix.
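If you also work with colours digitally, for example when sketching a planting plan on screen, the same idea can be written as a tiny calculation. This sketch simply mixes an RGB colour toward white or black and is offered only as an illustration of the concept:

```python
def tint(colour, amount):
    """Mix an (r, g, b) colour toward white; amount = 0 keeps it unchanged, 1 gives pure white."""
    return tuple(round(c + (255 - c) * amount) for c in colour)

def shade(colour, amount):
    """Mix an (r, g, b) colour toward black; amount = 0 keeps it unchanged, 1 gives pure black."""
    return tuple(round(c * (1 - amount)) for c in colour)

base = (200, 60, 100)     # a fully saturated pink as the starting colour
print(tint(base, 0.4))    # (222, 138, 162) - a lighter tint of the same hue
print(shade(base, 0.4))   # (120, 36, 60) - a darker shade of the same hue
```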
The above is offered as a guideline to help you understand tint and shade. It is of course, unless you are a plant breeder, not possible when selecting plants to mix their flowers together to create ones that are lighter or darker. To choose tints successfully you will need to spend some time carefully looking at the flowering plants in your local nursery or garden centre. Identify your starting colour and then look for some lighter tints or darker shades of that colour.
It may be easier when selecting tints or shades in flowering plants to identify these characteristics in the same type of plant, rather than changing to different plants each time. It can be time consuming to get it right but it can be a very effective way to achieve some useful effects in your Garden Design, for instance:
- To gradually lighten or darken a linear or three-dimensional space
- To make a transition from dark to light
- To combine with the texture of your plants to create perspective
- To reflect seasonal changes |
The Australian Aborigines League
In Victoria, New South Wales, South Australia and Western Australia, Aboriginal people were increasingly bold in the 1930s, speaking out against the insecurity of their reserves and the denial of their civil rights. The Australian Aborigines League, established in the mid-1930s and led by Melbourne-based William Cooper, issued a nine-point program some time between 1934 and 1936:
* Control over Aborigines to be transferred from the states to the Commonwealth
* The implementation of a positive national policy of uplift
* Increased funding
* The ending of discrimination between Aborigines of full and part descent, and between Aborigines and Europeans
* The granting of full citizenship rights to 'civilised' Aborigines
* Recognition by the legal system of tribal laws in appropriate circumstances
* Full access to reserves and the granting of land
* The opening of educational opportunities at the highest level
* The granting of parliamentary representation on the New Zealand model
The League combined with the Aborigines' Progressive Association of New South Wales (formed in 1937) to publicise these demands at such events as the 'Day of Mourning' - the Aborigines' counterpoint, in January 1938, to the colonists' celebration of 150 years of white settlement.
From 1933 to 1938, Cooper and others circulated a petition to King George VI in which some 2000 Indigenous signatories complained that those who settled Australia had disobeyed the British government's 'strict injunction' against expropriating Indigenous land. The Australian Government refused to pass the petition on to the King.
Keywords: activism, Australian Aboriginal Progressive Association, Australian Aborigines League, coexistence, Cooper, William, Day of Mourning (1938), New South Wales, reserves, South Australia, Victoria, 1933-1938
Still: William Cooper. Courtesy of AIATSIS.
Author: Rowse, Tim and Graham, Trevor |
Context is Everything: Grammar Lessons
Teaching students about grammar using popular, well-written literature can be the best solution to a tricky topic.
By Jo Ann Zimmerman
Once upon a time in America, students spent their school days parsing passive verb constructions, slaving over subject-verb agreement, and anguishing about antecedents. Those were the days.
Reformers are quick, and correct, to note that much about education has improved since the era of Warriner's. As English teachers, we rightly stress reading comprehension and the writing process over rote memorization of arcane grammar rules. We immerse students in rich literary experiences and use the latest technology our districts can afford to engage students in writing for real purposes. Yet too many students still struggle to read and write. What are we doing wrong?
In truth, the problem may be what we are not doing. With all of the other legitimate demands on an English teacher's time, it has been easy to let grammar instruction slide. Then, too, direct instruction in language mechanics has been out of fashion for some time. Much research supports the position that grammar, like vocabulary, is best learned in context. In classrooms across the country, this translates into the practice of using student writing to address grammar errors - all well and good as far as it goes, but where are the models of correct usage?
A rich source of material to model proper writing mechanics is right in front of us - the novels we read in our classrooms. Just as we pull vocabulary words from the story, why not use sentences from the novel to teach verb tenses? Isn't a big part of what makes books such as "The Phantom Tollbooth" and "Freak the Mighty" so readable the way their pronouns and antecedents agree? Wouldn't it be great to integrate lessons in language mechanics into existing novel study units? And how much could you reinforce reading comprehension analyzing sentences right out of the book?
Grammar Lesson Plans:
Here's a group of lessons that ties together comprehension for specific chapters of the novel and word study exercises. Most activities are aimed at grades 5-6, though the dictionary lesson could easily be adapted to older readers.
What better book to use for teaching idioms than The Phantom Tollbooth? This lesson for grades 4-6 uses internet sites with idiom activities to introduce the concept, then challenges students to identify idioms from the story.
This lesson for grades 6-8 uses expository text to teach verb tenses. There is a link to a news story for students to read, though any grade level text could be used.
Students in grades 7-10 read a news story about UFOs and edit the piece for grammar and usage. Includes a useful editing checklist. |
Bacteremia is the presence of bacteria in the bloodstream.
Most bacteremia is temporary, has no symptoms, and will usually not lead to serious infection. However, bacteremia can also lead to serious infections. The risk is highest in those with weakened immune systems. Bacteremia that causes symptoms requires treatment to prevent more severe infections.
Bacteria are normally present in certain areas of the body. For example, they can be found on the skin and inside the mouth, nose, throat, large intestine, and vagina. Small tears or damage in these tissues can allow the bacteria to enter the bloodstream. This can happen during everyday activities, like vigorous toothbrushing, or certain medical procedures. Bacteremia may also be caused by an infection that is already in the body, such as pneumonia or a urinary tract or ear infection.
Once bacteria enter the blood, the immune system will normally remove them. The quick removal of bacteria will stop other infections from developing. Complications of bacteremia usually develop if:
- Bacteria remain in the bloodstream for a long period of time
- Large amounts of bacteria in the blood overwhelm the immune system
- The immune system is weakened by medical conditions, treatments, or procedures
This can lead to infections anywhere in the body such as lungs, heart, brain, or bone. Growth of the bacteria in the bloodstream can also lead to sepsis, a body wide infection.
Certain medical or dental procedures can cause bacteremia. Higher risk activities include:
- Dental cleaning or procedures
- Urinary catheter
- IV or central catheters
- Tubes placed in throat to assist in breathing—mechanical ventilation
- Surgical treatment of abscesses or infected wounds
- Invasive procedures, such as endoscopy or open surgeries
- Intensive care unit admission
The risk of developing a serious infection from bacteremia is increased with:
- A suppressed or weakened immune system
- Exposure to aggressive strains of bacteria
- Presence of implanted medical devices
Bacteremia symptoms can vary depending on the amount of bacteria present.
- There may be no symptoms in children who briefly have small amounts of bacteria in the blood.
- If higher amounts of bacteria enter the system there may be fever without other symptoms.
- Growth of bacteria in the bloodstream can lead to more general symptoms such as a fever with body aches.
- Higher growth rates and more severe bacteremia can result in symptoms of sepsis, such as a fast heart rate, low blood pressure, or mental confusion.
Bacteremia can lead to a number of serious complications such as infections of:
- Heart tissue—infective endocarditis
- Central nervous system—bacterial meningitis or brain abscess
- Bone tissue—osteomyelitis
- Joint tissue—septic arthritis
- Soft tissues—abscess
Untreated complications can lead to disability, organ failure, and death.
You will be asked about your child’s symptoms and medical history. A physical exam will be done, including specific questions about recent medical treatments or surgery.
Blood tests will be done to see if your child's body is responding to an infection.
If bacteremia is suspected, a blood culture test will be done to identify the specific bacteria causing the problem. Identifying the specific bacteria may help with treatment decisions.
Bacteremia that is not causing symptoms may not need treatment. The body’s immune system will control and remove the bacteria.
Bacteremia that causes symptoms or infections is treated with antibiotics. The antibiotics may be adjusted later if the blood culture finds bacteria that require specific antibiotics.
Other symptoms associated with the location of the infection or sepsis will need to be treated.
Antibiotics may be recommended before high-risk procedures if a child is at high risk for infection. This includes children with weakened immune systems or medical implants. The antibiotics will eliminate bacteria that enter into the blood before they can cause problems. |
Applied Force Affects Motion of Object
by Ron Kurtus (revised 1 October 2015)
An applied force affects the motion of an object. An applied force can be a push, pull, or dragging on an object.
The push can come from direct contact, like when objects collide or from a force field like magnetism. The pull seems to only come from a field at a distance, like gravity or magnetism. Dragging can occur when sliding an object over the surface of another.
The action from a force can cause an object to move or speed up (accelerate), to slow down (decelerate), to stop, or to change direction. Since any change in velocity is considered acceleration, it can be said that a force on an object results in the acceleration of an object.
Questions you may have include:
- How can a force accelerate an object?
- How can a force slow down an object?
- When can a force cause an object to change directions?
This lesson will answer those questions. Useful tool: Units Conversion
Applied force can cause acceleration
When a force acts on an object that is stationary or not moving, the force will cause the object to move, provided there are no other forces preventing that movement. If you throw a ball, you are pushing on it to start its movement. If you drop an object, the force of gravity causes it to move.
If an object is initially stationary, it accelerates when it starts to move. Acceleration is the change in velocity over a period of time. The object is going from v = 0 to some other speed or velocity.
Likewise, if an object is already moving and a force is applied in the same direction, the object will speed up or accelerate. For example, a gust of wind can speed up a sailboat.
Accelerates Until Force Stops
As long as the force is applied to a given object, it will continue to accelerate. Once the force is withdrawn, the object will continue to move at a constant velocity, according to the Law of Inertia.
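To make this concrete, here is a minimal numerical sketch; the mass, force, and time step are arbitrary illustration values, not taken from the lesson. The object speeds up while the force acts and then coasts at a constant velocity once the force is withdrawn.

```python
mass = 2.0       # kg
force = 10.0     # N, applied only during the first second
dt = 0.1         # s per step
velocity = 0.0   # m/s, the object starts at rest

for step in range(20):
    applied = force if step < 10 else 0.0   # push for the first 1.0 s, then let go
    acceleration = applied / mass           # Newton's second law: a = F / m
    velocity += acceleration * dt           # velocity changes only while a force acts
    print(f"t = {(step + 1) * dt:.1f} s  v = {velocity:.2f} m/s")
```

The printout climbs steadily to 5.00 m/s during the first second and then stays there, which is the Law of Inertia in action.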
Applied force can cause deceleration
If an object is moving and there is an applied force in the opposite direction of the motion, the object will decelerate or slow down. If you throw a ball up at a given velocity, it will slow down as it travels upward due to the force of gravity. Likewise, an airplane will decelerate if flying into a strong headwind.
A decelerating force can cause a moving object to stop. This can be seen when you apply the brakes on your car.
Applied force can cause change in direction
A force applied at an angle to the direction of motion of an object can cause it to change direction. A side wind will cause an airplane to change its direction.
It is possible that the object keeps going at the same speed, if the force is applied perpendicular to the direction of motion. But the velocity of the object changes. Speed is how fast the object is going, while velocity is speed plus direction.
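A one-line vector calculation (standard physics, added here for illustration) shows why a perpendicular force changes direction but not speed:

$$ \frac{d}{dt}|\mathbf{v}|^2 = 2\,\mathbf{v}\cdot\frac{d\mathbf{v}}{dt} = \frac{2}{m}\,\mathbf{v}\cdot\mathbf{F} = 0 \quad \text{when } \mathbf{F}\perp\mathbf{v}, $$

so the speed stays fixed while the velocity vector rotates.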
A force is a push, pull, or dragging on an object that affects its motion. The push can come from direct contact, like when objects collide or from a force field. The pull seems to only come from a field at a distance. The action from a force can cause an object to accelerate, to decelerate, to stop or to change direction.
Since any change in velocity is considered acceleration, it can be said that a force on an object results in the acceleration of an object.
Become a positive force in your community.
Resources and references
Forces In Nature by Liz Sonneborn; Rosen Publishing Group (2004) $25.25 - Understanding gravitational, electrical and magnetic force
The Science of Forces by Steve Parker; Heinemann (2005) $29.29 - Projects with experiments with forces and machines
Glencoe Science: Motion, Forces, and Energy, by McGraw-Hill; Glencoe/McGraw-Hill (2001) $19.32 - Student edition (Hardcover)
Applied Force Affects Motion of Object |
Acid phosphatase (AP) deficiency occurs when the body does not produce enough AP, causing phosphates to build up in the body. APs are enzymes that break down phosphates in the body, a necessary chemical process. Symptoms of AP deficiency include vomiting, low muscle tone, fatigue, spasms of the neck and spinal muscles, and extreme bleeding.
AP deficiency can be noticed at birth and is inherited genetically, caused by a mutation in the ACP2 gene on chromosome 11. Genes are made up of DNA and produce proteins responsible for normal bodily function and health. The mutation is inherited in an autosomal recessive pattern, meaning that a child needs two copies of the mutated gene to have the disease.
Diagnosis of AP deficiency occurs either at birth, after examining symptoms, or prenatally if the family has a history of children with AP deficiency. AP deficiency is a poorly understood condition. A potential therapy may include medications to treat inflammation; these have been shown to increase AP levels slightly, though they are not known to "cure" AP deficiency, and this approach is still being explored. AP deficiency typically results in infant death.
If a family member has been diagnosed with AP deficiency, speak with your doctor to learn more information. Support groups may also be available for further resources.
Description Last Updated: Aug 21, 2018 |
Tests that examine the blood and bone marrow are used to detect (find) and diagnose adult AML.
The following tests and procedures may be used:
- Physical exam and history : An exam of the body to check general signs of health, including checking for signs of disease, such as lumps or anything else that seems unusual. A history of the patient's health habits and past illnesses and treatments will also be taken.
- Complete blood count (CBC): A procedure in which a sample of blood is drawn and checked for the number of red blood cells, white blood cells, and platelets, the amount of hemoglobin in the red blood cells, and the portion of the blood sample made up of red blood cells.
- Peripheral blood smear : A procedure in which a sample of blood is checked for blast cells, the number and kinds of white blood cells, the number of platelets, and changes in the shape of blood cells.
- Bone marrow aspiration and biopsy : The removal of bone marrow, blood, and a small piece of bone by inserting a hollow needle into the hipbone or breastbone. A pathologist views the bone marrow, blood, and bone under a microscope to look for signs of cancer.
- Cytogenetic analysis : A laboratory test in which the cells in a sample of blood or bone marrow are viewed under a microscope to look for certain changes in the chromosomes. Other tests, such as fluorescence in situ hybridization (FISH), may also be done to look for certain changes in the chromosomes.
- Immunophenotyping : A process used to identify cells, based on the types of antigens or markers on the surface of the cell. This process is used to diagnose the subtype of AML by comparing the cancer cells to normal cells of the immune system. For example, a cytochemistry study may test the cells in a sample of tissue using chemicals (dyes) to look for certain changes in the sample. A chemical may cause a color change in one type of leukemia cell but not in another type of leukemia cell.
- Reverse transcription – polymerase chain reaction test (RT–PCR): A laboratory test in which cells in a sample of tissue are studied using chemicals to look for certain changes in the structure or function of genes. This test is used to diagnose certain types of AML including acute promyelocytic leukemia (APL).
From Adult Acute Myeloid Leukemia Treatment, National Cancer Institute |
Ziggurat: a multi-storied temple tower from ancient Mesopotamia.
Ziggurats are, architecturally, the Mesopotamian equivalent of the Egyptian pyramids: large artificial square mountains of stone. They are equally ancient. But there are two differences: a ziggurat was not a tomb but a temple, and ziggurats were built well into the Seleucid age, whereas the building of pyramids came to an end after c.1640 BCE. Ziggurats are, briefly, temple towers.
Our word ziggurat is derived from ziqqurratu, which can be translated as "rising building" (Akkadian zaqâru, "to rise high"). Some of them rose very high indeed. The temple tower known as Etemenanki (the 'House of the foundation of heaven on earth') in Babylon was 92 meters high. Even larger was the shrine of the god Anu at Uruk, built in the third or second century BCE. The best preserved temple tower is at Choga Zanbil in Elam, modern Khuzestan in Iran.
Ziggurats played a role in the cults of many cities in ancient Mesopotamia. Archaeologists have discovered nineteen of these buildings in sixteen cities; the existence of another ten is known from literary sources.
They were always built by kings. In third millennium BCE Mesopotamia, there was a conflict between the two great organizations, the temple and the palace. By building ziggurats, the king showed that he could perform more impressive religious deeds than the priesthood.
The most famous ziggurat is, of course, the "tower of Babel" mentioned in the Biblical book Genesis: a description of the Etemenanki of Babylon. According to the Babylonian creation epic Enûma êliš the god Marduk defended the other gods against the diabolical monster Tiamat. After he had killed her, he brought order to the cosmos, built the Esagila sanctuary, which was the center of the new world, and created humankind. The Etemenanki was next to the Esagila, and this means that the temple tower was erected at the center of the world, as the axis of the universe. Here, a straight line connected earth and heaven. This aspect of Babylonian cosmology is echoed in the Biblical story, where the builders say "let us build a tower whose top may reach unto heaven". |
Sunrise and Sunset: The Analemma
Why the winter solstice isn't the darkest morning—or evening—of the year
Imagine you set up a camera to snap a picture of the sun every couple of weeks at exactly the same time of day, let's say 10:00 am. After a year you combine all the pictures into a single image.
What you would see is an odd figure-eight pattern called the analemma. The sun slowly creeps around the analemma over the course of a year. No matter what time of day you choose to "freeze" the sun, you'll see it drift along the same figure-eight pattern.
Where does the analemma come from?
The tilt of Earth's axis constantly changes the angle at which we view the sun—moving it higher in summer, lower in winter, and slightly left and right in the process—and it's this changing perspective that produces a figure eight. On top of this, the Earth's elliptical orbit constantly changes the speed at which we travel around the sun, and this adds additional left and right movement, making the lower half of the eight fatter.
Visualizing just how axis tilt and varying orbital speed conspire to make the sun drift in such a peculiar pattern is a herculean exercise in spatial perspective and I will not attempt to explain it here. If you want to learn a little bit more about it, I have a slightly expanded explanation of the analemma.
Fortunately it's a bit easier to see how the figure eight—once you accept that it exists—causes the year's latest sunrise and earliest sunset NOT to occur on the winter solstice, the shortest day of the year.
Let's look at some rise and set snapshots around the winter solstice. The image below shows morning and evening analemmas with the sun moving along them on each of three days, as viewed from Chicago. Note the fixed time of day for each snapshot. You can imagine the analemma itself, with the sun attached, rising and setting at the same time every day of the year—it's the sun's varying position on the analemma that causes sunrise and sunset times to change.
The key is that the analemma is tipped when viewed from the mid latitudes. Because of this the bottom of the figure eight (where the sun lies on the winter solstice) is NOT the lowest point relative to the horizon. In addition, the lowest point relative to the eastern horizon and that relative to the western horizon are completely different.
Note that your perspective changes depending on whether the analemma is in the east or west. You can mimic the morning (eastern) analemma with your left hand by holding a finger—tipped left—in front of your face. Then, without moving your finger, move your head forward until your finger is behind you. Then turn and look back—now your finger appears tipped to the right, like the evening (western) analemma.
December 8: In the morning the sun is still quite high on the analemma curve, having risen at 7:06 AM. But in the evening, with the analemma flipped, the sun is at its lowest point relative to the horizon and therefore sets the earliest of the year, at 4:19 PM. Remember the sun is in the same spot on the figure eight morning and evening—it's just that the analemma's relationship to each horizon is different.
December 21: The sun's morning movement has been more or less vertically down resulting in a loss of eight minutes of morning light. Now compare this with the evening motion: the sun has moved upward, but less so because its path relative to the western horizon had a bigger horizontal component—so the evening gain is only four minutes. As a result, the day length has shortened by a net four minutes. This is the shortest day of the year because the sun is at its lowest average point relative to both horizons.
Another way to think of it is this: freeze the sun at any given time of day and measure the distance to its rising point and the distance to its setting point. Add the two distances together; the total distance is smallest on the winter solstice.
January 3: The morning sun has reached its lowest point and has lost another four minutes of morning daylight. But due to the strong vertical gains in the evening the day has gotten longer overall.
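If you want to check these dates numerically, here is a short Python sketch. It uses NOAA's low-accuracy solar-position formulas together with Chicago's approximate coordinates and a fixed UTC-6 offset (my assumptions; the article doesn't prescribe a method, an exact location, or a year), and scans December and early January for the latest sunrise, earliest sunset, and shortest day.

    import math
    from datetime import date, timedelta

    LAT, LON, TZ = 41.85, -87.65, -6   # degrees north, degrees east, hours from UTC (assumed for Chicago)

    def rise_set(d):
        """Return (sunrise, sunset) for date d in local decimal hours, using NOAA's rough formulas."""
        gamma = 2 * math.pi / 365 * (d.timetuple().tm_yday - 1)   # fractional year, radians
        # Equation of time (minutes) and solar declination (radians)
        eqtime = 229.18 * (0.000075 + 0.001868 * math.cos(gamma) - 0.032077 * math.sin(gamma)
                           - 0.014615 * math.cos(2 * gamma) - 0.040849 * math.sin(2 * gamma))
        decl = (0.006918 - 0.399912 * math.cos(gamma) + 0.070257 * math.sin(gamma)
                - 0.006758 * math.cos(2 * gamma) + 0.000907 * math.sin(2 * gamma)
                - 0.002697 * math.cos(3 * gamma) + 0.00148 * math.sin(3 * gamma))
        lat = math.radians(LAT)
        # Hour angle of rise/set for a solar zenith of 90.833 degrees (refraction plus solar radius)
        cos_ha = (math.cos(math.radians(90.833)) / (math.cos(lat) * math.cos(decl))
                  - math.tan(lat) * math.tan(decl))
        ha = math.degrees(math.acos(cos_ha))
        sunrise_utc = 720 - 4 * (LON + ha) - eqtime   # minutes after 00:00 UTC
        sunset_utc = 720 - 4 * (LON - ha) - eqtime
        return sunrise_utc / 60 + TZ, sunset_utc / 60 + TZ

    days = [date(2023, 12, 1) + timedelta(n) for n in range(46)]   # Dec 1 to mid-January, arbitrary year
    times = {d: rise_set(d) for d in days}
    print("latest sunrise :", max(days, key=lambda d: times[d][0]))
    print("earliest sunset:", min(days, key=lambda d: times[d][1]))
    print("shortest day   :", min(days, key=lambda d: times[d][1] - times[d][0]))

Run as written, the three dates it reports should land very close to the December 8, December 21, and January 3 snapshots above.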
It might help to run the morning (or evening) sequence like a movie in your mind: visualize the sun drifting down and to the left (morning) or up and to the left (evening) as the days march on. |
How do you represent the word “Amsterdam” in a computer? How do you capture its semantics (Amsterdam is both a city and a capital)? And how do you make sure that London has a similar representation since it is also a city and a capital? Deep Learning is a novel Artificial Intelligence technique that attempts to answer these questions.
With Deep Learning, large amounts of text data are processed through algorithms to automatically learn representations of similar words. Textkernel has started expanding its ‘document understanding’ models (cv and vacancy parsing) to take advantage of the benefits of Deep Learning.
Using raw data to learn new knowledge
In the case of text, Deep Learning exploits the fact that similar words occur in similar contexts to infer the meaning of a word. For instance, in a CV extraction system, the words “Amsterdam” and “London” tend to be used in addresses as the “city”. Deep Learning sifts through large amounts of data and produces word representations that cluster these similar words together. When a new word with a representation similar to Amsterdam and London is found, it is likely to be a city. In this way, new knowledge can be inferred from raw data.
A representation of the name (red) and address (black) words from 4 CVs. The plot is a projection in 3D of the word representation inferred using Deep Learning. Note how first names and parts of British postcodes (e.g. 1XA) each tend to cluster together.
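To make the underlying intuition concrete, here is a toy Python sketch of the distributional idea: words that occur in similar contexts end up with similar vectors. It uses plain co-occurrence counts and cosine similarity on a handful of invented lines, so it is only an illustration of the principle, not Textkernel's Deep Learning models.

    from collections import Counter
    import math

    # A few invented, address-like snippets (not real CV data).
    sentences = [
        "lives in amsterdam netherlands", "works in london uk",
        "moved to amsterdam last year", "moved to london last year",
        "lives in paris france", "moved to paris last year",
    ]

    def context_vector(word, window=2):
        """Count the words appearing within `window` positions of `word`."""
        counts = Counter()
        for s in sentences:
            tokens = s.split()
            for i, t in enumerate(tokens):
                if t == word:
                    for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                        if j != i:
                            counts[tokens[j]] += 1
        return counts

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    amsterdam = context_vector("amsterdam")
    for w in ["london", "paris", "year"]:
        print(w, round(cosine(amsterdam, context_vector(w)), 2))
    # The two city names score markedly higher than the non-city word.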
Increasing coverage and robustness
Deep Learning has allowed Textkernel to break free from the limitations of using human annotated data in its ‘machine learning’ pipeline. Adding new knowledge used to be a time consuming process. For example, a list of skills had to be manually gathered and then integrated in the pipeline. With Deep Learning this process can be automated and implemented in a more systematic fashion. This new knowledge increases the robustness of Textkernel’s document understanding models, makes them more responsive to new words and increases their domain coverage. |
The Great Volume Exchanger...or The Magic Matter Maker
This lesson uses a discrepant event to pique curiosity and provide an excellent metaphor for a problem in science that can be addressed in a scientific way. Water is poured into a magic box, and out comes a much larger volume of water (or other liquid). Students will learn that science is uncertain because scientists can make more than one workable model to explain their observations.
Intended for grade levels:
Type of resource:
No specific technical requirements, just a browser required
Cost / Copyright:
Copyright 1999 ENSI (Evolution and the Nature of Science Institutes) This material may be copied only for noncommercial classroom teaching purposes, and only if this source is clearly cited.
DLESE Catalog ID: DLESE-000-000-004-638
Resource contact / Creator / Publisher:
Contributor: Steve Randak
Contributor: Tom Watts
Contributor: Michael Kimmel |
Nucleotides are the basic monomer building block units in the nucleic acids. A nucleotide consists of a phosphate, pentose sugar, and a heterocyclic amine.
The phosphoric acid forms a phosphate-ester bond with the alcohol on carbon #5 in the pentose. A nitrogen in the heterocyclic amines displaces the -OH group on carbon #1 of the pentose. The reaction is shown in the graphic below. If the sugar is ribose, the general name is ribonucleotide and deoxyribonucleotide if the sugar is deoxyribose. The other four nucleotides are synthesized in a similar fashion.
There are a variety of simple ways to represent the primary structures of DNA and RNA. The simplest method is just a simple line to indicate the pentose-phosphate backbone with letters to indicate the heterocyclic amines as shown in the graphic on the left.
Just as the exact amino acid sequence is important in proteins, the sequence of heterocyclic amine bases determines the function of the DNA and RNA. This sequence of bases on DNA determines the genetic information carried in each cell. Currently, much research is under way to determine the heterocyclic amine sequences in a variety of RNA and DNA molecules. The Genome Project has already succeeded in determining the DNA sequences in humans and other organisms. Future research will determine the exact functions of each DNA segment, as these contain the codes for protein synthesis.
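As a minimal illustration of this letters-on-a-backbone representation (my own toy example, not taken from the text above), a base sequence can be handled as a simple string in Python; the corresponding RNA spelling differs only in that uracil (U) replaces thymine (T).

    # Toy representation of a short primary structure as a string of base letters.
    # The sequence itself is made up for illustration.
    sequence = "ATGCCTAG"

    def to_rna(dna: str) -> str:
        """Return the RNA spelling of a DNA base sequence (U in place of T)."""
        return dna.replace("T", "U")

    def base_composition(dna: str) -> dict:
        """Count how often each base letter occurs in the sequence."""
        return {base: dna.count(base) for base in "ACGT"}

    print(to_rna(sequence))            # AUGCCUAG
    print(base_composition(sequence))  # {'A': 2, 'C': 2, 'G': 2, 'T': 2}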
This material is based upon work supported by the National Science Foundation under Grant Number 1246120 |
– various sources
FACT: The best and most recent science indicates that methane is 33 times more powerful than carbon dioxide as a greenhouse gas, when considered over an integrated time period of 100 years following emission, and 105 times more powerful on an integrated 20 year time period.
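To see what those factors mean in practice, here is a quick Python conversion sketch; the one-tonne methane figure is arbitrary and chosen only to show the arithmetic.

    # CO2-equivalent conversion using the GWP values quoted above (Shindell et al., 2009).
    GWP_METHANE = {"100-year": 33, "20-year": 105}

    methane_tonnes = 1.0   # arbitrary example quantity
    for horizon, gwp in GWP_METHANE.items():
        print(f"{methane_tonnes} t CH4 is equivalent to {methane_tonnes * gwp} t CO2 ({horizon} horizon)")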
The global warming potential, or GWP, is a simple metric often used to assess how much more powerful a given greenhouse gas is when compared to carbon dioxide. Back in 1996, the Intergovernmental Panel on Climate Change estimated the GWP for methane as 21, considered over a 100-year time period following emission. As of 2007, the IPCC presented global warming potentials (GWP) for methane of 25 for a 100-year integrated time frame and 72 for a 20-year integrated time frame after emission. Using a more recent model to better capture how methane interacts with other radiatively active substances, Shindell et al. in a 2009 paper in Science updated these factors to 33 and 105 respectively. These higher values reflect the best, most current science. The GWP for methane is less at the longer time scale simply because methane does not stay in the atmosphere for as long as carbon dioxide. |
Learners will construct a valid scientific question that can be answered by data and/or modeling and choose an appropriate mission for their rover that will answer their scientific question. The lesson uses the 5E instructional model and includes: TEKS Details (Texas Standards alignment), Essential Question, Science Notebook, Vocabulary Definitions for Students, Vocabulary Definitions for Teachers, four Vocabulary Cards, and supplements on writing a scientific question and possible Mission Choices. This is lesson 5 of the Mars Rover Celebration Unit, a six week long curriculum.
Learners will create a physical timeline of comet appearances in art and literature throughout history. Participants use a set of photos depicting comets in art images and science missions and place the images in chronological order, while learning about the perceptions of comets during that time period. Note: Timeline cards that are needed to complete this activity can be found under the Related and Supplemental Resources links on the right side of this page.
Learners will investigate how lateral velocity affects the orbit of a spacecraft such as the International Space Station (ISS). Mathematical extensions are provided. This is science activity 1 of 2 found in the ISS L.A.B.S. Educator Resource Guide.
Learners will grow a sugar crystal and learn how this relates to growing protein crystals in space. The lack of gravity allows scientists on the space station to grow big, almost perfect crystals, which are used to help design new medicines. This is science activity 2 of 2 found in the ISS L.A.B.S. Educator Resource Guide.
Learners will construct two different types of trusses to develop an understanding of engineering design for truss structures and the role of shapes in the strength of structures. For optimum completion, this activity should span 3 class periods to allow the glue on the structures to dry. This is engineering activity 1 of 2 found in the ISS L.A.B.S. Educator Resource Guide.
Learners will investigate the relationship between mass, speed, velocity, and kinetic energy in order to select the best material to be used on a space suit. They will apply an engineering design test procedure to determine the impact strength of various materials. This is engineering activity 2 of 2 found in the ISS L.A.B.S. Educator Resource Guide.
This is an activity about using solar arrays to provide power to the space station. Learners will solve a scenario-based problem by calculating surface areas and determining the amount of power or electricity the solar arrays can create. This is mathematics activity 1 of 2 found in the ISS L.A.B.S. Educator Resource Guide.
Learners will investigate the relationship between speed, distance, and orbits as they investigate how quickly the International Space Station (ISS) can travel to take a picture of an erupting volcano. This is mathematics activity 2 of 2 found in the ISS L.A.B.S. Educator Resource Guide.
This activity is an interactive word find game with words related to comets and NASA's Comet Nucleus Sample Return mission. Accompanying text and pictures describe what comets are and why we are interested in them. |
One of the most common types of graphs in the sciences is an X-Y scatter plot in which one variable is plotted against another. A graph of elevation versus horizontal distance is a good example and an intuitive starting point for geoscience students. Students should be able to describe what data is being graphed, the range of values, and how the data (elevation in this case) changes as one moves out from the origin, using such descriptors as gradually increases, decreases rapidly, or increases rapidly and then levels off.
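Here is a minimal example of such a plot, using Python and matplotlib simply because they are free and widely available (several of the resources below focus on spreadsheets instead); the elevation numbers are invented and describe a profile that rises rapidly and then levels off.

    import matplotlib.pyplot as plt

    distance_km = [0, 1, 2, 3, 4, 5, 6, 7, 8]
    elevation_m = [120, 135, 180, 260, 390, 480, 520, 535, 540]   # made-up transect data

    plt.plot(distance_km, elevation_m, marker="o")
    plt.xlabel("Horizontal distance (km)")
    plt.ylabel("Elevation (m)")
    plt.title("Elevation along a transect")
    plt.show()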
- Describing Graphs from Lamont Doherty Earth Observatory can help students translate graphical information into words.
- Introduction to Charts and Graphs by DISCUSS (Discovering Important Statistical Concepts Using SpreadSheets) is an excellent online tutorial related to MS Excel. Excel templates are also provided for students. The particular focus on X-Y scatter plots is useful for introductory-level students.
- Reading Graphs is a beginning Algebra tutorial on graphs.
- This Leeds College English Language Practice session may be useful to introductory-level students having trouble coming up with their own words to describe the behavior represented by graphs. |
It’s not every day that an asteroid from outside the Solar System comes whizzing past your front door. In fact, it’s only ever happened once that we’ve observed, when astronomers spotted interstellar asteroid ‘Oumuamua in October.
Now a collective of astronomers and engineers wants to seize the opportunity to study ‘Oumuamua – by chasing after it with a rocket. They’ve named the initiative Project Lyra.
It’s a bit of a crazy idea, but ‘Oumuamua could be utterly worth it. Even its interstellar origin notwithstanding, it’s unlike any asteroid humans have observed before.
The cigar-shaped asteroid is up to 10 times longer than it is wide, a shape never before seen in an asteroid in the Solar System. It’s rocky, and possibly rich in metals, and reddened by cosmic irradiation.
It came from the direction of the star Vega in the Lyra constellation, at a breakneck speed of 95,000 kilometres per hour (59,000 miles per hour).
At first it was thought to have come from Vega’s orbit – but it would have taken 300,000 years to get here at that speed, and 300,000 years ago Vega was in a different position in the sky.
This means ‘Oumuamua could have missed Vega altogether, and been travelling through space, all alone, for hundreds of millions of years.
Researching it could tell us more about the solar system in which it formed, as well as something about extrasolar asteroids, which may be entering our Solar System more frequently than we thought.
This is why a volunteer collective of scientists and engineers called the Initiative for Interstellar Studies wants to send a rocket to check it out. But it would be a lot more complicated than landing a probe on Comet 67P/Churyumov-Gerasimenko – and that was a very complicated business.
By far the biggest obstacle to getting to ‘Oumuamua is catching up to it. Comet 67P orbits the Sun, so it’s not going anywhere – but ‘Oumuamua is already racing on its way out of our Solar System at blistering speed.
As it slingshotted past the Sun, it picked up velocity, and as of 20 November, its speed was 138,000 kilometres per hour (85,700 miles per hour, or 38.3 kilometres per second). It’s expected to pass Jupiter’s orbit in May 2018.
It took Rosetta 10 years to reach Comet 67P 510 million kilometres (317 million miles) from Earth. It took NASA probe Juno 5 years to reach Jupiter’s orbit, 588 million kilometres (365 million miles) away.
Even Pluto probe New Horizons, which broke the record for launch velocity from Earth, and Voyager I, the fastest human-made object to leave the Solar System, are both less than half as fast as ‘Oumuamua’s current speed.
But this technical challenge, the Initiative says, is worthwhile in and of itself.
“Besides the scientific interest of getting data back from the object, the challenge to reach the object could stretch the current technological envelope of space exploration,” the researchers wrote in their paper.
“Hence, Project Lyra is not only interesting from a scientific point of view but also in terms of the technological challenge it presents.”
Amongst key elements are travel time, spacecraft velocity, characteristic energy, and the velocity of the asteroid. The researchers have modelled, with launch dates between 5 and 30 years from now, how much velocity such a probe would need to attain in order to catch up to ‘Oumuamua.
There’s one key problem, though: the calculations were based on ‘Oumuamua’s incoming speed of 95,000 kilometres per hour. It will gradually return to that speed, but not for a few years.
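For a rough sense of the problem, the back-of-envelope Python sketch below treats both the asteroid and a probe as coasting in straight lines at constant speed, using the 95,000 km/h asymptotic figure quoted above. It ignores the Sun's gravity and real trajectory design entirely, so it only shows how quickly the required average speed grows with launch delay; this is my simplification, not Project Lyra's analysis.

    # If the asteroid has a head start of `delay` years and the probe is allowed
    # `flight` years of travel, the probe's required average speed is roughly
    #   v_probe = v_asteroid * (delay + flight) / flight
    V_ASTEROID = 95_000 / 3600   # km/s (about 26.4 km/s)

    for delay in (5, 10, 20):        # years between the flyby and launch
        for flight in (10, 20):      # years of flight time allowed
            v_probe = V_ASTEROID * (delay + flight) / flight
            print(f"launch +{delay:2d} yr, fly {flight} yr -> ~{v_probe:.0f} km/s average")

Compare those numbers with New Horizons' launch speed of roughly 16 km/s, mentioned above as the record, and the scale of the challenge is clear.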
Perhaps with propulsion technologies currently under development, such as solar sails, higher spacecraft speeds may become possible within a timeframe that would allow a probe to catch the speeding asteroid.
In the meantime, though, we can always improve detection systems so that we can try and catch the next one.
“An important result of our analysis is that the value of a laser beaming infrastructure from the Breakthrough Initiatives’ Project Starshot would be the flexibility to react quickly to future unexpected events, such as sending a swarm of probes to the next object like 1I/’Oumuamua,” the researchers wrote in their paper.
“With such an infrastructure in place today, intercept missions could have reached 1I/’Oumuamua within a year.”
You can read the paper in full from the pre-print resource arXiv. |
Space exploration is EXTREMELY EXPENSIVE. We live in the deep valley of Earth’s gravitational well. Very large rockets are required to put small payloads into space. Launch mass must be reduced to create a sustainable future in space. A large fraction of spacecraft mass is propellant for in-space propulsion. Large reductions in launch mass will come from production of in-space rocket propellant from in-space water. Vast quantities of water are present at the lunar poles, on Mars, comets, and some asteroids. Using the in-space resource, solar energy, in-space water can be split into hydrogen and oxygen for propellant. Molecular water can even be used for the reaction mass ejecta with ion engines for missions to Mars and beyond. Returning from the surface of Mars will REQUIRE in-situ production of propellant. A manned Mars mission might need ~200 tons of propellant. With Earth launch costs of ~$10,000 per pound, space produced propellant could save billions of dollars that could be used to bootstrap the development of water extraction and rocket propellant production. This would go far to help develop the “cis-Lunar Space” architecture, ”The Transcontinental Railroad” of the 21st century.
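The "billions of dollars" figure follows directly from the numbers quoted above; a short Python check, assuming metric tons:

    propellant_tons = 200          # propellant for a manned Mars mission, from the figure above
    cost_per_lb = 10_000           # US dollars per pound launched from Earth, from the figure above
    LB_PER_METRIC_TON = 2204.62

    cost = propellant_tons * LB_PER_METRIC_TON * cost_per_lb
    print(f"~${cost / 1e9:.1f} billion to launch that propellant from Earth")   # roughly $4.4 billion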
We have been developing methods for microwave heating and water extraction for several years. Microwaves will penetrate the low thermal conductivity permafrost regolith to sublime subsurface water ice with subsequent recondensation of the water in an external cold trap. This simple vapor transport process could eliminate the need to excavate the soil and reduce the complexity of surface operations. But most importantly, it could greatly reduce the mass of mining equipment to be transported to the surface of the moon and to other planetary bodies.
Microwave extraction laboratory experiments and numerical simulations over the past 7 years demonstrate the utility of these innovative processes. FEM Multiphysics numerical analysis is being used to model laboratory experiments as well as to simulate possible space experiment scenarios of microwave heating of lunar, Martian, and asteroidal regolith. Different scientific experiments and mining scenarios have been simulated for different frequencies, power, heating times, water concentrations, and for regolith with different dielectric properties. Numerical simulations of energy beamed at the surface as well as delivery of energy down bore holes illustrate possible ways to determine spatial water concentration and subsequent mining operations. Simulations at high frequencies and low power demonstrate possible volatiles science experiments with decomposition of compounds at high temperatures to release chemically bound volatiles in asteroids. Ongoing simulation of water sublimation and vapor transport through regolith will permit the estimation of extraction engineering efficiency metrics. Future simulations of the different microwave processes will permit the design of space experiments, recommendations of potential spacecraft hardware requirements, and optimization of water extraction equipment and operations.
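As a much-simplified illustration of the physics being modelled, here is a one-dimensional toy in Python. It is nothing like the FEM multiphysics simulations described above, and every material property, the penetration depth, and the absorbed power are assumed values.

    import math

    ALPHA = 1.0e-8        # thermal diffusivity of regolith, m^2/s (assumed)
    RHO_C = 1.4e6         # volumetric heat capacity, J/(m^3 K) (assumed)
    SKIN_DEPTH = 0.10     # microwave penetration depth, m (assumed)
    SURFACE_FLUX = 500.0  # absorbed microwave power, W/m^2 (assumed)

    dx, dt = 0.01, 60.0                        # grid spacing (m) and time step (s)
    depths = [i * dx for i in range(51)]       # 0 to 0.5 m
    T = [150.0] * len(depths)                  # initial temperature, K (assumed)

    # Volumetric heating, attenuated exponentially with depth
    q = [SURFACE_FLUX / SKIN_DEPTH * math.exp(-z / SKIN_DEPTH) for z in depths]

    for _ in range(int(4 * 3600 / dt)):        # four hours of heating
        new_T = T[:]
        for i in range(1, len(T) - 1):
            conduction = ALPHA * (T[i + 1] - 2 * T[i] + T[i - 1]) / dx ** 2
            new_T[i] = T[i] + dt * (conduction + q[i] / RHO_C)
        new_T[0] = new_T[1]                    # crude no-flux (insulated) surface boundary
        T = new_T

    peak = max(T)
    print(f"peak temperature after 4 h: {peak:.0f} K at {depths[T.index(peak)] * 100:.0f} cm depth")

Even this toy version captures the qualitative point: microwave power is absorbed below the surface rather than only at it, which is what allows subsurface ice to be warmed without excavation.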
In order to create this new future in space, we need a paradigm shift to utilization of in-space resources, especially water, to leverage the sustainable and growing presence in space for scientific exploration, planetary protection, space debris mitigation, satellite servicing, and even space tourism to help “Create The Future” in Space.
- 2013 Category Winner
- 2013 Top 100 Entries
ABOUT THE ENTRANT
Name: Edwin Ethridge
Type of entry: individual
Software used for this entry:
Patent status: patented |
Ecosystems: Animal Survival
Learn how Earth’s natural processes and people’s activities affect the ecosystems and animal world. Find out how well different species adapt to new conditions and fight for survival. Learn about endangered animals and projects for their protection. Includes links to eThemes resources about endangered species, animal survival adaptation, habitats and their threats, pollution, and more.
Biomes of the World
Choose thumbnails to explore various biomes and ecosystems. While reading about them, pay attention to links at the left that warn about present dangers of ecosystems, animal species, and how people can help them to survive.
Science Netlinks: Science Updates: Litter Life
Read this article and learn how some plants and animals can migrate across the world using plastic bottles. Includes an in-class activity.
NOVA Online: Crocodiles: Outlasting the Dinosaurs
Find out how crocodiles survived for 200 million years continually adapting to new conditions.
PBS: Nature: Yellowstone Otters: Life of the Otter
Learn about river otters and their fight for survival. NOTE: This site includes user comments and links to social networking sites.
Read about one of the smallest birds on Earth - the kiwi. Learn about the different varieties of this bird and why this bird is in danger.
National Geographic: Sea Turtles in Trouble
Learn what kind of danger sea turtles face. NOTE: The site includes ads.
Learn more about the African buffalo that is currently in danger. NOTE: This site leads to YouTube and other social networking sites.
Science NetLinks: Animal Adaptations
This lesson plan helps students understand animals needs in order to survive. Includes a printable activity sheet.
eThemes Resource: Animal Adaptations: Physical and Behavioral
These sites are about the behaviors and physical traits that enable animals to survive in their environments. Topics include camouflage, mimicry, and natural selection.
eThemes Resource: Animals: Endangered Species
These sites have descriptions of endangered species and include explanations for why certain animals are in peril. There are graphic organizers and Venn diagrams. Includes links to many eThemes Resources on habitats and specific endangered animals, such as lemurs, cheetahs, chimpanzees, tigers, and more.
eThemes Resource: Habitat: An Overview
This resource covers the different ecosystems on our planet. Find out how these biomes differ, what characteristics make them unique, and where they are located. Includes a map of the different habitats found in Missouri and information about habitats in Utah. There are links to eThemes Resources on specific habitats and animal food chains, plus an eMINTS WebQuest.
eThemes Resource: Habitat: Forests: Threats
Learn about the ecological problems facing many forests. The issues include acid rain, air pollution, logging, tree diseases, and more. There is a link to an eThemes Resource on forests.
eThemes Resource: Natural Selection, Adaptation, and Biodiversity
These sites are about natural selection, adaptation, and human impacts on biodiversity. Includes interactive games and videos. Also includes links to the more general resources on animal adaptations and ecosystems.
eThemes Resource: Pollution
These sites are about different forms of pollution and how it affects our environment. Topics include water quality, acid rain, smog, oil spills, and more. There are online games, classroom activities, and maps. Includes links to eThemes Resource on the greenhouse effect, recycling, and Earth Day.
|
The nutrients that a plant needs to grow naturally go back into the soil for the next plant to use. Soil productivity depends on soil organic matter: material that decomposes into the soil, such as leaves, twigs, and animal and plant remains. This organic matter loosens the soil, opening pores that hold air for plant roots and underground animals to breathe.
Soil degradation is something that absolutely happens due to human activity. Most importantly, it reduces the soil's ability to support those that depend on it. Soil degradation affects crop production, making it especially difficult to grow crops. Salinization occurs when soil is irrigated repeatedly: salt from the irrigation water is left behind in the soil, and it can make it difficult for plants to soak up the water they need to grow.
"K-12 Soil Science Teacher Resources." Chemistry. Web. 10 Dec. 2014. <http://www.soils4teachers.org/chemistry>.
"Soil Chemical Properties." Soil Chemical Properties. Web. 10 Dec. 2014. <http://soils.tfrec.wsu.edu/mg/chemical.htm>.
Brown, Catrina, and Mike Ford. "Environmental Chemistry: Option E." Chemistry: Higher Level : Developed Specifically for Thr IB Diploma. Oxford: Heinemann :, 2009. Print. |
Obesity is on the rise in the U.S., and it shows no sign of slowing down. According to the Centers for Disease Control and Prevention, 39.8% of adults living in the U.S. were classified as obese in 2015 – 2016. That’s just over 93 million people, all with a Body Mass Index (BMI) of 30 or above. While nurses aren’t specifically responsible for the weight of their patients, they are healthcare providers and should do everything they can to improve the health of their patients.
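For reference, BMI is simply weight in kilograms divided by the square of height in metres; a quick Python sketch with arbitrary example numbers:

    def bmi(weight_kg: float, height_m: float) -> float:
        """Body Mass Index: weight (kg) divided by height (m) squared."""
        return weight_kg / height_m ** 2

    print(round(bmi(95, 1.75), 1))   # 31.0 -> at or above 30, so classified as obese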
How Does Obesity Affect a Person’s Health?
Obesity can be brought on by a range of factors, including leading an inactive lifestyle, certain medications that can boost appetite and promote fatigue, poor diet, a lack of education, an environment that promotes overeating, and living below the poverty line. If a person is obese, they will be at a greater risk for many serious diseases and conditions, compared to those with a healthy weight. Some of these conditions include:
- High blood pressure
- Type 2 diabetes
- Heart attack
- Heart disease
- Gallbladder disease
- Various forms of cancer
- Increased body pain
- Breathing problems when sleeping
- Excess body and joint pain
How Can Nurses Help?
A nurse can help their patients overcome obesity by talking about the many health complications associated with obesity as listed above. It’s important for patients to be aware of the risks that come with leading an unhealthy lifestyle. When it comes to combating obesity, every nurse should already be familiar with how patients can reduce their body weight, including eating less, eating healthier, and getting more exercise. But instead of simply reminding patients to watch what they eat, nurses can go one step further by coming up with practical ways for patients to be more active and to make better decisions about their diet.
Incorporating More Physical Activity in the Patient’s Life
Many people will say that they don’t have enough time or energy to exercise but making time for physical activity doesn’t have to be a chore. Nurses can talk to their patients about their daily routine and look for small moments where the patient can be more active. This might include small steps towards leading a more active lifestyle such as watching TV while standing up or walking in place, using a resistance band around the house, or encouraging the patient to participate in local fitness groups.
Helping Patients Recognize and Accept the Will to Change
Pressuring a patient to lose weight often leads to disappointment and frustration as the patient fails to follow through. That’s why nurses can use a different approach that focuses on changing the patient’s mental state. If a person is resistant to change, they will not lose weight on their own. Nurses can remind patients of the benefits of leading a healthy lifestyle, including living a long life, spending more time with their loved ones, and enjoying some of their favorite activities, as a way of encouraging them to change their ways. They can direct them to support groups, weight loss programs, and other community resources that will connect them with people dealing with the same issues.
Giving Patients Access to Nutrition Information
Nurses might think that their patients understand the health benefits of eating an apple as opposed to eating a bag of chips, but some patients do not have a strong grasp of nutrition. Nurses can give their patients more information on nutrition, including how many calories a person should be eating, what kinds of food or nutrients a person should be eating throughout the day, and how to avoid unhealthy items. Nurses can also instill this information in parents with obese children to encourage healthy eating at a young age.
Nurses should feel empowered to help their patients overcome the challenges of obesity. While some patients may not want to change their lifestyle, nurses can be inspiring role-models for their patients, encouraging them to make healthy decisions at every turn. |
A study of the human Y chromosome found that seven men with a rare Yorkshire surname carry a rare genetic signature found only in people of African origin. Researchers Turi King and Mark Jobling from Leicester University found that the men appear to have shared a common ancestor in the 18th Century, but the African DNA lineage they carry could have reached Britain centuries earlier.
This discovery was the result of genetic research that analyzed the relationship between the Y chromosome and surnames. The Y chromosome is normally found only in males and it is passed from father to son, relatively unchanged, just like a surname.
However, over time, the Y chromosome does accumulate small changes in its DNA sequence so scientists can then study the relationships between different male lineages. The scientists classified these differences into different groups, called haplogroups, which can indicate a person’s geographical ancestry.
King and Jobling then collaborated with a genealogist to determine how the men were related and to find where their Y haplogroup originated by placing the men into a family tree.
“He could only get them into two trees, one which dates back to 1788 and the other to 1789. He couldn’t go back any further. So it’s likely they join up in the early 18th Century,” said King.
The majority of the one million people who define themselves as “black” or “black British” trace their origins to immigration from the Caribbean or Africa from the middle of the 20th Century onwards.
Prior to the 20th Century, there have been various routes by which people of African ancestry might have reached Britain. For example, the Romans recruited from Africa and elsewhere for the garrison that guarded Hadrian’s Wall.
The slave trade was the other major route.
“Some of the Africans who arrived in Britain through the slave trade rose quite high up in society, and we know they married with the rest of the population,” said King. “It could be either of these two routes.” Even if the two family trees link up in the 18th Century, haplogroup A1 could have reached Britain long before that.
“Human migration history is clearly very complex, particularly for an island nation such as ours,” said Jobling, who was co-author of the research, “and this study further debunks the idea that there are simple and distinct populations or ‘races’.”
There are other interesting results. For example, when scientists analysed the DNA of the third US president, Thomas Jefferson, they found that his Y chromosome belonged to a haplogroup known as K2. Even though Jefferson’s father claimed Welsh ancestry, his Y haplogroup is rare in Europe and has not yet been reported in Britain.
In fact, genetic studies show that Thomas Jefferson’s K2 haplogroup ultimately came from north-east Africa or the Middle East, the areas where it is most commonly found today.
Details of the study appear in the European Journal of Human Genetics. |
Answers and Explanations
Mathematical objects do not exist in the same sense that a physical object exists; nobody has ever bumped their elbow on a number, for instance.
Instead, mathematical objects are abstract concepts (often abstracted from a real world situation, by isolating just the part of the situation that is relevant for a particular discussion).
When we ask whether or not a mathematical object exists, we must have in mind an appropriate context: a particular, precisely defined collection of concepts. Then we ask, "among these concepts, is there one which matches the object we are looking for?" If so, we say that the object exists; if not, it doesn't exist.
For example, the natural numbers (that is, the numbers 1, 2, 3, 4, and so on) are the concepts obtained by abstracting the property of "size" from collections of objects. The number 2 is the abstract concept that expresses what the following collections of objects have in common: the eyes on a person's face, the occurrences of the letter "b" in the word "bib", the wheels on a bicycle, and so on.
If we were to ask "does there exist a natural number between 1 and 2", we mean, "among the collection of natural numbers, is there one (say x) such that 1 < x < 2?" The answer to this question is "No". You cannot, for example, go to the beach and pick up more than one pebble but fewer than two pebbles.
If we were to ask "does there exist a number between 1 and 2", the answer would depend on what we meant by the word number.
If we were using the word "number" to mean "natural number" (that is, measurement of the size of a set), then the answer would be "No; no such number exists."
However, there are other contexts in which the answer might be "Yes". For example, we might be in a context where number is meant to refer, not to a natural number, but to a rational number: that is, a fraction.
Rational numbers are something quite different from natural numbers: instead of being measurements of sizes of sets, they are ratios of the sizes of two sets. For example, the fraction 3/4 is expressing the ratio "3 to 4".
In this context, where number refers to a ratio, not to the size of a single set, and where "2" and "1" really mean the fractions "2/1" and "1/1" respectively, then the answer to our question is "yes": there does exist a number between 1 and 2, for instance the fraction 3/2.
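The contrast can even be sketched mechanically, here in Python, whose integers and Fraction type happen to mirror the two contexts discussed above (the particular numbers are just for illustration):

    from fractions import Fraction

    # Among the natural numbers, nothing lies strictly between 1 and 2.
    print([n for n in range(1, 3) if 1 < n < 2])        # []

    # Among the rationals, 3/2 does.
    x = Fraction(3, 2)
    print(Fraction(1, 1) < x < Fraction(2, 1))          # True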
This illustrates that, before one can say whether a concept exists or not, we have to be quite clear about the context in which the question is being asked.
If you are still puzzled by this, you might want to read more (in a discussion of whether or not "imaginary numbers" exist) about how there are many quite different meanings for the word "number", and how whether or not a concept exists can depend on the meaning you have in mind.
|
The five-pointed star stands for the North Star, a beacon for travelers in the Arctic. The white and yellow colours, which are known as “metals” in heraldry, are unofficially said to represent snow and the sunrise, respectively. The two stripes are separated by an inuksuk, an anthropoid figure formed of stacked rocks. The inuksuk is a traditional marker visible from great distances in the territory, where there are few trees or other landmarks. Several local organizations have designed badges and other symbols incorporating inuksuit (the plural of inuksuk) in various colours.
Nunavut is the traditional homeland of the Inuit people, who constitute a majority of the local population. For decades the Inuit had sought to achieve self-government within Canada, and Inuit artists and community leaders jointly developed the new territory’s flag and coat of arms with the assistance of the Canadian Heraldic Authority. In particular, Meeka Kilabuk of the Nunavut Implementation Commission and Robert Watt, the chief herald of Canada, worked to express authentic Inuit symbolism in a form acceptable to the tenets of heraldry. Queen Elizabeth II of Britain approved the two designs. The Canadian territory of Nunavut came into being on April 1, 1999, and on that day the flag was first raised. |
"For many years, the accuracy of census data on some minorities has been questioned because many respondents don't report being a member of one of the five official government racial categories: white, black or African-American, Asian, American Indian/Alaska Native and Pacific Islander," says Corey Dade. If respondents don't select a category, the Census Bureau assigns them a race based on their neighborhood demographics.
The accuracy of census data is important because the information is used in political decision-making such as enforcing civil rights laws, redrawing state legislative and local school districts, and reapportioning congressional seats. "The strong Latino growth found in the 2010 census guaranteed additional seats in Congress for eight states," says Dade, as an example. Revisions to census questions could improve data reliability, and officials are considering eliminating the Hispanic origin question, asking Asians to list their country of descent, and combining questions for multiracial people.
Latino leaders have voiced their opinion that eliminating the Hispanic origin question could create confusion, but the 2010 survey showed that the question already confused Latinos because many think of "Hispanic" as a race and not an ethnicity. "Broadly, the nation's demographic shifts underscore the fact that many people, particularly Latinos and immigrants, don't identify with the American concept of race," says Dade. He continues, "Even the terms 'Latino' and 'Hispanic' are met by many with ambivalence."
A 2011 survey by the Pew Hispanic Research Center found that only 24 percent of adults use those terms to describe their identity and prefer to identify themselves by their family's country of origin. The Bureau's research for the next census "is expanding our understanding of how people identify their race and Hispanic origin. It can change over time," said Karen Humes, assistant division chief for Special Population Statistics. |
This time-lapse sequence shows the moons Titan and Tethys orbiting Saturn when the planet's rings were tilted nearly edge-on toward Earth. This edge-on alignment happens once every 15 years. The last time this alignment occurred was in 1995 and 1996.
In the movie, the moons can easily be seen because the rings are so thin. Titan and Tethys follow the rings' thin line in their orbit around Saturn. But Titan's shadow is the first to make an appearance, moving across Saturn's disk. Then Titan appears. As Titan makes its trek across the disk, Tethys appears on the left from behind the planet. It disappears quickly off the screen as it makes its circular path around Saturn. These moons seem to move much faster than they actually do because several hours of viewing time were compressed to make this movie.
The movie also shows the bands of clouds that make up Saturn's atmosphere. This banded structure is similar to Jupiter's. A thick haze covers the clouds.
The 15-second movie is created from Hubble images taken over a 10½-hour span. The images were taken Aug. 6, 1995 with Hubble's Wide Field and Planetary Camera 2. |
Extinct & Rare Birds
Have you ever used the expression ‘as dead as a Dodo’? The Dodo is one of the most well known extinct bird species. Sadly, the Dodo is not the only bird that has been classified as extinct.
When we refer to an extinct bird we are referring to a bird species that is no longer in existence. Recent studies have determined that the main reasons for extinction are loss of habitat due to development by humans and harassment by humans or predation by exotic species. 42 species and 44 subspecies have become extinct within the last 280 years, most of which were island dwellers. The three extinct species below clearly illustrate the seriousness of this issue.
The Dodo was a large flightless bird living on the island of Mauritius. Dodos were killed by sailors, and their nests and young were destroyed by newly introduced cats, rats and pigs brought to the island by the settlers. Another extinct flightless bird was the Great Auk. Its population decreased due to hunting, with the last two being killed by collectors of rare specimens. The Passenger Pigeon was one of the most plentiful bird species in the world in the 19th century. The trees in which they nested were cut down to make way for farm land, decreasing their numbers. Additionally, a mass slaughter was conducted yearly. As Passenger Pigeons required large groups to breed and thrive in, this led to their extinction.
Unfortunately, nothing can be done for these and many other extinct species. However, we can make an effort to preserve those species which are considered to be rare and endangered. The World Conservation Union (IUCN) classifies 168 species as having a critical conservation status (50% probability of becoming extinct within 5 years) and 235 species as endangered (20% probability of extinction within 20 years). Many people consult rare bird alerts to find out the latest status on these bird species as well as when/where they have been spotted. Lists of rare birds are released and updated regularly by the IUCN.
Many programs have been set up to curb this trend towards extinction. Why not find out what rare birds are in your area and how you can help them survive. |
The prehistory of Georgia is the period between the first human habitation of the territory of the modern-day nation of Georgia and the time when Assyrian and Urartian, and more firmly, the Classical accounts, brought the proto-Georgian tribes into the scope of recorded history.
Humans have been living in Georgia for an extremely long time, as attested by the discoveries, in 1999 and 2002, of two Homo erectus skulls (H. e. georgicus) at Dmanisi in southern Georgia. The archaeological layer in which the human remains, hundreds of stone tools and numerous animal bones were unearthed is dated approximately 1.6-1.8 million years ago (since the underlying basalt lava bed yielded an age of approximately 1.8 million years). The site yields the earliest unequivocal evidence for presence of early humans outside the African continent.
Later Lower Paleolithic Acheulian sites have been discovered in the highlands of Georgia, particularly in the caves of Kudaro (1600 m above sea level), and Tsona (2100 m). Acheulian open-air sites and find-spots are also known in other regions of Georgia, for example at the Javakheti Plateau where Acheulian handaxes were found at 2400 m above the sea level.
The first uninterrupted primitive settlement on the Georgian territory dates back to the Middle Paleolithic era, more than 200,000 years ago. Sites of this period have been found in Shida Kartli, Imeretia, Abkhazia and other areas.
Buffered by the Caucasus Mountains, and benefiting from the ameliorating effects of the Black Sea, the region appears to have served as a biogeographical refugium throughout the Pleistocene. These geographic features spared the Southern Caucasus from the severe climatic oscillations and allowed humans to prosper throughout much of the region for millennia.
Upper Paleolithic remains have been investigated in Devis Khvreli, Sakazhia, Sagvarjile, Dzudzuana, Gvarjilas Klde and other cave sites. A cave at Dzudzuana has yielded the earliest known dyed flax fibers that date back to 36,000 BP. At that time, the eastern area of the South Caucasus appears to have been sparsely populated in contrast to the valleys of the Rioni River and Kvirila River in western Georgia. The Paleolithic ended some 10,000-12,000 years ago to be succeeded by the Mesolithic culture. It was during this period that the geography and landscapes of the Caucasus finally took the form they have today.
Signs of Neolithic culture, and the transition from foraging and hunting to agriculture and stockraising, are found in Georgia from at least 5000 BC. The early Neolithic sites are chiefly found in western Georgia. These are Khutsubani, Anaseuli, Kistriki, Kobuleti, Tetramitsa, Apiancha, Makhvilauri, Kotias Klde, Paluri and others. In the 5th millennium BC, the Kura (Mtkvari) basin also became stably populated, and settlements such as those at Tsopi, Aruchlo, and Sadakhlo along the Kura in eastern Georgia are distinguished by a long lasting cultural tradition, distinctive architecture, and considerable skill in stoneworking. Most of these sites relate to the flourishing late Neolithic/Eneolithic archaeological complex known as the Shulaveri-Shomu culture. Radiocarbon dating at Shulaveri sites indicates that the earliest settlements there date from the late sixth − early fifth millennium BC.
In the highlands of eastern Anatolia and South Caucasus, the right combination of domesticable animals and sowable grains and legumes made possible the earliest agriculture. In this sense, the region can justly be considered one of the "cradles of civilization".
The entire region is surmised to have been, in the period beginning in the last quarter of the 4th millennium BC, inhabited by people who were possibly ethnically related and of Hurrian stock. The ethnic and cultural unity of these 2,000 years is characterized by some scholars as Chalcolithic or Eneolithic.
Bronze Age
From c. 3400 BC to 2000 BC, the region saw the development of the Kura-Araxes or Early Transcaucasian culture centered on the basins of Kura and Araxes. During this era, economic stability based on cattle and sheep raising and noticeable cultural development was achieved. The local chieftains appear to have been men of wealth and power. Their burial mounds have yielded finely wrought vessels in gold and silver; a few are engraved with ritual scenes suggesting the Middle Eastern cult influence. This vast and flourishing culture was in contact with the more advanced civilization of Akkadian Mesopotamia, but went into gradual decline and stagnated c. 2300 BC, being eventually broken up into a number of regional cultures. One of the earliest of these successor cultures is the Bedeni culture in eastern Georgia.
At the end of the 3rd millennium BC, there is evidence of considerable economic development and increased commerce among the tribes. In western Georgia, a unique culture known as Colchian developed between 1800 and 700 BC, and in eastern Georgia the kurgan (tumulus) culture of Trialeti reached its zenith around 1500 BC.
Iron Age and Classical Antiquity
By the last centuries of the 2nd millennium BC, ironworking had made its appearance in the South Caucasus, and the true Iron Age began with the introduction of tools and weapons on a large scale and of superior quality to those hitherto made of copper and bronze, a change which in most of the Near East may not have come before the tenth or ninth centuries BC.
During this period, as linguists have estimated, the ethnic and linguistic unity of the Proto-Kartvelians finally broke up into several branches that now form the Kartvelian family. The first to break away was the Svan language in northwest Georgia, in about the 19th century BC, and by the 8th century BC, Zan, the basis of Mingrelian and Laz, had become a distinct language. On the basis of language, it has been established that the earliest Kartvelian ethnos were made up of four principally related tribes: the Georgians ("Karts"), the Zans (Megrelo-Laz, Colchians), and the Svans – which would eventually form the basis of the modern Kartvelian-speaking groups.
|
Blood disorders include a number of different conditions that occur when your blood does not clot properly. Normally, when you start to bleed, your body forms a clot to stop the bleeding. The clot is similar to a plug. In patients with bleeding disorders, the clotting function does not work properly.
Common blood conditions include:
- Anemia - Your body needs red blood cells to carry oxygen to the rest of your body. When you have anemia, you don't have enough red blood cells.
- Blood clots - Though most of the time blood clots are helpful because they stop bleeding, they can also form when they aren't needed and can sometimes cause a stroke or heart attack.
- Hemophilia - Hemophilia is when blood does not clot properly, which makes it hard for bleeding to stop.
Additionally, common blood cancers include:
- Leukemia - This is cancer of the blood cells.
- Lymphoma - Hodgkin's and non-Hodgkin's lymphomas are cancers that begin in the cells of the immune system.
- Myeloma - This is a rare form of cancer of the plasma cells found in bone marrow.
Blood Disorders: What You Need to Know
Preventing blood disorders depends on the specific blood disorder you have. A hematologist (doctor specializing in blood disorders) can work with you on prevention techniques.
The University of Vermont Medical Center doctors use a collaborative approach to treating blood disorders. Your team may include a number of different specialists working together to manage your care.
We use the most advanced medical technology available for managing blood disorders, including sophisticated diagnostic techniques and treatment options.
The UVM Medical Center doctors tailor a course of treatment specifically for you. Your treatment will depend on a number of factors, including the type and severity of the blood disorder.
Experience, Trusted Expertise
At The UVM Medical Center, our hematologists and other specialists have years of experience diagnosing and treating a wide range of blood disorders. You can feel confident knowing you have placed your care in experienced and skilled hands.
What are blood disorders?
In order for blood to clot, you need platelets and proteins called clotting factors. If you have a blood disorder, then either:
- You do not have enough platelets or clotting factors.
- Your platelets or clotting factors are not working properly.
A blood disorder can develop for a number of reasons:
- You have another disease, such as liver disease.
- You have an inherited blood disease, such as hemophilia or von Willebrand disease.
- The blood disorder is a side effect of a medication you are taking.
There are no specific risk factors for blood disorders. If you have a family history of blood disorders, then you should talk to your doctor about early detection and treatment.
Diagnosis and Treatment: Blood Disorders
Your specific course of treatment will depend on the underlying cause of the blood disorder. Our doctors use the most advanced treatments and therapies, including factor replacement and plasma transfusion.
Learn more about blood disorders diagnosis and treatment. |
ANSI C is the standard published by the American National Standards Institute (ANSI) for the C programming language. Software developers writing in C are encouraged to conform to the requirements in the document, as it encourages easily portable code.
The first standard for C was published by ANSI. Although this document was subsequently adopted by International Organization for Standardization (ISO) and subsequent revisions published by ISO have been adopted by ANSI, the name ANSI C (rather than ISO C) is still more widely used. While some software developers use the term ISO C, others are standards body–neutral and use Standard C.
In 1983, the American National Standards Institute formed a committee, X3J11, to establish a standard specification of C. After a long and arduous process, the standard was completed in 1989 and ratified as ANSI X3.159-1989 "Programming Language C." This version of the language is often referred to as "ANSI C", or sometimes "C89" (to distinguish it from C99).
In 1990, the ANSI C standard (with a few minor modifications) was adopted by the International Organization for Standardization as ISO/IEC 9899:1990. This version is sometimes called C90. Therefore, the terms "C89" and "C90" refer to essentially the same language.
In March 2000, ANSI adopted the ISO/IEC 9899:1999 standard. This standard is commonly referred to as C99, and it is the current standard for the C programming language.
Support from major compilers
ANSI C is now supported by almost all the widely used compilers. Most of the C code being written nowadays is based on ANSI C. Any program written only in standard C and without any hardware dependent assumptions is virtually guaranteed to compile correctly on any platform with a conforming C implementation. Without such precautions, most programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to the reliance on compiler- or platform-specific attributes such as the exact size of certain data types and byte endianness.
To mitigate the differences between K&R C and the ANSI C standard, the __STDC__ ("standard C") macro can be used to split code into ANSI and K&R sections:

#if __STDC__
extern int getopt(int, char * const *, const char *);
#else
extern int getopt();
#endif

It is better to use "#if __STDC__" as above rather than "#ifdef __STDC__" because some implementations may set __STDC__ to zero to indicate non-ANSI compliance. "#if" will treat any identifier that could not be replaced by a macro as zero (0). Thus even if the macro __STDC__ is not defined, to signify non-ANSI compliance, "#if" will work as shown.

In the above example, a prototype is used in the function declaration for ANSI-conforming implementations, while an obsolescent non-prototype declaration is used otherwise. Those are still ANSI-compliant as of C99 and C90, but their use is discouraged.
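As a self-contained illustration (not part of the original article), the following minimal program sketches how such a conformance check might be combined with the related __STDC_VERSION__ macro; the getopt declarations mirror the example above and are declared only, never called:

    #include <stdio.h>

    /* Use a prototype only when the implementation claims conformance;
       fall back to an old-style declaration otherwise. */
    #if __STDC__
    extern int getopt(int, char * const *, const char *);
    #else
    extern int getopt();
    #endif

    int main(void)
    {
    #if __STDC__
        printf("__STDC__ set: conforming implementation claimed.\n");
    #ifdef __STDC_VERSION__
        printf("__STDC_VERSION__ = %ld\n", (long)__STDC_VERSION__);
    #endif
    #else
        printf("Pre-standard or non-conforming mode.\n");
    #endif
        return 0;
    }

Compiled with a C99 compiler, this prints a __STDC_VERSION__ value of 199901; with a strict C90 compiler, only the first message appears.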
Flash! Bang! Thunderstorms are impressive displays of the power of nature. But how, you may wonder, do lightning and thunder form?
First comes the lightning, an intense electrical discharge. While not completely understood, it is believed to form as a result of the separation of charges within a cumulonimbus cloud. One theory of how this happens involves the collision of particles within these towering clouds, including hailstones, super-cooled liquid water droplets, and ice crystals. When they mix and collide, according to NOAA, “electrons are sheared off the ascending particles and collect on the descending particles.” This results in a cloud with a negatively charged base and a positively charged top.
As the atmosphere is a good insulator, generally inhibiting the flow of electricity, the strength of this electrical field has to build up substantially before lightning can occur. Most discharges, about 75%, occur across the electrical field within the storm cloud itself. This is known as intra-cloud lightning.
Another electrical field can also develop below the cloud. Since the cloud base is negatively charged, it induces a positive charge on the ground below, especially in tall objects such as buildings and trees. When the charge separation becomes large enough, a negatively charged stepped leader (an invisible channel of ionized air) moves down from the base of the cloud. When it meets a channel of positive charges reaching up from the ground, known as a streamer, a visible flash of lightning can be seen. This is called cloud-to-ground lightning.
Lightning can be as hot as 54,000°F, a temperature that is five times hotter than the surface of the sun. When it occurs, it heats the air around it in a fraction of a second, creating an acoustic shock wave. This is thunder. A nearby lightning strike will produce thunder that sounds like a sharp crack. Thunder from a distant storm will sound more like a continuous rumble.
While thunderstorms can be spectacular events to watch, they are also very dangerous. So, as the National Weather Service recommends, “When thunder roars, go indoors.” |
How Acid Rain is Measured and Monitored in the U.S.
Students explain how acid rain is measured and discover how acid rain is monitored in the United States. They compare the locally measured pH of rain with that of the Great Smoky Mountains, and they perform pH tests on rainfall they collect.
Hands On Activities and Projects in Algebra 1, Algebra 2, Geometry, & Personal Finance
Your learners will enjoy this conglomerate of hands-on activities and projects that can be used in algebra one and two, geometry, and personal finance. Rich in math standards, these projects reach into learners' lives to motivate and...
9th - 12th Math CCSS: Adaptable
Environmental Impacts and Energy Consumption
As scientists prove environmental impacts of using coal as an energy resource, do you think Santa regrets giving out so much of it? Through a demonstration of acid rain, pupils learn what makes it, where it occurs, the impact of it, and...
9th - 12th Science CCSS: Adaptable
The Effect of Acid Rain on Limestone
Pupils investigate the pH of rain water in this earth science instructional activity. They collect rain water from their area and explore the pH when limestone is added, then they will use the data collected to conjecture as to the...
6th - 9th Science CCSS: Designed
Understanding Scientific and Social Implications: Acid Rain
Young scholars examine the social and scientific implications of acid rain. For this acid rain lesson plan, students read an article about acid rain, the causes of acid rain, the effects of acid rain on the environment and the proposals...
9th - 12th Science |
Using Satellite Images to Understand Earth's Atmosphere
In this Earth Exploration Toolbook chapter, students select, explore, and analyze satellite imagery in the context of a case study of the origins of atmospheric carbon monoxide and aerosols, tiny solid airborne particles such as smoke from forest fires and dust from desert wind storms. Using NASA Earth Observatory (NEO) satellite data and the NEO Image Composite Explorer (ICE) tool, they animate a year of monthly aerosol images with the software tool ImageJ, compare that animation to one created for monthly carbon monoxide images, and investigate seasonal and geographic patterns and variations in the concentration of CO and aerosols in the atmosphere.
10 Facts about Antibodies
If you want to know about the tiny proteins the body uses to fight off invading germs, check out these facts about antibodies. Antibodies are produced by B lymphocytes, a type of white blood cell, while phagocytes are white blood cells that engulf invaders directly. Find out more facts about antibodies by reading the post below:
Facts about Antibodies 1: the B-cell in the blood
You can find many kinds of B-cells in the blood. Each B-cell will produce the antibody used to defend the body from a certain germ.
Facts about Antibodies 2: when the body is attacked by a germ
When our body is attacked by a germ, the B-cells will release the proper antibody to fight against it.
Facts about Antibodies 3: the foreign protein
The foreign protein is called the antigen. When the antigen is detected, the immune system will recognize it as the invader.
Facts about Antibodies 4: hayfever
Hay fever is a type of allergy caused by pollen from plants. The condition occurs because the immune system identifies the pollen as an invader, so the harmless pollen grains are attacked by antibodies.
Facts about Antibodies 5: the antigens
The antigens can be found inside the viruses, bacteria and other microorganisms. When the body detects antigens, the B-cells will automatically produce the antibodies.
Facts about Antibodies 6: the innate immunity
Human beings have innate immunity, meaning that from birth the body is equipped with defenses that can fight germs it has never encountered before.
Facts about Antibodies 7: acquired immunity
Acquired immunity develops when the body has to fight a germ for which it has no antibody. The body then creates an antibody for that germ, and when the germ invades again in the future, memory cells activate this antibody once more.
Facts about Antibodies 8: chickenpox
Because of acquired immunity, most people suffer from chickenpox only once in their lifetime.
Facts about Antibodies 9: allergies
Allergies are a common condition around the world. They can be triggered by the weather, dust, animal dander, and plant pollen. Allergies often occur when the immune system produces antibodies against these otherwise harmless substances.
Facts about Antibodies 10: the immune system
The immune system protects the body from infection and disease.
Across the globe, the presence of HIV is widespread. At the end of 2004, the United Nations HIV/AIDS program estimated that 2.5 million children under the age of 15 were affected worldwide. Additionally, approximately 500,000 children in that same age group died from disease-related causes in that year alone. In the United States, 90 percent of infected children acquire the disease through birth.
The effects of the disease on children differ greatly from those in adults, according to a report that appears in the July/August 2006 issue of General Dentistry, the AGD’s clinical, peer-reviewed journal. Type, severity and progression are all factors that differ, depending on the age at which one contracts the disease.
“Children do not demonstrate HIV-specific symptoms as adults do,” says Kishore Shetty, DDS, lead author of the study. “Their bodies will most likely display an infection or weakness instead of common HIV signs.”
The place where this most commonly occurs is in the mouth. There are many variations of the way lesions appear, but a few common types are: candidiasis, or “thrush,” a fungal yeast infection; salivary gland enlargement; herpes simplex virus; inflammation of the gingiva; and canker sores.
“Orofacial manifestations of HIV are common in pediatric HIV infection,” Shetty adds. “It is important to be aware of these signs, as they may serve as both a marker of infection and predictor of HIV progressing to AIDS.”
What to do:
• Visit your general dentist. They handle the majority of dental emergencies.
• If you fear that your child or teen might be at risk, have them tested as soon as possible. The sooner a child is diagnosed, the sooner treatment can begin.
• Communicate with your dentist if the child has HIV. It will alert them to look closely for signs of disease, plus allow them to provide the best possible treatment.
Dutch Empire/Indonesian Independence
Two days after the Japanese surrender in August 1945, Sukarno and fellow nationalist leader, Mohammad Hatta, declared Indonesian independence. The Netherlands, only very recently freed from German occupation itself, initially lacked the means to respond, allowing Republican forces to establish de facto control over parts of the huge archipelago, particularly in Java and Sumatra. On the other hand, in the less densely populated outer islands, no effective control was established by either party, leading at times to chaotic conditions.
British Military Action
Initially the United Kingdom sent in troops to take over from the Japanese and soon found itself in conflict with the fledgling government. British forces brought in a small Dutch military contingent which it termed the Netherlands Indies Civil Administration (NICA). When a member of the NICA raised a Dutch flag on a hotel in the country's second-largest city, Surabaya, Indonesian nationalists overran the Japanese proxies guarding the hotel and tore the blue stripe off the flag, forming the red-and-white Indonesian flag.
On November 10, 1945, Surabaya was attacked by British forces, leading to a bloody street-to-street battle. The battle for Surabaya was the bloodiest single engagement of the war and successfully demonstrated the determination of the rag-tag nationalist forces. It also made the British reluctant to be sucked into a war they did not need, considering how overstretched their resources in southeast Asia were during the period after the Japanese surrender.
Dutch Military action
As a consequence, the Dutch were asked to take back control, and the number of NICA forces soon increased dramatically. Initially the Netherlands negotiated with the Republic and came to an agreement at Linggarjati, in which the 'United States of Indonesia' was proclaimed, a semi-autonomous federal state keeping as its head the Queen of the Netherlands. Both sides increasingly accused each other of violating the agreement, and as a consequence the hawkish forces soon won out on both sides. A major point of concern for the Dutch side was the fate of members of the Dutch minority in Indonesia, most of whom had been held under deplorable conditions in concentration camps by the Japanese. The Indonesians were accused, not without justification, of failing to cooperate in liberating these prisoners.
The Netherlands government then mounted a large military force to regain what it believed was rightfully its territory. The two major military campaigns that followed were termed 'police actions' to downplay the extent of the operations. There were atrocities and violations of human rights in many forms by both sides in the conflict. Some 6,000 Dutch and 150,000 Indonesians (including civilians) are estimated to have been killed.
Although the Dutch and their indigenous allies managed to defeat the Republican Army in almost all major engagements and during the second campaign even to arrest Sukarno himself, Indonesian forces continued to wage a major guerrilla war under the leadership of General Sudirman who had escaped the Dutch onslaught. A few months before the second Dutch offensive, communist elements within the independence movement had staged a failed coup, known as Madiun Affair, with the goal of seizing control of the republican forces.
Independence and Netherlands New Guinea
With the United States government threatening to withdraw the Marshall Plan funds that were vital to Dutch reconstruction after the Second World War, the Netherlands government was forced back into negotiations, and after the Round Table conference in The Hague, the Dutch finally recognised Indonesian independence on December 27, 1949. At the time, all of Indonesia was under Dutch control except for most of Sumatra and small parts of Java. New Guinea was the only part of the East Indies not given up. Elections were held across Dutch New Guinea in 1959 and an elected New Guinea Council officially took office on April 5, 1961, to prepare for full independence by the end of that decade. The Dutch endorsed the council's selection of a new national anthem and the Morning Star as the new national flag on December 1, 1961.
Indonesia attempted to invade the region on December 18, 1961. Following some skirmishes between Indonesian and Dutch forces an agreement was reached and the territory was placed under United Nations administration in October 1962. It was subsequently transferred to Indonesia in May 1963. The territory was formally annexed by Indonesia in 1969. |
Gravity is a phenomenon through which all objects with mass attract each other. The force is proportional to the masses of the objects (i.e. doubling the mass doubles the force) and inversely proportional to the square of the distance (i.e. doubling the distance reduces the force to a quarter).
Charged particles up to the size of small grains may be more strongly influenced by electromagnetic forces than gravity. Gravity is always an attractive force, whereas electromagnetic forces may be attractive or repulsive. (See "Electromagnetic force" for a comparison between the electromagnetic and gravitational forces)
The physics of the influence of gravity and electromagnetic forces on a particle, is called "gravito-electrodynamics".
Hannes Alfvén wrote:
- "Gravitation is, of course, one of the dominating forces in astrophysics . However, as electromagnetic forces are stronger by a factor of 1039, gravitation is important only when electromagnetic forces neutralize each other, as is the case for large bodies. In our solar system, gravitational forces do not seem to be of primary importance in producing high energy phenomena" [.. For example .]
- "A plasma cloud approaching the Earth will already be stopped in the magnetosphere, or in any case, in the upper ionosphere, where gas clouds will also be stopped. The result will be a heating of the upper atmosphere which makes it expand and stop an additional infall of more distant clouds.
- "Low density plasma clouds approaching the Sun will be stopped very far away by the solar wind. A neutral gas cloud falling towards the Sun is likely to be stopped when it has reached the critical velocity. This occurs when the cloud is still very far from the photosphere. In fact, its kinetic energy will be only of the order of 10 eV when this occurs." |
(Nanowerk News) Building a better battery is a delicate balancing act. Increasing the amounts of chemicals whose reactions power the battery can lead to instability. Similarly, smaller particles can improve reactivity but expose more material to degradation. Now a team of scientists from the U.S. Department of Energy's (DOE) Brookhaven National Laboratory, Lawrence Berkeley National Laboratory, and SLAC National Accelerator Laboratory say they've found a way to strike a balance—by making a battery cathode with a hierarchical structure where the reactive material is abundant yet protected.
Brookhaven Lab physicist Huolin Xin in front of an aberration-corrected scanning transmission electron microscope at the Center for Functional Nanomaterials.
"Our colleagues at Berkeley Lab were able to make a particle structure that has two levels of complexity where the material is assembled in a way that it protects itself from degradation," explained Brookhaven Lab physicist and Stony Brook University adjunct assistant professor Huolin Xin, who helped characterize the nanoscale details of the cathode material at Brookhaven Lab's Center for Functional Nanomaterials (CFN).
X-ray imaging performed by scientists at the Stanford Synchrotron Radiation Lightsource (SSRL) at SLAC, along with Xin's electron microscopy at CFN, revealed spherical particles of the cathode material measuring millionths of a meter (microns) in diameter, made up of many smaller, faceted nanoscale particles stacked together like bricks in a wall. The characterization techniques revealed important structural and chemical details that explain why these particles perform so well.
The lithium ion shuttle
Chemistry is at the heart of all lithium-ion rechargeable batteries, which power portable electronics and electric cars by shuttling lithium ions between positive and negative electrodes bathed in an electrolyte solution. As lithium moves into the cathode, chemical reactions generate electrons that can be routed to an external circuit for use. Recharging requires an external current to run the reactions in reverse, pulling the lithium ions out of the cathode and sending them to the anode.
3D elemental association maps of the micron-scale spherical structures, generated using transmission x-ray tomography, reveal higher levels of manganese and cobalt (darker blue, red, and purple) on the exterior of the sphere and higher levels of nickel-containing materials (green, light blue, yellow and white) on the interior.
Reactive metals like nickel have the potential to make great cathode materials—except that they are unstable and tend to undergo destructive side reactions with the electrolyte. So the Brookhaven, Berkeley, and SLAC battery team experimented with ways to incorporate nickel but protect it from these destructive side reactions.
They sprayed a solution of lithium, nickel, manganese, and cobalt mixed at a certain ratio through an atomizer nozzle to form tiny droplets, which then decomposed to form a powder. Repeatedly heating and cooling the powder triggered the formation of tiny nanosized particles and the self-assembly of these particles into the larger spherical, sometimes hollow, structures.
Using x-rays at SLAC's SSRL, the scientists made chemical "fingerprints" of the micron-scale structures. The synchrotron technique, called x-ray spectroscopy, revealed that the outer surface of the spheres was relatively low in nickel and high in unreactive manganese, while the interior was rich in nickel.
"The manganese layer forms an effective barrier, like paint on a wall, protecting the inner structure of the nickel-rich 'bricks' from the electrolyte," Xin said.
But how were the lithium ions still able to enter the material to react with the nickel? To find out, Xin's group at the CFN ground up the larger particles to form a powder composed of much smaller clumps of the nanoscale primary particles with some of the interfaces between them still intact.
"These samples show a small subset of the bricks that form the wall. We wanted to see how the bricks are put together. What kind of cement or mortar binds them? Are they layered together regularly or are they randomly oriented with spaces in between?" Xin said.
Nanoscale details explain improved performance
Using an aberration-corrected scanning transmission electron microscope—a scanning transmission electron microscope outfitted with a pair of "glasses" to improve its vision—the scientists saw that the particles had facets, flat faces or sides like the cut edges of a crystal, which allowed them to pack tightly together to form coherent interfaces with no mortar or cement between the bricks. But there was a slight misfit between the two surfaces, with the atoms on one side of the interface being ever so slightly offset relative to the atoms on the adjoining particle.
Scanning and transmission electron micrographs of the cathode material at different magnifications. These images show that the 10-micron spheres (a) can be hollow and are composed of many smaller nanoscale particles (b). Chemical "fingerprinting" studies found that reactive nickel is preferentially located within the spheres' walls, with a protective manganese-rich layer on the outside. Studying ground up samples with intact interfaces between the nanoscale particles (c) revealed a slight offset of atoms at these interfaces that effectively creates "highways" for lithium ions to move in and out to reach the reactive nickel (d).
"The packing of atoms at the interfaces between the tiny particles is slightly less dense than the perfect lattice within each individual particle, so these interfaces basically make a highway for lithium ions to go in and out," Xin said.
Like tiny smart cars, the lithium ions can move along these highways to reach the interior structure of the wall and react with the nickel, but much larger semi-truck-size electrolyte molecules can't get in to degrade the reactive material.
Using a spectroscopy tool within their microscope, the CFN scientists produced nanoscale chemical fingerprints that revealed there was some segregation of nickel and manganese even at the nanoscale, just as there was in the micron-scale structures.
"We don't know yet if this is functionally significant, but we think it could be beneficial and we want to study this further," Xin said. For example, he said, perhaps the material could be made at the nanoscale to have a manganese skeleton to stabilize the more reactive, less-stable nickel-rich pockets.
"That combination might give you a longer lifetime for the battery along with the higher charging capacity of the nickel," he said. |
The flightless cormorant is found only in the Galapagos Islands and only on the coastlines of the two most western islands--Fernandina and Isabela--which are also the youngest islands geologically, with the most active volcanoes. Charles Darwin would surely have been fascinated by them, because he used the occurrence of flightless birds as something best explained by his theory of natural selection. But he did not see these flightless birds during his time in the Galapagos, because he did not land on either of these two islands. I saw them on my first tour of the Galapagos in 2013, on a yacht named Cormorant, and I wrote a post on them.
Flightlessness has evolved many times--in 26 families of birds in 17 different orders. Other flightless birds like ostriches, kiwis, and penguins do not have close relatives among flying birds, because their split from flying birds occurred over 50 million years ago. The Galapagos flightless cormorant, by contrast, is closely related to other cormorants, from which it split off only about two million years ago.
Cormorants are large water birds that live near coastlines or lakes. The Galapagos cormorant is the only flightless cormorant among 40 species of cormorants.
To look for the genetic basis of this evolutionary split, Alejandro Burga and his colleagues compared the genomes of flightless cormorants and three closely related species of cormorants to look for the genetic variants influencing the loss of flight. They concentrated on genes that affect bone growth that might explain the short wings of flightless cormorants (Burga et al. 2017; Cooper 2017).
They found that a gene called Cux1 and some others influence the growth of cilia. In single-celled animals, cilia on the surface of cells function in movement. In birds and mammals, cilia on the surface of cells pick up biochemical signals for bone growth. Mutations in Cux1 in humans create diseases, called ciliopathies: human beings affected by ciliopathies have small limbs and ribcages. In the Galapagos cormorants, variations in Cux1 stop bone growth prematurely, so that the wings are too small for flying. These birds can still live well on land and in the water, and their short wings might help them in swimming underwater as they hunt for food.
This shows how genetic evolution could have created the Galapagos cormorant as a new species of cormorant, without any need for the miraculous intervention of the Creator or the Intelligent Designer. It was once common for creationists to argue that microevolution within species was possible, but not macroevolution across species. But now most creationists have expanded microevolution to include the natural evolution of new species from ancestral species, and yet they say this is within the "kinds" created by God. So they can concede that flightless cormorants evolved as a new species. Nevertheless, they say, flightless cormorants are still cormorants, and cormorants were one of the "kinds" originally created by God, with the genetic potential for evolutionary radiation into new species of cormorants.
Notice, however, that Cux1 is found in many animals--from single-celled animals to birds and mammals, including human beings. Darwinian scientists would see this as evidence for the evolution of all "kinds" of animals by common descent from ancient ancestral species, and perhaps ultimately from one or a few primitive forms of life.
Can the creationists respond by moving the category of "kinds" to ever higher levels of taxonomy? At one time, creationists identified "kinds" as species. Now, they say "kinds" could correspond to "families" in modern taxonomy, or even higher levels.
The problem, as Todd Wood and other creation scientists have admitted, is that the biblical Hebrew word min that was translated as "kind" in King James English is an "imprecise term." If one believes that the Bible is the divinely revealed Word of God, and not a book written by human beings, then one must wonder why God chose to write in such "imprecise" language. Why did God choose not to precisely explain the genetics of flightless cormorants and other forms of life?
I am reminded of James Madison's comment in Federalist Number 37 on how imprecise language often is: "When the Almighty himself condescends to address mankind in their own language, his meaning, luminous as it must be, is rendered dim and doubtful by the cloudy medium through which it is communicated."
If the Bible is imprecise in its scientific language, perhaps that's because God did not write the Bible as a textbook of science, but rather as a book of salvational history.
Cooper, Kimberly. 2017. "Decoding the Evolution of Species." Science 356: 904-905.
Burga, Alejandro, et al. 2017. "A Genetic Signature of the Evolution of Loss of Flight in the Galapagos Cormorant." Science 356: 921. |
by David F. Coppedge *
Evolutionary philosophy is a bottom-up storytelling project: particles, planets, people. Naturalists (those who say nature is all there is) believe they can invent explanations that are free of miracles, but in practice, miracles pop up everywhere in their stories. This was satirized by Sidney Harris years ago in a cartoon that showed a grad student filling a blackboard with equations. His adviser called attention to one step that needed some elaboration: It said, "Then a miracle happens." Examples of miracles in evolutionary philosophy include the sudden appearance of the universe without cause or explanation, the origin of life, the origin of sex, the origin of animal and plant body plans, and the origin of human consciousness.
An egregious example of appeal to miracle appeared recently in Nature. John Chambers of the Carnegie Institute was commenting on recent ideas about planet formation. Scientists have had a difficult time in their models getting pebble-size rocks to grow into planetesimals (bodies large enough to attract material by gravity, usually kilometers across). New pressure has been put on the models by the realization that small bodies spiral into the star on short timescales (in just a few hundred orbits). So here is their new idea: the pebbles just leaped over the size barrier. Chambers said, "Objects must have grown very rapidly from sub-metre-sized pebbles into 100-km-sized bodies, possibly in a single leap." In the same article, he remarked, "Dust grains coalesced into planetesimals, objects of 1-1,000 km in diameter, through an unknown process."1
Don't miracles involve unknown processes, too? Chambers undoubtedly believes that sufficient natural processes will be found in some future model. But that requires faith. So here we see faith in an unknown process keeping the naturalistic story together. Chambers referred to a paper in Icarus that stated the miracle even more starkly: "Asteroids were born big."2 The authors of that paper explained the miracle in these terms: "The size of solids in the proto-planetary disk 'jumped' from sub-meter scale to multi-kilometer scale, without passing through intermediate values." That is functionally a miracle.
If this were the only example of appealing to a miracle in secular cosmology, it might be forgiven. But miracles are rampant in the evolutionary story. The literature of biological evolution is replete with statements that this or that animal "evolved" whatever complex systems are needed along the way, as if stating it makes it so. It's time to call this what it is: an appeal to miracles, the very concept that Enlightenment science was invented to avoid. It has become the caricature stated in Finagle's 6th Rule of Science: "Do not believe in miracles. Rely on them."
Fred Hoyle used to respond to critics--who regarded his “steady-state” cosmology as unscientific because it required the continuous creation of matter out of nothing--by pointing out that Big Bang cosmology does the same thing; it just creates it all at once. Deduction: everyone believes in miracles. Instead of needing to invoke continual bottom-up miracles in the evolutionary story, creationists get the world right by a top-down miracle of creation. The difference is that the creation miracle was intelligently designed for a purpose.
The top-down approach leads to superior science in two ways. For one, it matches the laws of nature we know. We see asteroids proceeding from the top down--colliding and grinding down into dust, not leaping from dust into planetesimals by some "unknown process." For another, creation science provides the basis for rationality. For science to succeed, it needs a philosophical anchor for the belief that the world is rational and can be understood. The Genesis account of man being created in the image of God provides that anchor.
Rationality in science requires reference to causes necessary and sufficient to produce the effects. If miracles are necessary, then an omnipotent Creator is sufficient. Appealing to chance miracles, however, is no more useful than stating "Stuff happens."3 Creation, the "top-down" method, provides the solid foundation for rational scientific explanation.
- Chambers, J. 2009. Planetary science: Archaeology of the asteroid belt. Nature. 460 (7258): 963-964.
- Morbidelli, A. et al. Asteroids Were Born Big. Icarus. Article in press, available online July 16, 2009.
- See Coppedge, D. 2009. "Stuff Happens": A Review of Darwin's Influence on Modern Astronomy. Acts & Facts. 38 (2): 37.
* David Coppedge works in the Cassini Program at the Jet Propulsion Laboratory. The views expressed are his own.
Cite this article: Coppedge, D. 2009. Bottom-Up Science. Acts & Facts. 38 (11): 18. |
The National Science Teachers Association (NSTA) believes the involvement of parents and other caregivers in their children’s learning is crucial to their children’s interest in and ability to learn science. Research shows that when parents play an active role, their children achieve greater success as learners, regardless of socioeconomic status, ethnic/racial background, or the parents’ own level of education (PTA 1999; Henderson and Mapp 2002; Pate and Andrews 2006). Furthermore, the more intensely parents are involved, the more confident and engaged their children are as learners and the more beneficial the effects on their achievement (Cotton and Wikelund 2001).
Historically, innovations in science and technology have been powerful forces for improving our quality of life and fueling economic development worldwide. To continue to reap the economic and social benefits that accrue from such innovation, as well as to find solutions to challenging problems in the areas of health, energy, and the environment, we must ensure parents and children value science learning and recognize the tremendous opportunities that can arise from being more scientifically and technologically literate and better prepared to participate in the 21st-century workforce.
Parents and other caregivers have a critical role to play in encouraging and supporting their children’s science learning at home, in school, and throughout their community. Teachers also play an important role in this effort and can be valuable partners with parents in cultivating science learning confidence and skills in school-age youth. NSTA recognizes the importance of parent involvement in science learning and offers the following recommendations to parents.
Children are naturally curious about the world around them. Parents and other caregivers can nurture this curiosity in children of all ages by creating a positive and safe environment at home for exploration and discovery.
- Acknowledge and encourage your children’s interests and natural abilities in science, and help them further develop their interests and abilities over time.
- Encourage your children to observe, ask questions, experiment, tinker, and seek their own understandings of natural and human-made phenomena.
- Foster children’s creative and critical thinking, problem solving, and resourcefulness through authentic tasks such as cooking, doing household chores, gardening, repairing a bike or other household object, planning a trip, and other everyday activities. Actively engage with your children during mealtime discussions or group games requiring mental or physical skills, or by talking about books they are reading or television programs about science they have watched.
- Provide frequent opportunities for science learning at home and in the community through outdoor play; participation in summer programs; or trips to parks, museums, zoos, nature centers, and other interesting science-rich sites in the community.
- Provide your children easy access to science learning resources such as books, educational toys and games, videos/DVDs, and online or computer-based resources.
- Join your children in learning new things about science and technology. Take advantage of not knowing all the answers to your children’s questions, and embrace opportunities to learn science together.
Schools are essential resources for science learning. The more actively engaged parents and other caregivers are in their children’s schooling, the more beneficial schools can be for building their child’s appreciation and knowledge of and confidence and skills in science and technology (Cotton and Wikelund 2001). This holds true throughout the school-age years, from preschool through college.
- Become a partner in your children’s schooling. Communicate regularly with your children and their teachers, school administrators, and counselors to learn more about your children’s science learning opportunities and performance.
- Encourage your children to participate in extracurricular opportunities focused on science, technology, engineering, and math (STEM), such as clubs, field trips, after-school programs, and science research competitions.
- Seek out opportunities to meet and get to know teachers of science. Volunteer in the classroom or on a field trip; serve on a science curriculum review or policy development committee; or attend a school's open house or family science night event.
- Be informed about the science program at your children’s school. Learn more about the school’s curriculum and the amount of time devoted to science learning and hands-on laboratory experiences at each grade level, and find out whether teachers believe they have the necessary resources and experience to teach science effectively. Become involved with the local school board to ensure that science learning is a top priority in the school system and that adequate resources are available. If you are home schooling, be sure that you are meeting or exceeding the same science standards covered in the local school curriculum.
- Establish high expectations for your children’s science learning, as well as for the school system that fosters it.
- Be an advocate for science learning by supporting local, state, and national science education policies and investments in science resources, including school curriculum materials, laboratory equipment, and teacher and administrator professional development. It is also important to advocate for organizations that support schools and home school families, including museums, libraries, and other science-rich nonprofit organizations.
- Reach out to policy makers to impress upon them the value of science and technology learning and its importance to your children’s future.
Parents and other caregivers play an important role in ensuring that their children have the necessary knowledge and skills in science and technology to become scientifically literate and informed citizens. It also is imperative that we develop a strong science- and technology-skilled workforce. Parents can encourage children to consider and pursue a science- or technology-related career and to obtain the necessary knowledge and skills that will allow them access to and success in such a career.
- Seek out opportunities to introduce your children to individuals in your community whose work relates to science or technology. This may include trades and professions such as construction or manufacturing, public safety, medicine, natural resource management, or research.
- Participate in “Take Your Child to Work” days, and expose them to the science and technology in your workplace. Encourage your employer to promote and support these opportunities.
- Attend career fairs with your children. Help them explore a broad range of career options and learn about and understand the necessary skills and coursework required to pursue these careers.
- Look for special events and programs in your community that enable your children to meet scientists, or visit a worksite or local university where science and technology are prevalent. Support your children's participation in online academic mentorship programs that pair students and scientists to carry out STEM projects.
- Find opportunities in your community to connect science and technology businesses, schools, and non-school learning venues such as museums, libraries, and clubs. Encourage both financial and personnel investments in science learning. Ask businesses to give employees release time to support science learning at school or in the community and to become mentors for school-age youth.
- Encourage your children to disbelieve negative stereotypes about scientists, and help them understand that anyone can have a career in science.
- Model values that support learning, self-sufficiency, responsibility, and hard work so your children will develop at an early age the confidence and determination to pursue their career interests in science or technology.
—Adopted by the NSTA Board of Directors, April 2009
Cotton, K., and K. R. Wikelund. 2001. Parent involvement in education. School Improvement Research Series. Portland, OR: Northwest Regional Educational Laboratory. Available online at www.nwrel.org/scpd/sirs/3/cu6.html.
Henderson, A. T., and K. L. Mapp. 2002. A new wave of evidence: The impact of school, family, and community connections on student achievement. Austin, TX: Southwest Educational Development Laboratory. Full report available online at www.sedl.org/connections/resources/evidence.pdf; conclusion available at www.sedl.org/connections/resources/conclusion-final-points.pdf.
Parent Teacher Association (PTA). 1999. Position statement. Parent/family involvement: Effective parent involvement programs to foster student success.
Pate, P. E., and P.G. Andrews. 2006. Research summary: Parent involvement. Westerville, OH: National Middle School Association (NMSA). Available online at www.nmsa.org/Research/ResearchSummaries/ParentInvolvement/tabid/274/Default.aspx.
Barber, J., N. Parizeau, and L. Bergman. 2002. Spark your child's success in math and science: Practical advice for parents. Berkeley, CA: Lawrence Hall of Science, University of California at Berkeley. Also available online at www.lawrencehallofscience.org/gems/GEMSpark.html.
Heil, D., G. Amorose, A. Gurnee, and A. Harrison. 1999. Family science. Portland, OR: Foundation for Family Science. Information online at www.familyscience.org. |
May 28, 2017
The Food and Agriculture Organization of the United Nations (FAO, 1996) defines food security as a condition “when all people, at all times, have physical and economic access to sufficient safe and nutritious food that meets their dietary needs and food preferences for an active and healthy life.” It considers availability (food production, stock levels, and net trade), physical and economic access by consumers, and utilization.
Food security policies can be defined in terms of self-sufficiency or self-reliance. In general, food self-sufficiency implies meeting food needs from domestic supplies and minimizing dependence on international trade. Food self-reliance relies on international markets for availability of food in the domestic market.
According to Business Monitor International (2013), per capita food consumption in Singapore is among the highest in the region. This, coupled with scarce natural resources and low domestic food production, means that the city-state is dependent on imports not only of raw materials but also of food.
As part of a long-term strategy based on self-reliance rather than self-sufficiency, Singapore imports most of the food the population consumes (approximately 90%). Food security strategies include diverse food sources in approximately 170 countries, local production, and stockpiling of essential food items such as rice. Food sources are diversified not only among countries but also among zones within countries.
This article discusses food policies in Singapore in a framework of self-reliance, security, and resilience. It comprises diversification efforts, including overseas agricultural and food investments, food security and safety strategies, innovations, and international initiatives. |
How to determine the oxidation state of elements in a compound
To find the oxidation state of metals, which often have several possible values, you must work it out from the oxidation states of the other atoms in the compound. If you add up the oxidation states of all the atoms in a neutral compound, you will always get zero. An oxidation number can be assigned to a given element or compound by following these rules. Any free element has an oxidation number equal to zero. For monoatomic ions, the oxidation number has the same value as the net charge of the ion. The hydrogen atom (H) usually exhibits an oxidation state of +1.
Like how everyone can be described by their height and weight, every element has an oxidation state. It describes how oxidised an element is in a substance. If we interpret oxidation as the loss of electrons, the oxidation state indirectly tells us how deprived an element is, of electrons of course!
The more positive the oxidation state, the more electrons the element has lost. Hold up! What are free elements? Before you chiong to collect them as freebies, free elements are simply pure elements: just one type of atom. They can be metals, like sodium and iron. They can also be non-metals that exist as simple molecules or giant molecules. We shall define the oxidation state of free elements as zero. They are seen as the default state, before atoms have gained or lost any electrons. Some elements mainly form one type of oxidation state in compounds.
We say that they have a fixed oxidation state. Likewise, fluorine is pretty boring. In all fluorine-containing compounds, fluorine has a fixed oxidation state of -1. On the other hand, many elements have variable oxidation states. In other words, their oxidation state depends on what compound they are found in. These include transition metals, carbon, nitrogen, and non-metals in Period 3 and below.
If the oxidation states of these elements are so variable, how then can we find their exact oxidation state? The trick is to know that the combined oxidation state of all elements in a compound is zero. For ions, the combined oxidation state is equal to the charge of the ion. We can work from the above rule to find the unknown oxidation state. The oxidation state of iron in magnetite, Fe3O4, may seem weird at first blush. It is not a whole number. Calculate it yourself with the method above!
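For readers who want to check the arithmetic, here is a minimal sketch in C (not part of the original post) that applies the rule above to magnetite, assuming the usual -2 state for oxygen and a neutral compound:

    #include <stdio.h>

    int main(void)
    {
        /* Magnetite, Fe3O4: 3 Fe atoms, 4 O atoms, overall charge 0.
           Oxygen is taken as -2 (no peroxide or superoxide involved). */
        const int n_fe = 3, n_o = 4;
        const int o_state = -2;
        const int total_charge = 0;

        /* n_fe * x + n_o * o_state = total_charge  =>  solve for x */
        double fe_state = (double)(total_charge - n_o * o_state) / n_fe;
        printf("Average oxidation state of Fe in Fe3O4: %+.2f\n", fe_state);
        return 0;
    }

The program prints +2.67, that is, +8/3.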
The oxidation state of iron in magnetite is fractional because it is an average value.
To recap: like how everyone can be described by their height and weight, every element has an oxidation state. A more positive oxidation state means the element has lost more electrons. Free elements are pure elements; they have an oxidation state of zero.
Study more leh: Redox reactions: Oxidation and reduction.
Chlorine, bromine, and iodine usually have an oxidation number of -1, unless they're in combination with oxygen or fluorine. The oxidation number of a Group 1 element in a compound is +1; the alkali metals (Group I) always have an oxidation number of +1. The oxidation number of a Group 2 element in a compound is +2. In a C-H bond, the H is treated as if it has an oxidation state of +1, which means that every C-H bond will decrease the oxidation state of carbon by 1. Any two bonds between the same atom do not affect the oxidation state (recall that the oxidation state of Cl in Cl2 is 0). For a coordination complex, take the starting complex and its charge, remove the ligands with their associated charges, and examine the residual charge on the metal centre, which will be the oxidation state: [Fe(CN)6]3- → 6 CN- + Fe3+ (ferric ion), and [CoCl6]3- → 6 Cl- + Co3+ (cobalt(III)).
In chemistry, the terms "oxidation" and "reduction" refer to reactions in which an atom or group of atoms loses or gains electrons, respectively.
Oxidation numbers are numbers assigned to atoms or groups of atoms that help chemists keep track of how many electrons are available for transfer and whether given reactants are oxidized or reduced in a reaction. The process of assigning oxidation numbers to atoms can range from remarkably simple to somewhat complex, based on the charge of the atoms and the chemical composition of the molecules they are a part of.
To complicate matters, some elements can have more than one oxidation number. Luckily, the assignment of oxidation numbers is governed by well-defined, easy-to follow rules, though knowledge of basic chemistry and algebra will make navigation of these rules much easier.
To find oxidation numbers, figure out if the substance in question is elemental or an ion. Be aware that metallic ions that can have more than one charge, like iron, can also have more than one oxidation number! In all cases, give fluorine an oxidation number of -1.
Determine whether the substance in question is elemental. Free, uncombined elemental atoms always have an oxidation number of 0. This is true both for atoms whose elemental form is composed of a lone atom, as well as atoms whose elemental form is diatomic or polyatomic. Note that sulfur's elemental form, S8, or octasulfur, though irregular, also has an oxidation number of 0.
Determine whether the substance in question is an ion. Ions have oxidation numbers equal to their charge. This is true both for ions that are not bound to any other elements as well as for ions that form part of an ionic compound.
The Cl ion still has an oxidation number of -1 when it's part of the compound NaCl. Know that multiple oxidation numbers are possible for metallic ions. Many metallic elements can have more than one charge. For example, let's examine a compound containing the metallic aluminum ion. The compound AlCl3 has an overall charge of 0.
Assign an oxidation number of -2 to oxygen (with exceptions). In almost all cases, oxygen atoms have oxidation numbers of -2. There are a few exceptions to this rule: When oxygen is in its elemental state (O2), its oxidation number is 0, as is the case for all elemental atoms.
When oxygen is part of a peroxide, its oxidation number is -1. Peroxides are a class of compounds that contain an oxygen-oxygen single bond or the peroxide anion O2(2-). For instance, in the molecule H2O2 (hydrogen peroxide), oxygen has an oxidation number (and a charge) of -1. Superoxides contain the superoxide anion O2-, in which each oxygen carries an average oxidation number of -1/2.
See the fluorine rule below for more info. Like oxygen, hydrogen's oxidation number is subject to exceptional cases: hydrogen is normally assigned +1, but in special compounds called hydrides it has an oxidation number of -1. Fluorine always has an oxidation number of -1. As noted above, the oxidation numbers of certain elements can vary due to several factors (metal ions, oxygen atoms in peroxides, etc.).
Fluorine, however, has an oxidation number of -1, which never changes. This is because fluorine is the most electronegative element - in other words, it is the element least-likely to give up any of its own electrons and most-likely to take another atom's. Therefore, its charge doesn't change. Set the oxidation numbers in a compound equal to a compound's charge.
The oxidation numbers of all the atoms in a compound must add up to the charge of that compound. For example, if a compound has no charge, the oxidation numbers of each of its atoms must add up to zero; if the compound is a polyatomic ion with a charge of -1, the oxidation numbers must add up to -1, etc.
This is a good way to check your work - if the oxidation numbers in your compound don't add up to the charge of your compound, you know that you have assigned one or more incorrectly. Find atoms without oxidation number rules. Some atoms don't have specific rules about the oxidation numbers they can have.
If your atom doesn't appear in the rules above and you're unsure what its charge is (for instance, if it's part of a larger compound and thus its individual charge is not shown), you can find the atom's oxidation number by process of elimination. First, you'll determine the oxidation number of every other atom in the compound, then you'll simply solve for the unknown based on the overall charge of the compound. Sulfur in a compound like Na2SO4, worked through below, is a good candidate for this method of algebraic oxidation number determination.
This is a good candidate for this method of algebraic oxidation number determination. Find the known oxidation number for the other elements in the compound.
Using the rules for oxidation number assignment, assign oxidation numbers to the other atoms in the compound. Be on the lookout for any exceptional cases for O, H, etc. Multiply the number of each atom by its oxidation number.
Now that we know the oxidation number of all of our atoms except for the unknown one, we need to account for the fact that some of these atoms may appear more than once. Multiply each atom's numeric coefficient written in subscript after the atom's chemical symbol in the compound by its oxidation number.
Add the results together. Adding the results of your multiplications together gives the compound's current oxidation number without taking into account the oxidation number of your unknown atom. Calculate the unknown oxidation number based on the compound's charge. You now have everything you need to find your unknown oxidation number using simple algebra. Set an equation that has your answer from the previous step plus the unknown oxidation number equal to the compound's overall charge.
S has an oxidation number of +6 in Na2SO4. We know oxygen generally shows an oxidation number of -2 (and Cl, for comparison, an oxidation number of -1), while Na, a Group 1 metal, is +1. Let the oxidation number of S be X. The overall charge is 0, so 2(+1) + X + 4(-2) = 0, which gives X = +6. In normal cases, O has an oxidation number of -2, but in OF2, F is more electronegative than O, so F keeps its -1 and O takes an oxidation number of +2.
What is the relation between the oxidation number and valency in the case of s-block metals? The oxidation number reflects the charge of the ion the element forms, while valency describes the number of electrons an atom uses in bonding; for s-block metals both equal the number of valence electrons (+1 for Group 1, +2 for Group 2).
In the case of a molecule, you have to see how many electrons each element needs to fill its shell. For example, in NaCl, sodium has one valence electron that it wants to give away to drop down to a complete 8-electron shell, while chlorine wants to gain one electron to complete its shell, since it has 7 valence electrons and needs one more to reach 8. There is no formula -- it's a technique.
You just need to find the unknown value. |
A demonym (from Ancient Greek δῆμος, dêmos, "people, tribe" and ὄνυμα, ónuma, "name") or gentilic (from Latin gentilis, "of a clan, or gens") is a word that identifies a group of people (inhabitants, residents, natives) in relation to a particular place. Demonyms are usually derived from the name of the place (hamlet, village, town, city, region, province, state, country, continent, planet, and beyond). Demonyms are used to designate all people (the general population) of a particular place, regardless of ethnic, linguistic, religious or other cultural differences that may exist within the population of that place. Examples of demonyms include Cochabambino, for someone from the city of Cochabamba; American for a person from the United States of America; and Swahili, for a person of the Swahili coast.
As a sub-field of anthroponymy, the study of demonyms is called demonymy or demonymics.
Since they refer to territorially defined groups of people, demonyms are semantically different from ethnonyms (names of ethnic groups). In the English language, there are many polysemic words that have several meanings (including demonymic and ethnonymic uses), and therefore a particular use of any such word depends on the context. For example, the word Thai may be used as a demonym, designating any inhabitant of Thailand, while the same word may also be used as an ethnonym, designating members of the Thai people. Conversely, some groups of people may be associated with multiple demonyms. For example, a native of the United Kingdom may be called a British person, a Briton or, informally, a Brit.
Some demonyms may have several meanings. For example, the demonym Macedonians may refer to the population of North Macedonia, or more generally to the entire population of the region of Macedonia, a significant portion of which is in Greece. In some languages, a demonym may be borrowed from another language as a nickname or descriptive adjective for a group of people: for example, Québécois, Québécoise (female) is commonly used in English for a native of the province or city of Quebec (though Quebecer, Quebecker are also available).
In English, demonyms are always capitalized.
Often, demonyms are the same as the adjectival form of the place, e.g. Egyptian, Japanese, or Greek.
English commonly uses national demonyms such as Ethiopian or Guatemalan, while the usage of local demonyms such as Chicagoan, Okie or Parisian is less common. Many local demonyms are rarely used and many places, especially smaller towns and cities, lack a commonly used and accepted demonym altogether. Often, in practice, the demonym for states, provinces or cities is simply the name of the place, treated as an adjective; for instance, Kennewick Man and Massachusetts Resident.
National Geographic attributes the term demonym to Merriam-Webster editor Paul Dickson, in a work from 1990. The word did not appear, as a term for nouns, adjectives, and verbs derived from geographical names, in the Merriam-Webster Collegiate Dictionary nor in prominent style manuals such as the Chicago Manual of Style. It was subsequently popularized in this sense in 1997 by Dickson in his book Labels for Locals. However, in What Do You Call a Person From...? A Dictionary of Resident Names (the first edition of Labels for Locals), Dickson attributed the term to George H. Scheetz, in his Names' Names: A Descriptive and Prescriptive Onymicon (1988), which is apparently where the term first appears. The term may have been fashioned after demonymic, which the Oxford English Dictionary defines as the name of an Athenian citizen according to the deme to which the citizen belongs, with its first use traced to 1893.
For the converse of demonym, see List of places named after people.
Several linguistic elements are used to create demonyms in the English language. The most common is to add a suffix to the end of the location name, slightly modified in some instances. These may resemble Late Latin, Semitic, Celtic, or Germanic suffixes, such as:
as adaptations from the standard Spanish suffix -e(ñ/n)o (sometimes using a final -a instead of -o for a female, following the Spanish suffix standard -e(ñ/n)a)
Often used for European locations and Canadian locations
(Usually suffixed to a truncated form of the toponym, or place-name.)
"-ish" is usually proper only as an adjective. See note below list.
Often used for Middle Eastern locations and European locations.
"-ese" is usually considered proper only as an adjective, or to refer to the entirety. Thus, "a Chinese person" is used rather than "a Chinese". Often used for Italian and East Asian, from the Italian suffix -ese, which is originally from the Latin adjectival ending -ensis, designating origin from a place: thus Hispaniensis (Spanish), Danensis (Danish), etc. The use in demonyms for Francophone locations is motivated by the similar-sounding French suffix -ais(e), which is at least in part a relative (< lat. -ensis or -iscus, or rather both).
Mostly for Middle Eastern and South Asian locales. -i is encountered also in Latinate names for the various people that ancient Romans encountered (e.g. Allemanni, Helvetii). -ie is rather used for English places.
Used especially for Greek locations. Backformation from Cypriot, itself based in Greek -ώτης.
Often used for Italian and French locations.
Often used for British and Irish locations.
While derived from French, these are also official demonyms in English.
It is much rarer to find demonyms created with a prefix. Mostly they are from Africa and the Pacific, and are not generally known or used outside the country concerned. In much of East Africa, a person of a particular ethnic group will be denoted by a prefix. For example, a person of the Luba people would be a Muluba, the plural form Baluba, and the language Kiluba or Tshiluba. Similar patterns with minor variations in the prefixes exist throughout the region at a tribal level. And Fijians who are indigenous Fijians are known as Kaiviti (Viti being the Fijian name for Fiji). On a country level:
Demonyms may also not conform to the underlying naming of a particular place, but instead arise out of historical or cultural particularities that become associated with its denizens. In the United States such demonyms frequently become associated with regional pride, such as the burqueño of Albuquerque, or with the mascots of intercollegiate sports teams of the state university system, for example the Sooner of Oklahoma and the Oklahoma Sooners.
Since names of places, regions and countries (toponyms) are morphologically often related to names of ethnic groups (ethnonyms), various ethnonyms may have similar, but not always identical, forms as terms for general population of those places, regions or countries (demonyms).
Literature and science fiction have created a wealth of gentilics that are not directly associated with a cultural group. These will typically be formed using the standard models above. Examples include Martian for hypothetical people of Mars (credited to scientist Percival Lowell), Gondorian for the people of Tolkien's fictional land of Gondor, and Atlantean for Plato's island Atlantis.
Other science fiction examples include Jovian for those of Jupiter or its moons and Venusian for those of Venus. Fictional aliens refer to the inhabitants of Earth as Earthling (from the diminutive -ling, ultimately from Old English -ing meaning "descendant"), as well as Terran, Terrene, Tellurian, Earther, Earthican, Terrestrial, and Solarian (from Sol, the sun).
Fantasy literature which involves other worlds or other lands also has a rich supply of gentilics. Examples include Lilliputians and Brobdingnagians, from the islands of Lilliput and Brobdingnag in the satire Gulliver's Travels.
In a few cases, where a linguistic background has been constructed, non-standard gentilics are formed (or the eponyms back-formed). Examples include Tolkien's Rohirrim (from Rohan) and the Star Trek franchise's Klingons (with various names for their homeworld).
a. Kosovo is the subject of a territorial dispute between the Republic of Kosovo and the Republic of Serbia. The Republic of Kosovo unilaterally declared independence on 17 February 2008. Serbia continues to claim it as part of its own sovereign territory. The two governments began to normalise relations in 2013, as part of the 2013 Brussels Agreement. Kosovo is currently recognized as an independent state by 97 out of the 193 United Nations member states. In total, 112 UN member states are said to have recognized Kosovo at some point, of which 15 later withdrew their recognition.
Qualitative research generates rich, detailed and valid data that contribute to an in-depth understanding of the context, while quantitative research generates reliable, population-based and generalisable data. A particular strength of quantitative research is that it can be generalised to some extent, because a sample that closely represents a population is chosen. Qualitative research does not choose samples that are closely related to a population. Quantitative research allows the researcher to test hypotheses. Qualitative research is more exploratory: the researchers allow the data to take them in different directions.
University of Essex, Department of Psychology. Discovering Psychology: The Science Behind Human Behaviour. Discuss the value of the true experiment in psychology. 1301109. 24/10/2013. 979. "A true experimental design [is] the most accurate form of experimental research, in that it tries to prove or disprove a hypothesis mathematically, with statistical analysis" (Shuttleworth, 2008). This means that an experimental design basically tries to see how accurate a hypothesis is through statistical analysis. So, for an experiment to be classed as a true experimental design, the sample groups must be assigned randomly, there must be a viable control group, and only one variable can be manipulated and tested (it is possible to test more than one, but such experiments and their statistical analyses tend to be large and difficult), and the tested subjects must be randomly assigned to either control or experimental groups. Therefore, in a true experiment subjects are randomly assigned to the levels of the independent variable.
Statistics are a method of finding the truth, and psychologists use statistical methods to help them make sense of the numbers they collect during their experiments and research. With these statistics, psychologists are able to see whether their theory is correct or whether they need to do more research. There are two different types of statistics that are used to describe information and to draw conclusions: descriptive statistics and inferential statistics.
From these views it can be seen that the quantitative approach is scientific based. It believes that the information already exists and is there to discover. Human perception does not play a role in the uncovering of new knowledge. A hypothesis is tested to assess its validity. Questionnaires are structured carefully in order to obtain precise information.
Historical trends in psychological enquiry, in addition to fundamental shifts in psychology's subject base, have led to the use of the scientific method. Ultimately, the aim of the scientific method is to test hypotheses by attempting to falsify them. It is impossible to prove a hypothesis correct, but we are able to prove a hypothesis wrong. Karl Popper saw falsifiability as a black-and-white criterion: if a theory is falsifiable, it is scientific, and if not, then it is unscientific. Empirical data is information that is gained through direct observation or an experiment rather than a reasoned argument or unfounded belief.
Since the parameter is a population mean of a continuous variable, this suggests a one-sample test of a mean. 2. SPECIFY THE NULL AND ALTERNATIVE HYPOTHESES. The second step is to state the research question in terms of a null hypothesis (H0) and an alternative hypothesis (HA). The null hypothesis states the population parameter, µ = $30,000 (H0: µ = $30,000).
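To make the setup concrete, here is a hedged Python sketch using SciPy. The salary figures are invented purely for illustration, and the one-sample t-test shown is one standard way of carrying out the test of a mean described above:

```python
# One-sample test of a mean: H0: mu = 30000 vs HA: mu != 30000,
# run on a small hypothetical salary sample.
from scipy import stats

salaries = [28500, 31200, 29800, 32500, 27900, 30400, 33100, 29200]  # made-up data
t_stat, p_value = stats.ttest_1samp(salaries, popmean=30000)

print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# If p is below the chosen significance level (e.g., 0.05), reject H0;
# otherwise there is not enough evidence that the population mean differs from $30,000.
```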
Scientific reasoning is the process which provides evidence for scientific theory. Induction is common throughout scientific reasoning, since scientists use inductive reasoning whenever limited data are used to form more general conclusions (Okasha, 2002). Induction is used to decide whether claims about the world are justified. Inductive reasoning is prevalent throughout science, since it is common to have a sample that does not include all of the possible test subjects needed for the study. This leaves the possibility that one of the test subjects not included in the sample could prove the conclusion to be incorrect.
The Heteroskedasticity Problem in Regression Analysis. Recall that one of the assumptions of the OLS method is that the variance of the error term is the same for all individuals in the population under study. Heteroskedasticity occurs when the variance of the error term is NOT the same for all individuals in the population. Heteroskedasticity occurs more often in cross-section datasets than in time-series datasets.
Consequences of heteroskedasticity:
1. the estimates of the b's are still unbiased if heteroskedasticity is present (and that's good);
2. but the s.e.'s of the b's will be biased, and we don't know whether they will be biased upward or downward, so we could make incorrect conclusions about whether the X's affect Y;
3. the estimate of the S.E.R. is biased, so we could make incorrect conclusions about model fit.
Detecting heteroskedasticity: 1.
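As an illustration of the detection step, here is a minimal Python sketch assuming statsmodels is available; the simulated data and the choice of the Breusch-Pagan test are my own illustrative assumptions rather than part of the original notes:

```python
# Fit OLS on data whose error variance grows with x (heteroskedastic by
# construction), then run the Breusch-Pagan test on the residuals.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200)
y = 2 + 3 * x + rng.normal(0, x, 200)  # error spread increases with x

X = sm.add_constant(x)
results = sm.OLS(y, X).fit()

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(results.resid, X)
print(f"Breusch-Pagan LM p-value: {lm_pvalue:.4f}")
# A small p-value suggests heteroskedasticity; robust (e.g., HC1) standard
# errors are one standard remedy: results.get_robustcov_results(cov_type="HC1")
```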
Can intelligence change? To what extent is intelligence malleable? Extended Essay: Psychology. Name: Candidate number: School: Nörre Gymnasium. Word count: 37811. Abstract: This essay investigated the research question: To what extent is intelligence malleable? It was necessary to start by presenting the debate on defining intelligence, since there is not a complete consensus among psychologists; however, this paper accepted a definition which is generally accepted by respected psychologists, that 'intelligence is the ability to deal with cognitive complexity' (Gottfredson, 1998). In presenting and analysing empirical evidence, such as Howe (1997), supporting the thesis that intelligence can, in fact, change under the right conditions and given enough time, a strong indication of malleability is provided.
This paper will evaluate the usefulness of the psychometric approach for understanding personality and human intelligence. Psychometric tests were first created in order to objectively measure intelligence and personality (Eysenck, 1994; Hayes, 2000; Hothersall, 2004; Engler, 2009). As such, it can be said that the psychometric approach for understanding personality and human intelligence is useful, as it enables psychological researchers to quantitatively measure intelligence and personality in a scientific manner. Furthermore, such an approach allows for individuals to be placed in categories based on definable characteristics, which better allows psychological research on different subjects, as participants of psychological research can be more easily assigned to different groups or conditions. Thus, the psychometric approach for
Using the four Rs of EFL/ESL English education will help you teach more effectively.
In this article, I will explain the four Rs of EFL/ESL English education. I developed this idea to help improve my approach to teaching and lesson planning. Each time I plan a lesson or design a new curriculum for my students, I follow the guidelines of these four Rs which allow me to plan effective lessons that help my students learn and master English more efficiently.
When teaching English as a second language or as a foreign language (EFL/ESL), we must remember that the techniques and processes are different than teaching English as a native language. When learning English as a native language, the brain is being programmed from a fresh start, learning the logic and organization of the language without any previous information to cause confusion. After the first language is programmed into the brain, however, adding a second language with a different grammatical structure or logical organization will conflict with the first language and make language acquisition of the second language more difficult.
Imagine buying a brand-new computer with no operating system installed. If you first install Windows and then try to install a Mac operating system, the computer would have difficulty processing both operating systems at the same time. Trying to learn English as a second language after learning a different native language is similar, and for this reason, we must take a different approach when teaching English as a second language or foreign language.
Understanding the four Rs of ESL/EFL English education to be a better teacher.
The four Rs of EFL/ESL English are Reason, Repeat, Remember and Review. By understanding these four Rs and the importance of applying them to language education, you will be able to design more effective curricula and see better results in your students’ progress. It’s arguable that Reason is the most important of the four Rs, but I will discuss it last. Let’s first understand the importance of Remember, Repeat, and Review.
The last three Rs, Remember, Repeat, and Review, are fundamental to learning anything involving muscle memory. This could be learning how to play guitar, how to swim, or how to drive a car. These three Rs also apply to learning language, and they are the foundation of BINGOBONGO's Focused Repetition Method used in many of our resources, including our Super Easy FUN!books. Simply stated, the Focused Repetition Method is a process of selecting a single topic to focus on, repeating a set of teaching activities until it has been remembered and internalized, and then moving on to the next topic.
When learning the alphabet, for example, native speakers can learn the uppercase and lowercase letters simultaneously because the letters are usually familiar from singing the alphabet song, as well as seeing letters in everyday life. EFL learners, on the other hand, can become easily confused when simultaneously learning the letter names, uppercase forms, and lowercase forms. For that reason, the Focused Repetition Method breaks each part into individual steps. First, students focus on learning only the names of the letters. After this step is mastered, they then focus on only the uppercase letters. Next, students focus on learning the lowercase letter forms, and finally, the phonetic sounds of the letters are learned.
The final R, Review, is just as important as Remember and Repeat, and it shouldn't be overlooked. Native speakers will have plenty of opportunities to review newly learned material through immersion. EFL students, on the other hand, might not have any opportunities to review in daily life. Therefore, adding Review is a necessary part of an effective curriculum.
The last three Rs, Repeat, Remember, and Review, are easy to implement through activities, games, worksheets, and textbooks. In fact, most teachers use these three Rs without much thought because they are a logical approach to teaching. If a teacher overlooks the first R, Reason, though, there is a strong chance that students aren’t learning in the most efficient way.
Why is Reason the most important of the four Rs?
The biggest difference between native speakers learning English and ESL/EFL students learning English is that native speakers are constantly immersed in the language, but ESL/EFL students may not have any exposure outside of their English class. This is only 40 – 50 minutes, once per week, in the case of many English conversation schools in Japan. Therefore, having the most efficient curriculum possible is necessary to master speaking, listening, reading, and writing. If a teacher doesn’t have a strong reason for doing a certain activity in a lesson, it could lead to an ineffective use of time and hindered student progress. At our English conversation school in Japan, Step by Step Eikaiwa, we are constantly thinking of ways to improve our teaching efficiency and make sure we have a solid reason for all our activities. The same thought process goes into making all BINGOBONGO Learning’s resources as effective as possible as well.
To sum everything up, the four Rs of ESL/EFL English education are useful for planning effective lessons and designing high-quality teaching resources. The four Rs can be implemented in many ways, and there's no one correct approach to building a curriculum. Anytime you plan a lesson, do an activity, or assign homework, make sure to apply the four Rs, starting with having a solid reason for each activity or assignment. This will allow you to improve your lessons and build a well-designed, efficient curriculum.
-Try out the selected Science Inquiry lessons!-
Click on "Click Here" below to go through a given lesson or activity.
Activities & Lessons for Middle School Students
1. Prerequisite lessons for Science Inquiry (appropriate for elementary and middle school students):
The Scientist and the Engineer Lesson
This lesson helps elementary-aged students understand the difference between a scientist and an engineer. They also learn about creating a controlled experiment and get to see a good (controlled) experimental set-up at the end of the lesson. Students can then run the experiment if they wish.
Experiment-related Vocabulary Lesson
This interactive lesson introduces students to basic vocabulary words related to experimental design and science inquiry in general. Students are given feedback on their responses throughout this lesson.
2. Lessons on selecting good research questions:
Research Question Lesson
This lesson helps students understand what a “good” general research question is, one that allows them to design a controlled experiment. They will learn about the proper format of a research question that includes an independent variable and dependent variable. They will also gain practice considering whether the variables they’ve selected for their experiment are easily measurable.
Picking a Good Research Question: Worked Example mode
Students often select research questions that are too easy or too difficult for them. This lesson guides students through the process of selecting a research question that is appropriate for their knowledge level. Students will gain metacognitive knowledge and skills as they watch a virtual student work through this process, explaining her choices.
Picking a Good Research Question: Guided Response mode
In the Research Question (RQ) module, students can select a research question from eight different topics within four different science areas and view animations for one trial run of a chosen experiment. Students can design experiments for their chosen research question in the TED Tutor (below).
3. Lessons/materials for supporting background research:
Plan Your Research Lesson (overview)
The Plan Your Research Lesson provides students with important tips on making an overall plan for doing background research in the Background Research Module, including what information to find out about.
Background Research: Worked Example
A virtual “high school student” conducts research on the questions he had developed for his research question earlier. For each question, the virtual student follows a cycle of (a) considering what he already knows about the question, (b) planning and searching for information, (c) reading and explaining relevant text, (d) taking notes of relevant information, and (e) summarizing the information (e.g., in text and/or pictorial form).
Background Research Module
The Background Research Module (BRM) consists of units on various science concepts relevant to forming hypotheses for the research question students chose in the RQ Module. These units are written to be understandable to students at a middle school level, address common misconceptions about science concepts, and are consistent with the Next Generation Science Standards.
Throughout the units are embedded questions with immediate feedback to assess student learning and games that reinforce learning.
For more info about the BRM click here.
-Some Audio Available-
4. Assessments and lesson for instruction on experimental design:
TED pretest: Design and Evaluate experiments
The TED pretest features 6 word problems: 3 asking students to design and 3 asking students to evaluate experiments and correct them (if necessary). Students receive one point for each experiment they correctly design, one point for each correct evaluation, and one point for each correction they make (for a total of 9 points).
**Scores are shown at the end. Students who score less than about 6 out of 9 on this may benefit from working through the TED Tutor.
TED (Training in Experimental Design) Tutor
The TED Tutor provides instruction on experimental design in the science area and topic students selected (in the Research Question module). Students can select the area, topic, and variable they chose in the RQ module at the beginning of this TED Tutor. Students design and evaluate given experiments and receive immediate feedback on their responses.
(The TED Tutor is currently being further developed and refined, but this demo works.)
TED posttest: Design and Evaluate experiments
The TED posttest is similar to the TED pretest but the experiments are in novel domains; this allows us to determine if students have developed a robust understanding of experimental design.
**As with the TED pretest, students can see their scores at the end (out of 9).
5. Other activities related to experiment-based science inquiry:
This lesson helps students understand how to create and interpret histograms, which is important in interpreting experimental data outcomes. The lesson also includes a short assessment, provides corrective feedback to students, and gives a final quiz score.
Data Interpretation Worked Example Lesson
This lesson helps students understand how to interpret data. It also includes multiple questions and corrective feedback for students.
Science Fair Poster Analysis Activity
This activity gives students opportunities to evaluate and analyze hypothetical students' science fair posters, which include common mistakes (actual) students make. This activity also allows students to create their own posters. Students are given feedback on their responses in this activity and an overall performance score at the end. |
The Greatest Common Factor or GCF is the largest number that is a factor of two or more numbers. Dividing by these factors results in natural numbers, and they are essential for developing knowledge of multiplication and factors for attempting any equation in future. Students are aware of this basic concept; however, it is integral that they study it in detail to attempt mathematical problems with ease.
GCF offers students with a clear idea of factors and multiples of each number. It also enables them to understand which number is divisible and which isn't.
What is a Common Factor?
Before deciphering the meaning of common factors, students must clearly understand what a factor is. A number that divides another number exactly, without leaving a remainder, is a factor of that number. Most numbers have an even number of factors, while square numbers have an odd number of factors. To comprehend the concept of a common factor, you will need a proper example.
Factors of number 10 are 1, 2, 5 and 10 while 15 are 1, 3, 5 and 15. The common factor here is 1 and 5.
Therefore, a common factor is a number that divides two or more numbers without leaving behind a remainder. Often numbers share more than one common factor. Students can also attempt to find common factors of more than two numbers with ease.
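For readers who want to check such examples programmatically, here is a small illustrative Python sketch (my addition, not part of the lesson) that lists each number's factors and intersects them to find the common factors of 10 and 15:

```python
# List each number's factors, then intersect the sets to get the common factors.

def factors(n):
    return {d for d in range(1, n + 1) if n % d == 0}

print(sorted(factors(10)))                 # [1, 2, 5, 10]
print(sorted(factors(15)))                 # [1, 3, 5, 15]
print(sorted(factors(10) & factors(15)))   # [1, 5] -> the common factors
```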
What is the Greatest Common Factor?
The greatest common factor of two or more numbers is the largest integer that divides each of them without leaving a remainder. When you factorise two numbers, you get certain common integers, and among these common factors the highest is called the GCF.
Here is an explanation to help you understand what is GCF:
Three numbers 18, 30, and 42.
Among these numbers, factors of 18 are 1, 2, 3, 6, 9, and 18.
Factors of 30 are 1, 2, 3, 5, 6, 10, 15 and 30.
Finally, factors of 42 are 1, 2, 3, 6, 7, 14, 21 and 42.
By deriving each of these numbers' factors, we see the common factors are 1, 2, 3 and 6. Among these factors, the GCF is 6.
Therefore, it is the highest factor that divides all three numbers.
How to Find GCF?
The process of finding the Greatest Common Factor is simple and requires students to be acquainted with its formula. Students will need to have proper knowledge of multiplication and division when attempting these equations.
Before you learn how to find the greatest common factor of two or more numbers, you will need to list all the prime factors of each number. Multiplying the prime factors common to both numbers gives the GCF. If a student finds no common prime factors, the GCF is simply 1.
To see how the greatest common factor is derived, consider the following example.
For the numbers 18 and 24, the GCF is 6.
This is because the prime factors common to both numbers are 2 and 3, and 2 × 3 = 6.
The process of finding the GCF for two or more numbers can be easy. However, the method involves several steps, which is why students also need to comprehend these steps and the related equations.
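To make these steps concrete, here is a short, hedged Python sketch. It uses the Euclidean algorithm, a standard shortcut that avoids listing every factor, so treat it as an optional companion to the factor-based method described above rather than the lesson's own procedure:

```python
# Euclidean algorithm: repeatedly replace the pair (a, b) with (b, a % b)
# until the remainder is zero; the last non-zero value is the GCF.

def gcf(a, b):
    while b:
        a, b = b, a % b
    return a

print(gcf(18, 24))            # 6
print(gcf(gcf(18, 30), 42))   # 6 -> GCF of three numbers, taken pairwise
```

Applying it pairwise reproduces both results above: 6 for 18 and 24, and 6 for 18, 30 and 42.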
What is the Difference Between Highest Common Factor and Greatest Common Factor?
The highest common factor or HCF is the greatest common divisor (GCD) of two or more positive integers. Among the common factors, whichever is highest is taken as the HCF. The highest common factor divides the numbers exactly and doesn't leave behind any remainder.
Here is an example of HCF:
Let's take two numbers, 8 and 12.
Factors of 8 are 1, 2, 4, and 8. Factors of 12 include 1, 2, 3, 4, 6, and 12.
Here the HCF of the two integers is 4. This is because 4 is the highest factor that is common to, and divides, both numbers.
Another concept that students will study while going through this topic is the lowest common multiple, or LCM. As the name suggests, it refers to the smallest positive integer that is divisible by each of the given whole numbers.
For example, consider two numbers 4 and 6.
Multiples of 4 include 4, 8, 12, 16, 20, 24, etc. On the other hand, multiples of 6 comprise 6, 12, 18, 24, 30, etc.
Therefore, the common multiples here are 12, 24, 36, 48, etc., and the least common multiple is 12.
So, what is the GCF? The greatest common factor, or GCF, is the greatest integer that divides the given numbers without leaving behind any remainder.
To understand how to get GCF, let's take the previous example.
The factors of 4 are 1, 2, and 4, while the factors of 6 are 1, 2, 3, and 6; the common factors are 1 and 2, so the GCF is 2. This is how you find the GCF.
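As an optional check (my own addition, not part of the lesson), Python's standard library can verify both results; the identity in the last line, gcd(a, b) × lcm(a, b) = a × b, is a general fact relating the GCF and LCM of two positive integers:

```python
# Verify the GCF and LCM of 4 and 6 using the standard library.
import math

print(math.gcd(4, 6))   # 2
print(math.lcm(4, 6))   # 12  (math.lcm requires Python 3.9+)
print(math.gcd(4, 6) * math.lcm(4, 6) == 4 * 6)  # True
```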
Which App to Turn to When Studying Mathematical Equations?
Here is a list of several reasons you should use this application to study the greatest common factor definition.
Application comprises videos that use illustrations to explain various mathematical topics.
Languages in these live classes are simple and easy to comprehend for students.
There are various exercises that students need to solve. Based on their performance, students receive points.
By attempting these classes, the learning process becomes a bit smoother and easier for students.
Teachers in online classes use plenty of examples to help keep students engaged and intrigued.
Therefore, by studying this chapter through the Vedantu app, you can develop a clear understanding of the GCF definition.
Although there are natural forces that cause fluctuations in ozone amounts, there is no evidence that natural changes are contributing significantly to the observed long-term trend of decreasing ozone.
The formation of stratospheric ozone is initiated by ultraviolet (UV) light coming from the Sun. As a result, the Sun's output affects the rate at which ozone is produced. The Sun's energy release (both as UV light and as charged particles such as electrons and protons) does vary, especially over the well-known 11-year sunspot cycle. Observations over several solar cycles (since the 1960s) show that total global ozone levels vary by 1-2% from the maximum to the minimum of a typical cycle. However, changes in the Sun's output cannot be responsible for the observed long-term changes in ozone, because the ozone downward trends are much larger than 1-2%. As the figure below shows, since 1978 the Sun's energy output has gone through maximum values in about 1980 and 1991 and minimum values in about 1985 and 1996. It is now increasing again toward its next maximum around the year 2002. However, the trend in ozone was downward throughout that time. The ozone trends presented in this and previous international scientific assessments have been obtained by evaluating the long-term changes in ozone after accounting for the solar influence (as has been done in the figure below).
Major, explosive volcanic eruptions can inject material directly into the ozone layer. Observations and model calculations show that volcanic particles cannot on their own deplete ozone. It is only the interaction of human-produced chlorine with particle surfaces that enhances ozone depletion in today's atmosphere.
Specifically, laboratory measurements and observations in the atmosphere have shown that chemical reactions on and within the surface of volcanic particles injected into the lower stratosphere lead to enhanced ozone destruction by increasing the concentration of chemically active forms of chlorine that arise from the human-produced compounds like the chlorofluorocarbons (CFCs). The eruptions of Mt. Agung (1963), Mt. Fuego (1974), El Chichón (1982), and particularly Mt. Pinatubo (1991) are examples. The eruption of Mt. Pinatubo resulted in a 30- to 40-fold increase in the total surface area of particles available for enhancing chemical reactions. The effect of such natural events on the ozone layer is then dependent on the concentration of chlorine-containing molecules and particles available in the stratosphere, in a manner similar to polar stratospheric clouds. Because the particles are removed from the stratosphere in 2 to 5 years, the effect on ozone is only temporary, and such episodes cannot account for observed long-term changes. Observations and calculations indicate that the record-low ozone levels observed in 1992-1993 reflect the importance of the relatively large number of particles produced by the Mt. Pinatubo eruption, coupled with the relatively higher amount of human-produced stratospheric chlorine in the 1990s compared to that at times of earlier volcanic eruptions. |
How to Teach Decimal Addition and Subtraction
How do you teach adding and subtracting decimals in upper elementary or middle school math?
In 6th grade, my math students have typically come to me knowing the 'rules' for adding and subtracting decimals.
However, when the number of digits in the numbers they're adding or subtracting isn't the same, they don't necessarily line the numbers up the way they need to...even though they 'know' the rules. Why is this?
I believe it's because they really don't understand the point of 'lining up the decimal points.'
My belief is reinforced by student comments I collected one year as we began our decimal operations unit.
I asked my 6th grade math students to solve 35.2 + 7.489 and then explain why their answer made sense. These are a few of their responses:
Of the 120 students in my classes, only 8 said the answer made sense because "35 + 7 is 42" or because "I estimated" or "when we're doing addition, we know we end up with a bigger number."
I don't want to assume that students who didn't write this didn't think about those things at all, but to the majority of students, their answers "made sense" when they followed the rules - even if they didn't remember the rules correctly.
What's the Point?
What's the purpose of lining up the decimal points?
Lining up the decimal points helps us line up the place values so that place values are added with or subtracted from 'like' place values.
If students don't understand the point of lining up the decimal points when adding and subtracting decimals, then somehow they've missed the idea of place value.
And what do we do if the numbers don't even HAVE a decimal point?? (Some students get pretty lost when that happens.)
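For teachers who want a quick demonstration tool, here is a small illustrative Python sketch (my own, not from the original post) that mirrors the 'estimate first, then compute exactly' idea using the 35.2 + 7.489 example from earlier:

```python
# Estimate first for a reasonableness check, then compute exactly with the
# decimal module, which keeps place values explicit.
from decimal import Decimal

a, b = Decimal("35.2"), Decimal("7.489")
estimate = round(a) + round(b)   # 35 + 7 = 42
exact = a + b                    # 42.689

print(estimate, exact)  # 42 42.689 -> the exact answer is close to the estimate
```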
Start With Estimating When Adding or Subtracting Decimals
Because I believe students don't understand the point of lining up the decimal point, I teach them to add and subtract decimals by doing the following:
Tools for Teaching Decimal Addition and Subtraction
One of my favorite ways to teach the addition and subtraction process is to use notes that emphasize estimating.
I include this in my unit notes and practice, and I also use math wheels for note-taking.
The math wheels break the processes into steps and allow room for examples and practice. They also give students a chance to color-code, doodle, color, and add memory triggers to their notes. Then they can keep them in the notebooks all year for reference.
Adding Decimals Wheel
Subtracting Decimals Wheel
A couple practice decimal activities:
There are a couple free decimal activities found in the blog posts linked below that I've used to give my students some additional decimal operations practice.
Decimal Operations Problem Solving
Decimal Practice with Number Puzzles
What are your tried and true methods for teaching adding and subtracting decimals?
Resources to Practice Adding and Subtracting Decimals:
To Read Next:
Hey there! I'm Ellie - here to share math fun, best practices, and engaging, challenging, easy-prep activities ideas! |
When neuroscientists used a monkey's thoughts to control computers, it was a huge breakthrough in mind-machine research. But harnessing brain waves has become even more complex now that humans are the subjects. Recently, researchers used the thoughts of one human's brain to control the physical actions of another. Really. As a panel of experts explained at the World Science Festival this week, brain-to-brain linkups are just getting started.
The field got its start in 1998 in the lab of Miguel Nicolelis, a Brazilian researcher working at Duke University. Before Nicolelis started experimenting with the brain, scientists were measuring the electrical output of a single neuron at a time. But Nicolelis and his colleagues began recording information from the brains of rats, where they discovered that to make their bodies move, rat brains would fire 48 neurons at a time. Believing that they could advance their understanding further, Nicolelis and his team then turned to monkeys.
They recorded 100 neurons firing at once in the brain of a monkey. Believing they might be able to take this data and use it to perform a task, the team connected a probe into the area of the monkey's brain that controlled for arm movement. Then they gave the monkey a game to play: Using a joystick, the monkey moved a dot around on a screen until it entered a circle in the center. When the monkey moved the dot into the correct location, she received a reward of juice. Once they recorded the brain patterns that resulted from the movement, the team took the joystick away. The monkey was now able to move the dot around simply by imagining it move.
"Somehow she figured out that she could just imagine. She realized this is the prototype of a free lunch," Nicolelis said. The innovation was the grandfather of the brain-to-brain interface. "This was the first time a primate's brain liberated itself from the body," he said.
After Nicolelis's study, other neuroscientists began taking the work to humans. In 2013, Chantel Prat and Andrea Stocco, both researchers at the University of Washington Institute for Learning and Brain Sciences, wanted to see if they could send a message to control physical movement from one brain to another. Because it's a breach of research ethics to connect probes directly into a living human brain, they had to figure out how to do it using non-invasive techniques.
Using an electroencephalography (EEG) cap, which records brain activity, they positioned two researchers in separate areas of the campus. In one room a colleague, Rajesh Rao, played a videogame using his mind. Each time Rao saw an enemy he wanted to shoot in the game he would think about pressing a button. Across campus, Stocco sat with his back to the same video game while wearing noise-canceling headphones so he wouldn't know when to respond. On his head was a transcranial magnetic stimulation coil (a device that can emit a focused electrical current), which was positioned directly over the part of the brain that controlled the movement of his finger. When Rao thought about moving his finger, the signal was transmitted across campus to Stocco who, without any knowledge of it, would twitch his finger and trigger the game to shoot an enemy.
"The first time I didn't even realize my hand had moved. I was just waiting for something to happen," said Stocco.
That reaction, Prat says, is an important aspect of this science. "There is this idea that I would like to dispel. This is not the X-Men version of telepathy where you hear a disembodied voice. My brain would have no way of knowing that your thoughts are mine. Whatever shape [future brain-to-brain communication] takes is going to be very different than listening to someone's thoughts in your head."
The neuroscientists all agreed that, while this technology is still rudimentary, there are implications for future uses. Nicolelis, for example, has adapted the brain-to-machine interface to help paralyzed patients walk by using their brain signals to control prosthetic devices. He says that over the two years he's been working with them, several of his patients have recovered some sensory ability in their paralyzed lower limbs. "The conjunction of output to control device and feedback may have triggered axons that survived to start working again," he says.
Prat, who is especially interested in the differences between individual brains, believes that the technology could also eventually be used to improve learning by harnessing the EEG's ability to distinguishing between a brain that is focusing and one that is "zoning out." That way, perhaps in the future, when a "good learner" starts to focus on a learning task their brain can trigger someone who is not paying attention to focus in on the task at hand. Brain-to-brain communication, she says, may one day be especially good at transmitting a state of mind.
In the end the researchers agreed that despite the technology's many potential benefits one future we won't see is one in which you can connect your brain into a computer and download all the Earth's knowledge. According to Nicolelis, downloading massive amounts of data or mimicking telepathy will be impossible because the brain is just too complex.
"I don't think we will ever be able to broadcast from one brain to another the essence of the human condition. We don't even know how to record those things let alone broadcast them and then interpret that broadcast. We love analogies, metaphors, expecting things, and predicting things. These thing are not in algorithms. We're not going to be broadcasting my dreams to your head." |
- 1 Who is the founder of lesson plan?
- 2 What is Ganag?
- 3 Who introduced five step lesson plan?
- 4 What are the 5 methods of teaching?
- 5 What are the 5 parts of lesson plan?
- 6 How do you plan a lesson from start to finish?
- 7 How do you plan a primary school lesson?
- 8 What is the name of Herbart's lesson plan?
- 9 What are the three teaching method?
- 10 What is best method of teaching?
- 11 What are the 4 teaching styles?
Who is the founder of lesson plan?
Herbartian approach: Friedrich Herbart (1776-1841)
What is Ganag?
GANAG is a lesson schema which includes a set of sequenced steps; it allows teachers to plan for students to use the nine high yield research-based instructional strategies.
Who introduced five step lesson plan?
Herbart advocated five formal steps in teaching: (1) preparation—a process of relating new material to be learned to relevant past ideas or memories in order to give the pupil a vital interest in the topic under consideration; (2) presentation—presenting new material by means of concrete objects or actual experience; (
What are the 5 methods of teaching?
Teacher-Centered Methods of Instruction
- Direct Instruction (Low Tech)
- Flipped Classrooms (High Tech)
- Kinesthetic Learning (Low Tech)
- Differentiated Instruction (Low Tech)
- Inquiry-based Learning (High Tech)
- Expeditionary Learning (High Tech)
- Personalized Learning (High Tech)
- Game-based Learning (High Tech)
What are the 5 parts of lesson plan?
The 5 Key Components Of A Lesson Plan
How do you plan a lesson from start to finish?
Listed below are 6 steps for preparing your lesson plan before your class.
- Identify the learning objectives.
- Plan the specific learning activities.
- Plan to assess student understanding.
- Plan to sequence the lesson in an engaging and meaningful manner.
- Create a realistic timeline.
- Plan for a lesson closure.
How do you plan a primary school lesson?
The Ultimate Lesson Planning Checklist for Primary School Teachers
- 1 – Be passionate about the subject you are teaching.
- 2 – Take into account the different needs and requirements of your pupils.
- 3 – Make sure the lesson is relevant and give it context.
- 4 – Set out clear, simple objectives.
- 5 – Have a backup plan.
What is the name of Herbart's lesson plan?
Herbart’s pedagogical method was divided into discrete steps: preparation, presentation, association, generalization, and application. In preparation, teachers introduce new material in relation to the students’ existing knowledge or interests, so as to instill an interest in the new material.
What are the three teaching method?
It is helpful to think of teaching styles according to the three Ds: Directing, Discussing, and Delegating.
- The directing style promotes learning through listening and following directions.
- The discussing style promotes learning through interaction.
- The delegating style promotes learning through empowerment.
What is best method of teaching?
7 Effective Teaching Strategies For The Classroom
- Cooperative learning.
- Inquiry-based instruction.
- Technology in the classroom.
- Behaviour management.
- Professional development.
What are the 4 teaching styles?
In this article, we’ll explore each of the above teaching types to question how effective they are in engaging pupils in the learning process.
- The Authority Style.
- The Delegator Style.
- The Facilitator Style.
- The Demonstrator Style.
- The Hybrid Style. |
Bennett and Kell’s 1989 study described poor classroom organization and its effects, which showed in a lack of pupil involvement in the lessons (with some pupils wandering about inanely), interruptions which disrupted the whole class, and a general lack of interest or motivation on the part of the pupils.5 Children played about without the teacher apparently being aware of it. There was little or no teacher control.
The key way in which teacher control can be improved is through the organization of the classroom; this is viewed by many educationalists as the Holy Grail. Currently, educationalists recognize four main types of classroom organization that take place in primary schools: whole class, individual, paired and group working.
Whole class teaching is where all the pupils undertake the same activity, at the same time, whilst usually being addressed by the teacher positioned at the front of the room. This is successful for starting and ending the day, for giving out administrative instructions, general teaching, extending and reviewing work, and controlling the pupils during unruly periods of the day. The whole class can be organized so that everyone is being taught the same thing at the same time. This type of organization is particularly useful where a lot of discussion is required. Group or individual work often follows this, with children coming together again to discuss and review what they have been doing during individual or smaller group work.
Individual work will often follow a whole class briefing. This process is thought to be particularly useful for developing children’s ability to work independently at their own pace through a structured work scheme. Children may work on individual tasks which may be of their own creation or an interpretation of a group theme suggested by the teacher. Paired as opposed to individual working allows children to collaborate on a task with one other pupil. This not only helps by making different aspects of a problem more explicit through collaboration in a limited and controlled form, but it also helps to develop each child’s language ability.
There are many situations when a class of children needs to be divided in order to undertake particular activities. A powerful argument for grouping is that it encourages collaboration and supports the interactions and discussions through which much learning and socialization develops. It also helps with competency in social and language skills and as a means by which pupils can support, challenge and extend their learning together, through problem solving or working on a joint creative task. Different types of grouping are needed for different activities and children should have the opportunity to be part of a variety of groupings; ideally groupings should be flexible and varied. There are seven types of grouping arrangements: grouping by age, ability grouping, developmental grouping, grouping by learning need, interest groups, social learning groups and friendship groups.6
Learning activities can be thought of as falling into five categories. The activities differ in many respects including variable factors such as the number of pupils involved, the interactions they involve and the nature of the attention they require. However, the key groupings can be summarized as follows: 2
3 In small groups;
4 As a whole class;
5 Or, when not with their teacher, alone or in collaboration.
It is also clear from the literature reviewed that the use of these types of activity differs, with individual work and whole class teaching tending to feature most prominently. While group seating makes sense for two of the five types of learning activity, it is not suited for individual work.7 A balance needs to be struck regarding the time spent on individual work, whole classwork and smaller group work. This must be organized with regard to both pedagogical and practical considerations relating to the space in which it takes place.
Barker (1978) and Bronfenbrenner (1979) have discussed the importance of the quality of the environment and the fact that it can influence behaviour, a view which is commonly stated by teachers.8 Space in classrooms is often limited and must be utilized with great skill to enable the activities, which form essential components of the primary school curriculum, to take place effectively. The organization of space may have a profound effect on learning because pupils tend to feel connected to a school that recognizes their needs through the provision of good architecture and good resources:
When children experience a school obviously designed with their needs in mind, they notice it and demonstrate a more natural disposition towards respectful behaviour and a willingness to contribute to the classroom community.9
It is axiomatic that a beautifully designed school, like any public building, is good for its users. However, there is much anecdotal evidence supporting the view that new ‘landmark school architecture’ does not always satisfy its users functionally. Architects do not get the classroom design right, often as a result of too little consultation. In the primary school classroom the teachers’ task is to ensure that children experience the curriculum, develop and learn and are seen to be making progress. Therefore the presentation of children’s work is most important and should be constantly updated. The primary school classroom should be aesthetically pleasing; stimulate children’s interests; set high standards in display and presentation of children’s work; and be designed in such a way that the room can be easily cleaned and maintained.10
Educational attainment has been shown to correlate with spending levels in each locality, so that in theory the higher the resource provision, the higher the attainment and the greater the educational life chances in that area. Investment in UK schools comes about via a complex combination of school-based decisions, numbers of pupils on the roll and the priority given to education by national and local government at the time. Presently within the UK, the quality of education and the buildings that support it have been widely condemned and with such obviously badly maintained old buildings, pupils and their parents can readily see how little investment there has been in education over the years. This has a great political significance, hence a lot of new capital investment is now beginning to happen within the UK.
In educational terms ‘resources’ are materials and equipment used in the classroom (as opposed to the buildings) and the quality of learning experiences will be directly affected by their provision. Materials include things such as paper and pencils and can be considered as consumables. Equipment is also very significant in primary education because it is usually through the use of appropriate equipment that the pupils get enhanced learning experiences. Both in quality and quantity these resources have an impact on what it is possible to do in classrooms. A good supply of appropriate resources is essential. However, these older research studies referred to here do not consider ICT (information communications technology) in any great depth, a recent and profoundly important dimension which now also needs to be considered as part of the resource structure.
There are three criteria that must be considered when organizing resources:12
2 Availability. What resources are available? What is in the classroom, the school, the community, businesses, libraries, museums, local resource centres? Are there cost, time or transport factors to be considered?
3 Storage. How are classroom resources stored? Which should be under teacher control? Which should be openly available to the children? Are resources clearly labelled and safely stored?
Clearly, an effective classroom needs to be designed ergonomically so that storage is designed into the architecture in an appropriate, safe and accessible form. Close discussion with teachers will enable this to happen.
As previously stated, the way in which time is used in the classroom is very important. Pupil progress is undoubtedly related to the time that is made available for effective 'curriculum activity'. However, many educationalists believe that the amount of pupil time spent in 'active learning' is more important. This is a qualitative criterion, not a quantitative one, in that it implies a more positive, engaged learning mode for the pupil. In order to maintain active, engaged learning, an appropriate variety of activities offered within the classroom is necessary. This has clear spatial implications, for example, the availability of discrete work bays off the main teaching space or separate study areas to support pupils with special needs.
Findings from Pollard’s 1994 study showed considerable variations between the proportion of pupil time spent in different modes and various levels of pupil engagement in passive as opposed to active learning in various classroom situations.13 Mortimore et al. (1988) noted that between 66 and 75 per cent of teachers used a fairly precise timetable to order the activities during each session and noted that the older the children the more organization and lesson planning was required.14 The study found that managerial aspects of a teacher’s job took approximately 10 per cent of the time available within each teaching period.
The establishment of the UK National Curriculum in 1988, the need for public accountability and the subsequent numeracy and literacy strategies developed successfully since then have brought about an even more rigid allocation of time within the classroom environment. A study by Campbell and Neill (1994) illustrated the important concept of 'time available for teaching'. They show that almost 10 per cent of teaching time is lost as 'evaporated time' in the management of classroom activities, which is necessary to create teaching and learning opportunities within the framework of the increasingly prescriptive educational curriculum.15 However, it was not estimated how much time was lost to teaching as a result of poor environmental conditions.
This is the first intellectual biography of Descartes in English. Stephen Gaukroger provides a rich, authoritative account of Descartes' intellectual and personal development, understood in its historical context, and offers a reassessment of all aspects of his life and work.
René Descartes (1596-1650) is the father of modern philosophy, and one of the greatest of all thinkers. This is the first intellectual biography of Descartes in English; it offers a fundamental reassessment of all aspects of his life and work. Stephen Gaukroger, a leading authority on Descartes, traces his intellectual development from childhood, showing the connections between his intellectual and personal life and placing these in the cultural context of seventeenth-century Europe. Descartes' early work in mathematics and science produced ground-breaking theories, methods, and tools still in use today. This book gives the first full account of how this work informed and influenced the later philosophical studies for which, above all, Descartes is renowned. Not only were philosophy and science intertwined in Descartes' life; so were philosophy and religion. The Church of Rome found Galileo guilty of heresy in 1633; two decades earlier, Copernicus' theories about the universe had been denounced as blasphemous. To avoid such accusations, Descartes clothed his views about the relation between God and humanity, and about the nature of the universe, in a philosophical garb acceptable to the Church. His most famous project was the exploration of the foundations of human knowledge, starting from the proof of one's own existence offered in the formula Cogito ergo sum, `I am thinking therefore I exist'. Stephen Gaukroger argues that this was not intended as an exercise in philosophical scepticism, but rather to provide Descartes' scientific theories, influenced as they were by Copernicus and Galileo, with metaphysical legitimation. This book offers for the first time a full understanding of how Descartes developed his revolutionary ideas. It will be welcomed by all readers interested in the origins of modern thought.
Towards the end of his life, Descartes published the first four parts of a projected six-part work, The Principles of Philosophy. This was intended to be the definitive statement of his complete system of philosophy. Gaukroger examines the whole system, and reconstructs the last two parts from Descartes' other writings.
Of all the thinkers of the century of genius that inaugurated modern philosophy, none lived an intellectual life more rich and varied than Gottfried Wilhelm Leibniz (1646–1716). Maria Rosa Antognazza's pioneering biography provides a unified portrait of this unique thinker and the world from which he came. At the centre of the huge range of Leibniz's apparently miscellaneous endeavours, Antognazza reveals a single master project lending unity to his extraordinarily multifaceted life's work. Throughout the vicissitudes of his long life, Leibniz tenaciously pursued the dream of a systematic reform and advancement of all the sciences. As well as tracing the threads of continuity that bound these theoretical and practical activities to this all-embracing plan, this illuminating study also traces these threads back into the intellectual traditions of the Holy Roman Empire in which Leibniz lived and throughout the broader intellectual networks that linked him to patrons in countries as distant as Russia and to correspondents as far afield as China.
Biography & Autobiography by Geneviève Rodis-Lewis
This major intellectual biography illuminates the personal and historical events of Descartes's life, from his birth and early years in France to his death in Sweden, his burial, and the fate of his remains. Concerned not only with historical events but also with the development of Descartes's personality, Rodis-Lewis speculates on the effect childhood impressions may have had on his philosophy and scientific theories. She considers in detail his friendships, particularly with Isaac Beeckman and Marin Mersenne. Primarily on the basis of his private correspondence, Rodis-Lewis gives a thorough and balanced discussion of his personality. The Descartes she depicts is by turns generous and unforgiving, arrogant and open-minded, loyal in his friendships but eager for the isolation his work required. Drawing on Descartes's writings and his public and private correspondence, she corrects the errors of earlier biographies and clarifies many obscure episodes in the philosopher's life.
Towards the end of his life, Descartes published the first four parts of a projected six-part work, The Principles of Philosophy. This was intended to be the definitive statement of his complete system of philosophy, dealing with everything from cosmology to the nature of human happiness. Stephen Gaukroger examines the system, and reconstructs the last two parts, "On Living Things" and "On Man", from Descartes' other writings. He relates the work to the tradition of late Scholastic textbooks which it follows, and also to Descartes' other philosophical writings.
René Descartes is best remembered today for writing 'I think, therefore I am', but his main contribution to the history of ideas was his effort to construct a philosophy that would be sympathetic to the new sciences that emerged in the seventeenth century. To a great extent he was the midwife to the Scientific Revolution and a significant contributor to its key concepts. In four major publications, he fashioned a philosophical system that accommodated the needs of these new sciences and thereby earned the unrelenting hostility of both Catholic and Calvinist theologians, who relied on the scholastic philosophy that Descartes hoped to replace. His contemporaries claimed that his proofs of God's existence in the Meditations were so unsuccessful that he must have been a cryptic atheist and that his discussion of skepticism served merely to fan the flames of libertinism. This is the first biography in English that addresses the full range of Descartes' interest in theology, philosophy and the sciences and that traces his intellectual development through his entire career.
Exactly four hundred years after the birth of René Descartes (1596-1650), the present volume now makes available, for the first time in a bilingual, philosophical edition prepared especially for English-speaking readers, his Regulae ad directionem ingenii / Rules for the Direction of the Natural Intelligence (1619-1628), the Cartesian treatise on method. This unique edition contains an improved version of the original Latin text, a new English translation intended to be as literal as possible and as liberal as necessary, an interpretive essay contextualizing the text historically, philologically, and philosophically, a comprehensive index of Latin terms, a key glossary of English equivalents, and an extensive bibliography covering all aspects of Descartes' methodology. Stephen Gaukroger has shown, in his authoritative Descartes: An Intellectual Biography (1995), that one cannot understand Descartes without understanding the early Descartes. But one also cannot understand the early Descartes without understanding the Regulae / Rules. Nor can one understand the Regulae / Rules without understanding a philosophical edition thereof. Therein lies the justification for this project. The edition is intended, not only for students and teachers of philosophy as well as of related disciplines such as literary and cultural criticism, but also for anyone interested in seriously reflecting on the nature, expression, and exercise of human intelligence: What is it? How does it manifest itself? How does it function? How can one make the most of what one has of it? Is it equally distributed in all human beings? What is natural about it, and what, not? In the Regulae / Rules Descartes tries to provide, from a distinctively early modern perspective, answers both to these and to many other questions about what he refers to as ingenium.
The first full-length study in English of Gassendi's life and work. I. The Man and his Work - II. Gassendi the Critic (separate chapters devoted to the Aristoteleans, Herbert of Cherbury and Descartes) - III. Gassendi the Philosopher. (Bibliotheca Humanistica & Reformatorica, Vol. XXXIV).
This accessible and highly readable book is the first full-length biography of Hegel to be published since the largely outdated treatments of the nineteenth century. Althaus draws on new historical material and scholarly sources about the life and times of this most enigmatic and influential of modern philosophers. He paints a living portrait of a thinker whose personality was more complex than is often imagined, and shows that Hegel's relation to his revolutionary times was also more ambiguous than is usually accepted. Althaus presents a broad chronological narrative of Hegel's development from his early theological studies in Tubingen and the associated unpublished writings, profoundly critical of the established religious orthodoxies. He traces Hegel's years of philosophical apprenticeship with Schelling in Jena as he struggled for an independent intellectual position, up to the crowning period of influence and success in Berlin where Hegel appeared as the advocate of the modern Prussian state. Althaus tells a vivid story of Hegel's life and his intellectual and personal crises, drawing generously on the philosopher's own words from his extensive correspondence. His central role in the cultural and political life of the time is illuminated by the impressions and responses of his contemporaries, such as Schelling, Schleiermacher and Goethe. This panoramic introduction to Hegel's life, work and times will be a valuable resource for scholars, students and anyone interested in this towering figure of philosophy.
Stephen Gaukroger presents an original account of the development of empirical science and the understanding of human behaviour from the mid-eighteenth century. Since the seventeenth century, science in the west has undergone a unique form of cumulative development in which it has been consolidated through integration into and shaping of a culture. But in the eighteenth century, science was cut loose from the legitimating culture in which it had had a public rationale as a fruitful and worthwhile form of enquiry. What kept it afloat between the middle of the eighteenth and the middle of the nineteenth centuries, when its legitimacy began to hinge on an intimate link with technology? The answer lies in large part in an abrupt but fundamental shift in how the tasks of scientific enquiry were conceived, from the natural realm to the human realm. At the core of this development lies the naturalization of the human, that is, attempts to understand human behaviour and motivations no longer in theological and metaphysical terms, but in empirical terms. One of the most striking features of this development is the variety of forms it took, and the book explores anthropological medicine, philosophical anthropology, the 'natural history of man', and social arithmetic. Each of these disciplines re-formulated basic questions so that empirical investigation could be drawn upon in answering them, but the empirical dimension was conceived very differently in each case, with the result that the naturalization of the human took the form of competing, and in some respects mutually exclusive, projects.
Consisting of twelve newly commissioned essays and enhanced by William Molyneux’s famous early translation of the Meditations, this volume touches on all the major themes of one of the most influential texts in the history of philosophy. Situates the Meditations in its philosophical and historical context. Touches on all of the major themes of the Meditations, including the mind-body relation, the nature of the mind, and the existence of the material world.
The first book to address the historical failures of philosophy—and what we can learn from them Philosophers are generally unaware of the failures of philosophy, recognizing only the failures of particular theories, which are then remedied with other theories. But, taking the long view, philosophy has actually collapsed several times, been abandoned, sometimes for centuries, and been replaced by something quite different. When it has been revived it has been with new aims that are often accompanied by implausible attempts to establish continuity with a perennial philosophical tradition. What do these failures tell us? The Failures of Philosophy presents a historical investigation of philosophy in the West, from the perspective of its most significant failures: attempts to provide an account of the good life, to establish philosophy as a discipline that can stand in judgment over other forms of thought, to set up philosophy as a theory of everything, and to construe it as a discipline that rationalizes the empirical and mathematical sciences. Stephen Gaukroger argues that these failures reveal more about philosophical enquiry and its ultimate point than its successes ever could. These failures illustrate how and why philosophical inquiry has been conceived and reconceived, why philosophy has been thought to bring distinctive skills to certain questions, and much more. An important and original account of philosophy’s serial breakdowns, The Failures of Philosophy ultimately shows how these shortcomings paradoxically reveal what matters most about the field.
The institutionalization of History and Philosophy of Science as a distinct field of scholarly endeavour began comparatively early - though not always under that name - in the Australasian region. An initial lecturing appointment was made at the University of Melbourne immediately after the Second World War, in 1946, and other appointments followed as the subject underwent an expansion during the 1950s and 1960s similar to that which took place in other parts of the world. Today there are major Departments at the University of Melbourne, the University of New South Wales and the University of Wollongong, and smaller groups active in many other parts of Australia and in New Zealand. 'Australasian Studies in History and Philosophy of Science' aims to provide a distinctive publication outlet for Australian and New Zealand scholars working in the general area of history, philosophy and social studies of science. Each volume comprises a group of essays on a connected theme, edited by an Australian or a New Zealander with special expertise in that particular area. Papers address general issues, however, rather than local ones; parochial topics are avoided. Furthermore, though in each volume a majority of the contributors is from Australia or New Zealand, contributions from elsewhere are by no means ruled out. Quite the reverse, in fact - they are actively encouraged wherever appropriate to the balance of the volume in question.
This ebook is a selective guide designed to help scholars and students of social work find reliable sources of information by directing them to the best available scholarly materials in whatever form or format they appear, from books, chapters, and journal articles to online archives, electronic data sets, and blogs. Written by a leading international authority on the subject, the ebook provides bibliographic information supported by direct recommendations about which sources to consult and editorial commentary to make it clear how the cited sources are interrelated. This ebook is a static version of an article from Oxford Bibliographies Online: Philosophy, a dynamic, continuously updated, online resource designed to provide authoritative guidance through scholarship and other materials relevant to the study of Philosophy. Oxford Bibliographies Online covers most subject disciplines within the social sciences and humanities; for more information visit www.oxfordbibligraphies.com.
Publisher: Library and Archives Canada = Bibliothèque et Archives Canada
In this dissertation I deal with a particular problem in the historiography of science: the image of Descartes as an early modern scientist. For many, this entails seeing Descartes as a materialist and atheist, who inserted God and faith into his philosophy only to slip it past the authorities. This means that, in reality, Descartes adhered exclusively to the first rule of his method-to give credence only to reason-and that, consequently, he rejected the claims of religion. Any professions of faith found in his work are, therefore, a sham. This is known as the dissimulation hypothesis. I trace the history of this image of Descartes as dissimulator in its most prominent twentieth-century manifestations, beginning with French philosophy at the turn of the century, then going through the Anglo-American discussion of the issue, up to the place of this image in current Straussian political philosophy (the strong version of dissimulation), and in the latest biographical literature. By means of an exhaustive survey of the central philosophical problems found in the primary and secondary sources, I show that while Descartes was careful about his manner of self-presentation in both his life and his work, the strong version of dissimulation adopted by the esoteric Straussian School, which sees Descartes primarily as an atheist, is deeply flawed. I thus reject the dissimulation hypothesis, as well as the image of Descartes as an early-modern scientist (prominent in Stephen Gaukroger's intellectual biography), and suggest that Richard Watson's popular Cogito Ergo Sum: A Life of Rene Descartes, although occasionally going too far in its skepticism, points the way to a more complex and historically accurate intentional portrait of Descartes. I argue that this more complex picture, which is beginning to receive increased attention in the literature, ought to replace the dissimulating Descartes in our research. Rather than being an early-modern scientist, Descartes is more properly seen as a Renaissance natural philosopher, who was cautious about the way he presented himself and his ideas. With this complex intentional portrait in mind, I examine how Descartes dealt with the charges of dissimulation, secret skepticism, and atheism that were leveled at him in the latter part of his life. Descartes, I argue, saw faith, reason, and science as compatible, as is shown by his attitude towards Copernicanism. I conclude with a summary of the implications of the Faustian Descartes for our understanding of the modern world. I also suggest a way of understanding the Straussian interpretation, and Straussianism in general, as an elaborate conspiracy theory designed to further radically conservative political ends.
René Descartes is best known as the man who coined the phrase “I think, therefore I am.” But though he is remembered most as a thinker, Descartes, the man, was no disembodied mind, theorizing at great remove from the worldly affairs and concerns of his time. Far from it. As a young nobleman, Descartes was a soldier and courtier who took part in some of the greatest events of his generation—a man who would not seem out of place in the pages of The Three Musketeers. In The Young Descartes, Harold J. Cook tells the story of a man who did not set out to become an author or philosopher—Descartes began publishing only after the age of forty. Rather, for years he traveled throughout Europe in diplomacy and at war. He was present at the opening events of the Thirty Years' War in Central Europe and Northern Italy, and was also later involved in struggles within France. Enduring exile, scandals, and courtly intrigue, on his journeys Descartes associated with many of the most innovative free thinkers and poets of his day, as well as great noblemen, noblewomen, and charismatic religious reformers. In his personal life, he expressed love for men as well as women and was accused of libertinism by his adversaries. These early years on the move, in touch with powerful people and great events, and his experiences with military engineering and philosophical materialism all shaped the thinker and philosopher Descartes became in exile, where he would begin to write and publish, with purpose. But though it is these writings that ultimately made him famous, The Young Descartes shows that this story of his early life and the tumultuous times that molded him is sure to spark a reappraisal of his philosophy and legacy.
The most comprehensive collection of essays on Descartes' scientific writings ever published, this volume offers a detailed reassessment of Descartes' scientific work and its bearing on his philosophy. The 35 essays, written by some of the world's leading scholars, cover topics as diverse as optics, cosmology and medicine, and will be of vital interest to all historians of philosophy or science.
Traces the life of Lord Herbert of Chirbury from birth to death, chronicling the travels, poetry, philosophy, and theology of a now-neglected figure who was well known in his own day and whose books were read and commented on by Descartes, Hobbes, and Comenius. |
When you compress or extend a spring – or any elastic material – you’ll instinctively know what’s going to happen when you release the force you’re applying: The spring or material will return to its original length.
It’s as if there is a “restoring” force in the spring that ensures it returns to its natural, uncompressed and un-extended state after you release the stress you’re applying to the material. This intuitive understanding – that an elastic material returns to its equilibrium position after any applied force is removed – is quantified much more precisely by Hooke’s law.
Hooke’s law is named after its creator, British physicist Robert Hooke, who stated in 1678 that “the extension is proportional to the force.” The law essentially describes a linear relationship between the extension of a spring and the restoring force it gives rise to in the spring; in other words, it takes twice as much force to stretch or compress a spring twice as much.
The law, while very useful for many elastic materials (called “linear elastic” or “Hookean” materials), doesn’t apply to every situation and is technically an approximation.
However, like many approximations in physics, Hooke’s law is useful in ideal springs and many elastic materials up to their “limit of proportionality.” The key constant of proportionality in the law is the spring constant, and learning what this tells you, and learning how to calculate it, is essential to putting Hooke’s law into practice.
The Hooke’s Law Formula
The spring constant is a key part of Hooke’s law, so to understand the constant, you first need to know what Hooke’s law is and what it says. The good news is that it’s a simple law, describing a linear relationship and having the form of a basic straight-line equation. The formula for Hooke’s law specifically relates the change in extension of the spring, x, to the restoring force, F, generated in it:

F = -kx
The extra term, k, is the spring constant. The value of this constant depends on the qualities of the specific spring, and this can be directly derived from the properties of the spring if needed. However, in many cases – especially in introductory physics classes – you’ll simply be given a value for the spring constant so you can go ahead and solve the problem at hand. It’s also possible to directly calculate the spring constant using Hooke’s law, provided you know the extension and magnitude of the force.
Introducing the Spring Constant, k
The “size” of the relationship between the extension and the restoring force of the spring is encapsulated in the value of the spring constant, k. The spring constant shows how much force is needed to compress or extend a spring (or a piece of elastic material) by a given distance. If you think about what this means in terms of units, or inspect the Hooke’s law formula, you can see that the spring constant has units of force over distance, so in SI units, newtons/meter.
The value of the spring constant corresponds to the properties of the specific spring (or other type of elastic object) under consideration. A higher spring constant means a stiffer spring that’s harder to stretch (because for a given displacement, x, the resulting force F will be higher), while a looser spring that’s easier to stretch will have a lower spring constant. In short, the spring constant characterizes the elastic properties of the spring in question.
Elastic potential energy is another important concept relating to Hooke’s law, and it characterizes the energy stored in the spring when it’s extended or compressed that allows it to impart a restoring force when you release the end. Compressing or extending the spring transforms the energy you impart into elastic potential, and when you release it, the energy is converted into kinetic energy as the spring returns to its equilibrium position.
Direction in Hooke’s Law
You’ll have undoubtedly noticed the minus sign in Hooke’s law. As always, the choice of the “positive” direction is always ultimately arbitrary (you can set the axes to run in any direction you like, and the physics works in exactly the same way), but in this case, the negative sign is a reminder that the force is a restoring force. “Restoring force” means that the action of the force is to return the spring to its equilibrium position.
If you call the equilibrium position of the end of the spring (i.e., its “natural” position with no forces applied) x = 0, then extending the spring will lead to a positive x, and the force will act in the negative direction (i.e., back towards x = 0). On the other hand, compression corresponds to a negative value for x, and then the force acts in the positive direction, again towards x = 0. Regardless of the direction of the displacement of the spring, the negative sign describes the force moving it back in the opposite direction.
Of course, the spring doesn’t have to move in the x direction (you could equally well write Hooke’s law with y or z in its place), but in most cases, problems involving the law are in one dimension, and this is called x for convenience.
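To make the sign convention concrete, here is a minimal Python sketch (an illustration added here, not part of the original article); the spring constant value is arbitrary and purely illustrative.

```python
# Hooke's law with the sign convention described above: F = -k * x,
# where x is the displacement from the equilibrium position (x = 0).

def restoring_force(k, x):
    """Return the restoring force (N) for spring constant k (N/m) and displacement x (m)."""
    return -k * x

k = 100.0  # illustrative spring constant, N/m

print(restoring_force(k, 0.2))   # extension of +0.2 m -> -20.0 N, pulling back towards x = 0
print(restoring_force(k, -0.2))  # compression of -0.2 m -> +20.0 N, pushing back towards x = 0
```

Either way, the force points back towards equilibrium, which is exactly what the minus sign encodes.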
Elastic Potential Energy Equation
The concept of elastic potential energy, introduced alongside the spring constant earlier in the article, is very useful if you want to learn to calculate k using other data. The equation for elastic potential energy relates the displacement, x, and the spring constant, k, to the elastic potential energy PEel, and it takes the same basic form as the equation for kinetic energy:

PEel = ½kx²
As a form of energy, the units of elastic potential energy are joules (J).
The elastic potential energy is equal to the work done (ignoring losses to heat or other wastage), and you can easily calculate it based on the distance the spring has been stretched if you know the spring constant for the spring. Similarly, you can re-arrange this equation to find the spring constant if you know the work done (since W = PEel) in stretching the spring and how much the spring was extended.
How to Calculate the Spring Constant
There are two simple approaches you can use to calculate the spring constant: either use Hooke’s law, alongside some data about the strength of the restoring (or applied) force and the displacement of the spring from its equilibrium position, or use the elastic potential energy equation, alongside figures for the work done in extending the spring and the displacement of the spring.
Using Hooke’s law is the simplest approach to finding the value of the spring constant, and you can even obtain the data yourself through a simple setup where you hang a known mass (with the force of its weight given by F = mg) from a spring and record the extension of the spring. Ignoring the minus sign in Hooke’s law (since the direction doesn’t matter for calculating the value of the spring constant) and dividing by the displacement, x, gives:

k = F / x
Using the elastic potential energy formula is a similarly straightforward process, but it doesn’t lend itself as well to a simple experiment. However, if you know the elastic potential energy and the displacement, you can calculate the spring constant using:

k = 2PEel / x²
In either case you’ll end up with a value in units of N/m.
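To make the two approaches concrete, here is a short Python sketch (added as an illustration, not taken from the original article); the function names are my own.

```python
# Two ways to estimate the spring constant k (N/m), following the formulas above.

def k_from_force(force_n, displacement_m):
    """k = F / x, ignoring the sign of the restoring force."""
    return abs(force_n / displacement_m)

def k_from_energy(pe_joules, displacement_m):
    """k = 2 * PE / x^2, rearranged from PE = (1/2) * k * x^2."""
    return 2.0 * pe_joules / displacement_m ** 2

# Numbers from the worked examples below:
print(f"{k_from_force(6.0, 0.3):.1f} N/m")    # 20.0 N/m
print(f"{k_from_energy(50.0, 0.5):.1f} N/m")  # 400.0 N/m
```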
Calculating the Spring Constant: Basic Example Problems
A spring with a 6 N weight added to it stretches by 30 cm relative to its equilibrium position. What is the spring constant k for the spring?
Tackling this problem is easy provided you think about the information you’ve been given and convert the displacement into meters before calculating. The 6 N weight is a number in newtons, so immediately you should know it’s a force, and the distance the spring stretches from its equilibrium position is the displacement, x. So the question tells you that F = 6 N and x = 0.3 m, meaning you can calculate the spring constant as follows:

k = F / x = 6 N / 0.3 m = 20 N/m
For another example, imagine you know that 50 J of elastic potential energy is held in a spring that has been compressed 0.5 m from its equilibrium position. What is the spring constant in this case? Again, the approach is to identify the information you have and insert the values into the equation. Here, you can see that PEel = 50 J and x = 0.5 m. So the re-arranged elastic potential energy equation gives:

k = 2PEel / x² = (2 × 50 J) / (0.5 m)² = 400 N/m
The Spring Constant: Car Suspension Problem
An 1800-kg car has a suspension system that cannot be allowed to exceed 0.1 m of compression. What spring constant does the suspension need to have?
This problem might appear different to the previous examples, but ultimately the process of calculating the spring constant, k, is exactly the same. The only additional step is translating the mass of the car into a weight (i.e., the force due to gravity acting on the mass) on each wheel. You know that the force due to the weight of the car is given by F = mg, where g = 9.81 m/s², the acceleration due to gravity on Earth, so you can adjust the Hooke’s law formula as follows:

k = F / x = mg / x
However, only one quarter of the total mass of the car is resting on any wheel, so the mass per spring is 1800 kg / 4 = 450 kg.
Now you simply have to input the known values and solve to find the strength of the springs needed, noting that the maximum compression, 0.1 m, is the value for x you’ll need to use:

k = (450 kg × 9.81 m/s²) / 0.1 m = 44,145 N/m
This could also be expressed as 44.145 kN/m, where kN means “kilonewton” or “thousands of newtons.”
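For completeness, here is the same suspension calculation as a short Python sketch (an added illustration, not from the original article).

```python
# Spring constant needed for each suspension spring: k = (m * g) / x,
# where m is the mass carried by one wheel and x is the maximum allowed compression.

G = 9.81                             # acceleration due to gravity, m/s^2
car_mass_kg = 1800.0
mass_per_wheel_kg = car_mass_kg / 4  # 450 kg per spring
max_compression_m = 0.1

k = mass_per_wheel_kg * G / max_compression_m
print(f"k = {k:.0f} N/m (about {k / 1000:.3f} kN/m)")  # k = 44145 N/m (about 44.145 kN/m)
```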
The Limitations of Hooke’s Law
It’s important to stress again that Hooke’s law doesn’t apply to every situation, and to use it effectively you’ll need to remember the limitations of the law. The spring constant, k, is the gradient of the straight-line portion of the graph of F vs. x; in other words, force applied vs. displacement from the equilibrium position.
However, after the “limit of proportionality” for the material in question, the relationship is no longer a straight-line one, and Hooke’s law ceases to apply. Similarly, when a material reaches its “elastic limit,” it won’t respond like a spring and will instead be permanently deformed.
Finally, Hooke’s law assumes an “ideal spring.” Part of this definition is that the response of the spring is linear, but it’s also assumed to be massless and frictionless.
These last two limitations are completely unrealistic, but they help you avoid complications resulting from the force of gravity acting on the spring itself and energy loss to friction. This means Hooke’s law will always be approximate rather than exact – even within the limit of proportionality – but the deviations usually don’t cause a problem unless you need very precise answers.
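If you have several force–extension measurements rather than a single pair, you can estimate k as the gradient of the straight-line portion of the F vs. x graph. The sketch below (an added illustration; the data values are invented) fits a straight line through the origin; a real analysis would first check that all points lie within the limit of proportionality.

```python
# Estimate k as the slope of F vs. x using a least-squares fit through the origin:
# k ≈ sum(F_i * x_i) / sum(x_i^2). The data below are made up for illustration.

displacements_m = [0.05, 0.10, 0.15, 0.20]  # measured extensions (m)
forces_n = [1.0, 2.1, 2.9, 4.1]             # corresponding applied forces (N)

k_estimate = sum(f * x for f, x in zip(forces_n, displacements_m)) / sum(x * x for x in displacements_m)
print(f"Estimated spring constant: {k_estimate:.1f} N/m")  # roughly 20 N/m for this invented data
```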
About the Author
Lee Johnson is a freelance writer and science enthusiast, with a passion for distilling complex concepts into simple, digestible language. He's written about science for several websites including eHow UK and WiseGeek, mainly covering physics and astronomy. He was also a science blogger for Elements Behavioral Health's blog network for five years. He studied physics at the Open University and graduated in 2018. |
Male (top) and female (bottom)
Conservation status: Least Concern (IUCN 3.1)
The Mallard (Anas platyrhynchos), probably the best-known and most recognizable of all ducks, is a dabbling duck which breeds throughout the temperate and sub-tropical areas of North America, Europe, Asia, New Zealand (where it is currently the most common duck species), and Australia. It is strongly migratory in the northern parts of its breeding range, and winters farther south. For example, in North America it winters south to Mexico, but also regularly strays into Central America and the Caribbean between September and May.
The Mallard and the Muscovy Duck are believed to be the ancestors of all domestic ducks.
The Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA) applies to the mallard.
The mallard is 56–65 cm long, has a wingspan of 81–98 cm, and weighs roughly 0.9–1.1 kg (2–2½ pounds). The breeding male is unmistakable, with a green head, black rear end and a yellow bill tipped with black (as opposed to the dark brown bill in females). The female Mallard is light brown, like most female dabbling ducks. However, both female and male Mallards have a distinct purple speculum edged with white, prominent in flight or at rest (though temporarily shed during the annual summer molt). In non-breeding (eclipse) plumage the drake becomes drab, looking more like the female, but is still distinguishable by its yellow bill and reddish breast.
The Mallard is a rare example of both Allen's Rule and Bergmann's Rule in birds. Bergmann's Rule, which states that polar forms tend to be larger than related ones from warmer climates, has numerous examples in birds. Allen's Rule says that appendages like ears tend to be smaller in polar forms to minimize heat loss, and larger in tropical and desert equivalents to facilitate heat diffusion, and that the polar taxa are stockier overall. Examples of this rule in birds are rare, as they lack external ears. However, the bill of ducks is very well supplied with blood vessels and is vulnerable to cold.
The size of the Mallard varies clinally, and birds from Greenland, although larger than birds further south, have smaller bills and are stockier. It is sometimes separated as subspecies Greenland Mallard (A. p. conboschas).
In captivity, domestic ducks come in wild-type plumages, white, and other colours. Most of these colour variants are also known in domestic mallards not bred as livestock, but kept as pets, aviary birds, etc., where they are rare but increasing in availability.
A noisy species, the male has a nasal call, while the female gives the familiar “quack” always associated with ducks.
The Mallard inhabits most wetlands, including parks, small ponds and rivers, and usually feeds by dabbling for plant food or grazing; there are reports of it eating frogs. It usually nests on a river bank, but not always near water. It is highly gregarious outside of the breeding season and will form large flocks, which are known as a sord.
Mallards form pairs only until the female lays eggs, at which time she is left by the male. The clutch is 8–13 eggs, which are incubated for 27–28 days to hatching with 50–60 days to fledging. The ducklings are precocial, and can swim and feed themselves on insects as soon as they hatch, although they stay near the female for protection. Young ducklings are not naturally waterproof and rely on the mother to provide waterproofing. Mallards also have rates of male-male sexual activity that are unusually high for birds. In some cases, as many as 19% of pairs in a Mallard population are male-male homosexual.
When they pair off with mating partners, often one or several drakes will end up "left out". This group will sometimes target an isolated female duck — chasing, pestering and pecking at her until she weakens (a phenomenon referred to by researchers as rape flight), at which point each male will take turns copulating with the female. Male Mallards will also occasionally chase other males in the same way. (In one documented case, a male Mallard copulated with another male he was chasing after it had been killed when it flew into a glass window.)
Ancestor of almost all Domestic Ducks
Mallard (Anas platyrhynchos) is the ancestor of almost all of the varieties of domestic ducks. Domestic ducks belong to the subfamily Anatinae of the waterfowl family Anatidae. The wild mallard and Muscovy duck (Cairina moschata) are believed to be the ancestors of all domestic ducks.
Genetic pollution, hybridization and systematics
The release of feral Mallard Ducks worldwide is creating havoc for indigenous waterfowl. These feral birds do not migrate; they stay behind during the local breeding season and interbreed with rare indigenous wild ducks, devastating local populations of closely related species through genetic pollution by producing fertile offspring. Complete hybridization of the gene pools of various rare wild duck species could result in the extinction of many indigenous waterfowl. The wild Mallard itself is the ancestor of most domestic ducks, and its naturally evolved wild gene pool is in turn genetically polluted by domesticated and feral populations.
Mallards frequently interbreed with their closest relatives in the genus Anas, such as the American Black Duck, and also with species more distantly related, for example the Northern Pintail, leading to various hybrids that may be fully fertile. This is quite unusual among different species, apparently because the Mallard evolved very rapidly and relatively recently, during the Late Pleistocene. The distinct lineages of this radiation are usually kept separate due to non-overlapping ranges and behavioural cues, but are still not fully genetically incompatible. Mallards and their domesticated conspecifics are, of course, also fully interfertile.
The Mallard is considered an invasive species in New Zealand. There, and elsewhere, Mallards are spreading with increasing urbanization and hybridizing with local relatives. Over time, a continuum of hybrids ranging between almost typical examples of either species will develop; the speciation process beginning to reverse itself. This has created conservation concerns for relatives of the Mallard, such as the Hawaiian Duck, the A. s. superciliosa subspecies of the Pacific Black Duck, the American Black Duck, the Florida Duck, Meller's Duck, the Yellow-billed Duck, and the Mexican Duck, in the latter case even leading to a dispute whether these birds should be considered a species (and thus entitled to more conservation research and funding) or included in the mallard.
As elsewhere in the world, invasive alien mallard ducks are also causing severe “genetic pollution” of South Africa’s biodiversity by breeding with endemic ducks. The hybrids of mallard ducks and the Yellow-billed Duck are fertile and can produce more hybrid offspring. If this continues, only hybrids will occur, and in the long term this will result in the extinction of various indigenous waterfowl worldwide, such as the Yellow-billed Duck of South Africa. The mallard duck can crossbreed with 45 other species and poses a severe threat to the genetic integrity of indigenous waterfowl. Mallard ducks and their hybrids compete with indigenous birds for resources such as food, nest sites and roosting sites. The drakes (males) also kill the offspring of other waterfowl species by attacking and drowning them.
On the other hand, the Chinese Spotbill is currently introgressing into the mallard populations of the Primorsky Krai, possibly due to habitat changes from global warming. The Mariana Mallard was a resident allopatric population - in most respects a good species - apparently initially derived from Mallard × Pacific Black Duck hybrids; unfortunately, it became extinct in the 1980s. In addition, feral domestic ducks interbreeding with Mallards have led to a size increase - especially in drakes - in most Mallards in urban areas. Rape flights between normal-sized females and such stronger males are liable to end with the female being drowned by the males' combined weight.
It was generally assumed that, since the spectacular nuptial plumage of Mallard drakes is obviously the result of sexual selection (most species in the mallard group being sexually monomorphic), hybrid matings would preferentially take place between females of monomorphic relatives and Mallard drakes, rather than the other way around. But this generalization was found to be incorrect.
Note that it is not the hybridization itself that causes most conservation concerns. The Laysan Duck is an insular relative of the mallard with a very small and fluctuating population. Mallards sometimes arrive on its island home during migration, and can be expected to have occasionally remained and hybridized with Laysan Ducks for as long as these species have existed. But these hybrids are less well adapted to the peculiar ecological conditions of Laysan Island than the local ducks, and thus have lower fitness; furthermore, there were - apart from a brief time in the early 20th century when the Laysan Duck was almost extinct - always many more Laysan Ducks than stray Mallards. Thus, in this case, the hybrid lineages would rapidly fail.
In the cases mentioned above, however, ecological changes and hunting have led to a decline of local species; for example, the New Zealand Gray Duck's population declined drastically due to overhunting in the mid-20th century (Williams & Basse 2006). In the Hawaiian Duck, it seems that hybrid offspring are less well-adapted to native habitat and that utilizing them in reintroduction projects makes these less than successful. In conclusion, the crucial point underlying the problem of Mallards "hybridizing away" their relatives is far less a consequence of Mallards spreading than of local ducks declining; allopatric speciation and isolating behaviour have produced today's diversity of Mallard-like ducks despite the fact that in most if not all of these populations, hybridization must always have occurred to some extent. Given time and a population of sufficient size, natural selection ought to suppress harmful allele combinations to a negligible level.
All of this considerably confounds analysis of the species' evolution. Analyses of good samples of mtDNA sequences give the confusing picture one expects from a wide-ranging species that evolved probably not much earlier than the Plio-/Pleistocene boundary, around 2 mya. Judging from biogeography, Mallards appear to be closer to their Indo-Pacific relatives than to their American ones. Considering mtDNA D-loop sequence data, they most probably evolved in the general area of Siberia; mallard bones appear rather abruptly in food remains of ancient humans and other deposits of fossil bones in Europe, without a good candidate for a local predecessor species. The large ice age paleosubspecies which made up at least the European and West Asian populations during the Pleistocene has been named Anas platyrhynchos palaeoboschas.
As expected, haplotypes typical of American mallard relatives and Spotbills can be found in Mallards around the Bering Sea. Interestingly, the Aleutian Islands turned out to hold a population of Mallards that appear to be evolving towards a good subspecies as gene flow with other populations is very limited. This unexpected result suggests that reevaluation of the Greenland, Iceland, and NE Canada populations according to molecular and morphological characters is warranted. |
Eating behavior is complex; humans make food decisions each day that are influenced by a variety of personal, social, cultural, environmental, and economic factors. Food habits have a considerable influence on individuals’ health.
A healthy diet helps maintain and improve overall health by providing the body with essential nutrition: fluid, macronutrients, micronutrients, and adequate calories. Yet knowledge and understanding of the diet that suits a particular individual remains fragmented, and most people use generic apps or websites to get a diet chart built around a common diet pattern for every individual, which does not suit everyone.
Each genome is unique: it differs between the two parents, between parents and children, between siblings (except for identical twins), and so on. This leads to different responses to the same stimuli, foods and diet patterns, as well as different behaviours and different likes and dislikes for particular items; in short, it is what makes one person different from another.
As a result, individuals show striking differences in their responses to different diet patterns (some may respond well to the keto diet, while others respond better to the Mediterranean diet), as well as different responses to a single diet. One weight-loss strategy does not work for everyone, as genetic differences seem to predispose individuals to lose different amounts of weight on different types of diets.
We often hear conversations about weight gain or loss along the lines of “I eat less but put on weight easily. My friend eats more and doesn’t put on weight!” Several important factors determine how our bodies respond to food: some are environmental (sleep, stress, exercise), and another is the diversity and population of our individual gut microbiome.
Genes play a role in how a person responds to a particular diet, how they process fats and carbohydrates, how long their post-meal blood glucose stays elevated, and so on. This understanding led to the development of a field called nutrigenomics.
Nutrigenomics is the science of the “effect of genetic variations on dietary response and the role of nutrients and bioactive food compounds in gene expression”; in other words, it studies the relationship between the human genome, nutrition and health. It helps build an understanding of how the whole body responds to a food via systems biology, as well as of single gene/single food compound relationships.
Nutrigenomics can be the game-changer that empowers you on your wellness journey or enables you to achieve your fitness goals.
Nutrigenomics adheres to the following precepts:
- Poor nutrition can be a risk factor for several diseases
- Common dietary chemicals can act on the human genome, either directly or indirectly, to alter gene expression or gene structure
- The degree to which diet influences the balance between health and disease depends on an individual’s genetic makeup
- Some diet-regulated genes play a role in the onset, progression and severity of chronic diseases
- Dietary interventions based on knowledge of nutritional requirements, nutritional status, and genotype can be used to prevent chronic diseases.
What can you learn from nutrigenomics?
- Your power to absorb vital nutrients required for a healthy body and mind
- The impact of nutrients on various organs and pathways
- Nutritional factors that can protect your genome from damage
- Impact of genes on your lifestyle, diet, fitness, habits, etc.
In recent years, personalised nutrition has become more than a trend as a new generation of consumers are demanding the personalization of diet.
Diet personalization is the concept of adapting food habits to individual needs, depending on a person’s genetic makeup, lifestyle and environment. It stems from a deep understanding of how nutrients are metabolized, how these metabolites are utilized, the characterization of eating patterns, and so on.
Some benefits of a personalized diet include:
- Targeting specific nutritional deficiencies and medical conditions
- Supporting a heart-healthy diet or the management of blood sugar
- Treating or preventing metabolic disorders
- Improving your overall health
The convergence of technology and increasing consumer interest in nutrition and wellness with increased access to information has led to DNA-based products and services focused on better nutrition and health.
How Mapmygenome Can Help You:
We at Mapmygenome have carved out a niche for personalizing health, wellness, nutrition, fitness and medicine. As pioneers of personal genomics in India, we believe that food has the power to transform our lives.
With our screening test MyNutriGene, you can choose the ideal diet profile best suited to your body. MyNutriGene will also give you an insight into your immunity and your genetic predisposition to specific health conditions, and help you choose optimal health plans to prevent most of these risks.
Our test will also help you learn about your metabolism, fat/carbohydrate response, food intolerance, and eating behaviour. Coupled with a comprehensive genetic counselling session with our certified experts, this test will aid in the development of a highly personalized and effective plan of action.
“If we could give every individual the right amount of nourishment and exercise, not too little and not too much, we would have found the safest way to health” – Hippocrates
ZIMSEC O Level History Notes: Dictatorship in Europe: Dictatorship in Italy: Benito Mussolini: The rise of Mussolini and the Fascist party
- Benito Mussolini was born in 1883 at Dovia and his mother was a school teacher.
- He was once a socialist journalist, teacher and a soldier.
- When Italy joined the First World War in 1915, Mussolini became a soldier.
- After the war, in March 1919, Mussolini formed the Fascist party at a meeting held in Milan.
- The word Fascist was derived from the word fasces (a bundle of wooden sticks or an axe) carried by magistrates in the old Roman (Italian) Empire.
- This was a symbol of authority and power.
- The party’s name had links to the idea of force
- Soon after the First World War, Mussolini supported factory seizures by workers so he was hated by capitalists and property owners.
- Later on he opposed the seizure of property and land by workers and peasants thus gaining the support of property owners.
- The Fascist militia (The Voluntary Militia for National Security) wore black shirts and were thus referred to as the “Black shirts.”
- In May 1921 the Fascists won 34 seats in parliament.
- The Fascists used violent campaigns, and in August 1922 they won more support when they crushed the general strike which had been organised by the Socialists.
Fun Facts about Flying Dinosaurs
Everything about flying dinosaurs, or pterosaurs (winged reptiles), is fascinating. Compared with better-known and more thoroughly studied dinosaur species, it has taken scientists a little longer to reach agreement about these creatures, because fossil finds have not been as abundant or as enlightening. Today, however, enough information is available, and most specialists agree on the general characteristics of these former rulers of the skies, which were the only flying vertebrates on Earth long before the first birds we know today existed.
We know that flying dinosaurs came in different shapes and sizes: some were very small, like common sparrows, while others were so large that a single wing could span the length of a bus. Also, like modern birds, these animals flew (although some only glided), laid eggs, and nested at heights, and they had highly developed eyesight, with large eyes that allowed them to spot prey from above.
Their primary prey in the food chain were fish and insects, and thanks to their ease of flight they could escape from other hungry dinosaurs. With that brief introduction, if you are interested in knowing a little more about these animals, we suggest you take a look at the characteristics of some particular species among the more than 100 discovered so far.
- Period: This species lived in the Jurassic period, between 150 and 144 million years ago.
- Location: Pterodactylus fossils have been found in some regions of Europe and South Africa.
- Diet: It was a predatory species.
- Wingspan: 0.91 meters.
- Approximate weight: 0.91-4.54 kilograms.
From fossil finds, it is estimated that Pterodactylus was one of the first pterodactyloids, the subgroup of short-tailed or tailless pterosaurs. The species lived late in the Jurassic period, and 27 fossil specimens have been found, most of them complete. Thanks to this, it has been established that the skull of Pterodactylus was long and narrow, with about 90 conical teeth that were larger at the front, smaller at the back, and extended backward from the tips of both jaws. The species also had a crest on the skull, composed mainly of soft tissue and apparently developed when the animal reached maturity. Pterodactylus was undoubtedly an excellent flier, and thanks to its sharp, pointed teeth it must have been able to feed easily on flying insects, small land creatures, and fish.

Atlascopcosaurus is a genus that inhabited Australia in the mid-Cretaceous period, more than 110 million years ago. A hypsilophodontid ornithopod dinosaur, it was found in 1984 in sediments of the Eumeralla Formation on the Victoria coast. Little is known about the animal, also called the “Atlas Copco reptile,” because only an upper jaw and a partial jaw with teeth could be recovered; from these and from related species, some of its characteristics have been inferred. The animal was described by Tom Rich and Patricia Vickers-Rich between 1988 and 1989, who named it in honor of the Atlas Copco company that provided the excavation equipment. In 1988 Atlascopcosaurus was assigned to the Hypsilophodontidae, while today it is considered a basal member of the Euornithopoda.
- Period: The species lived during the Jurassic period, between 155-150 million years ago.
- Location: Europe, based on fossil findings.
- Diet: Carnivore
- Wingspan: 0.91 meters.
- Approximate weight: Approximately 0.91 kilograms.
The Scaphognathus was another flying pterosaur that inhabited the Earth during the Jurassic period, about 150 million years ago. Based on fossil finds, it is estimated that it flew over regions of what is now the European continent. Its wingspan was almost one meter (0.91 m), and its head was short, with a very rounded, blunt snout, which is why the species is also referred to as “bathtub mouth.” The Scaphognathus shared features with Rhamphorhynchus and had a relatively large brain for its body size. It also had 28 teeth: 18 long, pointed teeth in the upper jaw and 10 in the lower jaw.
- Period: Cretaceous, between 85 and 75 million years ago.
- Location: North America.
- Diet: Carnivore.
- Wingspan: 1.83 meters.
- Approximate weight: approximately 13.6 kilograms.
The Pteranodon was one of the giant pterosaurs. It had a relatively small body that was nevertheless well suited to flight, helped by its weak, tiny legs and its almost nonexistent tail, set against its massive wings. These same characteristics probably explain the difficulty the species is thought to have had moving on land. According to specialists, the Pteranodon could have flown at speeds close to 48 kilometers per hour, and its wings were three times longer than those of the largest living bird, the albatross. For all this, it is considered to have been the true master of the skies in the period in which it lived; despite having no teeth, its large size (some up to 7 meters) and its ability to fly made it a fearsome predator, although it fed mostly on fish.
- Period: The Preondactylus lived during the last Triassic period, between 215 and 200 million years ago.
- Location: Some parts of Europe.
- Diet: Carnivore.
- Wingspan: 0.30 meters.
- Approximate weight: 0.91 kilograms.
The Preondactylus was one of the first and smallest pterosaurs, perhaps even smaller than a modern pigeon. Its teeth were small and pointed, indicating that it probably fed mostly on insects and on small fish that it snatched from the water. Its wings were very different from those of modern birds or bats and measured only 18 centimeters from side to side. As in many other flying dinosaurs, the skeletal bones of this species were hollow and filled with air spaces, which made them light and enabled the animal to fly effectively.
After English physicist James Chadwick discovered the neutron in 1932, Enrico Fermi and his colleagues in Rome studied the results of bombarding uranium with neutrons in 1934. The fragments of tin-132 are spherical rather than deformed, and a more compact configuration at the scission point (with the charge centres closer together) leads to higher fragment kinetic energies. Most fissions are binary fissions (producing two charged fragments), but occasionally (2 to 4 times per 1000 events), three positively charged fragments are produced, in a ternary fission. Ames Laboratory was established in 1942 to produce the large amounts of natural (unenriched) uranium metal that would be necessary for the research to come. The work of Swiatecki, James R. Nix, and their collaborators has been particularly noteworthy in such studies, which also include some attempts to treat the dynamical evolution of the fission process. In the years after World War II, many countries were involved in the further development of nuclear fission for the purposes of nuclear reactors and nuclear weapons. However, too few of the neutrons produced by 238U fission are energetic enough to induce further fissions in 238U, so no chain reaction is possible with this isotope. The actual mass of a critical mass of nuclear fuel depends strongly on the geometry and surrounding materials. The most common small fragments, however, are composed of 90% helium-4 nuclei with more energy than alpha particles from alpha decay (so-called "long range alphas" at ~16 MeV), plus helium-6 nuclei, and tritons (the nuclei of tritium). The products of nuclear fission, however, are on average far more radioactive than the heavy elements which are normally fissioned as fuel, and remain so for significant amounts of time, giving rise to a nuclear waste problem. For this purpose the basic reasons for the shape of the fission barriers are discussed and their consequences compared with experimental results on barrier shapes and structures. This tendency for fission product nuclei to undergo beta decay is the fundamental cause of the problem of radioactive high-level waste from nuclear reactors. It is the only model that provides a satisfactory interpretation of the angular distributions of fission fragments, and it has attractive features that must be included in any complete theory of fission. However, neutrons almost invariably impact and are absorbed by other nuclei in the vicinity long before this happens (newly created fission neutrons move at about 7% of the speed of light, and even moderated neutrons move at about 8 times the speed of sound).
Large-scale natural uranium fission chain reactions, moderated by ordinary water, occurred far in the past and would not be possible now. Humans have put nuclear energy to use in two broad ways since coming to know about it: steady power generation and weapons. Critical fission reactors are the most common type of nuclear reactor; the working fluid is usually water with a steam turbine, but some designs use other materials such as gaseous helium, and hybrid nuclear fusion-fission (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. A nuclear bomb, by contrast, is designed to release all its energy at once, while a reactor is designed to generate a steady supply of useful power; in an atomic bomb the heat of fission may serve to raise the temperature of the bomb core to 100 million kelvin and cause secondary emission of soft X-rays, which convert some of this energy to ionizing radiation. Within hours of a fission burst, the decay of the short-lived fission products means that the decay power output is far less.

Several heavy elements, such as uranium, thorium, and plutonium, undergo both spontaneous fission, a form of radioactive decay, and induced fission, a form of nuclear reaction. When a heavy nucleus such as uranium-235 absorbs a neutron and fissions, the total mass of the products is not equal to the sum of the masses of the heavy nucleus and the neutron; in any fission event of an isotope in the actinides' range of mass, roughly 0.9 MeV is released per nucleon of the starting element. In fissile isotopes, no neutron kinetic energy is needed, for all the necessary energy is supplied by absorption of any neutron, either of the slow or the fast variety (the former are used in moderated nuclear reactors, the latter in fast-neutron reactors and in weapons).

Historically, Meitner's and Frisch's interpretation of the discovery of Hahn and Strassmann crossed the Atlantic Ocean with Niels Bohr, who was to lecture at Princeton University. Practical difficulties, among many others, prevented the Nazis from building a nuclear reactor capable of criticality during the war, although they never put as much effort as the United States into nuclear research, focusing on other technologies (see the German nuclear energy project for more details).

On the theory side, recent reviews describe how nuclear fission is treated within nuclear density functional theory, and there has been much recent interest in nuclear fission, due in part to a new appreciation of its relevance to astrophysics, the stability of superheavy elements, and the fundamental theory of neutrino interactions. A theory of fission based on the shell model has been formulated by Maria Goeppert Mayer, but a complete theoretical understanding of the reaction would require a detailed knowledge of the forces involved in the motion of each of the nucleons through the process. Considerations of the dynamics of the descent of the system on the potential-energy surface from the saddle point to the scission point involve two extreme points of view; in the dissipative extreme, collective energy is converted into internal excitation, which is analogous to heating in the motion of a viscous fluid. As the fission-excitation energy increases, the shell correction diminishes and the macroscopic (liquid-drop) behaviour dominates, and the change in shape associated with class II (shape-isomeric) states, as compared with class I states, also hinders a rapid return to the ground state by gamma emission.
That the fission of heavy nuclei releases energy can easily be seen by examining the curve of binding energy and noting that the average binding energy of the actinide nuclides beginning with uranium is around 7.6 MeV per nucleon; when a uranium nucleus fissions into two daughter fragments, about 0.1 percent of the mass of the uranium nucleus appears as fission energy of roughly 200 MeV. Nuclear reactions are driven by the mechanics of bombardment, not by the relatively constant exponential decay and half-life characteristic of spontaneous radioactive processes. Spontaneous fission itself was discovered in 1940 by Flyorov, Petrzhak, and Kurchatov in Moscow, in an experiment intended to confirm that, without bombardment by neutrons, the fission rate of uranium was negligible, as predicted by Niels Bohr; it was not negligible.

The discovery by Fermi and his collaborators that neutrons can be captured by heavy nuclei to form new radioactive isotopes was of special interest in the case of uranium. On 25 January 1939, a Columbia University team conducted the first nuclear fission experiment in the United States, in the basement of Pupin Hall. In February 1940, Frisch and Peierls delivered the Frisch-Peierls memorandum. Producing a fission chain reaction in natural uranium fuel was found to be far from trivial: early nuclear reactors did not use isotopically enriched uranium, and in consequence they were required to use large quantities of highly purified graphite as neutron-moderation material. In July 1945, the first atomic explosive device, dubbed "Trinity", was detonated in the New Mexico desert, and two further fission bombs, codenamed "Little Boy" and "Fat Man", were used in combat against the Japanese cities of Hiroshima and Nagasaki on August 6 and 9, 1945, respectively. In reactors, it is estimated that up to half of the power produced by a standard "non-breeder" reactor comes from the fission of plutonium-239 generated in place, over the total life-cycle of a fuel load. Concerns over nuclear-waste accumulation and the destructive potential of nuclear weapons are a counterbalance to the peaceful desire to use fission as an energy source (for a description of the social, political, and environmental aspects, see nuclear power; for the heat given off by fission products, see decay heat).

On the theoretical side, it seems very likely that the fragment shell structure plays a significant role in determining the course of the fission process. A statistical model of the dividing nucleus predicts that the system, in its random motions, will experience all possible configurations and so will have a greater probability of being in the region where the greatest number of such configurations (or states) is concentrated. In fact, the so-called doubly magic nucleus tin-132, with 50 protons and 82 neutrons, has a rather low yield in low-energy fission. It is evident that shell effects, both in the fissioning system at the saddle point and in the deformed fragments near the scission point, are important in interpreting many of the features of the fission process. Recent textbooks bring together these various aspects of the nuclear fission phenomenon discovered by Hahn, Strassmann, and Meitner almost 70 years ago.
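A quick back-of-the-envelope check of the 0.9 MeV-per-nucleon and roughly 200 MeV figures can be made from the binding-energy values quoted above, plus the roughly 8.5 MeV per nucleon typical of mid-mass fission fragments (a standard textbook value supplied here for illustration, not taken from the text):

```python
# Rough energy-release estimate from the binding-energy curve.
BE_ACTINIDE = 7.6        # MeV per nucleon, actinides (quoted above)
BE_FRAGMENT = 8.5        # MeV per nucleon, mid-mass fragments (assumed typical value)
A_U235 = 235             # nucleons in uranium-235
AMU_MEV = 931.5          # energy equivalent of 1 atomic mass unit, MeV

per_nucleon = BE_FRAGMENT - BE_ACTINIDE          # ~0.9 MeV per nucleon
total_mev = per_nucleon * A_U235                 # ~210 MeV per fission
mass_fraction = total_mev / (A_U235 * AMU_MEV)   # ~0.1% of the nuclear mass

print(f"energy released per nucleon : {per_nucleon:.1f} MeV")
print(f"energy released per fission : {total_mev:.0f} MeV")
print(f"fraction of mass converted  : {mass_fraction:.2%}")
```

The result, about 210 MeV and roughly 0.1 percent of the nuclear mass, matches the figures in the passage above to within the accuracy of such an estimate.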
After the Fermi publication, Otto Hahn, Lise Meitner, and Fritz Strassmann began performing similar experiments in Berlin. Marie Curie had been separating barium from radium for many years, and the techniques were well known. By coincidence, Meitner's nephew Otto Robert Frisch, also a refugee, was in Sweden when Meitner received a letter from Hahn dated 19 December describing his chemical proof that some of the product of the bombardment of uranium with neutrons was barium. The results confirmed that fission was occurring and hinted strongly that it was the isotope uranium-235 in particular that was fissioning. It was thus a possibility that the fission of uranium could yield vast amounts of energy for civilian or military purposes (electric power generation or atomic bombs). In August 1939, Szilard and fellow Hungarian refugee physicists Teller and Wigner thought that the Germans might make use of the fission chain reaction and were spurred to attempt to attract the attention of the United States government to the issue. Both approaches to obtaining fissile material (separating uranium-235 and breeding plutonium) were extremely novel and not yet well understood, and there was considerable scientific skepticism at the idea that they could be developed in a short amount of time. Ironically, Frisch and Peierls, who had set out the case for a uranium bomb in their 1940 memorandum, were still officially considered "enemy aliens" at the time. The critical chain-reaction success of Chicago Pile-1 (December 2, 1942), which used unenriched (natural) uranium, like all of the atomic "piles" that produced the plutonium for the atomic bomb, was due specifically to Szilard's realization that very pure graphite could be used as the moderator of even natural-uranium "piles". Among the Manhattan Project's dozens of sites were the Hanford Site in Washington, which had the first industrial-scale nuclear reactors and produced plutonium; Oak Ridge, Tennessee, which was primarily concerned with uranium enrichment; and Los Alamos, in New Mexico, which was the scientific hub for research on bomb development and design. Even the first fission bombs were thousands of times more explosive than a comparable mass of chemical explosive, and the energy dynamics of pure fission bombs always remain at about a 6 percent yield of the total in radiation as a prompt result of fission.

On the physics: use of ordinary water (as opposed to heavy water) in nuclear reactors requires enriched fuel, the partial separation and relative enrichment of the rare 235U isotope from the far more common 238U isotope. Some processes involving neutrons are notable for absorbing or finally yielding energy; for example, neutron kinetic energy does not yield heat immediately if the neutron is captured by a uranium-238 atom to breed plutonium-239, but this energy is emitted if the plutonium-239 is later fissioned. In-situ plutonium production likewise contributes to the neutron chain reaction in other types of reactors once sufficient plutonium-239 has been produced, since plutonium-239 is itself a fissile element that serves as fuel. The fission of U-235 by a slow neutron yields nearly identical energy to the fission of U-238 by a fast neutron, and that same fast-fission effect is used to augment the energy released by modern thermonuclear weapons, by jacketing the weapon with 238U to react with neutrons released by nuclear fusion at the centre of the device. A similar process occurs in fissionable but non-fissile isotopes such as uranium-238, which in order to fission require the additional energy provided by fast neutrons (such as those produced by nuclear fusion in thermonuclear weapons). Apart from fission induced by a neutron, harnessed and exploited by humans, a natural form of spontaneous radioactive decay (not requiring a neutron) is also referred to as fission and occurs especially in very high-mass-number isotopes; most nuclear fuels undergo spontaneous fission only very slowly, decaying instead mainly via an alpha-beta decay chain over periods of millennia to eons. Devices that produce fission without a self-sustaining chain reaction use radioactive decay or particle accelerators to trigger fissions. The ternary process is less common than binary fission, but it still ends up producing significant helium-4 and tritium gas buildup in the fuel rods of modern nuclear reactors.

On the theory: nuclei that have more than 20 protons cannot be stable unless they have more than an equal number of neutrons, and the variation in specific binding energy with atomic number is due to the interplay of the two fundamental forces acting on the component nucleons (protons and neutrons) that make up the nucleus. Dealing with the mutual interaction of all the nucleons in a nucleus has been simplified by treating it as if it were equivalent to the interaction of one particle with an average spherical static potential field generated by all the other nucleons. The spherical-shell model, however, does not agree well with the properties of nuclei that have other nucleon numbers, such as the nuclei of the lanthanide and actinide elements, with nucleon numbers between the magic numbers. Calculating the single-particle level structure as a function of the deformation of the nucleus was first done by Aage Bohr, Ben R. Mottelson, and Sven G. Nilsson in 1955. The potential energy is calculated as a function of various parameters of the system being studied. The liquid-drop picture predicts, however, a symmetric division of mass in fission, whereas an asymmetric mass division is observed. Class II states (also called shape isomers) involve a change of shape relative to class I states; these isomers have a much smaller barrier to penetrate and so exhibit a much shorter spontaneous-fission half-life. (A figure accompanying the original text schematically illustrated single-humped and double-humped fission barriers, the former drawn as a dashed line and the latter as a continuous line.) The systematics of neutron-induced fission cross sections and the structure in some fission-fragment angular distributions also find an interpretation in the implications of the double-humped barrier. The theory behind nuclear reactors is based first on the principles of nuclear fission, and textbook treatments, beginning with a historical introduction, present various models to describe the fission of hot nuclei as well as the spontaneous fission of cold nuclei and their isomers.
At Princeton, the news of nuclear fission spread even further, which fostered many more experimental demonstrations; Rabi said he told Enrico Fermi, and Fermi gave credit to Lamb. When Meitner had first received Hahn's letter, Frisch was skeptical, but Meitner trusted Hahn's ability as a chemist.

Nuclear fission in fissile fuels is the result of the nuclear excitation energy produced when a fissile nucleus captures a neutron. If enough nuclear fuel is assembled in one place, or if the escaping neutrons are sufficiently contained, then these freshly emitted neutrons outnumber the neutrons that escape from the assembly, and a sustained nuclear chain reaction will take place. Not all fissionable isotopes can sustain a chain reaction, but fissionable, non-fissile isotopes can still be used as a fission energy source even without one. For heavy nuclides, fission is an exothermic reaction which can release large amounts of energy both as electromagnetic radiation and as kinetic energy of the fragments (heating the bulk material where fission takes place), and the amount of free energy contained in nuclear fuel is millions of times the amount of free energy contained in a similar mass of chemical fuel such as gasoline, making nuclear fission a very dense source of energy. In a reactor running at steady state, the 6.5 percent of fission energy which appears as delayed ionizing radiation (delayed gammas and betas from radioactive fission products) contributes to the steady-state heat production under power. By 2013, there were 437 reactors in 31 countries.

On the theoretical side, although the single-particle models provide a good description of various aspects of nuclear structure, they are not successful in accounting for the energy of deformation of nuclei (i.e., surface energy), particularly at the large deformations encountered in the fission process, and investigators have found that mass asymmetry and certain other features of fission cannot be adequately described on the basis of the collective behaviour posited by such models alone. A spheroid has three axes of symmetry, and it can rotate in space as a unit about any one of them. The extra binding energy of a nucleus with an even number of neutrons results from the Pauli exclusion principle allowing an extra neutron to occupy the same nuclear orbital as the last neutron in the nucleus, so that the two form a pair. A two-centre description of the nuclear potential is equivalent to a one-centre potential when there is complete overlap at small deformations, and it has the correct asymptotic behaviour as the nascent fragments separate. Bohr proposed the so-called compound-nucleus description of nuclear reactions, in which the excitation energy of the system formed by the absorption of a neutron or photon, for example, is distributed among a large number of degrees of freedom of the system.
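The chain-reaction condition described above (freshly emitted neutrons outnumbering those lost) is usually summarised by an effective multiplication factor k. The toy simulation below is purely illustrative, not drawn from the source text: the neutron yield of 2.4 per fission is a typical uranium-235 value supplied here, and the loss probabilities are arbitrary. It simply shows how generation-to-generation neutron counts die out, hold steady, or grow depending on k.

```python
# Toy generation-by-generation chain-reaction model (illustrative only).
# k = (neutrons per fission) x (probability a neutron causes another fission).
NU = 2.4                 # average neutrons per U-235 fission (typical value)

def neutron_population(p_fission: float, generations: int = 10) -> list[float]:
    """Expected neutron count per generation, starting from one neutron."""
    k = NU * p_fission   # effective multiplication factor
    return [k ** g for g in range(generations)]

for p in (0.35, 1 / NU, 0.45):      # subcritical, exactly critical, supercritical
    pop = neutron_population(p)
    k = NU * p
    print(f"k = {k:4.2f}: generation 9 population = {pop[-1]:8.2f}")
```

With k below 1 the population decays toward zero (subcritical), at k = 1 it holds steady (critical), and above 1 it grows exponentially, which is the regime a reactor's control systems are designed to avoid.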
Nuclear fission is a complex process that involves the rearrangement of hundreds of nucleons in a single nucleus to produce two separate nuclei. The most common fission process is binary fission, which produces fission products at about 95 ± 15 and 135 ± 15 u. Nuclear fission of heavy elements produces exploitable energy because the specific binding energy (binding energy per mass) of intermediate-mass nuclei with atomic numbers and atomic masses close to 62Ni and 56Fe is greater than the nucleon-specific binding energy of very heavy nuclei, so that energy is released when heavy nuclei are broken apart. The excess mass Δm = M − Mp is the invariant mass of the energy that is released as photons (gamma rays) and kinetic energy of the fission fragments, according to the mass-energy equivalence formula E = mc². Similarly, when two light nuclei such as hydrogen-2 (deuterium) fuse to form a heavier, more stable nucleus, the mass of the product is not equal to the sum of the masses of the initial lighter nuclei. The fission process often produces gamma photons and releases a very large amount of energy even by the energetic standards of radioactive decay; in a nuclear fission explosion or criticality accident, about 3.5 percent of the energy is emitted as gamma rays and less than 2.5 percent as fast neutrons (a total of about 6 percent in both types of radiation), with the rest appearing as kinetic energy of the fission fragments, which shows up almost immediately as simple heat when the fragments strike surrounding matter. All fissionable and fissile isotopes undergo a small amount of spontaneous fission, which releases a few free neutrons into any sample of nuclear fuel, and in a chain reaction the free neutrons released by each fission event can trigger yet more events, which in turn release more neutrons and cause more fission. While the fundamental physics of the fission chain reaction in a nuclear weapon is similar to the physics of a controlled nuclear reactor, the two types of device must be engineered quite differently (see nuclear reactor physics).

Historically, to attract the attention of the United States government, Szilard and his colleagues persuaded the German-Jewish refugee Albert Einstein to lend his name to a letter directed to President Franklin Roosevelt, and to slow down the secondary neutrons released by fissioning uranium nuclei, Fermi and Szilard proposed a graphite "moderator", against which the fast, high-energy secondary neutrons would collide and effectively be slowed down.

On the theory side, general properties of nuclear fission have been reviewed and related to present knowledge of fission theory, and the successes and failures of the models in accounting for the various observations of the fission process can provide new insights into the fundamental physics governing the behaviour of real nuclei, particularly at the large nuclear deformations encountered in a nucleus undergoing fission. Either extreme view of the dynamics between the saddle point and scission represents an approximation of complex behaviour, and some experimental evidence may be advanced in support of either interpretation.
The fission process may take place spontaneously in some cases or may be induced by the excitation of the nucleus with a variety of particles (e.g., neutrons, protons, deuterons, or alpha particles) or with electromagnetic radiation in the form of gamma rays. In a nuclear reactor or nuclear weapon, the overwhelming majority of fission events are induced by bombardment with another particle, a neutron, which is itself produced by prior fission events. Sufficiently energetic neutrons are able to fission U-238 directly (see thermonuclear weapon for an application in which the fast neutrons are supplied by nuclear fusion). This nuclear energy has been used in both destructive and constructive ways. The hybrid fusion-fission concept dates to the 1950s and was briefly advocated by Hans Bethe during the 1970s, but it remained largely unexplored until a revival of interest in 2009, due to the delays in the realization of pure fusion. For uranium-235, the delayed energy per fission is divided into about 6.5 MeV in betas, 8.8 MeV in antineutrinos (released at the same time as the betas), and an additional 6.3 MeV in delayed gamma emission from the excited beta-decay products (for a mean total of about 10 gamma-ray emissions per fission in all). If one atomic mass unit of mass is lost in nuclear fission, that lost mass yields nearly 931 MeV of energy.

Historically, in 1911 Ernest Rutherford proposed a model of the atom in which a very small, dense and positively charged nucleus of protons was surrounded by orbiting, negatively charged electrons (the Rutherford model). The possibility of isolating uranium-235 was technically daunting, because uranium-235 and uranium-238 are chemically identical and vary in their mass by only the weight of three neutrons. President Roosevelt received Einstein's letter on 11 October 1939, shortly after World War II began in Europe but two years before U.S. entry into it.

On the theory side, the theoretical description of fission is not only important for applications to energy production; it is also a crucial test of our understanding of quantum many-body dynamics. Reviews of the theory typically treat calculations of the fission barrier first and then the fission probability, and, in particular, conclusions are drawn regarding the variation from nucleus to nucleus of the critical energy required for fission and regarding the dependence of the fission cross section for a given nucleus on the energy of the exciting agency. For both neutrons and protons, the magic numbers are 2, 8, 20, 28, 50, 82, and 126, and the extra binding energy for closed-shell nuclei leads to a higher density of states at a given excitation energy than is present for other nuclei and, hence, to a higher probability of formation. The formation of two doubly magic fragments of tin-132 is strongly favoured energetically, whereas the formation of only one such fragment in the low-energy fission of uranium or plutonium isotopes is not. In the transition-state (channel) analysis, fission thresholds would depend on the spin and parity of the compound nuclear state, the fission-fragment angular distribution would be governed by the collective rotational angular momentum of the state, and asymmetry in the mass distribution would result from passage over the barrier in a state of negative parity (which does not possess reflection symmetry).
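The 931 MeV figure quoted above follows directly from E = mc². The short check below uses standard physical constants (values supplied here for illustration, not taken from the text) to convert one atomic mass unit into MeV and to express the roughly 200 MeV per fission as a mass change.

```python
# Verify that 1 atomic mass unit corresponds to roughly 931 MeV via E = m c^2.
# Constant values are standard reference numbers supplied for illustration.
AMU_KG = 1.66053907e-27       # 1 atomic mass unit in kilograms
C = 299_792_458.0             # speed of light, m/s
JOULE_PER_MEV = 1.602176634e-13

energy_joule = AMU_KG * C**2
energy_mev = energy_joule / JOULE_PER_MEV
print(f"E = mc^2 for 1 u: {energy_mev:.1f} MeV")   # ~931.5 MeV

# The ~200 MeV released per uranium fission is therefore a mass change of
delta_m_u = 200.0 / energy_mev
print(f"200 MeV corresponds to {delta_m_u:.3f} u of mass, "
      f"about {delta_m_u / 236:.2%} of a U-236 compound nucleus")
```

This reproduces both the 931 MeV conversion factor and the statement earlier that only about 0.1 percent of the nuclear mass is converted to energy in a fission event.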
Elemental isotopes that undergo induced fission when struck by a free neutron are called fissionable; isotopes that undergo fission when struck by a slow-moving thermal neutron are also called fissile. Nuclear fission occurs when a neutron collides with the nucleus of a large atom such as uranium and is absorbed into it, causing the nucleus to become unstable and split into two smaller, more stable nuclei with the release of more neutrons and a considerable amount of energy. In theory, if in a neutron-driven chain reaction the number of secondary neutrons produced is greater than one, then each such reaction can trigger multiple additional reactions, producing an exponentially increasing number of reactions. In nuclear fuel this results in the production of heat as well as the creation of radioactive fission products, and the heat energy of the fission fragments is harnessed as nuclear power and turned into electricity; non-self-sustaining assemblies, by contrast, are known as subcritical fission reactors. Both uses of nuclear energy, power generation and weapons, are possible because certain substances called nuclear fuels undergo fission when struck by fission neutrons and in turn emit neutrons when they break apart, and in engineered nuclear devices essentially all nuclear fission occurs as a "nuclear reaction", a bombardment-driven process that results from the collision of two subatomic particles. Nuclear fuel contains at least ten million times more usable energy per unit mass than does chemical fuel.

Nuclei are bound by an attractive nuclear force between nucleons, which overcomes the electrostatic repulsion between protons; additional neutrons add to this attractive strong force (which acts between all nucleons) without adding to the proton-proton repulsion. Shell closures at the magic nucleon numbers are marked by especially strong binding, or extra stability, although no odd-even effect is observed in the fragment mass distribution, and the binary mode of fission appears to occur simply because it is the most probable outcome.

On the historical side, barium has an atomic mass about 40 percent less than uranium, and no previously known method of radioactive decay could account for such a large difference in the mass of the nucleus. During this period the Hungarian physicist Leó Szilárd realized that the neutron-driven fission of heavy atoms could be used to create a nuclear chain reaction; with some hesitation, Fermi agreed to self-censor publication of sensitive results, and work by Szilard and Walter Zinn confirmed that fission releases secondary neutrons. In Birmingham, England, Frisch teamed up with Peierls to examine the feasibility of a uranium bomb, and a full-scale project to make atomic weapons was begun in late 1942. The problem of producing large amounts of high-purity uranium metal was solved by Frank Spedding's group at the Ames Laboratory. Today the needs of fission theory are even broader, with the recognition of new connections to other disciplines such as astrophysics and fundamental science.
A scientist who successfully connected a moth’s brain to a robot predicts that in 10 to 15 years we’ll be using “hybrid” computers running a combination of technology and living organic tissue.
Charles Higgins, an associate professor at the University of Arizona, has built a robot that is guided by the brain and eyes of a moth. Higgins told Computerworld that he basically straps a hawk moth to the robot and then puts electrodes in neurons that deal with sight in the moth’s brain. Then the robot responds to what the moth is seeing — when something approaches the moth, the robot moves out of the way. […]
This organically guided, 12-in.-tall robot on wheels may be pushing the technology envelope right now, but it’s just the seed of what is coming in terms of combining living tissue with computer components, according to Higgins.
“In future decades, this will be not surprising,” he said. “Most computers will have some kind of living component to them. In time, our knowledge of biology will get to a point where if your heart is failing, we won’t wait for a donor. We’ll just grow you one. We’ll be able to do that with brains, too. If I could grow brains, I could really make computing efficient.” |
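As a purely illustrative sketch, and not code from Higgins's project, the control loop for such a hybrid robot might look something like the following: read a spike rate from an electrode on a motion-sensitive visual neuron, and steer away when that rate crosses a threshold. The signal source, threshold value, and motor commands below are invented stand-ins.

```python
import random
import time

# Hypothetical stand-ins: a real system would read electrode voltages from a
# data-acquisition board and drive real motor controllers.
def read_spike_rate() -> float:
    """Pretend spike rate (spikes/s) from a motion-sensitive visual neuron."""
    return random.uniform(0.0, 120.0)

def drive(left: float, right: float) -> None:
    """Pretend motor command; values are wheel speeds in arbitrary units."""
    print(f"wheels -> left={left:+.1f} right={right:+.1f}")

LOOMING_THRESHOLD = 80.0   # spikes/s treated as "something is approaching"

def control_loop(steps: int = 5) -> None:
    for _ in range(steps):
        rate = read_spike_rate()
        if rate > LOOMING_THRESHOLD:
            drive(-1.0, +1.0)   # turn away from the perceived obstacle
        else:
            drive(+1.0, +1.0)   # keep rolling forward
        time.sleep(0.05)        # ~20 Hz control loop

if __name__ == "__main__":
    control_loop()
```

The point of the sketch is only the architecture: living tissue supplies the sensing, while conventional software closes the loop between that biological signal and the robot's motors.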
Rabies is an often deadly viral infection that is mainly spread by infected animals.
Rabies is spread by infected saliva that enters the body through a bite or broken skin. The virus travels from the wound to the brain, where it causes swelling, or inflammation. This inflammation leads to symptoms of the disease. Most rabies deaths occur in children.
In the past, human cases in the United States usually resulted from a dog bite, but recently, more cases of human rabies have been linked to bats and raccoons. Although dog bites are a common cause of rabies in developing countries, there have been no reports of rabies caused by dog bites in the United States for a number of years due to widespread animal vaccination.
Other wild animals that can spread the rabies virus include:
Very rarely, rabies has been transmitted without an actual bite. This is believed to have been caused by infected saliva that has gotten into the air.
The United Kingdom once completely eradicated rabies, but recently rabies-infected bats have been found in Scotland.
The actual time between infection and when you get sick (called the "incubation period") ranges from 10 days to 7 years. The average incubation period is 3 to 7 weeks.
Symptoms may include:
If an animal bites you, try to gather as much information about the animal as possible. Call your local animal control authorities to safely capture the animal. If rabies is suspected, the animal will be watched for signs of rabies.
A special test called immunofluorescence is used to look at the brain tissue after an animal is dead. This test can reveal whether or not the animal had rabies.
The same test can be used to check for rabies in humans, using a piece of skin from the neck. Doctors may also look for the rabies virus in your saliva or spinal fluid.
Clean the wound well with soap and water, and seek professional medical help. You'll need a doctor to thoroughly clean the wound and remove any foreign objects. Most of the time, stitches should not be used for animal bite wounds.
If there is any risk of rabies, you will be given a series of preventive vaccine doses. The series is generally given as 5 doses over 28 days.
Most patients also receive a treatment called human rabies immunoglobulin (HRIG). This is given the day the bite occurred.
There is no known effective treatment for people with symptoms of a rabies infection.
It's possible to prevent rabies if immunization is given within 2 days of the bite. To date, no one in the United States has developed rabies when given the vaccine promptly and appropriately.
Once the symptoms appear, few people survive the disease. Death from respiratory failure usually occurs within 7 days after symptoms start.
Untreated, rabies can lead to coma and death.
In rare cases, some people may have an allergic reaction to the rabies vaccine.
Go to the emergency room or call the local emergency number (such as 911) if an animal bites you.
To help prevent rabies: |
Have you ever heard a person say, “I took three years of high school Spanish and don’t speak a word”? Can language be learned in a classroom? A typical class has 30 students, lasts 60 minutes and has one teacher. The key to remembering the meaning of a word or sentence is pronunciation. If you can’t say a word, you will not remember its meaning. Mastering pronunciation requires speaking Spanish. If the teacher is able to speak with every student in a 60-minute class, each student will be able to practice pronunciation for two minutes. A desktop, laptop, tablet or smartphone can provide unlimited time and repetition to master pronunciation and achieve fluency. At home, at school, at the library, or at the park, online basic Spanish lessons for beginners allow the learner to see it, hear it, say it, and write it as many times as needed to become bilingual.
Where is a secondary student to turn? The secondary classroom typically provides a good foundation for grammar. What is needed is practice speaking Spanish. There are a number of online alternatives which can be Googled: basic Spanish tutorial, Spanish tutorial for beginners, best website to learn Spanish, or how to speak basic Spanish. All of these suggestions will take the student to sites that combine the Audio Lingual Method with modern online learning, following the see it, hear it, say it, write it teaching method used at the United States Department of Foreign Language Institute.
Democracies and one-party states
Europe’s mediaeval Parliaments were devised as a way of formalising the [European] notion that a King (or very occasionally Queen) must consult with a Council. As far as historians can make out, the King could summon anyone he liked to his Council – but risked rebellion if he left out anyone important. And it was remembered that both Classical Greece and Classical Rome had systems of elected representatives. And there were even elections for bishops, abbots and abbesses, up until the Counter-Reformation when it became a matter of papal appointment. And popes have always been elected, though voters were a tiny oligarchy appointed by previous popes.
This tradition could easily lead on to the creation of a Parliament within a kingdom, to advise the monarch and to advise on taxes.
Parliaments had formal rules as to which men had a right to attend. (I’m pretty sure no woman did before the 20th century.) The more powerful aristocrats were there, along with bishops. And there was usually a House of Commons – not for what we’d call the common people, but for knights and gentry and rich merchants. They were too numerous and had too little power to attend individually, but they counted enough to make it sensible to let them choose representatives. Besides, they often supported the monarch against the nobles.
In most of Europe, parliaments faded as the modern state developed. Monarchs either abolished parliaments and their equivalents, or simply did not summon them.
In England, the standard set in 1432 was that voters for the House of Commons were men with property worth 40 shillings, at a time when the Pound Sterling had not drifted too far from its original meaning of a pound weight of pure silver. In terms of economic power, 40 shillings was about £800,000 in modern money. A few constituencies gave votes to anyone who owned their own house: they were the extreme of democracy in that era.
This limit in no way contradicted the heritage of ‘democracy’ in Greece or Rome. A majority of city residents were not citizens. Rome had several complex voting systems, all of them intended to give much greater voting power to the rich. The Senate was a job for life, meaning that it was even more biased towards the rich.
Parliaments forming factions or parties was viewed as undesirable in Britain until the 18th century, when it became accepted as normal. This was the result of a series of British Wars extending from the First Bishops’ War of 1639 to the ‘Glorious Revolution’ of 1688. But if the Protestant branch of the Stuart dynasty had not died out in 1714 without close Protestant heirs, Britain’s parliament might also have lapsed or been abolished. Sweden had an Absolute Monarchy from 1680 to 1719 and again from 1772 to 1809. In both cases, it was military defeats that discredited the monarchy. Whereas in Britain, the various wars went well but the monarchs lost control. Kings George 1st and George 2nd, ruling from 1714 to 1760, spoke little English and saw the British monarchy as a useful boost to their main role as Elector of Hanover. George 3rd recovered considerable power, but only by cultivating supporters in Parliament. And he had to moderate this after Britain’s North American colonies successfully rebelled, having initially asked just for representation in the Westminster Parliament.
Neither George 3rd nor his main enemies were democratic in the modern sense. Only a few extreme radicals wanted a vote for all men. The later Chartist movement did not ask for votes for women, something France denied until 1944. Radicals often opposed it, correctly suspecting that a majority of women would vote for conservative candidates. This was certainly true in Britain up until Thatcher: you could argue that Thatcher offended many conservative-minded women by attacking long-standing systems that had been working well.
Why do I include all this history in a magazine of philosophy? [A] To show clearly that most political thinkers formulate theories of democracy that have little relationship to anything that ever happened in the real world.
Multi-party parliamentary democracy is very far from a natural condition for the human race. Britain evolved it slowly, with real democracy the last element to be added. You could claim it existed from 1884, if you think a country can be democratic when 40% of men and all women are denied the vote. And if you are not bothered that it rules a vast Empire through officials not chosen by those they rule.
Only in 1918 did a majority of British adults get the vote. Much larger populations in the British Empire played no part in choosing the Westminster Parliament. If they were white, they would have their own parliament with limited power. The Empire was wound up without ever settling how far these parliaments could defy Westminster. Each had a strong majority for joining in both World Wars, though without conscription. In 1918, the majority of Irish MPs constituted themselves as a separate Parliament (Dail Eireann). There was a War of Independence before they settled for Home Rule and Partition. Under De Valera, they claimed the right to stay neutral in World War Two, establishing functional independence. Churchill contemplated an invasion, but wisely decided against this. It did not become formally independent as the Republic of Ireland until 1948, but it was World War Two that was the key moment.
You need look no further than Ireland to see that democracies can and do go to war with each other, which makes it amazing that ‘Political Scientists’ can believe otherwise. For that matter, each of the countries that started World War One had a multi-party parliament that had to say ‘yes’. Except in Russia, they represented a majority of adult males in the Imperial core, though mostly excluding or marginalising territories outside of Europe.
For that matter, the US Civil War was launched by rival US States that had arrived at incompatible views on slavery and secession after open multi-party elections in which most white men could vote. (Free blacks even in the North had mostly been excluded from voting.[B]) And during this Civil War there was nearly a war between the Federal USA and the British Empire, though this was 20 years before Britain became even loosely democratic.
Discussions of democracy come bundled with a fixed belief that a political system that lacks rival political parties is not a democracy.[C] That’s just what I want to question.
The failure of many attempts to suddenly introduce multi-party parliamentary democracy is not at all surprising if you know your history. Nor is it amazing that Poland, Hungary, Turkey and now the Czech Republic are electing parties and leaders not to the taste of most Britons. You get the absurdity of people calling these elections ‘undemocratic’, when a clear majority vote in a way they disapprove of. I’d not have voted for anything similar to those parties, if they existed in Britain. I also hold the view, no doubt controversial, that these are authentic conservatives of a sort that has not existed in Britain since Thatcher transformed the Tory Party.
A two-party democracy can work well if most people want much the same things: if none of the parties likely to form a government wants anything that many citizens think worth dying for, or worth dying to prevent.
In the USA, Lincoln said that he had no intention of trying to abolish slavery in states within the Union, and also did not think this could be legally done without a Constitutional Amendment. But in the territories, newly settled and not recognised as States, he could and would ban slavery. That, and in particular the status of Kansas, which had its own little Civil War over whether it was to be a Slave State, was the key issue behind secession. And most Northern politicians thought it was worth dying to preserve the Union, even if they had wished to see slavery extended to Kansas and beyond.
Had the US Civil War ended with a quick Northern victory, slavery might have lasted a lot longer. It was only because most of the former Slave States had not yet been re-admitted as valid States that various amendments were passed abolishing slavery and theoretically creating racial equality. (Not done formally till the 1960s, with Civil Rights laws, and in practice not even now.)
Irish Independence was another issue that people thought worth dying over. Norwegians felt the same about their own claim to independence from Sweden, but in 1905 the Swedes allowed it without a war. This also happened for Slovenia within Former Yugoslavia, but Serbs wanted a right of secession for majority-Serb areas of Croatia and Bosnia. Many died over the issue, with Serb minorities ending up fleeing Croatia, and with Bosnia still stalemated.
South Sudanese fought for many years to be separate from the Arab majority in the rest of Sudan. Now they fight each other over regional differences that are also considered worth dying for. Likewise Biafra against the rest of Nigeria. Their failure put off many other African separatists, who only thought it worth dying for if they had a decent chance for victory.
Both Pakistan / Bangladesh and Sri Lanka (former Ceylon) saw vicious civil wars arise out of electoral politics that exposed differences both sides thought worth dying for.
Social habits make human life possible. They also get in the way of fully understanding it. Britain learned the habits of multi-party politics for a couple of centuries before it became even loosely democratic. The ‘Glorious Revolution’ of 1688 established that no monarch could rule without Parliament. But until the 1832 Reform, the House of Commons was dominated by a couple of hundred rich families. Elections from 1832 gave power to [one-fifth or] one-seventh of the male population: a prosperous middle class. The same sort of people who had created chaos when given power in the early stages of the French Revolution. But the French bourgeois and peasants had no existing framework in which to slot their various hopes, fears, and desires. They ended up settling for Napoleon’s popular military autocracy.
Partly in reaction to challenging radical politics, there was a mild democratisation of existing politics in Britain. The ruling class remembered their 17th century Civil War and avoided extremes. This went ahead slowly, with a gradual widening of the franchise, and with a majority of adult males in the British Isles getting the vote in the 1880s.[D]
Understanding this process of creating habits would have avoided the frequent failures of Western interventions in societies with alien traditions, as in Iraq: the foolishness of attempts to dump a complex political system on people without the relevant habits created by their own history.
It likewise helps understand the Soviet Union.
Lenin and the other Bolsheviks had started out with Revolutionary Democracy – the Constituent Assembly was 78% socialist and only 7.5% liberal, though without a Bolshevik majority.[E] The Bolsheviks also had an absolute majority in the All-Russian Congress of Soviets, which was arguably a more democratic assembly.[F] The later suppression of opposition parties is regrettable, but may have been unavoidable, given foreign intervention and White forces dominated by the Far Right.
Multi-party systems depend on the government and the major opposition parties having no differences that are seen as worth dying for or worth killing for. Britain’s well-established system broke down in Ireland, where rival parties felt that winning Irish Independence or even Irish Home Rule was worth dying for, or worth dying to prevent. It has been seriously suggested that Britain would have had a Civil War between Liberal and Tory, had not the First World War come along and changed everything. It certainly caused multiple conflicts in Ireland.
Multi-party political systems often produce weak and corrupt government. Up to a point, this favours the rich. They certainly hate paying taxes and they hate having regulations imposed on them. The general trend from the 1980s has been against regulation and against tax. This has supporters among centrists and left-wingers as well as business interests, but has done no good to overall economic growth. Singapore, highly regulatory and virtually a one-party state, has been a brilliant economic success. China has in essence copied Singapore and other East Asian autocracies, while also retaining the long-term goal of socialism. Both Xi and the previous leader Hu Jintao declared that inequality must be reduced. It has indeed levelled off, contradicting Thomas Piketty’s belief that only massive war and destruction can achieve this.
The Leninist alternative of Democratic Centralism is a system of crude democracy that can survive and get things done in harsh conditions: a goat that thrives where more delicate creatures would perish. But it also runs the risk of disintegrating when it tries major reform. And it all depends on which issues people think worth dying for.
Khrushchev, who had been raised from obscurity by Stalin and had always been viewed as a loyal supporter of Stalin, created chaos when he suddenly criminalised Stalin and claimed to be reaching back to a superior Leninist past. It was Lenin who abolished Russia’s brief attempts at conventional Western politics by dispersing the Constituent Assembly, though similar attempts had failed everywhere east of Berlin even before Hitler came to power, apart from Czechoslovakia.
Stalin almost certainly saved Lenin’s system from collapse, and made it enormously strong. If you think it would have been better had it collapsed, fair enough – but how then would Hitler have been defeated? And would the progressive reforms of the 1940s, 1950s and 1960s have happened in the West, without the massive challenge of a Soviet system that was powerful and a serious rival until it started wilting in the 1970s?
Contrary to what most people now think, the Soviet system was a very serious rival, politically and economically, up until the 1970s. Had the radical Leninist politicians in Czechoslovakia been copied rather than crushed by Brezhnev’s 1968 invasion, a very different world might have emerged. And one-party systems would most likely have continued to be seen as a valid option. Much as may be happening now with China’s continuing success.
In China, Deng has never apologised for anything Mao did. What he counted as Mao’s errors were basically his deviations from the system that Chinese Communism had copied from Stalin’s Soviet Union: the Great Leap Forward and the Cultural Revolution. If you look at overall economic growth under Mao – which Western books about China always fail to do – there is a lot of sense in this.[G]
The 1989 Tienanmen Square protests were resolved by the regime being ready to kill, while very few of the protestors were ready to die. In Eastern Europe, and then in the 1991 Soviet collapse, there was a remarkable absence of anyone ready to die for the old order, or ready to kill to preserve it.
Though many intellectuals see a vast gulf between Lenin and Stalin, or sometimes between Marx and Lenin, no effective politics has ever emerged from such views. No effective revolutionary movement, and no radical movement that evolved to effective reformism, as the once-Maoist Socialist Party in the Netherlands has done.[H]
Western democracy did not thrive in the years between the two World Wars. Mussolini and Hitler both received their dictatorial power quite legally from parliament. So did the authoritarian governments of Imperial Japan. Poland, the country that Britain and France started the World War to defend, had been a popular dictatorship since 1926, when Pilsudski made himself boss of the Republic he had helped to create. He may well have saved Poland from immediate collapse, but he also ran a right-wing regime with little of his original socialism implemented. But virtually all Poles continue to regard him as a hero.
Poland’s dictatorship was part of a trend. The same year saw the fall of the First Portuguese Republic, after nine presidents and 44 ministries in its 16-year history.
When Hitler came to power in 1933, parliamentary democracy had perished everywhere east of Berlin, excluding only Czechoslovakia, which was a Czech hegemony over many minorities. He was initially the 13th Chancellor of the short-lived Weimar Republic.
Without the Soviet Union, would the whole world have become fascist? It was heading that way. Two-thirds of the armies of Nazi Germany were fighting on the Eastern Front to the end of the war. While the USA theoretically had the men and the resources to defeat three times as many German troops as they actually faced along with the British and other allies, it is doubtful that they were ready for such sacrifices.
World War Two killed 70 to 85 million people,[J] including 9 to 12 million killed for racial reasons by Nazi Germany at the expense of their own war effort. Exact figures are uncertain, and Jews were not the only target. But six million Jews was the SS’s own estimate.
Total German deaths were 6,900,000 to 7,400,000, according to Wikipedia, including Germans outside the state as it was in 1933. Only a very small number of these were German or Austrian Jews: neither were large communities, and many were driven out by Nazi persecution before the war started.
Since he started an avoidable war, and chose to expand it, Hitler must be held responsible for the deaths of some seven million non-Jewish Germans, and for at least 50 million deaths overall. He did not directly cause the 15,000,000 to 20,000,000 Chinese deaths resulting from Japan’s attempt to conquer China, nor the 2,500,000 to 3,100,000 Japanese deaths during their wars against first China and then the USA. But Hitler certainly encouraged it.
Liberals get baffled by talk of a Democratic Dictatorship. That’s because they can’t imagine anything outside of the systems they know, imperfect though these are. If you know the wider history, you’d know that England’s Parliamentary system was never intended to be a democracy. And that though parties existed, there and in many other Western European parliaments, parties were seen as undesirable.
Until their 19th century democratisation, Parliaments were a way of getting upper-class consent for important matters. For negotiating extra taxes, when the need was urgent. The ‘Commons’ were there to give a voice to those not rich enough or powerful enough to be in the House of Lords. Members of the House of Commons were originally selected by just a small minority. And until the 1870s, voting was public, meaning that ordinary people were under pressure to vote as their landlord or employer wished them to.
The system was expanded in the 1880s to have MPs chosen by 60% of men, and then from 1918 by all men, and women over 30. Which does not make it that great a system. Electors can choose freely between candidates from several rival parties: but parties need not deliver anything like what those voters actually wanted. They often produce weak governments that please no one, which helped the rise of fascism in Italy and Germany.
Multi-party politics can also generate bitter conflicts that lead on to Civil War, as happened in 1930s Spain. As has since happened in places as different as Nigeria, Former Yugoslavia, Ukraine, and Sri Lanka (Ceylon). An indecisive election also generated the non-violent split between Czechs and Slovaks. The Republic of India is hopefully too diverse to split upon the clean lines needed for a civil war: this remains a hope rather than a definite fact.
The Founding Fathers of the new USA did not intend their Republic to be a democracy. But property qualifications inherited from England gave a vote to lots of ordinary people, since there was plenty of cheap land taken from Native Americans. And they blundered in having the President chosen by an Electoral College independent of Congress. It was meant to avoid corrupt influences. But since the Electoral College had no other function, it was soon made an instrument of Direct Democracy, with electors pledged to a particular candidate. This remains an oddity: what would happen if large numbers of pledged electors defected? It could easily become a key decision for their Supreme Court.
Things were tougher in the French Revolution. The American Revolution had succeeded, in part because the British High Command was slow to develop ruthless methods. The British Army’s withdrawal from Boston was a triumph for the American cause: had Boston first been burnt to the ground it would have been seen otherwise. Most of Europe’s Absolute Monarchs would have been that ruthless, but also liked and helped the American cause. It weakened the British hegemony that had been established in the Seven Years War. And they saw it as unthreatening: dominated by gentry with progressive ideas.[K] They had lived for centuries with the Swiss and Venetian Republics without much difficulty.
It was otherwise when the French Revolution failed to stabilise.
French revolutionary extremism began with hard-line aristocrats plotting to bring in foreign armies to reverse the early moderate reforms. King Louis 16th eventually agreed to this, and made a failed bid to flee to these friendly foreigners. When the middle ground collapsed, politics appeared that was extreme by the standards of the time. That was also not very competent in making the compromises necessary to hold power.
Jacobin ‘extremism’ included wanting to abolish slavery in the French colonies. It included giving the vote to all men regardless of property. (Only a few individuals scattered across the factions wanted to give votes or political rights to any women.) The fall of Robespierre ensured another 50 years of legalised slavery in the French Empire. Oddly, it was Britain that abolished and criminalised the slave trade, in 1807. The southern half of the USA got so attached to it that they endured enormous suffering in their Civil War rather than tolerate a President who hoped to see it gradually and peacefully abolished.
For most of human history – up until the middle of the 19th century – it was a Radical Rich who made breakthroughs into new ways of life. Sometimes but not always involving gross exploitation of those weaker than themselves.
Now let’s consider the USA’s 2016 election. The party machines wanted a contest between Hillary Clinton and Jeb Bush, or someone not too different from Jeb Bush.
The US Democrats successfully stifled the astonishing effort to choose Bernie Sanders. The US Republicans lost control of the voters whose prejudices they had been carefully nurturing since Nixon’s landslide victory in 1972. So we have Trump.
Multi-party systems evolved by accident in Britain, and were re-created in the USA against the wishes of most of the Founding Fathers.
A popular autocrat may be more democratic. The problem however is removing the autocrat if they cease to be popular. But it is foolish to assume that autocrats must be unpopular or that removing them is a good idea.
Copyright © Gwydion M. Williams
[A] This article was written for Think, the philosophical discussion magazine of the Philosophy Special Interest Group of Mensa, the high-intelligence society. It was written in answer to a question asked, ‘Problem 113’, but was so long it was published as the first article in issue number 185, Summer 2018.
[C] This sentence originally referenced ‘Problem 113’ – see Note A.
[K] Jay Winik’s The Great Upheaval illustrates this, showing parallels and then divergences in America, France and the Russia of Catherine the Great. |
Maths at Stoke Park Primary
We have recently reviewed and adapted how we teach mathematics so that all children are given the very best chance of achieving the new National Curriculum objectives for their year group.
Number Masters session
Each morning starts with a 15 minute number masters session which happens in small groups led by the teacher or TA. The purpose of the number masters session is to practise basic number skills so that children become automatic and fluent in recalling number facts. These facts can then be applied without conscious thought during longer calculation procedures and problem solving, and they help children to identify patterns and links when reasoning. This, combined with conceptual understanding, provides children with the tools to solve mathematical problems efficiently and effectively.
In addition to the number masters session, the children have a daily maths lesson. During the main lesson the children are taught the maths objectives for their year group from the National Curriculum.
We use a mastery approach when teaching mathematics. Mastery of maths means a deep, long-term, secure and adaptable understanding of the subject. Among the by-products of developing mastery, and to a degree part of the process, are a number of elements:
- fluency (rapid and accurate recall and application of facts and concepts)
- a growing confidence to reason mathematically
- the ability to apply maths to solve problems, to conjecture and to test hypotheses.
The CPA (concrete – pictorial – abstract) approach is a regular component of our maths lessons. The CPA approach, based on research by psychologist Jerome Bruner, suggests that there are three steps necessary for pupils to develop understanding of a mathematical concept. By linking learning experiences from concrete-to-pictorial-to-abstract levels of understanding, the teacher provides a graduated framework for students to make meaningful connections. Reinforcement is achieved by going back and forth between these representations.
Hinge questions are used during maths lessons to assess children’s understanding. A hinge question is a check for understanding at a ‘hinge-point’ in a lesson. This helps teachers make a hugely important decision in the classroom: whether to move on or to recap, who is ready to move on, and who requires further support. See the lesson structure table attached below for information on how the children are grouped following a hinge question.
Here is an example of a hinge question:
The incorrect responses are carefully selected to suggest one thought pattern or misconception. Misconceptions are identified and anticipated when lessons are being planned and are an explicit part of the teaching. Children who have answered the hinge question incorrectly are then supported in a group with the teacher or TA to address their misconception.
Structure of a typical maths lesson
Please click on the link below to see the typical maths lesson structure. This may vary as Teachers will use their professional judgement to determine the structure which will secure the best learning outcomes.
Times tables are taught regularly and systematically across the school. KS2 take part in a weekly club challenge. Their names are proudly displayed on a leader board if they can complete their times table grid within 10 minutes. Some children have even beaten Mr Simons’ time! Children in KS1 will be getting some practice for when they start KS2 by completing the 20, 30 and 40 clubs. For some examples of the different club challenges please see the links below. Why not print some off and have a practice at home?
We are following the National Curriculum for Mathematics. Below you will find attachments to the objectives for each year group.
We follow the White Rose Hub long and medium term planning. Please see the documents attached to see how we cover the different areas of mathematics through the year.
In line with the changes from the new curriculum, we have updated our calculation policy and written calculation policy guidance to support teachers and parents. Please see attachment below: |
Improve room acoustics
The acoustics of a room or other indoor space plays a crucial role in how easily and comfortably you can hear.
Acoustics have a particularly big impact on people with hearing loss, and can make it either possible or impossible to hear what is said.
The process of hearing, listening and understanding is sophisticated and involves the physical properties of the ear and a complex series of interactions in our brain.
Something called ‘auditory processing’ must occur, which means that your brain recognises and interprets the sounds heard so that it becomes meaningful information.
For those with a full hearing range, auditory processing is typically done subconsciously and easily – with the same ease with which most people breathe.
For those with hearing loss or another condition that makes processing of sound difficult, hearing and listening take effort and thought. In an environment with poor acoustics, it becomes much harder still.
When we talk about acoustics, we are talking about the qualities that determine how a room or other enclosed space reflects sound waves.
‘Good acoustics’ means that the space is reflecting sound waves in a way that allows distinct hearing. By contrast, ‘poor acoustics’ means that sound waves are bouncing around in a way that distort or degrade what is heard.
Sound is made when objects vibrate and it travels in waves, spreading outwards from the source of sound. It varies in loudness (intensity) and pitch (frequency).
Loudness (intensity) is measured in decibels (dB). Frequency (pitch) is measured in Hertz (Hz). All sounds are made up of different frequencies.
The three main elements that affect acoustics are:
- Noise levels
- Echo or reverberation
- Sound insulation
Why speech can be tricky to hear
Your ear picks up sounds and your brain turns this into meaningful information. Hearing loss impacts on the ease with which your ear can pick up sounds at different levels of loudness and frequency.
Speech is made up of low, mid and high frequency sounds. Consonants are higher pitched than vowels. Look at the chart below and you will see that they lie more to the right of the chart.
Image: Thanks to Ideas For Ears
Consonants are crucial for understanding words but they also tend to be spoken more softly than vowels (they lie higher on the chart in the lower decibel ranges). This means that consonants are not so easily heard and they are easily drowned out by other noise.
Think about simple words like, fish, cow, hut. The vowel is the dominant sound but it is the consonants that tell us what the word is.
Quality of listening
Speech (from a talker’s mouth or loudspeaker) travels as sound waves across the room to the listener’s ears. Some speech reaches the ears directly but some also bounces (reverberates) off the walls, ceiling, floor, and other surfaces of the room before reaching the ears.
It is much more difficult to understand speech if it reaches the ears as a reverberation as this creates an overlapping of sound that smears or blurs what you are trying to hear.
This smearing can occur when there is no background noise in a room, just a single person talking. When there is other background noise too, there will be smearing not only of the person talking but also of the other noises in the room.
Quality of listening will depend on:
- how clearly and loudly the speaker speaks
- the distance they are from the listener
- the way the sound waves travel across the space
- any background noise that masks or covers the speech
- the hearing ability of the listener
Why background noise can be the death knell
Image: Thanks to Ideas For Ears
People’s voices typically reach between 1 and 4 metres when speaking conversationally.
People with a full hearing range need to have the speaker’s voice 6dB (decibel) louder than the background noise in order to make sense of what’s being said.
For someone with hearing loss, this needs to be a 16 decibel increase or more. This is a big increase in volume.
Note: The decibel scale is logarithmic, not linear, and is not the same as percentage. Each 10dB increase means a tenfold increase in intensity: a sound 10dB louder is 10 times more intense, and a sound 20dB louder is 100 times more intense.
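This relationship can be illustrated with a short calculation. The snippet below is only a sketch of the arithmetic described above; the function name is invented for the example, and the 16dB figure is the speech-over-noise margin mentioned earlier.

```python
# Illustrative sketch: convert a difference in decibels into an intensity ratio.
def intensity_ratio(db_difference):
    """Return how many times more intense one sound is than another, given the dB gap."""
    return 10 ** (db_difference / 10)

print(intensity_ratio(10))  # 10.0   -> a 10dB gap means 10 times the intensity
print(intensity_ratio(20))  # 100.0  -> a 20dB gap means 100 times the intensity
print(intensity_ratio(16))  # ~39.8  -> the 16dB margin needed by someone with hearing loss
```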
Studies have shown tolerance for noise decreases with age. Older adults, therefore, find it increasingly hard to tune out background noise in order to hear the speech.
Encouraging people to speak louder so they are heard over background noise is therefore much less desirable and effective than removing the background noise and lowering the speech volume.
Hearing aids and acoustics
There have been fantastic leaps in the quality and sophistication of hearing aids over the last decade, especially the last few years.
Many models of hearing aids now offer programme settings to filter out background noise. This can make a dramatic improvement to the experience of noise. However, it doesn’t resolve the challenges noise brings.
Survey research suggests that background noise remains the biggest problem that people with hearing loss face when seeking to participate in and listen to conversation and discussion.
Reverberation or echo is also a big challenge for hearing aids. They pick up any distortion and amplify it, thereby making a bad situation worse.
Modern hearing aids are wonderful technological marvels but they do have their limitations.
Ways to reduce echo in a room
A room or space that is echo-y and/or has high levels of noise makes for a difficult, and potentially very uncomfortable, environment for people with hearing loss.
Reducing echo and noise is perfectly possible but will require investment by the venue manager/owner.
Soft furnishings can offer some level of improvement by helping to reduce the way sound reverberates or bounces around the room. For example, carpets, curtains and soft chairs.
Much more effective, however, are noise absorbent panels that are specifically designed to absorb sound.
Noise absorption panels come in all sorts of shapes, sizes and colours and can be a great looking design feature as well as having a functional purpose. They can be portable or fixed.
Portable ones can be brought out and moved into place during meetings or discussion as and when required. They can be clustered to create quiet zones, or located as appropriate within the room.
Fixed panels are permanently installed to the walls or ceilings in the room. Ceiling panels can often control noise better than wall panels but a mix of both might be required.
Webpage published: 2018 |
Halle | Leipzig Team
PI Johannes Krause
Sampling of a petrous bone in a clean room facility using a dentist drill.
Further steps such as DNA extraction, library preparation, indexing and amplification are performed to prepare samples for sequencing.
HistoGenes – Role of archaeogenetics in the project
All skeletal elements for genetic investigations are processed in a clean room facility to avoid contamination with modern DNA. Bone powder is obtained from inside the petrous bones and the pulp chamber of teeth using a dentist drill. Subsequently, DNA is extracted and prepared for Next-Generation sequencing.
Within recent years, methodological advances regarding the sequencing of genomic data and new approaches to model genetic components of different populations have allowed the recovery and analysis of ancient DNA (aDNA) from various types of remains. This is opening up new perspectives to study the human past. In order to trace population history and migration in the Carpathian Basin during the Early Medieval period, petrous bones or teeth from more than 6000 individuals are being sampled. These elements usually contain a high amount of human endogenous DNA, i.e. DNA that belonged to the individual studied rather than, for example, DNA of soil bacteria. In addition, teeth allow us to detect blood-borne pathogens and thus to obtain a comprehensive picture of disease prevalence and spread throughout time.
Genetic ancestry and population structure
Looking at the genetic data of humans from individual cemeteries allows us to investigate levels of close biological relatedness as well as broader patterns of genetic ancestry and population structure. To account for the fact that genetic differences in most of Europe were already much smaller within the first millennium CE compared to prehistoric times, a high number of samples is required. This also makes it possible to disentangle relatedness on a very fine scale, down to very low degrees of relatedness, which provides important insights into population and family structure. In addition, IBD (identity-by-descent) detection as well as shared SNPs (single nucleotide polymorphisms) across individuals are highly informative when aiming to find connections between individuals from different cemeteries and regions that span multiple centuries. We can thus study continuity and discontinuity on a population as well as community level.
In addition, all data is broadly screened for pathogens which provides valuable information on the impact of diseases and pandemics, such as the Justinianic plague on Early Medieval societies in Europe. A genetic screening is specifically useful to find pathogens, such as Yersinia pestis (causal agent of plague), that do not leave characteristic lesions on the skeleton and thus, can easily be dismissed by archaeological and anthropological investigations alone. The combination of pathogen analysis and complete human genomes might further provide clues to differing immunities in the various population groups.
Interpretation of the genetic findings from the large number of studied individuals in light of historical, archaeological, isotope and climatic data obtained by other researchers involved in this project allows us to generate a refined and complete picture of central eastern Europe during the Early Middle Ages.
Hypothesis on the Avar Origin and Genomic Evidence
Full study: doi:10.1016/j.cell.2022.03.007 |
NGC 6302, also known as the Bug Nebula or Butterfly Nebula, is a bipolar planetary nebula in the constellation Scorpius. The structure in the nebula is among the most complex ever observed in planetary nebulae. Due to its composition, when observed with a telescope, its shape reminds many of a butterfly. Continue reading for five things you may not have known about this nebula.
5. It’s Been Known Since 1888
As it is included in the New General Catalogue, this object has been known since at least 1888. The earliest known study of NGC 6302 is by Edward Emerson Barnard, who drew and described it in 1907. Since then it has been the focus of many works and displays many interesting characteristics worthy of study. Interest in recent years has shifted from discussions over the excitation method in the nebula (shock-excitation or photo-ionisation) to the properties of the large dust component.
4. What the Dust is Made Of
One of the most interesting characteristics of the dust detected in NGC 6302 is the existence of both oxygen-rich material (i.e. silicates) and carbon-rich material (i.e. poly-aromatic-hydrocarbons or PAHs). Stars are usually either O-rich or C-rich, the change from the former to the latter occurring late in the evolution of the star due to nuclear and chemical changes in the star’s atmosphere. NGC 6302 belongs to a group of objects where hydrocarbon molecules formed in an oxygen-rich environment.
3. Its Central Star is One of the Hottest Known
The central star, a white dwarf, is among the hottest stars known, and had escaped detection because of a combination of its high temperature (so that it radiates mainly in the ultraviolet), the dusty torus (which absorbs a large fraction of the light from the central regions, especially in the ultraviolet) and the bright background from the nebula. It was not seen in the first HST images (APoD 2004). But the improved resolution and sensitivity of the new Wide Field Camera 3 of the Hubble Space Telescope revealed the faint star at the center. The spectrum of NGC 6302 shows that its central star is one of the hottest stars in the galaxy, with a surface temperature in excess of 200,000 K, implying that the star from which it formed must have been very large.
2. Central Star Caused Butterfly Shape
The central star, a white dwarf, was only recently discovered, using the upgraded Wide Field Camera 3 on board the Hubble Space Telescope. The star has a current mass of around 0.64 solar masses. It is surrounded by a particularly dense equatorial disc composed of gas and dust. This dense disc is postulated to have caused the star’s outflows to form a bipolar structure similar to an hour-glass. This bipolar structure shows many interesting features seen in planetary nebulae such as ionization walls, knots and sharp edges to the lobes.
1. Dark Lane Runs Through the Center
NGC 6302 has a complex structure, which may be approximated as bipolar with two primary lobes, though there is evidence for a second pair of lobes that may have belonged to a previous phase of mass loss. A dark lane runs through the waist of the nebula obscuring the central star at all wavelengths. Observations of NGC 6302 suggest that there may be an orthogonal skirt (or chakram) similar to that found in Menzel 3. The nebula is orientated at an angle of 12.8° against the plane of the sky.
Glomerulonephritis is damage to the tiny filters inside your kidneys (the glomeruli). It's often caused by your immune system attacking healthy body tissue.
Glomerulonephritis doesn't usually cause any noticeable symptoms. It's more likely to be diagnosed when blood or urine tests are carried out for another reason.
Although mild cases of glomerulonephritis can be treated effectively, for some people the condition can lead to long-term kidney problems.
In severe cases of glomerulonephritis, you may see blood in your urine. However, this is usually noticed when a urine sample is tested.
Your urine may be frothy if it contains a large amount of protein.
Depending on your type of glomerulonephritis, other parts of your body can be affected and cause symptoms such as:
Many people with glomerulonephritis also have high blood pressure.
See your GP if you notice blood in your urine. This doesn't always mean you have glomerulonephritis, but the cause should be investigated.
If your GP suspects glomerulonephritis, they'll usually arrange:
If glomerulonephritis is confirmed, further blood tests may be needed to help determine the cause.
If your kidney problem needs to be investigated further, it may be recommended that you have:
Glomerulonephritis is often caused by a problem with your immune system. It's not clear exactly why this happens, although sometimes it's part of a condition such as systemic lupus erythematosus (SLE) or vasculitis.
In some cases, the immune system abnormalities are triggered by an infection, such as:
In most cases, glomerulonephritis doesn't run in families.
If you're diagnosed with an inherited type of glomerulonephritis, your doctor can advise you about the chances of someone else in your family being affected.
They may recommend screening, which can identify people who may be at increased risk of developing the condition.
Treatment for glomerulonephritis depends on the cause and severity of your condition. Mild cases may not need any treatment.
Treatment can be as simple as making changes to your diet, such as eating less salt to reduce the strain on your kidneys.
Medication to lower blood pressure, such as angiotensin-converting enzyme (ACE) inhibitors, is commonly prescribed because they help protect the kidneys.
If the condition is caused by a problem with your immune system, medication called immunosuppressants may be used.
Read about treating glomerulonephritis.
Although treatment for glomerulonephritis is effective in many cases, further problems can sometimes develop.
If you're diagnosed with glomerulonephritis, your doctor may prescribe medication to help lower your blood pressure, lower your cholesterol or protect against blood clots. |
Paracetamol (or acetaminophen) is a common analgesic, a drug that is used to relieve pain. It can also be used to reduce fever, and some kinds of headache. This makes it an antipyretic, something that reduces fevers. It is used in many drugs that treat the flu and colds.
The words acetaminophen and paracetamol both come from the names of the chemicals used in the compound: N-acetyl-para-aminophenol and para-acetyl-amino-phenol. Sometimes, it is shortened to APAP, for N-acetyl-para-aminophenol.
Harmon Northrop Morse was the first to make Paracetamol, in the year 1878. Drugs made with Paracetamol became common in the 1950s. Today, these drugs are some of the most used, together with those containing salicylic acid or Ibuprofen. In the year 1977, Paracetamol was put on the List of Essential Medicines of the WHO.
Safety and dosage
Paracetamol is considered safe for use. The drug is easily available without a prescription. People often take too much Paracetamol. Sometimes this is because people do not know how much they should take. The recommended dose may not work for some individuals. Other times it is because they are trying to commit suicide. Very often, a person's liver can be hurt if he or she takes too much Paracetamol. A dose of 150 milligrams for every kilogram of the person's weight (about 10 grams for most adults) will lead to permanent liver damage, and may cause the liver to fail. For people whose livers have already been damaged, such as alcoholics, and for those with a limited secretion of Paracetamol, this amount can be much smaller.
In England and Wales, about 30,000 people per year go to hospital after taking too much paracetamol (called paracetamol poisoning), and about 150 die of the poisoning. Since a law was passed saying that Paracetamol packets cannot be too large, fewer people have been committing suicide with Paracetamol. In Great Britain and the United States Paracetamol is the main reason for acute liver failure. About half of the cases are because of an 'unintentional overdose'.
Relative dating and absolute dating are the two basic approaches used to determine the age of rocks, fossils and artefacts. Relative dating places events and objects in sequence – using stratigraphy, for example, to say that one rock layer is older or younger than another – without giving an actual date. Absolute dating, sometimes called chronometric or calendar dating, assigns an approximate age or date range, most commonly obtained by radiometric methods such as dating layers of volcanic ash. The two approaches complement each other: relative dating establishes the order of events, while absolute dating is used to calibrate that order against a numerical timescale.
Definition of relative and absolute dating
Relative dating is a method of determining whether an event or object is older or younger than another, based on its position in a sequence such as rock strata. Absolute dating gives an actual age or date range, usually obtained by measuring radioactive decay (radiometric dating) or by other physical and chemical techniques. Because every measurement carries some uncertainty, the word ‘absolute’ can imply an unwarranted certainty of accuracy; in practice, geologists and archaeologists use both approaches together, refining relative sequences with absolute dates.
Relative dating definition
In relative dating, an object or event is simply placed within a chronological sequence, so we can say that something is older or younger than something else but not exactly how long ago it occurred. Absolute dating methods, such as radiometric dating of volcanic ash, provide an actual age estimate and are used alongside relative dating to refine the timescale.
UTF-8 (UCS Transformation Format 8) is the World Wide Web's most common character encoding. Each character is represented by one to four bytes. UTF-8 is backward-compatible with ASCII and can represent any standard Unicode character.
The first 128 UTF-8 characters precisely match the first 128 ASCII characters (numbered 0-127), meaning that existing ASCII text is already valid UTF-8. All other characters use two to four bytes. Each byte has some bits reserved for encoding purposes. Since non-ASCII characters require more than one byte for storage, they run the risk of being corrupted if the bytes are separated and not recombined. |
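A minimal sketch in Python shows the variable-length encoding in practice; the sample characters are arbitrary examples chosen for illustration.

```python
# Minimal sketch: UTF-8 uses one to four bytes per character,
# and plain ASCII text is already valid UTF-8.
for ch in ["A", "é", "€", "😀"]:
    encoded = ch.encode("utf-8")
    print(ch, len(encoded), encoded)
# "A" (an ASCII character) -> 1 byte
# "é"                      -> 2 bytes
# "€"                      -> 3 bytes
# "😀"                     -> 4 bytes

# Backward compatibility: bytes that are plain ASCII decode unchanged as UTF-8.
print(b"hello".decode("utf-8"))  # -> hello
```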
People living in the Fens knew how to make the best of their environment and to work in harmony with nature in a way that we can only envy today. As one great historian of Fenland, W H Wheeler, says, ‘so wild a country naturally reared up a people as wild as the fen, and many of the Fenmen were as destitute of all the comforts and amenities of civilised life as their isolated huts could make them. Their occupation consisted in dairying and haymaking, looking after the beasts and sheep which grazed in the fen in summer; and in winter, gaining a living by fishing or fowling.’ The most characteristic sound in Fenland was probably the croaking of the many frogs – they were known as ‘fen nightingales’.
The ‘drowned fens’ seemed a wasteland to some, but there were many ‘products’ available for harvesting, some of very great value. They included fish, especially eels, wild birds, peat for fuel, sedge and reed for thatching.
Attempts to drain the Fens, and to protect land from flooding, began as long ago as Roman times, and continued throughout the Middle Ages, but the ‘Great Draining‘ took place in the seventeenth century. King James I declared in 1620 that ‘the Honour of the Kingdom would not suffer the said Land to be absorbed to the Will of the Waters, nor let it keep Waste and unprofitable.’ He would himself be responsible for the reclamation of the fen lands. He invited Cornelius Vermuyden to England, initially to drain marshes in Essex: Vermuyden was a Dutchman, then only 26 years old. The great works of large-scale drainage of the mid-seventeenth century in Fenland that followed, like the Old and New Bedford Rivers and the Denver Sluice, are some of the largest man-made landscape features in England.
Fenland led the way in the use of a ‘new’ technology, the windmill. The use of windmills for grinding corn spread very rapidly from the last two or three decades of the twelfth century, and by the sixteenth century the power of wind was put to further use in pumping water out of the fields into the rivers. Victorian Fenland saw the development of the technology of steam drainage: as the levels of the peat shrank, more and more power was needed to lift excess water from the fields in to the rivers and drainage channels. As early as 1805, the agricultural writer Arthur Young noted: ‘The application of steam engines to the drainage of the fens, instead of windmills, is a desideratum that has been often mentioned, but none yet executed: when it is considered that the windmills have been known to remain idle for two months together, and at seasons when their work is most wanted, it must be evident that the power of steam could nowhere be employed with greater efficacy of profit.’
These drainage works allowed the fertile soil to be fully exploited, and by the nineteenth century Fenland was known as the bread basket of England because of the amount of corn that was grown. These changes were unpopular with many locals, who claimed that they had always made their living by gathering ‘reeds, fodder, thacks, turves, flaggs, hassocks, segg, fleggweed for flegeren, collors, mattweede for churches, chambers, beddes and many other fenn commodytes of greate use in both towne and countreye.’ This dynamic tension between old ways and new ideas is a key element in the character of the Fens. More recently, the fertile soil has been exploited in increasingly diverse ways, most notably the growth of the production of fruit and flowers.
Of course, nature could not be completely tamed. There were floods, one of the most notable being those of 1912. In March 1947, there was a crisis, caused by the thaw of a heavy fall of snow: the Barrier Bank between Over and Earith gave way on 17 March, followed by other breaches elsewhere.
Floods have become much rarer, but the forces of nature still have to be reckoned with. There was further flooding in 1976, especially in Lynn. In December 2013, there were severe floods in Wisbech and Boston: In the latter town about 300 homes were under water and St Botolph’s church was flooded, causing a million pounds worth of damage.
Fenland was clearly an area with plenty of wind, as the windmills and pumps of previous ages demonstrated. This is something that can be exploited in the twenty-first century. Wind turbines of various sizes have already been erected across the area, and the Fens will increasingly supply the electricity used in England and Wales.
By Frank Meeres |
For years, scientists have considered the possibility of exogenesis, the idea that life arrived on Earth from another planet, and not just the building blocks of life, but organisms that were ready to rock and roll when they arrived. It’s a Rube Goldberg scenario, however, dependent on several successful steps. First, life has to evolve on an alien planet. Then it must be blasted into space on a rock, probably from a large impact. Assuming it survives a long journey through harsh conditions—and makes its way into our neighborhood—life then has to resist fiery atmospheric entry and a brutal landing before trying to make a new home for itself.

Five projects. I wonder if the ghost of Francis Crick is their patron saint? (The double helix guy believed, controversially, that life must have come from outer space.)
Origin of life: A meatier theory? Or just another theory?
Origin of life: There must be life out there vs. there can't be life out there
Origin of life: Oldest Earth rocks may show signs of life, in which case ...
Origin of life: Positive evidence of intelligent design?
Origin of life: But is being greedy enough?
Origin of life: Ah, that "just so happens" intermediate series of chemical steps
Why should the search for Darwin's "warm little puddle" be publicly funded? (Note: The image of Crick is from Wikimedia Commons.)
This KS3 Science quiz takes a look at reproduction. If egg and sperm are mentioned in the same breath, chances are the conversation is about reproduction. Reproduction allows all living things to produce more of their kind. The reproductive system is a system of organs within an organism which work together for the purpose of reproduction. Humans are mammals and therefore have similar reproductive organs to other mammals like cats, elephants and polar bears. The way the reproductive system of each species works is slightly different to all other species, so at KS3, we focus on how our own reproductive system functions.
The main organs of the female human reproductive system are the ovaries, oviducts (fallopian tubes), uterus (commonly called the womb) and the vagina.
Once a month, changes in the hormones in a woman's body cause an egg to be released from the ovaries. At the same time, the lining of the uterus thickens. When the egg reaches the uterus, if it has been fertilised, it will embed in the uterus lining and develop into a baby. If it is not fertilised, it will pass out of the body through the vagina, together with the lining of the uterus, and the whole cycle begins again.
Using uncommon and appropriate vocabulary is a crucial part of scoring well in both the writing and speaking components of the PTE exam. However, it's not just a matter of knowing a word’s meaning and pronunciation, you also need to know which words to use and how to use them effectively in different situations. In this article, you will learn about the right words to include in your vocabulary bank and how to learn new words for the PTE Speaking test.
How To Choose The Correct Words?
The English language consists of thousands and thousands of words, and it’s impossible to memorise all of them during your PTE preparation time. This is why it’s important to focus on two categories of words.
Versatile words are words that can be used across a variety of topics and adapted to fit different contexts.
Topic Specific PTE Words
The PTE Academic exam focuses on a few common core topics. It is of great benefit to candidates to compile a list of words that are categorised according to the common topics.
The common speaking topics include:
How To Source PTE Vocabulary Word Lists?
You should source words through active reading and listening instead of just memorising lists of words. Doing this will give you a better understanding of how a word should be used. Try to work out what new words mean from the context.
Once you have done this, you can look up the word in a dictionary. I think it’s best to use online dictionaries like Collins or Longmans since these dictionaries offer a wide variety of useful phrases and synonyms related to each word.
Practise using new words repeatedly until you can use them in a natural way, and then record the new words and phrases in a way that is easy to revise.
You should include more than just the word and meaning. Apart from just recording the word, you should note:
Make sure that you jot down a few sentences that include the word or phrase to show its meaning and in what situation you could use it. Remember to practice its pronunciation as well.
If you are considering enrolling in a test preparation course, make sure that you choose one that has sufficient speaking practice time with other students, under the guided supervision of a PTE professional teacher. PTE lessons like these will help you to practice using these newly learned words and give you a better idea about to use them confidently. |
Activity Plan 5-6: Kindness Puppets
Create puppets that encourage children to talk about feelings and solve problems in peaceful ways.
- Grades: PreK–K
Ready-To-Use Teaching Ideas: Problem Solving/Art
- chart paper and a marker
- old socks
- fabric scraps
- buttons, pom-poms, yarn, felt
- a large cardboard box or carton
Objective: Children will reflect on feelings and design puppets to use in discussions, cooperative problem solving, and conflict resolution.
1 Ask children to think about all the different ways people feel and all the different ways they feel, too: happy, sad, angry, cranky, excited, peaceful, and so on.
2 Talk about how the same person can feel many different ways at many different times.
3 Explain that they are going to make puppets – "feelings puppets" – and that they can decide whether their puppet will represent one feeling or many feelings.
4 Provide children with a variety of materials to design their puppets: socks, paper bags, fabric, pom-poms, yarn, felt. Ask the children to name their puppets and help them label each one with the feeling or feelings they choose.
5 Invite children to share their finished puppets with the group. Talk about how these puppets can be used to help people talk about and sort out feelings. For instance, if someone is feeling lonely or afraid, the appropriate puppet(s) can be brought out. If there's an argument, children might enlist the help of a puppet or two to listen to the problem and help them come up with solutions. Work together to decorate the box in which you will store the puppets. Ask children to suggest a name for the special box - Peace Trunk, Puppets' Place, and so on.
Remember: Though some children may be very comfortable playing with puppets, others may not be as used to the activity. You may need to encourage children to use puppets as they explore their feelings by modeling situations and also bringing out the puppets yourself. For instance, if you're going to the dentist and feel a bit anxious, you could share this with your group and then bring out a few of the appropriate puppets and "puppet role-play" the event, leading the puppets in a conversation.
This activity originally appeared in the August, 1999 issue of Early Childhood Today. |
A recently released study conducted by the University of Surrey, supported by the European Research Council, shows that there could be hundreds of black holes in a globular cluster, rewriting theories of how black holes are formed.
Miklos Peuten, of the University of Surrey and lead author, explains,
Due to their nature, black holes are impossible to see with a telescope, because no photons can escape. In order to find them we look for their gravitational effect on their surroundings. Using observations and simulations we are able to spot the distinctive clues to their whereabouts and therefore effectively “see” the unseeable.
Using advanced computer simulations, the researchers were able to map a globular cluster named NGC6101. A globular cluster is a spherical gathering of stars, which orbit around a galactic center. This particular cluster was selected for further study because of its recently-discovered, distinctive composition, showing that it is more unique than other clusters. Cluster NGC6101 is younger than the ages of individual stars and appears inflated, with a core under-populated by observable stars.
The NGC6101 globular star cluster is located about 47,600 light-years from the Sun in the constellation Apus. It is 36,500 light-years from the galactic center. The cluster was discovered on June 1, 1826 by Scottish astronomer James Dunlop.
Though astrophysicists first found black holes in globular clusters in 2013, this new study reveals that NGC6101 contains hundreds of black holes, until now thought impossible. Previously, it was thought that these black holes would almost all have been expelled from the parent cluster due to the effects of a supernova explosion during the death of a star. The black holes themselves are a few times the mass of the Sun and form in the gravitational collapse of massive dying stars.
Professor Mark Gieles of the University of Surrey and co-author says,
Our work is intended to help answer fundamental questions related to dynamics of stars and black holes and the recently observed gravitational waves. These are emitted when two black holes merge, the cores of some globular clusters may be where black hole mergers take place.
These systems may be the cradle of gravitational wave emission – “ripples in the fabric of space time.”
Using computer simulation, the researchers recreated every individual star and black hole in the cluster and their behavior. The simulation showed how NGC6101 evolved over its whole lifetime of thirteen billion years, and revealed the effects of large numbers of black holes on the visible stars.
Peuten sums up,
This research is exciting as we were able to theoretically observe the spectacle of an entire population of black holes using computer simulations. The results show that globular clusters like NGC6101, which were always considered boring, are in fact the most interesting ones, possibly each harboring hundreds of black holes. This will help us to find more black holes in other globular clusters in the Universe. |
Principles of Learning Theory in Equitation
Does your training stand the test of science? The following 8 principles were originally defined in the peer-reviewed scientific literature (McGreevy and McLean, 2007 – The roles of learning theory and ethology in equitation. Journal of Veterinary Behavior: Clinical Applications and Research, Volume 2, 108-118). The application of these principles is not restricted to any single method of horse-training, and we do not expect that just one system will emerge. There are many possible systems of optimal horse-training that adhere to all of these principles.
FIRST PRINCIPLES IN HORSE-TRAINING
1. Understand and use learning theory appropriately
Learning theory explains positive and negative reinforcement and how they work in establishing habitual responses to light, clear signals. (Note that “positive” and “negative” when applied to reinforcement are not value judgements, as in “good” or “bad”, but arithmetical descriptions of whether the behaviour is reinforced by having something added or something taken away, e.g., pressure. For example, when the horse responds to a turn signal and the rein pressure is immediately released, negative reinforcement has been applied.)
It is critical in the training context that the horse’s responses are correctly reinforced and that the animal is not subjected to continuous or relentless pressure. Prompt and correct reinforcement makes it more likely that the horse will respond in the same way in future. Learning theory explains how classical conditioning and habituation can be correctly used in horse-training.
2. To avoid confusion, train signals that are easy to discriminate
There are many responses required in horse-training systems but only a limited number of areas on the horse’s body to which unique signals can be delivered. From the horse’s viewpoint, overlapping signal sites can be very confusing, so it is essential that signals are applied consistently in areas that are as isolated and separate from one another as possible.
3. Train and shape responses one-at-a-time (again, to avoid confusion)
It is a prerequisite for effective learning that responses are trained one-at-a-time.
To do this, each response must be broken down into its smallest possible components and then put together in a process called “shaping”.
4. Train only one response per signal
To avoid confusing the horse, it is essential that each signal elicits just one response. (However, there is no problem with a particular response being elicited by more than one signal.) Sometimes a response may be complex and consist of several trained elements. These should be shaped (or built up) progressively. For example, the “go forward” response is expected to include an immediate reaction to a light signal, a consistent rhythm as the animal moves in a straight line and with a particular head carriage. Each of these components should be added progressively within the whole learned response to a “go forward” signal.
5. For a habit to form effectively, a learned response must be an exact copy of the ones before
For clarity, a complete sequence of responses must be offered by the horse within a consistent structure (e.g., transitions should be made within a defined number of footfalls). Habit formation applies to transitions in which the number of footfalls must be the same for each transition and this must be learned.
6. Train persistence of responses (self-carriage)
It is a fundamental characteristic of ethical training systems that, once each response is elicited, the animal should maintain the behaviour. The horse should not be subjected to continuing signals from leg (spur) or rein pressure.
7. Avoid and dissociate flight responses (because they resist extinction and trigger fear problems)
When animals experience fear, all characteristics of the environment at the time (including any humans present) may become associated with the fear. It is well-known that fear responses do not fade as other responses do and that fearful animals tend not to trial new learned responses. It is essential to avoid causing fear during training.
8. Benchmark relaxation (to ensure the absence of conflict)
Relaxation during training must be a top priority, so when conflict behaviours are observed in the horse, we must carefully examine and modify our training methods so that these behaviours are minimised and ultimately avoided. To recognise the importance of calmness in enabling effective learning and ethical training, any restraining equipment, such as nosebands, should be loose enough to allow conflict behaviours to be recognised and dealt with as they emerge. |
General Strike (May 3, 1926 – May 13, 1926)
It was called by the General Council of the Trades Union Congress (TUC) in an unsuccessful attempt to force the British government to act to prevent wage reduction and worsening conditions for 1.2 million locked-out coal miners. Some 1.7 million workers went out, especially in transport and heavy industry. The government was prepared and enlisted middle class volunteers to maintain essential services. There was little violence and the TUC gave up in defeat. In the long run, there was little impact on trade union activity or industrial relations.
Introduction to ANet
This document is an introduction to ANet, the Anonymous Distributed Networking Protocol, from the user's point of view. It explains the theory which forms the basis of the ANet project, what ANet is, and how ANet can be used.
What is a Server?
When you use a Web browser (Microsoft Internet Explorer or Netscape, for example), the pages and images that you see are retrieved from computers that are called "servers". A server is a computer that allows other computers to retrieve data from it, in this case Web pages.
What is a Client?
On your side, you mostly retrieve data from the server when you browse the Web. Your computer is thus a "client" in the communication. Usually several clients can use the same server at the same time. The different clients cannot exchange data directly with each other; they can only exchange data with the server.
Here's an example. Let's look at the server sourceforge.net. As you can see in Figure 1, several computers, each with a Web browser, can retrieve the Web pages stored on the server at the same time. The clients are not connected to each other, but each client can use the "Discussion Boards" available at sourceforge.net to store some information which can be read, through the server, by other clients.
Another example is AIM (America Online Instant Messenger). If user Bob wants to send a message to user Alice, the message will go from Bob, to the AIM server, to Alice. The advantage of doing this, instead of Bob sending the message directly to Alice, is that Alice could be using any computer to receive the message, as long as Alice is connected to the AIM server with the same user name. Bob only has to know Alice's user name, without knowing where Alice is, and Bob will be able to send a message to Alice whenever Alice is connected to the AIM server.
What happens if you want to send data to more than one computer?
Now, let's say that you want to send a message to a group of 10 people. Obviously, you can send the message to the server which, in turn, will send the message to each person in the group. But is there another way?
How is a connection between a client and a server made?
Think about the Internet as a whole. Your computer, when it is connected to the Internet, can communicate with almost any other computer on the Internet. But how does the Internet work, anyway? Is it some kind of big server?
Obviously, the Internet cannot be a single, big computer with a direct connection to every computer on the planet. Several connections can be shared over a single but faster connection, a bit like highways. If you look at roads, you can easily see that there are many roads, interconnected with each other, and some roads are faster than others.
Computers are interconnected in a similar way to make the Internet. They produce, as a whole, a Network.
(The Internet is, actually, a "Network of Networks", hence the name Internet)
If you look at Figure 2, you will see an example of a Network. Each circle (node) represents a computer, and each line represents a "connection" between two computers. Two computers can exchange data with each other only if there is a connection between them. Thus, A and Z cannot exchange data directly with each other.
Different ways to send data from A to Z
But what if A wanted to send a message to Z? The message could take a path through B and F, or through E and H, or through many other possible paths.
Or it could do what we call a "broadcast": A attaches the name "Z" to the message as the destination, A sends the message to B and, in turn, everyone that receives a copy of that message sends a copy of it to all the other computers they are connected to, until everyone has received it. This can be visualized as some kind of chain reaction, or "ripple effect". Then, once all computers have a copy of the message, each computer looks at the destination name attached to the message (Z) and discards it if the computer's name does not match the destination name. Obviously, this is much slower than taking a direct path, since several copies are made, but it can have several advantages, as you will see later.
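To make the broadcast idea concrete, here is a small sketch in C of that "ripple effect". Everything in it (the node structure, the names, the six-node network) is invented for illustration and is not part of ANet itself; it only shows the core rule: forward a copy to every neighbour, but only the first time you see the message.

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define MAX_NODES 8
#define MAX_NEIGHBOURS 4

/* A toy in-memory network: node names and who is connected to whom. */
typedef struct {
    char name;
    char neighbours[MAX_NEIGHBOURS];
    int neighbour_count;
    bool seen;               /* has this node already received the message? */
} Node;

static Node nodes[MAX_NODES];
static int node_count;

static Node *find(char name) {
    for (int i = 0; i < node_count; i++)
        if (nodes[i].name == name) return &nodes[i];
    return NULL;
}

/* Flood a message through the network: every node copies it to all of its
 * neighbours, but only the first time it sees it (otherwise the "ripple"
 * would never stop). Each node keeps the message only if it is the destination. */
static void broadcast(char from, char destination, const char *message) {
    Node *n = find(from);
    if (n == NULL || n->seen) return;
    n->seen = true;
    if (n->name == destination)
        printf("Node %c accepts message: %s\n", n->name, message);
    for (int i = 0; i < n->neighbour_count; i++)
        broadcast(n->neighbours[i], destination, message);
}

static void add_node(char name, const char *neighbours) {
    Node *n = &nodes[node_count++];
    n->name = name;
    n->neighbour_count = (int) strlen(neighbours);
    memcpy(n->neighbours, neighbours, n->neighbour_count);
    n->seen = false;
}

int main(void) {
    /* A small network loosely in the spirit of Figure 2. */
    add_node('A', "BE");
    add_node('B', "AF");
    add_node('E', "AH");
    add_node('F', "BZ");
    add_node('H', "EZ");
    add_node('Z', "FH");
    broadcast('A', 'Z', "hello from A");
    return 0;
}
```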
As you can now see, several different approaches can be used to transfer data from one computer to another in a Network. But which computer makes the decision?
In the previous example, A could make the whole decision. If a path has to be taken, the complete path information is attached to the message. Alternatively, A could tell B: "I want you to send this message to Z". Then B will do its best to bring the data closer to Z. As you can see, A has no control over what happens to the message once it is in B's hands. But we have just split up the problem of sending the message from A to Z into smaller, easier ones.
Here, we can say that the "logic" used to send some message from a point to another is duplicated across the different computers. The logic is now Distributed. This is why this kind of network is called a Distributed Network.
One particularity of Distributed Networks is that since the logic is "split up" across different computers, it is more difficult to know what happens in the network as a whole. For example, if A forgets to write "This message came from A" in the message, it will be very difficult for Z to know where the message came from. Z would need to ask F: "Where did this come from?", and so on until it is traced back to the source. And that method doesn't work if one computer says "I forgot", which makes everything harder.
Another particularity of Distributed Networks is that if one computer cannot be accessed anymore, the network as a whole will still be able to transport messages between computers. For example, with AIM, if the AIM server cannot be accessed anymore, then no one can use AIM to exchange messages. The same goes for Web sites: if the Web server cannot be accessed anymore (is "down"), then no one will be able to browse the Web site anymore. But with a Distributed Network, the service (instant messaging, Web browsing) remains available as a whole, even if the information maintained by the computer that is "down" cannot be accessed anymore. For example, if F is "down" in Figure 2, then A can still send messages to Z through the path A-E-H-Z, even if no messages can be sent to F anymore.
The goal of the ANet project is to help developers to implement and use Distributed Networks. This is because Distributed Networks share some common ideas and ANet tries to implement these ideas.
ANet can be seen as a canvas: all the ideas about Distributed Networks are already there, you just need to "fill in the blanks". Thus, ANet should be easy to use and very flexible.
Flexibility and ease of use are the major advantages ANet has over any other protocol for Distributed Networking that currently exists. If it's a distributed network, it can be done in ANet.
The term "Anonymous", which is the "A" in ANet, represents that flexibility, or more precisely, its "lack of assumptions" about how it will be used by network developers. This "Anonymity" is characteristic of some Distributed Networks; in fact, all "Anonymous Networks" must be "Distributed Networks". More detailed information about this is in the "Developer Introduction to ANet".
Why Do We Need ANet?
Since ANet can implement any Distributed Network, ANet is useful whenever we need to implement a Distributed Network. So, why do we need a Distributed Network? Why is it so useful?
The Network, as a means of data transportation, remains available even if some of the computers that make up the network stop working.
Data sent in the Network can be easily sent to many computers in the Network.
The topology of the Network can be changed, and this change is transparent to most of the applications that use the Network.
How ANet Works
Client VS. Daemon
To become part of a Distributed Network, you first need to install the ANet 'daemon'. It is a program that will run in the background, without taking any space on your screen.
The settings of the ANet Daemon can be changed by editing an XML file. But, since you might want to use the Distributed Network yourself, you can also use an ANet Client to change the settings in the Daemon.
An ANet Client is an application that communicates with the Daemon to configure and monitor the Daemon, or most of the time, to receive and send data to the Distributed Network.
Let's say that there is one Distributed Network for instant messaging (similar to ICQ) and another one for discussion groups, and you want your computer to participate in both of these networks. What can you do?
ANet works with the idea of "Clusters". A cluster is a Distributed Network that shares the same "rules". The network for instant messaging might not allow files to be transferred, while the discussion group network might allow it. A Cluster might have some special rules, such as a required connection speed or geographical proximity (the average round-trip time), and so on.
Also, ANet uses what is called a "Cluster Group". A cluster group is just several clusters grouped together because they are alike, yet distinct. As an example, there are several instant messaging protocols (AIM, Yahoo, Jabber, ICQ, MSN...). They are incompatible with each other, but you can use them all at the same time, so you can be reached through those various services.
Cluster groups are usually made to group together clusters that allow the same service. A "service" is some kind of application you can use on a Distributed Network (chat, instant messaging, discussion groups, file sharing...). Not all services are allowed or supported on all networks, but several networks might support the same service.
As a result, the clients send or receive data from a cluster group. You want to use a chat client for ANet? You find one or more clusters that allow the chat service, you make a cluster group called "Chat" that groups together those clusters, and you tell the client to use the cluster group "Chat".
There is still another way to use ANet without having to install any ANet Daemon or Client at all.
Remember, ANet is used to implement a Distributed Network. But you don't have to be part of the network to use it. Likewise, your personal computer at home most likely uses the Internet without being part of its structure.
This approach is very similar to what you already know about "using the internet". You launch an application and somehow, it connects "somewhere".
The development of ANet has just started. As a result, ANet is not available in a stable form yet. Check out the ANet web site (http://anet.sourceforge.net/) often as it is regularly updated.
So this concludes the introduction to ANet, the Anonymous Distributed Networking Protocol. If you have any questions about this document or about ANet in general, feel free to contact me (Benad). You can view my contact information on the "Contact" page.
Recently, I've been looking into how character encoding and locales work on Linux, and I thought it might be worthwhile to write down my findings; partly so that I can look them up again later, and partly so that people can correct all the things I've got wrong.
To begin with, let's define some terminology:
- Character set: a set of symbols which can be used together. This defines the symbols and their semantics, but not how they're encoded in memory. For example: Unicode. (Update: As noted in the comments, the character set doesn't define the appearance of symbols; this is left up to the fonts.)
- Character encoding: a mapping from a character set to an representation of the characters in memory. For example: UTF-8 is one encoding of the Unicode character set.
- Nul byte: a single byte which has a value of zero. Typically represented as the C escape sequence ‘\0’.
- NULL character: the Unicode NULL character (U+0000) in the relevant encoding. In UTF-8, this is just a single nul byte. In UTF-16, however, it's a sequence of two nul bytes.
Now, the problem: if I'm writing a (command line) C program, how do strings get from the command line to the program, and how do strings get from the program to the terminal? More concretely, what actually happens with the arguments passed to main() and the strings we hand to printf()?

Let's consider the input direction first. When the main() function of a C program is called, it's passed an array of pointers to char arrays, i.e. strings. These strings can be arbitrary byte sequences (for example, file names), but are generally intended/assumed to be encoded in the user's environment's character encoding. This is set using the LC_ALL, LC_CTYPE or LANG environment variables. These variables specify the user's locale which (among other things) specifies the character encoding they use.
So the program receives as input a series of strings which are in an arbitrary encoding. This means that all programs have to be able to handle all possible character encodings, right? Wrong. A standard solution to this already exists in the form of libiconv. iconv() will convert between any two character encodings known to the system, so we can use it to convert from the user's environment encoding to, for example, UTF-8. How do we find out the user's environment encoding without parsing environment variables ourselves? We use setlocale().

setlocale() parses the LC_ALL, LC_CTYPE and LANG environment variables (in that order of precedence) to determine the user's locale, and hence their character encoding. It then stores this locale, which will affect the behaviour of various C runtime functions. For example, it will change the formatting of numbers outputted by printf() to use the locale's decimal separator. Just calling setlocale() doesn't have any effect on character encodings, though. It won't, for example, cause printf() to magically convert strings to the user's environment encoding. More on this later.
nl_langinfo() is one function affected by setlocale(). When called with the CODESET type, it will return a string identifying the character encoding set in the user's environment. This can then be passed to iconv_open(), and we can use iconv() to convert strings from argv to our internal character encoding (which will typically be UTF-8).
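Putting those pieces together, here's a minimal sketch (assuming a POSIX-ish system with iconv available) of converting command line arguments from the user's environment encoding to UTF-8. Error handling is kept to a minimum, and printing the converted strings back out with printf() is just for illustration; as discussed below, that step really needs the reverse conversion.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <locale.h>
#include <langinfo.h>
#include <iconv.h>

int main(int argc, char *argv[])
{
    /* Adopt the user's locale so nl_langinfo() reports their encoding. */
    setlocale(LC_ALL, "");
    const char *codeset = nl_langinfo(CODESET);   /* e.g. "UTF-8" or "ISO-8859-1" */

    /* Convert from the environment encoding to our internal UTF-8. */
    iconv_t cd = iconv_open("UTF-8", codeset);
    if (cd == (iconv_t) -1) {
        perror("iconv_open");
        return EXIT_FAILURE;
    }

    for (int i = 1; i < argc; i++) {
        char *in = argv[i];
        size_t in_left = strlen(in);
        size_t out_size = in_left * 4 + 1;        /* generous worst case for UTF-8 */
        char *out_buf = malloc(out_size);
        char *out = out_buf;
        size_t out_left = out_size - 1;

        if (iconv(cd, &in, &in_left, &out, &out_left) == (size_t) -1) {
            perror("iconv");
        } else {
            *out = '\0';
            printf("argv[%d] as UTF-8: %s\n", i, out_buf);  /* illustration only */
        }
        free(out_buf);
    }

    iconv_close(cd);
    return EXIT_SUCCESS;
}
```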
At this point, it's worth noting that most people don't need to care about any of this. If using a library such as GLib – and more specifically, using its GOption command line parsing functionality – all this character encoding conversion is done automatically, and the strings it returns to you are guaranteed to be UTF-8 unless otherwise specified.
So we now have our input converted to UTF-8, our program can go ahead and do whatever processing it likes on it, safe in the knowledge that the character encoding is well defined and, for example, there aren't any unexpected embedded nul bytes in the strings. (This could happen if, for example, the user's environment character encoding was UTF-16; although this is really unlikely and might not even be possible on Linux — but that's a musing for another blog post).
Having processed the input and produced some output (which we'll assume is in UTF-8, for simplicity), many programs would just printf() the output and be done with it. printf() knows about character encodings, right? Wrong. printf() outputs exactly the bytes which are passed to its format parameter (ignoring all the fancy conversion specifier expansion), so this will only work if the program's internal character encoding is equal to the user's environment character encoding, for the characters being outputted. In many cases, the output of programs is just ASCII, so programs get away with just using printf() because most character encodings are supersets of ASCII. In general, however, more work is required to do things properly.
We need to convert from UTF-8 to the user's environment encoding so that what appears in their terminal is correct. We could just use iconv() again, but that would be boring. Instead, we should be able to use gettext(). This means we get translation support as well, which is always good.

gettext() takes in a msgid string and returns a translated version in the user's locale, if possible. Since these translations are done using message catalogues which may be in a completely different character encoding to the user's environment or the program's internal character encoding (UTF-8), gettext() helpfully converts from the message catalogue encoding to the user's environment encoding (the one returned by nl_langinfo(), discussed above). Great!

But what if no translation exists for a given string? gettext() returns the msgid string, unmodified and unconverted. This means that translatable string literals in our program need to magically be written in the user's environment encoding…and we're back to where we were before we introduced gettext().
I see three solutions to this:
- The gettext() solution: declare that all msgid strings should be in US-ASCII, and thus not use any Unicode characters. This works, provided we make the (reasonable) assumption that the user's environment encoding is a superset of ASCII. This requires that if a program wants to use Unicode characters in its translatable strings, it has to provide an en-US message catalogue to translate the American English msgid strings to American English (with Unicode). Not ideal.
- The gettext()++ solution: declare that all msgid strings should be in UTF-8, and assume that anybody who's running without message catalogues is using UTF-8 as their environment encoding (this is a big assumption). Also not ideal, but a lot less work.
- Tell gettext() not to return any strings in the user's environment encoding, but to return them all in UTF-8 instead (using bind_textdomain_codeset()), and use UTF-8 for the msgid strings. The program can then pass these translated (and untranslated) strings through iconv() as it did with the input, converting from UTF-8 to the user's environment encoding. More effort, but this should work properly.
An additional complication is that of combining translatable printf() format strings with UTF-8 string output from the program. Since printf() isn't encoding-aware, this requires that both the format string and the parameters are in the same encoding (or we get into a horrible mess with output strings which have substrings encoded in different ways). In this case, since our program output is in UTF-8, we definitely want to go with option 3 from above, and have gettext() return all translated messages in UTF-8. This also means we get to use UTF-8 in msgid strings. Unfortunately, it means that we now can't use printf() directly, and instead have to sprintf() to a string, use iconv() to convert that string from UTF-8 to the user's environment encoding, and then printf() it. Whew.
Here's a diagram which hopefully makes some of the journey clearer:
So what does this mean for you? As noted above, in most cases it will mean nothing. Libraries such as GLib should take care of all of this for you, and the world will be a lovely place with ponies (U+1F3A0) and cats (U+1F431) everywhere. Still, I wanted to get this clear in my head, and hopefully it's useful to people who can't make use of libraries like GLib (for whatever reason).
Exploring exactly what GLib does is a matter for another time. Similarly, exploring how Windows does things is also best left to a later post (hint: Windows does things completely differently to Linux and other Unices, and I'm not sure it's for the better). |
This is a series of lessons on Carnival of the Animals by Camille Saint-Saëns, and is the culmination of a science unit on animals. During the animal unit, students learned about the different ways animals move. As an extension to the concept of how animals move, they were introduced to the book that accompanies the music of Carnival of the Animals. Each day we read and listened to one selection from the book and CD. We discussed various musical elements such as dynamics, tempo, and orchestration. Following a deep listening activity, the children drew a picture to go along with the animal's music. The book contained some of the high frequency words that Kindergartners are required to learn. At the end of the unit the students voted on their favorite selection from the Carnival of the Animals. We graphed the results. Then the children drew a picture of their favorite animal. They dictated a sentence or two about the choice they made, including something about the dynamics and tempo of the music. For a homework assignment, the students took their books home along with a copy of the music CD. Parents were asked to listen with their children and do a homework activity based on their listening experience. |
This file is part of a program based on the Bio 4835 Biostatistics class, and on the following text:
Daniel, W. W. 1999. Biostatistics: a foundation for analysis in the health sciences.
The file follows this text very closely and readers are encouraged to consult the text for further information.
Categorical data analysis deals with the statistical study of discrete data that can be organized into categories. Biologists are always concerned with categorization and classification of things. It is the basis of biological taxonomy.
In categorical data analysis, the data fall into discrete categories and are not continuous. An example would be the case of an outbreak of a disease. Among a sample of the population, some people might have the disease while others might not.
In studying categorical data analysis, the data are generally organized into a contingency table. The contingency table permits the calculation of proportions and other information that can be obtained from the data. The χ² (chi-square) distribution is used in categorical data analysis because the data consist of categories and proportions.
Structure of contingency tables
The basic structure of a 2 by 2 contingency table consists of four cells arranged in two columns and two rows as shown below.
The 2 X 2 contingency table.
These cells are generally labeled with letters A through D. When doing calculations with these tables, we often add the columns and rows. The sums of the rows are
R1 = A + B
R2 = C + D
Similarly, the sums of the columns are:
C1 = A + C
C2 = B + D
The overall total, N, is found by adding R1 + R2 or C1 + C2 as shown below.
Labeled 2 X 2 contingency table.
Using a contingency table as a comparison table
Contingency tables can be used for comparison of outcomes of laboratory tests. In Medical Technology, tests are routinely performed on patients. The patient may have a certain disease or they may not. The test may give a positive result or it may give a negative result. These are four discrete outcomes that can be shown on a contingency table, such as the one below.
Comparison of outcomes in laboratory tests.
The false positive result occurs when the test gives a positive result but the patient does not have the disease. The false negative result is when the test is negative but the patient really has the disease.
Relative risk is a ratio of two probabilities. The first is the probability, P(E), of an event occurring in the presence of the risk factor and the second is the probability of the same event, P(E’), occurring in the absence of the risk factor. Relative risk is often used in the reporting of information on the occurrence of disease.
When used in reporting disease, relative risk is the ratio of the occurrence of the disease among those exposed to the risk factor and the occurrence of the disease among those not exposed to the risk factor. In order to determine these probabilities, the 2 X 2 contingency table is used as shown below.
Contingency table for relative risk study.
In this contingency table, Row 1 indicates those who were exposed to the risk factor and Row 2 indicates those who were not exposed to the risk factor. In each case, some got the disease (Column 1) while others did not get the disease (Column 2).
In Row 1, those exposed to the risk factor are considered as a “success.” In this group there is an absolute risk of getting the disease which is P(E) = A/R1. P(E) is the probability of the disease occurring in this exposed group and represents the sample proportion of those exposed to the risk factor.
Similarly, in Row 2, those not exposed to the risk factor are considered as a “failure.” In this group, the absolute risk of getting the disease is P(E’) = C/R2. P(E’) is the probability of the disease occurring in this non-exposed group and represents the sample proportion of those with the disease among those who were not exposed to the risk factor. This is summarized by the following equations:
P(E) = A/R1
P(E’) = C/R2
The relative risk, RR, is the ratio of these two proportions:
RR = P(E) / P(E’) = (A/R1) / (C/R2)
Example: Outbreak of gastrointestinal disease
From the CDC we learn of an outbreak of gastrointestinal illness in an elementary school.
Contingency table for GI illness outbreak.
Among those children who ate burritos, the absolute risk of getting GI illness can be calculated.
Similarly, among the children who did not eat burritos, the absolute risk of getting GI illness can be calculated.
From these probabilities, we may calculate the relative risk.
Relative risk is generally reported with 1 decimal place. In this case our result would be RR = 7.1.
Significance in relative risk calculations
Significance in relative risk is found using the χ² distribution. In general, the χ² value is found by subtracting the expected value from the observed value, squaring the result, and dividing by the expected value, for each cell. The sum of these terms gives the χ² value.
In contingency table calculations, the values from the table are used to give a χ² value. For a 2 X 2 table, the standard shortcut formula is
χ² = N(AD - BC)² / (R1 × R2 × C1 × C2)
The values of these variables are substituted from the contingency table made from the data obtained in the study. The result is a value of χ² = 74.0447447, which is highly significant considering that the highest value of χ² in the table with 1 df is 7.879.
The same result can be obtained from the TI83 calculator which uses a 2 X 2 matrix as shown below.
A. Matrix setup. B. Calculation results.
Calculations of χ² for the 2 X 2 contingency table.
The calculator also gives the p value for the computation which is, in this case, p = 7.63 x 10^-18 for these data, also indicating the highly significant nature of the answer.
Confidence interval for a relative risk calculation
As noted earlier in this program of study, a confidence interval is composed of three parts. These are shown below.
In relative risk calculations, the value of RR is the estimator. The reliability coefficient is 1.96, which corresponds to a 95% confidence interval. Now, we must calculate the standard error of RR.
RR follows χ² with 1 df, which is not a linear relationship. The curve for χ² with 1 df is similar to an exponential decay curve. It can be transformed to make it linear by using a logarithmic transformation. Recall that RR = 7.059210526. Logarithmic transformation of this value is ln RR = 1.954333272. The standard error for the transformed RR is given by the usual large-sample formula:
SE(ln RR) = √(1/A - 1/R1 + 1/C - 1/R2)
This calculation can be done by substituting in the values and performing the appropriate operations on the calculator.
Once the standard error for ln RR is found, the confidence interval for ln RR can be calculated.
Finally, the antilog is taken to find the confidence interval for RR.
For this study, we would report the results as RR = 7.1 with a 95% CI of 3.6 – 13.9.
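As an illustration of the whole relative risk calculation, the following short C program carries out the same steps. The cell counts in it are hypothetical, not the actual counts from the CDC report (which are not reproduced in this text), so its output will not match the RR = 7.1 result above; only the arithmetic is the point.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Hypothetical 2 X 2 table (NOT the actual CDC data):
     *                     ill      not ill
     *   ate burritos      A = 30   B = 70     R1 = 100
     *   did not eat       C = 5    D = 95     R2 = 100
     */
    double A = 30, B = 70, C = 5, D = 95;
    double R1 = A + B, R2 = C + D;
    double C1 = A + C, C2 = B + D, N = R1 + R2;

    double rr   = (A / R1) / (C / R2);                            /* relative risk */
    double chi2 = N * pow(A * D - B * C, 2) / (R1 * R2 * C1 * C2);

    /* 95% confidence interval via the log transformation described above. */
    double se = sqrt(1.0 / A - 1.0 / R1 + 1.0 / C - 1.0 / R2);
    double lo = exp(log(rr) - 1.96 * se);
    double hi = exp(log(rr) + 1.96 * se);

    printf("RR = %.1f, chi-square = %.2f, 95%% CI %.1f - %.1f\n", rr, chi2, lo, hi);
    return 0;
}
```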
In epidemiological case-control studies, the odds ratio, OR is frequently calculated. In a case-control study, everyone involved in some way with the outbreak is included.
Recall that the absolute risk, which is really a proportion, can be calculated from the 2 X 2 contingency table. Also, the relative risk is the ratio of the absolute risk values between the exposed and non-exposed individuals. Now, the odds ratio relates the odds of getting the disease when exposed to the odds of getting the disease when not exposed. The general contingency table for a case-control study is shown below.
General contingency table for a case-control study.
How odds ratio is determined
Theoretically, the probability of an event, E, is given by P(E), which is a proportion. The odds of the event are found by dividing the proportion of the event, P(E), by the proportion of the event not occurring, which is given as 1 – P(E):
Odds(E) = P(E) / (1 – P(E))
Odds (E) would be applicable to a case-control study for those who got the disease when exposed to the risk factor.
Among those not exposed to the risk factor, E’, there is a proportion who got the disease, P(E’), and a proportion who did not get the disease, 1 – P(E’). The odds of getting the disease when not exposed to the risk factor are
Odds(E’) = P(E’) / (1 – P(E’))
The odds ratio, OR, is the ratio of these two odds:
OR = Odds(E) / Odds(E’) = [P(E) / (1 – P(E))] / [P(E’) / (1 – P(E’))]
The odds ratio gives an indication of the chances of getting the disease when exposed to the risk factor compared with the odds of getting the disease when not exposed to the risk factor.
Relationship of odds ratio to the contingency table
The proportions used to find the odds ratio can be determined from the contingency table.
The individual odds calculations can be estimated by substitution:
Odds(E) ≈ (A/R1) / (B/R1) = A/B
Odds(E’) ≈ (C/R2) / (D/R2) = C/D
Based on these estimations, the value of OR is found:
OR = (A/B) / (C/D) = AD/BC
The value of OR, therefore, can be estimated by finding the ratio of the cross products of the values in the cells of the contingency table.
Example: Outbreak of gastrointestinal disease—case-control study
In the same report cited above on gastrointestinal illness associated with eating burritos, a case-control study was conducted.
Contingency table for case-control study.
Odds ratio calculations
The odds ratio, OR, for the data in the figure above is found using the ratio of the cross products as described above, giving OR = 8.8.
It is also possible to calculate a χ² value for the data along with its p value, as well as a confidence interval.
The matrix and results of the calculation are given below.
A. Matrix. B. Results.
Calculation of the χ² test for the case-control study.
The results give χ² = 10.56, which is significant at the p = .0012 level. This value exceeds the critical value of χ² for 1 df, which is 7.879 at the 99.5% level.
Construction of the confidence interval for OR is done in the same way as that for RR, using logarithmically transformed data. The calculation for the standard error after logarithmic transformation uses the usual large-sample formula:
SE(ln OR) = √(1/A + 1/B + 1/C + 1/D)
The value of ln OR = ln 8.8 = 2.174751721. The 95% confidence interval for the transformed relationship is
ln OR ± 1.96 × SE(ln OR)
Taking antilogs, we get the confidence interval for OR.
Therefore, OR = 8.8 with CI of 2.14-36.3 in this study. |
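The same kind of program works for the odds ratio. Again, the cell counts below are hypothetical rather than the study's actual data; the code simply mirrors the cross-product and logarithmic transformation steps described above.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Hypothetical case-control table (not the actual study data):
     *                     cases     controls
     *   ate burritos      A = 25    B = 10
     *   did not eat       C = 15    D = 50
     */
    double A = 25, B = 10, C = 15, D = 50;

    double odds_ratio = (A * D) / (B * C);               /* ratio of cross products */

    /* 95% confidence interval via the log transformation described above. */
    double se = sqrt(1.0 / A + 1.0 / B + 1.0 / C + 1.0 / D);
    double lo = exp(log(odds_ratio) - 1.96 * se);
    double hi = exp(log(odds_ratio) + 1.96 * se);

    printf("OR = %.1f, 95%% CI %.2f - %.1f\n", odds_ratio, lo, hi);
    return 0;
}
```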
Students analyze Lincoln's Gettysburg Address of the Civil War by reading it, re-writing it in their own words, answering 6 questions, and completing a third task using their creativity (instructions are clear in the handout). A points distribution is included but not the answers because they are dependent on student opinion and interpretation of the material. This would be great for a sub!
Sample questions include:
-Why was it “fitting and proper” that a portion of the field should have been dedicated to the fallen soldiers? Do we have any modern day examples of this?
- How might the speech have been interpreted by Union and Confederate civilians? Would being a southern slave owner versus non-slave owner have made a difference? What about an Irish immigrant vs the descendant of a founding father?
-What do you think the “new birth of freedom” was that Lincoln spoke of?
Consider the bundle to save!!!
Civil War Just Activities Mini Bundle
Civil War Bundle
Or my individual Civil War and Reconstruction items:
Civil War and Reconstruction |
A drilling rig bores a hole down through the dirt and rock to the oil reservoir. This hole is usually between 5 and 36 inches wide. The drill itself is pushed down by the weight of the heavy metal piping above it, and drilling fluid called "mud" is pumped down through this piping. This fluid can be just water, water with air bubbles, or water with polymers. The mud creates the ideal conditions for the drill by sweeping up debris as it is pumped back up to the surface. As the drill goes deeper, new sections of piping are attached.
Completion and Casing
Completion involves the finishing touches made to the borehole and piping to control the flow of oil into the well. The simplest approach, called "barefoot," is to not do anything. In the open hole approach, a liner with many small holes is made and then set across the production area, providing an intervention conduit and borehole stability. Sometimes concrete is poured into the space between the pipe and the borehole to achieve this stability. This is called a closed hole.
The top of the well is set with a set of valves, known as a production tree. The production tree is there to control the flow of oil and the pressure inside the well. During much of its productive life, this is all an oil well needs. The interior pressure of the Earth is enough to push oil up the well. When the oil reservoir begins to run out, more active measures are needed to keep the oil flowing. This usually involves a "workover" of the old well, which means either replacing or widening the old well, or drilling a new well into the same reservoir. Then water, gas, or carbon dioxide is pumped into the oil reservoir to increase the pressure and push more oil up.
Oil that has come up through a well is usually transported to a collection point via pipeline. This is the most economical means of moving a bulky commodity like oil, given that both the well and the collection point are fixed. |
The signaling molecule histamine is visible in green in the blue cells of this sea urchin larva (Strongylocentrotus purpuratus).
Credit: Andreas Heyland
Most humans experience some growing pains, but, for a young sea urchin, growing up means turning yourself inside out.
New research explores the key role a familiar substance, histamine, plays in this dramatic metamorphosis from a free swimming larva to the more familiar spiny adult that lives on the seafloor.
Well known to allergy sufferers for its association with sneezing, watery eyes and other symptoms, histamine prepares a sea urchin larva to transform into a radically different adult form within an hour, said study researcher Andreas Heyland, an assistant professor at the University of Guelph in Canada.
"They turn essentially inside out, like a sock," Heyland said.
Sea-urchin larvae swim freely in the ocean, living among other tiny organisms known as plankton, and as they mature they drift deeper into the sea. Before they can settle down on the seafloor, where they will spend the rest of their lives, the larvae must be able to pick up on environmental cues that tell them they are in the right spot.
For instance, the purple sea urchin (Strongylocentrotus purpuratus) that Heyland and colleagues studied prefers rocky environments, so it may be homing in on chemical cues released by algae and kelp that already live there, he said.
In order to pick up on these cues, the larvae must go through a phase known as competence. Heyland and colleagues tested the effects of histamine on sea-urchin development and concluded that this molecule plays an important role as an internal signal within the larvae to make them competent.
The sea-urchin larvae carry a backpacklike package around with them that contains adult structures, including many appendages, called tube feet.
"The entire package comes out of the larva at the same time the larval structures disintegrate," Heyland told LiveScience. "You get this little urchin that is unfolding." [Spectacular Underwater Photos]
Other research on a different type of sea urchin found a different role for histamine. In this case, the sea urchins appeared to cue into histamines in the environment, rather than within their own bodies. Heyland's team found no evidence for this in the purple sea urchin, indicating that different species of sea urchins may use the molecule differently.
Histamine is an ancient and common signaling molecule. In humans, it plays a role in digestion, allergies, the sleep-wake cycle and memory. Plants and bacteria also use histamine, suggesting it has been a signaling molecule since long before the evolution of mammals, he said.
While histamine may be widely spread around the tree of life, organisms' systems for using it differ. However, the team did find a similarity: Histamine regulates the metamorphosis of sea urchins using a receptor related to a histamine receptor involved in communication between nerve cells in mammals.
The study was published Thursday (April 26) in the journal BMC Developmental Biology. |
The Science of Hammond Organ Drawbar Registration
February 7, 2009
Revised March 2, 2011
When Laurens Hammond introduced the Hammond electric organ to the world in 1934, he gave us an instrument with more control over the sound it produced than any other before it (and many since). The Hammond organ’s drawbars let the player control the nature of the sound at the level of individual harmonics, much like a painter can control the nature of colour by mixing a very few primary colours.
This article is intended to clarify the function of the drawbars, and addresses such issues as combining drawbar settings (for example, how to combine 8′ and 4′ flutes with 8′ strings), and how to create a drawbar setting that most closely matches a particular instrument sound.
A Brief Introduction to the Nature of Sound
Any steady tone has three main attributes: volume, pitch, and timbre (pronounced “tamber”). The first two are pretty much self-explanatory, and won’t be discussed here. The third refers to the quality or character of the tone. Two tones can be of the same volume and pitch but still sound radically different (imagine the same note played on a trumpet and a xylophone).
There are several aspects to timbre, one of which is the distribution of harmonics in a tone. Most tones consist of a fundamental, together with several harmonics present in varying degrees. The frequency of the fundamental is what we usually perceive as the tone’s pitch. The purest tone consists of only a fundamental, and looks and sounds like this:
Harmonics are additional pure tones superimposed on the fundamental, each of which has a frequency that is an integer multiple of the fundamental. Instead of hearing the harmonics as distinct tones, our ears and brain hear a tone of the fundamental frequency, but with a different character than the pure tone. For example, here is a tone with the same pitch as the sample above, but consisting of the fundamental with a bit of the third harmonic (3x the fundamental frequency) and a bit less of the fifth harmonic (5x the fundamental frequency) thrown in:
Notice that the tone still has the same pitch, but that the character of the sound is very different.
The Hammond drawbars give the player control over the combination of fundamental and harmonic frequencies. There are nine drawbars for setting the levels of the fundamental and various harmonics and sub-harmonics (lower in frequency than the fundamental).
Each drawbar has nine positions labelled from 0 to 8. Zero means off; the harmonic controlled by that drawbar will not appear in the generated tones. The remaining positions will cause the specified harmonic to appear in varying amounts, with each increment producing about a 3dB increase.
As a convenient shorthand, drawbar settings are usually written as a sequence of nine digits broken into groups of two, four, and three digits respectively. For example, a possible drawbar setting sounding like an 8′ Tibia stop is 00 8040 000.
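To make the connection between drawbar digits and harmonics concrete, here is a small C sketch that turns a registration into a sum of sine waves. It assumes the standard drawbar footages (16′, 5⅓′, 8′, 4′, 2⅔′, 2′, 1⅗′, 1⅓′ and 1′, i.e. frequency ratios of 0.5, 1.5, 1, 2, 3, 4, 5, 6 and 8 relative to the 8′ fundamental) and treats each drawbar increment as a 3dB step; it is an idealized model, not a simulation of the actual tonewheel generator.

```c
#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979323846

/* Frequency of each drawbar relative to the 8' fundamental, assuming the
 * standard footages 16', 5-1/3', 8', 4', 2-2/3', 2', 1-3/5', 1-1/3', 1'. */
static const double ratio[9] = { 0.5, 1.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0 };

/* Relative amplitude of a drawbar digit: 0 is silent, and each increment
 * is about 3dB (a factor of sqrt(2) in volume), with 8 as full volume. */
static double amplitude(int digit)
{
    return (digit == 0) ? 0.0 : pow(2.0, (digit - 8) / 2.0);
}

int main(void)
{
    const char *registration = "008040000";   /* the 8' Tibia example, 00 8040 000 */
    double fundamental = 440.0;                /* Hz */
    double sample_rate = 8000.0;

    /* Print the first few samples of the idealized waveform. */
    for (int s = 0; s < 16; s++) {
        double t = s / sample_rate;
        double value = 0.0;
        for (int d = 0; d < 9; d++)
            value += amplitude(registration[d] - '0') *
                     sin(2.0 * PI * fundamental * ratio[d] * t);
        printf("sample %2d: %+.4f\n", s, value);
    }
    return 0;
}
```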
Drawbar Settings as (Pipe Organ) Stops
The individual drawbars are believed by some to be the equivalent of stops on a pipe organ, but they should not be seen this way. When an organist selects a single stop on a pipe organ, the resulting tone will be a complex one with many harmonics. The complexity of the tone depends on the type and number of ranks of pipes that the stop controls. A stop controlling a single rank of flute pipe will produce almost pure tones, whereas one controlling two ranks of diapason pipes will produce tones rich in harmonics.
Hammond drawbars on the other hand select individual harmonics. Usually several drawbars must be pulled varying amounts to achieve the effect of an individual stop.
For any given stop found on a typical church or theatre pipe organ, there will be a drawbar setting that approximates the sound of that stop on a Hammond organ. Sometimes the imitation will be almost perfect (when all the harmonics produced by the pipe stop are also ones available via the drawbars), and sometimes less so (such as a stop that contains a significant amount of 7th harmonic).
Here are some typical pipe organ stops, along with possible Hammond drawbar registrations to imitate those stops, and an audio clip of each:
| Stop | Drawbar setting |
| Bass Violin 16′ | 14 5431 000 |
| Tibia 8′ | 00 8040 000 |
| Bassoon 8′ | 07 8120 000 |
| French Trumpet 8′ | 00 7888 872 |
Drawbar Settings as Registrations
More often than not when playing a church or theatre pipe organ, several stops will be selected at once, causing pipes from multiple ranks to sound when a key is pressed. Some stops are quite boring when played in isolation, so other stops are added to change the character of the resulting sound.
Such a collection of stops to produce a desired sound is called a registration. Registrations can be set up manually by pulling the desired stops, or by means of pre-programmed combination pistons which can activate a handful of stops at once.
If a drawbar setting can be considered equivalent to a stop, how does one create an entire registration? The answer lies in combining two or more drawbar settings into a new drawbar setting that combines the characteristics of all the desired stops. So like a simple stop, an entire registration is also a drawbar setting. The best way to combine the settings for multiple stops into a single setting for a registration is the subject of the next section.
First, here are some registrations consisting of combinations of some of the stops from the previous section, with drawbar settings and audio clips:
| Registration | Drawbar setting |
| Bass Violin 16′ and Tibia 8′ | 14 8451 000 |
| Bassoon 8′ and French Trumpet 8′ | 06 8777 761 |
Combining Drawbar Settings
There are several schools of thought on how drawbar settings for stops should be combined to produce a registration. Mathematical analysis based on the physics of sound and the workings of the Hammond organ show them to be misconceived.
The first is from the Dictionary of Hammond Organ Stops, in which Stevens Irwin suggests:
The method is to add the drawbar-indications for each drawbar pitch, using the figure 8 if the sum comes to above 8.
I find two serious flaws with this method:
- This method doesn't take into account the fact that the drawbar values are logarithmic (since each increment represents 3dB, and the dB scale is logarithmic).
- Setting all sums greater than 8 to be just 8 is inconsistent. Following this rule means that 8+1 and 8+8 both give 8, whereas the latter should clearly be louder. Furthermore, adding too many drawbar settings together will always yield 88 8888 888.
Irwin then writes:
Porter Heaps suggests taking the largest figure for each harmonic drawbar in the group of stops to be combined as the proper intensity for the final ensemble.
In Hammond Organ Additive Synthesis – A New Method, Paul Schnellbecher responds,
I couldn’t accept the idea that combining two powerful stops and a medium string could be simulated by using the most powerful stop with only a tiny contribution from the string.
Schnellbecher then proposes that a more sensible way is to just add the stops together, and then if at the end of this process any drawbar setting is higher than 8, divide each drawbar setting by 2 as many times as necessary until this is no longer the case. Although I agree with Schnellbecher’s sentiments above, this suffers from the same flaw (not considering the logarithmic nature of the drawbar settings) as Irwin’s suggestion. Furthermore, why divide by 2? Why not 1½ or π? It would make more sense to divide by the highest setting and then multiply by 8, thus giving the loudest drawbar a setting of 8.
A Mathematically Sound Approach
The method I’m about to propose adheres to the following principle:
A combination of two drawbar settings should sound as much as possible as if the two settings were played at the same time on separate Hammond organs (or recorded separately and then mixed).
Let’s first look at combining the settings for a single drawbar, say the first white drawbar (8′). For example, suppose that both stops we’re interested in combining have that drawbar set to 5. It stands to reason that in the resulting registration, the contribution of the 8′ drawbar should have twice the power (√2 times the volume) that it has in either stop alone. After all, playing the same thing on two organs will produces twice the output power as playing it on one.
Recall that each drawbar increment corresponds to a 3dB volume increase. That happens to be √2 times the volume, and thus twice the power. Therefore, the resulting setting for the 8′ drawbar should be 6. In drawbar setting numbers, 5 + 5 = 6. Similarly, 1 + 1 = 2, 2 + 2 = 3, 3 + 3 = 4, 4 + 4 = 5, 5 + 5 = 6, 6 + 6 = 7, 7 + 7 = 8, and 8 + 8 = 9. In other words, whenever combining two equal settings, the result is one increment higher than that setting.
What about unequal values? The trick is to convert from drawbar setting numbers to linear power figures, add those together, and convert the sum back to a drawbar number. The following table gives the correspondence:
| Drawbar setting | Linear power | Range of summed powers |
| 0 | 0 | 0.000 - 0.706 |
| 1 | 1 | 0.707 - 1.413 |
| 2 | 2 | 1.414 - 2.827 |
| 3 | 4 | 2.828 - 5.656 |
| 4 | 8 | 5.657 - 11.30 |
| 5 | 16 | 11.31 - 22.62 |
| 6 | 32 | 22.63 - 45.24 |
| 7 | 64 | 45.25 - 90.50 |
| 8 | 128 | 90.51 - 181.0 |
For example, to add drawbar settings of 4 and 7, add the corresponding linear powers, giving 8 + 64 = 72. The result is still within the linear power range of setting number 7, so the resulting setting should be 7. Let’s try combining 3 and 4. Adding the linear powers gives 12, which is within the range of setting 5.
What happens if we add 8 and 7? The resulting linear power is 192, which is above the highest available. In cases where the linear power is greater than 181 (which corresponds to a drawbar setting of just under 8.5) we first have to finish adding the linear powers of the other drawbars, then divide all the sums by the largest and multiply by 181. The resulting set of linear powers can then be converted back to drawbar settings.
This procedure can be expanded to as many separate settings as you wish to combine into one. Convert all the settings to linear powers, add up the corresponding powers, divide by the largest and multiply by 181 if necessary, and convert the results back to drawbar settings.
A Drawbar Combination Calculator
To make it easy to use the method described above, I've created an on-line drawbar calculator.
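For anyone who wants to apply the rule off-line, here is a rough C sketch of the same combining procedure (my own illustration, not the calculator's actual code): convert each digit to linear power, sum the powers, rescale if any sum exceeds 181, and convert back to digits. Run on the Open Diapason and Octave example in the next section, it reproduces the 00 7656 020 result.

```c
#include <stdio.h>
#include <math.h>

/* Convert one drawbar digit ('0'..'8') to linear power: 0 -> 0, n -> 2^(n-1). */
static double to_power(char digit)
{
    int n = digit - '0';
    return (n == 0) ? 0.0 : pow(2.0, n - 1);
}

/* Convert a linear power back to the nearest drawbar digit, 0..8. */
static int to_digit(double power)
{
    if (power < 0.707)
        return 0;
    int n = (int) floor(log2(power) + 1.5);   /* round to the nearest increment */
    return (n < 1) ? 1 : (n > 8) ? 8 : n;
}

/* Combine any number of nine-digit drawbar settings (spaces are ignored) by
 * summing linear powers, rescaling if any sum exceeds a setting of 8 (181). */
static void combine(const char *settings[], int count, char result[10])
{
    double sum[9] = { 0 };
    for (int s = 0; s < count; s++) {
        int d = 0;
        for (const char *p = settings[s]; *p != '\0' && d < 9; p++)
            if (*p >= '0' && *p <= '8')
                sum[d++] += to_power(*p);
    }

    double max = 0;
    for (int d = 0; d < 9; d++)
        if (sum[d] > max)
            max = sum[d];
    if (max > 181.0)                           /* make the loudest drawbar an 8 */
        for (int d = 0; d < 9; d++)
            sum[d] = sum[d] / max * 181.0;

    for (int d = 0; d < 9; d++)
        result[d] = (char) ('0' + to_digit(sum[d]));
    result[9] = '\0';
}

int main(void)
{
    const char *stops[] = { "00 7656 010", "00 0402 010" };  /* Diapason + Octave */
    char result[10];
    combine(stops, 2, result);
    printf("Combined: %.2s %.4s %.3s\n", result, result + 2, result + 5);
    return 0;
}
```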
A Note to the Mathematically Inclined: The formula used to convert a drawbar setting of n to linear power is 2^(n-1), the exception being the zero setting. Why is there an exception? Because the all-the-way-in position of the drawbar should really be labelled negative infinity. Zero dB is not silence.
Comparing the Methods
Consider combining the drawbar settings 00 7656 010 (an 8′ Open Diapason) and 00 0402 010 (4′ Octave) using each of the four methods:
- 00 7656 010 + 00 0402 010 = 00 7858 020 (Irwin)
- 00 7656 010 + 00 0402 010 = 00 7656 010 (Heaps)
- 00 7656 010 + 00 0402 010 = 00 4534 010 (Schnellbecher)
- 00 7656 010 + 00 0402 010 = 00 7656 020 (My Method)
The four methods produce very different results. In Irwin’s method, the fourth and sixth drawbars (2nd and 4th harmonics) are too loud. In Heaps’ method, the combined setting is the same as the first setting, as if the second had been ignored. Schnellbecher’s method results in a combination that is quieter than either of its parts.
Using my method, the highest setting is only 7 because none of the sums result in a power greater than a setting of 7 would. Also notice the eighth drawbar (the 6th harmonic), where my method correctly accounts for the doubling of power resulting from adding two settings of 1.
The differences between the methods are even more striking when combining more than two settings. For example, consider combining these four settings: 00 1565 653 (8′ Violin), 00 0113 064 (4′ Violina), 00 8040 000 (8′ Tibia), and 00 0800 030 (4′ Tibia). The results using each of the methods are:
- 00 8888 687 (Irwin)
- 00 8865 664 (Heaps)
- 00 5764 374 (Schnellbecher)
- 00 8865 675 (My Method)
The Irwin method produces a setting that is almost straight 8′s, thus no longer reflecting the character of any of the individual voices. Once again, Schnellbecher’s method gives a much quieter result than all the rest.
You will notice that there is often little difference between my method and that of Porter Heaps. In fact, when combining only two drawbar settings, my method and Heaps’ will differ only if corresponding drawbars are within one increment of each other in the two settings, and will differ by only one increment in the result (for example, 4 + 4 gives 5 with my method, and 4 with Heaps’ method). When combining more than two settings, the methods can diverge further, although in the four-voice example above, they still only differ by at most one.
This is the Second Revision of My Method
When I originally wrote this article in early 2009, I used the same principle as above, except that I linearly combined volume, not power. In hindsight, this was a mistake. Two organs playing the same sound will not be twice as loud, but will produce twice the power. The revision of the article you are now reading, made in March 2011, correctly combines power.
Reducing the Volume of a Drawbar Setting
In order to achieve a desired balance between the upper and lower manuals when playing a piece of music, it is often necessary to reduce the volume of a drawbar setting (for instance, to make the accompaniment quiet enough that the melody can be clearly heard). The technique to reduce the volume is trivial: push each drawbar in by the same amount.
For example, the following settings all produce the same sound but at increasingly lower levels:
|00 8767 054|
|00 7656 043|
|00 6545 032|
|00 5434 021|
Once the setting contains a “1” somewhere within it, any further reductions will affect the character of the sound. Moving a drawbar from a setting of 8 to 7 or from 2 to 1 reduces the contribution of that drawbar by 3dB, but moving it from 1 to 0 turns it off completely (i.e. reduces it by an infinite number of decibels).
Finding the Drawbar Setting to Match a Sound
So far we’ve looked at how to combine drawbar settings representing one or more stops to create a setting for a registration. The other part of the puzzle is how to find a drawbar setting to match a particular pipe organ stop or other musical instrument in the first place.
The traditional method is of course by ear. With experience, a Hammond organist learns how the relative settings of the drawbars affect the character of the sound produced. However, it takes a while to gain that experience, and in this era of instant gratification, there is an easier way!
As we’ve discussed, the drawbars control the relative volume of the harmonics of the sounds produced. This distribution of harmonics is visible when one plots the frequency spectrum of the sound. For example, here is the spectrum of a 440Hz tone played with registration 00 8767 054:
The peaks and their amplitudes are:
If we now arbitrarily equate 0dB to a drawbar setting of 8, then every 3dB down from that corresponds to one drawbar increment down (anything -24dB or below becomes zero). One can see that the frequency spectrum of the tone corresponds exactly to the drawbar setting that produced it.
One can take advantage of this to derive a drawbar setting for an arbitrary tone. Starting with a recorded sample of the tone, use a tool like Audacity to generate a spectrum analysis, and write down the intensity of each peak. Equate the highest intensity to a setting of 8 and determine the settings of the remaining drawbars relative to that, so that each 3dB down from the highest corresponds to one drawbar increment down.
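In code, that recipe is only a few lines. The sketch below starts from peak levels in dB, one per drawbar; the values used are simply those implied by the 00 8767 054 example above (0dB for a setting of 8, 3dB down per increment, and a very low level for the unused drawbars), so it should print that same registration back out.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Peak level of each harmonic in dB relative to the loudest.
     * These values correspond to the 00 8767 054 example above;
     * unused drawbars are represented by a very low level. */
    double peaks_db[9] = { -40, -40, 0, -3, -6, -3, -40, -9, -12 };

    double loudest = peaks_db[0];
    for (int i = 1; i < 9; i++)
        if (peaks_db[i] > loudest)
            loudest = peaks_db[i];

    char setting[10];
    for (int i = 0; i < 9; i++) {
        double below = loudest - peaks_db[i];            /* dB below the loudest peak */
        int digit = 8 - (int) floor(below / 3.0 + 0.5);  /* one increment per 3dB */
        if (digit < 0 || below >= 24.0)
            digit = 0;                                   /* -24dB or lower is off */
        setting[i] = (char) ('0' + digit);
    }
    setting[9] = '\0';

    printf("Derived drawbar setting: %.2s %.4s %.3s\n",
           setting, setting + 2, setting + 5);
    return 0;
}
```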
This technique works for almost any steady tone that one can get a recording of. For example, I found this short clip on www.freesound.org of a female vocalist singing melisma style. Here is the frequency spectrum from the second G# note in the performance:
The peaks and their amplitudes, and the corresponding drawbar settings, are:
The drawbar setting to match this singer’s voice as closely as possible is thus 00 7843 000. Here is a clip of the aforementioned G# as sung by the vocalist, followed by the same note played with this drawbar setting:
The graphic above shows both the singer’s waveform (blue), and the Hammond’s (red). The shapes aren’t exactly the same because the phase relationship between the harmonics isn’t the same, but research has shown that our hearing is insensitive to phase relationship. Also notice that in the singer’s waveform, there are additional small “jaggies”. These are from the higher harmonics that the Hammond cannot produce.
Here is another spectrum plot, comparing the spectra of the vocalist with that of the Hammond imitation. The purple areas are where the spectra coincide. Notice that all the higher harmonics are missing from the Hammond tone, as is all the non-harmonic content of the actual singer’s voice:
Although a Hammond organ can reproduce many pipe organ stops, pipe organ registrations, and arbitrary sounds quite well, it falls short in some respects. Some of the shortcomings are:
If you create a drawbar setting for a pipe organ stop, it can often sound very much like the real thing. But if you create a setting for a combination of stops (a registration), it may not sound as full as the actual organ. When multiple ranks of pipes play at once, they will never be perfectly in tune with one another. On the Hammond however, the harmonics for all the stops are being produced by the same tonewheels, so it will sound as if the pipes were exactly in tune. The result might be too perfect to sound realistic.
The Hammond drawbars only give you up to the 8th harmonic, omitting the 7th. Sounds that contain significant portions of 7th harmonic or harmonics beyond the 8th will be lacking when reproduced on the Hammond. Additional harmonics are available on some models. For example, the spinet models provide a combined 10th/12th harmonic drawbar on the lower manual, and the H-100 series provide both 7th/9th and 10th/12th drawbars.
When more than one note is played, the same tonewheel might be contributing different harmonics to different notes. In theory, that tonewheel’s contribution will sound proportionally louder, but in practice, it will not be quite as loud as expected due to resistive losses in the magnetic pickup.
There’s more to timbre than the character of the tone. The envelope is also important. For example, a Hammond organ can be set to play approximately the same harmonics as a piano, but it will never sound like a piano. Piano notes have a sudden but not instantaneous attack followed by a gradual decay. Hammond notes have a nearly instantaneous attack (complete with key click), followed by steady volume, followed by an instantaneous decay.
Dictionary of Hammond Organ Stops – A Translation of Pipe Organ Stops Into Hammond-Organ Number Arrangements, by Stevens Irwin, 1939, 1952, 1961, and 1970.
A Primer of Organ Registration, by Gordon Balch Nevin, 1920. A copy of this book is available on-line at Scribd.com.
A Case Study: Tonewheels, by John Savard, 2010, 2012. This page goes into the workings of Hammond tonewheels in great detail, and is highly recommended reading for the technically inclined Hammond enthusiast.
Stonehenge is a monumental circular setting of large standing stones surrounded by a circular earthwork, built in prehistoric times beginning about 3100 BC and located about 13 km (8 miles) north of Salisbury, Wiltshire, Eng. The modern interpretation of the monument is based chiefly on excavations carried out since 1919 and especially since 1950.
The Stonehenge that visitors see today is considerably ruined, many of its stones having been pilfered by medieval and early modern builders (there is no natural building stone within 21 km [13 miles] of Stonehenge); its general architecture has also been subjected to centuries of weathering and depredation. The monument consists of a number of structural elements, mostly circular in plan. On the outside is a circular ditch, with a bank immediately within it, all interrupted by an entrance gap on the northeast, leading to the Avenue. At the center of the circle is a stone setting consisting of a horseshoe of tall uprights of sarsen (Tertiary sandstone) encircled by a ring of tall sarsen uprights, all originally capped by horizontal sarsen lintels. Within the sarsen stone circle were also configurations of smaller and lighter bluestones (igneous rock of diabase, rhyolite, and volcanic ash), but most of these bluestones have disappeared. Additional stones include the so-called Altar Stone, the Slaughter Stone, two Station stones, and the Heel Stone, the last standing on the Avenue outside the entrance. Small circular ditches enclose two flat areas on the inner edge of the bank, known as the North and South barrows, with empty stone holes at their centers.
Archaeological excavations since 1950 suggest three main periods of building--Stonehenge I, II, and III, the last divided into phases.
In Stonehenge I, about 3100 BC, the native Neolithic people, using deer antlers for picks, excavated a roughly circular ditch about 98 m (320 feet) in diameter; the ditch was about 6 m (20 feet) wide and 1.4 to 2 m (4.5 to 7 feet) deep, and the excavated chalky rubble was used to build the high bank within the circular ditch. They also erected two parallel entry stones on the northeast of the circle (one of which, the Slaughter Stone, still survives). Just inside the circular bank they also dug--and seemingly almost immediately refilled--a circle of 56 shallow holes, named the Aubrey Holes (after their discoverer, the 17th-century antiquarian John Aubrey). The Station stones also probably belong to this period, but the evidence is inconclusive. Stonehenge I was used for about 500 years and then reverted to scrubland.
During Stonehenge II, about 2100 BC, the complex was radically remodeled. About 80 bluestone pillars, weighing up to 4 tons each, were erected in the center of the site to form what was to be two concentric circles, though the circles were never completed. (The bluestones came from the Preseli Mountains in southwestern Wales and were either transported directly by sea, river, and overland--a distance of some 385 km [240 miles]--or were brought in two stages widely separated in time.) The entranceway of this earliest setting of bluestones was aligned approximately upon the sunrise at the summer solstice, the alignment being continued by a newly built and widened approach, called the Avenue, together with a pair of Heel stones. The double circle of bluestones was dismantled in the following period.
The initial phase of Stonehenge III, starting about 2000 BC, saw the erection of the linteled circle and horseshoe of large sarsen stones whose remains can still be seen today. The sarsen stones were transported from the Marlborough Downs 30 km (20 miles) north and were erected in a circle of 30 uprights capped by a continuous ring of stone lintels. Within this ring was erected a horseshoe formation of five trilithons, each of which consisted of a pair of large stone uprights supporting a stone lintel. The sarsen stones are of exceptional size, up to 9 m (30 feet) long and 50 tons in weight. Their visible surfaces were laboriously dressed smooth by pounding with stone hammers; the same technique was used to form the mortise-and-tenon joints by which the lintels are held on their uprights, and it was used to form the tongue-and-groove joints by which the lintels of the circle fit together. The lintels are not rectangular; they were curved so that, taken together, they form a circle. The pillars are tapered upward. The jointing of the stones is probably an imitation of contemporary woodworking.
In the second phase of Stonehenge III, which probably followed within a century, about 20 bluestones from Stonehenge II were dressed and erected in an approximate oval setting within the sarsen horseshoe. Sometime later, about 1550 BC, two concentric rings of holes (the Y and Z Holes, today not visible) were dug outside the sarsen circle; the apparent intention was to plant upright in these holes the 60 other leftover bluestones from Stonehenge II, but the plan was never carried out. The holes in both circles were left open to silt up over the succeeding centuries. The oval setting in the center was also removed.
The final phase of building in Stonehenge III probably followed almost immediately. Within the sarsen horseshoe the builders set a horseshoe of dressed bluestones set close together, alternately a pillar followed by an obelisk followed by a pillar and so on. The remaining unshaped 60-odd bluestones were set as a circle of pillars within the sarsen circle (but outside the sarsen horseshoe). The largest bluestone of all, traditionally misnamed the Altar Stone, probably stood as a tall pillar on the axial line.
About 1100 BC the Avenue was extended from Stonehenge eastward and then southeastward to the River Avon, a distance of about 2,780 m (9,120 feet). This suggests that Stonehenge was still in use at the time.
Why Stonehenge was built is unknown, though it probably was constructed as a place of worship of some kind. Notions that it was built as a temple for Druids or Romans are unsound, because neither was in the area until long after Stonehenge was last constructed. Early in the 20th century, the English astronomer Sir Norman Lockyer demonstrated that the northeast axis aligned with the sunrise at the summer solstice, leading other scholars to speculate that the builders were sun worshipers. In 1963 an American astronomer, Gerald Hawkins, proposed that Stonehenge was a complicated computer for predicting lunar and solar eclipses. These speculations, however, have been severely criticized by most Stonehenge archaeologists. "Most of what has been written about Stonehenge is nonsense or speculation," said R.J.C. Atkinson, archaeologist from University College, Cardiff. "No one will ever have a clue what its significance was."
Excerpt from the Encyclopedia Britannica without permission. |
Peranakan is a term used to refer to the descendants of early Chinese immigrants who partially adopted indigenous customs through either acculturation or intermarriage with indigenous communities.
Many peranakan Chinese families have been settled in Indonesia for centuries and have mixed indigenous-Chinese ancestry. There are about 7 million peranakan in Indonesia.
The peranakan have contributed a range of cultural influences, mainly culinary, including various types of noodles. Other contributions include the beautiful batik pesisir of Cirebon, Pekalongan, Kudus, Lasem, Tuban and Sidoarjo, and traditional herbal medicines known as jamu.
Politics have threatened peranakan culture since 1870, when the Dutch colonial government issued an agrarian policy prohibiting pribumi (indigenous people) from selling their land to foreigners. The policy affected the Chinese, who were categorized as foreigners ("foreign Orientals"), and consequently disrupted their integration with their "indigenous" neighbors.
Despite their contribution to the nationalist movement and struggle against Dutch colonialism, the peranakan were coming under increasing government pressure by the late 1950s to assimilate with what was then viewed as the indigenous Indonesian "national identity".
During Soeharto's era, the peranakan were stigmatized as leftist sympathizers and banned from politics, because Sukarno's regime had chosen to side with the People's Republic of China, something that Soeharto, an anti-Communist ally of the United States, did not want.
In the last few posts, the atomic nature of the universe was discussed, Dalton's atomic theory explained, and we took a tour of the "guts" of an atom. Now let's use all of this to understand how the pieces fit together and what makes the element (and atom) hydrogen different from carbon, oxygen, and neodymium. The short story is the way the parts come together: it's really a counting game.
New elements are still discovered every few years, but for now a current periodic table will show 118 unique elements. (I've linked this image to a good table you can print out if you wish.) All periodic tables will have at least three pieces of information. For each element, every table should give you at least the symbol, the atomic mass, and the atomic number. Some have far too much information, stuff you should be able to infer from the position on the table, the symbol, the mass, and so on, but every table (except the ones on the shoes at the top of this post) should give the three fundamental data. Of these three, the most important is the atomic number. So important is the atomic number that it provides the organizing principle for the arrangement of the elements.
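The counting game is simple enough to capture in a few lines of code. Here is a toy sketch (my own, with only four of the 118 elements filled in) of the three pieces of data every periodic table provides, keyed by the one datum that actually defines the element: the atomic number, that is, the number of protons.

```python
# A toy periodic table: atomic number -> (symbol, standard atomic mass in u).
# Only four entries are shown here; a real table has 118.
ELEMENTS = {
    1:  ("H",  1.008),
    6:  ("C",  12.011),
    8:  ("O",  15.999),
    60: ("Nd", 144.242),
}

def describe(proton_count):
    """The proton count alone decides which element an atom is."""
    symbol, mass = ELEMENTS[proton_count]
    return f"{proton_count} protons -> {symbol}, atomic mass {mass}"

print(describe(1))    # hydrogen
print(describe(60))   # neodymium
```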
Include 5 to 10 blank pages inside an apple-shaped book for each student. Have them draw or write in their journals whenever new information is learned, discussed, or researched. Give students time periodically to share their journal entries with their friends and teacher.
To develop a cluster of apple words, ask students, "What do you think of when you hear the word apples?" Record their responses on chart paper. |