This image shows a portion of the central mound in the impact crater Gale that is of interest to scientists because it is composed of light-toned layered deposits. The layered deposits could have formed in a water environment if a lake once filled the crater. Alternatively, particles suspended in the atmosphere, such as dust or volcanic ash, could have built up the layers over time. By using HiRISE images to see details in the layers, such as how their thicknesses vary horizontally and vertically, scientists can narrow down the potential origins. The paucity of impact craters on the layered deposits indicates that either the deposits are very young, or more likely that they are being eroded to remove these craters. Wind erosion has modified the layers after they formed, creating both sharp corners and rounded depressions along the surface. Meter-size boulders can be seen at the base of steep cliffs, but the scarcity of boulders elsewhere suggests most of the erosion is occurring by the wind rather than downslope movement of material.
Image PSP_001422_1750 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on November 15, 2006. The complete image is centered at -5.0 degrees latitude, 137.7 degrees East longitude. The range to the target site was 262.1 km (163.8 miles). At this distance the image scale is 26.2 cm/pixel (with 1 x 1 binning) so objects ~79 cm across are resolved. The image shown here has been map-projected to 25 cm/pixel and north is up. The image was taken at a local Mars time of 3:31 PM and the scene is illuminated from the west with a solar incidence angle of 57 degrees, thus the sun was about 33 degrees above the horizon. At a solar longitude of 135.6 degrees, the season on Mars is Northern Summer.
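Two of the caption's figures follow from simple arithmetic: the Sun's elevation is 90 degrees minus the incidence angle, and the "~79 cm" figure is roughly three pixels at the stated scale. A minimal sketch in Python (the three-pixel criterion is an assumption used for illustration, a common rule of thumb rather than something stated in the caption):

```python
# Back-of-the-envelope check of the caption's numbers (illustrative only).
incidence_deg = 57.0                       # solar incidence angle, measured from the vertical
sun_elevation_deg = 90.0 - incidence_deg   # angle of the Sun above the horizon
print(f"Sun elevation: ~{sun_elevation_deg:.0f} degrees")                      # ~33 degrees

pixel_scale_cm = 26.2                      # cm/pixel at 1x1 binning
pixels_needed = 3                          # assumed rule of thumb: ~3 pixels to resolve an object
print(f"Smallest resolved object: ~{pixels_needed * pixel_scale_cm:.0f} cm")   # ~79 cm
```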
NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The High Resolution Imaging Science Experiment is operated by the University of Arizona, Tucson, and the instrument was built by Ball Aerospace and Technology Corp., Boulder, Colo. |
An atomic fountain is a cloud of atoms that is tossed upwards by lasers in the Earth's gravitational field. If it were visible, it would resemble the water in a fountain. While weightless in the toss, the atoms are measured to set the frequency of an atomic clock.
The primary motivation behind the development of the atomic fountain derives from the Ramsey method of measuring the frequency of atomic transitions. In broad strokes, the Ramsey method involves exposing a cloud of atoms to a brief radiofrequency (rf) electromagnetic field; waiting a time T; briefly exposing the cloud to the rf field again; and then measuring what fraction of the atoms in the cloud have transitioned. If the frequency of the rf field is identical to the atomic transition frequency, 100% of the atoms will have transitioned; if the frequency of the field differs slightly from the transition frequency, some of the atoms will not have transitioned. By repeatedly sending clouds of atoms through such an apparatus, the frequency of the field can be adjusted to match the atomic transition frequency.
The precision of the Ramsey method can be increased by increasing the wait time T of the cloud. The use of an atomic fountain with a cooled atomic cloud allows for wait times on the order of one second, which is vastly greater than what can be achieved by performing the Ramsey method on a hot atomic beam. This is one reason why NIST-F1, a cesium fountain clock, can keep time more precisely than NIST-7, a cesium beam clock.
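As a rough illustration of why a longer wait time helps, the sketch below assumes an idealized two-pulse Ramsey sequence with negligible pulse duration, so the transition probability follows a simple cos² fringe; the wait times are illustrative, not values taken from NIST-7 or NIST-F1:

```python
import numpy as np

def ramsey_probability(detuning_hz, wait_time_s):
    """Idealized two-pulse Ramsey transition probability for a field detuned from the
    atomic transition by detuning_hz, with free-evolution (wait) time wait_time_s."""
    return np.cos(np.pi * detuning_hz * wait_time_s) ** 2

# A longer wait time narrows the central fringe, so the field frequency can be
# steered onto the atomic transition far more precisely.
for T in (0.01, 1.0):                # ~10 ms (beam-like) vs ~1 s (fountain-like)
    fwhm_hz = 1.0 / (2.0 * T)        # width of the central fringe at half maximum
    p = ramsey_probability(0.25, T)  # transition probability if the field is off by 0.25 Hz
    print(f"T = {T:4.2f} s: fringe FWHM ~ {fwhm_hz:5.2f} Hz, P(detuning 0.25 Hz) = {p:.3f}")
```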
The idea of the atomic fountain was first proposed in the 1950s by Jerrold Zacharias. Zacharias attempted to implement an atomic fountain using a thermal beam of atoms, under the assumption that the atoms at the low-velocity end of the Maxwell–Boltzmann distribution would be of sufficiently low energy to execute a reasonably sized parabolic trajectory. However, the attempt was not successful because fast atoms in a thermal beam strike the low-velocity atoms and scatter them. Practical atomic fountains only became possible with the advent of laser cooling: the first were demonstrated in the late 1980s by Steven Chu's group at Stanford University using clouds of laser-cooled atoms.
- How the NIST-F1 Cesium Fountain Clock Works, http://www.nist.gov/public_affairs/releases/n99-22.cfm
- C. J. Foot (2005). Atomic Physics. p. 212.
- M. A. Kasevich et al. (1989). "Atomic fountains and clocks". Optics News 15 (12): 31–32.
- S. Chu (1998). "The manipulation of neutral particles" (PDF). Rev. Mod. Phys. 70: 685–706. Bibcode:1998RvMP...70..685C. doi:10.1103/RevModPhys.70.685.
- L. Essen (1959). "An improved cesium frequency and time standard". Nature 184: 1791–1792. Bibcode:1959Natur.184.1791O. doi:10.1038/1841791b0.
Zoo Phonics is a new component of our curriculum this year and most of the posts have been about that or September's dinosaur theme. The children are continuing to study Chinese as well. We decided to begin the year with zoo animals to complement the Zoo Phonics curriculum. Yueying has done several activities with the children to teach them animal vocabulary. She has also worked on dinosaur vocabulary. Next month the children will focus on learning about their classroom and school.
The children have discovered that dinosaur feet are huge! They learned that a Tyrannosaurus foot is about 4 feet long and 2 feet wide. Next the children traced their own feet and cut them out to compare them with the Tyrannosaurus foot. They also tried to see how many steps a Tyrannosaurus could take walking across our gym and then counted their own steps. The discussion also led to how tall the dinosaur might be. Some guessed it could be as tall as our gym.
The children have been learning about dinosaur fossils and made casts of dinosaur imprints at preschool. Before beginning the project the children watched a video at YouTube - How to make a Dinosaur Fossil. The children mixed sand, dirt, and water together and then pressed a dinosaur into the mud. The imprint was filled with plaster of paris to create the cast.
This week the preschoolers have been learning all about the letter a. They began the week reviewing the letters and their sounds and body movements. They traced puffy paint letter a's with their fingers, drew a's in applesauce and completed pre-writing worksheets. They learned that Allie is one of the hardest workers since she can be found in so many words. The children took their homework folders home including a worksheet to be completed with their parents featuring Allie Alligator wearing a hardhat and holding a shovel. The children looked at words on the bulletin board and all took turns finding Allie and the letter a. All of the children were able to recognize the merged letter a with Allie.
This week the preschoolers learned that dinosaurs hatch from eggs. The children have been singing counting songs and finding dinosaurs in eggs. Today they talked about what dinosaurs eat and found that most were plant eaters. Following this discovery they played the Triceratops Leaf Eating game. Children drew a numbered card from the deck and then had to feed the corresponding number of leaves to the triceratops. They practiced Chinese vocabulary while playing the game.
This month the preschoolers are studying dinosaurs and prehistoric habitats. Later in the fall they will learn about deserts and plains environments. In the winter we will study the arctic and then forests. Oceans, rivers, lakes, and ponds will be covered in the spring. Children will also learn about the people and cultures who live in each environment.
Yesterday the children learned about paleontologists and digging for dinosaur bones. To make the concept more concrete dinosaur puzzle pieces were hidden in the sand table. Children had to dig for the puzzle pieces and then take them to the table to figure out how they fit together. This was also a wonderful group activity where the children learned to work together as a team.
The preschoolers learned about Allie Alligator today. They saw her shaped as the letter "a" and then read Allie's page in the first reading book. While reading Allie's page the children looked at all the items in the picture and pointed out which ones begin with "a". They're also working on some pre-writing skills.
Yesterday the children finished creating Allie the Zoo Phonics alligator. The children painted the alligators on the first day and then added texture with torn construction paper of various greens. Today they added teeth. The alligators will be hung in the toddler and preschool rooms. Other Zoo Phonics animals will join Allie throughout the year.
Each letter in the Zoo Phonics system is associated with a zoo animal. The children then learn a body movement or signal related to that animal. At the same time the children learn the letter's sound. For example, the children snap their hands together to act out Allie snapping her mouth closed. The use of oral, visual, and kinesthetic means helps the children to learn the letters in a fun way and makes the letters more memorable.
Yesterday the preschoolers joined the toddlers to begin the school year with a Zoo Phonics lesson. Amber introduced Allie Alligator to the children and taught them the Zoo Phonics song including the body motions for each animal. The preschoolers had already been working on the song so were able to help everyone learn it. The kids painted two large 5.5-foot alligators today; one for each classroom. The alligators are difficult to see in the pictures since they're lying on top of white butcher paper. Tomorrow the children will finish the alligators by gluing teeth in their mouths and adding bumps to the bodies. The alligators will be displayed on the walls where other animals will join them throughout the school year. |
Quantum computing is one step closer to reality, thanks to work done by a team of international scientists at the University of Bristol. Jeremy O’Brien, Director of the Centre for Quantum Photonics in the United Kingdom, has a very interesting announcement to make: “We can say with real confidence that, using our new technique, a quantum computer could, within five years, be performing calculations that are outside the capabilities of conventional computers."
So, what exactly has the team of international researchers at the University of Bristol done? They have designed a photonic chip that works with photons, rather than the electrons that are used in conventional processors. The chip currently has “several [working] models”, and will apparently work by sending entangled photons down pathways/networks in a silicon chip. Together, the entangled photons will perform a “coordinated quantum walk”, and the outcome of this process will represent the “results of a calculation.”
Also called an optical chip, the photonic chip’s network of optical circuits allows a quantum walk with two photons to occur and be detected, which is the underlying basis for the entire discovery. Previously, scientists had achieved a quantum walk with single photons, but with two photons they faced the challenge of ensuring that the two photons are exactly identical, and then of taking into account their particle-particle interaction within the circuit. Now that they have done it, the possibilities really open up, and, according to the same scientists, going from two to many photons will not be very hard, as the same principles apply. Each time you increase the number of photons, the number of possible outcomes increases exponentially, allowing scientists to simulate extremely complex situations and models. So, for now, the next goal is performing multi-photon walks.
The concept of the quantum walk comes from the mathematical concept of the random walk, which can be defined as the “trajectory of an object taking successive steps in a random direction”, whether in two or multi-dimensional space. A random walk takes special meaning when dealing with quantum particles, as randomness here is inherent at every step. With the trajectory provided by the coordinated quantum walks of two entangled photons, scientists have been able to perform a new kind of computation with newly developed algorithms, which will perform orders of magnitude faster than today’s processors.
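The Bristol chip implements a continuous-time walk of two entangled photons in coupled waveguides, which is not reproduced here. As a toy illustration of the underlying idea, the sketch below runs a single-particle discrete-time ("coined") quantum walk on a line and contrasts its spread with a classical random walk after the same number of steps; all parameters are illustrative.

```python
import numpy as np
from math import comb

steps = 50
n = 2 * steps + 1                          # positions -steps .. +steps
amp = np.zeros((n, 2), dtype=complex)      # amplitude[position, coin]
amp[steps, 0] = 1 / np.sqrt(2)             # symmetric initial coin state at the origin
amp[steps, 1] = 1j / np.sqrt(2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard "coin flip"

for _ in range(steps):
    amp = amp @ H.T                        # flip the coin at every position
    shifted = np.zeros_like(amp)
    shifted[:-1, 0] = amp[1:, 0]           # coin state 0 steps one position to the left
    shifted[1:, 1] = amp[:-1, 1]           # coin state 1 steps one position to the right
    amp = shifted

positions = np.arange(n) - steps
quantum = (np.abs(amp) ** 2).sum(axis=1)   # probability of finding the walker at each position

classical = np.zeros(n)                    # binomial distribution of a classical random walk
for k in range(steps + 1):
    classical[2 * k] = comb(steps, k) / 2 ** steps

print("quantum   spread (std dev):", np.sqrt((quantum * positions ** 2).sum()))
print("classical spread (std dev):", np.sqrt((classical * positions ** 2).sum()))
```

The quantum walker spreads ballistically (its standard deviation grows linearly with the number of steps), whereas the classical walk spreads only as the square root of the number of steps, which is one intuition behind the speed-ups described above.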
Mr. O’Brien added that the software that could run on such a processor architecture had yet to be developed, and input-output devices would have to be redeveloped as well. The first real-world applications you can hope to see will be in the world of science, where simulations relying on the assimilation of millions of variables are required. All this is possible because of the inherent uncertain nature of quantum physics, which allows a single subatomic particle to be in several places simultaneously: “unlike an electronic 'bit' in conventional computing, the use of quantum particles, or 'qubits,' permits parallel computing on a scale that would not be possible with conventional electronics.”
Image courtesy: Gizmag |
By transforming cells with a plasmid containing a gene for green fluorescent protein, students understand that changing an organism’s genotype can result in a remarkably different phenotype. Students insert a jellyfish gene into laboratory-safe bacteria and induce production of a green fluorescent protein. Students understand the power of this technology and discuss the economic and social ramifications of recombinant DNA technology. This activity will allow you to teach about gene regulation and transcriptional activation. The GFP plasmid makes a neon green fluorescent protein that can be seen under UV light.
Beginner Lab: This lesson is appropriate for any biology or biotechnology class. |
History of seafood
The harvesting and consuming of seafoods are ancient practices that date back to at least the beginning of the Upper Paleolithic period about 40,000 years ago. Isotopic analysis of the skeletal remains of Tianyuan man, a 40,000 year old modern human from eastern Asia, has shown that he regularly consumed freshwater fish. Archaeological features such as shell middens, discarded fish bones and cave paintings show that sea foods were important for survival and consumed in significant quantities. During this period, most people lived a hunter-gatherer lifestyle and were, of necessity, constantly on the move. However, where there are early examples of permanent settlements (though not necessarily permanently occupied) such as those at Lepenski Vir, they are almost always associated with fishing as a major source of food.
The ancient river Nile was full of fish; fresh and dried fish were a staple food for much of the population. The Egyptians had implements and methods for fishing and these are illustrated in tomb scenes, drawings, and papyrus documents. Some representations hint at fishing being pursued as a pastime.
The Israelites ate a variety of fresh and saltwater fish, according to both archaeological and textual evidence. Remains of freshwater fish from the Yarkon and Jordan rivers and the Sea of Galilee have been found in excavations, and include St. Peter’s fish and mouthbreeders. Saltwater fish discovered in excavations include sea bream, grouper, meagre and gray mullet. Most of these come from the Mediterranean, but in the later Iron Age period, some are from the Red Sea. Fishermen supplied fish to inland communities, as remains of fish, including bones and scales, have been discovered at many inland sites. To preserve them for transport, the fish were first smoked or dried and salted. Merchants also imported fish, sometimes from as far away as Egypt, where pickled roe was an export article. Remains of Nile perch from Egypt have been found, and these must have been smoked or dried before being imported through the trade network that connected ancient Near Eastern societies. Merchants shipped fish to Jerusalem and there was evidently a significant trade in fish; one of the gates of Jerusalem was called the Fish Gate, named for a fish market nearby. Fish products were salted and dried and sent great distances during the Israelite and Judean monarchies. However, even in the later Persian, Greek and Roman periods, the cost of preserving and transporting fish must have meant that only wealthier inhabitants of the highland towns and cities could afford it, or those who lived close to the sources, where it was less expensive.
Fishing scenes are rarely represented in ancient Greek culture, a reflection of the low social status of fishing. The consumption of fish varied in accordance with the wealth and location of the household. In the Greek islands and on the coast, fresh fish and seafood (squid, octopus, and shellfish) were common. They were eaten locally but more often transported inland. Sardines and anchovies were regular fare for the citizens of Athens. They were sometimes sold fresh, but more frequently salted. A stele of the late 3rd century BCE from the small Boeotian city of Akraiphia, on Lake Copais, provides us with a list of fish prices. The cheapest was skaren (probably parrotfish) whereas Atlantic bluefin tuna was three times as expensive. Common salt water fish were yellowfin tuna, red mullet, ray, swordfish or sturgeon, a delicacy which was eaten salted. Lake Copais itself was famous in all Greece for its eels, celebrated by the hero of The Acharnians. Other fresh water fish were pike-fish, carp and the less appreciated catfish.
Pictorial evidence of Roman fishing comes from mosaics. The Greco-Roman sea god Neptune is depicted as wielding a fishing trident. Fish was served only in earlier periods, and it remained more expensive than simpler meat types. Breeding was attempted in freshwater and saltwater ponds, but some kinds of fish could not be fattened in captivity. Among those that could was the formidable and potentially toxic Mediterranean moray, a valued delicacy that was reared in ponds at the seaside. These morays were also kept as pets and sometimes as a means of punishment. Another farmed species was the popular mullus, the goatfish. At a certain time this fish was considered the epitome of luxury, above all because its scales exhibit a bright red color when it dies out of water. For this reason these fish were occasionally allowed to die slowly at the table. There was even a recipe in which this would take place in garo, in the sauce. At the beginning of the Imperial era, however, this custom suddenly came to an end, which is why mullus in the feast of Trimalchio (see the Satyricon) could be shown as a characteristic of the parvenu, who bores his guests with an unfashionable display of dying fish. The fish and fishing practices of the Roman era were recorded by the Greco-Roman Oppian of Cilicia, whose Halieutics was an expansive poem in hexameter composed between 177 and 180. It is the earliest such work to have survived to the modern day.
Garum, also known as liquamen, was the universal sauce added to everything. It was prepared by subjecting salted fish, in particular mackerel intestines, to a very slow thermal process. Over the course of two to three months, in an enzymatic process stimulated by heating, usually by exposure to the sun, the protein-laden fish parts decomposed almost entirely. The resulting mass was then filtered and the liquid traded as garum, the remaining solids as alec - a kind of savoury spread. Because of the smell it produced, the production of garum within the city was banned. Garum, supplied in small sealed amphorae, was used throughout the Empire and totally replaced salt as a condiment. Today similar sauces are produced in Southeast Asia, usually sold abroad under the description "fish sauce", or nam pla.
Aquaculture in China began before the 1st millennium BC with the farming of the common carp. These carp were grown in ponds on silk farms, and were fed silkworm nymphs and faeces. Carp are native to China. They are good to eat, and they are easy to farm since they are prolific breeders, do not eat their young, and grow fast. The original idea that carp could be cultured most likely arose when they were washed into ponds and paddy fields during monsoons. This would lead naturally to the idea of stocking ponds. The Chinese politician Fan Li was credited with authorship of The Fish-Breeding Classic, the earliest-known treatise on fish farming.
During the 7th- to 10th-century Tang dynasty, the farming of common carp was banned because the Chinese word for common carp (鯉) sounded like the emperors' family name, Li (李). Anything that sounded like the emperor's name could not be kept or killed. The ban had a productive outcome, because it resulted in the development of polyculture, growing multiple species in the same ponds. Different species feed on different foods and occupy different niches in the ponds. In this way, the Chinese were able to simultaneously breed four different species of carp, the mud carp, which are bottom feeders, silver carp and bighead carp, which are midwater feeders, and grass carp which are top feeders. Another development during the Tang dynasty was a mutation of the domesticated carp, which led to the development of goldfish.
From AD 1368, the Ming Dynasty encouraged fish farmers to supply the live fish trade, which dominates Chinese fish sales to this day. From 1500, methods of collecting carp fry from rivers and then rearing them in ponds were developed.
In Japan, sushi has traditionally been considered a delicacy. The original type of sushi, nare-zushi, was first developed in Southeast Asia and then spread to southern China before its introduction to Japan sometime around the 8th century AD. Fish was salted and wrapped in fermented rice, a traditional lacto-fermented rice dish. Nare-zushi was made of this gutted fish stored in fermented rice for months at a time for preservation. The fermentation of the rice prevented the fish from spoiling. The fermented rice was discarded and fish was the only part consumed. This early type of sushi became an important source of protein for the Japanese. During the Muromachi period, another way of preparing sushi was developed, called namanare. Namanare was partly raw fish wrapped in rice, consumed fresh, before it changed flavor. During the Edo period, a third type of sushi was developed, haya-zushi. Haya-zushi was assembled so that both rice and fish could be consumed at the same time, and the dish became unique to Japanese culture. It was the first time that rice was not being used for fermentation. Rice was now mixed with vinegar, with fish, vegetables and dried foodstuff added. This type of sushi is still very popular today. Each region utilizes local flavors to produce a variety of sushi that has been passed down for many generations.
When Tokyo was still known as Edo in the early 1800s, mobile food stalls run by street vendors became popular. During this period nigiri-zushi was introduced, consisting of an oblong mound of rice with a slice of fish draped over it. After the Great Kanto earthquake in 1923, nigiri-sushi chefs were displaced from Edo throughout Japan, popularizing the dish throughout the country.
In medieval Europe, seafood was less prestigious than other animal meats, and often seen as merely an alternative to meat on fast days. Still, seafood was the mainstay of many coastal populations. "Fish" to the medieval person was also a general name for anything not considered a proper land-living animal, including marine mammals such as whales and porpoises. Also included were the beaver, due to its scaly tail and considerable time spent in water, and barnacle geese, due to lack of knowledge of where they migrated. Such foods were also considered appropriate for fast days. Especially important was the fishing and trade in herring and cod in the Atlantic and the Baltic Sea. The herring was of unprecedented significance to the economy of much of Northern Europe, and it was one of the most common commodities traded by the Hanseatic League, a powerful north German alliance of trading guilds. Kippers made from herring caught in the North Sea could be found in markets as far away as Constantinople. While large quantities of fish were eaten fresh, a large proportion was salted, dried, and, to a lesser extent, smoked. Stockfish, cod that was split down the middle, fixed to a pole and dried, was very common, though preparation could be time-consuming, and meant beating the dried fish with a mallet before soaking it in water. A wide range of mollusks including oysters, mussels and scallops were eaten by coastal and river-dwelling populations, and freshwater crayfish were seen as a desirable alternative to meat during fish days. Compared to meat, fish was much more expensive for inland populations, especially in Central Europe, and therefore not an option for most. Freshwater fish such as pike, carp, bream, perch, lamprey, and trout were common.
In Islam, the Shafi'i, Maliki and Hanbali schools allow the eating of shellfish, while the Hanafi school in Sunni Islam does not, nor does the Shi'ite (Ja'fari) school. The Jewish laws of Kashrut forbid the eating of shellfish and eels. According to the King James version of the Bible, finfish may be eaten, but shellfish and eels are an abomination and should not be eaten. Since early times, the Catholic Church has forbidden the practice of eating meat, eggs and dairy products at certain times. Thomas Aquinas argued that these "afford greater pleasure as food [than fish], and greater nourishment to the human body, so that from their consumption there results a greater surplus available for seminal matter, which when abundant becomes a great incentive to lust."
- African Bone Tools Dispute Key Idea About Human Evolution National Geographic News article.
- Hu Y, Shang H, Tong H, Nehlich O, Liu W, Zhao C, Yu J, Wang C, Trinkaus E and Richards M (2009) "Stable isotope dietary analysis of the Tianyuan 1 early modern human" Proceedings of the National Academy of Sciences, 106 (27): 10971–10974.
- First direct evidence of substantial fish consumption by early modern humans in China PhysOrg.com, 6 July 2009.
- Coastal Shell Middens and Agricultural Origins in Atlantic Europe.
- Borowski, Oded (2003). Daily Life in Biblical Times. pp. 68–69.
- Macdonald, Nathan (2008). What Did the Ancient Israelites Eat?. pp. 37–38.
- Singer, Isidore; Adler, Cyrus; et al, ed. (1901–1906). "Food - Biblical Data". The Jewish Encyclopedia 5. New York: Funk and Wagnalls. pp. 430–431.
- (Zephaniah 1:10, Nehemiah 3:3, Nehemiah 12:39, Nehemiah 13:16, 2 Chronicles 33:14)
- Marks, Gil (2010). Encyclopedia of Jewish Food. p. 198.
- Dalby, p.67.
- Book 10: Halieus of the Roman Apicius, c. 500 AD. Translated by Walter M. Hill, 1936.
- Image of fishing illustrated in a Roman mosaic.
- Moray Encyclopædia Britannica Online, 2012. Accessed 17 May 2012.
- Beveridge MCM and Little DC (2008) "The history of aquaculture in traditional societies" In: Barry A (ed) Ecological Aquaculture: The Evolution of the Blue Revolution, p. 9, John Wiley & Sons. ISBN 9781405148665.
- Parker R (2000) Aquaculture science Page 6. Delmar Thomson Learning.
- History of aquaculture Retrieved 2 August 2009.
- 范蠡 [Fan Li]. 《養魚經》 [Yǎngyú Jīng, "The Fish-Breeding Classic"]. 473 BC. (Chinese)
- Nash CE and Novotny AJ (1995) Production of aquatic animals Page 22, Elsevier Science Ltd. ISBN 0-444-81950-9.
- FAO (1983) Freshwater aquaculture development in China Page 19, Fisheries technical paper 215, Rome. ISBN 92-5-101113-3.
- Fisheries of Americas Retrieved 2 August 2009.
- "Sushi History".
- "The History of SUSHI".
- Food reference
- The rather contrived classification of barnacle geese as fish was not universally accepted. The Holy Roman Emperor Frederick II examined barnacles and noted no evidence of any bird-like embryo in them, and the secretary of Leo of Rozmital wrote a very skeptical account of his reaction to being served barnacle goose at a fish-day dinner in 1456; Henisch (1976), pp. 48–49.
- Melitta Weiss Adamson, "The Greco-Roman World" in Regional Cuisines of Medieval Europe, p. 11.
- Adamson (2004), pp. 45–39.
- Yoreh De'ah - Shulchan-Aruch Chapter 1, torah.org. Retrieved 17 June 2012.
- "All that are in the waters: all that... hath not fins and scales ye may not eat" (Deuteronomy 14:9-10) and are "an abomination" (Leviticus 11:9-12).
- "'''Summa Theologica''' Q147a8". Newadvent.org. Retrieved 27 August 2010.
- Adamson, Melitta Weiss, Food in Medieval Times. Greenwood Press, Westport, CT. 2004. ISBN 0-313-32147-7
- Dalby, A. Siren Feasts: A History of Food and Gastronomy in Greece. London: Routledge, 1996. ISBN 0-415-15657-2 |
Jeremiah Horrocks was born in 1619 in Toxteth, Liverpool. His father was a watchmaker and the family were deeply religious Protestant Puritans. Jeremiah was a brilliant scholar and won a place at Cambridge University at the age of 14. By then he was already well-versed in Greek, Latin and the Scriptures. He moved to Much Hoole, Lancashire, where he was a curate at St Michael's, the local church.
Before reading his story, it is important to understand the belief system of the 17th Century. The majority of people believed in alchemy, magic and witchcraft. There were no laboratories and there was no organised scientific research. The educated classes knew that the Earth was round, but knew hardly anything about astronomy. This was (Protestant) England: it was not considered heretical to believe that the Earth orbited the sun, as it was in (Roman Catholic) Italy, but it was still a radical belief.
Observations and Conclusions
Jeremiah had read everything he could about the work of Johannes Kepler (1571 - 1630), the German astronomer who established the laws of planetary motion. Kepler had correctly predicted the Venus transit of 1631, but there is no record of anyone who witnessed it. Kepler stated that the next transit would be in 1761. Jeremiah, however, disagreed with this calculation. He worked out that Venusian transits occur in pairs, eight years apart, then either 105 years or 121 years later. This made the next one due in 1639, and it would be visible from Europe.
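Horrocks's pattern can be played forward as a rough calculation. The sketch below uses gaps of 8, 121.5, 8 and 105.5 years; the half-year offsets are an approximation introduced here purely to keep the calendar years aligned, and the starting epoch is Kepler's predicted December 1631 transit.

```python
# Rough reconstruction of the Venus transit pattern: pairs 8 years apart,
# separated alternately by about 121.5 and 105.5 years. Illustrative only.
year = 1631.5                     # Kepler's predicted transit (December 1631)
gaps = [8, 121.5, 8, 105.5]
years = [year]
for i in range(9):
    year += gaps[i % len(gaps)]
    years.append(year)
print([int(y) for y in years])
# -> [1631, 1639, 1761, 1769, 1874, 1882, 2004, 2012, 2117, 2125]
```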
For a young man of 20 to disagree with a published, famous astronomer, and then be proven correct, was extraordinary. Observing the transit he'd predicted must have given him great satisfaction. He was able to use his data to calculate the sizes of the sun and the other planets in our solar system. He found that the sun was in fact gigantic, with a volume more than a million times that of the Earth. Jeremiah showed that the planets Jupiter and Saturn were giants, totally opposing the biblical view that our own planet must be the grandest in creation. This must have caused consternation to the deeply religious young man. Pioneers like Giordano Bruno (1548 - 1600) had been burned at the stake for that very 'heresy'; Bruno had met his fate in Italy less than two decades before Horrocks was born.
The biblical account of the universe still dominated even well-educated minds. What we know now, that our Sun is a star almost a million miles wide, attended by planets (and their own moons) which move in their own orbits, was almost unimaginable, and only the finest minds dared to think it.
In 1638, Jeremiah Horrocks had confirmed Galileo Galilei's discovery of Jupiter's four large moons. He found out what comets were: visitors to our Solar System with erratic orbits. He also worked out that planets shine because of reflected sunlight. He was the first to begin taking a series of tidal observations (before he was 20 years old), and he had formulated his own theory of gravity decades before Newton's.
The Venus Transit
On Sunday 24 November, 1639, all but two of the members of the Earth's population had no idea of the momentous event which was about to happen: the first transit of Venus across the face of the sun since the telescope was invented. This transit can only occur when the Earth, Venus and the sun are exactly aligned. One of those two men was Jeremiah Horrocks. He had already written to his friend William Crabtree, urging him not to miss the once-in-a-lifetime event.
Jeremiah's observations of the transit began at 3.15pm. He had attended his duties at the church and the service had ended just before 3pm. He had set up his equipment earlier in the day so that when he returned for the alignment, he only had minor adjustments to make to his telescope. He focused the sun's disc onto a piece of card and traced around it. Then he saw the small black spot (Venus) starting to edge across the solar disc.
'I watched carefully on the 24th from sunrise to nine o'clock, and from a little before ten until noon, and at one in the afternoon, being called away in the intervals by business of the highest importance which, for these ornamental pursuits, I could not with propriety neglect. But during all this time I saw nothing in the sun except a small and common spot. This evidently had nothing to do with Venus. About fifteen minutes past three in the afternoon, when I was again at liberty to continue my labours, the clouds, as if by divine interposition, were entirely dispersed, and I was once more invited to the grateful task of repeating my observations. I then beheld a most agreeable spectacle, the object of my sanguine wishes, a spot of unusual magnitude and of a perfectly circular shape, which had already fully centred upon the sun's disc on the left, so that the limbs of the sun and Venus precisely coincided, forming an angle of contact. Not doubting that this was really the shadow of the planet, I immediately applied myself sedulously to observe it'.
Jeremiah was overjoyed. He drew the position of the black spot on the card and watched avidly over the next few hours, tracing his observations and timing each one. Thirty miles away, Jeremiah's friend William Crabtree had also made preparations to view the transit, but it was too cloudy. However, just before sunset, the clouds cleared and he was able to witness the spectacle. He wrote that he was 'rapt with joy' and confessed to a 'womanly display of emotion.'
An untimely end and legacy
A year and two months after his historic prediction and observation, Jeremiah Horrocks died. However, his work lived on, and a scientific paper of his was published on the continent in 1662. This brought him to the attention of the Royal Society, a collection of the finest scientific minds in Britain. Horrocks's paper revolutionised man's understanding of the solar system. His conclusions were tested by members such as Sir Christopher Wren and found to be sound.
Thomas Hearn, a fellow astronomer, described Jeremiah Horrocks thus: 'A strange, unaccountable genius'.
Sir John Herschel called him 'the pride and boast of British astronomy.'
Sir Isaac Newton (1642 - 1727), who was born the year after Horrocks died, said:
If I have seen further than others before me, it is because I have stood on the shoulders of giants.
Jeremiah Horrocks is now known as the 'Father of British Astronomy'; he had a tragically short life (he was just 22 when he died) but he left a legacy which lives on today. Thanks to him, Venus transits can be predicted with accuracy. The most famous took place in 1769, when the British explorer Captain James Cook sailed to Tahiti in order to observe the transit. |
These cool science experiments create "funny bones" that can bend. Bones have two main substances: calcium carbonate and collagen. Calcium carbonate (calcite) gives our bones strength. Collagen is a light and flexible material. These two materials combine to give us and other land animals strong bones that are lightweight.
When bones are placed in an acid, a chemical reaction occurs. Tiny bubbles of carbon dioxide form as the acid reacts with the calcium carbonate. The collagen in the bones is left behind as the calcium carbonate dissolves. In this experiment you will be testing a variety of chicken bones to see how they react to being left in a bowl of vinegar for up to 10 days.
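The reaction behind those bubbles is the familiar one between calcium carbonate and the acetic acid in vinegar, shown here as a balanced equation for reference:

\[
\mathrm{CaCO_3} + 2\,\mathrm{CH_3COOH} \rightarrow \mathrm{Ca(CH_3COO)_2} + \mathrm{H_2O} + \mathrm{CO_2}\uparrow
\]

The calcium acetate stays dissolved in the vinegar, the carbon dioxide escapes as the tiny bubbles, and the flexible collagen is what remains of the bone.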
Easy Science Experiments, Pumice & Obsidian Find out the difference between pumice and obsidian and why one will float and the other will sink like a rock.
Volcano Science Experiment, Cinder Cones Have you ever wondered how volcanoes get their shape? Find out about cinder cones in this fun activity.
Gravity Experiments, Balancing Acts Learn how to balance a potato, fork and pencil on the edge of a table.
Elementary Science Experiment, Earthquake Waves Create your own wave box that makes paper clips jump when earthquake waves pass through them.
Gravity Experiment, Topsy Turvy Plants Have fun growing tomatoes upside down in this fun experiment.
Cool Science Experiments, Funny Bones In these cool science experiments find out how you can create "funny bones" that can bend.
Earth Science The links on this page include information on the Earth, the Rock Cycle, Volcanoes, experiments, activities and much more.
Check out Myrna Martin's award winning textbooks, e-books, videos and rock sets. The Ring of Fire Science Bookstore covers a wide range of earth science topics. Click here to browse. |
Being the target of bullying is a serious problem for many of today’s teens. It…
Bullies Can Have Lifelong Impact on Their Victims
Social anxiety disorder is a form of medically serious anxiety that triggers a number of disruptive or dysfunctional symptoms in social situations that most people find manageable, harmless, or even pleasurable. Mental health experts also sometimes refer to the condition by the alternate name social phobia. Numerous sources indicate that people who experience bullying during childhood or adolescence have increased risks for developing social anxiety disorder. In addition, according to the results of a study published in 2010 in the journal Psychological Science, some people bully others in response to emotional pressures caused by the presence of social anxiety.
Social Anxiety Disorder Basics
Some people only develop medically serious social anxiety during specific types of social situations, such as public speaking or meeting unfamiliar people, the National Institute of Mental Health explains. However, some people develop a more widespread form of anxiety that appears in a range of common or everyday social situations. Whether the trigger comes from a single situation or multiple situations, people affected by the condition develop symptoms that frequently include an inability to speak clearly, unusually high sweat production, facial flushing, nausea, and uncontrolled muscle tremors. Generally speaking, teenagers and adults with social anxiety disorder know that their responses to stressful situations don’t make logical sense; however, despite this knowledge, they feel unable to control their behaviors. As a result of this lack of control, affected individuals often live in extreme fear of embarrassment, humiliation, or ridicule.
Bullying is a general term for a group of behaviors designed to intimidate others and establish physical or psychological dominance. Specific forms of bullying behavior include physical intimidation or assault (punching, kicking, pushing, etc.), verbal intimidation or assault (teasing, threatening, making hateful statements, etc.) and social intimidation (purposeful exclusion of others, making others the target of rumors, etc.). Some bullying takes place in person in physical settings such as lunchrooms or other school locations, neighborhoods or public recreation facilities. However, bullying can also take place in virtual form through means such as text messages or videos sent between cell phones, messages or videos posted on Facebook or other online media sites, or private emails. Most people associate bullying with teenagers and younger children; however, workplace bullying is also a fairly common phenomenon.
Bullying as a Cause of Social Anxiety Disorder
In a study published in 2003 in the journal Cognitive Behavioral Therapy, a team of Canadian researchers examined the effects of exposure to bullying on a person’s long-term chances of developing an anxiety-related condition such as social anxiety disorder, panic disorder or obsessive-compulsive disorder (OCD). The authors of the study concluded that more than 90 percent of the study participants diagnosed with social anxiety disorder (social phobia was the term used by the authors) self-reported a personal history that included extreme exposure to teasing from others. By contrast, only half of the participants with OCD reported an extreme teasing history, while only one-third of the participants with panic disorder reported such a history.
Findings from numerous studies indicate that certain types of bullying are more likely than others to produce social anxiety disorder or some other form of anxiety-related condition. For instance, the authors of a study published in 2003 in the Child Study Journal concluded that teenagers exposed to physical bullying of themselves or their peer groups have unusually high risks for developing social anxiety. In addition, teenage males bullied because of their sexual orientation also have increased chances of developing anxiety-related symptoms at some point in the near or far future.
Social Anxiety Disorder as a Cause of Bullying
In the study published in Psychological Science, a team of researchers from George Mason University examined the ways in which various people respond to the emotional stresses caused by the presence of social anxiety disorder. These researchers concluded that, unlike most people affected by the disorder, certain individuals offset their feelings of anxiety by acting aggressively and impulsively toward others. One of the potential manifestations of this aggressive, impulsive behavior is bullying. The authors of the study noted the fact that, in terms of their anxiety-related symptoms, people with social anxiety who become aggressive and start bullying others do not appear to differ in any notable way from people with social anxiety who don’t act aggressively. Instead, they have similar numbers of symptoms, types of symptoms, symptom severity, and co-existing mental health problems (typically mood disorders or other anxiety disorders). |
The English language is a language with many complexities, including homophones, synonyms, 'silent' letters and a rich set of sounds. For children to develop the skills necessary to decode such a language, they must first learn the phonics, digraphs and blends of each letter and understand how they 'stitch together' to form a word, and then apply reasoning in their decision as to whether or not the word 'makes sense'.
This is an integral part of learning how to read. However, there are some words that a child (or adult for that matter) simply cannot 'sound out'. This is because when the letters are combined to form a word they do not make their usual sound or blend.
Therefore, these particular words need to be recognised by sight and are commonly referred to as 'sight words'. There are also some words that appear in children's literature more frequently than others do. These high frequency words are also known as 'sight words'.
There are several versions of Sight Word lists circulating around the web that aim to ease a child into reading in a language that has in excess of 1 million words. Our Sight Word App games use a combination of the 220 Dolch Sight Words (developed by Edward William Dolch, PhD, in 1948) plus many of the 95 Dolch Noun words (also developed by Edward William Dolch) as well as other high frequency words.
Knowing 'sight words' will enhance a child's confidence in literacy and therefore start a lifelong journey and love of reading. Our Sight Word App games deliver these high frequency words to children in a new and exciting way via a rich learning experience by combining a variety of senses: sight, sound AND touch, which can only inspire positive results.
Test it for yourself and let us know how your little person goes. |
2011-10-20 1:13:19 Jack
Complete the sentences by circling the correct option(a,b,c,d).
1. If you don’t know the receiver’s name and gender, you’d better write:
2. If you’d like to write a formal business letter to Adam Smith, you should write:
3. You’re writing a formal complaint letter to an airline company, you would say:
4. You have to reply to a letter to explain that you can’t attend the meeting next week.
Which one is the most appropriate?
5. For business and formal letters, which one is the best form of the
6. If you need to take a decision, you’ll say____?
7. At the end of presentation, you will say:
8. Eric has arrived late for his work. He wants to make a brief explanation to his boss:
9. Would you like something to eat before we start the meeting?
10. When Ben heard that his colleague hadn’t known the news, he would say: |
Staphylococcus aureus is the most dangerous of all of the many common staphylococcal bacteria.
Staphylococcus aureus is present in the nose of adults (temporarily in 60% and permanently in 20 to 30%) and sometimes on the skin. People who have the bacteria but do not have any symptoms caused by the bacteria are called carriers. People most likely to be carriers include those whose skin is repeatedly punctured or broken, such as the following:
People can move the bacteria from their nose to other body parts with their hands, sometimes leading to infection. Carriers can develop infection if they have surgery, are treated with hemodialysis or chronic ambulatory peritoneal dialysis, or have AIDS.
The bacteria can spread from person to person by direct contact, through contaminated objects (such as telephones, door knobs, television remote controls, or elevator buttons), or, less often, by inhalation of infected droplets dispersed by sneezing or coughing.
Staphylococcus aureus infections range from mild to life threatening. The bacteria tend to infect the skin (see Bacterial Skin Infections), often causing abscesses. However, the bacteria can travel through the bloodstream (causing bacteremia) and infect almost any site in the body, particularly heart valves (endocarditis—see Infective Endocarditis) and bones (osteomyelitis—see Osteomyelitis). The bacteria also tend to accumulate on medical devices in the body, such as artificial heart valves or joints, heart pacemakers, and tubes (catheters) inserted through the skin into blood vessels.
Certain staphylococcal infections are more likely in certain situations:
There are many strains of Staphylococcus aureus. Some strains produce toxins that can cause the symptoms of staphylococcal food poisoning (see Staphylococcal Food Poisoning), toxic shock syndrome (see Toxic Shock Syndrome), and scalded skin syndrome (see Staphylococcal Scalded Skin Syndrome).
Many strains have developed resistance to the effects of antibiotics. If carriers take antibiotics, the antibiotics kill the strains that are not resistant, leaving mainly the resistant strains. These bacteria may then multiply, and if they cause infection, the infection is more difficult to treat. Whether the bacteria are resistant and which antibiotics they resist often depend on where people got the infection: in a hospital or other health care facility or outside of such a facility (in the community).
Methicillin-Resistant Staphylococcus aureus (MRSA):
Because antibiotics are widely used in hospitals, hospital staff members commonly carry resistant strains. When people are infected in a health care facility, the bacteria are usually resistant to several types of antibiotics, including all antibiotics that are related to penicillin (called beta-lactam antibiotics). Strains of bacteria that are resistant to beta-lactam antibiotics are called methicillin-resistant Staphylococcus aureus (MRSA). MRSA strains are common if infection is acquired in a health care facility, and more and more infections acquired in the community, including mild abscesses and skin infections, are caused by MRSA strains.
Skin infections due to Staphylococcus aureus can include the following:
All staphylococcal skin infections are very contagious.
Breast infections (mastitis), which may include cellulitis and abscesses, can develop 1 to 4 weeks after delivery. The area around the nipple is red and painful. Abscesses often release large numbers of bacteria into the mother's milk. The bacteria may then infect the nursing infant.
Pneumonia often causes a high fever, shortness of breath, and a cough with sputum that may be tinged with blood. Lung abscesses may develop. They sometimes enlarge and involve the membranes around the lungs (causing pleurisy) and sometimes cause pus to collect (called an empyema). These problems make breathing even more difficult.
Bloodstream infection is a common cause of death in people with severe burns. Symptoms typically include a persistent high fever and sometimes shock.
Endocarditis can quickly damage heart valves, leading to heart failure (with difficulty breathing) and possibly death.
Osteomyelitis causes chills, fever, and bone pain. The skin and soft tissues over the infected bone become red and swollen, and fluid may accumulate in nearby joints.
Skin infections are usually diagnosed based on their appearance. Other infections require samples of blood or infected fluids, which are sent to a laboratory to grow (culture) the bacteria. Laboratory results confirm the diagnosis and determine which antibiotics can kill the staphylococci (called susceptibility testing).
If a doctor suspects osteomyelitis, x-rays, computed tomography (CT), magnetic resonance imaging (MRI), or a combination is also done. These tests can show where the damage is and help determine how severe it is.
People can help prevent the spread of these bacteria by always thoroughly washing their hands with soap and water or with antibacterial hand sanitizer gels. The bacteria can be eliminated from the nose by applying the antibiotic mupirocin inside the nostrils. However, because overusing mupirocin can lead to mupirocin resistance, this antibiotic is used only when people are likely to get an infection. For example, it is given to people before certain operations or to people who live in a household in which the skin infection is spreading.
Infections due to Staphylococcus aureus are treated with antibiotics. Doctors try to determine whether the bacteria are resistant to antibiotics and, if so, to which antibiotics.
Infection that is acquired in a hospital is treated with antibiotics that are effective against methicillin-resistant Staphylococcus aureus (MRSA): ceftobiprole, vancomycin, linezolid, quinupristin plus dalfopristin, or daptomycin. If results of testing later indicate that the strain is susceptible to methicillin and the person is not allergic to penicillin, a drug related to methicillin, such as nafcillin, is used. Depending on how severe the infection is, antibiotics may be given for weeks.
MRSA infection can be acquired outside of a health care facility. The community-acquired MRSA strains are usually susceptible to other antibiotics, such as trimethoprim-sulfamethoxazole, clindamycin, minocycline, or doxycycline, as well as to the antibiotics used to treat MRSA infections acquired in the hospital. Mild skin infections due to MRSA, such as folliculitis, are usually treated with an ointment, such as one that contains bacitracin, neomycin, and polymyxin B (available without a prescription) or mupirocin (available by prescription only). If more than an ointment is required, antibiotics effective against MRSA are given by mouth or intravenously. Which antibiotic is used depends on the severity of the infection and the results of susceptibility testing.
If an infection involves bone or foreign material in the body (such as heart pacemakers, artificial heart valves and joints, and blood vessel grafts), rifampin is sometimes added to the antibiotic regimen. Usually, infected bone and foreign material have to be removed surgically to cure the infection.
Abscesses, if present, are usually drained.
Other Staphylococcal Infections
Staphylococcus aureus produces an enzyme called coagulase. Other species of staphylococci do not and thus are called coagulase-negative staphylococci. These bacteria normally reside in the skin of all healthy people.
These bacteria, although less dangerous than Staphylococcus aureus, can cause serious infections, usually when acquired in a hospital. The bacteria may infect catheters inserted through the skin into a blood vessel or implanted medical devices (such as pacemakers or artificial heart valves and joints).
These bacteria are often resistant to many antibiotics. Vancomycin, which is effective against many resistant bacteria, is used, sometimes with rifampin. Medical devices, if infected, often must be removed.
Last full review/revision September 2008 by Matthew E. Levison, MD |
View one or all of the films suggested on this page. Each has lessons that help to build understanding and empathy in students.
When teaching about the Shoah it is important to establish that it was a horrific but not historically unprecedented event. It is equally important to take students from the bleakness of the Holocaust into the richness of new life after the horror.
The following clip is a montage made especially for schools and teacher resources are linked beneath.
Teacher Resources. Click on the red resources tab on this site to locate a number of effective resources |
Figure 1: Artist’s impression of an Earth-like planet. Credit: Scott Richard.
In the search for Earth-like planets around other stars, the presence of life on these worlds can be determined by looking for various biomarker gases in the planet's atmosphere. Two promising biomarker gases are oxygen, which is produced almost entirely by photosynthesis on Earth, and ozone, which is produced in the Earth's stratosphere when ultraviolet (UV) light splits oxygen molecules into individual oxygen atoms, which then combine with other oxygen molecules to form ozone. Ozone is a good indicator of photosynthetic life because even a small amount of atmospheric oxygen can result in a significant concentration of ozone.
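For reference, the ozone-forming chemistry described here is the heart of the Chapman mechanism (M is any third molecule, such as N2 or O2, that carries away excess energy; the last step, absorption of UV by ozone, is what heats the stratosphere, as discussed below):

\[
\mathrm{O_2} + h\nu \rightarrow \mathrm{O} + \mathrm{O}
\]
\[
\mathrm{O} + \mathrm{O_2} + \mathrm{M} \rightarrow \mathrm{O_3} + \mathrm{M}
\]
\[
\mathrm{O_3} + h\nu \rightarrow \mathrm{O_2} + \mathrm{O}
\]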
Segura et al. (2003) developed photochemical and radiative/convective atmospheric models of Earth-like planets around 3 different types of stars: F-type, G-type (Sun) and K-type. This is to see how an Earth-like planet might differ from a planet circling our Sun. The models assume a present-day Earth-analogue planet with an atmospheric oxygen concentration at the present atmospheric level (PAL). Also, the planet’s distance from its host star is scaled according to the star’s luminosity such that the planet’s average surface temperature is 288 K, which is similar to the average surface temperature of present-day Earth.
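The luminosity scaling amounts to placing the planet where it intercepts roughly the same stellar flux as Earth does from the Sun. As a first-order sketch (an assumption for illustration; the published models also treat the stars' different spectra and the resulting atmospheric feedbacks):

\[
d = \sqrt{\frac{L_\ast}{L_\odot}}\ \mathrm{AU}
\]

so a planet around a more luminous F-type star orbits farther out than 1 AU, and one around a fainter K-type star orbits closer in.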
Figure 2: A comparison of the types of stars from M-type to B-type.
On Earth, the stratosphere is a layer of the atmosphere where temperature increases as altitude increases, reaching a peak at 45 km altitude. This heating is caused by absorption of UV flux by ozone. In the atmospheric models by Segura et al. (2003), stratospheric temperatures are warmest for the planet around an F-type star and coolest for the planet around a K-type star (Figure 3). This is because an F-type star is hotter and produces a higher UV flux, resulting in the greater absorption of UV flux by ozone. Furthermore, the higher UV flux also causes the planet around the F-type star to have a thicker ozone layer (Figure 4).
Figure 3: Atmospheric temperature profile results for an Earth-like planet with 1 PAL of oxygen and around different types of stars. (Segura et al., 2003)
Figure 4: Atmospheric ozone concentration results for an Earth-like planet with 1 PAL of oxygen and around different types of stars. (Segura et al., 2003)
The amount of UV flux reaching a planet's surface is of concern due to its ability to damage cells and even DNA. On Earth, most damage caused by UV radiation is from UV-A (315-400 nm) and UV-B (280-315 nm), with UV-B being somewhat more dangerous. Fortunately, almost no UV-C (< 280 nm) penetrates the Earth's atmosphere. UV-C is the most energetic and most dangerous form of UV radiation. The atmospheric models for an Earth-like planet with 1 PAL of oxygen show that, in the UV range of 200 to 400 nm, the surface of the planet around a K-type star receives 0.44 times the UV flux that the Earth's surface receives, while the surface of the planet around an F-type star receives 1.61 times the UV flux.
For UV-B alone, the planet around a K-type star receives 0.43 times the amount Earth receives, while the planet around an F-type star receives 0.68 times the amount. This shows that planets around K-type and F-type stars exhibit significantly better UV protection than Earth at 1 PAL of oxygen, despite an F-type star being hotter than our Sun. For a K-type star, it is simply due to it being cooler and emitting less UV flux. For the F-type star, it is a consequence of its higher UV flux which results in the formation of a much thicker ozone layer. Planets around all 3 types of stars also show negligible amounts of UV-C reaching the surface.
Figure 5: Incoming and surface UV fluxes for Earth-like planets with different UV fluxes and orbiting around different stars. (Segura et al., 2003)
Figure 6: Normalized surface UV dose rates relative to present-day Earth for skin cancer (erythema) and DNA damage on Earth-like planets with different UV fluxes and orbiting around different stars. (Segura et al., 2003)
The atmospheric models by Segura et al. (2003) can be extended to oxygen levels lower than 1 PAL. For 0.1 PAL of oxygen and in the UV range of 200 to 400 nm, the planet around a K-type star receives 0.45 times the UV flux that present-day Earth receives, while the planet around an F-type star receives 1.62 times the UV flux. For UV-B alone, the values are 0.61 (planet around K-type star) and 0.85 (planet around F-type star) times the UV flux that present-day Earth gets. It is also clear that below ~0.01 PAL of oxygen, the ozone layer becomes too thin to provide significant UV shielding regardless of the type of star the planet circles around.
Segura et al., “Ozone Concentrations and Ultraviolet Fluxes on Earth-Like Planets Around Other Stars”, Astrobiology Volume 3, Number 4, 2003 |
How to Make a Human Antenna @ Discovery News…
In the setup, a participant wore a backpack containing a laptop and a data acquisition device connected through a wire to a conductive pad on the back of the participant’s neck. The pad measured the voltages picked up by participants, who performed specific gestures around light switches. Software in the laptop generated positioning instructions and at each switch, the gesture order was randomized to eliminate bias.
The experiments showed that electromagnetic noise is so predictable that it can be used to figure out where a person is standing, what the person is doing, and even where a hand is placed on a wall. The team used a simple sensor that was essentially just a piece of metal, but Morris said that ultimately a sensor could be placed in the user’s hand or anywhere else that the radio signals being picked up by the body can be gathered.
“Our bodies, it turns out, are actually really good and relatively colorful antennas,” Morris said. The team presented their results earlier this week in Vancouver at the ACM CHI Conference on Human Factors in Computing Systems.
The researchers learned that in a typical house, the electromagnetic noise changes noticeably from room to room because of the various appliances in them. Then they applied artificial intelligence to the data.
“The noise is different enough in those different environments that the computer can actually use machine learning to tell the difference,” Morris said.
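As a minimal sketch of the idea (not the team's actual system; the class, feature layout and labels here are hypothetical), a nearest-centroid classifier could map a vector of noise features, such as amplitudes in a few frequency bands, to the most likely room or gesture:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Minimal nearest-centroid classifier over electromagnetic-noise feature vectors.
    public class EmNoiseClassifier {
        private final Map<String, double[]> centroids = new HashMap<>();

        // Average the training vectors for each label to form one centroid per label.
        public void train(Map<String, List<double[]>> samplesByLabel) {
            samplesByLabel.forEach((label, samples) -> {
                double[] centroid = new double[samples.get(0).length];
                for (double[] s : samples)
                    for (int i = 0; i < s.length; i++) centroid[i] += s[i] / samples.size();
                centroids.put(label, centroid);
            });
        }

        // Return the label whose centroid is closest (in Euclidean distance) to the measurement.
        public String classify(double[] features) {
            String best = null;
            double bestDistance = Double.MAX_VALUE;
            for (Map.Entry<String, double[]> entry : centroids.entrySet()) {
                double distance = 0;
                for (int i = 0; i < features.length; i++) {
                    double diff = features[i] - entry.getValue()[i];
                    distance += diff * diff;
                }
                if (distance < bestDistance) { bestDistance = distance; best = entry.getKey(); }
            }
            return best;
        }
    }

A real system would use richer features and a stronger learning algorithm, but the underlying idea is the same: each room, appliance and hand position leaves a distinguishable signature in the noise the body picks up.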
While the entire state has only 20 or so species of bumble bees, there are 400-500 different species of native wild bees in Wisconsin. 85% of these species are solitary bees. Some are tiny, some are black, some are green. They build their nests in small cavities and cracks or underground, and new bees emerge the following spring from the eggs laid the previous summer. These solitary bees cannot fly very far, and need about 30 trips to collect the pollen needed for one egg.
We paid attention to bumble bees working up the stalks of wild white indigo. The bumble bee collects pollen by pressing the keel of the flower downward and then rubbing its hind legs on the exposed anthers. As the bumble bee works its way up the plant stalk, the pollen from the male blossoms at the top of the plant is deposited on the bee and then transported to the next white indigo plant it visits.
While bumble bees often prefer certain flowers, such as columbine or Monarda or white indigo, they develop deft approaches for extracting the nectar from flowers with very different architecture. For example, they will approach a wild columbine by thrusting their head into the spur of the hanging flower. A rusty-patched bumble bee may perforate the top of the spur and reach the nectar the easy way. On an oxeye sunflower, bumble bees forage across the flower head to gather pollen from the many tiny flowers in bloom. On a yellow coneflower, culver’s root, or catnip they will work their way around the flower cone or stalk, visiting all the tiny blossoms.
Thank you Susan, for teaching us. What a great show and tell session we had in the Preserve. To learn more about bumble bees visit https://beespotter.org/topics/key/ |
One of the most common theories in the field of decision making is expected utility theory (EU). According to this theory, people usually make their decisions by weighing the severity and likelihood of the possible outcomes of different alternatives. This information is integrated through some type of expectation-based calculus (a cognitive activity) which enables us to make a decision. In this theory, psychological processes and the decision maker’s emotional state are not taken into account as inputs to the calculus.
Emotions as an information source
In “Risk as Feelings”, Loewenstein, Weber and Hsee argue that these processes of decision making include ‘anticipatory emotions’ and ‘anticipated emotions’:
“anticipatory emotions are immediate visceral reactions (fear, anxiety, dread) to risk and uncertainties”; “anticipated emotions are typically not experienced in the immediate present but are expected to be experienced in the future” (disappointment or regret). Both types of emotions serve as an additional source of information.
For example, research shows that happy decision-makers are reluctant to gamble. The fact that a person is happy would make him or her decide against gambling, since he or she would not want to undermine the happy feeling. This can be looked upon as "mood maintenance".
According to the information hypothesis, feelings during the decision process affect people's choices when those feelings are experienced as reactions to the imminent decision. If feelings are attributed to a source irrelevant to the decision at hand, their impact is reduced or eliminated.
Mellers and McGraw (2001) proposed that anticipated pleasure is an emotion generated during the decision-making process and taken into account as an additional information source. They argued that the decision maker estimates how he or she will feel when proved right or wrong as a result of choosing one of the alternatives. These estimated feelings are “averaged” and compared across the different alternatives. Although this resembles expected utility theory (EU), the two can lead to different choices.
Implications for decision-making processes
In research from 2001, Isen suggests that when a task is meaningful, interesting, or important to the decision maker, and he or she is in a good mood, the decision-making process will be more efficient and thorough. Under conditions of positive affect, people usually integrate material for decision making well and are less confused by a large set of variables. This allows decision makers to work faster; they either finish the task at hand more quickly or turn their attention to other important tasks. Positive affect generally leads people to be gracious, generous, and kind to others; to be socially responsible; and to take others’ perspectives better in interaction.
There are many myths about ADHD and there are many things not yet known, but what are the facts on ADD and ADHD? Here are the top 10 facts, things we do know, about the condition.
ADHD is the term currently used in scientific circles, replacing the term ADD, to refer to the condition of Attention Deficit Disorder. The key difference between the two is whether a person has hyperactivity. Those who do not have hyperactivity will still probably have a measure of restlessness. The two basic types of ADHD are the hyperactive version and the inattentive version.
Medical professionals say symptoms should show up before a person is seven years old to have the condition. Many things can mask the symptoms, so they may not be noticeable until later.
An interesting fact of ADD and ADHD is that more boys than girls are diagnosed with ADHD. There could be many reasons for this. One theory suggests that young girls are better at coping with their symptoms until they reach the middle school years when school work and life itself gets more complicated.
Is it ADHD or a lack of sleep? Studies have shown that 11-12 year-olds who get 6.5 hours of sleep a night, or less, can have symptoms much like ADHD. The lack of sleep slows brain function and can impair a person's ability to pay attention. It is important then to make sure the child is getting enough sleep before jumping to the ADHD diagnosis.
A diagnosis of ADHD involves a lot more than a person not being able to pay attention. To have a clinical diagnosis, some basic criteria must be met. A person must consistently show at least six of the listed symptoms of either inattentive ADHD or hyperactive ADHD. These symptoms must have persisted for at least six months -- whether or not they were there before the age of seven -- to a degree that is not considered age appropriate or to a level disruptive to the person's normal life. These criteria must also be met in more than one setting, such as school, home or work. A psychiatrist should make the diagnosis, and only when all of these criteria have been met.
Inattention is one of two main types of ADHD. This condition is marked by the following:
- Difficulty paying close attention and making careless mistakes.
- Difficulty staying on task.
- Gives the impression of not listening, whether really listening or not.
- Difficulty following through on instructions or finishing any task, and this is not related to failure to understand or any defiant behavior.
- Difficulty organizing.
- Avoiding or disliking having to give a sustained mental effort.
- Loses things needed to complete tasks.
- Forgetful of routine daily matters.
- Easily distracted.
Hyperactivity is the second of two main types of ADHD, and symptoms are:
- Fidgeting or squirming in one's seat.
- Failing to remain seated when remaining seated would be expected.
- Running around or climbing excessively in inappropriate situations.
- Always moving, on the go.
- Talking excessively.
- Answering before questions are finished.
- Difficulty waiting one's turn.
- Often interrupts others.
Adjusting to change
People with ADHD often have trouble adjusting to change, new situations or basic changes in the way things are done. They give the appearance of being inflexible. This is strange considering that most people with ADHD are always looking for something new and are almost always restless. Psychiatrists say people with ADHD need routine and order to help them get organized and handle their world effectively.
Symptoms are not very different for adults than for children. Many adults are not diagnosed because the diagnosis was not very common 20 or 30 years ago when they were children. Sometimes childhood cases were not severe enough to be noticeable until adulthood.
Treatment is largely the same for adults and children, involving medication and therapy. Developing coping skills is seen as useful, but medical officials do not see any advantage in trying programs designed to improve memory or attention skills.
About five percent of children have ADHD, and about half of them have the condition into adulthood.
ADHD is not always a bad thing. There are some positives, leading some to believe that treatment is not needed.
Creativity is one positive trait shared by people with ADHD. People with ADHD also tend to be charming and warmhearted toward others. A sense of humor and willingness to forgive others, and to keep trying when things go bad, are other positive traits common to people with ADHD. People with ADHD tend to be intuitive and sensitive to their environment. They can also be very enthusiastic and passionate once they do get focused on a task. While having difficulty getting focused is a problem, once they get locked in, it is hard to get them off the task at hand.
One oddity is that while people with ADHD do not often adjust well to change, they do seem more willing to take risks and try things that have not been tried before.
Adult ADHD, condition and treatment
This series looks at various aspects of Adult ADHD. The articles look into causes, conditions and possible solutions to the condition.
- How to Turn the Negative Symptoms of Adult ADHD into Positives
- Top 10 Facts about ADD and ADHD
- The ADHD Rating Scale & How is It Used in Diagnosis |
Collaborative Learning | Transformation Level | Math
- Students will design, build, and launch model rockets.
- Students get in groups to form a "company" and design a rocket.
- Students bring in the materials their group will need to build their rocket.
- Students have a "budget" of $1,000,000 and must write checks to pay for the materials that they use.
- Students can go online and type in the parameters, length, weight, and thrust of their rocket. They can set the angle and wind speed and watch the simulation of their launch.
- The "company" who designs the cheapest rocket that travels the furthest wins the "grand prize" of $10,000,000!
- Computers with Internet access
Grade Level: 6-8 |
Lemon trees (Citrus limon), native to Asia, grow best outside in U.S. Department of Agriculture plant hardiness zones 9 through 11 in full-sun locations. A standard lemon tree can reach up to 25 feet tall without pruning, while a dwarf lemon tree stays smaller at 8 to 12 feet high. Normal growth for lemon trees is 24 inches each year. A well-cared-for lemon tree lives 50 to 150 years. These ever-bearing citrus trees prefer to grow in coastal areas with cool summers and mild winters.
A poor growing location causes the gradual decline of a healthy young lemon tree. These citrus trees grow well in poor soil and even tolerate sandy or clay soil. The best pH level for lemon trees is 5.5 to 6.5, but they will grow in highly acidic and highly alkaline soil. One of the most important soil properties is good drainage. Lemon trees do not tolerate standing water. Plant the tree in a raised bed or large container if drainage in the area is poor. Overwatering is a common problem, and it eventually kills lemon trees through root rot. One major cause is planting the lemon tree near or in the lawn where frequent watering occurs.
Lemon trees suffer from garden pests just like all other fruit trees. Look for damaged leaves and fruits as well as insects. Some of the most common pests are California red scale, aphids, mites and mealybugs. If the problem is widespread throughout the garden, release beneficial insects such as ladybugs near the lemon tree to help with control. For localized infestations, use a strong spray of water to knock the pests out of the tree. For heavily infested areas, spray the lemon tree with horticultural oil in the early spring. Placing a ring of copper around the base of the lemon tree discourages snails and slugs. Wild rabbits can become a problem if they start feeding on the lemon trees, so place a wire cage around the tree trunk.
Lemon trees grow throughout the year without going dormant and therefore do not have the protection from cold damage that many other trees have. Temperatures down to 20 degrees Fahrenheit kill flowers, fruit, leaves and wood. Long periods of exposure to below-freezing temperatures kill the entire lemon tree. For short freezes, turn on holiday lights strung in the canopy and cover with blankets to keep the warmth next to the tree.
Lemon trees grow well in large containers when taken care of properly. Use a lightweight soil mixture when planting, and feed the tree monthly with a high-nitrogen fertilizer. Every four to five years, transplant the tree into a larger container and root prune the tree to control the vigorous growth. Move the lemon tree indoors during cold winters. The best location is within 6 feet of a sunny window and away from heat sources, which dry the soil out too quickly. Keep the humidity around the tree high by misting the tree with room temperature water and placing the container on pebble trays filled with water.
Java provides a run-time operator, instanceof, that compares an object with a class type. The instanceof operator tests an object's relationship to a specified class: it evaluates to true if the object or array is an instance of the specified type; otherwise it returns false. It can be used with arrays and objects, but not with primitive data types and values. Its general form is:
    object instanceof type
Let's have an example:
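A minimal example consistent with the output below might look like this (the class names and structure are assumed from the explanation that follows, so the original program may have differed slightly):

    class X { }
    class Y { }
    class Z extends X { }

    public class InstanceOfDemo {
        public static void main(String[] args) {
            X x = new X();      // a plain X instance
            Y y = new Y();      // an unrelated class
            X obj = new Z();    // an X reference pointing to a Z object

            if (x instanceof X) {
                System.out.println("x is an instance of X");
            }
            if (obj instanceof Z) {
                System.out.println("obj is an instance of Z");
            }
            // The following would not compile, because class Y is
            // unrelated to class X ("inconvertible types"):
            // if (y instanceof X) { ... }
        }
    }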
Output of the program:
x is an instance of X
obj is an instance of Z
In this example the class "Z" extends the class "X". The expression "x instanceof X" returns true because x refers to an instance of class "X", and "obj instanceof Z" returns true because obj, although declared with type "X", refers to an object of class "Z". On the other hand, the example will generate a compile-time error if the expression is written as "if (y instanceof X)", because the object "y" of class "Y" cannot be an instance of class "X".
A wave having a form which, if plotted, would be the same as that of a trigonometric sine or cosine function. The sine wave may be thought of as the projection on a plane of the path of a point moving around a circle at uniform speed. It is characteristic of one-dimensional vibrations and one-dimensional waves having no dissipation. See Harmonic motion
The sine wave is the basic function employed in harmonic analysis. It can be shown that any complex motion in a one-dimensional system can be described as the superposition of sine waves having certain amplitude and phase relationships. The technique for determining these relationships is known as Fourier analysis. See Wave equation, Wave motion
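For a periodic signal of period T, this superposition can be written as the Fourier series:

$$ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[ a_n \cos\!\left(\frac{2\pi n t}{T}\right) + b_n \sin\!\left(\frac{2\pi n t}{T}\right) \right], $$

where the coefficients a_n and b_n set the amplitude and, together, the phase of each harmonic.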
Sine wave: a continuous, uniform wave with a constant frequency and amplitude. See wavelength.
Inverse cosine function and its properties
Suppose the cosine of an angle is y, i.e. cos x = y; then the inverse cosine function can be written as cos⁻¹(y) = x. It is also symbolized as arccos(y). So if we have the cosine value and need to find the angle, the inverse cosine function is the solution. Note that the output values of all inverse trig functions are angles, whether expressed in degrees, grades or radians. So let's have a look at the graph of the inverse cosine function and list important properties like the domain and the range.
Graph of inverse cosine function with important properties
The graph of inverse cosine function is shown below. It looks like a reflected image of the cosine function graph.
Some important properties are listed below which will make the topic more clear.
The domain of the inverse cosine function is [-1, 1].
The range of the inverse cosine function is [0, π].
cos⁻¹(-y) = π - cos⁻¹(y).
cos⁻¹(y) = sec⁻¹(1/y).
Since the cosine of an angle is the ratio of the length of the adjacent side to that of the hypotenuse, the inverse cosine function cos⁻¹(adjacent/hypotenuse) gives the required angle. The derivative of the inverse cosine of x is -1/√(1 - x²), while its integral is x·cos⁻¹(x) - √(1 - x²) + C, where C is a constant. This information is enough to let us move to the examples.
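Written out in standard notation, the derivative and integral of the inverse cosine are:

$$ \frac{d}{dx}\cos^{-1}x = -\frac{1}{\sqrt{1-x^2}}, \qquad \int \cos^{-1}x \, dx = x\cos^{-1}x - \sqrt{1-x^2} + C. $$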
(1) Find the angle θ for a right triangle with length of adjacent side 3 cm and that of opposite side 4 cm:
By the Pythagorean theorem, the length of the hypotenuse is h = √(3² + 4²) = 5. The cosine of the angle is cos θ = adjacent / hypotenuse = 3/5. Taking the inverse cosine on both sides, θ = cos⁻¹(3/5) ≈ 53.13 degrees.
(2) Find cos⁻¹(-1/2):
We know that cos⁻¹(-y) = π - cos⁻¹(y). Hence cos⁻¹(-1/2) = π - cos⁻¹(1/2) = π - π/3 = 2π/3 rad.
The inverse cosine function can be applied directly in solving integration as well as differentiation problems. It also finds wide application in space research and engineering.
Those unfamiliar with the prehistory of North America have a general perception of the cultures of the continent that includes Native Americans living in tipis, wearing feathered headdresses and buckskin clothing, and following migratory bison herds on the Great Plains. Although these practices were part of some Native American societies, they do not adequately represent the diversity of cultural practices by the overwhelming majority of Native American peoples. Media misrepresentations shaped by television and movies, along with a focus on select regions and periods in the history of the United States, have produced an extremely distorted view of the indigenous inhabitants of the continent and their cultures. The indigenous populations of North America created impressive societies, engaged in trade, and had varied economic, social, and religious cultures. Over the past century, archaeological and ethnological research throughout all regions of North America has revealed much about the indigenous peoples of the continent. This book examines the long and complex history of human occupation in North America, covering the distinct culture areas of the Arctic, California, Eastern Woodlands, Great Basin, Great Plains, Northwest Coast, Plateau, Southwest, and Subarctic. Complete with maps, a chronology that spans the history from 11,000 B.C. to A.D. 1850, an introductory essay, more than 700 dictionary entries, and a comprehensive bibliography, this reference is a valuable tool for scholars and students. An appendix of museums that have North American collections and a listing of archaeological sites that allow tours by the public also make this an accessible guide to the interested lay reader and high school student.
A to Z of Early North America |
Scientists have recently uncovered evidence that in the year 775 the Earth was pummeled with a giant burst of radiation. The radiation left evidence behind on the earth in tree rings that formed during the year, which show high levels of radioactive compounds. The scientists say that evidence suggests the gamma ray burst was very short.
The scientists investigating the phenomenon note that our sun didn't cause the burst of radiation. Even if a strong solar flare had occurred 1,200 years ago, the researchers say there wouldn't have been enough radiation produced to leave behind the sort of evidence discovered on Earth. Scientists also say that if the radioactive burst had come from the sun, it would have created very bright auroras, and there is no historical record of that happening.
Astronomer Phil Plait has another suggestion for what could have caused the massive burst of radiation: a supernova. The problem with that theory, according to the astronomer, is that in order to generate the levels of carbon and beryllium discovered, the supernova would have had to be less than 1,000 light years away from the Earth. He notes that such a nearby supernova would have been so bright it would have been seen in daylight.
There are no historical records of such a supernova occurring. Another team of scientists believes that the most likely scenario for the gamma ray burst is that it was the result of a collision between a pair of neutron stars or a neutron star and a black hole. The scientists say that such a collision would produce an extremely short gamma burst while producing no light. Researchers say that they know collisions of this sort have happened in distant galaxies in the past but such events are incredibly rare. Another member of the research team says that if the burst had been closer to the earth it could have caused significant harm to life on our planet. |
Two recent studies have concluded that multilingual exposure improves not only a child’s cognitive skills, but also their social abilities.
The first study, conducted by Dr. Katherine Kinzler’s lab, found that multilingual children were better at communication than children who only spoke one language. In their experiment, multilingual children paid attention not only to what the adults were saying, but also to the context and the perspective of the interlocutor. Interestingly, they also found that “being raised in an environment in which multiple languages are spoken, rather than being bilingual per se, is the driving factor”.
In the second experiment, which was a follow-up study, they examined the effects of multilingual exposure on children who could barely speak (14- to 16-month-old babies). In this follow-up, led by Professor Liberman, they concluded that children raised in multilingual environments were more aware of the importance of the adult’s perspective for communication, even when their exposure to the second language was minimal.
With these results in hand, these researchers have argued that “Multilingual exposure, it seems, facilitates the basic skills of interpersonal understanding”.
To read the original article visit: https://www.nytimes.com/2016/03/13/opinion/sunday/the-superior-social-skills-of-bilinguals.html |
Why do you think population changes?
Population changes take place due to a number of factors, such as food and resource availability, predation and environmental factors. When the availability of food reduces, population decreases. For example, a decrease in the number of deer will reduce the population of tigers. Resource availability, such as drinking water, hideouts, etc., also determines the size of the population that can be supported. Predation by higher organisms is another factor that can change the population: an increase or decrease in the number of predators can cause a variation in population. Environmental factors, such as temperature, humidity and weather, can have drastic effects on population. An increase in temperature will increase the burden on existing water resources and will result in deaths from heat or lack of water.
In case of human population, emigration, immigration, political crisis, wars, technology, etc. are additional factors that may affect the population. |
Cross-reactivity is a condition in which allergies associated with plant pollens are mimicked by fruits and vegetables that share a similar protein structure. Immunoglobulin E (IgE) antibodies bind to fruit and vegetable proteins, such as those in cucumber, mistaking them for the pollen proteins a person is sensitized to, and causing reactions.
Ragweed allergies are associated with gourds such as watermelon, cantaloupe, zucchini and cucumbers. Allergic reactions to cucumbers and other gourds typically occur when consuming the food raw versus cooked.
Ragweed season occurs during August and September. People with ragweed and gourd allergies experience hay fever symptoms of runny nose, sneezing, coughing and itchy and watery eyes.
Cucumber allergies may cause oral allergy syndrome (OAS), with itching, swelling and tingling in the mouth, lips and throat upon consumption. Touching raw cucumbers may cause a rash in sensitized individuals.
Severe cucumber allergies cause symptoms such as nausea, vomiting, diarrhea, wheezing and anaphylaxis. Anaphylaxis is a life-threatening reaction resulting from the throat and airways swelling and is treated with an epinephrine (EpiPen) shot.
Eliminating cucumber from a diet is effective in reducing allergic reactions. Allergy symptoms are treated through topical antihistamine ointments applied to affected skin and oral antihistamines that reduce inflammation. |
“Is not the great defect of our education today … that although we often succeed in teaching our pupils ‘subjects,’ we fail lamentably on the whole in teaching them how to think: they learn everything, except the art of learning.”Dorothy Sayers, The Lost Tools of Learning
A teacher’s primary responsibility at Trinity School is to foster an expansive sense of wonder, curiosity and investigation in your child.
But students also learn how to learn only when their education is presented to them in an orderly fashion, with greater complexity building upon a solid foundation of the basics. To that end, we employ the classical sequence of the trivium (grammar, logic and rhetoric) to order our educational program.
From poetry to language to science, the grammar of a discipline is its foundational principles. Learning, therefore, begins with the study of grammar. This does not mean rote memory and dry lessons, however. For instance, the grammar of science comes from direct observation and recording. Your child will spend considerable time in their earlier years noting things, keeping journals, identifying patterns and asking questions.
This stage of learning is more than a stepping stone
After an initial encounter with a subject, your child will begin to ask more probing questions. Answering these deeper questions demands a more systematic approach. It requires students to move into a logic stage of their investigation. At this stage, considerable attention is paid to systems of thought. By studying systems like biology, chemistry, calculus or political theory, your child will not only gain a powerful understanding of the way the world is ordered but also develop the tools needed to discipline his or her own inquiry.
The logic stage of learning should awaken maturing students to the power of thought itself, as they learn how to abstract, analyze and synthesize the information they have gathered.
Eventually students want to start putting it all together. They want to synthesize what they have learned into a consistent view of the world. They want to draw their own conclusions, to think elegantly and to make
During this stage of inquiry your child will experience the final fruit of the free and disciplined exchange of ideas. As students’ own abilities to discover the truth mature, discussion takes the place of lectures and presentations. Students turn to the big questions of how the world really works, what it means to be human, how we ought to live in society with one another, and what it means for us to be in relationship to God. |
What is Psychiatry?
Psychiatry is a speciality that aids in diagnosing, treating, and preventing mental, emotional, and behavioural disorders. Psychiatrists are physicians with a medical degree who specialize in mental health, including substance use disorders.
Psychiatric assessment may be needed to diagnose emotional, behavioural, or developmental disorders. A person is evaluated based on altered behaviours present in relation to a particular environment, and a note is made of the social, cognitive and emotional aspects that may be affected as a result of these behaviours.
A person who needs such assessment is usually pointed out by family, spouse, teacher or friends.
What is involved in a comprehensive psychiatric evaluation?
These are the most common parts of a comprehensive, diagnostic psychiatric evaluation, but each evaluation is different, as each person’s symptoms and behaviours are different. The evaluation may include:
Rainwater harvesting (RWH) is a practice of growing importance in the United Kingdom, particularly in the South East of England where there is less water available per person than in many Mediterranean countries. Rainwater harvesting in the UK is both a traditional and reviving technique for collecting water for domestic uses. This water is generally used for non-potable purposes like watering gardens, flushing toilets, and washing clothes. There is a growing demand for larger tank systems collecting between 1,000 and 7,500 litres of water. The two main uses for harvested rainwater are botanical uses, like garden and plant irrigation, and domestic uses, like flushing toilets and running washing machines. Rainwater is almost always collected strictly from the roof, then heavily filtered using either a filter attached to the downpipe, a fine basket filter or, for more expensive systems, self-cleaning filters placed in an underground tank. UK homes using some form of rainwater harvesting system can reduce their mains water usage by 50% or more, although a 20-30% saving is more common.
Prior to the widespread use of water mains, RWH was a traditional means of getting water in the UK. Even as far back as the 2nd-century AD, archaeological evidence shows that rainwater harvesting was being used by Housesteads Roman Fort in Northumberland as a way to flush the latrines. English castles from the 12th and 13th-century also have notable rainwater harvesting systems, such as Carreg Cennen, Orford, and Warkworth Castle.
In the 19th and the early 20th century, prior to widespread access to water mains, most large middle-class homes got their drinking water from springs and wells, but this water was usually hard which made it unsuitable for washing. Thus, such homes were usually designed to also harvest rainwater to be used in washing. During the interwar period, houses in hard water areas were sometimes built with rainwater storage tanks forming the roof of a scullery. Rainwater was led down to a third tap for washing purposes. Rainwater harvesting declined in popularity as water mains became more widespread through the early 20th century onwards.
In recent years, rainwater harvesting has become more common due to increasing water prices. While rainwater harvesting has been employed in high-profile facilities like the velodrome of the London Olympic Park, the UK's ongoing revival has lagged behind other countries such as Germany (the present world leader in modern rainwater harvesting). At present, only about 400 RWH systems are installed in the UK every year.
Some large retail developments are now incorporating rainwater harvesting even in some of the wetter parts of the UK.
Rainwater harvesting was encouraged by the government of the UK through the Code for Sustainable Homes. The code ranked homes on a scale of one through six and required new homes to have a score of at least three. One way to raise the score of a newly designed home was to incorporate a rainwater harvesting system. The code was revoked in 2015.
The Environment Agency has noted that water resources in the UK are under increasing pressure because of the growing population. In addition, the agency has warned that the South East of England is facing more serious water scarcity than anywhere else in England or Wales, such that the per-capita water supply is lower than many Mediterranean countries. The agency encourages a two-pronged approach to both reduce demand and increase supply, such as through the use of rainwater harvesting. However, there is a fundamental mismatch between supply and demand; the areas of the UK suffering water scarcity are in most cases also areas with low rainfall, which means the economics of installing a domestic RWH system are less favourable. The environmental impacts of domestic RWH systems are questioned since the environmental impact of water supply is a very small proportion of the total impact of water use (approximately 4%). For a UK household, the CO2 impact of supplying water to the house is around 100g of CO2 per day, around 1/600th of your total daily impact. However in countries without widespread mains water supplies, or where the environmental impact of mains water is very high, RWH may have more merit.
The installation of rainwater harvesting systems in the UK should be done according to the Water Supply (Water Fittings) Regulations and BS8515, in order to ensure safety. BS8515 also provides details on how to size the storage tank and allows estimation of the potential water savings. If you install an RWH system, you will need to inform your water company.
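As a rough illustration of the kind of sizing calculation involved (a simplified sketch only; the coefficients and the 5% rule below are typical illustrative values, not a statement of what BS8515 requires, so the standard should be consulted directly), the tank can be sized from the lesser of the annual rainwater yield and the annual non-potable demand:

    // Illustrative rainwater tank sizing; all figures are assumptions, not BS8515 values.
    public class TankSizingSketch {
        public static void main(String[] args) {
            double roofAreaM2 = 80.0;          // plan area of the collecting roof (assumed)
            double annualRainfallMm = 600.0;   // assumed annual rainfall for a drier part of England
            double yieldCoefficient = 0.8;     // losses from the roof surface (assumed)
            double filterEfficiency = 0.9;     // losses in the filter (assumed)

            // Annual yield in litres: 1 mm of rain on 1 m^2 of roof is 1 litre.
            double annualYieldLitres = roofAreaM2 * annualRainfallMm * yieldCoefficient * filterEfficiency;

            // Annual non-potable demand, e.g. toilet flushing and washing machine (assumed).
            double annualDemandLitres = 4 * 50.0 * 365;   // 4 people x 50 litres per person per day

            // Simplified sizing rule: about 5% of the smaller of yield and demand.
            double tankSizeLitres = 0.05 * Math.min(annualYieldLitres, annualDemandLitres);
            System.out.printf("Suggested tank size: about %.0f litres%n", tankSizeLitres);
        }
    }

With these assumed figures the sketch suggests a tank of roughly 1,700 litres, which sits within the 1,000-7,500 litre range of domestic systems mentioned above.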
Rainwater harvesting at large scale may well be appropriate for farms as part of a catchment management strategy to decrease flood risk and diffuse pollution. |
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms. |
SCRATCHING OUT THE ITCH WITH FACTS
Mosquito bites might not be completely avoidable, but there are some things you can do to reduce your risk. Not only will taking these precautions make dealing with a mosquito bite less frequent and annoying, but also much safer. Mosquitoes carry many diseases that they can transmit to humans. So before you head out to enjoy the fresh air, arm yourself with intel.
- WHY DO MOSQUITOES BITE?
You may think mosquitoes only exist to feed off of humans. This is wrong on many levels. Only a small percentage of mosquitoes primarily bite people. Mammals, birds, amphibians and reptiles are their typical prey. Further, mosquitoes don’t bite to live, but rather, to reproduce more efficiently (they live off of nectar and plant juices). This means that only female mosquitoes bite. The female uses this blood as a nutritional source for an amino acid called isoleucine. Mosquitoes use isoleucine to produce eggs, specifically, more eggs. If a female mosquito doesn’t find any isoleucine, she can only lay as few as 10 eggs. But if she finds prey to suck blood from, she can lay as many as 100.
- WHAT CAUSES MOSQUITOES TO BITE YOU?
With a variety of hosts to choose from, mosquitoes have developed a number of senses to help them pick their next victim. While body heat, odor and movement all play a role, it’s the scent of carbon dioxide and chemicals in your sweat which draw them to you. Truth be told, human blood is an inferior source of isoleucine compared to rats and buffalo – humans just happen to outnumber buffalo and are easier to attack than rats.
- WHAT HAPPENS WHEN A MOSQUITO BITES YOU?
When a mosquito picks you as her next victim, she typically lands on exposed skin (though she can bite through light clothing). Her mouth is long and tubular, allowing her to pierce through your skin and siphon blood out. After stabbing through the skin with her ‟feeding stylets,” she searches for a blood vessel, which typically takes less than 60 seconds. To keep the feeding quick and blood from clotting, she injects specialized saliva into your body. This continues until the mosquito has had enough, or roughly, four times her body weight in blood.
- WHY DO MOSQUITO BITES ITCH?
The saliva mosquitoes deposit acts as an anticoagulant, but it is also responsible for the unmistakable itch of a bite. The enzyme and protein compounds in the mosquito’s saliva cause an allergic reaction in most humans. Your body’s natural immune response is to release histamines and other compounds to combat this allergic reaction, in turn causing the itching. Strangely enough, if the site of the bite itches, it means your immune system is doing its job.
- WHAT DOES A MOSQUITO BITE LOOK LIKE?
Unlike bed bug bites or other insect bites that take a while to show up, mosquito bites are almost always immediately noticeable. Though the appearance of a bite can vary from person to person, a mosquito bite will typically be inflamed, roundish and filled with fluid. The bite might have oddly shaped edges, rather than perfectly round ones. There also might be a small dot at the center. Multiple randomized bites in one area are not uncommon. A localized reaction can mean more swelling, redness and itching than usual. Children, people with an impaired immune system or those who are extremely allergic to the saliva may exhibit more severe symptoms.
- WHAT SIGNS SHOULD YOU WATCH OUT FOR?
Between one and two million people around the world die from mosquito-borne diseases each year, according to the Prairie Research Institute of Illinois. Among these diseases, malaria is the most notorious, but in the United States, West Nile virus and mosquito-borne encephalitis are the most prevalent. A mosquito’s bite can also cause yellow fever, Chikungunya and dengue. If you or your loved ones have been bitten, watch for symptoms such as headaches, fevers, chills, body aches, stiffness, joint pain, confusion, swollen lymph glands, disorientation, weakness or skin rashes. If any of these occur, see your doctor right away.
- HOW CAN YOU STOP BITES FROM ITCHING?
Scratching mosquito bites can lead to secondary infection if you break the skin or reopen the bite. Dirt from under your nails is the culprit here, and can lead to staph, strep and other bacterial infections. To help prevent infection and stop the itch, Ohio State University recommends washing the area of the bite with soap and water. Use anti-itch cream, calamine lotion or antihistamines to lessen the itch. You can also use an ice pack to numb the area, thus negating the itch while reducing swelling. Aside from the warning symptoms listed above, if the swelling doesn’t start going down within a day or two, you have open sores or your eyes or joints become infected, see your doctor immediately.
- HOW CAN YOU STOP MOSQUITOES FROM BITING YOU?
Mosquito bites aren’t always preventable, especially at dawn and dusk when the majority of mosquitoes are most active (some, like the dangerous Asian tiger mosquito, are active all day while others remain active all night). Cover exposed skin with long pants and sleeves. Wear a hat, a light scarf and work gloves if the weather allows. Mosquitoes can bite through light clothing, but if you bunch material, it keeps the mosquito away from your skin. Keep properly fitted screens on all windows of your home and be sure they are in good repair. Doorways should have tight seals and doors shouldn’t be left open. You can use mosquito netting on strollers, playpens, beds and even your own head with a mosquito hat. Screened in porches are a great way to enjoy the outdoors while protecting your family from the itch and disease of mosquito bites. Outdoors, make sure there is no untreated standing water anywhere on or around your property. Of course, a mosquito repellent that contains DEET is essential, especially if you plan to spend any amount of time outside, particularly in the woods or other natural habitats of mosquitoes. Be sure to follow all instructions and warnings on the label.
For homes that have large mosquito populations, other measures might be necessary such as removing hidden mosquito breeding grounds, applying residual sprays and other methods employed by pest management professionals. Don’t spend your summer scratching mosquito bites. Call Terminix® so we can scratch that problem off your list. |
The cause may be more complex than you thought
This month, our Sussex health expert takes a look at chronic fatigue syndrome, a complex illness affecting the brain and body. It is characterised by incapacitating fatigue that is not relieved by rest, and any of the following symptoms for at least six months:
- impaired short-term memory or concentration significantly affecting daily life
- sore throat
- tender lymph nodes in the neck or underarms
- muscle pain
- joint pain with no associated swelling or inflammation
- persistent headaches
- unrefreshing sleep
- feeling unwell for more than 24 hours following physical exertion
Other common symptoms include: abdominal bloating, nausea, diarrhoea, night sweats or chills, brain fog, dizziness, shortness of breath, visual disturbances, irregular heartbeat or palpitations, jaw pain and multiple allergies or sensitivities to foods, alcohol, chemicals.
Chronic fatigue syndrome is more common in women than men.
The cause of chronic fatigue syndrome is unknown and there are no specific tests to confirm diagnosis. It is believed to be triggered by a combination of genetic mutations and environmental influences. Multiple triggers may be involved, such as viral infection, stress, nutrient deficiency, toxins, and hormone imbalances. Examples include:
- Chronic infection with viruses, such as Epstein-Barr virus, human herpes virus 6, and cytomegalovirus; or bacteria such as Borrelia causing Lyme disease.
- Immune dysfunction, such as the inappropriate production of inflammatory substances.
- Decreased levels of the hormone cortisol, which is secreted by the adrenal glands, may predispose to inflammation and activate immune cells. Thyroid disorders have also been implicated in chronic fatigue syndrome.
- An exclusion diet can be useful to identify potential food triggers. Exclusion diets are best undertaken under the supervision of a qualified healthcare practitioner.
- Staying hydrated is important to minimize fatigue and brain fog.
- Removing caffeine and sugar-based foods helps to keep energy levels on a more even keel.
- A high intake of vegetables daily optimizes nutrient intake. Including as many different colours as possible ensures a broad range of nutrients – “eating the colours of the rainbow” every day.
- Include healthy fats to provide energy and reduce inflammation – oily fish, walnuts, avocados are excellent examples.
It is important to keep exercising in order to keep muscles strong and prevent a worsening of fatigue. The most important thing is to create a moderate exercise plan which starts with a short and simple routine, such as 1 minute of activity followed by 3 minutes of rest. As the routine becomes more manageable, increase the total duration whilst maintaining the rest breaks in between. If there is at any stage a worsening of symptoms, drop back to the last level of exercise that was well tolerated.
Nutritional supplement treatment options
A number of nutritional supplements have been found to help the symptoms of CFS:
Vitamin B12 deficiency may cause fatigue. A number of research studies have shown that CFS patients respond well to supplementation of vitamin B12, or B12 with folic acid. These studies generally involve injections of vitamin B12. B12 can also be taken orally but higher doses are required due to poor absorption.
Vitamin B6. Research has shown that people with CFS have reduced levels of available B-vitamins compared to healthy controls, particularly vitamin B6.
L-carnitine is responsible for transporting fatty acids into the mitochondria – the engine-room of the cells – allowing these fatty acids to be converted into energy. Some people with CFS have a deficiency of carnitine and this has been linked to muscle fatigue and pain and low tolerance to exercise.
NADH (nicotinamide adenine dinucleotide) is an activated form of vitamin B3 that plays an important role in the production of energy in the cells. Trials have shown that some people with CFS who take NADH have less fatigue and improved overall quality of life.
Magnesium. Many patients with CFS are found to be deficient in magnesium and report significant improvements in their symptoms when magnesium is supplemented into the diet.
Coenzyme Q10 (CoQ10) is another compound found naturally in the mitochondria. CoQ10 is involved in the production of cellular energy and is also an antioxidant. Surveys have shown that a high percentage of people with CFS feel an improvement in energy when taking CoQ10.
EPA/DHA. Double-blind, placebo-controlled studies have shown that people taking Omega 3 had significant improvement in chronic fatigue syndrome symptoms compared to those taking a placebo.
Article contributed by Dr Tracy S Gates, DO, DIBAK, L.C.P.H., Consultant, Pure Bio Ltd. Copyright © Pure Bio Ltd 2021. All rights reserved
Pure Bio Ltd are a leading UK supplier of the highest quality PURE nutritional supplements, based in Horsham, West Sussex. Visit www.purebio.co.uk for all your nutritional supplement needs |
What's the Problem?
When the average six-year-old child enters first grade, he or she already knows the meanings of about 26,000 words. Children may not use all those words themselves, but they understand what the words mean when they hear them.
The goal of most first grade reading programs is for students to read between 200 and 600 common words by the end of the school year. Sadly, many students can’t even read 100 words by the time they finish first grade. In the United States, at least 40 to 45 percent of school-aged children are below grade level in reading.
In fact, many of them never learn to read well at all, and their difficulties continue throughout their time in school and even into adulthood. A landmark study by the federal government found that over half of adults struggle to read material written at the sixth grade level.
Although teachers of today are often blamed, it’s not their fault. Poor reading is the product of a defective reading instruction method. Early in the twentieth century, a small group of influential educators changed the way reading was taught. Prior to that change, nearly everyone who attended school learned to read in a very short time. In our large cities, over 90 percent of the adult population was fully literate. As schools began phasing in the new method, they abandoned the method that had been used successfully to teach reading ever since reading was first invented. As a result, we now have an epidemic of poor reading.
Millions of children and adults are paying a terrible price because of what those early educators did. It causes huge problems throughout our entire society. Parents, schools and colleges spend billions of dollars annually in an effort to bring students up to basic reading levels and the results are just not reflecting those costly investments.
Is There a Solution?
Learning to read is a simple process. The Academic Associates® Reading Program can ease the process and produce better readers through our logical, phonics-based method. By starting with the basic and foundational concepts of letters and phonics, we can build upon that foundation and progress to advanced concepts incrementally, making mastery a step-by-step process. Our thorough approach helps readers read more expansively and more efficiently, avoiding any unnecessary gaps.
How our method works
Students are gently led through a series of easy steps. As they respond, they automatically begin to read. It’s so easy they often don’t even realize they’re reading at first. In the very first lesson, every student, even those with severe learning disabilities, will read at least 300 words and be prepared to read thousands more.
Each lesson is taught in simple, step-by-step increments, with every new concept building on all the previous concepts, until students know everything they need to know to sound out nearly all of the words in the language. By the last lesson, most fifth-graders through adults read and spell college-level words, and comprehend material written on their appropriate level or higher.
Each lesson includes reading exercises specific to the concepts learned. Students must demonstrate mastery of a lesson in order to move on to the next, and a comprehensive review of all past lessons reinforces every skill continually. Checkpoints are incorporated throughout the program to provide constant updates of a student’s status while identifying any areas in which he or she needs further reinforcement.
The Academic Associates methodology is not just about phonics. If it were, reading would be reduced to a slow, monotonous process in which words were laboriously and methodically pronounced one by one. Although our method may appear at first glance to be similar to some other phonics-based methods, it is a radical departure from every one of them. It develops all the skills of reading.
After learning to read words, students are ready to begin using their skills to accomplish the ultimate goal of reading—rapidly and effectively extracting information and knowledge from written material. They are taught specific techniques for understanding and applying what they read, and reading becomes a pleasant, productive and relatively automatic activity. Students then discover that the more they read, the more they enjoy it and the better readers they become.
Although males have traditionally been labeled as poorer readers than females, boys and men learn as quickly and as well as females with our method. Parents and teachers are often surprised to learn that there is no significant difference in learning to read between males and females with the Academic Associates® Reading Program!
Most students finish the course within 30 to 60 hours, although a few take longer. Most gain at least two grade levels (years) in reading, and very often, more than two levels. Every student is different, and each learns at a different rate, so we progress through the course at the student’s optimum learning speed.
Our program has been found to be very successful with those who have reading disabilities such as dyslexia and dysgraphia. Our method also helps with tracking and fluency.
The course is not just for students who are experiencing difficulty. Those who are average to strong readers will receive a welcome boost.
Thousands of students of all ages and many different ethnic and language backgrounds in the U.S. and several other countries have benefited from our program. It works consistently, even after all other methods have failed. It is unquestionably the most effective reading instruction methodology in the world.
Teach Others to Read
…learned to read in 4 hours and was able to read 100 words. Her confidence is soaring.
“He needs to read at his grade level.”
…no longer needed to be pulled out for reading in school, became an honor roll student each quarter, graduated program at 10-11th grade reading level in 5 months, went from being towards the bottom of his class to being towards the top of his entire 6th grade class on standardized test scores
“I want him to be ready for kindergarten. I don’t want him to struggle.”
…learned to read after only 3 short hours- before kindergarten
Her mother brought her so she could do better at reading and spelling.
…tested at grade 6 reading level- a gain of 4.5 years after 48 hours. She has been on the honor roll at school.
“She’s fine reading, but when she needs help, I can’t teach her very well because I grew up speaking a different language.”
…began reading at a 5th grade reading level after only 30 hours!! |
Ketones are produced when the body runs out of glucose and burns off the body's fat reserves for energy. The body prefers glucose as an energy source because it is widely used by all cells in the body, particularly the brain. The body burns fat for energy and produces ketones when carbohydrate intake is low. Burning ketones can be a normal reaction after running, but it should be avoided since it can limit a runner's performance and can also pose a health hazard.
Definition of Ketones
Ketones are acids that build up in the blood when fat is converted into energy for use by the body's cells, a process known as ketosis. Ketone byproducts are also known as ketone bodies. Ketosis occurs when glucose levels are insufficient to meet the body's energy needs, so the body turns to its fat reserves for energy. High-intensity or endurance running can significantly raise ketone bodies, depending on a runner's health and diet. After a run, you will burn ketones until your blood glucose levels rise back to normal.
Effects of Ketosis
The effects of ketosis vary depending on a runner's health status. For a runner who is not diabetic, ketosis can lead to feeling faint and dizzy; this is easily resolved by eating carbohydrates to raise blood glucose levels. Runners with Type 1 diabetes who are burning ketones for energy should check their insulin levels. Excess ketones combined with low insulin levels can result in ketoacidosis, which is a medical emergency. Symptoms of ketoacidosis include vomiting, excessive thirst, frequent urination and high blood glucose levels. Ketoacidosis is toxic to the body and can cause a prolonged loss of consciousness known as a diabetic coma.
Ketone Levels After Running
Blood glucose can continue to drop hours after running and can result in the presence of ketone bodies in the urine. Abnormal ketone levels can be small, less than 20 mg/dL; moderate, 30 to 40 mg/dL; or large, greater than 80 mg/dL. Type 1 diabetic runners should test their ketone levels using ketone strips available in most pharmacies. If you suspect you may be at risk for ketoacidosis, consult a medical professional immediately.
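As a rough illustration of how the ranges above could be grouped, the short Python sketch below bins a urine ketone reading into the small, moderate and large categories described; the cutoffs come directly from the numbers above, and the example is illustrative only, not medical guidance.

```python
def classify_urine_ketones(mg_per_dl):
    """Group a urine ketone reading (mg/dL) using the ranges described
    above. Illustration only -- not medical advice."""
    if mg_per_dl < 20:
        return "small"
    if 30 <= mg_per_dl <= 40:
        return "moderate"
    if mg_per_dl > 80:
        return "large"
    return "between the named ranges -- recheck and seek guidance"

for reading in (10, 35, 90, 55):
    print(f"{reading} mg/dL -> {classify_urine_ketones(reading)}")
```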
Prevention of Ketones
Before a run, a nondiabetic runner should eat enough carbohydrates to provide sufficient glucose to meet the body's energy demands. A carbohydrate-loading diet, also called carb-loading, increases the amount of energy your body has available for use and prevents the need for fat conversion into energy. This diet should begin several days before a running event and is most beneficial for events that last 90 minutes or more. Carbohydrates should make up 50 to 75 percent of total daily calories. Diabetic runners should consult their doctors on what their glucose and insulin levels should be prior to beginning a run. |
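To give a sense of what the 50 to 75 percent guideline means in practice, the hedged sketch below converts a daily calorie target into a range of carbohydrate grams, using the standard value of about 4 calories per gram of carbohydrate; the 2,500-calorie figure is an arbitrary assumption for the example, not a recommendation.

```python
def carb_gram_range(daily_calories, low_pct=0.50, high_pct=0.75, kcal_per_gram=4):
    """Convert a daily calorie target into a carbohydrate range (grams)
    using the 50-75 percent guideline above and ~4 kcal per gram of
    carbohydrate. The calorie target used below is an arbitrary assumption."""
    low = daily_calories * low_pct / kcal_per_gram
    high = daily_calories * high_pct / kcal_per_gram
    return low, high

low, high = carb_gram_range(2500)
print(f"Roughly {low:.0f}-{high:.0f} g of carbohydrate per day")
```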
The Book of Luke relates one of the most famous parables of Jesus, the story of the good Samaritan. In this tale, a man is beaten and left alongside a road, only to be ignored by a Levite and a Jewish priest, both prestigious members of contemporary Jewish society. Only a Samaritan, an ethnoreligious minority frowned upon by most Jews at the time, was willing to help the individual. The parable was meant to demonstrate that humanity’s capacity for goodwill, as well as its capacity for indifference, is shared by all individuals within society.
To most passive observers of Christianity, this parable is where their understanding of Samaritan society ends. The source of the prejudice faced by the group is not understood by most church attendees. Even less commonly known is that this esoteric sect of Judaism continues to exist today.
Samaritans trace their origins largely to the tribes of Ephraim and Manasseh, claiming descent from the northern Israelites who endured the Assyrian conquest. Some Levite ancestry is likewise claimed. According to Samaritans, they practice Judaism in its unadulterated form. Samaritans continued to reside within Israel’s borders during the period of Babylonian captivity, when most Jews were forcibly transferred from their homeland. Samaritans contend that Judaism was fundamentally altered by this exile.
To most Jews, by contrast, Samaritans were the group failing to abide by the tenets of Judaism. There are several differences between Samaritanism and Judaism in its typical form. The most potent ideological contradiction with Judaism is their rejection of the Temple Mount’s spiritual significance. Mount Gerizim is elevated in its place.
Samaritans once numbered roughly one million individuals during Biblical times. Since then, however, their numbers have nosedived, a result of years of violent suppression as well as conversion. Particularly devastating were the Samaritan revolts against the Byzantine Empire, during which tens of thousands of Samaritans were killed. By 1786, the Samaritan population had fallen to 100 individuals.
Today, however, the Samaritan population has rebounded slightly to 800 individuals, concentrated almost entirely in Israel and the West Bank. Due to a small population pool and low rates of intermarriage, genetic disorders are frequent. Partially in response to this, Samaritan men sometimes marry Israeli women, in which case their spouses may convert to the faith (most conversions to Samaritanism are disallowed). Samaritan marriages are also required to be approved by a geneticist in order to limit birth defects.
Given their upward population trend, the current outlook for Samaritans is less tenuous than it was two hundred years ago. As intermarriage rates increase, the Samaritan population, once nearly extinguished, may continue to play a small but significant role in the same region it inhabited in the time of Jesus.
Tourism can contribute to sustainable development in a number of ways. It not only sustains resources, heritage, and livelihoods, but can improve them as well. When planned, developed, and managed appropriately, tourism acts as a bridge between economic growth, environmental conservation, and social development. In doing so, it aligns incentives and helps secure a better future for all.
What is sustainable tourism?
According to the UN World Tourism Organization and UN Environment Program, sustainable tourism is defined as “tourism that takes full account of its current and future economic, social and environmental impacts, addressing the needs of visitors, the industry, the environment and host communities”.
In doing so, tourism contributes to sustainable development in a number of ways. Here’s how:
Tourism impact on sustainable development
Through tourism, destinations are more likely to keep their cultural heritage alive. This is due to demand from travellers who are looking to engage in community-building and cultural immersion activities.
Unfortunately, many places experience ‘cultural dilution’, where ancestral practices, traditions, languages, and know-how disappear altogether due to growing globalisation, urbanisation, and social dislocation.
In this case, tourism reminds communities of the unique value proposition in preserving and sharing their cultural heritage. As a result, the culture lives on.
Environmental preservation and regeneration
Tourism also contributes to sustainable development through environmental preservation and regeneration. This happens in two ways: directly and indirectly.
Directly through host country residents
First, many types of tourism, including sustainable, nature-based, and ecotourism, focus on our connection to the land, its resources, and biodiversity. As a result, communities have all the more reason to protect and improve their natural heritage. If solid waste is piling up and corals are dying, visitors will no longer be attracted to the destination. When this happens, communities become deprived of income streams they once relied on from tourism. Many businesses would close down, and unemployment would prevail.
To counter this, tourism development encourages locals to preserve and regenerate their environment to sustain a living and keep the tourists coming.
Indirectly through shifts in traveller attitudes and behaviours
From the traveller side, experiencing raw, beautiful nature is bound to inspire them to engage in more environmentally-friendly practices. Seeing and learning from local communities, especially those living in resource poor areas, influences visitors to appreciate and respect our planet. This impact, in many cases, lives on, where visitors take these practices back home and inspire others to do the same.
In addition, tourism drives inclusive economic opportunities to underserved rural areas. Many local communities do not live in big cities and urban centers and are, as a result, excluded from growth and development opportunities. Community-based tourism, in this case, drives foot traffic to off-grid rural destinations, introducing and diversifying income opportunities in the process. This increases their resilience to exogenous shocks, such as those relating to rainfall patterns, since they no longer rely solely on agriculture, as is the case in many rural areas.
Diversity and inclusion
Finally, intimate exchanges with people of different backgrounds, beliefs, and practices help build bridges of respect, tolerance, and understanding. They remind us how similar everyone is at their core, and encourage us to celebrate differences and diversity. This is fundamental to combating racism, prejudice, and discrimination, and ultimately supports diversity and inclusion in opportunity.
To conclude, tourism is a key driver of sustainable development. Governments and development actors must thus employ sustainable tourism as a tool to reap economic benefits, restore and revive the planet, and ensure everyone benefits in the process.
With support from the Embassy of the Netherlands in Jordan, we created new economic opportunities for local communities along the Jordan Trail and are currently developing and curating diversified sustainable travel experiences across the country. |
Make a clear distinction between all the stages of the lesson. During the organizational opening, state the goals and objectives of the lesson to the students. Clear timing of the lesson's phases encourages students to take a more responsible approach to the learning process.
Diversify the methods and media of instruction in the lesson. The more interesting the lesson, the less time students have for outside distractions. An effective way of organizing productive learning activity is collective creative work. Try to involve absolutely all students in the lesson; do not limit yourself to interacting only with the strong students.
Respect the child's personality. Do not allow derogatory words or actions towards weaker students. Your task is to see the person in each student, even an inveterate troublemaker or underachiever. As a rule, such children, when they feel they are treated with respect, try to meet the teacher's expectations and behave well.
Your actions as a teacher should have a clear focus and carry meaning. If students notice that you do not know what to do next or what to do in the classroom, discipline will be lost. You therefore need a clear lesson plan.
If the lesson is suddenly disrupted and students are preventing you from teaching, never try to out-shout the noise. Stop, fall silent, sit down, and look closely at the children. Hold the pause. When the class falls silent, explain in a calm tone that you will continue the lesson only when order is restored. As a rule, it works.
Make it a tradition to assign behavior grades after each lesson, maintain communication with the students' parents, and inform the school governors if you cannot control the students' behavior.
In elementary school, various game-like systems of discipline work well: a system of fines, issuing red cards, a "hall of shame" and so on. Of course, you can also enter a failing mark in the class register, but this is unlikely to permanently solve the problem of discipline. You need to look for different methods, approaches, and ways to increase students' interest in the learning process.
Verne the Worm by Shelley Goldbeck was originally written with school aged children in mind. Verne’s story is a great jumping off point to excite students about the world of vermiculture. In addition, there are many resources on this site to help facilitate learning.
To learn more about how to get Verne the Worm by Shelley Goldbeck for your classroom, please contact us.
Classroom Activity Ideas
- Set up a classroom worm bin. Worm bins are easy to set up and maintain in the classroom. Check out this step by step tutorial.
- Test the power of compost with an experiment! Check out a sample outline here.
- Observe worm behaviour. Worms react to light and to physical stimuli. By observing the reactions of worms, students can learn about the sensitivities of these important animals. Check out this activity guideline. |
Traditionally, the study of communication pathways between the head and heart has been approached from a rather one-sided perspective, with scientists focusing primarily on the heart’s responses to the brain’s commands. We have learned, however, that communication between the heart and brain actually is a dynamic, ongoing, two-way dialogue, with each organ continuously influencing the other’s function. Research has shown that the heart communicates to the brain in four major ways: neurologically (through the transmission of nerve impulses), biochemically (via hormones and neurotransmitters), biophysically (through pressure waves) and energetically (through electromagnetic field interactions). Communication along all these conduits significantly affects the brain’s activity. Moreover, our research shows that messages the heart sends to the brain also can affect performance.
The heart communicates with the brain and body in four ways:
- Neurological communication (nervous system)
- Biochemical communication (hormones)
- Biophysical communication (pulse wave)
- Energetic communication (electromagnetic fields)
Some of the first researchers in the field of psychophysiology to examine the interactions between the heart and brain were John and Beatrice Lacey. During 20 years of research throughout the 1960s and ’70s, they observed that the heart communicates with the brain in ways that significantly affect how we perceive and react to the world.
In physiologist and researcher Walter Bradford Cannon’s view, when we are aroused, the mobilizing part of the nervous system (sympathetic) energizes us for fight or flight, which is indicated by an increase in heart rate, and in more quiescent moments, the calming part of the nervous system (parasympathetic) calms us down and slows the heart rate. Cannon believed the autonomic nervous system and all of the related physiological responses moved in concert with the brain’s response to any given stimulus or challenge. Presumably, all of our inner systems are activated together when we are aroused and calm down together when we are at rest and the brain is in control of the entire process. Cannon also introduced the concept of homeostasis. Since then, the study of physiology has been based on the principle that all cells, tissues and organs strive to maintain a static or constant steady-state condition. However, with the introduction of signal-processing technologies that can acquire continuous data over time from physiological processes such as heart rate (HR), blood pressure (BP) and nerve activity, it has become abundantly apparent that biological processes vary in complex and nonlinear ways, even during so-called steady-state conditions. These observations have led to the understanding that healthy, optimal function is a result of continuous, dynamic, bidirectional interactions among multiple neural, hormonal and mechanical control systems at both local and central levels. In concert, these dynamic and interconnected physiological and psychological regulatory systems are never truly at rest and are certainly never static.
For example, we now know that the normal resting rhythm of the heart is highly variable rather than monotonously regular, which was the widespread notion for many years. This will be discussed further in the section on heart rate variability (HRV).
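As a simple illustration of how this variability is quantified, the sketch below computes two widely used heart rate variability statistics, SDNN and RMSSD, from a series of beat-to-beat (RR) intervals; the interval values are made up for the example.

```python
import math

def hrv_stats(rr_intervals_ms):
    """Two common heart rate variability measures computed from a series of
    beat-to-beat (RR) intervals in milliseconds: SDNN (overall variability)
    and RMSSD (short-term, beat-to-beat variability)."""
    n = len(rr_intervals_ms)
    mean_rr = sum(rr_intervals_ms) / n
    sdnn = math.sqrt(sum((rr - mean_rr) ** 2 for rr in rr_intervals_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

# Made-up RR intervals (ms), illustrating that a healthy resting rhythm varies
rr = [812, 845, 790, 860, 835, 805, 870, 828]
sdnn, rmssd = hrv_stats(rr)
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```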
The Laceys noticed that the model proposed by Cannon only partially matched actual physiological behavior. As their research evolved, they found that the heart in particular seemed to have its own logic that frequently diverged from the direction of autonomic nervous system activity. The heart was behaving as though it had a mind of its own. Furthermore, the heart appeared to be sending meaningful messages to the brain that the brain not only understood, but also obeyed. Even more intriguing was that it looked as though these messages could affect a person’s perceptions, behavior and performance. The Laceys identified a neural pathway and mechanism whereby input from the heart to the brain could inhibit or facilitate the brain’s electrical activity. Then in 1974, French researchers stimulated the vagus nerve (which carries many of the signals from the heart to the brain) in cats and found that the brain’s electrical response was reduced to about half its normal rate. This suggested that the heart and nervous system were not simply following the brain’s directions, as Cannon had thought. Rather, the autonomic nervous system and the communication between the heart and brain were much more complex, and the heart seemed to have its own type of logic and acted independently of the signals sent from the brain.
While the Laceys' research focused on activity occurring within a single cardiac cycle, they also were able to confirm that cardiovascular activity influences perception and cognitive performance, although there were still some inconsistencies in the results. These inconsistencies were resolved in Germany by Velden and Wölk, who later demonstrated that cognitive performance fluctuated at a rhythm around 10 hertz throughout the cardiac cycle. They showed that the modulation of cortical function resulted from ascending cardiovascular inputs on neurons in the thalamus, which globally synchronizes cortical activity.[2, 3] An important aspect of their work was the finding that it is the pattern and stability of the afferent (ascending) inputs from the heart's rhythm, rather than the number of neural bursts within the cardiac cycle, that are important in modulating thalamic activity, which in turn has global effects on brain function. There has since been a growing body of research indicating that afferent information processed by the intrinsic cardiac nervous system (heart-brain) can influence activity in the frontocortical areas[4-6] and motor cortex, affecting psychological factors such as attention level, motivation, perceptual sensitivity and emotional processing.
Neurocardiology: The Brain On the Heart
While the Laceys were conducting their research in psychophysiology, a small group of cardiologists joined forces with a group of neurophysiologists and neuroanatomists to explore areas of mutual interest. This represented the beginning of the new discipline now called neurocardiology. One of their early findings is that the heart has a complex neural network that is sufficiently extensive to be characterized as a brain on the heart (Figure 1.2).[11, 12] The heart-brain, as it is commonly called, or intrinsic cardiac nervous system, is an intricate network of complex ganglia, neurotransmitters, proteins and support cells, the same as those of the brain in the head. The heart-brain’s neural circuitry enables it to act independently of the cranial brain to learn, remember, make decisions and even feel and sense. Descending activity from the brain in the head via the sympathetic and parasympathetic branches of the ANS is integrated into the heart’s intrinsic nervous system along with signals arising from sensory neurons in the heart that detect pressure, heart rate, heart rhythm and hormones.
The anatomy and functions of the intrinsic cardiac nervous system and its connections with the brain have been explored extensively by neurocardiologists.[13, 14] In terms of heart-brain communication, it is generally well-known that the efferent (descending) pathways in the autonomic nervous system are involved in the regulation of the heart. However, it is less appreciated that the majority of fibers in the vagus nerves are afferent (ascending) in nature. Furthermore, more of these ascending neural pathways are related to the heart (and cardiovascular system) than to any other organ. This means the heart sends more information to the brain than the brain sends to the heart. More recent research shows that the neural interactions between the heart and brain are more complex than previously thought. In addition, the intrinsic cardiac nervous system has both short-term and long-term memory functions and can operate independently of central neuronal command.
Once information has been processed by the heart's intrinsic nervous system, the appropriate signals are sent to the heart's sinoatrial node and to other tissues in the heart. Thus, under normal physiological conditions, the heart's intrinsic nervous system plays an important role in much of the routine control of cardiac function, independent of the central nervous system. The heart's intrinsic nervous system is vital for the maintenance of cardiovascular stability and efficiency, and without it the heart cannot function properly. The neural output, or messages, from the intrinsic cardiac nervous system travels to the brain via ascending pathways in both the spinal column and the vagus nerves, first to the medulla, hypothalamus, thalamus and amygdala and then to the cerebral cortex.[5, 16, 17] The nervous-system pathways between the heart and brain are shown in Figure 1.3 and the primary afferent pathways in the brain are shown in Figure 1.4.
Had the existence of the intrinsic cardiac nervous system and the complexity of the neural communication between the heart and brain been known while the Laceys were conducting their paradigm-shifting research, their theories and data likely would have been accepted far sooner. Their insight, rigorous experimentation and courage to follow where the data led them, even though it did not fit the well-entrenched beliefs of the scientific community of their day, were pivotal in the understanding of the heart-brain connection. Their research played an important role in elucidating the basic physiological and psychological processes that connect the heart and brain and the mind and body. In 1977, Dr. Francis Waldrop, then director of the National Institute of Mental Health, stated in a review article of the Laceys’ work, "Their intricate and careful procedures, combined with their daring theories, have produced work that has stirred controversy as well as promise. In the long run, their research may tell us much about what makes each of us a whole person and may suggest techniques that can restore a distressed person to health."
The Heart as a Hormonal Gland
In addition to its extensive neurological interactions, the heart also communicates with the brain and body biochemically by way of the hormones it produces. Although not typically thought of as an endocrine gland, the heart actually manufactures and secretes a number of hormones and neurotransmitters that have a wide-ranging impact on the body as a whole.
The heart was reclassified as part of the hormonal system in 1983, when a new hormone produced and secreted by the atria of the heart was discovered. This hormone has been called by several different names – atrial natriuretic factor (ANF), atrial natriuretic peptide (ANP) and atrial peptide. Nicknamed the balance hormone, it plays an important role in fluid and electrolyte balance and helps regulate the blood vessels, kidneys, adrenal glands and many regulatory centers in the brain. Increased atrial peptide inhibits the release of stress hormones, reduces sympathetic outflow and appears to interact with the immune system. Even more intriguing, experiments suggest atrial peptide can influence motivation and behavior.
It was later discovered the heart contains cells that synthesize and release catecholamines (norepinephrine, epinephrine and dopamine), which are neurotransmitters once thought to be produced only by neurons in the brain and ganglia. More recently, it was discovered the heart also manufactures and secretes oxytocin, which can act as a neurotransmitter and commonly is referred to as the love or social-bonding hormone. Beyond its well-known functions in childbirth and lactation, oxytocin also has been shown to be involved in cognition, tolerance, trust and friendship and the establishment of enduring pair-bonds. Remarkably, concentrations of oxytocin produced in the heart are in the same range as those produced in the brain.
11.18: Metallic Solids
Metallic solids such as crystals of copper, aluminum, and iron are formed by metal atoms. The structure of metallic crystals is often described as a uniform distribution of atomic nuclei within a “sea” of delocalized electrons. The atoms within such a metallic solid are held together by a unique force known as metallic bonding that gives rise to many useful and varied bulk properties.
All metallic solids exhibit high thermal and electrical conductivity, metallic luster, and malleability. Many are very hard and quite strong. Because of their malleability (the ability to deform under pressure or hammering), they do not shatter and, therefore, make useful construction materials. The melting points of the metals vary widely. Mercury is a liquid at room temperature, and the alkali metals melt below 200 °C. Several post-transition metals also have low melting points, whereas the transition metals melt at temperatures above 1000 °C. These differences reflect differences in the strengths of metallic bonding among metals.
Properties of Metallic Solids
Owing to their structure and bonding, metallic solids exhibit a characteristic set of properties, which are summarized in the following table.
| Type of Solid | Type of Particles | Type of Attractions | Properties | Examples |
| --- | --- | --- | --- | --- |
| Metallic | Atoms of electropositive elements | Metallic bonds | Shiny, malleable, ductile, conducts heat and electricity well, variable hardness and melting temperature | Cu, Fe, Ti, Pb, U |
Crystal Structure of Metallic Solids: Close-packing
Solids that are made of identical atoms can have two types of arrangements: square or close-packed (Figure 1). Since close-packing maximizes the overall attractions between atoms and minimizes the total intermolecular energy, the atoms in most metals pack in this manner.
Figure 1. Square vs close-packed arrangement.
We find two types of closest packing in simple metallic crystalline structures: hexagonal closest packing (HCP), and cubic closest packing (CCP). Both consist of repeating layers of hexagonally arranged atoms. In both types, a second layer (B) is placed on the first layer (A) so that each atom in the second layer is in contact with three atoms in the first layer. The third layer is positioned in one of two ways.
In HCP, atoms in the third layer are directly above atoms in the first layer (i.e., the third layer is also a type A), and the stacking consists of alternating type A and type B close-packed layers (i.e., ABABAB⋯) (Figure 2a).
In CCP, atoms in the third layer are not above atoms in either of the first two layers (i.e., the third layer is type C), and the stacking consists of alternating type A, type B, and type C close-packed layers (i.e., ABCABCABC⋯) (Figure 2b). Face-centered cubic (FCC) and CCP arrangements are actually the same structure, with atoms packed compactly and occupying 74% of the volume.
Figure 2. (a) Hexagonal close-packing consists of two alternating layers (ABABAB…). (b) Cubic close-packing consists of three alternating layers (ABCABCABC…).
In both types of packing, each atom contacts six atoms in its own layer, three in the layer above, and three in the layer below. Thus each atom touches 12 near neighbors and therefore has a coordination number of 12.
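A quick hard-sphere calculation, sketched below, reproduces the roughly 74% packing efficiency quoted above for the cubic close-packed (face-centered cubic) arrangement; it assumes ideal, identical spheres touching along the face diagonal of the unit cell.

```python
import math

def fcc_packing_fraction():
    """Hard-sphere check of the ~74% packing efficiency quoted above for
    cubic close-packing (face-centered cubic): spheres touch along the
    face diagonal, so a * sqrt(2) = 4r, and the unit cell holds 4 atoms."""
    r = 1.0                        # sphere radius (arbitrary units)
    a = 4 * r / math.sqrt(2)       # cube edge length
    atoms_per_cell = 4
    occupied = atoms_per_cell * (4 / 3) * math.pi * r ** 3
    return occupied / a ** 3

print(f"{fcc_packing_fraction():.4f}")  # ~0.7405, i.e. about 74% of the volume
```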
About two-thirds of all metals crystallize in closest-packed arrays with coordination numbers of 12. Metals that crystallize in an HCP structure include Cd, Co, Li, Mg, Na, and Zn, and metals that crystallize in a CCP structure include Ag, Al, Ca, Cu, Ni, Pb, and Pt.
This text has been adapted from Openstax, Chemistry 2e, Sections 10.5 The Solid State of Matter, and 10.6 Lattice Structures in Crystalline Solids. |
More Americans came into contact with maps during World War II than in any previous moment in American history. From the elaborate and innovative inserts in the National Geographic to the schematic and tactical pictures in newspapers, maps were everywhere. On September 1, 1939, the Nazis invaded Poland, and by the end of the day a map of Europe could not be bought anywhere in the United States. In fact, Rand McNally reported selling more maps and atlases of the European theaters in the first two weeks of September than in all the years since the armistice of 1918. Two years later, the attack on Pearl Harbor again sparked a demand for maps. Two of the largest commercial mapmakers reported their largest sales to date in 1941, and by early 1942 Newsweek had named Washington, D.C. "a city of maps," one where "it is now considered a faux pas to be caught without your Pacific arena."
War has perennially driven interest in geography, but World War II was different. The urgency of the war, coupled with the advent of aviation, fueled the demand not just for more but different maps, particularly ones that could explain why President Roosevelt was stationing troops in Iceland, or sending fleets to the Indian Ocean. Americans had been reared on the Mercator map of the world, a sixteenth-century projection designed for navigation but which created immense distortions at the far northern and southern latitudes.
Indeed, Americans had become so used to seeing the world mapped on the Mercator projection that any other method met with resistance, both in classrooms and living rooms. But as aviation displaced sea navigation in the twentieth century, Americans were sorely in need of maps that conveyed the new realities of distance and direction in the air age.
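To see why the Mercator projection exaggerates the high latitudes, note that it inflates map distances by roughly a factor of 1/cos(latitude) relative to the equator; the short sketch below, a simplified spherical-Earth approximation, shows how quickly that factor grows toward the poles.

```python
import math

def mercator_scale(lat_degrees):
    """Approximate local scale factor of the Mercator projection at a given
    latitude: map distances are inflated by about 1 / cos(latitude) relative
    to the equator (simplified, spherical-Earth model)."""
    return 1.0 / math.cos(math.radians(lat_degrees))

for lat in (0, 30, 60, 75, 85):
    print(f"{lat:>2} degrees latitude: inflated about {mercator_scale(lat):.1f}x")
```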
The most important innovator to step into this breach was actually not a cartographer at all, but an artist. Beginning in the late 1930s, Richard Edes Harrison drew a series of elegant and gripping images of a world at war, and in the process persuaded the public that aviation and war really had fundamentally disrupted the nature of geography.
Harrison came to maps by chance. Trained in design, he arrived in New York during the Depression and made a living by creating everything from whiskey bottles to ashtrays. One day, a friend at Time asked him to fill in for an absent mapmaker at its sister publication, Fortune. That unexpected call led to a lasting collaboration, one where Harrison used techniques of perspective and color to translate the round earth onto flat paper. In fact, Harrison considered his lack of training in cartography an advantage, for he had no fixed understandings of what a map should look like.
Throughout the war, Harrison dazzled readers of Fortune with artistic geo-visualizations of the political crises in Europe and Asia. The key decision he made was to reject the Mercator projection, which had outlived its purpose. Instead, he drew on other projections, such as this one from 1943, centered on the north pole and which drew Eurasia and North America together, though distorting the southern hemisphere as a result.
In the original version of the map, from August 1941, Harrison blackened the entire Soviet Union as part of the Axis to reflect the recent German invasion. In the edition two years later here, the Soviets have aligned with the Allies, and the threat of the Axis appears more limited. But in either edition it was impossible to ignore the prospect of American stewardship that gradually—but completely—displaced the isolation of the 1930s. For Harrison, the polar projection was the new geographic reality, one that necessitated American internationalism.
Harrison's most notable legacy was a series of colorful and sometimes disorienting pictures (not quite maps) that emphasized relationships between cities, nations, and continents at the heart of the war. These maps were published in Fortune, then issued in an atlas that became an instant bestseller in 1944.
The most powerful of these images anticipated the perspective of Google Earth. Here Harrison reintroduced a spherical dimension to the map, focusing on the theaters of war in a way that—for instance—rendered the central place of the Mediterranean and the topographical obstacles facing any invasion of southern Europe. National borders were secondary to regional configurations, and the viewer was forced to reckon strategically with the complex terrain.
Harrison's ability to play with scale evokes the perspective of a pilot, but one placed at an infinite distance. Cartographers were quick to point out that no such perspective existed in nature, yet by drawing the topography with such care Harrison made the terrain far more real than it had been in the abstract representation of mountains used on traditional maps. His map of Russia from the south, created just before the end of the Hitler-Stalin Pact, powerfully illustrates the sheer size of the Soviet Union and its population. Notice his use of light and shadow to depict the multiple time zones encompassed by the Russian landmass, while the graphic at the lower right captured the massive growth of the urban population in the western regions.
With his imaginative use of color, Harrison generated spatial depth in a way that gave the public a vivid picture of places that otherwise remained foreign, such as this close up detail of his map of Japan from Siberia.
The public welcomed Harrison's images: The first edition of his atlas sold out before it even hit the shelves, and throughout the war he was inundated with requests for maps drawn with his signature techniques. Most professional cartographers celebrated his provocative style for its ability to foster a more dynamic understanding of geographical relationships, and the military hired Harrison to make maps for soldiers in the field and to help train pilots to understand regions that had yet to be photographed from the air. It was not long before Time, Newsweek, and the wire services began to experiment with new visual mapping techniques popularized by Harrison. The cartographic logjam had been broken, for Harrison's views struck a chord with a public hungry for information.
By 1944, Harrison had become a minor celebrity for his elegant pictures of global crisis, ones that could be intuitively understood by readers of widely varied levels of literacy and sophistication. His startling views of Japan from Alaska and the Solomon Islands brought home the proximity of the Axis and prepared the public for a dogged fight in the Pacific. Such a view was entirely absent from traditional maps of the north Pacific, which comfortably distanced Japan and Asia from North America across a massive ocean.
Harrison's critics claimed his work was more propagandistic and pictorial than scientific and reliable, governed by caricatures of the globe rather than fidelity to latitude and longitude. Fair enough. But his goal was to wrench Americans out of a two-dimensional sense of geography, and embrace an understanding of perspective and direction. In the process he reintroduced a sense of artistry and drama that directly affected the look and feel of popular maps.
When I interviewed Harrison in New York at the end of his life in 1993, he still insisted that I call him an artist rather than a cartographer, for he disdained the constricted techniques of mapmakers who were hidebound by convention. In fact his images sit somewhere between art and cartography, supplying the missing link between the globe and the map. The term "global" has become a cliché in the early twenty-first century, but in Harrison's case it's an appropriate characterization. In redrawing the map of the world, Harrison contributed to a reconsideration of America's role in that world. |
Written by Ada’s Medical Knowledge Team
What is pancreatic cancer?
Pancreatic cancer is a cancer arising from the pancreas, a digestive organ located in the upper region of the abdomen behind the stomach. This condition tends to affect older adults and people who have other medical conditions, especially conditions of the pancreas. Symptoms are nonspecific and often occur late, which complicates diagnosis and worsens the outlook after diagnosis. It may lead to unexplained weight loss, loss of appetite, chronic back pain and pain in the upper abdomen. Diagnosis is made by MRI or CT scans. Treatment involves surgery, chemotherapy and, sometimes, radiotherapy. People who are diagnosed in the early stages have a better chance of successful treatment, but this is not common.
Cancer occurs when abnormal cells begin to grow uncontrollably. These cells destroy the normal cells around them and can spread to other parts of the body. Pancreatic cancer involves the pancreas, an organ located in the upper region of the abdomen behind the stomach, which normally produces substances that break down fats as well as hormones that manage blood sugar. Pancreatic cancer affects mostly people between the ages of 50 and 80, and becomes more common with age. People who smoke, who drink alcohol regularly and who are obese are at higher risk of developing pancreatic cancer. People who have another condition of the pancreas, such as diabetes, long-term inflammation (pancreatitis) or pancreatic cysts, also tend to develop pancreatic cancer more commonly. In some cases, pancreatic cancer tends to run in families, and some genes and specific hereditary conditions are known to increase the risk of developing pancreatic cancer, such as Peutz-Jeghers syndrome.
There are often no symptoms in the early stages of pancreatic cancer. The most common symptoms of pancreatic cancer include abdominal pain which spreads to the back, unexplained weight loss and yellowing of the skin and eyes. Other symptoms commonly include nausea, loss of appetite and constipation. Many people find that their stools change color or consistency and appear paler and greasier than previously. Rarer symptoms include blood sugar problems and recurrent blood clots in the veins.
Diagnosis is based on the symptoms and a physical examination of the skin and abdomen which may reveal yellowing of the skin and an enlarged gallbladder. Some people are diagnosed during investigation for a cause of changes in their blood sugar levels. An ultrasound, CT scan or MRI scan of the abdomen is done to confirm the diagnosis, and a small sample of the pancreas (a biopsy) will be taken and investigated for cancer.
Treatment depends on the size and specific type of cancer and whether the cancer has spread at the time of diagnosis. These factors are used to determine the stage of the cancer. The treatment involves surgery, chemotherapy or radiotherapy, or most commonly, a combination of these. The treating doctor can give the best advice in individual cases. Emotional counseling or joining a support group may also be helpful. People who have pancreatic cancer which cannot be cured may receive treatments which aim to improve their symptoms and quality of life.
Reducing alcohol intake and giving up smoking can help to reduce the risk of developing pancreatic cancer. Maintaining good health, including maintaining a healthy weight and taking care to manage other health problems, such as diabetes, may also be helpful.
Other names for pancreatic cancer
- Malignant neoplasm of pancreas |
The evolution of transportation, just like the evolution of humankind, has gone through trials and tribulations as it has evolved through time.
It has ebbed and flowed, overcoming challenges to grow to ever-increasing levels of complexity and efficiency. Today, we often take for granted our ability to get from one place to another, nearby or distant. We expect to get there, but don’t often reflect on how, and we suppose, more than we actually know, how we move from one location to another. But throughout history, we have had to slowly but surely, painstakingly evolve our means of transportation to where it is today.
Many modes of transport have evolved and many more have gone extinct. The modes of our transportation have developed alongside the expansion of our human understanding and culture. Our greatest demands and challenges have, in turn, initiated our greatest inventive feats that have taken us from where we have come to where we intend to go. Transportation technology has been the key to our most powerful sociological and teleological growth. And as it has done so in the distant past, it will continue to do so into the distant future.
During the stone age of antiquity, we walked and ran upon the solid earth and swam and floated in dugout canoes upon the liquid rivers or seas. By 3500 BC, we began using wheeled carts and river boats. By 3100 BC, we tamed horses to assist our way. By 2000 BC, we built chariots. By 600 BC, we built wagons. By 332 BC, we built submersibles. By 312 BC, we built miles of paved roads. By 236 BC, we constructed our first elevators. By 214 BC, we built canals. By 200 BC, we constructed manned kites to fly.
During the middle ages in the 800s, we paved streets with tar.
During the 13th century, by the late 1200s, we invented sky-flying rockets.
During the 15th century, by the later 1400s, we built advanced sailing ships to cross entire oceans.
During the 16th century, we began using horse-powered rails of wood and stone.
During the 17th century, by 1620, we launched the first oar-propelled submarine. By 1662, we invented the horse-drawn bus. By 1672, we built the first steam-powered car.
During the 18th century, by 1740, we invented the foot-and-hand-powered carriage. By 1769, we experimented with the steam-driven artillery tractor. By 1760, we used iron rails. By 1776, we propelled submarines by screws. By 1783, we launched the first hot air and hydrogen balloons. By 1784, we built a steam carriage.
During the 19th century, by 1801, we ran steam road locomotives. By 1803, we ran commercial steam carriages and steamboats. By 1804, we built steam-powered railway locomotives and amphibious vehicles. By 1807, we used hydrogen-powered internal combustion engines in boats and road vehicles. By 1816, we invented bicycles. By 1820, we used steam locomotives on rails. By 1821, we used steam-powered monorails. By 1825, we began using steam-powered passenger carriages. By 1838, we built the first transatlantic steamship. By 1852, we invented the elevator. By 1853, we built aircraft gliders. By 1862, we made gasoline engine automobiles. By 1867, we began using motorcycles. By 1880, we built electric elevators. By 1896, we built electric escalators. By 1897, we had the steam turbine and electric bicycle.
During the 20th century, by 1900, we built airships. By 1903, we flew motor-driven airplanes and sailed in diesel engine canal boats. By 1908, we drove gas engine automobiles. By 1911, we launched diesel engine driven ships. By 1912, we launched liquid-fueled rockets. By 1935, we built DC-3 transport aircraft. By 1939, we built jet engine-powered aircraft. By 1942, we launched V2 rockets. By 1947, we had supersonic manned flights. By 1955, we had nuclear-powered submarines. By 1957, we launched a man-made satellite into orbit — Sputnik 1, built container ships and flew commercial Boeing 707s. By 1961, we launched the first manned space mission orbiting the Earth. By 1969, we flew Boeing 747 wide body airliners and made the first manned moon landing — Apollo 11. By 1971, we launched the first space station. By 1976, we flew the supersonic Concorde passenger jet. By 1981, we flew the Space Shuttle. By 1994, the Channel Tunnel opened.
During the 21st century, by 2001, we launched the first self-balancing personal transport. By 2004, we operated commercial high-speed Maglev trains and launched the first suborbital space flight — SpaceShipOne. By 2012, we had probed and viewed beyond the edge of our solar system with the Voyager 1 spacecraft.
So, where do the remainder years of this century and the future of transportation now take us? Back to the moon, to Mars, or to Jupiter and beyond? Will we continue on our pioneering quest for those proximal and then most distant planets and stars that at present we have only a dim apprehension of? Will we probe the greatest depths and heights of the Earth and exceed the greatest speeds and teleport the holographic particle forms of our most creative imaginations? Will there be an end to our inventive and transportation horizons? Or will we continue to go where no man or woman has ever dared to go before and beyond?
It is our nature to explore, encompass and conquer the world and the many potential worlds we now appear to know. Our anthropology demonstrates this ever-expanding quest for awareness and influence. The history of our evolving transportation and the mystery of its future will be in our hands, hearts and minds. The only limits to our creative and inventive endeavors will be those we self-impose and those of the misunderstanding of our constraining and possibly liberating laws of the universe. May we step out onto the diving board of life and take the next quantum leap into the inspiring and inventive frontiers of the transportation minds of the future. May we let no boundary stop us from our ultimate destiny to reach for the stars. May our compelling desire to know the universe lead us onward and outward to those new and broader transportation horizons of tomorrow. Wow, what an inspiring reflection upon from where we have come and to where we will go. |
Grade 1 Language: If You Give a Pig a Pancake
Sharpen your grade school students' comprehension and listening skills with a story telling activity. Use this lesson plan as a guide on how to go about the learning process. Story telling is also a great way to stimulate a child's imagination.
Debbie Haren, Preschool Teacher
To teach students about the dangers of going with strangers.
- Lesson 1: STRANGERS- Buddy system
- Lesson 2: STRANGERS- Adults do not need help from a child!
- Lesson 3: STRANGERS- Have a password
- Lesson 4: STRANGERS- Know some ways to get away
- Lesson 5: STRANGERS- Know your phone number and address
Ask students if their mom or dad have ever talked to them about going anywhere with strangers. Have the students that answer yes tell the rest of the class what their parents or grandparents told them. Ask them why they think this is important for them to remember what to do.
Lesson 1: STRANGERS - Buddy system
- popsicle sticks
- scraps of material
- white paper (sturdy)
Talk to students about the importance of always going somewhere together. Every student should pick a person to be their "buddy" for the day.
Explain to the kids that it is always important to have someone with you in case something should happen to you. That way the other person can go for help.
It is also very important that a grown-up always knows where you are! That way they know you are safe.
If you are playing outside it is VERY important that a grown-up or another child is with you so no one tries to take you in their car!
- Have each student make two stick puppets out of paper and then use the material scraps to make a dress or a shirt to put on each puppet. Have them make two puppets each to remind them that they always need a buddy with them!
Lesson 2: STRANGERS - Adults do not need help from a child!
Talk to students about how some bad adults try to lure children away by telling them they need help carrying something or getting something out of their car. If someone asks them for help, they should say, "No, but I can get another adult to help you!"
NEVER get close to a person's car to look at something or to get something from the person, such as candy or a game.
Always walk home with someone after school.
I really feel strongly about this subject and I hope other teachers will use this information to help children avoid being abducted by strangers!
Lesson 3: STRANGERS - Have a password
Talk to your students about the importance of not listening to anyone who says your mom or dad sent them to pick you up. Explain to students that many children have been taken from their schools and playgrounds because they were told their mom or dad sent someone to pick them up!
- Have students go home with a letter from the teacher explaining the week's theme of STRANGER DANGER. Explain in the letter the importance of having a code word that a person has to know in order to pick up the child. Make sure that only the child and the mom or dad know that code word! Also have the mom or dad spend some time talking with the child about what to do in case the child gets lost somewhere, such as the mall!
- Have the students come back to school the next day and talk about the discussion with their mom or dad. Also the teacher could bring in a missing child's poster and talk about how this child was taken and their mom and dad miss them very much!
Lesson 4: STRANGERS - Know some ways to get away
Talk to kids about what they could do if someone grabbed them and tried to take them to their car.
- Some of the best things to do are: scream and yell, "This person is taking me!" Another thing to do, if there are not many people around, is to bite the person very hard and then run as fast as you can!
- Have the kids practice yelling and saying, "This person is trying to take me!" Get them used to being assertive! Most children are not used to reacting quickly in situations like this, and the more prepared they are, the faster they will be able to react.
Lesson 5: STRANGERS- Know your phone number and address
Many children who get lost do not know their full name and address. Explain to the children how important it is to talk clearly and slowly so people who can help them can understand what they are saying. Tell children if they are lost it is important to go to a grown-up. If a police officer or someone who works at the place is around that is the first person they should go to.
- Have on a file card each child's name and address along with their phone number.
- Practice with them saying their name and phone number. Make sure they are saying it clearly and slowly so it can be understood.
- It is also important that the children know their mom's and dad's first names. Have them tell you that also.
- I would send home a letter to the parents about what you are working on and have them practice this at home also!
A lab demo of a multiband photovoltaic device was shown.
As you might know, typical photovoltaic panels have efficiencies between 15 and 20%. This is because they convert only a limited part of the light spectrum, and rays falling outside that spectrum are not converted to electricity. But this might change soon.
The demonstration was done using RSLE’s IBand technology and is the first known intermediate band solar cell reduced to practice in a laboratory demonstration.
Thin-film solar cells have two main advantages: they are cheaper to manufacture than traditional silicon solar panels, and they are flexible and easily adaptable to any surface. Unfortunately, until now their efficiency has been limited to about 9%.
With RSLE's iBand technology, however, it is possible to stack several thin-film solar cells, each of which captures a different part of the solar spectrum. The experimental samples were produced using commercially available technology, so production could begin relatively soon.
The company said that this technology shows great promise for thin-film solar efficiencies above 35% by potentially capturing the full solar spectrum. The intermediate band solar cell developed by RSLE is a thin-film technology based on the discovery of highly mismatched alloys.
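As a rough, hypothetical illustration of why stacking sub-cells can raise overall efficiency, the toy model below assigns each sub-cell a non-overlapping slice of the solar spectrum and its own conversion efficiency; all of the numbers are invented for the example and are not RSLE data.

```python
def stacked_efficiency(subcells):
    """Toy model of a multi-band stack: each sub-cell captures a distinct,
    non-overlapping fraction of the incident solar power and converts that
    fraction with its own efficiency. All numbers below are invented for
    illustration and are not RSLE data."""
    return sum(fraction * efficiency for fraction, efficiency in subcells)

# (fraction of solar power in the band, conversion efficiency of that sub-cell)
stack = [(0.30, 0.45), (0.40, 0.40), (0.30, 0.30)]
single_thin_film = 0.09  # roughly the single-cell figure mentioned above

print(f"Hypothetical stack: {stacked_efficiency(stack):.0%}")  # about 38%
print(f"Single thin film:   {single_thin_film:.0%}")
```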
Written by IEEE | November 29, 2016
In Paraguay, there are a large number of upper limb amputations due to bad working conditions and motorcycle accidents. Many people are also in the low income category, and they cannot afford the prosthesis. With advanced manufacturing, particularly with the use of 3D printing, a company is able to create sophisticated prosthetics at a low cost.
The company, called PO, has combined 3D printing with a control mechanism to make an arm that can perform specific actions. They teamed up with a company called Myo incorporating their armband that controls the mechanical aspect of the hand. Their armband monitors bioelectric muscle signals and interacts with the prosthetic, allowing a user to grip items and gesture as if the arm was part of their body.
Because the company is using 3D printing to build the hands, the cost of the prosthetic is much lower than that of a traditional one. Incredibly, they can make 100 3D-printed hands for the price of one traditional prosthetic. While there is still a cost associated with fittings and the components, much of the price is covered by private donations. The technology behind the project is open source, so that other people can work to make an innovative product that can help people around the world.
A three-dimensional material could one day mimic the behavior of living cells in tissues, new research shows. The tissue-like materials, developed by Hagan Bayley, Ph.D., of Oxford University and colleagues Gabriel Villar and Andrew J. Heron, were described in this week's Science and in Science Express.
The tissue-like constructs consist of thousands of connected water droplets, encapsulated within lipid films. These printed “droplet networks” could, the authors say, form the building blocks of a new kind of technology for drug delivery, potentially replacing or interfacing with damaged human tissues.
In an earlier publication, the authors described structures—“multisomes”—in which networks of aqueous droplets of defined compositions were encapsulated within small drops of oil in water. The encapsulated droplets adhere to one another and to the surface of the oil drop to form interface bilayers that allow them to communicate with each other and with the surrounding aqueous environment through membrane pores.
The droplet contents can be released by changing the pH or temperature of the surrounding solution, and, they said, the multicompartment framework of multisomes mimics a tissue. In the current application, the authors used a three-dimensional printer that ejects individual water droplets containing all of the necessary chemicals and biochemicals.
The printed networks are then assembled on a tray that moves to establish the position of each ejected droplet. The droplets stick together and are separated by a single thin membrane into which pores can be placed to allow communication between neighboring droplets. The researchers demonstrate that the printed material can make folding movements similar to muscle-like activity and has communication networks that operate like neurons.
Dr. Bayley said of the printing advance reported in Science this week, “Conventional 3D printers aren't up to the job of creating these droplet networks, so we custom built one in our Oxford lab to do it. At the moment we've created networks of up to 35,000 droplets but the size of network we can make is really only limited by time and money. For our experiments we used two different types of droplet, but there's no reason why you couldn't use 50 or more different kinds.”
He added, “We aren't trying to make materials that faithfully resemble tissues but rather structures that can carry out the functions of tissues. We've shown that it is possible to create networks of tens of thousands connected droplets. The droplets can be printed with protein pores to form pathways through the network that mimic nerves and are able to transmit electrical signals from one side of a network to the other.”
Gabriel Villar of Oxford University's department of chemistry and the inventor of the 3D printer used in the currently reported research said, “We have created a scalable way of producing a new type of soft material. The printed structures could in principle employ much of the biological machinery that enables the sophisticated behavior of living cells and tissues.” |
The heterogeneity of ASD poses both challenges and opportunities to researchers: challenges, because there are likely to be many different causal factors and trajectories for ASD subtypes, and opportunities, because recognition of the variety of ASD phenotypes can lead to more appropriate diagnosis, more precisely targeted treatments and supports, and can increase public awareness about the diversity inherent in ASD.
We know that not all cases of ASD are the same. Researchers have learned that there are many factors that vary amongst the symptoms and the severity of the symptoms associated with ASDs. Other factors such as the age-of-onset, as well as the strengths and weaknesses that individuals with an ASD possess, also vary a great deal. ASD-CARC researchers believe that by identifying and studying these variable characteristics, also called “profiling”, we will be able to classify distinct subgroups of ASDs – subgroups that will have similar etiologies and respond to the same therapies or have the same support needs.
By studying Autism Profiles, we hope to identify different subgroups of ASD. These subgroups will provide clues that will help us understand some of the very earliest signs of developmental differences or anomalies.
We believe that the distinctive subgroups of ASDs may respond differently to a variety of treatments (e.g., dietary, ABA, educational strategies). Very careful clinical assessments will hopefully lead to our separating families into different subgroups based on subtle differences in the behaviour/symptoms and/or physical features of the affected individuals (i.e. through the creation of Autism Profiles).
It is important to learn whether genetic or environmental differences exist that could account for subgroups of ASDs, and the different responses to the variety of treatments and supports used with individuals with an ASD.
Since some characteristics are familial, rather than specific to an ASD, we encourage all family members to take part in all of our studies. This includes the individuals with ASD, parents and typically developing siblings or other family members.
In terms of physical features, researchers have found that abnormalities of ears are common in autism, but we know that not all children with autism have abnormal ears. If we study a subgroup of children with these ear anomalies, will these children have other characteristics in common that, together, might constitute a clinical subgroup or "Autism Profile"? Studying groups of children with ASD who share physical or behavioural features is more likely to give us a clearer picture of ASD "subgroups" than if we combine our findings on all children with ASD.
In order to identify physical differences that are not evident to the naked eye, we are using 3D-facial imaging to study the faces of individuals with ASDs and their family members. These cameras are located at some of our sites, as well as within the Mobile Labs. We have identified some differences in the faces of individuals with ASDs that are not detectable except through this technology and believe that this will lead us to better understanding early developmental differences that occur in the formation of the brain and facial features of persons with ASDs and related disorders.
It is also true that there are marked differences in the behavioural or neurophysiological characteristics in children and adults with ASDs. One subgroup of children may have, for example, gastrointestinal problems or sleep disorders. Ultimately, we want to compare each "subgroup" (defined on behavioural or physical features) using genetic studies, to determine whether there is a common clinical/behavioural profile associated with each set of genetic differences ("genotype") or environmental exposures.
Some of the behavioural characteristics we are interested in measuring in children with ASD are those being assessed through our on-line questionnaire studies (sleep problems, gastrointestinal and diet problems). All families are encouraged to participate in these on-line studies!
Write the introduction and conclusion
The introduction and conclusion are the remaining paragraphs to be included in your essay.
The introduction should be written in such a way that it attracts the reader's attention and gives the reader an idea of the essay's focus.
- Begin with an attention grabber.
The attention grabber you use is up to you, but here are some ideas:
- Startling information: this information must be true and verifiable, and it doesn't need to be totally new to your readers. It could simply be a pertinent fact that explicitly illustrates the point you wish to make.
- An anecdote: an anecdote is a story that illustrates a point. Make sure your anecdote is short and relevant to your topic. This can be a very effective opener for your essay, but use it carefully.
- Dialogue: an appropriate dialogue does not have to identify the speakers, but the reader must understand the point you are trying to convey. Use only two or three exchanges between speakers to make your point, and follow the dialogue with a sentence or two of elaboration.
- Summary information: a few sentences explaining your topic in general terms can lead the reader gently to your thesis. Each sentence should become gradually more specific, until you reach your thesis.
- If the attention grabber was only a sentence or two, add one or two more sentences that will lead the reader from your opening to your thesis statement.
- Finish the paragraph with your thesis statement.
At the end of your essay you should draw a conclusion - sum up your points or provide a final perspective on your topic.
The conclusion should be three or four strong sentences which do not need to follow any set pattern. Simply review the main points or briefly describe your feelings about the topic. Even an anecdote can end your essay in a useful way.
The introduction and conclusion finalize your essay but there's one more step before your essay can be considered finished.
What are examiners looking for in the GCSE Language Exam Writing Section? This article is written specifically for AQA but it is relevant to OCR too, and to a lesser extent to WJEC.
The major difference with WJEC is that this board places a massive emphasis on writing in exactly the correct format. For instance, formal letters must be set out correctly, and an informal piece will be marked down if it isn't chatty enough!
Spelling
- Is it accurate in easy words? What are the easy words?
- Is it accurate in tricky words?
Punctuation is correct, and is used effectively.
- Simple punctuation is present and correct: capital letters, full stops and commas. Get help with this.
- Long (complex) and short (simple) sentences are used (correctly). Find out more here.
- Some fragments are used - always for effect, not by mistake
- Sentences start in different ways. Find out how to do this and get examples.
- Use semicolons
- Use dashes - singly or in pairs - and parentheses (brackets)
- Use exclamation marks sparingly (not more than once)
- Use ellipsis correctly, and sparingly, also not more than once (...)
Ramp up your vocabulary by reading more and collecting new words to use. Try this list.
- Long and short paragraphs are used. In other words - one paragraph can be one short sentence long. Yes, honestly. It's easy to do - harder to do skilfully, so start practising now.
- Plan carefully.
- Make sure the first sentence of each paragraph gives a good clue as to what it will be about. This is sometimes called the 'topic sentence'.
- Make clear, general points, and give more than one example to prove them. Explain clearly. Use sensory or emotive language.
- If you can, make sure paragraphs flow into each other. How? Create a mini-cliffhanger or mystery or question at the end of a paragraph. Then answer or solve it in the next.
Connectives
Use these well to connect and expand your ideas, showing cause and effect, relationship and sequence. Get simple connectives here, or go here for a super-thorough list of more advanced connectives.
Use Clever Techniques
Get a complete, huge list of these here. You've studied how other writers use them (I hope). Now it's your turn.
Can I make up facts and statistics for a newspaper article?
Yes, as long as they're not ridiculous, are believable and you do it no more than twice. For example, rather than write '46.5 percent of the population are obese', write 'almost half the population are obese'; or 'two thirds', or write 'most people' or 'few'.
Can I make up direct quotations, anecdotes or 'interviews' for the language exam?
Yes, but do it very briefly. One sentence should be enough. Less is more. |
Project 2: the Story of Stuff Inspired Infographics
Here is a series of information design posters based on improving the environment. The students had to go to the Story of Stuff website, select a topic (such as “how plastic water bottles are poisoning our environment”), and design an infographic around it.
Project 1 – The Roswell Incident Infographic
Objective: The idea is to tell a story without using words.
Process: This is the first project that the students are assigned. They are given a synopsis of the alleged flying saucer crash in Roswell, NM in 1947. They must research the story and develop an infographic in order to tell the story without words. |
On any night, the stars seen in the sky can be as close to Earth as a few light-years or as distant as a few thousand light-years. Distances this large are hard to comprehend. In this article, we explore how astronomers measure the distances to stars and learn about the roles of observation and inference in the development of scientific knowledge, a critical aspect of the nature of science. The goal of this article is to help teachers and students develop understandings about the size of the universe and how science can tell us so much about things we cannot observe directly. |
What is the importance of speech communication? Principles of Speech Communication.
This article looks at the importance of speech and its influence upon human action, and outlines the principles of speech communication, including the four basic speech types: persuasive, argumentative, educative, and informative. It also explains why a speechmaker must justify his or her position and must satisfactorily study the subject before speaking.
Importance of Speech
If we look at the history of mankind, there has never been a time when any other form of communication equaled spoken words in terms of their value and importance. If one considers the development of mankind in its initial phase, one will find that the savage's wandering family depended entirely upon what its members said to one another.
A little later, when a group of families formed a clan or tribe, individuals still heard the speeches of their leaders, or voiced their own opinions in tribal meetings. The spoken words of tribal leaders were viewed with respect and were obeyed without objection.
Importance of speech upon human action
This effect is similar in nature to what we now feel when we hear our national anthem in a soccer match. The drama was another popular form of entertainment, which was a valuable spreader of knowledge and religion in all primitive societies. If one analyzes the components of a drama, expressions, spoken words, and the pitch and tone of an actor’s voice play an integral role in the success of a drama.
Every great epoch of the world's progress shows the supreme importance of speech upon human action, individual and collective. The history of the United States might almost be written as a continuous record of the influence of great speakers upon others. The colonists were led to concrete action by persuasive speeches.
Whether it was a small tribe in the Stone Age or a large nation such as the Roman Empire, speech and spoken words have always played a big role in the individual and collective lives of the people.
Principles of Speech Communication
Speechmaking is perhaps one of the innate abilities of man, irrespective of one's citizenry or ethnic affiliations. Yet many people speak without realizing that it is a special ability without which communication between people and groups would not be possible.
Speech communication differs from normal day-to-day talking, in which one speaks sporadically without considering ethics and skills. It is, however, similar to everyday communication in that both are driven by the aim to communicate meaningfully.
Speechmaking is organized communication, aimed at sharing specific messages about a given subject in order to create an impact on solving human problems.
Generally, there are, for convenience's sake, four basic speech types: argumentative, persuasive, educative, and informative.
This article provides guidance in the following areas: types of speech, stages/steps in the speechmaking process, and the structure of a speech. The onus remains squarely on every speechmaker to identify the type of speech most suitable to his/her purpose. For emphasis, it should be known that the aim of your delivery should be the sole factor dictating the style/type of speech you choose to use.
Argumentative Speech Type
Arguments imply the elaborate presentation of all perspectives on an object or subject of discussion before settling on the most applicable option. What comes out of an argument as most acceptable may not necessarily be truer, or better, than the other options, but the process of arguing makes it the best when compared to the others.
This is why anyone choosing this type must bear in mind that it is his/her approach to it, and the ability to convince, that determines the success or failure of the entire process. While this may be closely related to the persuasive type, the dissimilarity lies in using points to convince at all costs.
To argue, therefore, the speechmaker needs to clearly and exhaustively raise every point of the issue and state facts about it; this statement of facts is the "why" of the validity, or otherwise, of your argument.
Persuasive Speech Type
As the name indicates, this type of speech stimulates a favorable attitude towards the subject of your concern, or appeals to the audience to see it your way and act as you desire.
Companies, individuals, and non-governmental organisations that depend on project grants often need to present their proposals briefly before forums of grant agencies. In doing this, they are expected to give a brief, straight-to-the-point rundown of what they propose to do to achieve a goal if given a grant.
This summary must necessarily include a statement of methodology and a justification of why it has to be your proposal and not that of another. You must convince the audience that, using your chosen method, you will be able to achieve the set goals within the specified time, without waste of resources. And this you must do beyond doubt.
What Must the Speechmaker Tell?
A high point worthy of emphasis: to persuade, a speech giver must tell why he or she believes the chosen method is best suited to produce the best results. The entire exercise will be meaningless if it fails to provide this justification.
Students defending their research projects, theses, or dissertations ought to bear this in mind as well, as they will at one time or another need to persuade their tutors in favor of their work.
Educative Speech Type
Although teaching in a classroom situation requires more than speech-making skills, it would serve you well as a professional teacher, having undergone training in the profession, to add these to your skills. As one who teaches in a school or a religious organization, one makes speeches often, both officially and otherwise.
Advertising agencies also make use of this type of speech in product display demonstrations, to teach prospective consumers of a new product a step-by-step approach to using it.
An educative speech provides a comprehensible how-to-do-it guide to given subjects and must be done carefully to avoid confusing consumers/students/audience/congregation.
Informative Speech Type
The aim of this class of speech is to make information known. It may come in the form of a presentation that the speechmaker delivers to the audience, or perhaps a press release.
Whichever the case, both the writer and the giver of the speech must choose words carefully in order not to mislead, as the aim is to give accurate, unmistakable information at press conferences, organizational report forums, annual general meetings, state-of-affairs reviews, and so on.
Where and when necessary, consult with people such as experts who have a better technical understanding of the subject than you, and to these pose ALL your questions; let their answers be the knowledge with which you confront the exercise. These answers should be the basis of the speech you present.
In doing this, you should avoid stating the obvious. By this I mean that elements that can easily be deciphered and understood should not be your primary aim to explain; rather, you will do more good by concentrating on what is not obvious to the layman, and on these points place your emphasis.
If your speech is a political manifesto
If, for instance, your speech is a political manifesto, it will be more profitable to describe in detail what you intend to do to solve certain societal problems and how you will conduct yourself in office than to dwell on the might of your political party or on the electioneering process. Your audience already knows how to vote and how strong your party is.
If, on the other hand, your concern is a product or service (as a PR practitioner or advertiser), or an issue as intangible as those handled by spiritual leaders and program facilitators, seek out beforehand opinions and opposing views about the product/service/issue.
Be sure to find out details about the product/service/issue, such as how it functions or the implications of every standpoint in an issue. It is only this detailed understanding of the subject that places you above your audience and enables you to give answers to their every question, including the ones they are not able to ask.
When you have satisfactorily studied the subject
When you have satisfactorily studied the subject of your presentation, you should also endeavor to study the people to whom you will be speaking. This may require going the extra mile to study the various groups of people likely to be present at your presentation, as well as their depth of understanding of the subject. Their depth of understanding of the language of communication is also important, as this guides your diction and ensures proper understanding.
You may, as well, need to take a closer look at the place and time of your presentation. Though this may not be of the same relevance as the first two, it is advisable, because the place and time of an event contribute, to a large extent, to the atmosphere of the event and to effective communication.
The atmosphere is as important as the message itself as it colors the meaning of a message. This is why “good morning” at a time may be a greeting and at another time, a disturbance, as “yes” may mean yes at times but mean “no” at other times. |
Thomas Paine's Common Sense helped Americans "decide upon the propriety of separation,” as George Washington said.
In May 1775 the Reverend Jonathan Boucher, rowing across the Potomac, met George Washington rowing in the other direction on his way to the Continental Congress. The two conversed briefly on the fate of the colonies, and Boucher asked Washington if he supported independence. “Independence, sir?” Washington replied. “If you ever hear of my joining in any such measure you have my leave to set me down for everything wicked.” Even when he took command of the army in July, Washington later admitted, he “abhorred the idea of independence.” So what changed his mind? By his own admission, it was more than anything else the 47-page pamphlet Common Sense, written by a little-known Englishman named Thomas Paine and published January 10, 1776.
Until that day most colonists, like Washington, hoped to regain the rights afforded to all other British subjects. But as Washington wrote, “the sound doctrine and unanswerable reasoning contained in the pamphlet Common Sense will not leave numbers at a loss to decide upon the propriety of separation.” Paine convinced an America already at war with Britain that it was fighting not merely for lower tariffs or the right to elect representatives to Parliament but for its own inevitable independence.
Born into poverty in Thetford, England, in 1737, Paine failed at marriage and a string of jobs before he was 37. His lower-class status and debts shut him out of politics; this sharpened his sense of injustice and left him suspicious of government. He labored to educate himself, reading the leading political thinkers of the time. In London in the early fall of 1774 he met Benjamin Franklin, who persuaded him to emigrate to America. And so, Franklin’s letters in hand, Paine left to start over his life in the New World—little realizing he would help start over the New World’s life as well.
He caught typhus during the nine-week, 3,500-mile sea voyage and had to be carried ashore in Philadelphia, on November 30, 1774. During his month-long recuperation he began to seek out the city’s intelligentsia, aided by Franklin’s letters of introduction, and by January he had landed a job at Pennsylvania Magazine. Apart from the typhus, he flourished in his new home. No longer an outcast because of his income or his political views, he fit right in at the center of American political foment. While working at the magazine and attending meetings of many debating, literary, and scientific clubs, he began to develop new, radical opinions.
In America he saw a perfect reverse image of England. Where England was rotten and corrupt, America was pristine and egalitarian. His thinking expanded to fill the borders of his new country, and as he became convinced of the colonies’ virtue and destiny, the abstract theories of independence and representative government he had debated over pints crystallized into realities worth dying for.
But despite all the bloody events that had begun in April 1775, most colonists shied away from advocating complete separation from England. Shots had already been fired at Lexington and Concord, but the colonists continued to think of their fight as one for the rights accorded to all other Englishmen, not for independence. Some in the elite, alarmed at growing popular participation in politics, feared that a political revolution might lead to a social revolution as well, or perhaps even anarchy, opening the door for military dictatorship. Some could not break from the loyalty to the crown they had learned in the nursery—and furthermore enjoyed being protected by the king. Mightn’t throwing off British oversight make them vulnerable to invasion from a worse power, like the tyrannical Bourbons? Some simply feared charges of treason if they failed to defeat the most powerful military in the world. Independence, like sedition or unwed pregnancy, remained a topic discussed in hushed tones behind closed doors.
But after King George III issued a proclamation in August 1775 declaring that “the New England governments are in a state of rebellion, blows must decide whether they are to be subject to this country or independent,” some Philadelphians began to squirm behind their closed doors. As Paine and his friend the Philadelphia doctor Benjamin Rush discussed their mutual hope for independence, Rush confided that he wanted to write a treatise explaining why the colonies should rebel. However, Rush later remembered, “I shuddered at the prospect of the consequence of its not being well received. I suggested to [Paine] that he had nothing to fear from the popular odium to which such a publication might expose him, for he could live anywhere, but that my profession and connections, which tied me to Philadelphia, where a great majority of the citizens and some of my friends were hostile to a separation of our country from Great Britain, forbade me to come forward as a pioneer in that important controversy.” Paine responded to the idea “with avidity.” Rush’s most forceful counsel was that “there were two words which he should avoid by every means as necessary to his own safety and that of the public—independence and republicanism.”
So Paine immediately set to writing the tract he would end with the line “let none other be heard among us, than those of . . . the FREE AND INDEPENDENT STATES OF AMERICA.” Writing was always torturous for him, and he toiled over the details now as much as ever. But by December he had a finished manuscript to show Franklin, Samuel Adams, and the prominent scientist David Rittenhouse, who each made a few, tiny edits (although one of Rush’s changes was destined for the history books: Paine had titled the manuscript Plain Truth, and Rush suggested Common Sense). Paine originally planned to serialize the work in a newspaper, but releasing it as a pamphlet would allow for larger distribution and protect its bolder sentiments from the blue pen of a nervous editor.
All that remained was to find a publisher brave enough to print it. Rush contacted the forward-thinking printer Robert Bell, who quickly agreed to do so for half the profits. Paine stipulated that his half should go to clothing the troops. So nine days into the momentous year of 1776, the two-shilling pamphlet “burst,” in Rush’s words, “from the press with an effect that has rarely been produced by type and paper in any age or country.”
The hundreds of thousands of members of the British empire who would read Common Sense in the coming months were in for a shock. Paine immediately tore into the monarchy, calling it not only rotten and despotic at bottom but also downright silly: “One of the strongest natural proofs of the folly of the hereditary right in kings, is that nature disapproves it, otherwise, she would not so frequently turn it into ridicule by giving mankind an ass for a lion.” William the Conqueror, the founder of the English royal line, was “a French bastard landing with an armed banditti, and establishing himself king of England against the consent of the natives.”
Having argued that monarchy as a system was fundamentally tyrannical and led to constant bloodshed, he answered every common argument against American independence. To those who hesitated to break from the “mother country,” he wrote, “Even brutes do not devour their young, nor savages make war upon their own families.” To those who doubted that the colonies could best Britain at war, he wrote, “there is something very absurd in supposing a continent to be perpetually governed by an island,” besides which, “’tis not in numbers but in unity that our great strength lies; yet our present numbers are sufficient to repel the force of all the world.” He pleaded with his readers to recognize their country’s destiny: “Freedom hath been hunted round the globe. Asia and Africa have long expelled her. Europe regards her like a stranger, and England hath given her warning to depart. O! receive the fugitive, and prepare in time an asylum for mankind.”
He then sketched in broad strokes a blueprint for an American utopia. He envisioned a strong but limited, unicameral, republican government. He counted this the most important section of Common Sense, although many modern historians deem it the weakest. He was at his best stirring up rebellious fervor, not spinning out the intricate compromises of constitution writing. He may have realized that. He left much of the government plan vague, recommending that a continental conference meet to hash over the details. Above all, he stressed the urgency that his readers act immediately.
The first edition sold out in two weeks, and at a time when the average pamphlet sold a few thousand copies, Common Sense sold 120,000 by April 1776. By then printers in nearly every colony were running their own editions; seven came out in Philadelphia alone. Colonists bought some 500,000 copies in all, in what Paine proudly called “the greatest sale that any performance has ever had since the use of letters.” By July everyone in America was familiar with the work’s content. One Connecticut clergyman even read the text verbatim from the pulpit on a Sunday in lieu of a sermon.
Many of those who read Common Sense never saw the world quite the same way again. As one Philadelphian wrote, “Common Sense . . . is read to all ranks, and as many as read, so many become converted, though perhaps the hour before were most violent against the idea of independence.” Paine’s words put the spark to the revolutionary fire latent in many colonists, defining as well as articulating their beliefs. As the idea of independence grew from heresy to political necessity, politicians sought to help their reputations by vociferously supporting the idea, and in the debates that followed, the shape of the future government began to evolve.
What made the little pamphlet so effective? It came exactly when the colonies were ready for it. Unlike previous pamphleteers who were excoriated for their support of independence, Paine was a few steps, not a few miles, ahead of popular opinion. In the words of a Philadelphia minister, Common Sense “struck a string which required but a touch to make it vibrate. The country was ripe for independence, and only needed somebody to tell the people so, with decision, boldness, and plausibility.”
Indeed, Paine’s direct, clear style was accessible but forceful. He, unlike his upper-class colleagues, came from the audience he was trying to reach, and he knew he must avoid the usual leaden prose and precious flourishes of political writing to appeal to that audience. His examples were taken directly from the common experience of a colonist, but the rage that underlay the work could belong only to someone who had experienced the injustices of British society firsthand. In short, he was the right man at the right time, in the right place, and with the right style to ignite a revolution.
Some, of course, disagreed with Paine’s ideas. He waged editorial-page battle with loyalists, while a mob of angry, independence-supporting New Yorkers burned 1,500 copies of an anti-Common Sense pamphlet. John Adams wrote heatedly and often against Paine’s unicameral government, arguing in favor of a system of checks and balances. But the majority of would-be Americans agreed with Paine, and that summer they took Common Sense’s advice to draft an independence “manifesto” listing “the miseries we have endured, and the peaceful methods which we have ineffectually used for redress,” and explaining that “not being able any longer to live happily or safely under the cruel disposition of the British court, we have been driven to the necessity of breaking off all connections with her.”
Paine’s reputation would suffer in the coming years, as his Deist work The Age of Reason, arguing against traditional Christianity, made him an outcast again on two continents. But today we rightly remember him as one of the fathers of the country. “It appears to general observation, that revolutions create genius and talents; but those events do no more than bring them forward,” he once wrote. In his case, the reverse was equally true. |
Motors convert electrical energy into mechanical energy by the interaction between the magnetic fields set up in the stator and rotor windings.
There are a number of different types of electric motor, including AC induction, brushed DC, brushless DC, stepper and synchronous machines.
Design factors to consider when choosing an electric motor:
- Commutation method
- Duty cycle
- No-load speed
- Stall torque
- Load (operating) point
- Torque ripple
- Power source
- Envelope (volume)
- Heat dissipation
In an electric motor the armature is the rotating part.
Design choices that reduce particular motor losses include:
- Larger wire gauge - Lower stator winding loss
- Longer rotor and stator - Lower core loss
- Lower rotor bar resistance - Lower rotor loss
- Lower speed - lower rotor windage loss
- Smaller fan - Lower windage loss
- Optimized air gap size - Lower stray load loss
- Better steel with thinner laminations - Lower core loss
- Optimum bearing seal/shield - Lower friction loss
In most instances, the following information will help identify a motor:
- Frame designation (actual frame size in which the motor is built).
- Horsepower, speed, design and enclosure.
- Voltage, frequency and number of phases of power supply.
- Class of insulation and time rating.
Locked Rotor Current
Steady state current taken from the line with the rotor at standstill, at rated voltage and frequency. This is the current seen when starting the motor and load.
Locked Rotor Torque
The minimum torque that a motor will develop at rest for all angular positions of the rotor, with rated voltage applied at rated frequency.
A motor converts electrical energy into mechanical energy and, in so doing, incurs losses. These losses represent all the energy that is put into a motor and not transformed into usable power; instead they are converted into heat, causing the temperature of the windings and other motor parts to rise.
- Friction and Windage: primarily bearing friction and aerodynamic drag on the rotor (and can include fan loss where the motor is force-air cooled). Independent of load.
- Core Loss: primarily hysteresis losses in the rotor and stator iron caused by the fluctuating magnetic field. This is independent of load.
- Stray Load Loss: occurs in the rotor and stator iron, is roughly proportional to the square of the current, and is induced by leakage fluxes caused by load currents.
- I²R Losses: heating losses in the rotor and stator conductors caused by current flowing through the conductor resistance. As this loss varies with the square of the current, it is generally small at no load and large at high load.
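To see how these loss components combine, here is a minimal sketch in Python. All the numbers are illustrative placeholders for a hypothetical motor, not data for any real machine; the point is simply that efficiency is output power divided by output power plus the sum of the losses.

```python
# Rough efficiency estimate from individual loss components.
# All figures below are illustrative placeholders, not measurements of a real motor.

def motor_efficiency(output_w, friction_windage_w, core_w, stray_w, i2r_w):
    """Efficiency = output power / (output power + total losses)."""
    total_losses = friction_windage_w + core_w + stray_w + i2r_w
    return output_w / (output_w + total_losses)

# Hypothetical 7.5 kW motor at full load
eta = motor_efficiency(
    output_w=7500,
    friction_windage_w=120,  # bearing friction and drag: independent of load
    core_w=220,              # hysteresis losses: independent of load
    stray_w=130,             # leakage-flux losses: roughly proportional to current squared
    i2r_w=530,               # conductor heating: grows with the square of the current
)
print(f"Estimated full-load efficiency: {eta:.1%}")  # about 88% with these example figures
```

Because the I²R and stray components scale with the square of the current while friction, windage and core losses do not, the same motor run at part load would show a different balance of losses and a different efficiency.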
In order to reduce wear and avoid overheating certain motor components require lubricating. The bearings are the major motor component requiring lubrication.
Excess greasing can however damage the windings and internal switches, etc.
The stator is the stationary part of a rotating electrical machine.
Torque versus Speed
Torque versus Speed curve.
The no-load speed, stall torque, and the load point are used to establish the motor torque loadline. Knowing the no-load speed and available voltage, you can then establish an initial back-EMF constant and the motor torque constant. The stall torque, combined with the load-point torque, helps establish motor size. The duty cycle, temperature, and expected heat sinking are used with the motor size to determine the temperature rise of the motor.
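As an illustration of how these quantities relate, the sketch below assumes an ideal brushed DC motor with a linear torque-speed curve; the supply voltage, no-load speed, stall torque and load torque are made-up example values, not figures from any particular datasheet.

```python
import math

# Hypothetical example values for an ideal brushed DC motor
V = 24.0                 # supply voltage (V)
no_load_rpm = 3000.0     # no-load speed
stall_torque = 0.8       # stall torque (N*m)
load_torque = 0.2        # torque at the desired operating point (N*m)

w0 = no_load_rpm * 2 * math.pi / 60   # no-load speed in rad/s
Ke = V / w0                           # back-EMF constant (V*s/rad)
Kt = Ke                               # torque constant; numerically equal to Ke in SI units for an ideal motor
R = Kt * V / stall_torque             # implied winding resistance (ohm)

# Linear torque-speed loadline: T(w) = stall_torque * (1 - w / w0)
load_speed_rpm = no_load_rpm * (1 - load_torque / stall_torque)

print(f"Ke = Kt = {Ke:.4f}, R = {R:.2f} ohm, speed at load point = {load_speed_rpm:.0f} rpm")
```

With these example numbers the motor would run at roughly 2250 rpm at the 0.2 N*m operating point; in practice the duty cycle and thermal environment then determine whether that operating point can be sustained continuously.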
See also: AC Induction Motors, Armature, Breakdown Torque, Brush DC Motors, Brushes, Brushless DC Motors, Capacitor Start Motor, Centrifugal Cutout Switch, Commutator, Compensation Windings, Compound Wound Motors and Generators, Drum Type Armature, Electric Motor Efficiency, Electric Motor Failure, Electric Motor Noise, Electric Motor Windings, Forbes, Prof George, Gamme Ring Armature, Generator, Induction Motor, Interpoles, Laminated Core, Lap Winding, Linear Motor, Motor, Nameplate Rating, Series Wound Motor and Generator, Shunt Wound Motor and Generator, Squirrel Cage Windings, Stall Torque, Stator, Stepper Motors, Synchronous Motor, Synchronous Speed, Windage Loss. |
Every child knows what happens when three little pigs decide to build houses made of straw, sticks, and bricks. They’ll tell you that the wolf comes for all three, and he’ll huff, and he’ll puff, and blow the houses down; only the house made of brick withstands the blow, thanks to the industry of the third little brother pig.
This story is part of many kids' classroom activities and bedtime story-reading routines. The Three Little Pigs, along with other popular tales such as Jack and the Beanstalk, Little Red Riding Hood, and The Tortoise and The Hare, dominate children's bookshelves. They didn't start out that way, though. Most of the tales that kids now know were imagined and passed down in small communities across western Europe before being transported by immigrants to America.
While these mainstream tales do a good job of imparting lessons of life and love, you don't have to limit yourself to this selection. It still pays to go beyond your borders and expose children to tales from different parts of the world. It teaches them cultural sensitivity, a crucial trait for a 21st-century learner.
A View of the World from a Different Lens
Children born in the 21st century live in an increasingly interconnected world — they are global citizens who can easily meet and communicate with other people from different parts of the globe. To be sensitive, recognizant, and understanding of different cultures is a must for them to succeed and make meaningful relationships with other people.
Cultural sensitivity, in a nutshell, is recognizing that one’s culture is different, but not superior, to other cultures. It helps eliminate prejudice and plants the seeds of a more harmonious relationship with people from other backgrounds. A culturally sensitive child would love to play with anybody in the playground.
How do you make a child culturally sensitive? You start with stories.
Multicultural Literature for Kids
Stories are part of everyone’s childhood, thanks to bedtime stories parents read to them at night, the tales of magic and dragons from a cartoon, or even stories written by the entire class using an online storybook creator. By including foreign folklore into these selections, children can get a glimpse of what it’s like in another part of the globe — and learning that, value-wise, we’re not all that different after all.
Exposure to multicultural literature raises a child’s awareness of social practices, values, and belief systems of other cultures. As a result, they learn to be more empathetic towards people. Stories from across the border also prevent first- and second-generation foreign students from feeling isolated, because they can see themselves in the stories that are discussed in class.
Moreover, children will understand and relate to global issues better if they’re familiar with themes, conflicts, and characterizations found in multicultural literature.
Stories for Starters
Here are some stories you can fill your bookshelves with:
- The Elephant Who Lost His Patience (India) – An ant takes advantage of an elephant's generosity for a long time, until the gentle giant decides it has had enough. It shows how one shouldn't abuse the kindness of one's neighbor.
- The Island of the Sun (China) – Two brothers, one generous and the other greedy, are carried by a bird to the gold-laden Island of the Sun and instructed to take only one piece of gold each. The generous brother did as told. The other filled his basket and, consequently, was left by the bird on the island. The story teaches the value of honesty and generosity.
- The King's Magic Drum (Nigeria) – A tortoise was given a magic drum as compensation for his stolen food. The drum produced a feast every time it was beaten, so the tortoise grew lazy and arrogant. It demonstrates the consequences of laziness and indulgence.
As you tell children stories from all over the world, remember to go over the morals that the tales are trying to say. True, it’s easy and familiar to talk about how hares outwit lions, but it’s important too to discuss how industry brings good fortune, how kindness creates great friendships, and how quick thinking saves the day. |
I believe I can explain the Cambrian explosion.
The Cambrian explosion refers to the first appearance in a relatively short space of geological time of a very wide assortment of animals more than 500 million years ago. I believe it came about through hybridization.
Many well preserved Cambrian fossils occur in the Burgess shale, in the Canadian Rockies. These fossils include small and soft-bodied animals, several of which were planktonic but none were larvae. Compared with modern animals, some of them seem to have the front end of one animal and rear end of another. Modern larvae present a comparable set-up: larvae seem to be derived from animals in different groups from their corresponding adults. I have amassed a bookful of evidence that the basic forms of larvae did indeed originate as animals in other groups and that such forms were transferred by hybridization. Animals with larvae are "sequential chimeras", in which one body-form—the larva—is followed by another, distantly related form—the adult. I believe there were no Cambrian larvae, and Cambrian hybridizations produced "concurrent chimeras", in which two distantly related body-forms appeared together.
About 600 million years ago, shortly before the Cambrian, animals with tissues (metazoans) made their first appearance. I agree with Darwin that there were several different forms (Darwin suggested four or five), and I believe they resulted from hybridizations between different colonial protists. Protists are mostly single-celled, but colonial forms consist of many similar cells. All Cambrian animals were marine, and, like most modern marine animals, they shed their eggs and sperm into the water, where fertilization took place. Eggs of one species frequently encountered sperm of another, and there were only poorly developed mechanisms to prevent hybridization. Early animals had small genomes, leaving plenty of spare gene capacity. These factors led to many fruitful hybridizations, which resulted in concurrent chimeras. Not only did the original metazoans hybridize but the new animals resulting from these hybridizations also hybridized, and this produced the explosion in animal form.
The acquisition of larvae by hybridization came much later, when there was little spare genome capacity in recipes for single animals, and it is still going on. In the echinoderms (the group that includes sea-urchins and starfish) there is evidence that there were no larvae in either the Cambrian or the Ordovician (the following period), and this might well apply to other major groups. Acquiring parts, rather than larvae, by hybridization continued, I believe, throughout the Cambrian and Ordovician and probably later, but, as genomes became larger and filled most of the available space, later hybridizations led to smaller changes in adult form or to acquisitions of larvae. The gradual evolution of better mechanisms to prevent eggs being fertilized by foreign sperm resulted in fewer fruitful hybridizations, but occasional hybridizations still take place.
Hybridogenesis, the generation of new organisms by hybridization, and symbiogenesis, the generation of new organisms by symbiosis, both involve fusion of lineages, whereas Darwinian "descent with modification" is entirely within separate lineages. These forms of evolution function in parallel, and "natural selection" works on the results.
I cannot prove that Cambrian animals had poorly developed specificity and spare gene capacity, but it makes sense. |
Wood stores carbon dioxide
Through photosynthesis, growing trees store carbon dioxide in the form of carbon compounds. (Wood and soil also release a certain amount of carbon dioxide, but they absorb considerably more.) The faster the forest grows, the more carbon dioxide is captured. From a climate perspective, it is therefore better to manage the forest and use the wood than to leave the forest untouched. The carbon dioxide that is stored in the trees then remains in place throughout the life of the tree, even after it has been turned into a wood product. It is therefore particularly good to use wood for large and long-lasting products such as the structural frames of buildings.
The forest will not run out
For more than a century we have taken great care to look after the forests in Sweden. At least two new trees are planted for every tree harvested. The growth in our forests is thus much greater than the extraction of wood, and stocks of wood are steadily increasing. This ensures that our forests can provide a never-ending supply of construction material.
Wood has climate benefits in every phase
- Production phase
The energy required to saw and plane wood products is relatively low and the by-products (such as bark and wood chips) are used as biofuel for the sawmills’ drying kilns.
- Usage phase
– Wooden buildings and products store carbon dioxide for their entire life.
– Increasing the use of wood in construction can cut the use of other construction materials from non-renewable sources, which in turn will reduce carbon emissions. (This is called the substitution effect.)
– Wood is a flexible material, and wooden buildings are easy to refit and extend, so they can enjoy a long useful life.
– Building regulations on low energy use are easily achieved with wood construction systems. In addition, wood has good heat insulation properties, which reduces the need for extra insulation.
– Wood can be reused; flooring and windows, for example, can be reclaimed and used in another building. This prolongs the time that the carbon dioxide remains stored. Wood can also go through material recovery and be used in the manufacture of fibreboard, for example.
- End-of-life phase
At the end of their service life, wood products are used as biofuel, replacing fossil fuels. This is also an important benefit for the climate.
As long as a wood product remains in use, its carbon storage effect continues. This effect is often not included in a building’s LCA (life cycle analysis – a method for measuring overall environmental impact over the whole life cycle of the product), which is unfortunate.
An eternal ecocycle
When end-of-life wood products are used as biofuel or composted, the stored carbon dioxide is released. But in contrast to carbon emissions from fossil fuels, the incineration of wood does not add new quantities of carbon dioxide to the atmosphere. The released carbon dioxide is instead absorbed by newly planted and growing trees through photosynthesis. The circle is thus closed, and a new ecocycle can begin.
Minimal carbon footprint
So far, the main focus of the debate on buildings’ environmental impact has been on the usage phase, which is the time from completion of the building until it is demolished. But to gain a full picture of energy consumption, we should also look at the construction phase. Because even if we build zero energy buildings, the fact remains that the manufacture of the construction materials and the actual construction phase have a negative impact on the climate. It is therefore important for us to use construction materials and construction methods that have a minimal carbon footprint.
Modern construction methods using wood allow us to achieve just that: a minimal impact on the climate from buildings that also meet today’s demands for reduced energy consumption.
Wood is nature’s own solution to the climate issue. Next time you wonder how you are going to meet future demands for sustainability, think wood!
A four-storey wooden building stores 150 tonnes of carbon dioxide.
Research proves climate benefits for wood
A four-storey building in wood provides net storage of 150 tonnes of carbon dioxide, according to research by Mid Sweden University. This is because the wood stores the carbon dioxide absorbed by the growing trees. No other large-scale construction material has this capacity. (The analysis takes account of the energy consumed in manufacturing the wood, in transport and in the production of the building.)
Part of the solution to the climate issue is growing right here. Wood is both renewable and recyclable, and has a fantastic capacity to store carbon dioxide.
Building in wood has a long tradition in Sweden. And wood continues to represent tradition – but also eco-awareness and sustainability. There is no choice but to make planning and construction sustainable in the long term. Increased use of wood offers an opportunity to cut down on the use of finite raw materials and reduce carbon emissions from construction products.
A renewable material
The construction sector in Sweden emits carbon dioxide at an annual rate of 10 million tonnes. That is the same as all car traffic combined each year. Many commentators are now saying that we seriously need to review the way we build and the impact construction has on the climate. We need to see change.
And wood is going to be a vital resource in this change, with the material not yet reaching its full potential in the construction sector. We will be able to build more large buildings in wood in years to come.
The Kyoto Protocol’s international commitments to cut emissions of carbon dioxide are likely to lead to an increased use of wood in buildings. Many countries have launched national wood construction programmes as part of their strategy to replace more energy-intensive construction materials with wood. Interest and investment in wood construction techniques is growing all over the world – even in countries with little in the way of domestic forest raw material.
The key benefits of building in wood can be summarised in five points, the first four of which relate directly to the material and the fifth to the construction technique:
- Low energy consumption when extracting wood products for construction purposes from the forest, plus a large quantity of carbon neutral bioenergy stored in wood products. Carbon neutral means, in principle, that if the wood is incinerated in its end-of-life phase for bioenergy, the amount of carbon dioxide emitted at that point is equivalent to the amount originally absorbed by the tree. As such, there is no net addition of carbon dioxide to the atmosphere. The storage of carbon dioxide in the building (wood material) can be seen as a postponed neutralisation of the carbon dioxide stored.
- During the usage phase, a wood product stores carbon equating to around the same amount of atmospheric carbon dioxide as the wood product weighs (a rough worked example follows this list).
- During demolition and removal, wood products can always be sent for energy recovery. This normally releases considerably more energy than is used to produce the building. This energy is carbon neutral and replaces fossil energy sources.
- In stark contrast to other construction materials, building in wood is based on a renewable natural resource and does not consume finite raw materials.
- Producing well-insulated apartment blocks with a wooden structural frame is resource-efficient, with reduced transport and rapid assembly. In addition, the construction site does not need to be as big and the noise levels are considerably lower, much to the relief of local neighbours.
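As a back-of-the-envelope illustration of this storage effect, the sketch below converts a mass of dry wood into the mass of carbon dioxide it locks up. The carbon fraction of roughly 50% for dry wood and the example timber tonnage are assumptions chosen for illustration, not figures from the Mid Sweden University study cited above; the molar mass ratio of CO2 to carbon (44/12) is standard chemistry.

```python
# Back-of-the-envelope CO2 storage estimate for a wood product.
# Assumptions: dry wood is ~50% carbon by mass (illustrative), and each kg of
# carbon corresponds to 44/12 kg of CO2 (molar mass ratio).
CARBON_FRACTION = 0.50
CO2_PER_KG_CARBON = 44.0 / 12.0

def co2_stored_kg(dry_wood_mass_kg: float) -> float:
    return dry_wood_mass_kg * CARBON_FRACTION * CO2_PER_KG_CARBON

print(co2_stored_kg(1.0))      # roughly 1.8 kg of CO2 per kg of dry wood
print(co2_stored_kg(85_000))   # roughly 156 tonnes for a hypothetical 85 tonnes of structural timber
```

Actual figures depend on the timber species and its moisture content (a seasoned product contains a substantial share of water as well as wood fibre), which is why published estimates, such as the 150-tonne figure quoted above, are tied to the specific building analysed.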
Once wood can no longer be reused or its material recovered, for use in fibreboard and other sheet materials for example, it can still generate energy through incineration. This energy is climate-neutral and is in fact stored solar energy. The carbon dioxide released during incineration was once absorbed by the tree as it grew in the forest.
In 2008, the European Parliament approved a climate package whose overall aim is to prevent global warming from increasing by more than two degrees compared with pre-industrial levels. The EU has agreed four targets that must be met by 2020.
Based on the first three, these targets are often referred to as the 20-20-20 targets.
- Reducing greenhouse gas emissions by at least 20 percent compared with 1990 levels
- Moving towards a 20 percent increase in energy efficiency
- Increasing the share of renewable energy in final energy consumption to 20 percent
- A 10 percent share of renewables in the transport sector
We now need to gather all our strength and help each other within the industry to achieve these climate objectives. A great many initiatives have already been launched, but there is still a long way to go. Remember that your choices make a difference. |
How to Study for the GRE Quantitative Section
The GRE Quantitative section tests your critical thinking and problem-solving abilities with a section of multiple-choice math questions. In 45 minutes, you have to solve 28 math problems that cover high school algebra, geometry, and arithmetic. To get a high score on the GRE math section, it is necessary to review basic math formulas and number properties, as well as practice solving problems efficiently and quickly. Since no calculators are allowed on the GRE, you must also practice doing arithmetic by hand and in your head.
To help you study, this article breaks down the most important aspects of the GRE Quantitative section, and ways you can get a higher score.
The GRE Is a Computer Adaptive Test (CAT)
The GRE is only offered on the computer in a format called "computer adaptive testing." This means that the set of questions you will be given is not static, but dynamic. When you answer questions correctly, the computer gives you harder GRE questions that are worth more points. When you answer questions incorrectly, the computer gives you easier GRE questions that are worth fewer points. You earn a high score by answering many difficult questions correctly.
This affects GRE test takers in two ways: (1) GRE test takers must answer questions in the order they are presented without leaving any blank or skipping questions, and (2) test takers cannot go back and change their answers.
Another thing to keep in mind about the GRE is that there is a heavy penalty for not finishing the exam. If time runs out before you get through all 28 questions, the GRE will deduct many points from your score for each question left unanswered. For this reason, it is better to guess on the remaining questions when you have less than a minute left on the clock. No points are deducted for wrong answers.
On the GRE website (gre.org), you can download two free computer adaptive tests.
Types of Questions on the GRE Math Section
The 28 questions on the GRE math section are divided into two types: problem solving questions with 5 answer choices, and quantitative comparisons with 4 answer choices. Understanding how to approach each type of question will go a long way toward improving your GRE quantitative score.
Problem Solving: These are normal math problems that cover algebra, geometry, arithmetic, number properties, and word problems. Most of the time you will need to set up and solve an equation, plug numbers into an equation, or recall a mathematical property.
Quantitative Comparisons: These problems present you with two quantities labeled A and B. They can be expressions involving variables, numbers, or words. You may be given additional information that pertains to both A and B. Your task is to determine which quantity is larger, whether they are equal, or whether there is not enough information to determine a relation. On the GRE, the answer choices are always the same: choice A means quantity A is larger, B means B is larger, C means they are equal, and D means there is not enough information.
Strategies for Solving GRE Math Problems
Many GRE math questions that look complicated can be solved with standard tricks that are made for multiple choice tests. For questions that have variables in the answers, try plugging in actual numbers to see which answer is reasonable. Or for problems that have numbers as the answer choices, you may be able to plug each number into the original problem and find the correct answer by elimination.
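For instance (an illustrative example, not an actual GRE item): if a question asks "x apples cost d dollars; how many dollars do y apples cost?" and the answer choices contain variables, you can plug in easy numbers such as x = 2, d = 10, y = 4. Two apples for 10 dollars means 5 dollars each, so 4 apples cost 20 dollars; the answer choice that gives 20 with those numbers, dy/x, must be the correct one.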
The makers of the GRE reuse certain concepts over and over. Factorizations such as (x + y)² = x² + 2xy + y², and (x - y)(x + y) = x² - y² occur frequently. On the GRE, the Pythagorean theorem for right triangles is also applied frequently: a² + b² = c². Some special right triangles to consider are those with sides 3-4-5 and 5-12-13, and triangles with angles 30-60-90 and 45-45-90.
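A quick worked example of why these identities save time: 51² - 49² = (51 - 49)(51 + 49) = 2 × 100 = 200, with no long multiplication needed. Similarly, recognizing the 5-12-13 triple tells you immediately that a right triangle with legs 5 and 12 has a hypotenuse of 13, since 5² + 12² = 25 + 144 = 169 = 13².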
On GRE quantitative comparison problems, you can perform the same operations to both columns without altering the relationship between them. For example, you can add and subtract anything from both columns, or multiply and divide them by positive quantities. This will help you simplify the problems so that you can see the relation more clearly.
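As an illustrative (non-official) example: suppose Column A is (x + 3)² and Column B is x² + 6x + 10. Expanding Column A gives x² + 6x + 9; subtracting x² + 6x from both columns leaves 9 in Column A and 10 in Column B, so Column B is larger for every value of x and the answer is B.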
If you have trouble solving all the problems within the time limit, consider taking a prep course or working with a private tutor. When you take the real GRE, you will have only about 1.5 minutes per question on average. If you spend a long time on any one question, especially near the beginning of the test, you have less time for the other questions. Therefore, if you get stuck on a problem, the best thing to do is eliminate a few wrong choices, guess, and move on to the next question. Remember, it is better for your score to finish the GRE quantitative section, even if that means guessing on the last few questions, than to leave questions unanswered.
Showing Climate Change Impacts on Mountains of the World
With the generous support of the Government of Flanders (Belgium), the UNESCO Man and Biosphere Programme (MAB) and the International Hydrological Programme (IHP) developed an exhibition that features satellite images of different mountain regions worldwide, many of which are UNESCO Biosphere Reserves.
Occupying 24% of the Earth’s surface, mountains and their adjacent valleys are home to 1.2 billion people. The importance of mountains as a source of freshwater justifies their reputation as ‘water towers’ of the world. They provide numerous and diverse sources of ecosystem services, with water supply one of the most critical. About 40% of the world population depends indirectly on mountain resources for water supply, agriculture, hydroelectricity and biodiversity.
Mountains are among the ecosystems most sensitive to climate change and are being affected at a faster rate than other terrestrial habitats. Climate impacts pose an important threat to mountain ecosystem services and the populations depending on them, and have considerable effects on water resources. Many glaciers are retreating under the influence of rising temperatures, making them key indicators of climate change.
Using satellite images, the exhibition “Climate change impacts on mountains of the world” highlights the critical functions of mountains, and the implications of climate change for mountain ecosystems, water resources and livelihoods. The exhibition is displayed on the exterior fences of UNESCO's Headquarters in Paris until 15 December 2013.
This exhibition is a contribution to the International Year of Water Cooperation (2013) and was created with the support of the following partners: The Japan Aerospace Exploration Agency (JAXA), The European Space Agency (ESA), The United States Geological Survey (USGS) and Planet Action.
A high-level panel session organized during the UNESCO General Conference will also call attention to the urgent need for enhanced monitoring and modeling of climate change impacts in mountain regions, in order to further develop sustainable adaptation strategies and policies.
- More information about the exhibition
- High-Level Panel Session: Climate Change Impacts on Water Resources and Adaptation Policies in Mountainous Regions
- Man and the Biosphere Programme
- International Hydrological Programme
- International Year of Water Cooperation
leukocyte adhesion deficiency type 1
Leukocyte adhesion deficiency type 1 is a disorder that causes the immune system to malfunction, resulting in a form of immunodeficiency. Immunodeficiencies are conditions in which the immune system is not able to protect the body effectively from foreign invaders such as viruses, bacteria, and fungi. Starting from birth, people with leukocyte adhesion deficiency type 1 develop serious bacterial and fungal infections.
One of the first signs of leukocyte adhesion deficiency type 1 is a delay in the detachment of the umbilical cord stump after birth. In newborns, the stump normally falls off within the first two weeks of life, but in infants with leukocyte adhesion deficiency type 1, this separation usually occurs at three weeks of age or later. In addition, affected infants often have inflammation of the umbilical cord stump (omphalitis) due to a bacterial infection.
In leukocyte adhesion deficiency type 1, bacterial and fungal infections most commonly occur on the skin and mucous membranes such as the moist lining of the nose and mouth. In childhood, people with this condition develop severe inflammation of the gums (gingivitis) and other tissue around the teeth (periodontitis), which often results in the loss of both primary and permanent teeth. These infections often spread to cover a large area. A hallmark of leukocyte adhesion deficiency type 1 is the lack of pus formation at the sites of infection. In people with this condition, wounds are slow to heal, which can lead to additional infection.
Life expectancy in individuals with leukocyte adhesion deficiency type 1 is often severely shortened. Due to repeated infections, affected individuals may not survive past infancy.
Leukocyte adhesion deficiency type 1 is estimated to occur in 1 per million people worldwide. At least 300 cases of this condition have been reported in the scientific literature.
Mutations in the ITGB2 gene cause leukocyte adhesion deficiency type 1. This gene provides instructions for making one part (the β2 subunit) of at least four different proteins known as β2 integrins. Integrins that contain the β2 subunit are found embedded in the membrane that surrounds white blood cells (leukocytes). These integrins help leukocytes gather at sites of infection or injury, where they contribute to the immune response. β2 integrins recognize signs of inflammation and attach (bind) to proteins called ligands on the lining of blood vessels. This binding leads to linkage (adhesion) of the leukocyte to the blood vessel wall. Signaling through the β2 integrins triggers the transport of the attached leukocyte across the blood vessel wall to the site of infection or injury.
ITGB2 gene mutations that cause leukocyte adhesion deficiency type 1 lead to the production of a β2 subunit that cannot bind with other subunits to form β2 integrins. Leukocytes that lack these integrins cannot attach to the blood vessel wall or cross the vessel wall to contribute to the immune response. As a result, there is a decreased response to injury and foreign invaders, such as bacteria and fungi, resulting in frequent infections, delayed wound healing, and other signs and symptoms of this condition.
This condition is inherited in an autosomal recessive pattern, which means both copies of the gene in each cell have mutations. The parents of an individual with an autosomal recessive condition each carry one copy of the mutated gene, but they typically do not show signs and symptoms of the condition.
Other names for this condition:
- leucocyte adhesion deficiency type 1
- leukocyte adhesion molecule deficiency type 1 |
Some of the most deadly human diseases work by worming their way inside your DNA, attaching themselves to the cell's chromosomes. This makes them almost impossible to remove. But a new molecule designed to bamboozle rogue DNA could change everything.
The HIV virus is a good example of how cells can be damaged at the DNA level. Once the virus binds itself to a cell, it injects RNA and the enzymes necessary to create double-stranded DNA, which can then be integrated into one of the chromosomes of the host cell. This parasitic DNA can then lie dormant until it's ready to start building new viruses, creating an almost impregnable beachhead inside the body to continue the infection process.
Researchers at the University of Texas, led by chemistry professor Brent Iverson, wanted to beat the rogue DNA at its own game. They have been developing molecules that bind to specific DNA sequences so that the targeted stretch of the double helix becomes tangled and unable to carry out any genetic function. Their molecule, which has been given the rather nifty name of "threading tetra-intercalator," can silence a strand of DNA for up to 16 days before the helix finally untwists itself.
This breakthrough opens up the possibility of drug treatments specifically targeted to keep the rogue DNA created by HIV, cancer, and other genetic diseases silenced, potentially on a permanent basis. That particular application is still a ways off, but this result suggests it's a very real possibility. In a statement, Iverson explains how the molecule works:
"If you think of DNA as a spiral staircase, imagine sliding something between the steps. That's what our molecule does. It can be visualized as binding to DNA in the same way a snake might climb a ladder. It goes back and forth through the central staircase with sections of it between the steps. Once in, it takes a long time to get loose. Our off-rate under the conditions we used is the slowest we know of by a wide margin. Take HIV, for example. We want to be able to track it to wherever it is in the chromosome and just sit on it and keep it quiet. Right now we treat HIV at a much later stage with drugs such as the protease inhibitors, but at the end of the day, the HIV DNA is still there. This would be a way to silence that stuff at its source."
Fellow researcher Amy Rhoden-Smith provides some more technical details on how the base molecule, naphthalenetetracarboxylic diimide (NDI), can be adapted to silence the desired strand of rogue DNA. Basically, it's the molecular equivalent of building with Lego:
"It's pretty simple for us to make. We are able to grow the chain of NDIs from special resin beads. We run reactions right on the beads, attach pieces in the proper order and keep growing the molecules until we are ready to cleave them off. It's mostly automated at this point. "The larger molecule is composed of little pieces that bind to short segments of DNA, kind of like the way Legos fit together," she says. "The little pieces can bind different sequences, and we can put them together in different ways. We can put the Legos in a different arrangement. Then we scan for sequences that they'll bind." |
This would allow them to figure out how to manipulate objects they have never encountered before. In the future, this technology could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes, but the initial prototype focuses on learning simple manual skills entirely from autonomous play.

Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now (predictions are made only several seconds into the future), but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles. Crucially, the robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are. That's because the visual imagination is learned entirely from scratch from unattended and unsupervised exploration, where the robot plays with objects on a table. After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.

"In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviours will affect the world around it," said Sergey Levine, assistant professor in Berkeley's Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. "This can enable intelligent planning of highly flexible skills in complex real-world situations."

The research team demonstrated the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on December 5th.

At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next based on the robot's actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.

"In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own," said Chelsea Finn, a doctoral student in Levine's lab and inventor of the original DNA model.

With the new technology, a robot pushes objects on a table, then uses the learned prediction model to choose motions that will move an object to a desired location. Robots use the learned model, built from raw camera observations, to teach themselves how to avoid obstacles and push objects around obstructions.

"Humans learn object manipulation skills without any teacher through millions of interactions with a variety of objects during their lifetime. We have shown that it is possible to build a robotic system that also leverages large amounts of autonomously collected data to learn widely applicable manipulation skills, specifically object pushing skills," said Frederik Ebert, a graduate student in Levine's lab who worked on the project.

Since control through video prediction relies only on observations that can be collected autonomously by the robot, such as camera images, the resulting method is general and broadly applicable. In contrast to conventional computer vision methods, which require humans to manually label thousands or even millions of images, building video prediction models only requires unannotated video, which can be collected by the robot entirely autonomously. Indeed, video prediction models have also been applied to datasets that represent everything from human activities to driving, with compelling results.

"Children can learn about their world by playing with toys, moving them around, grasping, and so forth. Our aim with this research is to enable a robot to do the same: to learn about how the world works through autonomous interaction," Levine said. "The capabilities of this robot are still limited, but its skills are learned entirely automatically, and allow it to predict complex physical interactions with objects that it has never seen before by building on previously observed patterns of interaction."

The Berkeley scientists are continuing to research control through video prediction, focusing on further improving video prediction and prediction-based control, as well as developing more sophisticated methods by which robots can collect more focused video data for complex tasks such as picking and placing objects, manipulating soft and deformable objects such as cloth or rope, and assembly.
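To make the idea of action-conditioned pixel-motion prediction concrete, here is a deliberately tiny sketch in Python. It is not the Berkeley group's DNA model: the learned convolutional recurrent network is replaced by a hand-written stand-in that assumes every pixel simply shifts with the commanded push, and the warp is a crude nearest-neighbour move. The point is only to illustrate the loop the article describes: predict a per-pixel displacement field from the current frame and a candidate action, warp the frame with it, and roll that one-step prediction forward to "imagine" several frames ahead.

```python
import numpy as np

def predict_flow(frame, action):
    """Stand-in for a learned model: return an (H, W, 2) displacement field.
    Here we simply assume every pixel moves rigidly with the pushing action."""
    h, w = frame.shape
    flow = np.zeros((h, w, 2))
    flow[..., 0] = action[0]   # predicted row shift for each pixel
    flow[..., 1] = action[1]   # predicted column shift for each pixel
    return flow

def warp(frame, flow):
    """Move each pixel along its predicted displacement (nearest-neighbour warp)."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    rows, cols = np.indices((h, w))
    new_r = np.clip(rows + flow[..., 0].round().astype(int), 0, h - 1)
    new_c = np.clip(cols + flow[..., 1].round().astype(int), 0, w - 1)
    out[new_r, new_c] = frame[rows, cols]
    return out

def imagine(frame, action_sequence):
    """Roll the one-step predictor forward to 'imagine' several frames ahead."""
    frames = [frame]
    for action in action_sequence:
        frames.append(warp(frames[-1], predict_flow(frames[-1], action)))
    return frames

# Toy usage: a bright 'block' of pixels imagined being pushed two steps to the right.
frame0 = np.zeros((8, 8))
frame0[3:5, 2:4] = 1.0
imagined = imagine(frame0, [(0, 1), (0, 1)])
print(imagined[-1])
```

In a planner built on such a model, many candidate action sequences would be imagined in this way and scored by how close the predicted final frame brings the object to its goal; the real system learns the flow predictor from the robot's own unlabeled play videos rather than hard-coding it as above.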
Kakatua Putih | Cacatua alba | White Cockatoo
The White Cockatoo (Cacatua alba), also known as the Umbrella Cockatoo, is a white parrot with brown or black eyes and a dark grey beak. When surprised, it extends a large and striking crest, which has a semicircular shape (similar to an umbrella, hence the alternative name). The undersides of the wings and tail have a pale yellow or lemon color which flashes when the bird flies. The White Cockatoo can live up to, and perhaps beyond, 80 years.
Justification: This species has undergone a rapid population decline, principally owing to unsustainable levels of exploitation. This is likely to continue in the near future unless recently revised trapping quotas are effectively enforced. It therefore qualifies as Vulnerable.
The White Cockatoo is medium-sized, approximately 46 cm (19 in) long, and weighs about 400 grams for small females and up to 800 grams for big males. The male White Cockatoo usually has a broader head and a bigger beak than the female. During puberty, the female White Cockatoo can begin to develop a more reddish iris than the male. The plumage is all white, with the underside of the wings and tail washed yellow; there is a long, backward-curving white crest, a grey-black bill, a bare white eye-ring (yellowish-white or slightly bluish), and grey legs. Similar species: Yellow-crested Cockatoo C. sulphurea, Sulphur-crested Cockatoo C. galerita and Salmon-crested Cockatoo C. moluccensis all have yellow, orange or pink crest feathers. Voice: a short, loud, nasal, high-pitched screech; sometimes a rapid series of lower-pitched notes in flight.
The feathers of the White Cockatoo are mostly white. However, both upper and lower surfaces of the inner half of the trailing edge of the large wing feathers are a yellow color. The yellow color on the underside of the wings is most notable because the yellow portion of the upper surface of the feather is covered by the white of the feather immediately medial (nearer to the body) and above. Similarly, areas of larger tail feathers that are covered by other tail feathers – and the innermost covered areas of the larger crest feathers – are yellow. Short white feathers grow from and closely cover the upper legs.
White Cockatoo is considered vulnerable by the IUCN. Its numbers in the wild have declined owing to habitat loss and illegal trapping for the cage-bird trade. It is listed in appendix II of the CITES list of protected species which gives it protection by making the export, import and trade of wild-caught birds illegal.
The high market-value of these birds has led to unsustainable levels of harvesting for the pet trade. In 1994 the White Cockatoo was listed as a CITES I endangered species. This species has since been taken off the endangered species list, but is still listed as Vulnerable. Principal threats to this species are the pet trade and loss and degradation of their forest habitat.
In addition to the necessity of law enforcement to stop the illegal parrot trade, ProFauna urges the Indonesian government to raise the status of the white Cockatoo (Cacatua alba), the endemic species of Northern Maluku, to that of an Indonesian protected species.
The smuggling of parrots to the Philippines breaks the CITES (Convention on International Trade in Endangered Species) agreements ratified by Indonesia in 1978. Most parrots are listed in Appendix II. Parrots in CITES Appendix II are prohibited from international commercial trade unless they are captive-bred or permitted by the exporting country. In Indonesia the bird trade is controlled by a catch quota, and the parrots in trade are not captive-bred.
The illegal trade of protected parrots violates Indonesian legislation passed in 1990 (a wildlife law concerning the Conservation of Natural Resources and their Ecosystems). Accordingly, the perpetrators are liable to a maximum five-year prison term and a maximum 100-million-Rupiah fine. Unfortunately, the Indonesian government has not enforced the law: many protected parrots are still being smuggled abroad and sold openly in Surabaya, East Java, Indonesia.
| Population estimate | Population trend | Range estimate (breeding/resident) | Country endemic? |
|---|---|---|---|
| 43,000 – 183,000 | decreasing | 20,500 km² | Yes |
Range & population: Cacatua alba is endemic to the islands of Halmahera, Bacan, Ternate, Tidore, Kasiruta and Mandiole in North Maluku, Indonesia. Records from Obi and Bisa are thought to reflect introductions. It remains locally common: in 1991-1992, the population was estimated at 42,545-183,129 birds, although this may be an underestimate, as it was largely based on surveys from Bacan rather than Halmahera, where the species may be commoner. Recent observations indicate that rapid declines are underway. CITES data show significant harvest rates for the cage-bird trade during the early 1990s. Annual harvests have declined in actual terms, and as a proportion of the remaining population, in recent years.
Ecology: It is resident (perhaps making minor nomadic movements) in primary, logged and secondary forest up to 900 m. It also occurs in mangroves, plantations (including coconut) and agricultural land, suggesting that it tolerates some habitat modification. The highest densities occur in primary forest, and it requires large trees for nesting and communal roosting.
Threats: Unsustainable levels of trapping for the cage-bird trade pose the greatest threat. In 1991, an estimated minimum of 6,600 birds (possibly representing a mere quarter of the actual figure) were taken from the wild. Catch quotas for the species were exceeded by up to 18 times in some localities, indicating that trappers were removing on the order of 17% of the population annually, as the rough calculation below illustrates. Although forest within its range remains relatively intact, exploitation by logging companies has become intensive, and some areas have been cleared for agriculture and mining. Habitat and nest-site availability is therefore decreasing, particularly the latter. Furthermore, new logging roads greatly facilitate access for trappers.
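The order-of-magnitude arithmetic behind that 17% figure can be laid out explicitly. The short Python sketch below uses only the numbers quoted above, takes the midpoint of the 1991-1992 population estimate as an assumed baseline, and ignores recruitment and natural mortality, so the ten-year projection is purely illustrative rather than a population model.

```python
# Rough reading of the harvest figures quoted in the text (illustrative only).
recorded_catch_1991 = 6_600
estimated_true_take = 4 * recorded_catch_1991         # text: recorded catch may be ~1/4 of reality
population_midpoint = (42_545 + 183_129) / 2          # assumed baseline: midpoint of the estimate

removal_rate = estimated_true_take / population_midpoint
print(f"implied annual removal: {removal_rate:.0%}")  # roughly a fifth, in line with ~17%

# What sustained removal of ~17% per year would mean over a decade, with no recruitment:
remaining_fraction = (1 - 0.17) ** 10
print(f"fraction remaining after 10 years: {remaining_fraction:.0%}")  # about 16%
```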
Conservation measures underway: CITES Appendix II. The North Maluku government has proposed to the Forestry Ministry that the species be classified as a protected species. The Indonesian government issues catch quotas, and all capture was illegal in 1999. It occurs in three protected areas: Gunung Sibela Strict Nature Reserve on Bacan (although this site is threatened by agricultural encroachment and gold prospecting), and Aketajawe Nature Reserve and the Lalobata Protected Forest on Halmahera.
Conservation measures proposed: Monitor national and international trade. Conduct research into population dynamics, ranging behaviour and threats, so that appropriate trapping quotas may be devised. Promote more effective enforcement of trapping quotas. Introduce trapping concessions to increase self-regulation of trade. Initiate a conservation awareness campaign promoting local support for the species and the regulated collection of eggs and young, rather than adults.
White Cockatoo nests in tree cavities. Its eggs are white and there are usually two in a clutch. During the incubation period – about 28 days – both the female and male incubate the eggs. The larger chick becomes dominant over the smaller chick and takes more of the food. The chicks leave the nest about 84 days after hatching.
- BirdLife International (2004). Cacatua alba. 2006. IUCN Red List of Threatened Species. IUCN 2006. www.iucnredlist.org. Retrieved on 11 May 2006. Database entry includes justification for why this species is vulnerable
- The Indonesian Parrot Project: conservation of Cockatoos and other Indonesian Parrots
- IUCN Red List
- Red Data Book |
Introduction: Heat Engines and Refrigeration
Refrigeration has allowed for great advances in our ability to store food and other substances safely for long periods of time. The same technology used to run refrigerators is also used in air conditioners. How does this technology work to produce cool air when the external conditions are hot? As we shall see, refrigerators (and air conditioners) rely on the thermodynamic application known as the heat engine, as well as the molecular properties of the substance contained in the coils of the refrigerator.
One of the most important practical applications of the principles of thermodynamics is the heat engine (Figure 1). In the heat engine, heat is absorbed from a "working substance" at high temperature and partially converted to work. Heat engines are never 100% efficient, because the remaining heat (i.e., the heat that is not converted to work) is released to the surroundings, which are at a lower temperature. The steam engines used to power early trains and electric generators are heat engines in which water is the working substance. In a reverse heat engine (Figure 2), the opposite effect occurs: work is converted to heat, which is released.
In 1851, the Florida physician John Gorrie was granted the first U.S. Patent for a refrigeration machine, which uses a reverse heat engine (Figure 2) as the first step in its operation. Gorrie, convinced that the cure for malaria was cold because outbreaks were terminated in the winter, sought to develop a machine that could make ice and cool a patient's room in the hot Florida summer. In Dr. Gorrie's refrigerator, air was compressed using a pump, which caused the temperature of the air to increase (exchanging work for heat). Running this compressed air through pipes in a cold-water bath released the heat into the water. The air was then allowed to expand again to atmospheric pressure, but because it had lost heat to the water, the temperature of the air was lower than before and could be used to cool the room.
Modern refrigerators operate by the same reverse-heat-engine principle of converting work to heat, but use substances other than air. The working substance in a modern refrigerator is called the coolant; the coolant changes from gas to liquid as it goes from higher to lower temperature. This change from gas to liquid is a phase transition, and the energy released upon this transition depends mainly on the intermolecular interactions of the substance. Hence, to understand the refrigeration cycle used in modern refrigerators, it is necessary to first discuss phase transitions.
Phases and Phase Transitions
Matter mainly exists in three different phases (physical states): solid, liquid, and gas. A phase is a form of matter that is uniform in chemical composition and physical properties. As shown in Figure 3, a substance in the solid phase has a definite shape and volume; a substance in the liquid phase has no definite shape, but has a definite volume; a substance in the gas phase has no definite shape or volume, but has a shape and volume determined by the shape and size of the container.
Molecular (Microscopic) View
One of the major differences in the three phases illustrated in Figure 3 is the number of intermolecular interactions they contain. The particles in a solid interact with all of their nearest neighbors, the particles in a liquid interact with only some of the nearby particles, and the particles in a gas have almost no interaction with one another. By breaking or forming intermolecular interactions, a substance can change from one phase to another. For example, gas molecules condense to form liquids because of the presence of attractive intermolecular forces. The stronger the attractive forces, the greater the stability of the liquid (which leads to a higher boiling point temperature). A change in the physical state of matter is called a phase transition. The names of the phase transitions between solid, liquid, and gas are shown in Figure 4.
Phase transitions are similar to chemical reactions as they each have an associated enthalpy change. While a chemical reaction involves the breaking and forming of bonds within molecules, phase transitions involve the breaking or forming of intermolecular attractive forces. Phase transitions involving the breaking of intermolecular attractions (such as fusion, vaporization, and sublimation) require an input of energy to overcome the attractive forces between the particles of the substance. Phase transitions involving the formation of intermolecular attractions (such as freezing, condensation, and deposition) release energy as the particles adopt a lower-energy conformation. The strength of the intermolecular attractions between molecules, and therefore the amount of energy required to overcome these attractive forces (as well as the amount of energy released when the attractions are formed), depends on the molecular properties of the substance. Generally, the more polar a molecule is, the stronger the attractive forces between molecules are. Hence, more polar molecules typically require more energy to overcome the intermolecular attractions, and release more energy by forming intermolecular attractions.
Thermodynamic (Macroscopic) View
In addition to the microscopic view presented above, we can describe phase transitions in terms of macroscopic, thermodynamic properties. It is important to bear in mind that the microscopic and macroscopic views are interdependent; i.e., the thermodynamic properties, such as enthalpy and temperature, of a substance are dependent on the molecular behavior of the substance.
Phase transitions are accompanied by changes in enthalpy and entropy. In this tutorial, we will concern ourselves mainly with changes in enthalpy. The energy change involved in breaking or forming intermolecular attractions is primarily supplied or released in the form of heat. Adding heat causes intermolecular attractions to be broken. How does this occur? Heat is a transfer of energy to molecules, causing the molecules to increase their motion as described by the kinetic theory of gases and thereby weakening the intermolecular forces holding the molecules in place. Likewise, when molecules lose heat, intermolecular attractions are strengthened; as heat is lost, the molecules move slower and therefore can interact more with other nearby molecules.
Because phase changes generally occur at constant pressure (i.e., in a reaction vessel open to the atmosphere), the heat can be described by a change in enthalpy (ΔH = qp). For phase transitions involving the breaking of intermolecular attractions, heat is added and ΔH is positive, because the system is going from a lower-enthalpy phase to a higher-enthalpy phase (an endothermic process). Hence, fusion, vaporization, and sublimation are all endothermic phase transitions. For phase transitions involving the forming of intermolecular attractions, heat is released and ΔH is negative, because the system is going from a higher-enthalpy phase to a lower-enthalpy phase (an exothermic process). Hence, freezing, condensation, and deposition are all exothermic phase transitions. The enthalpy change for each of the phase-transition processes in Figure 4 is shown in Table 1 above.
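As a worked illustration of ΔH = qp, the short Python calculation below estimates the heat exchanged when a fixed mass of coolant changes phase. The coolant identity and the enthalpy of vaporization are assumptions chosen only to make the arithmetic concrete (values in the right ballpark for a Freon-12-like refrigerant), not data taken from this tutorial.

```python
# Heat absorbed on vaporization (endothermic) and released on condensation (exothermic)
# for an assumed coolant, using q = n * dH at constant pressure.

molar_mass = 120.91   # g/mol, assumed (CCl2F2-like coolant)
dH_vap = 20.0         # kJ/mol, assumed enthalpy of vaporization near the boiling point
mass = 50.0           # g of coolant passing through the coil

moles = mass / molar_mass
q_vaporization = moles * dH_vap      # positive: heat absorbed from the surroundings
q_condensation = -q_vaporization     # negative: the same magnitude of heat released

print(f"q(vaporization) = +{q_vaporization:.1f} kJ absorbed")
print(f"q(condensation) = {q_condensation:.1f} kJ released")
```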
The enthalpy change of phase transitions can also be used to explain differences in melting points and boiling points of substances. At a given pressure, a substance has a characteristic range of temperatures at which it undergoes phase transitions; for example, melting point is the temperature at which a substance changes from solid phase to liquid phase and boiling point is the temperature at which a substance changes from liquid phase to gaseous phase. In general, the greater the enthalpy change for a phase transition, the higher the temperature at which the substance undergoes the phase transition. For example, liquids with strong intermolecular attractions require more heat to vaporize than liquids with weak intermolecular attractions; therefore, the boiling point (vaporization point) for these liquids will be higher than for the liquids with weaker intermolecular attractions.
Now, we shall use our understanding of heat engines and phase transitions to explain how refrigerators work. The enthalpy changes associated with phase transitions may be used by a heat engine (Figure 1) to do work and to transfer heat between the substance undergoing a phase transition and its surrounding environment. In a heat engine, a "working substance" absorbs heat at a high temperature and converts part of this heat to work. In a secondary process, the rest of the heat is released to the surroundings at a lower temperature, because the heat engine is not 100% efficient.
As shown in Figure 2, a refrigerator can be thought of as a heat engine in reverse. The cooling effect in a refrigerator is achieved by a cycle of condensation and vaporization of the coolant, which usually is the nontoxic compound CCl2F2 (Freon-12). A refrigerator contains an electrically-powered compressor that does work on Freon gas. Coils outside the refrigerator allow Freon to release heat when it condenses, and coils inside the refrigerator allow Freon to absorb heat as it vaporizes. Figure 5 shows the phase transitions of Freon and their associated heat-exchange events that occur during the refrigeration cycle.
The cycle described above does not run continuously, but rather is controlled by a thermostat. When the temperature inside the refrigerator rises above the set temperature, the thermostat starts the compressor. Once the refrigerator has been cooled below the set temperature, the compressor is turned off. This control mechanism allows the refrigerator to conserve electricity by only running as much as is necessary to keep the refrigerator at the desired temperature.
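The thermostat's on/off behavior is easy to caricature in a few lines of Python. The sketch below is not a model of any real refrigerator: the set temperature, dead band, and the cooling and warming rates are invented numbers, chosen only to show the hysteresis logic by which the compressor runs when the interior is too warm and rests once it has been pulled slightly below the set point.

```python
def thermostat_step(temp, compressor_on, setpoint=4.0, band=0.5):
    """Decide the compressor state for the next time step (simple hysteresis)."""
    if temp > setpoint + band:
        return True           # too warm: start the compressor
    if temp < setpoint - band:
        return False          # cold enough: stop the compressor
    return compressor_on      # inside the dead band: keep the current state

temp, on = 7.0, False
for minute in range(30):
    on = thermostat_step(temp, on)
    temp += -0.4 if on else 0.15   # invented cooling/warming rates, degrees C per minute
    print(f"t={minute:2d} min  T={temp:4.1f} C  compressor {'ON' if on else 'off'}")
```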
Refrigerators are essentially heat engines working in reverse. Whereas a heat engine converts heat to work, reverse heat engines convert work to heat. In the refrigerator, the heat that is generated is transferred to the outside of the refrigerator. To cool the refrigerator, a "working substance", or "coolant", such as Freon is required. The refrigerator works by using a cycle of compression and expansion on the Freon. Work is done on the Freon by a compressor, and the Freon releases heat to the air outside of the refrigerator (as it undergoes the exothermic condensation from a gas to a liquid). To regenerate the gaseous Freon for compression, the Freon passes through an internal coil, where it undergoes the endothermic vaporization from the liquid phase to the gaseous phase. This endothermic process causes the Freon to absorb heat from the air inside the refrigerator, cooling the refrigerator.
Brown, Lemay, and Bursten. Chemistry: The Central Science, 7th ed., p. 395-98.
Petrucci and Harwood. General Chemistry, 7th ed., p. 435, 699-701, 714-15.
The authors thank Dewey Holten, Michelle Gilbertson, Jody Proctor and Carolyn Herman for many helpful suggestions in the writing of this tutorial.
The development of this tutorial was supported by a grant from the Howard Hughes Medical Institute, through the Undergraduate Biological Sciences Education program, Grant HHMI# 71199-502008 to Washington University.
Copyright 1999, Washington University, All Rights Reserved.
Revised March 12, 2009 |
The degrees of freedom of the members and joints of a mechanism govern the working of a machine. Each member of a mechanism can move in certain directions or rotate about certain axes and is not allowed to move or rotate in other directions. Degrees of freedom determine the possible movements of a mechanism.
Degrees of freedom (DoF) is related to the motion possibilities of rigid bodies. Kinematic definition for DoF of any system or its components would be “the number of independent variables or coordinates required to ascertain the position of the system or its components".
The concept of DoF in kinematics of machines is used in three ways. DoF of
1. A body relative to a reference frame.
2. A kinematic joint.
3. A mechanism.
Determining Degrees of Freedom
Degrees of freedom can be determined by analyzing the motion of the body concerned, or by determining the number of coordinates required to specify the position of the body. In this article, planar cases are considered; these can be extended to spatial cases.
1. Degrees of freedom of a body relative to a specified reference frame
In a plane, the position of a body relative to a reference frame can be specified by two position coordinates (say X and Y) and one coordinate (say θ) specifying the orientation of the body. In total, three coordinates are required to specify the position of the body if no constraints are applied. The DoF are reduced as the motion of the body is restricted.
For example, if a body is not allowed to move along one axis in the plane, one DoF is lost, leaving only two DoF.
2. Degrees of freedom of a kinematic joint
Two bodies connect with each other to form a joint. One body can move in a number of ways relative to the other and may be constrained in other ways. DoF of a kinematic joint is number of ways in which one member of the joint can move relative to the other member.
For example, a revolute joint has one DoF, as one member can move in only one way relative to the other: it can only rotate about the axis of the joint. A prismatic joint also has only one DoF, as one of the two members can slide along the other in one direction only.

A cylindrical joint has two DoF, as one of the two members can rotate about the axis of the joint and can also translate along it. Two motions are possible, so there are two DoF.
3. Degrees of freedom of a mechanism
The DoF for a mechanism is defined as the number of coordinates or variables required to be specified such that the position and orientation of all the members of the mechanism can be stated as a function of time.
To determine the DoF of a mechanism, we start by assuming that all the members of the mechanism are free in the plane and thus have three DoF each. Then we apply constraints, and the DoF are reduced as the members are joined together to form the mechanism.
Take the mechanism to be composed of 'n' members or links. Initially each link is assumed to be free, so the mechanism has 3n DoF. One of the members is taken to be the base or frame link and thus has zero DoF; that is, it loses all three of its DoF. The DoF left in the mechanism at this stage is 3n − 3, or 3(n − 1).
When pairs of links form joints, they lose DoF. If a joint has Fi DoF, the reduction in DoF at that joint is (3 − Fi), since the joined links were initially free (each having 3 DoF). If there are 'j' joints, the total reduction in DoF is the summation of (3 − Fi) over all j joints. The net DoF of the mechanism can therefore be given by

F = 3(n − 1) − Σ (3 − Fi), summed over the j joints,

which, when every joint has a single DoF (as for revolute and prismatic joints), reduces to the familiar form F = 3(n − 1) − 2j.
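The counting argument above is easy to mechanize. The small Python sketch below implements the planar mobility count exactly as derived, taking the total number of links (including the fixed frame) and a list giving the DoF of each joint; the example linkages and their joint lists are supplied only for illustration.

```python
def planar_mobility(n_links, joint_dofs):
    """Net DoF of a planar mechanism: F = 3(n - 1) - sum over joints of (3 - Fi)."""
    return 3 * (n_links - 1) - sum(3 - f for f in joint_dofs)

# A four-bar linkage: 4 links, 4 revolute joints (1 DoF each) -> the familiar 1 DoF.
print(planar_mobility(4, [1, 1, 1, 1]))      # 1

# A five-bar linkage: 5 links, 5 revolute joints -> 2 DoF (needs two inputs to drive it).
print(planar_mobility(5, [1, 1, 1, 1, 1]))   # 2
```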
This MLA citation tool generates works-cited entries in proper MLA format and follows the latest style rules. MLA format is one of the most common structures for organizing a paper in academic writing, and this overview covers its basics. Parenthetical citations within the text of your paper, as described in the MLA Handbook (7th edition), let your reader know when you have used a source. A common question is how to cite a play, such as A Streetcar Named Desire, in MLA style in a research paper. NoodleTools is a student research platform with MLA, APA, and Chicago/Turabian bibliographies, notecards, and outlining.

No official MLA format exists for citing online classroom materials; the format given here is merely a recommendation, and the in-text citation examples and general rules provide more information. Guidelines and examples cover current MLA procedures, first-page format, presenting quotations, and citing books, websites, and non-print sources. Training materials walk through outlining and creating an MLA paper with citations and a bibliography in Word 2013. In the eighth edition of MLA style, a work's publication format is no longer considered, although the principles behind in-text citations remain the same. There are also web apps for converting academic citations between competing formats, including tools for converting both single and multiple citations.

Teaching resources for MLA style, research, and writing are prepared by the MLA and by other teachers and librarians. Understanding why MLA formatting guidelines matter helps you format your in-text citations so that they comply with those guidelines. If you need help with MLA citations for websites, a formatting tool can cite your quotations and references properly.

BibMe is a free bibliography and citation maker for MLA, APA, Chicago, and Harvard styles. Formatting direct quotations properly in MLA format involves using the exact words of others in your paper according to MLA rules. Guides to using MLA format for academic papers are kept up to date with the latest MLA Handbook (7th edition). In MLA style, citing the works of others within your text is done with parenthetical citations, a method that places relevant source information in parentheses.

In-text citations in MLA format typically use the author-name format. Citation Machine helps students and professionals properly credit the information that they use, and can cite a book in MLA format for free. Quick-reference MLA style guides and the MLA Handbook for Writers of Research Papers give example entries showing a reference work's title, edition, year, and format.
All atoms in a molecule are constantly in motion, while the entire molecule undergoes constant translational and rotational motion. A diatomic molecule has only a single vibrational mode, whereas polyatomic molecules have more than one type of vibration, known as normal modes.

A molecule has translational and rotational motion as a whole, while each atom has its own motion. The vibrational modes can be IR or Raman active. For a mode to be observed in the IR spectrum, the vibration must produce a change in the dipole moment of the molecule, which a homonuclear diatomic molecule such as O2 or N2 cannot do. Such molecules have a single vibration and no dipole change, so they appear in Raman spectra but not in IR spectra. Heteronuclear (unsymmetric) diatomic molecules (e.g., CN) do absorb in the IR spectrum. Polyatomic molecules undergo more complex vibrations that can be summed or resolved into normal modes of vibration.
For polyatomic molecules, the normal modes of vibration include asymmetric and symmetric stretching, wagging, twisting, scissoring, and rocking.
Figure 1: Six types of vibrational modes (symmetrical stretching, asymmetrical stretching, wagging, twisting, scissoring, and rocking). Taken from http://en.wikipedia.org/wiki/Infrared_spectroscopy with permission from the copyright holder.
Calculate Number of Vibrational Modes
The motion of a molecule containing n atoms can be described by 3n degrees of freedom, corresponding to the Cartesian coordinates (x, y, z) of each atom. These 3n degrees of freedom account for the translational, rotational, and vibrational motions of the molecule. There are three translational degrees of freedom (movement of the whole molecule through space) and, for a nonlinear molecule, three rotational degrees of freedom, one about each Cartesian axis. A linear molecule, however, has only two rotational degrees of freedom, because rotation about the molecular axis does not represent a change in the nuclear coordinates. Subtracting the translational and rotational degrees of freedom leaves the vibrational degrees of freedom, given by the equations below.

The number of vibrational modes for linear molecules can be calculated using the formula (Equation 1): \(3n - 5\)

The number of vibrational modes for nonlinear molecules can be calculated using the formula (Equation 2): \(3n - 6\)
\(n\) is equal to the number of atoms within the molecule of interest. The following procedure should be followed when trying to calculate the number of vibrational modes:
- Determine if the molecule is linear or nonlinear (i.e. Draw out molecule using VSEPR). If linear, use Equation 1. If nonlinear, use Equation 2
- Calculate how many atoms are in your molecule. This is your n value.
- Plug in your \(n\) value and solve.
Example 1: \(CS_2\)
An example of a linear molecule would be \(CS_2\). There are a total of \(3\) atoms in this molecule. Therefore, to calculate the number of vibrational modes, it would be 3(3)-5 = 4 vibrational modes.
Example 2: \(CCl_4\)
CCl4 is an example of a nonlinear molecule. In this molecule, there are a total of 5 atoms. Therefore, there are 3(5) − 6 = 9 vibrational modes.
Example 3: \(POCl_3\)
A more complex example could be \(POCl_3\). The shape of this molecule dictates that this is a nonlinear molecule. It contains 5 atoms and therefore would have 9 degrees of vibrational freedom.
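The counting rule is simple enough to check by machine. The short Python sketch below reproduces the mode counts worked out in the three examples (plus the CO2/SO2 comparison discussed next); the atom counts and the linear/nonlinear classifications are supplied by hand, as determined from the structures, rather than computed.

```python
def vibrational_modes(n_atoms, linear):
    """Number of normal modes: 3n - 5 for linear molecules, 3n - 6 for nonlinear."""
    return 3 * n_atoms - (5 if linear else 6)

molecules = [("CS2", 3, True), ("CCl4", 5, False), ("POCl3", 5, False),
             ("CO2", 3, True), ("SO2", 3, False)]

for name, n_atoms, linear in molecules:
    print(f"{name}: {vibrational_modes(n_atoms, linear)} vibrational modes")
# CS2: 4, CCl4: 9, POCl3: 9, CO2: 4, SO2: 3 -- matching the examples in the text.
```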
Why would CO2 and SO2 have different numbers of vibrational degrees of freedom? Following the procedure above, it is clear that CO2 is a linear molecule while SO2 is nonlinear: SO2 contains a lone pair, which causes the molecule to be bent in shape, whereas CO2 has no lone pairs. It is key to understand how the molecule is shaped. Therefore, CO2 has 4 vibrational modes and SO2 has 3.
- Harris, Daniel C., and Michael D. Bertolucci. Symmetry and Spectroscopy: an Introduction to Vibrational and Electronic Spectroscopy. New York: Dover Publications, 1989. Print.
- Housecroft, Catherine E., and Alan G. Sharpe. Inorganic Chemistry. Harlow: Pearson Education, 2008. Print. |
Full Small Intestine Description
A thin membrane known as the mesentery extends from the posterior body wall of the abdominal cavity to surround the small intestine and anchor it in place. Blood vessels, nerves, and lymphatic vessels pass through the mesentery to support the tissues of the small intestine and transport nutrients from food in the intestines to the rest of the body.
The small intestine can be divided into 3 major regions:
- The duodenum is the first section of intestine that connects to the pyloric sphincter of the stomach. It is the shortest region of the small intestine, measuring only about 10 inches in length. Partially digested food, or chyme, from the stomach is mixed with bile from the liver and pancreatic juice from the pancreas to complete its digestion in the duodenum.
- The jejunum is the middle section of the small intestine that serves as the primary site of nutrient absorption. It measures around 3 feet in length.
- The ileum is the final section of the small intestine that empties into the large intestine via the ileocecal sphincter. The ileum is about 6 feet long and completes the absorption of nutrients that were missed in the jejunum.
Like the rest of the gastrointestinal tract, the small intestine is made up of four layers of tissue. The mucosa forms the inner layer of epithelial tissue and is specialized for the absorption of nutrients from chyme. Deep to the mucosa is the submucosa layer that provides blood vessels, lymphatic vessels, and nerves to support the mucosa on the surface. Several layers of smooth muscle tissue form the muscularis layer that contracts and moves the small intestines. Finally, the serosa forms the outermost layer of epithelial tissue that is continuous with the mesentery and surrounds the intestines.
The interior walls of the small intestine are tightly wrinkled into projections called circular folds that greatly increase their surface area. Microscopic examination of the mucosa reveals that the mucosal cells are organized into finger-like projections known as villi, which further increase the surface area. Each square inch of mucosa contains around 20,000 villi. The cells on the surface of the mucosa also contain finger-like projections of their cell membranes known as microvilli, which further increase the surface area of the small intestine. It is estimated that there are around 130 billion microvilli per square inch in the mucosa of the small intestine. All of these wrinkles and projections help to greatly increase the amount of contact between the cells of the mucosa and chyme to maximize the absorption of vital nutrients.
The small intestine processes around 2 gallons of food, liquids, and digestive secretions every day. To ensure that the body receives enough nutrients from its food, the small intestine mixes the chyme using smooth muscle contractions called segmentations. Segmentation involves the mixing of chyme about 7 to 12 times per minute within a short segment of the small intestine so that chyme in the middle of the intestine is moved outward to the intestinal wall and contacts the mucosa. In the duodenum, segmentations help to mix chyme with bile and pancreatic juice to complete the chemical digestion of the chyme into its component nutrients. Villi and microvilli throughout the intestines sway back and forth during the segmentations to increase their contact with chyme and efficiently absorb nutrients.
Once nutrients have been absorbed by the mucosa, they are passed on into tiny blood vessels and lymphatic vessels in the middle of the villi to exit through the mesentery. Fatty acids enter small lymphatic vessels called lacteals that carry them back to the blood supply. All other nutrients are carried through veins to the liver, where many nutrients are stored and converted into useful energy sources.
Chyme is slowly passed through the small intestine by waves of smooth muscle contraction known as peristalsis. Peristalsis waves begin at the stomach and pass through the duodenum, jejunum, and finally the ileum. Each wave moves the chyme a short distance, so it takes many waves of peristalsis over several hours to move chyme to the end of the ileum.
Prepared by Tim Taylor, Anatomy and Physiology Instructor |
When the Sun is low, but still a couple of degrees above the horizon — say, about ten minutes before sunset — dispersion is large enough to make the green upper and red lower limbs visible, if the telescopic image is projected on a sheet of paper. Here's a simulation of the appearance (for the Standard Atmosphere) when the upper limb is 2° above the astronomical horizon:
You can see that the upper limb has a narrow green rim, and the lower limb has a red one. The rims aren't very conspicuous here; but in fact this is a realistic simulation of the rims at about their most prominent. The fact is, they're not very wide at all, even under the best conditions.
You might think that the rims would be more obvious closer to the horizon. But it turns out that there's a trade-off between increasing refraction (and hence, width of the rims) and increasing extinction. (Remember that refraction and extinction are nearly proportional, near the horizon, because of Laplace's theorem.) As the Sun approaches the horizon, the increased atmospheric reddening rapidly overwhelms the green rim, making it fainter and fainter. And the red rim also becomes less prominent near the horizon, because the whole disk of the Sun becomes redder, so there's less color contrast between the disk and the lower red rim.
As a result, the colored rims are most prominent at altitudes between 1 and 2 degrees. You can compare the simulation with some nice photographs of the colored rims taken by Laurent Laveder (be sure to click on the small images to see larger versions; and don't miss the second page of pictures.) These fine examples were taken just in this optimal range of altitude, and in very clear conditions.
In fact, the simulation here assumes extremely clear conditions. The aerosol optical depth at 550 nm wavelength above the observer (located 10 meters above sea level) is only 0.02 — an improbably small value. Typical conditions would have an aerosol optical depth of 0.1 or so. Even under very clear conditions, the aerosol optical depth near sea level is hardly ever less than 0.05.
But even these small aerosol opacities in the vertical direction correspond to quite considerable optical depths at the horizon. If the aerosol were uniformly mixed in the atmosphere, the horizontal optical depth would be about 40 times the vertical one. But in reality, the aerosol is concentrated in the boundary layer; I've assumed an aerosol scale height of 1 km in these simulations, which makes the horizontal optical depth just over 100 times the vertical value.
That means that the optical depth in the green, at 550 nm wavelength, is usually about 10. But the aerosol optical depth is almost inversely proportional to wavelength, so the optical depth in the red-orange at 610 nm is only about 0.9 as big: about 9. This means that the green rim is attenuated by about a factor of e = 2.718 … , relative to the red disk, at the horizon.
The aerosol opacity falls to half of its value at the horizon at an altitude of only 42 minutes of arc, if we assume a scale height of 1 km. That means that more than half of the excess attenuation of the green light occurs in a space only one and a half solar diameters wide at the horizon. The great majority of the reddening is in the last degree at the horizon.
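The bookkeeping in the last two paragraphs can be redone in a few lines. The Python sketch below simply repeats the text's arithmetic under the same stated assumptions (a typical zenith aerosol optical depth of 0.1 at 550 nm, a horizontal-to-vertical ratio of about 100 for a 1 km scale height, and aerosol optical depth scaling roughly as 1/wavelength); it is a consistency check, not an atmospheric model.

```python
import math

tau_vertical_550 = 0.1        # typical zenith aerosol optical depth at 550 nm
horizontal_factor = 100       # horizontal/vertical ratio assumed for a 1 km scale height

tau_horiz_550 = horizontal_factor * tau_vertical_550   # ~10 at the horizon (green)
tau_horiz_610 = tau_horiz_550 * 550.0 / 610.0          # ~9 in the red-orange

# Extra attenuation of the green relative to the red-orange along the horizon path:
ratio = math.exp(tau_horiz_550 - tau_horiz_610)
print(f"tau(550) = {tau_horiz_550:.1f}, tau(610) = {tau_horiz_610:.1f}")
print(f"green suppressed relative to red-orange by a factor of {ratio:.2f}")  # ~ e
```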
Now let's examine the green rim more closely. Here are some enlarged simulations, at intervals of a degree, from the astronomical horizon up to 5° above it. Once again, the Sun is seen through the Standard Atmosphere from a height of 10 meters.
This image shows the variation in appearance of the green rim with apparent altitude. The sub-panels are marked at the left side with the altitude at the top of the disk. The scale bar in the upper right corner is one minute of arc long, and about 1/16 of a minute (3.75 seconds of arc) in thickness. (A minute of arc is about the limiting resolution of the human eye.)
Each sub-panel has been made as bright as possible, without saturating the image; this is similar to the adjustment of exposure a photographer has to make in following the Sun as it sets. If the brightnesses were not adjusted, the bottom panels would be too dark to see well.
Here, a slightly more realistic zenith optical depth of 0.03 has been assumed for the aerosol. Even so, this represents extremely clear conditions. The optical depth at the horizon is about 3.2, at 550 nm.
Notice how narrow the green rim becomes at altitudes above 2°. At the greatest altitude shown here (5 °), a little blue is detectable in the green rim; but the rim is so narrow that this is difficult to see, even at the scale shown here. (What you see on your screen is magnified about 20 or 30 times, compared to a naked-eye view of the low Sun.)
For a real-world comparison, see Laurent Laveder's pages showing how the blue gradually fades away as the Sun approaches the horizon. In his photographs, the blue is occasionally emphasized by irregular atmospheric structure — probably waves on the inversion that caps the convective layer. (His sunset has an Omega shape, so it ends with an inferior-mirage flash, rather than the undistorted rim shown here.)
Even at its widest, the apparent width of the green rim isn't much greater than the thickness of the scale bar. That's because the increasing extinction at short wavelengths never allows the whole green part of the spectrum to appear, at the lower altitudes where the rim is widest.
The faintness of the rim at the astronomical horizon (bottom section of the figure) shows why an isolated green rim — the “textbook” flash — isn't a plausible explanation for most real green flashes. As Dietze found, it's only a little brighter than the horizon sky, and not prominent enough to be apparent to the naked eye. If a more typical optical depth were assumed, the green rim would be completely invisible at the horizon.
However, there is one place where green-rim flashes can be seen, even with the naked eye. The rapid decrease in extinction and atmospheric reddening with altitude means that if the rim is occulted by an elevated obstacle (such as a mountain, or even the ridge line of a building), so that the upper limb is isolated at an apparent altitude of a degree or two, where it is most visible, there's a chance of seeing this “textbook” flash with the naked eye.
Furthermore, because the decrease in extinction with altitude is even more pronounced at shorter wavelengths, there's a good chance that this elevated-horizon flash will appear blue, or even violet. And, sure enough, there are numerous reports of blue and green flashes seen over mountains: Maggi's 1852 observation that “when the Sun sets behind distant mountains, the last disappearing edge is dyed a vivid blue”; de Maubeuge's 1886 report of a green flash over mountains 1 to 2 degrees above the horizon — an observation repeated over the Sinai in 1898; Lord Kelvin's 1899 blue sunrise over Mont Blanc; B. G. Escher's 1929 blue flash and green ray seen over the Sinai; multiple green flashes seen at high latitudes as the midnight Sun passed between distant peaks — a 1983 report of multiple blue flashes by Baxter nearly echoing a very early green observation by Henry Bedford in 1879; etc. This mechanism may also explain some cloud-top flashes.
Copyright © 2005 – 2008, 2013 Andrew T. Young
This paper examines what makes a Bluefin Tuna unique both scientifically and as an economic commodity. Further, the paper analyzes the current international laws and actions taken to address Southern Bluefin Tuna’s seemingly imminent extinction. The paper concludes by recognizing that while these efforts encouraged conservation, more government enforcement and leadership is needed to ensure the survival of Southern Bluefin Tuna.
For thousands of years, fishing has played a key role in developing state economies worldwide. However, over-fishing has led to devastation of certain species, specifically the Southern Bluefin Tuna. Today, the Southern Bluefin Tuna is listed as a critically endangered species that, despite threats of its total collapse, remains a prized commodity on the worldwide market. Despite international law, treaties, and conservation efforts addressing threats to its survival, the Bluefin Tuna still faces extinction.
This paper will examine what makes a Bluefin Tuna unique both scientifically and as an economic commodity. Further, the paper will analyze the current international laws and actions taken to address Southern Bluefin Tuna’s seemingly imminent extinction. And, although these efforts encouraged conservation, greater government enforcement and leadership is needed to ensure the survival of Southern Bluefin Tuna.
A. What is a Southern Bluefin Tuna?
Southern Bluefin Tuna (Thunnus maccoyii) are mainly caught for their fatty meat, color, size and unique texture. A Southern Bluefin Tuna can weigh up to 200 kilograms and may live for up to 40 years. It can measure up to 2 meters, can swim at 2-3 kilometers per hour and can dive to depths of up to 500 meters. However, the average Southern Bluefin Tuna caught today weighs only approximately 7 kilograms and has an average lifespan of 12 years.
The Southern Bluefin Tuna is somewhat slow-growing, with a three-year-old fish growing only about 1.5 cm per month. Southern Bluefin Tuna are pelagic, living in the open water, and thrive in waters between roughly 30 and 50 degrees south latitude. During the winter months, the Southern Bluefin Tuna prefer more temperate and deeper water. The Southern Bluefin Tuna can survive in cold water temperatures because they have a unique circulatory system, which is able to keep their body temperature warmer than the surrounding water.
Southern Bluefin Tuna are further unique in that they are a highly migratory species. The fish breed in the southern Indian Ocean, near Java, between September and April. Once born, the young migrate down the western coast of Australia and remain near the coast until approximately 5 years of age, when they travel to and remain in the open ocean.
B. Fishing for Southern Bluefin Tuna
Southern Bluefin Tuna may be found globally. The highest catching waters include the Indian Ocean, at 65%, the Southern Pacific Ocean, at 25%, and the Atlantic Ocean, at 10%. The main fishing method used by Japan, New Zealand, China, the Province of Taiwan, Korea, Indonesia and the United States is longline fishing from vessels. Longline fishing uses a long fishing line with many hooks. Australian fishermen primarily catch Southern Bluefin Tuna using the purse seine technique. This technique differs in that, instead of using a fishing line and immediately pulling the fish onto the boat, the fish are towed (alive) using nets and then transferred to grow cages. The tuna remain in grow cages to be fed and fattened for approximately six months before being exported, primarily to Japan. The purse seine fishing season runs from January to March, and the longline season begins in December and runs throughout the winter.
A minority of fishermen use pole-and-line and trolling techniques. For example, from 1982 to 1992 China and the Province of Taiwan utilized drift gillnets, a type of trolling technique. Although the primary target was Albacore Tuna, Southern Bluefin Tuna was often caught in these large trolling nets. This practice ended in 1992, however, when the United Nations banned large pelagic nets.
C. A Brief History of Fishing for Southern Bluefin Tuna
Fishing for Southern Bluefin Tuna began as early as the 1930s, when the fish was caught for canning. Fishing for the species, however, began to take off in the 1950s with the discovery of the pole-and-line fishing technique. In the 1950s, the catch was still primarily used for canning, with a total catch in the decade of between 12,000 and 15,000 tons. During the 1950s, Southern Bluefin Tuna demand rose significantly, as the Japanese began fishing for the sashimi market. Sashimi became the “ultimate delicatessen” in Japan and worldwide, as its raw fillet had a color, a high fat content, and a texture unique from other fishes or tuna species. With a sashimi market and advancing technology (by the 1950s fishermen installed super-cold freezers on vessels), the catch peaked in the 1960s at 80,000 tons. Fishermen looked for the valuable Southern Bluefin Tuna catches from the coast of New Zealand to the Southern Indian Ocean to the South Atlantic. Even with the introduction of the purse seine method in the 1970s, the catch subsequently declined to a low of 40,000 tons by the 1980s.
Today, Southern Bluefin Tuna’s sashimi meat makes this species the most demanded, rare, and expensive tuna species. Presently, about 90% of the Southern Bluefin Tuna catch is consumed by Japan. “The industry will continue to grow, but due to slow growth…and high costs involved, it can not fulfil the demand for Bluefin in any way.” An adult Southern Bluefin Tuna is worth up to approximately $50,000 USD. The open ocean harvest in Australia alone is a $150 million AUD industry.
II. Early Conservation Action
With the sharp decline of the Southern Bluefin Tuna by the 1980s and the observations that formerly lucrative catching areas were beginning to disappear (especially on Australia’s coast), countries that depended on the economic viability of the species, namely Australia, New Zealand, and Japan, began conservation efforts by introducing a quota system. Annual meetings of research scientists from all three countries studying the Southern Bluefin Tuna began in 1982. By 1984 these scientists agreed that between 1967 and 1975, the spawning tuna had been reduced to 210,000 tons, 25% of the original spawning stock. The catch of young tuna reduced the number of tuna mature enough to spawn, which in turn was reflected in the decrease in mid-size and mid-aged fish in both Japan and New Zealand’s catches.
In response to this scientific data, the three countries trilaterally agreed in 1985 to set Southern Bluefin Tuna’s first quota at 38,650 tons. By 1989 the quota was reduced to 11,750 tons, with Japan receiving 6,065 tons, Australia receiving 5,265 tons and New Zealand receiving 420 tons. Each country was responsible for recording and enforcing its quotas. Despite this trilateral agreement, because of the lack of management and enforcement, as well as the lack of constraints or enforcement mechanisms in international waters, the Southern Bluefin Tuna stock continued its decline.
A. United Nations Convention on the Law of the Sea
In 1982, the United Nations completed the United Nations Convention on the Law of the Sea (UNCLOS). The UNCLOS established a global framework that aimed to address ocean conservation and protection. It contains 320 articles and nine annexes that address and regulate ocean space and its resources. As of October 23, 2006, 152 countries have ratified the UNCLOS. As of October 2006, the United States has not ratified the UNCLOS. The United States is the only industrialized country in the world not to sign the UNCLOS, and despite a recent push from Senators, Naval Generals, and even celebrities, the United States has still not ratified the UNCLOS.
The UNCLOS covers a broad spectrum of marine issues: from dividing the sea into territories; to regulating access to the sea; to navigation rights; to providing for international research; and to protection and preservation of the environment. The UNCLOS’ major accomplishments include “its treatment of jurisdictional authority, the establishment of obligations to protect and preserve the marine environment, and comprehensive coverage of specific environmental threats posed by pollution and overfishing.” It is considered internationally as the “‘constitution for ocean governance.’”
The UNCLOS divides the ocean into three categories—territorial sea, an exclusive economic zone (EEZ), and high sea—which have greatly impacted the highly migratory Southern Bluefin Tuna. The division of the ocean into EEZs expanded the territorial sea limit to 200 nautical miles (230 miles) from the shoreline. About 40% of the world’s ocean and 90% of its marine resources are located in EEZs. In an EEZ, a state enjoys exclusive rights to use, to protect and to manage the sea. Although states may use the EEZ to their economic advantage, the UNCLOS also places greater responsibility on a state’s duty to protect the marine environment. The UNCLOS specifically states that participating states “shall” carry the burden of protecting and preserving the marine environment in their EEZ. The incorporation of “shall” instead of another word like “may” leaves the states no option but to comply with this provision if they choose to ratify the treaty. Thus, the UNCLOS places a heavy responsibility on participating states to protect and to preserve the marine environment.
Despite the UNCLOS’ mandate for increased environmental responsibility within EEZs, the question remains whether state actions for preservation and conservation in these zones are effective. For example, rather than have scientific recommendations direct catch quotas in the EEZs, state policy decisions are often guided more by economic needs. “Many governments are unwilling to take measures that would drive fisherman into bankruptcy and unemployment lines.” Thus, despite the UNCLOS provision, in reality many nations are unwilling to compromise their state’s economic well-being for more protection of the marine environment.
In addition, UNCLOS categorizes a large portion of the ocean as high sea, an area in which all states have the right to use the sea. High sea remains a global common area, in which a “tragedy of the commons” takes a toll on Southern Bluefin Tuna. A “tragedy of the commons” refers, in this respect, to a fishery that exists in open-access waters and is shared internationally. In these areas, there is an unrestricted and unlimited right to use marine resources and fish. The law of the sea is “built upon a number of basic principles. The most important of these is the ‘freedom of the seas’—the ocean’s status as a global common upon which nations’ freedom to travel and extract sources is unimpeded.” This open-access has led to a historical pattern of discovery, expansion, overexploitation, and collapse. This pattern is clearly demonstrated by the historical plight of the Southern Bluefin Tuna, even in EEZs. However, in an attempt to curb the exploitation of marine resources and avoid a classic “tragedy of the commons” problem, UNCLOS attempts to limit state sovereignty in the high sea zones. UNCLOS emphasizes that countries are “subject to the state’s treaty obligations and the rights, duties, and interests of coastal states…” Thus, through UNCLOS’ provisions, the obligations of international cooperation supercede state sovereignty to deplete resources in the high seas.
Although the UNCLOS encouraged international cooperation in the high seas, it did not establish any explicit international instructions. Therefore, despite the UNCLOS’ declaration that international obligations should ideally trump state sovereignty, fishing in the high seas zones is largely unregulated. “Few states have implemented legislation governing the rights and obligations of their vessels on the high seas.” This lack of effective regulation in the high seas inspired regional organizations, like the Northwest Atlantic Fisheries Organization, the South Pacific Forum Fisheries, the International Commission for the Conservation of Atlantic Tunas, and later, the Commission for the Conservation of Southern Bluefin Tuna, to form. Such regional organizations sought to regulate high sea fisheries. However, for many of these organizations, the lack of central leadership and guidelines from the UNCLOS has meant that they have been unable to secure state compliance with their rules and penalties.
B. Commission for the Conservation of Southern Bluefin Tuna
In response to both UNCLOS’s establishment of the high seas and EEZs and the problems of enforcing and setting quotas among Japan, Australia, and New Zealand, these three countries established the Commission for the Conservation of Southern Bluefin Tuna (CCSBT). In May 1993, Japan, Australia and New Zealand voluntarily signed an agreement establishing the CCSBT and agreed to allow the CCSBT to set total allowable catch quotas. The Republic of Korea joined the commission in October 2001 and Taiwan in August 2002, and South Africa is currently discussing membership.
The CCSBT’s goal is to “ensure, through appropriate management, the conservation and optimum utilization of the global SBT fishery. The Commission also provides an internationally recognized forum for other countries/entities to actively participate in SBT issues.” The CCSBT not only sets total allowable catch quotas for Southern Bluefin Tuna, but also conducts an extensive scientific research program. The CCSBT also provides an international and open forum for discussion of any Southern Bluefin Tuna issues, works with other worldwide and regional tuna organizations, and encourages conservation by member and non-member countries.
The CCSBT’s scientific research program consists of five parts: characterization of the Southern Bluefin Tuna catch, improvement of data interpretation and analysis, development of a scientific observer program, a tagging program, and ageing studies. The CCSBT scientific studies have become particularly important; for example, the tagging program reveals valuable information about the species’ slow growth rate. The CSIRO Marine Institute has further assisted with the tagging program by using data-storage tags to track the location of the fish, its body temperature, the water temperature and the time of day or night. CSIRO is also working on new technology in “pop-up” tags, which, when they surface, transmit the data collected on each fish directly to a satellite.
C. The United Nations Conference and the Fish Stock Treaty
Due to the difficulties faced by regional organization enforcement of high sea regulations, the United Nations held a conference to address the issues relating to the management and preservation of highly migratory fish. The Chairperson of the Conference on Straddling Stocks and Highly Migratory Fish Stocks (the Conference), Satya Nandan, stated that preservation needed “global solutions” as it “concerned the international community as a whole.” The Conference sought to address the problems and shortfalls of UNCLOS regarding fishing for straddling stocks and highly migratory marine species.
On August 4, 1994, the countries of UNCLOS approved the Articles of the Convention by consensus and established, for the first time, limits to access and fishing on the high seas. The Conference provided that all states with an interest in fishing migratory fishes or fishing in a high seas region must comply with standards set by regional organizations. Such organizations identify the fish stocks, regulate the fish stocks, and collaborate with other organizations to set parameters for high seas zones. States are specifically required to fulfill this obligation by conducting scientific research of migratory fish stocks, by monitoring the fish stocks in the regulated areas, and by setting catch allocations. Any non-governmental organizations that wish to participate with the regional organizations are permitted to observe and submit any scientific information to organizational meetings.
The Conference further discussed enforcement of the treaty as well as the policing of the new, regional organizations. The Conference adopted strict measures providing that any state found violating any of the provisions would be sanctioned. Such violations could eventually lead to a ban on the state’s fishing rights on the high seas. Flag states have the responsibility to issue licenses or permits in order to ensure their boats’ cooperation with the guidelines set by the regional organizations. The flag states must establish a national registry of the boats, and release the registry on the request of another member state.
Furthermore, any state has the right to board and inspect another state’s boat at any time. Such inspectors are allowed to search the boat, view its licenses and records, and verify compliance with the Articles. If an inspector finds any violations, the boat’s flag state must be immediately notified. The flag state must order the boat to undergo a formal investigation and suspend fishing.
D. The Effect of the Fish Stock Treaty on Southern Bluefin Tuna
The Conference had the enormous task of solving the problem of the world’s over-fishing in the high seas area. “Overall, the Fish Stock Treaty provides for better enforcement…It gives member states more authority to monitor and conduct investigations.” Despite the Conference and the Treaty’s valiant effort to solve this problem, the continual depletion of migratory fish stocks is demonstrated by the Southern Bluefin Tuna. Even after the Treaty was ratified in 1994, the Southern Bluefin Tuna stock in high seas zones, and overall, has continued to decline. Current scientific data shows that the “spawning biomass is at a low fraction of its original biomass and well below the 1980 level. The stock is estimated to be well below the level that could produce maximum sustainable yield. Recruitments in the last decade are estimated to be well below the levels in the period 1950-1980.” Such data demonstrate that the new, stricter enforcement measures the Conference ratified in the Fish Stock Treaty are not achieving protection or preservation goals. Thus, despite international marine regulations like UNCLOS and the Treaty on Fish Stocks, the Southern Bluefin Tuna stock continues to decline.
As the Southern Bluefin Tuna stock continues to decline, a key question arises as to why the Fish Stock Treaty has failed to help the plight of the Southern Bluefin Tuna. Although the Treaty addresses overfishing and poses a possible solution, the Treaty’s enforcement mechanism presupposes an extraordinarily high level of international cooperation that, today, does not exist. For the Treaty to succeed, “flag states, port states, and coastal states all need to be involved in the enforcement throughout the prosecution and sanction process.” States not only have the duty to cooperate in the enforcement of the treaty, but the treaty, through Article 7, also imposes on sovereign states the duty to work together in good faith to set reasonable conservation measures. This level of international cooperation is simply an unrealistic expectation. For example, as the Southern Bluefin Tuna (and the fishing industry as a whole) provide a significant source of income for many states, affecting their citizens at a local level, states are not inclined to cooperate to enforce preservation measures.
Furthermore, such cooperation is needed not only between states at a national level, but also between states and regional organizations in order for the Treaty’s enforcement mechanisms to succeed. Although the Treaty delegates to regional organizations the primary power to impose conservation measures and to set quotas, states may choose their level of participation in the regional organization. For example, Article 8 provides that, as an alternative to becoming a member of the local regional organization, nations may choose to simply apply the regional organization’s measures. This article dilutes the potential power of a regional organization over a state by giving the state an option to choose which measures to utilize. By allowing states to choose their level of participation in regional organizations, Article 8 provides a loophole for states to continue leaving fishing unregulated.
In addition to requiring extraordinary amounts of cooperation between both states and regional organizations, the Treaty requires participating states to expend significant resources in establishing management and enforcement mechanisms. “Countries may simply not have the sufficient means to ensure that vessels flying their flags operate according to the rules.” Establishing management systems to register and flag boats as well as creating enforcement departments to inspect boats all requires a state to expend resources. Some states simply do not have enough resources to expend to fully implement the Treaty’s measures or to make the Treaty’s requirements effective. Thus, due to the high level of international cooperation the Treaty requires, as well as the resources needed to effectively implement the Treaty, illegal fishermen are able to continue to deplete migratory fish stocks such as the Southern Bluefin Tuna specifically in high sea areas.
E. International Tribunal for the Law of the Sea
Another major accomplishment of UNCLOS was its establishment of the International Tribunal for the Law of the Sea (ITLOS) as an enforcement mechanism. The ITLOS is a formal international tribunal that settles disputes and punishes violators relating to any article in the UNCLOS. ITLOS, located in Hamburg, Germany, was officially inaugurated in October 1996, 14 years after the initial ratification of UNCLOS. ITLOS contains 21 judges who are elected by the states that have ratified UNCLOS. “It was designed to adjudicate international disputes arising between States or other international entities, as opposed to private maritime matters.” Since 1996, the ITLOS has heard twelve cases, including the Southern Bluefin Tuna Cases discussed in detail below. Its decisions are binding on the states, and the ITLOS considers itself the “judicial guardian of the marine environment.”
III. Case Study: The Southern Bluefin Tuna Cases
As previously described, the Southern Bluefin Tuna has a high scientific and economic value for countries worldwide. In response to problems of overfishing in the UNCLOS zones and non-compliance with the Fish Stock Treaty, Japan, Australia and New Zealand agreed that the CCSBT would set the total allowable catch quotas for the Southern Bluefin Tuna. However, in 1998, Japan unilaterally increased its catch of the Southern Bluefin Tuna by 2,000 tons. Japan claimed that the increased quota was necessary to perform research on the status of the Southern Bluefin Tuna stock. Japan claimed that the catch increase stemmed from its “commitment to maintain the remaining long-range fishing sector in good health.” Australia and New Zealand officials claimed that Japan breached its agreement for purely economic purposes in order to exploit the Southern Bluefin Tuna market for its sole benefit.
As Japan, Australia and New Zealand could not come to an agreement, Australia and New Zealand submitted the dispute to ITLOS for arbitration pursuant to the UNCLOS. Australia and New Zealand argued not only that Japan’s research would yield minimal results, but also that the research would cause irreversible damage to the remaining Southern Bluefin Tuna stock. Furthermore, Australia and New Zealand argued that the unilateral action taken by Japan violated the UNCLOS, specifically Articles 64, 116-119 and 300. These Articles address the preservation of highly migratory marine species, as well as the “conservation and management of the living resources of the high seas.” For example, Article 119 states:
States shall cooperate with each other in the conservation and management of living resources in the areas of the high seas. States whose nationals exploit identical living resources, or different living resources in the same area, shall enter into negotiations with a view to taking the measures necessary for the conservation of the living resources concerned. They shall, as appropriate, cooperate to establish sub-regional or regional fisheries organizations to this end.
In effect, Australia and New Zealand claimed that Japan breached its UNCLOS obligations by failing to restore the Southern Bluefin Tuna to a sustainable level and by failing to honor its commitment to work together in good faith.
As Australian and New Zealand scientists testified that the Southern Bluefin Tuna was on the edge of economic extinction and, therefore, the marine resource must be conserved, these countries also had clear economic motivations for opposing Japan’s increase in catch. Australia and New Zealand sought to protect their economic interests, namely in exporting the fish. The Australian tuna farming market had exploded into a $150 million per year (AUD) export industry (mainly exporting the tuna to Japan) that brought economic stabilization and improvement to many local economies. Australia and New Zealand compared the status of the Southern Bluefin Tuna to the collapse of the Atlantic Cod in 1991, a formerly valuable local fishing industry. The collapse of the Cod cost the Canadian government over $3 billion. Provided that the Southern Bluefin Tuna was kept at sustainable levels, Australia and New Zealand sought to profit from their own preservation policies. Thus, due to preservation and economic concerns, Australia and New Zealand urged the ITLOS to issue an order immediately stopping any scientific research by Japan, as well as an order for all parties to fish in accordance with quotas set by applying the precautionary principle.
In response to Australia and New Zealand’s arguments and evidence, Japan compiled an optimistic scientific view of the state of the Southern Bluefin Tuna. Japanese scientists claimed that lower catch rates would allow the species to recover at a high rate, while high catch rates would allow the species to recover at a slow rate. Japan’s projected catch rate, they argued, would not significantly slow the Southern Bluefin Tuna’s recovery. Japan’s scientific assessments were based almost entirely on Japanese fishing records and not on independent sources.
On August 27, 1999, the ITLOS issued an interim order that concluded that Japan had, in fact, breached its UNCLOS obligations. The order states that Japan “has breached its obligations under Articles 64, 116-119 of UNCLOS [by] failing to adopt necessary conservation measures…carrying out unilateral experimental fishing…taking unilateral action contrary to the rights and interests…failing in good faith to co-operate…otherwise failing in its obligation under UNCLOS…” Thus, the Tribunal agreed with Australia that the scientific uncertainties evident in the available data demonstrated that action must be taken to conserve the tuna stock.
The order also stated five provisional solutions and sanctions: that the parties prevent any further aggravation or extension of the dispute, the parties keep catch levels to those last agreed and as set by the CCSBT for 1999, the parties cease any experimental fishing programs, the parties resume negotiations, and finally that the parties form agreements with any other states fishing for Southern Bluefin Tuna. The interim order, and all of its provisional solutions and sanctions, became effective immediately in accordance with Article 290 of UNCLOS.
A key question in evaluating the ITLOS’ interim order is why the ITLOS did not order all three countries to cease Southern Bluefin Tuna fishing for a specific period of time. Besides the immediate, negative economic consequences of a moratorium on all Southern Bluefin Tuna fishing, in this case the ITLOS would not have had the jurisdiction to order a moratorium. The UNCLOS Article 230(3), which specifically addresses ITLOS’ jurisdiction, limits the Tribunal’s powers to those measures requested by the parties. As no party requested a moratorium on Southern Bluefin Tuna fishing, the ITLOS did not have the power to craft its own solution. The ITLOS must choose from the solutions argued by the parties.
However, as this order became effective only four days before Japan’s experimental fishing program was scheduled to end, the ITLOS also issued a final resolution that required Japan, Australia and New Zealand to submit to further arbitration. The final resolution required that the countries submit their arguments to a five member tribunal specifically appointed to hear the Southern Bluefin Tuna dispute. Australia, New Zealand and Japan could nominate members to the Tribunal.
Despite Japan’s objections that the Tribunal did not have jurisdiction under UNCLOS, the Tribunal issued its decision on August 4, 2000, concluding the immediate dispute. The Tribunal affirmed its jurisdiction over the proceedings, ordered a revocation of the ban on Japan’s experimental fishing program and ordered the parties to cease from action that would aggravate relations between the states. Japan also agreed to submit to further mediation under the rules of the CCSBT.
A. The ITLOS and the Precautionary Principle
In issuing the interim order of 1999, the ITLOS applied the precautionary principle, a principle of customary international law in the marine environment context. The precautionary principle states that in the event of scientific uncertainty one should conserve a resource to prevent any further environmental damage. The precautionary principle is clearly stated in the Rio Declaration 1992, Principle 15, which provides that:
Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.
Thus, in applying the precautionary principle, a state regulates any activities that may, even in the absence of concrete scientific evidence, hurt or would likely hurt the environment. An argument for the implementation of the precautionary principle places the burden of proof on the party that claims a continued activity does not cause environmental harm.
The precautionary principle is predominantly applied to provide guidance in making management and environmental decisions. It can be found underlying international environmental policy decisions and treaties, including the Fish Stock Treaty. “The precautionary principle has become of tremendous importance because, in many cases, the establishment of proof of cause and effect by scientists is a difficult task, sometimes almost a fruitless search for an indefinite series of events.” Because scientists may not be able to predict future environmental causes and effects with clear certainty, the precautionary principle allows for the argument of conservation in absence of concrete scientific evidence.
The precautionary principle has arguably become a customary norm of international law in the marine environment. A customary norm reflects the general acceptance of the practice in the international arena. Such an acceptance of a norm is practiced under the belief that it is actually required by law and generally seen as a rule governing conduct. Because the precautionary principle has been applied in numerous treaties regulating the marine environment, such as the Law of the Sea, the Fish Stock Treaty, and national fisheries legislation, there is a strong consensus that the precautionary principle in the marine context has become customary law.
The ITLOS interim order applied the precautionary principle in the Southern Bluefin Tuna cases. Although the order does not explicitly refer to the precautionary principle, the principle is reflected in the order’s language. Paragraph 80 states:
Considering that, although the Tribunal cannot conclusively assess the scientific evidence presented by the parties, it finds that measures should be taken as a matter of urgency to preserve the rights of the parties and to avert further deterioration of the southern bluefin tuna stock.
Paragraph 80, further supported by paragraphs 77 and 79, refers to the scientific uncertainty regarding the stock of Southern Bluefin Tuna and holds that, due to this uncertainty, conservation measures must be taken to prevent serious harm to the stock. In making its decision, the Tribunal noted that all parties agreed that the stock was at its lowest levels historically and that, therefore, Japan, Australia and New Zealand must implement conservation measures. In effect, the scientific evidence, even if “uncertain,” triggered the ITLOS’ application of the precautionary principle. Thus, in making its decision to stop Japan’s actions, the ITLOS applied the precautionary principle.
It is also important to note that the ITLOS did not radically apply the precautionary principle. Even if the ITLOS could have banned Southern Bluefin Tuna fishing, transcripts of the ITLOS decisions demonstrate that the ITLOS did not consider a moratorium. The ITLOS did not view the scientific data presented to be so dismal as to justify a ban on fishing. The ITLOS looked at the scientific “trigger”, the stock evidence, and limited its solution to a precautionary conservation level. The ITLOS application of the precautionary principle in the Southern Bluefin Tuna case solidifies and further evidences that the precautionary principle has become customary law in the international marine environment.
B. So, Are the Current Precautions and International Laws Working?
The international laws, precautions, and even the establishment of the CCSBT are not effectively working to protect the Southern Bluefin Tuna. In 1998, Australian scientists predicted that by 2020 there is more than a 50% likelihood that the Southern Bluefin Tuna’s spawning stocks will be at zero. An increase of the quota by 3,000 tons would cause this likelihood to rise to approximately 75%.
Data also suggest that, as of 2005, the Southern Bluefin Tuna is still globally over-fished. The 2005 data demonstrate that the spawning stock is below 1980 levels and is at a fraction of its original production. “Given all the evidence, it seems highly likely that current levels of catch will result in further declines in spawning stock and exploitable biomass…” The Southern Bluefin remains the most over-exploited tuna species.
Due to the lack of improvement of the spawning stock and quarrels over quota setting, the CCSBT has been highly criticized as ineffective. The CCSBT has been condemned by Australian officials as failing to “fulfill its role of conserving the southern bluefin tuna.” As the CCSBT works by consensus, when one country refuses to cooperate the CCSBT becomes useless. Such ineffectiveness is demonstrated in the conflict between Japan, Australia and New Zealand in the Southern Bluefin Tuna case, when Japan refused to cooperate with CCSBT quotas. “The CCSBT itself is all but dead as a means of regulating the Southern Bluefin Tuna fishery.” Thus, the CCSBT, one of the most important international bodies protecting the Southern Bluefin Tuna, faces worldwide criticism.
Furthermore, even though the Tribunal ordered Japan to submit to mediation by the CCSBT, Japan has proved uncooperative in international efforts to protect or preserve Southern Bluefin Tuna. In 1998 the CCSBT had immense trouble setting that year’s quota, as Japan demanded a 3,000 ton increase in the quota plus approval for another experimental fishing program. In October 2006 Japan was caught in a “Southern Bluefin Tuna Scandal,” as it had been illegally taking Southern Bluefin Tuna for the last 20 years for a profit of between $6 and $8 billion. “The scandal was uncovered when Australian investigators found that the amount of highly prized sashimi fish being sold at Japanese markets was more than double the officially reported catch.” As a result of the scandal Japan was forced to cut its official catch to 3,000 tons, reducing the worldwide quota from 14,810 to 11,810 tons. Japan also agreed to submit to controls such as CCSBT representatives on boats and paper trails documenting boat-to-market sales and catches.
The Japanese scandal, international criticism of the CCSBT, and the lack of international cooperation despite the UNCLOS, the Fish Stock Treaty, and ITLOS adjudications demonstrate that international safeguards are not protecting the Southern Bluefin Tuna.
IV. What CAN be done to protect the Southern Bluefin Tuna?
Thoughts and Recommendations
A. The Southern Bluefin Tuna should be nominated to CITES.
Although the Southern Bluefin Tuna’s protection falls under the UNCLOS and the Fish Stock Treaty, and is regulated and researched by CCSBT, the Southern Bluefin Tuna has not yet been nominated to the Convention on International Trade in Endangered Species (CITES) despite its role in international trade. Currently over 5,000 animal species and 28,000 plant species are listed in CITES and 167 countries are members of CITES.
CITES, effective July 1, 1975, seeks to regulate international trade in species and to establish safeguards for species exposed to extinction and overexploitation. The main goal of CITES is to promote sustainable trade in species based on scientific data. Trade, in the context of CITES, refers to the trans-boundary movement of species. CITES classifies species into Appendices I, II or III, which reflect the status of the species. Appendix I species are critically endangered, requiring both import and export permits in order to trade. Appendix I species are the most protected. Appendix II species are not presently endangered, but could likely become endangered if they are not regulated. Appendix III species are also not endangered, but are listed by a nation to prevent exploitation of a species.
Furthermore, in order to trade a marine species, CITES requires that the species must have obtained an “Introduction from the Sea” certification. An Introduction from the Sea refers to a certificate stating that the transported species has been taken from a marine environment. The introduction of a marine species requires confirmation from a scientific authority that trade will not be detrimental to the species. CITES permits nations to collaborate in conducting research into whether trade in a marine species is harmful to the species. Therefore, in addition to any trade permits required by the designated appendix, any marine species must also have an Introduction from the Sea certificate in order to be traded.
In addition to the Appendices and strict permit requirements, CITES also contains provisions for the treaty’s enforcement. In order to comply with CITES, a country must enact laws stating that trans-boundary movement of certain species without a permit is a punishable crime. If a country is accused of non-compliance with CITES, the CITES Secretariat evaluates the state’s domestic regulations and permit actions. If the country is found in non-compliance, it has approximately 18 months to bring its domestic legislation into conformity with the CITES provisions. If, after one extension, a state has still not complied, the Secretariat may approach the CITES Standing Committee and request a trade embargo against the state. When in effect, all CITES members must participate in the trade embargo. All states who join CITES accept the trade embargo as a penalty. Thus, countries who fail to comply with their CITES obligations face a harsh and possibly economically devastating penalty, making CITES enforcement extremely effective.
B. Nominating the Southern Bluefin Tuna to CITES would provide an effective enforcement mechanism to police illegal fishing.
A nomination of Southern Bluefin Tuna to Appendix II of CITES will assist in the protection and policing of trade in the species. Japan, Australia, New Zealand, Korea, Indonesia and Taiwan are all CITES members, and therefore, would all be bound to implement protection and policing of the Southern Bluefin Tuna. CITES would require trade in the Southern Bluefin Tuna to remain at levels that would encourage the regeneration of the species and set trade at a sustainable level. Trade in the fish would require export permits as well as the use of already established scientific and management authorities for research and enforcement. Under CITES, any state responsible for mismanagement of the species, as in the recent Japanese scandal, would face grave consequences due to the Treaty’s enforcement mechanism.
In order for CITES to include the Southern Bluefin Tuna on Appendix II, a member of CITES must nominate the species and the species must be approved by a 2/3 vote at a Conference of the Parties. Australia has been opposed to nominating the Southern Bluefin Tuna to CITES, as it has taken the position that the CCSBT effectively regulates the species. However, with the recent criticism of the ineffectiveness of CCSBT and the discovery of the Japan Bluefin Tuna scandal, Australia may have no other avenue than to make a CITES nomination.
However, a nomination would face fierce opposition from Japan. Japan and its allies within CITES would likely campaign against the Southern Bluefin Tuna nomination and would vote against it. Hopefully, countries like Australia and New Zealand would rally together, create political pressure on other countries to support the nomination, and present scientific research demonstrating the dire situation the Southern Bluefin Tuna faces if left unregulated. Then “Japan would be left in isolation, defending the indefensible—the right to drive a species to the brink of extinction, in order to eat unlimited amounts of tuna sashimi.” Incorporation into CITES is arguably the most practical, logical, and readily available step in the attempt to curb the decline and encourage protection of the Southern Bluefin Tuna.
C. In addition to CITES, technological and human surveillance should monitor each fishing vessel.
Although CITES seems like the next, logical step to increase protection of the Southern Bluefin Tuna, CITES enforcement mechanisms still face some classic problems. For example, it may be almost impossible to regulate and to issue a permit for each fish coming off a fishing boat. The difficulties in the industry itself provide a loophole for illegal fishermen to evade even CITES requirements. Therefore, perhaps the best recommendation to help the plight of the Southern Bluefin Tuna, in addition to the CITES permit requirements, is to require surveillance on board each vessel that catches Southern Bluefin Tuna.
The international organization TRAFFIC Oceania has proposed, through the CCSBT, that such vessel monitoring systems be placed on each boat. These vessel monitoring systems would display the exact location of the boat as well as exactly what the boats are catching. Such a system would be further enhanced by the presence of a CCSBT representative on each vessel to monitor its location and catch. Surveillance systems using both technology and the human element would increase the probability of catching and deterring illegal fishermen.
However, placing surveillance technology and personnel on board each fishing vessel would cost states enormous resources. “Increasing the amount of surveillance could be a costly option but could, in certain circumstances, have the additional benefit of closer monitoring of the fishing activities of legal operators.” In a world that does not have the resources to establish even the basic management system of the Fish Stock Agreement, it may be unrealistic to expect that countries will expend resources on technology and personnel. Yet with the recent Japanese scandal and Japan’s acquiescence to having CCSBT representatives on its vessels, such surveillance solutions are already beginning to be implemented. Combining CITES with surveillance, whether technological, human or both, provides the most effective recommendation for addressing the depletion of the Southern Bluefin Tuna.
D. Other Recommendations
1. Tuna Farming
Another possible future avenue to curb Southern Bluefin Tuna fishing and to let the species rebuild is the creation of more tuna farms. In 1991 an experimental tuna farm was established in Port Lincoln, Australia. The farm received funding from Japan, the Tuna Boat Owners of Australia, and the South Australian Department of Primary Industries. The tuna are caught using the purse-seine method, and then towed to Port Lincoln, where they are placed in underwater cages. The tuna farmers developed a feeding system, using herring and pilchards, which allows the tuna to grow and increase in value. The tuna are kept in the farm for up to six months before they are harvested and sold for up to $20 per kilogram. The harvest occurs between January and March of each year.
The tuna farms are not only useful for producing lucrative fish, but are also important for scientific research. If scientists can create farms that enhance the growth of the fish, perhaps the farms can also re-create an environment that promotes spawning. Although creating this environment is still the subject of research due to the Southern Bluefin Tuna’s migratory nature, the possibility may help the species regenerate back to sustainable levels. Tuna farming thus presents a possible and realistic recommendation to regenerate the species back to sustainable levels.
2. Moratorium and Re-Organization of the CCSBT
A non-governmental organization, the Humane Society International, is urging an international moratorium on Southern Bluefin Tuna fishing. A temporary ban on fishing would allow the stocks to climb back to a sustainable level. “The Humane Society International argues the only way Australia can stop illegal fishing is to have southern bluefin tuna listed as an endangered species with the United Nations, which would either severely restrict trade or stop it.” However, such a solution would have drastic impacts not only on the fishing economies of nations, but also on the livelihoods of fishermen themselves. Therefore, an overall ban on fishing, even if only temporary, seems like an unrealistic solution.
Instead of an overall ban on fishing, TRAFFIC Oceania suggests addressing problems within the CCSBT itself. For example, the CCSBT only requires members to record importation numbers. A possible solution to monitoring and enforcing the total allowable catch quotas is to also audit members’ domestic catches. Such requirements would allow more complete data regarding a nation’s quota and compliance. TRAFFIC Oceania further suggests better monitoring on the fishing vessels themselves, as discussed above. These suggestions directly address the CCSBT’s current monitoring problems; however, they do not provide for any further enforcement mechanisms or penalties for violations, and so must still be coupled with strict enforcement and penalties for states that violate quotas and international treaties. Thus, a reorganization addressing problems within the CCSBT itself is a realistic and plausible recommendation to assist the Southern Bluefin Tuna regenerate back to sustainable levels.
Although international efforts, including UNCLOS, the Fish Stock Treaty, and the CCSBT have implemented regulations and penalties to protect and to preserve the Southern Bluefin Tuna, the continual decline of the species stock and violation of international agreements evidences that such actions are not effective. The international community must explore new options in order to ensure the Southern Bluefin Tuna will not be fished into extinction. Such options should include a CITES nomination, technological and human surveillance on fishing vessels, and a restructuring of the CCSBT. Although such solutions and recommendations may require creativity, cooperation, and leadership, without different protection measures, the world will likely lose an important economic and scientific resource.
Southern Bluefin Tuna Factsheet. ATuna. 1-1. 2005. Available at www.atuna.com/species/species_datasheets.htm#Southern_bluefin_tuna
“About Southern Bluefin Tuna.” Commission for the Conservation of Southern Bluefin Tuna. 1. (2006). Available at www.ccsbt.org.
“Southern Bluefin Tuna Industry: At a Glance.” Australian Fisheries Management Authority. 1-2. (2006). Available at www.afma.gov.au/fisheries/tuna/sbt/at_a_glance.htm.
Chronological Lists of Ratifications of, Accessions and Successions to the Convention and Related Agreements as at 14 September 2006. United Nations Oceans and Law of the Sea. 1. (2006). Available at http://www.un.org/depts/los/.
Mack, Julie. International Fisheries Management: How the UN Conference on Straddling and Highly Migratory Fish Stocks Changes the Law of Fishing on the High Seas. California Western International Law Journal. 313-333. (1995-1996).
Nickler, Patrick A. A Tragedy of the Commons in Coastal Fisheries: Contending Prescriptions for Conservation, and the Case of the Atlantic Bluefin Tuna. Boston College Environmental Affairs Law Review. 549-576. (1998-1999).
About the Commission. Commission for the Conservation of Southern Bluefin Tuna. 1-2. (2006). Available at www.ccsbt.org/docs/about.html.
The Facts: Southern Bluefin Tuna. CSIRO Marine Research. 1-5. (2004). Available at www.marine.csiro.au/leafletsfolder/31sbt/31sbt.html.
United Nations Conference on Straddling and Highly Migratory Fish Stocks: Documents. United Nations Oceans and Law of the Sea. 1. (2006). Available at www.un.org/Depts/los/fish_stocks_conference/fish_stocks_conference.htm.
VanHoutte, Annick. Legal Aspects in the Management of Shared Fish Stocks: A Review. FAO Corporate Document Repository, presented at the Norway-FAO Expert Consultation on the Management of Shared Fish Stocks. 1-3. (7-10 October 2002). Available at http://www.fao.org/DOCREP/006/Y4652E/y4652e04.htm
Schmidt, Carl Christian. Economic Drivers of Illegal, Unreported and Unregulated Fishing. Conference on the Governance of High Seas Fisheries and United Nations Fish Stock Agreement. 1-10. (25 April 2005). Available at http://www.dfo-mpo.gc.ca/fgc-cgp/documents/schmidt_e.htm.
Crunch Time For Endangered Tuna. Ocean Press Releases, Greenpeace. 1-1. 19 February 1998. Available at www.archive.greenpeace.org/pressreleases/oceans/1998feb19.html.
Darby, Andrew. Japan Forced to Halve Bluefin Catch. The Age Company Ltd. 1-2. 16 October 2006. Available at www.theage.com.au/news/national/japan-forced-to-halve-bluefin-catch.html.
Hayes, Elizabeth. A Review of the Southern Bluefin Tuna Fishery: Implications for Ecologically Sustainable Management. TRAFFIC Oceania Report. 1-1. July 1997. Available at www.traffic.org/factfile/tuna_summary.htm.
Sexton, Mike. Calls For International Bans on Tuna Trading. Australian Broadcasting Company: The 7.30 Report. 29 August 2006. Available at http://www.abc.net.au/7.30/content/2006/s1727430.htm.
Computer science is a dynamic field of study, and anyone can take part in its evolution. Physical computing teaches us how to design, build and understand complex systems. It is an approach to learning how we communicate through computers that starts by considering how humans express themselves physically. In practice, we spend a lot of time building circuits, soldering, writing programs, building structures to hold sensors and controls, and figuring out how best to match all of these things to a person's needs. Prototyping makes this work easier and therefore plays an important role in physical computing. Tools such as Wiring, Arduino, Fritzing and I-CubeX help designers and artists quickly prototype their interactive concepts. On top of this, visual programming (or graphical programming) can make programming physical computing with Arduino as easy as drag and drop.
Visual programming languages are valued for their ability to introduce a variety of people, including non-specialists and students, to programming. With color-coded operators, geometrically shaped data types, and no semicolons (!), visual languages have a unique ability to make programming a more intuitive experience. And with the (admittedly necessary) annoyances of syntax removed, more of your programming focus can be directed towards solving the actual problem.
Physical computing can be an introduction to programming, integrating the arts, engineering and computing. It is also open to students who want to learn about microcontrollers and explore more advanced work in computing. In this article, we will study AVR microcontrollers under the Arduino physical computing environment, which is an ideal platform for prototyping high-level microcontroller applications capable of communicating with a PC and displaying interactive graphics. In particular, we will discuss Ardublock, which makes it possible to program an Arduino board graphically. Ardublock is a graphical programming environment that is both easy to use and introduces computational thinking.
Why Visual Programming Language?
By physical computing we mean the building of small, usually interactive systems composed of sensors (like buttons) and actuators (like LEDs and motors) linked by a microcontroller. Arduino meets these criteria and has now been serving this need for about 12 years. Nowadays there are more than 100,000 Arduinos on the market, and within the next 5 to 10 years the Arduino will be used in every school to teach electronics and physical computing. That's a big deal because engineers tend to design platforms for other engineers, not for artists or kids who want to connect stuff up in a simple way to share an idea. The Arduino is simple, but not too simple. There are plenty of other microcontroller boards, but Arduino is probably the best known today and has turned into a global phenomenon. The questions here are:
How do we make it possible for non-programmers to create electronics quickly? How do we make Arduino usable by everyone?
Some studied visual programming languages
Visual programming languages are the answer to these questions. There are a variety of visual languages out there; in the following we list the best known ones and those that we investigated. To make our exploration more concrete, we execute a basic Arduino example, the blinking LED, with every VPL tool.
- Scratch For Arduino
Scratch for Arduino (S4A) is a modified version of Scratch ready for communication with Arduino boards. So what is Scratch? Scratch is open source, educational software aimed mainly at children, designed by the Lifelong Kindergarten group at the MIT Media Lab in 2006 and implemented in Smalltalk (Squeak). It is a graphical programming language that aims to teach children the principles of programming through the creation of simple games and interactive movies. The programming instructions are pieces that have to be stuck together in order to form blocks and make a coherent program, just like a puzzle. Figure 1 presents the basic Arduino example, the blinking LED. After choosing the board, which is an Arduino Uno, and fitting the puzzle pieces together, we can upload the visual program. Note that no source code is generated.
Figure 1: Blinking led example using S4A
The official website of Mind+ defines it as follows: “Mind+ is a flow-based visual programming software for Arduino that enables anyone to make fast prototypes intuitively and enjoy hacking even without programming background.” Mind+ is a free graphical programming software for Arduino, especially for artists and DIY enthusiasts without a programming background. It is composed of several modules that we can use without generating or even seeing the module's source code. So instead of writing code in the Arduino IDE, we can easily make a program with only three software modules for a project. Figure 2 presents the blink example for Arduino. After connecting the modules, we choose the type of Arduino board and the COM port, then we upload.
Figure 2: Blinking led example using Mind+
Minibloq is a graphical programming environment which facilitates the introduction of students into the world of programming. Students use colorful blocks to program physical computing devices very easily. Figure 3 presents the Arduino blink LED example. On the left, the code is generated as the blocks are connected.
Figure 3: Blinking led example using Minibloq
Google Blockly, a programming language influenced by the aforementioned Scratch, is different from other graphical programming languages in that it is not intended for direct use. Instead, Blockly can be seen as a set of libraries that greatly facilitates the development of a Scratch-like programming language. The blink LED example with Blockly involves three steps. First, we assemble the blocks. Then we press the 'Arduino' tab to generate the source code; this is real Arduino code, like the sketch shown below. Finally, we copy the source code, paste it into the Arduino IDE and upload it to the board.
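For reference, the code that these graphical tools produce for the blink example is essentially the standard Arduino Blink sketch. The version below is a minimal sketch assuming the on-board LED on pin 13 and a one-second on/off period; the pin number and the delays are exactly the values that the graphical blocks let the user change.

    void setup() {
      pinMode(13, OUTPUT);      // configure pin 13 (the on-board LED) as an output
    }

    void loop() {
      digitalWrite(13, HIGH);   // turn the LED on
      delay(1000);              // wait one second
      digitalWrite(13, LOW);    // turn the LED off
      delay(1000);              // wait one second
    }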
“Modkit is an in-browser graphical programming environment for little devices called embedded systems. Modkit can currently program Arduino and Arduino compatible hardware using simple graphical blocks similar to and heavily inspired by the Scratch programming environment developed by the Lifelong Kindergarten Group at the MIT Media Lab.” The blinking LED example with Modkit also generates source code: after assembling the blocks, our Arduino is working. However, the generated code is not the same as the source code used by the Arduino IDE.
The Ardublock software is a plugin added to the Arduino IDE which allows the user to program with blocks of functions, a little like Scratch for Arduino or App Inventor for Android. The good point of Ardublock is that it generates the actual lines of code. In addition to blocks that are literal translations of the functions in the Arduino library, it also provides some predefined blocks for working with a variety of electronic components supported by Scoop, Adafruit, DFRobot and TinkerKit from Arduino, with partial support for Grove from SeeedStudio. Example of the blink LED using Ardublock: after adding the Ardublock plugin to the Arduino IDE, we can start with our first program, the blink LED. We connect the blocks and then press upload; the Arduino code is automatically generated and uploaded to the board. Figure 4 illustrates the blink LED example. Ardublock generates the same source code as the Arduino IDE.
Figure 4: Blinking led example using Ardublock
Ardublock is a very convenient way to get people to start to learn Arduino. Why?
We have identified the main use of a visual programming language, which is to let end users program an electronics platform. This work benefits from comparing visual programming environments for Arduino, such as S4A (Scratch for Arduino), Minibloq, Mind+, Blockly, Modkit and Ardublock. The Blocks view in Modkit and Ardublock, while graphical, uses terminology similar to the default Arduino IDE. Other visual environments, such as Minibloq and Mind+, differ more radically in their presentation of programming constructs and their approaches to program construction. Ardublock provides an integrated tool that makes it possible to write Arduino programs using the same style of graphical blocks as Scratch and Blockly. In addition to blocks that are literal translations of the functions in the Arduino library, it also provides some predefined blocks for working with third-party Arduino components. When programming an Arduino board using Ardublock, the graphical program is translated to regular Arduino code, not unlike Blockly's language generators. This facilitates the transition between using graphical blocks for programming and using written C++ code, which is very helpful for novice programmers. Ardublock is a new way of programming physical controllers (Arduino) by drag and drop through graphical methods. It is built around the idea that users will be able to make their systems work without even writing code. The Ardublock team is building and improving it for step-by-step learning. The Ardublock philosophy is to serve as a gateway environment that eventually leads non-programmers to a real interest in text-based programming.
Posted on: July 1, 2014
Crest factor is the ratio of the instantaneous peak amplitude of a waveform to its root mean square (RMS) value. The peak amplitude refers to the instantaneous peak current that may be required by a load, whereas the RMS value represents the effective load current under normal conditions.
The crest factor specifies the properties of an electrical system such as the purity of a signal or waveform, and the capability of a system such as a power supply to output a particular current or voltage.
The ratio, also referred to as the peak-to-RMS ratio, is given by:
Crest Factor (peak-to-RMS ratio) = (peak value)/(RMS value).
The crest factor indicates the extreme peaks of a waveform. For a purely DC system with a resistive load, the value should be 1:1, which is also the minimum. A sinusoidal AC waveform with a resistive load has a crest factor of 1.414. However, the waveform may be distorted when supplying reactive loads with no power factor correction.
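As a quick illustration (not part of the original article), the definition can be checked numerically. The short C++ program below computes the crest factor of two sampled waveforms; the waveforms and sample counts are assumptions chosen only to reproduce the 1.0 and 1.414 figures.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Crest factor = peak value / RMS value, computed from raw samples.
    double crestFactor(const std::vector<double>& samples) {
        double peak = 0.0, sumSquares = 0.0;
        for (double s : samples) {
            peak = std::max(peak, std::fabs(s));   // instantaneous peak magnitude
            sumSquares += s * s;
        }
        double rms = std::sqrt(sumSquares / samples.size());
        return peak / rms;
    }

    int main() {
        const int N = 10000;
        const double PI = 3.14159265358979323846;
        std::vector<double> dc(N, 1.0), sine(N);
        for (int i = 0; i < N; ++i)
            sine[i] = std::sin(2.0 * PI * i / N);  // one full cycle of a sine wave

        std::printf("DC crest factor   ~ %.3f\n", crestFactor(dc));    // ~1.000
        std::printf("Sine crest factor ~ %.3f\n", crestFactor(sine));  // ~1.414
        return 0;
    }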
Some IT equipment and other loads with power factor corrected supplies have a crest factor of about 1.414, whereas those with no correction such as stackable hubs and personal computers have factors of 2 or 3.
For applications requiring a pure sine wave, the supply should ideally have a crest factor of 1.414 or as close to it as possible. Distortions caused by interactions between the supply and the load may affect the relationship between the peak and RMS values and result in a higher value.
The crest factor of a computer load depends on the power source feeding it and may vary from one ac receptacle to another. It is important to note that the crest factor arises due to the interaction between the AC source and the load and that the crest factor required by the load will depend on the AC supply waveform.
The crest factor of a source shows the peak output current it can safely supply above its rated current. Since supplies can provide these higher outputs, they should also have fail-safe circuits that shut down and cut off power if the load continues to draw such high current.
The source must be capable of supplying the peak current demanded by the load; otherwise the source voltage becomes distorted by the excess peak current. Most power source manufacturers provide the crest factor, or peak repetitive current data, to help consumers or designers match their loads to suitable sources. |
Conservation Status: Safe for Now
There are six sub-species of Pitohuis. The level of toxicity varies by sub-species, geographic location, and diet. Some individuals from some populations do not have toxin. The most colorful, the Hooded Pitohui, P. dichrous, and Variable Pitohui, P. kirhocephalus, are the most toxic.
At the Aquarium
Information about this bird, which is not on exhibit at the Aquarium, is supplied for reference.
Native to New Guinea.
Hooded Pitohuis live in the tropical rainforests and jungles of New Guinea, from sea level up into the lower forests.
Hooded Pitohui, members of the family Corvidae (crows, ravens, jays, etc.), are beautiful passerines, i.e., songbirds. Their wing, head, and tail feathers are black and their back and belly feathers orange. They have black legs ending in sharp claws and a black beak that is strong and sharp. Male and female birds have the same coloration. When threatened, these birds erect their head feathers to form a crest.
Adult birds average 23 cm (9 in) in length and weigh about 65 g (2.3 oz).
These birds are omnivorous, feeding on a variety of berries and insects such as ants. Scientists are currently studying whether the toxin also comes from a small beetle that the birds eat (Choresine spp.). These New Guinea beetles are distant relatives of a family of beetles 15,289 km (9,500 mi) away in Central and South America, from which poison dart frogs get some of their toxicity.
Since New Guinea does not have marked seasonal changes, nesting time varies according to location and weather patterns. Only one nest of Hooded Pitohuis has been observed. The nest was cup-shaped and composed of tendrils of climbing plants that had been intertwined on a triangular base of branches. At this nest the birds were observed engaging in cooperative breeding in which at least four adults fed the nestlings. Cooperative breeding has been observed in other Corvids.
Nestlings apparently grow their adult plumage quickly. When threatened, they rise up and crest just as adults do. Although the coloration of their feathers mimics that of adults, their plumage contains only a small amount of toxin, far less than that of the toxic adults. Lacking toxin, the young birds may rely on their striking coloration for protection. It is believed that toxin may be transferred to the nestlings and the nest from the belly and back feathers of the adults. Snakes that prey on eggs and nestlings have been observed eating eggs laid by Hooded Pitohuis and then rapidly regurgitating them.
Hooded Pitohuis are often found flocking with other birds, including Raggiana Birds of Paradise (Paradisaea raggiana). This species, like the pitohuis, does not taste very good. Scientists believe that this association is a type of cooperative relationship in which protection is gained by flocking with the highly unpalatable pitohuis.
These birds advertise their bad taste by emitting a strong, unique odor that may be a warning smell, and with bright colors. Striking color patterns and smells meant to warn off predators are called aposematic.
The diet of Pitohuis is the source of the toxin, homoBTX, that settles in the dander, skin, and feathers of the birds, concentrating in the breast, belly, and leg feathers. Contact with these birds produces a very unpleasant tingling and long-lasting numbing effect, sneezing, and burning, watering eyes. Research studies have shown that the toxin appears to be a protection against parasites such as lice, and also against predators, including humans. Another New Guinea bird, the Blue-capped Ifrita (Ifrita kowaldi), also carries the toxin in its skin and feathers. It is found in a mountainous area above 1,500 m (4,930 ft).
Common in New Guinea, Hooded Pitohuis are not listed as a species of concern.
Some New Guinea tribes people believe that a Hooded Pitohui can be eaten if it is held in the hands and mourned as if it were a dead child. However, a ‘mourner’ must be certain to mourn long enough to make the bird palatable!
Some New Guinea native tribes call the Hooded Pitohui the ‘Wobob’, which refers to an itchy, uncomfortable skin disease that comes from contact with the bird, and also as “rubbish birds” because of their unique odor and the disagreeable sensations that result from touching them. Pitohui cannot be eaten without a great deal of preparation to rid the skin and flesh of the highly unpalatable and dangerous toxin.
Local traditional New Guineans led scientists to the tiny Choresine beetles that Hooded Pitohuis eat. They identified the beetles with the word ‘nanisani’, the name they use to describe the tingling and numbing sensation of the lips and face that results from contacting both the beetles and the birds' feathers. |
Out-Of-This-World Teaching Strategies
Developed by Christopher Altrogge, Jessica Engele, Maria Jeanneau, Nicole Tremblay and Silke Svenkeson
This page is a resource designed for teachers who are looking for fun and easy ways to present the science of astronomy and the solar system to students.
Kids may think that science is a definite thing, but it is not. Just because the textbook teaches something does not mean that the “fact” cannot be tested. Things can be believed as fact for a long time and exist in textbooks for decades or centuries, but a single experiment can blow these theories right out of the water! Students should not feel limited by what they are taught in the textbook; they should feel free to look further and always search for a greater, deeper, or newer understanding of the world and universe around us. There will never be a day that we fully understand everything or invent everything; there is always something new to be discovered!
To remember the planets and their order from the sun, just remember the following acrostic:
My Very Elderly Mom Just Showed Us Nintendo
Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune
To remember the five recognized dwarf planets (Ceres, Haumea, Eris, Makemake and Pluto), use the “C-H-E-M-P” acronym.
A visual timeline, “Pluto Through Time,” is provided for teachers, in order to visualize a portion of their lesson, and to ensure that there is a section of their lesson devoted to visual learners. Visual learners retain content best when it is presented through images and other means of visualizing information, such as timelines; however, all students should be encouraged to examine the timeline.
“Name that Planet!”: Instruct students to research 3-4 characteristics of a planet and get other students to guess that planet.
Mini-poster presentation: Get students to research a planet, dwarf planet, or any astronomical object such as the moon, sun, asteroids, or comets. Then, get them to present their topic to the class and add all the 8×11½ posters into a booklet. Now your class will have its own guideline to understanding the universe!
Q: Why do people study space? How do our discoveries that are ‘out of this world’ pertain to us right now?
Q: Why is it important to study things that aren’t in our solar system?
Q: How does knowledge about the formation of our solar system help us today? How do Pluto and the Kuiper Belt contribute to this knowledge?
Q: For many years, Pluto was believed to be a planet and the Kuiper Belt was completely unknown to us. Is science definite? Can we ever fully understand our solar system? How can we prevent errors in our discoveries? Explain why it is important for scientists to be constantly testing the knowledge we already have.
Q: Earth is the only planet in our solar system that can support life. The further we explore, the more we learn about space. Do you think there is another planet in space that could support life? Do you think it was just chance that the Earth has developed such an advanced form of life? Do you think there is life out there as evolved as ours? Will we ever know for sure or be able to contact them?
Videos for the Classroom
- This video focuses on our place in the solar system, reviews the 8 planets and their locations, and mentions that Pluto is a dwarf planet. Aimed at kids under Grade 3.
- This video is an episode of the Magic School Bus. It explains our solar system and the planets for children in an entertaining way. It was made before Pluto was reclassified as a dwarf planet and before interest in the Kuiper Belt rose; however, with the information on this page, you will be able to explain these updates alongside the video.
- This site contains several videos about astronomy in general, aimed for high school students or middle school students. A great supplement to a lesson plan!
- This video focuses on the Kuiper Belt and it is geared towards high school age students.
- This is a series of videos called “Pluto in a Minute”. These videos explain many more interesting facts about Pluto in just a minute.
- A kid-friendly website describing Pluto, the Kuiper Belt, and all the planets in our solar system – Great for elementary aged students.
- This site is developed for teachers and contains great resources on astronomy as a whole, including additional activities, workshops and information on astronomy and space. There are specific pages on the site dedicated to dwarf planets and to Pluto!
- This is a wonderful resource for kids to explore the universe and learn a bit about astronomy.
- For high school or middle school students, this site explains many fascinating facts about Pluto and the newest discoveries from the New Horizons mission.
- Again for older students, this site offers great information about various astronomical topics.
Field trip to the University of Saskatchewan Observatory
From October to February: 7:30 – 9:30 PM
March and September: 8:30 – 10:30 PM
April and August: 9:30 – 11:30 PM
May to July: 10:00 – 11:30 PM
Free admission. Tours can be arranged for school and community groups on Friday evenings during the school year. You can book a tour by phoning (306) 966-6396 and get more information here!
Camping trip out in the country
Plan a trip to a campground or farm out of the city and gaze at the stars! Follow this link to discover interesting and up-to-date objects in the night sky:
Have a camp fire and tell star legends, like how the constellations got their names and the stories behind them.
- This site includes current astronomical events, when and how to observe these events, and some great information for beginner astronomers. A current newsfeed is also included, offering information on the most recent developments in the field of astronomy.
- This site is designed for teachers and includes many different resources, one of which involves the legends and tales behind some popular constellations!
- Watch for Activities presented by The Royal Astronomical Society of Canada: Saskatoon Centre for public presentations and additional observing opportunities (good for older kids)
- Check out the S.P.A.C.E (Saskatoon Public AerospaCe Education) Club! They offer hands-on activities for grades 5-8 in a creative workshop environment!
Check out Engineering For Kids; they offer an AeroSpace program in the form of camps and classes. Kids from K-8 will learn, in a hands-on and creative atmosphere, about what is beyond our atmosphere! |
CubeSats, the small, square, inexpensive satellites loved by startups and academics, are changing how we study the Earth and space. For example, Skybox Imaging wants to massively increase the satellite coverage of Earth's surface with a fleet of CubeSats. Once the small satellites hitch a ride with an outgoing spacecraft, they can relax in Earth's orbit and collect data. But what if they could leave Earth's orbit and journey to Mars, or beyond? Wouldn't that require them to be far more expensive? Not according to Benjamin Longmier, Ph.D., who's heading up the CAT project. His solution: Propel CubeSats with water.
CAT stands for the CubeSat Ambipolar Thruster. Longmier and a team at the University of Michigan’s Plasmadynamics and Electric Propulsion Laboratory are developing a thruster that converts water into a plasma propellant, which will help CubeSats break free from Earth's orbit and head out to parts unknown. Solar panels provide CAT with a constant renewable energy source.
It all sounds great, but CAT isn't a reality yet. The project has some private funding, but a crowdfunding drive in July failed to bring in the $200,000 Longmier sought. Now the team has relaunched its Kickstarter with a more conservative $50,000 goal. That $50,000 will supposedly be enough to get the project started, using the traditional propellant of xenon gas in place of water for early engine testing. If CAT hits its stretch goals, or brings in more money, the team hopes to bring it to NASA's Technology Readiness Level 8, making it worthy of a jaunt out into the solar system.
The Kickstarter page claims CAT could propel an 11 pound CubeSat the 80,000,000 miles to Mars using 5.5 pounds of fuel. It explains the technology, stating "Just like a normal rocket that produces thrust from the burning and expansion of hot gases, CAT produces thrust from the expansion of a super-heated 350,000 °C plasma stream. Plasma is an ionized gas that can be accelerated to produce thrust (F=ma). The force generated by this thruster will be very low (micro-newtons) but very efficient. The engine will be turned on for long durations, accelerating the spacecraft to much higher velocities than a typical chemical rocket."
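The claim can be sanity-checked with the Tsiolkovsky rocket equation. In the short C++ sketch below, the figures are my assumptions rather than the project's: that the 11 pounds is dry mass, that the 5.5 pounds is propellant, and that a transfer from low Earth orbit toward Mars needs roughly 4 km/s of delta-v. Under those assumptions the required exhaust velocity comes out near 10 km/s (a specific impulse around 1,000 s), which is in the range expected of plasma propulsion and far beyond chemical rockets (roughly 300 to 450 s).

    #include <cmath>
    #include <cstdio>

    int main() {
        // Assumed figures, not taken from the Kickstarter page.
        const double dryMassKg  = 11.0 * 0.4536;   // ~5.0 kg spacecraft dry mass
        const double propMassKg = 5.5  * 0.4536;   // ~2.5 kg of propellant
        const double deltaV     = 4000.0;          // m/s, assumed mission requirement

        // Tsiolkovsky rocket equation: delta-v = v_e * ln(m0 / mf)
        double massRatio       = (dryMassKg + propMassKg) / dryMassKg;  // = 1.5
        double exhaustVelocity = deltaV / std::log(massRatio);          // required v_e
        double isp             = exhaustVelocity / 9.81;                // specific impulse, seconds

        std::printf("Required exhaust velocity ~ %.0f m/s (Isp ~ %.0f s)\n",
                    exhaustVelocity, isp);
        return 0;
    }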
The project page also lists out some of the many possible uses for mobile, exploratory CubeSats. They could easily move around the Earth, providing Internet coverage or gathering weather information. They could give us the same kind of weather readings in orbit around other planets. Given how much we've learned from each individual probe sent to Mars, imagine how much more we could learn from a fleet of CubeSats orbiting the planet. $50,000 is just a small step in that direction--the stretch goals estimate needing $1,750,000 to get a CubeSat on its way to Mars--but that's still remarkably cheap compared to just about everything else in the field of space exploration. |
October 4, 2011
In Reading Facial Emotion, Context Is Everything
In a close-up headshot, Serena Williams’ eyes are pressed tensely closed; her mouth is wide open, teeth bared. Her face looks enraged. Now zoom out: The tennis star is on the court, racket in hand, fist clenched in victory. She’s not angry. She’s ecstatic, having just beaten her sister Venus at the 2008 U.S. Open.
“Humans are exquisitely sensitive to context, and that can very dramatically shape what is seen in a face,” says psychologist Lisa Feldman Barrett of Northeastern University and Massachusetts General Hospital/Harvard School of Medicine. “Strip away the context, and it is difficult to accurately perceive emotion in a face.” That is the argument of a new paper by Barrett, her graduate student Maria Gendron, and Batja Mesquita of the University of Leuven in Belgium. It appears in October’s Current Directions in Psychological Science, a journal published by the Association for Psychological Science. The paper, which Barrett says reviews a handful of the hundreds of studies supporting the authors’ position, refutes the contention that there are six to 10 biologically basic emotions, each encoded in a particular facial arrangement, which can be read easily in an image of a disembodied face by anyone, anywhere.
Facial-emotional perception is influenced by many kinds of contexts, says the paper, including conceptual information and sense stimuli. A scowl can be read as fear if a dangerous situation is described, or as disgust if the accompanying body posture indicates a reaction to a soiled object. Eye-tracking experiments show that, depending on the meaning derived from the context, people focus on different salient facial features. Language aids facial perception, as well. Study participants routinely did better naming the emotions in pouting, sneering, or smiling faces when the experimenter supplied words to choose from than when they had to come up with the words themselves.
Equally important is the cultural context of an expressive face. People from cultures that are psychologically similar can read each other’s emotions with relative ease, an effect that similar language or even facial structure does not produce. Culture even influences where a person seeks information to interpret a face. Westerners, who see feelings as inside the individual, focus their attention on the face itself. Japanese, meanwhile, focus relatively more on the surroundings, believing emotions arise in relationship.
The real-world implications of such research are “substantial,” says Barrett. For instance, it offers needed nuance to the understanding of changes in emotion perception in people with dementia or certain psychopathologies, and even in healthy older people, all of whom “may have difficulty accurately perceiving emotion in static caricature faces, but might do fine in everyday life,” where context is available. In law enforcement, “the Transportation Safety Administration and the other government agencies are training agents to detect threat or deception using methods based on the idea that a person’s internal intentions are broadcast on the face.” If they’re learning to decipher faces out of context, “millions of training dollars might be misspent,” says Barrett. This means that a misguided psychological notion could be putting public safety at risk.
How Tornadoes Form
Tornadoes are associated with large (supercell) thunderstorms that often grow to over 40,000 feet. A column of warm humid air will begin to rise very quickly.
How the column of air begins to rotate is not completely understood by scientists, but one way the rotation appears to happen is when winds at two different altitudes blow at two different speeds, creating wind shear. For example, a wind at 1,000 feet above the surface might blow at 5 mph and a wind at 5,000 feet might blow at 25 mph. This causes a horizontal, rotating column of air.
If this column gets caught in a supercell updraft, the updraft tightens the spin and it speeds up (much like a skater spins faster when the arms are pulled close to the body). A funnel cloud is created.
The rain and hail in the thunderstorm cause the funnel to touch down, creating a tornado. |
The Celebration of Reconciliation
In Chapter 16 the students learn about the celebration of the sacrament of Reconciliation and the Rite of Reconciliation for several and individual penitents.
- Click here for the Chapter 16 activity.
Distribute the activity sheets. Read the directions aloud. Using a paper clip and the tip of a pen or pencil, demonstrate how to make the spinner for the game. Encourage the students to use their We Believe texts to help them write their questions for the "Wheel of Reconciliation" game. Give the students time to complete each part of the "wheel" with a question about the sacrament of Reconciliation. Then have students work with a partner to play the game. Partners should exchange "wheels" and take turns "spinning" and recording their answers. When all questions have been answered on each "wheel," ask the partners to use their We Believe texts to help score and tally each other's answers. Correct answers are worth one point.
Answers will vary.
Multiple Intelligences: Linguistic, Bodily/Kinesthetic, Interpersonal, Logical/Mathematical |
Humpty Dumpty Reconstruction
You are living in 1876, a time of great crisis for America. Many Reconstruction Plans have been tried, but most seem to create as many problems as they solve. Our country has just experienced a major financial recession. We have just had a Presidential Election between Hayes and Tilden, but still do not have a President. (This may sound similar to a recent election) Tilden won the popular vote, but was one electoral vote short of winning the election. Disputed election results in three Southern states, Florida, South Carolina, and Louisiana, held the power of deciding America’s future. Congress held an election commission to resolve the problem, but there were an equal number of Democrats and Republicans on the Commission. There was a stalemate, and so no President is selected as we approach Inauguration Day, 1877. A Compromise is needed to restore the union and solve the Presidential Crisis. In the real Compromise of 1877, Reconstruction ends, as does the hope of liberty for the newly freed African Americans, and the dream of creating a more democratic America. You and your group have been selected and hired to write a compromise that does provide for civil liberties and makes America the home of the free. You have the power to make a difference and make America a better place for all who live here….today and in the future. Be persuasive. Be creative.
Save 25% and buy the bundle!
This is a bundle of 6 ALL-in-One Reading Strategies lesson plans.
Are you in a pinch and need a lesson fast? These are just the lessons for you! Powerpoints introduce each topic with interactive activities for the students to follow along with.
Strategies included are:
Main Idea and Key Details
Comparing and Contrasting
Check out the previews here for a full description of each one:
Main Idea and Details
Compare and Contrast
Each lesson comes with an editable PowerPoint, worksheets, an exit ticket, and a lesson outline. Some lessons require the teacher to provide the text; however, I offer book suggestions in those lessons if needed. These lessons are a great introduction to, or review of, the reading comprehension strategies for students. |
Measuring more of biodiversity for choosing conservation areas, using taxonomic relatedness
Williams, P. H. (1993)
In: Moon TY, ed. International Symposium on Biodiversity and Conservation (KEI). Seoul. 194-227.
One of the major goals of conservation is to maintain, with only limited resources, as much as possible of the variety of life. If we are to choose among areas in order to protect the greatest overall amount of biodiversity, then we need to be able to measure and compare biodiversity among areas. Biodiversity has usually been measured only in terms of species richness. Diversity, however, also includes a concept of difference, and the degree of difference between organisms can be represented in biodiversity measures using readily available information on group membership from taxonomic classifications. Furthermore, by using the complementarity in species composition between faunas, stepwise procedures can identify optimal sequences of priority areas for biodiversity protection, taking existing protected areas into account or not, as required. In some circumstances it may be possible to apply this approach to higher taxa, rather than to species. This could greatly reduce survey costs, allowing survey effort to be redeployed to cover much more of overall biodiversity. These methods are illustrated by their application to the bumble bees and milkweed butterflies. |
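To illustrate the stepwise, complementarity-based selection described above, here is a minimal C++ sketch of a greedy procedure that repeatedly picks the area adding the most species not yet represented in the chosen network. The areas and species lists are invented example data, not taken from the paper.

    #include <cstdio>
    #include <set>
    #include <string>
    #include <vector>

    int main() {
        // Each area holds a set of species identifiers (hypothetical data).
        std::vector<std::string> areaNames = {"A", "B", "C", "D"};
        std::vector<std::set<int>> areas = {
            {1, 2, 3, 4},    // area A
            {3, 4, 5},       // area B
            {5, 6},          // area C
            {1, 6, 7}        // area D
        };

        std::set<int> covered;                          // species already represented
        std::vector<bool> chosen(areas.size(), false);

        for (size_t step = 0; step < areas.size(); ++step) {
            int best = -1, bestGain = 0;
            for (size_t i = 0; i < areas.size(); ++i) {
                if (chosen[i]) continue;
                int gain = 0;                           // species this area would add
                for (int sp : areas[i])
                    if (!covered.count(sp)) ++gain;
                if (gain > bestGain) { bestGain = gain; best = (int)i; }
            }
            if (best < 0) break;                        // nothing new left to add
            chosen[best] = true;
            covered.insert(areas[best].begin(), areas[best].end());
            std::printf("Pick area %s (+%d species, %zu covered)\n",
                        areaNames[best].c_str(), bestGain, covered.size());
        }
        return 0;
    }

Run on this toy data, the procedure picks areas A, C and D and never selects B, because everything B contains is already covered: the complementarity criterion, not raw species richness, drives each choice.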
An extensive study conducted on school children in Western Canada has proved that immunizing kids and adolescents goes a long way towards protecting the entire community from communicable diseases like the flu, thanks to a phenomenon known as “herd immunity.”
The findings come at a time when vaccine phobia is one of our largest public health concerns, with many parents worrying that immunizing kids can lead to adverse side effects. A recent survey revealed that one in four U.S. parents think that vaccines might cause autism, probably due in part to a 1998 paper published in the journal The Lancet that wrongly linked autism to vaccines; that paper has since been refuted and fully retracted by the journal.
Now, scientists have more evidence that vaccines provide a public health benefit. Researchers studying youngsters in 49 remote Hutterite farming colonies in Canada found that giving flu shots to almost 80 percent of a community’s children created a herd immunity that helped protect unvaccinated older people from illness. As children often transfer viruses to each other first and then pass them along to grown-ups, the study provided solid proof that the best way to contain epidemics like the recent H1N1 outbreak is to first vaccinate all the kids. By immunizing the most germ-friendly part of the herd first, you indirectly protect the rest of the community, scientists say.
This is not the first time that scientists have found evidence that herd immunity can help protect the unvaccinated, but it’s the most definitive study on the subject yet. Researchers say this is the first such study to be conducted in such remote and isolated communities (the Hutterites’ religious beliefs keep them separate from mainstream society), which reduced the chance that subjects could contract flu from other passing sources. Scientists say the new study provides “incontrovertible proof” that the shots themselves — rather than luck, viral mutations, hand-washing or any other factor — were the crucial protective element [The New York Times].
The study, published in The Journal of American Medical Association, focused on Hutterite farming colonies in Western Canada, where the people live in rural isolation in clusters of about 160 people. Though Hutterites drive cars and tractors, they shun radio and TV and each colony lives like a large joint family–eating together, going to a Hutterite school, and owning everything jointly.
In 25 of the colonies that joined the study, the scientists took school kids aged 3 to 15 years old and gave them flu shots in 2008. In 24 other colonies, the kids got placebo shots. In 2009, the researchers found that more than 10 percent of all the adults and children in colonies that received the placebo had had laboratory-confirmed seasonal flu. Less than 5 percent of those in the colonies that received flu shots had [The New York Times].
The study found that by vaccinating the kids against influenza, almost 60 percent of the larger community was granted “herd immunity” and protected against the illness. Carolyn Bridges, an expert in influenza epidemiology at the Centers for Disease Control and Prevention, says the study implies that giving flu shots only to schoolchildren would protect the elderly just as well as giving flu shots to the elderly themselves. The C.D.C. would never recommend that, she cautioned, “Because you still should vaccinate high-risk people” [New York Times].
The Hutterite study’s findings are in line with a previous study conducted in 1968, in Tecumseh, Michigan. In that study, flu expert Arnold Monto vaccinated almost 85 percent of the town’s schoolchildren during flu season. At the end of season, the town had only a third as many flu cases as nearby Adrian, Mich., which received no shots. There were far fewer cases of flu in all age groups [New York Times].
reactance, in electricity, measure of the opposition that a circuit or a part of a circuit presents to electric current insofar as the current is varying or alternating. Steady electric currents flowing along conductors in one direction undergo opposition called electrical resistance, but no reactance. Reactance is present in addition to resistance when conductors carry alternating current. Reactance also occurs for short intervals when direct current is changing as it approaches or departs from steady flow, for example, when switches are closed or opened.
Reactance is of two types: inductive and capacitive. Inductive reactance is associated with the magnetic field that surrounds a wire or a coil carrying a current. An alternating current in such a conductor, or inductor, sets up an alternating magnetic field that in turn affects the current in, and the voltage (potential difference) across, that part of the circuit. An inductor essentially opposes changes in current, making changes in the current lag behind those in the voltage. The current builds up as the driving voltage is already decreasing, tends to continue on at maximum value when the voltage is reversing its direction, falls off to zero as the voltage is increasing to maximum in the opposite direction, and reverses itself and builds up in the same direction as the voltage even as the voltage is falling off again. Inductive reactance, a measure of this opposition to the current, is proportional to both the frequency f of the alternating current and a property of the inductor called inductance (symbolized by L and depending in turn on the inductor’s dimensions, arrangement, and surrounding medium). Inductive reactance XL equals 2π times the product of the frequency of the current and the inductance of the conductor, simply XL = 2πfL. Inductive reactance is expressed in ohms. (The unit of frequency is hertz, and that of inductance is henry.)
Capacitive reactance, on the other hand, is associated with the changing electric field between two conducting surfaces (plates) separated from each other by an insulating medium. Such a set of conductors, a capacitor, essentially opposes changes in voltage, or potential difference, across its plates. A capacitor in a circuit retards current flow by causing the alternating voltage to lag behind the alternating current, a relationship in contrast to that caused by an inductor. The capacitive reactance, a measure of this opposition, is inversely proportional to the frequency f of the alternating current and to a property of the capacitor called capacitance (symbolized by C and depending on the capacitor’s dimensions, arrangement, and insulating medium). The capacitive reactance XC equals the reciprocal of the product of 2π, the frequency of the current, and the capacitance of that part of the circuit, simply XC = 1/(2πfC). Capacitive reactance has units of ohms. (The unit of capacitance is farad.)
Because inductive reactance XL causes the voltage to lead the current and capacitive reactance XC causes the voltage to lag behind the current, total reactance X is their difference—that is, X = XL - XC. The reciprocal of the reactance, 1/X, is called the susceptance and is expressed in units of reciprocal ohm, called mho (ohm spelled backward). |
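As a quick worked example (the component values are assumptions, not from the article), the formulas above can be evaluated directly. The short C++ program below computes the inductive reactance, the capacitive reactance, the total reactance and the susceptance for a 60 Hz circuit.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = 3.14159265358979323846;
        double f = 60.0;       // frequency in hertz (assumed)
        double L = 0.10;       // inductance in henrys (assumed)
        double C = 50e-6;      // capacitance in farads (assumed)

        double XL = 2.0 * PI * f * L;          // inductive reactance  (~37.7 ohms)
        double XC = 1.0 / (2.0 * PI * f * C);  // capacitive reactance (~53.1 ohms)
        double X  = XL - XC;                   // total reactance      (~-15.4 ohms)
        double B  = 1.0 / X;                   // susceptance, in mhos

        std::printf("XL = %.2f ohm, XC = %.2f ohm, X = %.2f ohm, B = %.4f mho\n",
                    XL, XC, X, B);
        return 0;
    }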
Mayer describes multimedia as modern presentation modes (text, pictures, etc.) and different modalities (visual, auditory, etc.) that are presented by an integrated technical system such as a computer or the internet. According to Mayer, multimedia learning refers to learning from words and pictures, and multimedia instruction refers to the presentation of words and pictures. Mayer thus distinguishes between multimedia learning and multimedia instruction. According to his description, multimedia instruction is the learning material that presents words and pictures intended to promote learning, whereas multimedia learning refers to the learner-constructed knowledge that builds mental representations from these words and pictures, that is, from the multimedia instruction.
Mayer (2009) states three views of multimedia messages. Multimedia messages can be based on
the delivery media such as amplified speaker and computer screen
presentation modes such as words and pictures, or
sensory modalities such as auditory and visual.
In the first view, it is clear that the delivery-media view is technology centered: the focus is on technology rather than learners, that is, on the devices used to present information rather than on how people learn. The other two views are learner centered. These views are consistent with a learner-centered approach and based on the cognitive theory of how people learn. Moreover, these two views are consistent with constructivist learning, in which knowledge is actively constructed rather than passively transmitted and absorbed. The only distinction between them is that the sensory modalities view of multimedia is consistent with a cognitive theory of learning that assumes humans have separate information processing channels for auditory and visual processing, whereas the presentation modes view is consistent with a cognitive theory of learning that assumes humans have separate information processing channels for verbal and pictorial knowledge.
In the following figure, Mayer (2009) describes his cognitive theory of multimedia learning.
Figure: Cognitive Theory of Multimedia Learning
This cognitive theory of multimedia learning is based on three assumptions (2009):
dual channels: there are separate channels for processing visual and auditory experiences and information in humans' memory
limited capacity: each information channel is limited in its ability to process the amount of information and experiences at one time
active processing: processing experience and information in channels is an active process designed to construct coherent mental representations
According to this model, the learner must engage in five cognitive processes or steps in order for meaningful learning to occur in a multimedia environment. First, the learner selects the relevant words for processing in verbal working memory. Then, the learner selects relevant images for processing in visual working memory. After that, the learner organizes the selected words into a verbal mental model and the selected images into a visual mental model. Finally, the learner integrates the word-based and image-based representations with each other and with prior knowledge (Mayer, 2009).
Mayer's cognitive theory of multimedia learning draws on Paivio's (1986) dual coding theory, Sweller's (1988) cognitive load theory, Baddeley's (1992) model of working memory, Mayer's (1996) SOI model of meaningful learning and Bruner's constructivist theory.
The working memory model explains what happens to information after it is perceived by the sense organs and suggests that there are separate slave systems, the phonological loop and the visuo-spatial sketchpad, for processing verbal and visual information (Baddeley, 1992). Dual coding theory builds on the working memory model and suggests that humans have two separate systems for representing verbal and nonverbal information. Verbal and nonverbal information is processed differently and in separate channels, and although these systems are structurally and functionally independent, they are also interconnected (Paivio, 1986). Cognitive load theory draws on findings from studies of dual coding, for example that the information processing system consists of two independent channels for processing and representing information, each limited in its capacity, and suggests that learning happens best under conditions that are aligned with human cognitive architecture. Cognitive load theory is concerned with the way cognitive resources are focused and used during learning and problem solving (Sweller, 1988).
Although Mayer's cognitive theory of multimedia learning draws on many theories, it is based most directly on Paivio's dual coding theory, which assumes that humans have separate information processing channels for verbal and pictorial information carried by auditory and visual messages. According to Mayer's theory, the learner has both a visual and a verbal information processing system. For example, auditory narration goes into the verbal information processing system, whereas animation goes into the visual information processing system. Since dual coding theory is built on Baddeley's working memory model, and working memory includes verbal and visual channels (the phonological loop and the visuo-spatial sketchpad), we can also say that Mayer's theory is consistent with Baddeley's. Mayer also uses Sweller's cognitive load theory to understand how humans learn and what cognitive limitations they have for processing information. Drawing on cognitive load theory, Mayer suggests that presenting too many elements, such as words and pictures, in multimedia material can overload the visual or verbal information processing systems. Mayer also supports constructivist learning theory. Considering constructivist learning, Mayer suggests that cognitive construction and active learning depend on the learner's cognitive processing during the learning process. For example, a learner constructs new knowledge by being actively and mentally engaged in the learning process, even while sitting passively in a chair and watching a presentation. Mayer also uses his SOI model of meaningful learning in building the cognitive theory of multimedia learning. In this model, learners are again knowledge constructors who pay attention to relevant words and pictures in a multimedia message in order to produce meaningful learning, organize the information into coherent verbal and pictorial models, and integrate it with prior knowledge.
Through his theory, Mayer has contributed to establishing a cognitive theory of multimedia learning that builds on how people learn, and his work continues to contribute greatly to theories and principles about learning in multimedia environments. His theory and principles are also a valuable resource for instructional designers who must consider the cognitive processes involved in learning. Instructional designers need to consider learners and their memory capacities. They need to design learning materials that maximize attention to learning activities and minimize learners' attention to activities not directly related to learning. Considering new delivery media such as mobile phones, tablet PCs and smartphones, I think future research needs to be conducted to evaluate multimedia using this theory in real-world contexts, or new theories need to be derived from Mayer's to adapt it to present-day conditions. |
Three main factors are taken into account when diagnosing anaemia: too few healthy red blood cells (RBCs), too little hemoglobin (the protein that binds oxygen), and a low hematocrit. Anaemia is a blood disorder.
There are possible signs of this disease such as:
- tiredness or weakness,
- pale or yellowish skin,
- faintness or dizziness,
- increased thirst,
- rapid breathing, shortness of breath,
- lower leg cramps,
- heart-related symptoms (abnormal heart rhythms).
Anaemia is evaluated when these measurements are well below the standard values and cannot be explained by age and sex alone. Based on the course of the disease, there are four grades of anaemia, which depend on the level of hemoglobin.
A more common classification of anaemia is based on the causes of the disease. The most common form is iron deficiency anaemia, which is usually due to chronic blood loss caused by excessive menstruation, traumatic hemorrhage or gastrointestinal bleeding. Iron deficiency anaemia may also develop when demands for iron increase, i.e. during foetal growth in pregnancy and in children undergoing rapid growth spurts in infancy and adolescence. Treatment for iron deficiency anaemia will depend on the cause and severity of the condition and may include dietary changes and supplements, medicines, and blood transfusion.
Pernicious anaemia is a form of megaloblastic anaemia due to vitamin B12 deficiency, dependent on impaired absorption of vitamin B12. This type may also develop in people who have conditions that prevent them from absorbing vitamin B12 (for example, certain autoimmune disorders that involve the endocrine glands) and in strict vegetarians. Treatment for this type of anaemia includes vitamin B12 supplements and dietary changes (eating foods rich in vitamin B12, such as meat, fish, eggs and dairy products, as well as breads, cereals and other foods fortified with vitamin B12).
Aplastic anaemia (inherited, as in Fanconi anaemia (FA), or acquired) is a blood disorder in which the body’s bone marrow doesn’t make enough new blood cells. This may result in a number of health problems including arrhythmias, an enlarged heart, infections, bleeding and even leukemia. Aplastic anaemia can be caused by exposure to toxins such as pesticides, as well as by radiation and chemotherapy, or by medicines such as chloramphenicol.
Haemolytic anaemia is a condition in which red blood cells are destroyed and removed from the bloodstream before their normal lifespan is up. This type may be inherited (as in the thalassaemias or sickle cell anaemia) or acquired. If acquired, it may be caused by autoimmune disorders, but also by heavy metals, sulfonamides or even malarial infections.
Treatments for anaemia depend on its cause and severity. Supplements given orally or intramuscularly can replace specific deficiencies, and blood transfusions, marrow stem cell transplants and lifestyle changes may also be used. |
Frederick Lanchester, a British mathematician, tried to apply mathematical analysis to warfare. Mathematics in the form of operational research or logistics has some very practical applications for the military, but Lanchester was interested in a more abstract analysis of warfare. For example, there is the Principle of Concentration, which says that the best strategy is to concentrate the whole of a belligerent's forces on a definite objective. Lanchester's analysis provides a justification of that principle.
Lanchester developed his analysis from a set of differential equations. Let n1 and n2 be the numerical strengths of two military forces; these are the numbers of fighting units. They could be the number of infantry soldiers or the number of tanks in a tank battle. The rates of casualties for the two forces are then:

dn1/dt = -c2·n2
dn2/dt = -c1·n1

where c1 and c2 are coefficients that reflect the effectiveness of the units of forces 1 and 2, respectively.
Lanchester then asked what condition determines the fighting strength of two forces. He argued that the two strengths are equal if both suffer the same proportional losses; i.e.,

(1/n1)(dn1/dt) = (1/n2)(dn2/dt)

This condition, combined with the differential equations, then implies that

c1·n1² = c2·n2²

Thus the fighting strengths of the two forces are equal when the products of the squares of the numerical strengths times the coefficients of effectiveness are equal. In other words, the strength of a fighting force is the product of the square of its numerical strength and the effectiveness of an individual fighting unit, ci·ni².
This justifies the Principle of Concentration. In other terms, there are economies of scale in military strength.
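A short numerical sketch makes the concentration principle concrete. The C++ program below integrates the square-law equations with simple Euler steps; the force sizes, effectiveness coefficients and time step are assumptions chosen only for illustration. With equal per-unit effectiveness, 1000 units facing 800 should finish with roughly sqrt(1000^2 - 800^2) = 600 survivors.

    #include <cmath>
    #include <cstdio>

    // Euler integration of Lanchester's square-law equations:
    //   dn1/dt = -c2 * n2,   dn2/dt = -c1 * n1
    int main() {
        double n1 = 1000.0, n2 = 800.0;   // numerical strengths (assumed)
        double c1 = 1.0,    c2 = 1.0;     // per-unit effectiveness (assumed)
        const double dt = 0.001;          // time step

        while (n1 > 0.0 && n2 > 0.0) {
            double loss1 = c2 * n2 * dt;  // casualties inflicted on force 1
            double loss2 = c1 * n1 * dt;  // casualties inflicted on force 2
            n1 -= loss1;
            n2 -= loss2;
        }

        std::printf("Force 1 survivors: %.0f, Force 2 survivors: %.0f\n",
                    n1 > 0.0 ? n1 : 0.0, n2 > 0.0 ? n2 : 0.0);
        return 0;
    }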
Lanchester illustrates the implications of this deduction by considering the case in which a machine gunner has the effectiveness of sixteen riflemen. He then asks how many machine gunners would be required to replace 1000 riflemen. By his calculation the number is 1000/√16 = 250 machine gunners, since 16·250² equals 1·1000².
Lanchester also considers alternative fighting conditions. Suppose firepower is directed at positions rather than at individual soldiers or other fighting units. The casualties would then be proportional to the density of the target force as well as the rate of fire. Thus,

dn1/dt = -c2·n2·(n1/a1)
dn2/dt = -c1·n1·(n2/a2)

where ai is the area over which force i is deployed.

The above two equations reduce to

(1/n1)(dn1/dt) = -c2·(n2/a1)
(1/n2)(dn2/dt) = -c1·(n1/a2)

Hence equality of strength exists when

c1·n1·a1 = c2·n2·a2
Thus an application of the previous analysis indicates that under these conditions the strength of a force is proportional to its numerical size rather than to the square of its numerical size. But the analysis also indicates that its strength is proportional to the area of its deployment.
In other conditions, such as the defense of a narrow pass, the strength of a force may have little to do with its numerical strength. This would be the situation in mountainous terrain.
In summary, Lanchester's analysis of warfare indicates that the strength of a fighting force is of the form

Strength = c·n^D

where D reflects the dimensionality of the fighting situation (in the area-fire case the strength is additionally proportional to the area of deployment, as noted above). For fighting units targeting fighting units, D = 2; for fighting units targeting areas, D = 1; and for a narrow pass, D = 0.
Learning to tie shoe laces is an important but difficult milestone for many children. For some of our clients, just the mention of shoe tying is enough to bring on tears. Here are some ways to help break down the process for your child.
- First of all, shoe tying should be practiced when there is time to practice. The last five minutes before the bus arrives is not the most opportune time for a child to feel focused and relaxed to attempt a new skill. Some families have found that sticking with slip-on or Velcro closure shoes for the school day and saving the lace up sneakers for afternoons and weekends works for them.
- Contrast laces can help your child differentiate which lace is which. Get a black lace and a white lace (or two of your child’s favorite colors) and cut each lace in half. Tie the two shortened laces together and lace up a sneaker. Now when your child is learning the motor plan of how to manipulate the laces, you can use directions such as “Make a loop with the black lace”, rather than diving into right vs. left.
- If your child is having trouble manipulating the laces due to decreased fine motor skills, try using a jump rope wrapped around their foot. The increased diameter of the rope makes it easier for you to fit your hands into the process and help your child. For some children, the simple fact that the rope is not a shoe helps to lessen hesitancy to try. Other children enjoy pretending to tie an elephant’s or a dinosaur’s shoe.
- Talk to your child’s OT about which method of tying laces may be best for your child. In general, children who struggle with motor planning may benefit from the “two bunny ear” approach, as the steps are repetitive. Children who have difficulty with fine motor dexterity or bilateral coordination may do better with the “one bunny ear” approach. Once you pick a method, stick with it for a good length of time so that your child settles into the consistency of the steps.
- A word about that “bunny ear”. Most children try to form a bunny ear, but end up with a small balloon-like loop at the very end of the lace; this results in the lace slipping through too far when trying to pull the laces tight at the end of the task. We have found that teaching the child to grasp the “middle of the lace”, then “bring the middle down to the bottom” results in a good-sized loop with enough extra lace at the end to allow for success when pulling the laces tight.
- Visuals are key! Try a book with step by step pictures, take a video of yourself tying shoes or try an app.
There are two apps we have found to be useful: Tie Your Shoes and Shoe Tying. Tie Your Shoes breaks the activity down into short steps and gives the option to have black and white laces, or two white laces. The app itself is easy to navigate and allows the user to repeat a step if needed. The video is narrated by a clown, however, even our older children at the clinic don’t seem to mind. (Note: if you are searching for Tie Your Shoes in the App Store, it is listed as an iPhone app, not an iPad app, but the video quality remains clear on an iPad.) Shoe Tying is a little harder to navigate and only shows white laces, but may be appropriate for an older child who finds Tie Your Shoes to be juvenile.
- Help your child gain confidence by asking them to do part of the task. Have your child complete just the first step (cross the laces), or just the last step (pull the loops tight), then work your way towards completing the whole activity. By celebrating the small successes, your child will gain interest and pride in their accomplishment. |
The peregrine falcon is the fastest animal on the planet. Its aerodynamic shape allows it to reach speeds of over 200 mph in a dive, called a stoop. The peregrine uses the stoop to catch its prey, other birds, in mid-flight.
Peregrine falcons were extirpated from most of the eastern half of the United States by the mid twentieth century. The reason for their decline was mostly the chemical insecticide DDT. DDT was originally used to kill mosquitoes and stop the spread of malaria in the 1950's. Later, in the 1960's, it was used as an insecticide on crops. The DDT worked its way up the food chain: peregrines eating passerines or other birds that had eaten an insect, seed or fruit covered with the poison would then have the poison in their system. The DDT would not kill the bird, but it did affect the eggs that the birds produced. The shells of the eggs of birds poisoned by DDT were so thin that when the 2 to 3 pound birds went to sit on the eggs to incubate them, most would break. This severely affected the number of new peregrines being hatched, and the overall population of the birds plummeted.
In 1972 DDT was banned in the United States. Later, groups like the Peregrine Fund, on the eastern coast of the United States, and the Midwest Peregrine Foundation began programs to release peregrine falcons back into areas where they had once existed. Here in Minnesota many of the birds were released from hack boxes on skyscrapers. Nesting platforms, like the one pictured above, were placed on buildings, bridges, smoke stacks and other tall man-made objects. This has helped the peregrine falcons in the area expand their range beyond the few natural cliff areas in Minnesota and increase their population beyond what it was prior to DDT. The peregrine was taken off the endangered species list in 1999. The birds pictured here were hatched from the nest box at the Ford Dam on the Mississippi River in 2010. |
the time of the Devonian high-oxygen peak but still in a period of dropping oxygen. This scenario fits the proposal that the times of low, or lowering oxygen, stimulated the most consequential evolutionary changes—the formation of new body plans, which the first tetrapod most assuredly was.
Most of our understanding about the transition from fish to amphibians comes from only a few localities, with the outcrops in Greenland being the most prolific in tetrapod remains. Although the genus Ichthyostega is given pride of place in most discussions of animal evolution as being first, actually a different genus, named Ventastega, was first, at about 363 million years ago, followed within several million years by a modest radiation that included Ichthyostega, Acanthostega, and Hynerpeton. Are these forms legged fish or fishy amphibians? They are certainly transitional and difficult to categorize. Of these, Ichthyostega is the most renowned. Its bones were first recovered in the 1930s, but they were fragmentary, and it was not until the 1950s that detailed examination led to a reconstruction of the entire skeleton. The animal certainly had well-developed legs, but it also had a fish-like tail. Nevertheless, the legs led to its coronation as the first four-legged land animal. It was only later that further study showed that this inhabitant from so long ago was probably incapable of walking on land. Newer studies of its foot and ankle seemed to suggest that it could not have supported its body without the flotation aid of being immersed in water.
The strata enclosing Ichthyostega and the other primitive tetrapods from Greenland came from a time interval soon after the devastating late Devonian mass extinction, whose cause was most certainly an atmospheric oxygen drop that created widespread anoxia in the seas. The appearance of Ichthyostega and its brethren may have been instigated by this extinction, since evolutionary novelty often follows mass extinction in response to filling empty ecological niches (the traditional view)—and since it was a time of lower oxygen (the view here). And, as postulated in this book, while periods of low oxygen seem to correlate well with times of low organism diversity, just the opposite seems true of the process bringing about radical breakthroughs in body plans: while times of low oxygen may have few spe- |
In all corners of the world, scientists are gathering data to study the effects that a warming climate has on the earth. But it’s the view from 13 satellites that circle far above the planet that holds some of the most promising potential in predicting those changes.
That was the message that Jack Kaye, associate director for research in NASA’s earth science division, gave to a group of students and faculty at the Dole Institute of Politics Wednesday afternoon. Kaye’s talk was part of a visit to Kansas University.
“One thing about satellites, it gives you access to remote and hostile areas,” Kaye said. “You want to know what is going on over the ocean, over the tropics, over deserts, over a volcano? Without satellites there is no way you are going to do that.”
For an hour, Kaye talked about the satellites that orbit the earth gathering information on the atmosphere, biosphere, seas, ice sheets and clouds. Satellites provide a more comprehensive and continental look at climate change, Kaye said.
Through satellites, scientists have gathered data that shows the amount of sea ice has been significantly reduced, that it is not as thick as it used to be and that there is less old sea ice compared with new sea ice.
“I am a firm believer that climate change is real. It is happening. The physics are fundamentally sound and the data record is enough that we can see things happening,” Kaye told the crowd. “I do believe the planet is getting warmer. When the planet gets warmer, ice is going to melt and sea level will rise.”
While the earth’s most dramatic changes have been found in the polar regions, Kaye said there have been other observations as well.
“We can see changes in biology, we can see changes in the ocean, we can see changes in atmospheric conditions,” Kaye said but noted it can be difficult to discern long-term changes in the midst of frequent short-term variations.
Along with monitoring ice sheets, NASA has been studying how aerosols — which can come from sources as diverse as the soot of fossil fuels, dust from the desert or volcanic ash — affect clouds and precipitation. In two weeks, NASA will launch its next satellite, Glory, which will have a major focus on studying aerosols.
Through satellites NASA has been able to track where ground water levels are dropping, which areas of the earth have a drop in photosynthesis production and where rapid urbanization has occurred.
“We are changing the surface of the earth. And you can see that from a satellite. This is being repeated all over the world. Without satellites it is very hard to see that picture,” Kaye said. |
There are several factors which directly or indirectly influence the growth and development of an organism. These are as follows:
(i) Heredity, (ii) Environment, (iii) Sex, (iv) Nutrition, (v) Races, (vi) Exercise, (vii) Hormones, (viii) Learning and Reinforcement.
Heredity is a biological process through which physical and social characteristics are transmitted from parents to offspring. It greatly influences different aspects of growth and development, i.e. height, weight and structure of the body, colour of hair and eyes, intelligence, aptitudes and instincts.
However, environment equally influences the above aspects in many cases. Biologically speaking, heredity is the sum total of traits potentially present in the fertilized ovum (the combination of sperm cell and egg cell), by which offspring resemble their parents and forebears.
Environment plays an important role in human life. Psychologically, a person's environment consists of the sum total of the stimulations (physical and psychological) which he receives from conception onward. There are different types of environment, such as the physical environment, the social environment and the psychological environment.
The physical environment consists of all outer physical surroundings, both animate and inanimate, which have to be manipulated in order to provide food, clothing and shelter. Geographical conditions, i.e. weather and climate, are part of the physical environment and have a considerable impact on the individual child.
The social environment is constituted by society: the individuals and institutions, social laws and customs by which human behavior is regulated.
The psychological environment is rooted in the individual's reactions to the objects and persons around him. Love, affection and fellow feeling strengthen the human bond with one another.
Thus growth and development are regulated by the environment in which an individual lives.
Sex acts as an important factor in growth and development. There are differences in the growth and development of boys and girls. Boys are in general taller and more courageous than girls, but girls show rapid physical growth in adolescence and surpass the boys. In general, the body constitution and structural growth of girls differ from those of boys. The functions of boys and girls are also different in nature.
Growth and development of the child depend mainly on food habits and nutrition. Malnutrition has an adverse effect on the structural and functional development of the child.
The racial factor has a great influence on height, weight, colour, features and body constitution. A child of a white race will be white and tall; even hair and eye colour and facial structure are governed by the same racial factor.
Exercise here does not mean physical exercise as a formal discipline. The functional activities of the child come within the fold of exercise of the body. We do not refer here to any law of growth through use or atrophy (the reverse of growth) through disuse.
The growth of muscles from the normal functioning of the child is a matter of common knowledge. It is a fact that repeated play and rest build the strength of the muscles. The increase in muscular strength is mainly due to better circulation and oxygen supply. The brain, like the muscles, develops by its own activity; play and other activities provide for this growth and the development of the various muscles. The child does not deliberately play or engage himself in various other functions with the knowledge that they will help him grow. This style of functioning of the child is only natural.
There are a number of endocrine glands inside the human body. Endocrine glands are ductless glands situated in specific parts of the body; they release their secretions internally, and these secretions produce one or more hormones.
Hormones are physiological substances having the power to raise or lower the activity level of the body or of certain organs of the body. For example, the pancreas secretes pancreatic juice, not into the blood, but into the intestine, where it acts upon food and plays an important part in the digestion of food. The pancreas also discharges into the blood a substance called insulin, which, being carried by the blood to the muscles, enables them to use sugar as a fuel and so adds to their strength. If the pancreas fails to produce these secretions, the organism lapses into unfavorable conditions of growth and development.
Similarly, the adrenal glands, which lie very close to the kidneys, secrete adrenaline, a very powerful hormone which is responsible for strong and rapid heart-beat and the release of stored sugar from the liver, and which helps control blood pressure. Gonads are glands which secrete hormones that have important effects on growth and sex behavior.
A balance of male hormones steers development in the direction of masculinity, and that of female hormones steers it toward femininity. At puberty, these sex hormones promote the development of the genital organs. Lacking the gonads, individuals of either sex develop into rather neutral specimens without strong sex characteristics. The pituitary is called the "master gland". It is attached to the underside of the brain, and its secretions control brain function and blood pressure. It also stimulates other glands such as the adrenals and the gonads. If this gland is over-active in childhood, the muscles and bones grow very rapidly and the individual may become a giant of seven to nine feet tall.
8. Learning and Reinforcement
Learning is the most important and fundamental topic in the whole science of psychology. Development consists of maturation and learning. Without any learning the human organism would remain a mere structure of limbs and internal organs, muscles and bones; maturation alone does not make it a human being.
Learning includes much more than school learning. Learning helps the human child in his physical, mental, emotional, intellectual, social and attitudinal development. All knowledge and skill, all habits good and bad, all acquaintance with people and things, and all attitudes built up in one's dealings with people and things have been learned.
Reinforcement is a factor in learning. Exercise or activity is necessary for learning. It may be a motor activity, as in playing a musical instrument, or a sensory activity, as in listening to a piece of music. Whatever the case, there must be activity in some form. "We learn by doing" is an old psychological proverb. It is now recognized that our activity should be repeated until we get the desired results, so the proverb should rather be, "We learn by doing and getting results."
What is Leukemia Cancer?
Leukemia is cancer of the blood cells. It starts in the bone marrow, the soft tissue inside most bones. Bone marrow is where blood cells are made.
- White blood cells help your body fight infection.
- Red blood cells carry oxygen to all parts of your body.
- Platelets help your blood clot.
When you have leukemia, the bone marrow starts to make a lot of abnormal white blood cells, called leukemia cells. They don't do the work of normal white blood cells. They grow faster than normal cells, and they don't stop growing when they should.
Over time, leukemia cells can crowd out the normal blood cells. This can lead to serious problems such as anemia, bleeding, and infections. Leukemia cells can also spread to the lymph nodes or other organs and cause swelling or pain.
There are several different types of leukemia. In general, leukemia is grouped by how fast it gets worse and what kind of white blood cell it affects.
The first type of classification is by how fast the leukemia progresses:
- Acute leukemia. In acute leukemia, the abnormal blood cells are immature blood cells (blasts). They can't carry out their normal work, and they multiply rapidly, so the disease worsens quickly. Acute leukemia requires aggressive, timely treatment.
- Chronic leukemia. This type of leukemia involves more mature blood cells. These blood cells replicate or accumulate more slowly and can function normally for a period of time. Some forms of chronic leukemia initially produce no early symptoms and can go unnoticed or undiagnosed for years.
The second type of classification is by type of white blood cell affected:
- Lymphocytic leukemia. This type of leukemia affects the lymphoid cells (lymphocytes), which form lymphoid or lymphatic tissue. Lymphatic tissue makes up your immune system.
- Myelogenous leukemia. This type of leukemia affects the myeloid cells. Myeloid cells give rise to red blood cells, white blood cells and platelet-producing cells. |
Natural gas, as it is used by consumers, is much different from the natural gas
that is brought from underground up to the wellhead. Although the processing of natural gas is in many respects
less complicated than the processing and refining of crude oil, it is equally necessary before its use by end users.
The natural gas used by consumers is composed almost entirely of methane. However, natural gas found at the wellhead,
although still composed primarily of methane, is by no means as pure. Raw natural gas comes from three types of
wells: oil wells, gas wells, and condensate wells. Natural gas that comes from oil wells is typically termed 'associated
gas'. This gas can exist separate from oil in the formation (free gas), or dissolved in the crude oil (dissolved
gas). Natural gas from gas and condensate wells, in which there is little or no crude oil, is termed 'nonassociated
gas'. Gas wells typically produce raw natural gas by itself, while condensate wells produce free natural gas along
with a semi-liquid hydrocarbon condensate. Whatever the source of the natural gas, once separated from crude oil
(if present) it commonly exists in mixtures with other hydrocarbons; principally ethane, propane, butane, and pentanes.
In addition, raw natural gas contains water vapor, hydrogen sulfide (H2S), carbon dioxide, helium, nitrogen, and other compounds.
Natural gas processing consists of separating all of the various hydrocarbons and fluids from the pure natural
gas, to produce what is known as 'pipeline quality' dry natural gas. Major transportation pipelines usually impose
restrictions on the make-up of the natural gas that is allowed into the pipeline. That means that before the natural
gas can be transported it must be purified. While the ethane, propane, butane, and pentanes must be removed from
natural gas, this does not mean that they are all 'waste products'.
In fact, associated hydrocarbons, known as 'natural gas liquids' (NGLs) can be very valuable by-products of natural
gas processing. NGLs include ethane, propane, butane, iso-butane, and natural gasoline. These NGLs are sold separately
and have a variety of different uses; including enhancing oil recovery in oil wells, providing raw materials for
oil refineries or petrochemical plants, and as sources of energy.
While some of the needed processing can be accomplished at or near the wellhead (field processing), the complete
processing of natural gas takes place at a processing plant, usually located in a natural gas producing region.
The extracted natural gas is transported to these processing plants through a network of gathering pipelines, which
are small-diameter, low pressure pipes. A complex gathering system can consist of thousands of miles of pipes,
interconnecting the processing plant to upwards of 100 wells in the area. According to the American Gas Association's
Gas Facts 2000, there were an estimated 36,100 miles of gathering system pipelines in the U.S. in 1999.
In addition to processing done at the wellhead and at centralized processing plants, some final processing is also
sometimes accomplished at 'straddle extraction plants'. These plants are located on major pipeline systems. Although
the natural gas that arrives at these straddle extraction plants is already of pipeline quality, in certain instances
there still exist small quantities of NGLs, which are extracted at the straddle plants.
The actual practice of processing natural gas to pipeline dry gas quality levels can be quite complex, but usually
involves four main processes to remove the various impurities:
- Oil and condensate removal
- Water removal (dehydration)
- Separation of natural gas liquids (NGLs)
- Sulfur and carbon dioxide removal
In addition to the four processes above, heaters and scrubbers are installed, usually at or near the wellhead.
The scrubbers serve primarily to remove sand and other large-particle impurities. The heaters ensure that the temperature
of the gas does not drop too low. With natural gas that contains even low quantities of water, natural gas hydrates
have a tendency to form when temperatures drop. These hydrates are solid or semi-solid compounds resembling ice-like
crystals. Should these hydrates accumulate, they can impede the passage of natural gas through valves and
gathering systems. To reduce the occurrence of hydrates, small natural gas-fired heating units are typically installed
along the gathering pipe wherever it is likely that hydrates may form.
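As a rough illustration of that decision, the Python sketch below compares the gathering-line gas temperature against an assumed hydrate-formation temperature. The threshold and temperatures are hypothetical example values; the real hydrate point depends on gas composition, pressure, and water content.

def heater_needed(gas_temp_f: float, hydrate_temp_f: float = 65.0,
                  water_present: bool = True) -> bool:
    """True if the gathering-line gas risks forming hydrates and should be heated."""
    return water_present and gas_temp_f <= hydrate_temp_f

print(heater_needed(55.0))   # True  -> fire the wellhead heating unit
print(heater_needed(80.0))   # False -> no heating required at this temperature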
Oil and Condensate Removal
In order to process and transport associated dissolved natural gas, it must be separated from the oil in which
it is dissolved. This separation of natural gas from oil is most often done using equipment installed at or near the wellhead.
The actual process used to separate oil from natural gas, as well as the equipment that is used, can vary widely.
Although dry pipeline quality natural gas is virtually identical across different geographic areas, raw natural
gas from different regions may have different compositions and separation requirements. In many instances, natural
gas is dissolved in oil underground primarily due to the pressure that the formation is under. When this natural
gas and oil is produced, it is possible that it will separate on its own, simply due to decreased pressure; much
like opening a can of soda pop allows the release of dissolved carbon dioxide. In these cases, separation of oil
and gas is relatively easy, and the two hydrocarbons are sent separate ways for further processing. The most basic
type of separator is known as a conventional separator. It consists of a simple closed tank, where the force of
gravity serves to separate the heavier liquids like oil, and the lighter gases, like natural gas.
In certain instances, however, specialized equipment is necessary to separate oil and natural gas. An example of
this type of equipment is the Low-Temperature Separator (LTX). This is most often used for wells producing high
pressure gas along with light crude oil or condensate. These separators use pressure differentials to cool the
wet natural gas and separate the oil and condensate. Wet gas enters the separator, being cooled slightly by a heat
exchanger. The gas then travels through a high pressure liquid 'knockout', which serves to remove any liquids into
a low-temperature separator. The gas then flows into this low-temperature separator through a choke mechanism,
which expands the gas as it enters the separator. This rapid expansion of the gas allows for the lowering of the
temperature in the separator. After liquid removal, the dry gas then travels back through the heat exchanger and
is warmed by the incoming wet gas. By varying the pressure of the gas in various sections of the separator, it
is possible to vary the temperature, which causes the oil and some water to be condensed out of the wet gas stream.
This basic pressure-temperature relationship can work in reverse as well, to extract gas from a liquid oil stream.
Water Removal
In addition to separating oil and some condensate from the wet gas stream, it is necessary to remove most of the
associated water. Most of the liquid, free water associated with extracted natural gas is removed by simple separation
methods at or near the wellhead. However, the removal of the water vapor that exists in solution in natural gas
requires a more complex treatment. This treatment consists of 'dehydrating' the natural gas, which usually involves
one of two processes: either absorption, or adsorption.
Absorption occurs when the water vapor is taken out by a dehydrating agent. Adsorption occurs when the water vapor
is condensed and collected on the surface.
An example of absorption dehydration is known as Glycol Dehydration. In this process, a liquid desiccant dehydrator
serves to absorb water vapor from the gas stream. Glycol, the principal agent in this process, has a chemical affinity
for water. This means that, when in contact with a stream of natural gas that contains water, glycol will serve
to 'steal' the water out of the gas stream. Essentially, glycol dehydration involves using a glycol solution, usually
either diethylene glycol (DEG) or triethylene glycol (TEG), which is brought into contact with the wet gas stream
in what is called the 'contactor'. The glycol solution will absorb water from the wet gas. Once absorbed, the glycol
particles become heavier and sink to the bottom of the contactor where they are removed. The natural gas, having
been stripped of most of its water content, is then transported out of the dehydrator. The glycol solution, bearing
all of the water stripped from the natural gas, is put through a specialized boiler designed to vaporize only the
water out of the solution. While water has a boiling point of 212 degrees Fahrenheit, glycol does not boil until
400 degrees Fahrenheit. This boiling point differential makes it relatively easy to remove water from the glycol
solution, allowing it to be reused in the dehydration process.
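The arithmetic behind the regeneration step is simple, and the Python sketch below only restates the boiling-point figures quoted above; the chosen reboiler temperature is a hypothetical example, not a design value.

WATER_BP_F = 212.0     # boiling point of water, as quoted above
GLYCOL_BP_F = 400.0    # approximate boiling point of glycol, as quoted above

def regenerates_glycol(reboiler_temp_f: float) -> bool:
    """True if the boiler drives off the absorbed water while keeping the glycol liquid."""
    return WATER_BP_F < reboiler_temp_f < GLYCOL_BP_F

print(regenerates_glycol(350.0))   # True: water vaporizes, glycol is retained for reuse
print(regenerates_glycol(450.0))   # False: the glycol itself would boil off as well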
A new innovation in this process has been the addition of flash tank separator-condensers. As well as absorbing
water from the wet gas stream, the glycol solution occasionally carries with it small amounts of methane and other
compounds found in the wet gas. In the past, this methane was simply vented out of the boiler. In addition to losing
a portion of the natural gas that was extracted, this venting contributes to air pollution and the greenhouse effect.
In order to decrease the amount of methane and other compounds that are lost, flash tank separator-condensers work
to remove these compounds before the glycol solution reaches the boiler. Essentially, a flash tank separator consists
of a device that reduces the pressure of the glycol solution stream, allowing the methane and other hydrocarbons
to vaporize ('flash'). The glycol solution then travels to the boiler, which may also be fitted with air or water
cooled condensers, which serve to capture any remaining organic compounds that may remain in the glycol solution.
In practice, according to the Department
of Energy's Office of Fossil Energy, these systems have been shown to
recover 90 to 99 percent of methane that would otherwise be flared into the atmosphere.
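As a back-of-the-envelope illustration of those recovery figures, the sketch below splits an assumed amount of glycol-entrained methane into recovered and vented portions; the daily carry-over volume is a made-up example.

def split_entrained_methane(carryover_mcf_per_day: float, recovery: float):
    """Split glycol-entrained methane into recovered and vented portions."""
    recovered = carryover_mcf_per_day * recovery
    return recovered, carryover_mcf_per_day - recovered

for rate in (0.90, 0.99):                              # the 90-99 percent range quoted above
    rec, vent = split_entrained_methane(10.0, rate)    # assumed 10 Mcf/day of carry-over
    print(f"recovery {rate:.0%}: {rec:.1f} Mcf recovered, {vent:.1f} Mcf vented")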
Solid-desiccant dehydration is the primary form of dehydrating natural gas using adsorption, and usually consists
of two or more adsorption towers, which are filled with a solid desiccant. Typical desiccants include activated
alumina or a granular silica gel material. Wet natural gas is passed through these towers, from top to bottom.
As the wet gas passes around the particles of desiccant material, water is retained on the surface of these desiccant
particles. Passing through the entire desiccant bed, almost all of the water is adsorbed onto the desiccant material,
leaving the dry gas to exit the bottom of the tower.
Solid-desiccant dehydrators are typically more effective than glycol dehydrators, and are usually installed as
a type of straddle system along natural gas pipelines. These types of dehydration systems are best suited for large
volumes of gas under very high pressure, and are thus usually located on a pipeline downstream of a compressor
station. Two or more towers are required due to the fact that after a certain period of use, the desiccant in a
particular tower becomes saturated with water. To 'regenerate' the desiccant, a high-temperature heater is used
to heat gas to a very high temperature. Passing this heated gas through a saturated desiccant bed vaporizes the
water in the desiccant tower, leaving it dry and allowing for further natural gas dehydration.
Separation of Natural Gas Liquids
Natural gas coming directly from a well contains many natural gas liquids that are commonly removed. In most instances,
natural gas liquids (NGLs) have a higher value as separate products, and it is thus economical to remove them from
the gas stream. The removal of natural gas liquids usually takes place in a relatively centralized processing plant,
and uses techniques similar to those used to dehydrate natural gas.
There are two basic steps to the treatment of natural gas liquids in the natural gas stream. First, the liquids
must be extracted from the natural gas. Second, these natural gas liquids must be separated themselves, down to
their base components.
There are two principal techniques for removing NGLs from the natural gas stream: the absorption method and the
cryogenic expander process. According to the Gas Processors Association, these two processes
account for around 90 percent of total natural gas liquids production.
The Absorption Method
The absorption method of NGL extraction is very similar to using absorption for dehydration. The main difference
is that, in NGL absorption, an absorbing oil is used as opposed to glycol. This absorbing oil has an 'affinity'
for NGLs in much the same manner as glycol has an affinity for water. Before the oil has picked up any NGLs, it
is termed 'lean' absorption oil. As the natural gas is passed through an absorption tower, it is brought into contact
with the absorption oil which soaks up a high proportion of the NGLs. The 'rich' absorption oil, now containing
NGLs, exits the absorption tower through the bottom. It is now a mixture of absorption oil, propane, butanes, pentanes,
and other heavier hydrocarbons. The rich oil is fed into lean oil stills, where the mixture is heated to a temperature
above the boiling point of the NGLs, but below that of the oil. This process allows for the recovery of around
75 percent of butanes, and 85 - 90 percent of pentanes and heavier molecules from the natural gas stream.
The basic absorption process above can be modified to improve its effectiveness, or to target the extraction of
specific NGLs. In the refrigerated oil absorption method, where the lean oil is cooled through refrigeration, propane
recovery can be upwards of 90 percent, and around 40 percent of ethane can be extracted from the natural gas stream.
Extraction of the other, heavier NGLs can be close to 100 percent using this process.
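To make those percentages concrete, the sketch below applies the quoted recovery rates to a hypothetical inlet NGL mix; the feed volumes are assumptions for illustration only, and 0.98 merely stands in for the "close to 100 percent" figure for the heavier NGLs.

recovery = {"ethane": 0.40, "propane": 0.90, "heavier NGLs": 0.98}
inlet_gal = {"ethane": 1000.0, "propane": 600.0, "heavier NGLs": 400.0}   # assumed feed, gallons

recovered = {ngl: inlet_gal[ngl] * recovery[ngl] for ngl in inlet_gal}
print(recovered)   # gallons of each NGL pulled out of the gas stream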
The Cryogenic Expansion Process
Cryogenic processes are also used to extract NGLs from natural gas. While absorption methods can extract almost
all of the heavier NGLs, the lighter hydrocarbons, such as ethane, are often more difficult to recover from the
natural gas stream. In certain instances, it is economic to simply leave the lighter NGLs in the natural gas stream.
However, if it is economic to extract ethane and other lighter hydrocarbons, cryogenic processes are required for
high recovery rates. Essentially, cryogenic processes consist of dropping the temperature of the gas stream to
around -120 degrees Fahrenheit.
There are a number of different ways of chilling the gas to these temperatures, but one of the most effective is
known as the turbo expander process. In this process, external refrigerants are used to cool the natural gas stream.
Then, an expansion turbine is used to rapidly expand the chilled gases, which causes the temperature to drop significantly.
This rapid temperature drop condenses ethane and other hydrocarbons in the gas stream, while maintaining methane
in gaseous form. This process allows for the recovery of about 90 to 95 percent of the ethane originally in the
gas stream. In addition, the expansion turbine is able to convert some of the energy released when the natural
gas stream is expanded into recompressing the gaseous methane effluent, thus saving energy costs associated with recompression.
The extraction of NGLs from the natural gas stream produces both cleaner, purer natural gas and the valuable
hydrocarbons that are the NGLs themselves.
Natural Gas Liquid Fractionation
Once NGLs have been removed from the natural gas stream, they must be broken down into their base components to
be useful. That is, the mixed stream of different NGLs must be separated out. The process used to accomplish this
task is called fractionation. Fractionation works based on the
different boiling points of the different hydrocarbons in the NGL stream. Essentially, fractionation occurs in
stages consisting of the boiling off of hydrocarbons one by one. The name of a particular fractionator gives an
idea as to its purpose, as it is conventionally named for the hydrocarbon that is boiled off. The entire fractionation
process is broken down into steps, starting with the removal of the lighter NGLs from the stream. The particular
fractionators are used in the following order:
- Deethanizer - this step separates the ethane from the NGL stream.
- Depropanizer - the next step separates the propane from the NGL stream.
- Debutanizer - this step boils off the butanes, leaving the pentanes and heavier hydrocarbons in the NGL stream.
- Butane Splitter or Deisobutanizer - this step separates the iso and normal butanes.
By proceeding from the lightest hydrocarbons to the heaviest, it is possible to
separate the different NGLs reasonably easily.
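The sequential nature of the train can be sketched in a few lines of Python; the feed composition below is a hypothetical example, and the model ignores the imperfect recoveries discussed earlier.

# Hypothetical NGL feed, in gallons, entering the fractionation train.
ngl_stream = {"ethane": 500.0, "propane": 400.0, "butanes": 300.0, "pentanes+": 200.0}

# Each fractionator boils off the lightest remaining hydrocarbon and passes the rest
# downstream; the butane splitter (not modelled here) would then separate the
# iso- and normal butanes taken overhead by the debutanizer.
train = [
    ("deethanizer", "ethane"),
    ("depropanizer", "propane"),
    ("debutanizer", "butanes"),
]

for column, component in train:
    overhead = ngl_stream.pop(component)
    print(f"{column}: boils off {overhead:.0f} gal of {component}")

print("remaining bottoms:", ngl_stream)   # pentanes and heavier hydrocarbons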
Sulfur and Carbon Dioxide Removal
In addition to water, oil, and NGL removal, one of the most important parts of gas processing involves the removal
of sulfur and carbon dioxide. Natural gas from some wells contains significant amounts of sulfur and carbon dioxide.
This natural gas, because of the rotten smell provided by its sulfur content, is commonly called 'sour gas'. Sour
gas is undesirable because the sulfur compounds it contains can be extremely harmful, even lethal, to breathe.
Sour gas can also be extremely corrosive. In addition, the sulfur that exists in the natural gas stream can be
extracted and marketed on its own. In fact, according to the USGS, U.S. sulfur production from gas processing plants
accounts for about 15 percent of the total U.S. production of sulfur.
Sulfur exists in natural gas as hydrogen sulfide (H2S), and the gas is usually considered sour if the hydrogen
sulfide content exceeds 5.7 milligrams of H2S per cubic meter of natural gas. The process for removing hydrogen
sulfide from sour gas is commonly referred to as 'sweetening' the gas.
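A trivial sketch of that classification rule, using the threshold quoted above:

SOUR_LIMIT_MG_PER_M3 = 5.7   # hydrogen sulfide threshold quoted above

def is_sour(h2s_mg_per_m3: float) -> bool:
    """Classify a gas stream as sour if its H2S content exceeds the threshold."""
    return h2s_mg_per_m3 > SOUR_LIMIT_MG_PER_M3

print(is_sour(4.0))     # False -> sweet gas
print(is_sour(150.0))   # True  -> must be sweetened before pipeline entry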
The primary process for sweetening sour natural gas is quite similar to the processes of glycol dehydration and
NGL absorption. In this case, however, amine solutions are used to remove the hydrogen sulfide. This process is
known simply as the 'amine process', or alternatively as the Girdler process, and is used in 95 percent of U.S.
gas sweetening operations. The sour gas is run through a tower, which contains the amine solution. This solution
has an affinity for sulfur, and absorbs it much like glycol absorbing water. There are two principal amine solutions
used, monoethanolamine (MEA) and diethanolamine (DEA). Either of these compounds, in liquid form, will absorb sulfur
compounds from natural gas as it passes through. The effluent gas is virtually free of sulfur compounds, and thus
loses its sour gas status. Like the process for NGL extraction and glycol dehydration, the amine solution used
can be regenerated (that is, the absorbed sulfur is removed), allowing it to be reused to treat more sour gas.
Although most sour gas sweetening involves the amine absorption process, it is also possible to use solid desiccants
like iron sponges to remove the sulfide and carbon dioxide.
Sulfur can be sold and used if reduced to its elemental form. Elemental sulfur is a bright yellow, powder-like material,
and can often be seen in large piles near gas treatment plants. In order to recover elemental sulfur
from the gas processing plant, the sulfur containing discharge from a gas sweetening process must be further treated.
The process used to recover sulfur is known as the Claus process, and involves
using thermal and catalytic reactions to extract the elemental sulfur from the hydrogen sulfide solution.
In all, the Claus process is usually able to recover 97 percent of the sulfur that has been removed from the natural
gas stream. Since it is such a polluting and harmful substance, further filtering, incineration, and 'tail gas'
clean up efforts ensure that well over 98 percent of the sulfur is recovered.
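The sketch below shows the arithmetic behind those figures; the tail-gas unit's efficiency is an assumed value chosen only to illustrate how overall recovery climbs above 98 percent.

sulfur_in = 100.0                     # tons of sulfur entering the recovery unit (example)
claus_recovered = 0.97 * sulfur_in    # the 97 percent Claus figure quoted above
remainder = sulfur_in - claus_recovered

tail_gas_efficiency = 0.5             # assumed efficiency of the tail-gas clean-up
tail_gas_recovered = tail_gas_efficiency * remainder

overall = (claus_recovered + tail_gas_recovered) / sulfur_in
print(f"overall sulfur recovery: {overall:.1%}")    # 98.5 % with these assumptions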
Gas processing is an essential piece of the natural gas value chain. It is instrumental in ensuring that the
natural gas intended for use is as clean and pure as possible, making it the clean burning and environmentally
sound energy choice. Once the natural gas has been fully processed, and is ready to be consumed, it must be transported from those areas that produce natural gas, to those areas that require it. |
This evening’s main talk, “The History of the Calendar“, was given by Keith Brackenborough. The main purpose of a calendar is to keep track of the days, and make the business of planning, and recording the passage of the years easier. Of course the Earth and Moon do not make this an easy task. The Earth’s most obvious unit of time is the solar day, but it also orbits the Sun in one tropical year, which is, of course, not a whole number of days long. The Moon orbits the Earth, and of course its period is neither a whole number of days nor a whole fraction of a tropical year. Throughout recorded history people have used calendars which took account of the various differences more or less successfully. Keith’s talk covered many of the schemes that have been tried, from the relatively simple Lunar ones to schemes involving cycles many years long. He gave particular emphasis to the development of the calendar used by most of the world today, from its roots in Julius Caesar’s ideas for bringing order to Rome’s chaotic calendar, through Augustus’ tinkering with the lengths of the months, the Council of Nicea’s adjustments for determining the date of Easter, and Pope Gregory’s reforms to account for the slight inaccuracies in Julius Caesar’s original scheme. He told us about the way the Gregorian calendar was adopted slowly over the centuries by more countries, so that it is now the most common calendar, and should be adequate for at least another 2000 years. |
Have you ever wondered if there’s an easy way to swap cells in Excel?
Well, you’re in luck!
This article is all about swapping cells in Excel without any hassle or complicated methods.
We’ll walk you through the process step-by-step, making it as simple as a drag and drop.
This tutorial shows three methods for swapping cells in Excel.
Method #1: Use Drag and Drop to Swap Adjacent Cells in Excel
This is the best way to swap two cells (especially if you’re trying to swap two adjacent cells).
In the below dataset, values in cells A4 and B4 have been interchanged due to a data entry error.
We can use the drag and drop to swap cells A4 and B4 and correct the data entry error.
Below are the steps to do this:
- Select cell A4, point to the cell’s right border, and notice the four-headed arrow.
- Press and hold down the “Shift” key, drag to the right border of cell B4, and notice a display of a thick green bracket icon ( 工). While holding the “Shift” key, release the mouse button.
The cells A4 and B4 are swapped as shown below:
Note: You can also use this method to swap adjacent rows or columns.
Also read: How to Select Visible Cells Only in Excel?
Method #2: Use Cut and Insert Cut Cells to Swap Adjacent Cells in Excel
Let’s consider the below dataset where values in cells A4 and B4 have been interchanged due to a data entry error.
We can use the “Cut” and “Insert Cut Cells” commands to swap cells A4 and B4 and correct the data entry error.
Below are the steps to do this:
- Right-click cell B4 and choose “Cut” on the context menu.
Alternatively, select cell B4 and press Ctrl + X.
- Right-click cell A4 and choose “Insert Cut Cells” on the shortcut menu.
Alternatively, select cell A4 and press the Ctrl + Shift + “+” shortcut.
Cells A4 and B4 are swapped as shown below:
Note: You can also use this method to swap only adjacent columns or rows.
Also read: How to Rearrange Rows In Excel
Method #3: Use Excel VBA Code to Swap Non-Adjacent Cells in Excel
Suppose we want to swap the columns for “Capital City” and “State” in the dataset below to arrange the data in a more logical order.
We can use Excel VBA code to accomplish the task.
Below are the steps to do this:
- Press Alt + F11 to open the Visual Basic Editor.
- Open the “Insert” menu and choose “Module” to insert a module.
- Copy the following code and paste it into the module:
Sub Swap2NonAdjacentRanges()
    Dim Rng1 As Range
    Dim Rng2 As Range
    Dim arr1 As Variant
    Dim arr2 As Variant
    Set Rng1 = Application.Selection
    Set Rng1 = Application.InputBox("Range1:", , Rng1.Address, Type:=8)
    Set Rng2 = Application.InputBox("Range2:", , Type:=8)
    Application.ScreenUpdating = False
    arr1 = Rng1.Value
    arr2 = Rng2.Value
    Rng1.Value = arr2
    Rng2.Value = arr1
    Application.ScreenUpdating = True
End Sub
- Save the file as an Excel Macro-Enabled Workbook.
- Press Alt + F11 to switch to the active worksheet containing the dataset.
- Select the first cell range that you want to be swapped. In this example, we select the cell range A1:A6.
- Press Alt + F8 to activate the “Macro” dialog box, select the “Swap2NonAdjacentRanges” macro on the “Macro name” list box, and click “Run.”
- Click “OK” on the first “Input” dialog box, displaying the reference of the first cell range we want to be swapped.
- Select the second cell range that you want to be swapped. In this example, we select the cell range C1:C6. Notice that the cell reference is entered in the second “Input” dialog box.
- Click “OK” on the second “Input” dialog box.
The “Capital City” and “State” cell ranges are interchanged as shown below:
Note: Always keep the target ranges equal in size when using this method.
Explanation of the Code
Let’s go over the code step by step to understand how it performs the swap operation:
1. Variable Declarations:
- ‘Dim Rng1 As Range’ and ‘Dim Rng2 As Range’: Declare two variables, “Rng1” for the first data range and “Rng2” for the second cell range.
- ‘Dim arr1 As Variant’ and ‘Dim arr2 As Variant’: Declare two array variables named “arr1” and “arr2” to store values from their respective ranges. The variables are of “Variant” data type, so they can handle data of unknown or mixed types.
2. Range Selection:
- ‘Set Rng1 = Application.Selection’: Sets the “Rng1” variable to the currently selected range, allowing the user to choose a range before running the macro.
- ‘Set Rng1 = Application.InputBox(“Range1:”, , Rng1.Address, Type:=8)’: Displays an input box to the user with the prompt “Range1:” and initially sets the input value to the address of “Rng1.” This line allows the user to change or specify the first range by manually entering or selecting it. The “Type:=8” parameter indicates that the input should be treated as a range.
- ‘Set Rng2 = Application.InputBox(“Range2:”, , Type:=8)’: Displays another input box to the user with the prompt “Range2:” and allows the user to specify the second range by manually entering or selecting it. The “Type:=8” parameter indicates that the input should be treated as a range.
3. Turning off Screen Updating:
- ‘Application.ScreenUpdating = False’: Disables screen updating to prevent flickering and improve performance while the macro executes.
4. Value Assignment:
- ‘arr1 = Rng1.Value’: Stores the values from the first range (“Rng1”) in the “arr1” array variable.
- ‘arr2 = Rng2.Value’: Stores the values from the second range (“Rng2”) in the “arr2” array variable.
5. Swapping Values:
- ‘Rng1.Value = arr2’: Assigns the values from “arr2” (second range) to the cells in “Rng1” (first range), effectively swapping the values.
- ‘Rng2.Value = arr1’: Assigns the values from “arr1” (first range) to the cells in “Rng2” (second range), completing the swap operation.
6. Turning on Screen Updating:
- ‘Application.ScreenUpdating = True’: Re-enables screen updating, allowing Excel to display changes made by the macro.
This tutorial showed three methods for swapping cells in Excel. We hope you found the tutorial helpful.
Also read: How to Swap Columns in Excel?
Situations where Swapping Cells in Excel is Useful
Swapping cells in Excel can be helpful when you need to rearrange or reorganize data.
Here are four scenarios where you may need to swap cells:
- To correct data entry errors: In case of data entry mistakes where you must interchange the content of two cells to rectify the error, swapping cells can be a convenient solution that saves time and effort.
- To put data in desired or more logical order: When importing data from external sources, the structure or order may not be as expected. Swapping columns or rows can help you rearrange the data to match your needs.
- To change priorities or categories: In the case of managing projects or analyzing data, you can modify the priority level of a task by switching cells from “High” to “Low.” You can also delegate a project to a different team or category or rearrange the items on a to-do list according to changing priorities.
- Comparing Data: If you want to compare two sets of data side by side, you may need to swap rows or columns to align corresponding data points. This process is often necessary when reconciling data.
Instead of cutting and pasting or retyping the values, various methods for swapping cells can save you time and make the process more efficient.
Other articles you may also like: |
Though each wind power plant is designed and optimized for the conditions prevailing at its installation site, every plant needs some form of wind power plant control, and each nonetheless fits into one of three main concepts, outlined below.
These individual concepts are described in more detail next.
Constant-speed wind power plant
Asynchronous generators connected directly to the power supply system were common particularly in the early stages of electricity generation using wind power plants. In combination with stall-controlled, three-vane rotors on Danish wind power plants, asynchronous generators were the most widely used electrical concept, especially in the case of small facilities with capacities in the kilowatt range. The squirrel-cage, asynchronous generators forming part of such systems require little maintenance and are relatively economical. Furthermore, they do not require complex vane pitch control. This design is also known as the Danish concept.
Wind usually impinges at a variety of speeds on a wind power plant. In order to utilize the wind power associated with each speed as efficiently as possible, modern wind power plants are equipped with a power control system that can be considered to encompass the rotor and generator. Constant-speed and variable-speed power control systems are available.
In the case of a constant-speed system, the rotor vanes usually have a fixed pitch, though some constant-speed systems also have a variable vane pitch. Moreover, the (asynchronous) generator driven by the rotor is coupled directly with the power grid.
Power control is performed as described next. From a certain wind speed and, consequently, power (rated power) onward, the air flow impinging on the rotor vanes is disrupted, this effect being termed “stall”. This type of power limitation is therefore also termed stall control. This principle is described in detail on the page titled “Stall” in the chapter titled “Physical principles”.
The generator supplies an alternating current which needs to have the same frequency as the grid current, otherwise disruptions would occur in the power grid or wind power plant. The grid frequency in Europe is 50 Hz. Other regions (e.g. USA) employ a grid frequency of 60 Hz.
In the case of a constant-speed wind power plant, the frequency of the current supplied by the generator depends directly on the rotor speed. If adverse wind conditions prevent the wind power plant from maintaining this frequency, the network is decoupled. Once the rated frequency can be delivered again, the wind power plant is re-connected “softly” to the network, e.g. via a thyristor controller which acts like a dimmer and prevents undesired surges during circuit entry.
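The tie between grid frequency and rotor speed can be illustrated with the standard synchronous-speed relation; the pole count and gearbox ratio in the sketch below are hypothetical example values, not figures from any particular plant.

def synchronous_speed_rpm(grid_freq_hz: float, pole_pairs: int) -> float:
    """Synchronous speed of a directly coupled generator, in rpm."""
    return 60.0 * grid_freq_hz / pole_pairs

gen_rpm_50hz = synchronous_speed_rpm(50.0, 2)   # 1500 rpm on a 50 Hz grid
gen_rpm_60hz = synchronous_speed_rpm(60.0, 2)   # 1800 rpm on a 60 Hz grid

GEARBOX_RATIO = 75.0                            # assumed ratio between rotor and generator
print(gen_rpm_50hz / GEARBOX_RATIO)             # rotor effectively locked near 20 rpm
print(gen_rpm_60hz / GEARBOX_RATIO)             # or near 24 rpm on a 60 Hz grid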
Variable-speed wind power plant
Dynamic loads can only be reduced by means of a variable speed range for the rotor vis-à-vis the grid frequency. In the case of a synchronous generator with full feed (described below), all of the generated electrical power must pass through the frequency converter.
By contrast, an asynchronous generator only requires part of the generated electrical power to be converted by the frequency converter. The asynchronous generator’s slip is used for this purpose: in the case of an intentionally high slip value, lost energy (slip power) is fed back to the stator power flow via suitable converters. In this case, a squirrel-cage rotor is no longer suitable for the asynchronous generator; a rotor whose windings are routed out via slip rings is required instead.
Synchronous generator with full feed
Variable-speed operation by a wind power plant incorporating a synchronous generator can be achieved by means of a frequency converter with a DC link. The variable-frequency alternating current produced by the generator is rectified before being fed via an inverter to the power grid.
Asynchronous generator with double feed
Variable-speed operation of a wind power plant incorporating a doubly-fed asynchronous generator is achieved by routing the rotor windings out via slip rings and connecting them to the grid through converters, as described in more detail under wind power plant control below.
Wind Power Plant Control
Variable-speed systems have established themselves in modern wind turbines. Both under partial load and full load, the rotor blades’ angle can be adjusted by means of a special mechanism in accordance with wind speed and generator power, and thereby aligned nearly ideally into the wind. This kind of mechanism is referred to as pitch control.
The generators forming part of such wind turbines are coupled to the electricity grid not directly, but via an additional component: a frequency converter.
This enables a positive utilization of wide fluctuations in wind speed.
The rotor’s magnetic field serves to couple the rotor to the stator which is connected to the grid. This coupling depends on the rotor currents. The slip control diverts a portion of these currents via resistors, thereby weakening the magnetic field and coupling, and increasing the machine’s slip. Wind power plants employ this mechanism to offset gusts of wind. If a gust impinges on the rotor, its torque increases sharply, thereby also tending to raise the system’s power very quickly.
To give the pitch control time to readjust the blade angle, the generator’s slip is increased to up to 10%. While outputting a constant power level to the grid, the system accelerates and part of its excess energy is stored as rotational energy by the rotor and drive train. As the wind speed decreases again, the slip is reduced and the drive train slows down as a result. In this process, the stored energy is fed into the grid too. This smoothens the plant’s power output characteristic. The increased slip reduces the generator’s efficiency and also causes the involved resistors to produce a lot of heat, thus necessitating effective cooling in this operating mode.
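A minimal sketch of the slip figure referred to above; the synchronous speed and rotor speeds are example values only (in generating operation the rotor runs above synchronous speed, so the slip comes out negative).

def slip(sync_rpm: float, rotor_rpm: float) -> float:
    """Generator slip as a fraction of synchronous speed."""
    return (sync_rpm - rotor_rpm) / sync_rpm

SYNC_RPM = 1500.0                                # e.g. a 4-pole machine on a 50 Hz grid

print(f"{slip(SYNC_RPM, 1515.0):+.2%}")          # about -1 %: normal generating operation
print(f"{slip(SYNC_RPM, 1650.0):+.2%}")          # -10 %: maximum slip absorbed during a gust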
In a doubly-fed generator, the rotor’s speed can be varied by up to 30% of the rated speed. This raises power levels under changing wind conditions. It also minimizes undesirable fluctuations in the power grid and stresses exerted on the structure’s crucial components.
To achieve this, the rotor windings are routed out via slip rings and connected to the grid via special inverters. The generator is thereby connected to the grid via the stator as well as the rotor, hence the term dual (or double) feed. This permits the controller to directly influence the magnetic conditions inside the rotor. The inverters can rectify alternating current in both directions, and convert direct current into alternating current of any required frequency.
At low wind speeds, the drive train’s rotation is slower compared with the grid’s operation. In this mode, a rotary field is fed into the rotor and superimposed on its rotation frequency. In this manner, the machine magnetically attains its rated slip, even though the rotor’s mechanical operation is slower compared with the grid’s operation. In this process, energy is drawn from the grid in order to produce the rotor field. However, this amount of energy is significantly lower than the stator’s output energy. This enables a plant’s generator to cover a wide speed range.
When the wind speed increases, this rotary field’s frequency is lowered accordingly, thus keeping the magnetic slip constant. To offset gusts and high wind speeds, the rotor field’s direction of rotation is reversed. This makes it possible to raise the mechanical speed at a constant magnetic slip. To achieve this, the converters feed portions of the rotor currents to the grid, resulting in a flow of energy in this direction. About 10% of the plant’s power is thus generated in the rotor and fed via the converters to the grid.
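The relationship between shaft speed, slip, and the frequency fed to the rotor can be sketched as follows; the plus/minus 30 percent operating points are taken from the text, while the machine data are otherwise hypothetical.

GRID_HZ = 50.0   # European grid frequency quoted in the text

def rotor_feed_hz(speed_fraction: float) -> float:
    """Frequency the converter must feed into the rotor when the shaft runs at
    the given fraction of synchronous speed (0.7 = 30 % below, 1.3 = 30 % above)."""
    slip = 1.0 - speed_fraction
    return slip * GRID_HZ            # a negative value means the rotor field rotates backwards

for fraction in (0.7, 1.0, 1.3):
    print(f"{fraction:.1f} x sync speed -> rotor feed {rotor_feed_hz(fraction):+.0f} Hz")

# Roughly, the rotor circuit handles about |slip| of the machine's power, which is
# why the converters need only be rated at a fraction of the plant's output.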
Because the machine’s excitation takes place via the converters, reactive power from the grid is not needed. Instead, the control system makes it possible to provide capacitive and inductive reactive power in accordance with the grid operator’s specifications. The plant, therefore, contributes toward stabilizing the grid.