Polar bears depend on Arctic sea ice for catching their prey, but as global warming has reduced that ice, they have been forced to swim ever longer distances to reach solid ice sheets. Swimming is itself one of the polar bear's adaptations to the harsh Arctic climate. Among all the bear species, polar bears are known for their outstanding swimming abilities; they are classified as marine mammals even though they closely resemble land mammals. Polar bears are capable of swimming hundreds of miles, staying in the water for hours or even days at a time. Now let's explore some interesting facts about how fast a polar bear can swim.
How Fast Can a Polar Bear Swim
- A polar bear can swim at a speed of about 10 km/h (6.2 mph).
- Polar bears readily spend hours at a time in the water, covering long distances.
- Polar bears can normally swim for 7 to 10 hours a day.
- Polar bears are incredible swimmers and don't hesitate to cross wide leads and bays; a polar bear can swim 100 km (62 miles) in a single swim with ease.
Long-Distance Polar Bear Swimming:
- In a study conducted in the Beaufort Sea, 52 adult female polar bears fitted with GPS collars were tracked (no males were collared, because their necks are too thick for the collars). Over six years, 50 long swims by 20 of the bears were identified.
- The 50 long-distance swims ranged from 53 km to 687 km, with an average of 154 km.
- Ten of the females swam with their cubs, and a year later six of those cubs were known to have survived. The researchers could not determine whether the remaining four cubs were lost before or after the swims. The study thus showed that even young polar bears are capable of swimming long distances.
- The bears usually undertook their long-distance swims between July and October.
How Long Can a Polar Bear Swim Without Stopping
- The longest swim recorded in the Beaufort Sea was made by an adult female polar bear, covering about 687 km (427 miles).
- Scientists recorded that she swam continuously for about 9 days (216 hours), an average of roughly 3.2 km/h.
- After resting for some time on sea ice, she walked a further 1,800 km over pack ice in 53 days; as a consequence she lost 22% of her body weight, as well as her yearling offspring.
How Deep Can a Polar Bear Dive
- Polar bears sometimes dive to catch sea birds resting on the surface, to stalk seals, to follow ice floes or to search for kelp.
- Polar bears usually dive 3 to 4.5 m (9.8-14.8 feet) into the cold Arctic water and can hold their breath for more than 3 minutes.
- Researchers do not know exactly how deep a polar bear can dive, but they estimate it can reach about 6 m (20 feet).
Longest Dive by Any Polar Bear
- The longest dive duration observed for any polar bear was 3 minutes and 10 seconds, during which it covered 45 to 50 m (148-164 ft) without surfacing.
How Polar Bears Are Able to Swim So Well
- Polar bears are shallow divers and usually swim dog-paddle style. Their front paws do the paddling, propelling them through the water at the fastest possible speed.
- Their large paws, up to 30 cm (12 in) in diameter, not only spread the bear's weight on ice but also provide propulsion in the water, allowing a maximum speed of about 10 km/h; the hind legs are held flat and work as rudders.
- They close their nostrils when they dive.
How Are Polar Bears Able to Swim in Cold Waters
- Their thick layer of fat, over 10 cm (4 inches), insulates them from the cold while swimming in icy water.
- A polar bear's hollow hairs act like tiny floats, making the bear more buoyant and helping it swim in cold water.
- The polar bear's water-repellent fur doesn't mat when wet; it minimizes heat loss and acts as an excellent insulator even in cold water.
- Polar bears have small shoulders, a large rump, a small head and a relatively long neck, all of which help them keep their head above the water surface.
Why Do Polar Bears Need to Swim With Such Speed
- Polar bears need to swim fast because global warming has prolonged the ice-free period and shrunk their hunting grounds.
- Rapidly declining sea ice pushes the bears to swim faster and cover longer distances in order to reach solid ice and discover new hunting areas where seals are plentiful.
- Scientists also observed that polar bears moved on average 2.3 times more in water than on sea ice; this matters because while swimming they can neither hunt nor rest.
Polar Bear Swimming Speed Compared With Its Prey
- Polar bears can reach a maximum speed of 10 km/h, remain submerged for about 3 minutes, and dive to about 15 feet.
- The polar bear's prime prey, the seal, swims at an average speed of 10 km/h but can reach up to 30 km/h, dive as deep as 300 feet, and remain submerged for 45 minutes without surfacing.
- Because seals swim faster than polar bears, the bears use the aquatic-stalk method to catch them.
- In the aquatic stalk, the bear swims underwater toward a hauled-out seal and, on reaching the edge of the ice floe, suddenly emerges from the water and grabs its prey.
Learn more about Polar Bears by reading these: Polar Bear Facts for Kids
Presentation on theme: "When you catch a deep-sea fish, why do its eyes pop out?" — Presentation transcript:
1 QUICK WRITE: Why is the electricity produced at the bottom of dams? When you catch a deep-sea fish, why do its eyes pop out? Why do your ears pop on an airplane or up in the mountains?
6 Fluid: A substance that can easily change its shape, such as a liquid or a gas. The molecules in a fluid carry a certain amount of force (mass and acceleration) and exert pressure on the surfaces they touch.
13 The whole hurricane system is at low pressure, but the pressure decreases dramatically toward the eye of the hurricane. Air always flows from higher to lower pressure, which creates the high-velocity winds.
14 Barometric Pressure: The barometer is used to forecast weather. A decreasing barometer means stormy weather; an increasing barometer means fairer weather.
24 Hydraulic piston example (left piston area .002 m², right piston area 20 m², 4 N applied on the left): 1. What is the pressure of the left piston? P = F/a = 4/.002 = 2000 Pa. 2. What is the pressure of the right piston? 2000 Pa, since pressure is transmitted undiminished through the fluid. 3. What is the total force on the right piston? F = Pa = 2000 N/m² x 20 m² = 40,000 N.
25 Hydraulic Brakes: The hydraulic brake system of a car multiplies the force exerted on the brake pedal.
26 Buoyancy: The tendency or ability of an object to float.
27 Buoyancy: The pressure on the bottom of a submerged object is greater than the pressure on the top. The result is a net force in the upward direction.
28 Buoyant Force: The upward force exerted by a fluid on a submerged or floating object.
29 Buoyancy: The buoyant force works opposite the weight of an object.
30 Archimedes' principle: The buoyant force on an object immersed in a liquid equals the weight of the liquid displaced; if the object floats, it also equals the weight of the object.
32 Archimedes' Principle: Hmm! The crown seems lighter under water! The buoyant force on a submerged object is equal to the weight of the liquid displaced by the object. For water, with a density of one gram per cubic centimeter, this provides a convenient way to determine the volume of an irregularly shaped object and then to determine its density.
33 Density and buoyancy: An object that has a greater density than the fluid it is in will sink. If its density is less than the fluid's, it will float.
34 A solid block of steel sinks in water. A steel ship with the same mass floats on the surface.
35 Density: Changes in density cause a submarine to dive, rise, or float.
45 Bernoulli's and Baseball: A non-spinning baseball, or a stationary baseball in an airstream, exhibits symmetric flow. A baseball thrown with spin will curve because one side of the ball experiences reduced pressure. This is commonly interpreted as an application of the Bernoulli principle. The roughness of the ball's surface and the laces on the ball are important: with a perfectly smooth ball you would not get enough interaction with the air.
46 Bernoulli's and Airfoil: The air across the top of a conventional airfoil experiences constricted flow lines and increased air speed relative to the wing. This causes a decrease in pressure on the top according to the Bernoulli equation and provides a lift force.
47 Aerodynamicists (see Eastlake) use the Bernoulli model to correlate with pressure measurements made in wind tunnels, and assert that when pressure measurements are made at multiple locations around the airfoil and summed, they agree reasonably with the observed lift.
48 Others appeal to a model based on Newton's laws and assert that the main lift comes as a result of the angle of attack. Part of the Newtonian model of the lift force involves the attachment of the boundary layer of air to the top of the wing, with a resulting downwash of air behind the wing. If the wing gives the air a downward force, then by Newton's third law the wing experiences a force in the opposite direction - a lift. While the "Bernoulli vs Newton" debate continues, Eastlake's position is that they are really equivalent, just different approaches to the same physical phenomenon. NASA has a nice aerodynamics site where these issues are discussed.
52 MORE EQUATIONS!!! Liquid Pressure = ρgh, where ρ = mass/volume = fluid density, g = acceleration of gravity, h = height or depth of fluid.
53 The pressure from the weight of a column of liquid of area A and height h is P = ρgh. The most remarkable thing about this expression is what it does not include: the fluid pressure at a given depth does not depend upon the total mass or total volume of the liquid. The expression is easy to see for a straight, unobstructed column, but not obvious for containers of other shapes. Example: P = ρgh = 1000 kg/m³ x 9.8 m/s² x 1 m = 9,800 Pa; at 3 m depth, P = 1000 kg/m³ x 9.8 m/s² x 3 m = 29,400 Pa.
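The two formulas worked on these slides – Pascal's principle (slide 24) and P = ρgh (slides 52-53) – are easy to check with a short program. A minimal Python sketch, added here as an illustration using the slides' own numbers:

```python
# Pascal's principle and hydrostatic pressure, using the numbers
# from slides 24 and 53.

RHO_WATER = 1000.0  # fluid density, kg/m^3
G = 9.8             # acceleration of gravity, m/s^2

def hydraulic_force(f_in, a_in, a_out):
    """Output-piston force: pressure (P = F/a) is transmitted
    undiminished through the fluid, so F_out = P * a_out."""
    pressure = f_in / a_in
    return pressure * a_out

def liquid_pressure(depth_m, density=RHO_WATER, g=G):
    """Gauge pressure at a given depth: P = rho * g * h."""
    return density * g * depth_m

# Slide 24: 4 N on a 0.002 m^2 piston drives a 20 m^2 piston.
print(hydraulic_force(4.0, 0.002, 20.0))  # -> 40000.0 N

# Slide 53: pressure under 1 m and 3 m of water.
print(liquid_pressure(1.0))               # -> 9800.0 Pa
print(liquid_pressure(3.0))               # -> 29400.0 Pa
```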
The Ogasawara Islands, also known as the Bonin Islands, have faced a number of unique — if not bizarre — developments over the course of history. Clustered in three groups, the volcanic, tropical chain of more than 30 islets about 1,000 km south of Tokyo looks hopelessly isolated on a map, and the only way to get to the main island is still by ferry, a 24-hour journey from Tokyo. But the chain, now inhabited by about 2,600 people, was a key junction during the wave of globalization from the 17th century to the late 19th century. The islands were witness to dramatic events that greatly affected the course of modern Japanese history, including the arrival of American Commodore Matthew Perry, whose “black ships” eventually forced Japan to end its 214-year-old closed-door policy and embark on a course of modernization and Westernization. Given their strategic location, one of the islands, Iwo Jima — now officially called Iwoto — became a bloody battlefield in the closing days of World War II. “They were considered strategically important in World War II as sites for naval and airplane bases,” reads a passage in “Japan: An Illustrated Encyclopedia,” published in English by Kodansha Ltd. in 1993. Archaeological studies of ancient relics suggest some of the islands — including Kita-Iwoto, about 70 km north of Iwoto — were inhabited by humans around 2,000 years ago. But later, the islands were empty for hundreds of years. Who re-discovered them is not clear, but they were probably spotted by Westerners who were exploring the Pacific Ocean in the 16th and 17th centuries, aspiring to discover lands blessed with gold, silver or spices that would bring them riches. A Japanese legend says the islands were first discovered by samurai lord Sadayoshi Ogasawara in 1593, although historians say this is likely a fabricated story, given the errors and contradictions in materials left by those who claimed in the 18th century to be his descendants. In 1675, a 43-meter official exploration ship from the Tokugawa shogunate traveled to Chichijima and Hahajima. The crew drew up the first detailed maps of the islands and built a monument declaring that they belonged to Japan. But the shogunate didn’t effectively control the islands thereafter. In 1830, a group of five Westerners and 20 Hawaiians settled on Chichijima and Hahajima, including American citizen Nathaniel Savory of Massachusetts. Many of their descendants still live there. Commodore Perry visited the islands in 1853 while on his way to Tokyo, then known as Edo, with a mission to end Japan’s isolation. “The Commodore, having been satisfied of the importance of these islands to commerce, was induced to visit them, chiefly by a desire of examining them himself and recommending (Chichijima) as a stopping place for the line of steamers which, sooner or later, must be established between California and China,” reads the official record of Perry’s journey to Japan, published in 1856. Perry also purchased a plot of land on Chichijima from Savory to store coal. Alarmed by the frequent visits by Western ships and the purpose of Perry’s journey, Japan declared the Ogasawara Islands part of its territory in 1876. This was endorsed by Western powers. Since that time, many Japanese have settled on the islands, building port towns based on agriculture, fishing and whaling. Many farmers grew wealthy because the tropical climate allowed them to grow vegetables unavailable on the mainland during winter. The combined population of the Ogasawara chain peaked in 1944 at 7,711.
When the Pacific War began in 1941, the islands became part of the front line in the defense of mainland Japan. Many residents whose ancestors were Westerners faced discrimination and were often suspected of being spies. In 1944, the Imperial Japanese Army forced 6,886 residents to evacuate. On Jan. 29, 1945, U.S. forces started attacking Iwo Jima, leading to what became one of the fiercest ground battles of World War II. In October 1946, only the descendants of Western immigrants and their spouses were allowed to return. The total number of returnees was 135. Although the Allied Occupation officially ended in 1952, when the Treaty of San Francisco took effect, the Ogasawara Islands were held by the U.S. until 1968. Though thousands of former residents were eager to go back after the war, their petitions were all rejected during the Occupation, and they were unable to set foot on their home islands until the 1968 handover to Japan.
The spinal canal is a long opening down the center of the spinal column. The spinal cord runs through this opening. The canal begins at the base of the skull and ends at the lower back, providing a pathway for the central nervous system to send messages from the brain to the rest of the body and back again. The spinal canal is formed by openings within the vertebrae. These vertebral openings are called foramina. Along the length of the canal is the epidural space, which surrounds the dura mater – a protective membrane that encloses the spinal cord. The blood vessels that supply the spine with blood also run through the spinal canal. The most important function of the spinal canal is to serve as a conduit for spinal nerves. There are 31 pairs of spinal nerves, each branching off the spinal cord in order to send messages (in the form of sensory impulses) to different parts of the body. The groupings of nerve roots branching off the spinal cord are named for the region of the spine in which they are found: cervical (upper spine), thoracic (mid-spine), lumbar (lower spine) and sacral (lower spine). Each set of nerve roots delivers messages to a different area, muscle group or organ. The responsibilities for sensory and motor control for the different nerve groups within the spinal canal include:
- Cervical nerves: Head, neck, shoulders, arms, wrists, hands and diaphragm
- Thoracic nerves: Hands, chest, back and abdomen
- Lumbar nerves: Legs and feet
- Sacral nerves: Legs, bowel, bladder and reproductive function
Abnormalities within the spinal anatomy – such as degenerative disc disease and spinal stenosis – can create extra pressure within the spinal canal, resulting in chronic neck or back pain. Often, the neck or back pain begins when a nerve root has become compressed or irritated by a condition known as spinal stenosis, which is a narrowing of the spinal canal that could be caused by a herniated disc, bone spur or other spinal condition. The surgeons at Laser Spine Institute can treat these conditions using state-of-the-art techniques. Contact Laser Spine Institute to learn more about how our minimally invasive, outpatient procedures can help you find relief from neck or back pain.
Place: United States of America Subject: biography, astronomy US astronomer who studied extragalactic nebulae and demonstrated them to be galaxies like our own. He found the first evidence for the expansion of the universe, in accordance with the cosmological theories of Georges Lemaître and Willem de Sitter, and his work led to an enormous expansion of our perception of the size of the universe. Hubble was born in Marshfield, Missouri, on 20 November 1889. He went to high school in Chicago and then attended the University of Chicago where his interest in mathematics and astronomy was influenced by George Hale and Robert Millikan. After receiving his bachelor's degree in 1910, he became a Rhodes scholar at Queen's College, Oxford, where he took a degree in jurisprudence in 1912. When he returned to the USA in 1913, he was admitted to the Kentucky Bar, and he practised law for a brief period before returning to Chicago to take a research post at the Yerkes Observatory 1914-17. In 1917 Hubble volunteered to serve in the US infantry and was sent to France at the end of World War I. He remained on active service in Germany until 1919, when he was able to return to the USA and take up the earlier offer made to him by Hale of a post as astronomer at the Mount Wilson Observatory near Pasadena, where the 2.5-m/100-in reflecting telescope had only recently been made operational. Hubble worked at Mount Wilson for the rest of his career, and it was there that he carried out his most important work. His research was interrupted by the outbreak of World War II, when he served as a ballistics expert for the US War Department. He was awarded the Gold Medal of the Royal Astronomical Society in 1940, and received the Presidential Medal for Merit in 1946. He was active in research until his last days, despite a heart condition, and died in San Marino, California, on 28 September 1953. While Hubble was working at the Yerkes Observatory, he made a careful study of nebulae, and attempted to classify them into intra- and extragalactic varieties. At that time there was great interest in discovering what other structures, if any, lay beyond our Galaxy. The mysterious gas clouds, known as the smaller and larger Magellanic Clouds, which had first been systematically catalogued by Charles Messier and called ‘nebulae’, were good extragalactic candidates and were of great interest to Hubble. He had been particularly inspired by Henrietta Leavitt's work on the Cepheid variable stars in the Magellanic Clouds; and later work by Harlow Shapley, Henry Russell, and Ejnar Hertzsprung on the distances of these stars from the Earth had demonstrated that the universe did not begin and end within the confines of our Galaxy. Hubble's doctoral thesis was based on his studies of nebulae, but he found it frustrating because he knew that more definite information depended upon the availability of telescopes of greater light-gathering power and with better resolution. After World War I, with the 2.5-m/100-in reflector at Mount Wilson at his disposal, Hubble was able to make significant advances in his studies of nebulae. He found that the source of the light radiating from nebulae was either stars embedded in the nebular gas or stars that were closely associated with the system. In 1923 he discovered a Cepheid variable star in the Andromeda nebula. Within a year he had detected no fewer than 36 stars within that nebula alone, and found that 12 of these were Cepheids. 
These 12 stars could be used, following the method applied to the Cepheids that Leavitt had observed in the Magellanic Clouds, to determine the distance of the Andromeda nebula. It was approximately 900,000 light years away, much more distant than the outer boundary of our own Galaxy - then known to be about 100,000 light years in diameter. Hubble discovered many gaseous nebulae and many other nebulae with stars. He found that they contained globular clusters, novae, and other stellar configurations that could also be found within our own Galaxy. In 1924 he finally proposed that these nebulae were in fact other galaxies like our own, a theory that became known as the ‘island universe’. From 1925 onwards he studied the structures of the galaxies and classified them according to their morphology into regular and irregular forms. The regular nebulae comprised 97% of them and appeared either as ellipses or as spirals, and the spirals were further divided into normal and barred types. All the various shapes made up a continuous series, which Hubble saw as an integrated ‘family’. The irregular forms comprised only 3% of the nebulae he studied. By the end of 1935, Hubble's work had extended the horizons of the universe to 500 million light years. Having classified the various kinds of galaxies that he observed, Hubble began to assess their distances from us and the speeds at which they were receding. The radial velocity of galaxies had been studied by several other astronomers, in particular by Vesto Slipher. Hubble analysed his data, and added some new observations. In 1929 he found, on the basis of information for 46 galaxies, that the speed at which the galaxies were receding (as determined from their spectroscopic red shifts) was directly correlated with their distance from us. He found that the more distant a galaxy was, the greater was its speed of recession - now known as Hubble's law. This astonishing relationship inevitably led to the conclusion that the universe is expanding, as Lemaître had also deduced from Albert Einstein's general theory of relativity. This data was used to determine the portion of the universe that we can ever come to know, the radius of which is called the Hubble radius. Beyond this limit, any matter will be travelling at the speed of light, and so communication with it will never be possible. The data on galactic recession was also used to determine the age and the diameter of the universe, although at the time both of these calculations were marred by erroneous assumptions, which were later corrected by Walter Baade. The ratio of the velocity of galactic recession to distance has been named the Hubble constant; the value then current was 530 km/330 mi per second per megaparsec of distance - very close to Hubble's original value of 500 km/310 mi per second per megaparsec. During the 1930s, Hubble studied the distribution of galaxies and his results supported the idea that their distribution was isotropic. They also clarified the reason for the ‘zone of avoidance’ in the galactic plane. This effect was caused by the quantities of dust and diffuse interstellar matter in that plane. Among his later studies was a report made in 1941 that the spiral arms of the galaxies probably did ‘trail’ as a result of galactic rotation, rather than open out. After World War II Hubble became very much an elder statesman of US astronomy. He was involved in the completion of the 5-m/200-in Hale Telescope at Mount Palomar, which was opened in 1948.
One of the original intentions for this telescope was the study of faint stellar objects, and Hubble used it for this purpose during his few remaining years.
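As a minimal numerical illustration of Hubble's law described above (an addition, not part of the original entry): recession velocity is proportional to distance, v = H0 × d. A short Python sketch using Hubble's original constant of about 500 km/s per megaparsec:

```python
# Hubble's law: recession velocity proportional to distance,
# v = H0 * d. Uses Hubble's original constant (~500 km/s per
# megaparsec); the accepted modern value is far smaller.

H0_ORIGINAL = 500.0  # km/s per megaparsec (Hubble, 1929)

def recession_velocity(distance_mpc, h0=H0_ORIGINAL):
    """Recession velocity in km/s for a galaxy distance_mpc
    megaparsecs away."""
    return h0 * distance_mpc

# A galaxy 2 Mpc away recedes at ~1000 km/s under the 1929 value.
print(recession_velocity(2.0))
```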
The famous three-body problem can be traced back to Isaac Newton in the 1680s, and was subsequently studied by Euler, Lagrange, Poincaré and others. Studies of the three-body problem led to the discovery of the so-called sensitive dependence on initial conditions (SDIC) of chaotic dynamic systems. Nowadays chaotic dynamics is widely regarded as the third great scientific revolution in 20th-century physics, comparable to relativity and quantum mechanics. Thus, studies of the three-body problem have very important scientific meaning. Poincaré in 1890 revealed that trajectories of three-body systems are commonly non-periodic, i.e. not repeating, which explains why it is so hard to find periodic orbits of the three-body system. In the 300 years since the three-body problem was first recognized, only three families of periodic orbits had been found, until 2013, when Suvakov and Dmitrasinovic [Phys. Rev. Lett. 110, 114301 (2013)] made a breakthrough by numerically finding 13 new distinct periodic orbits, belonging to 11 new families, of the Newtonian planar three-body problem with equal masses and zero angular momentum (see http://www. In the work reported here, the two scientists found 695 families of periodic orbits of the same system. These 695 periodic orbits include the well-known figure-eight family found by Moore in 1993, the 11 families found by Suvakov and Dmitrasinovic in 2013, and, notably, more than 600 new families that have never been reported before. The two scientists used the so-called "Clean Numerical Simulation" (CNS), a new numerical strategy for reliable simulation of chaotic dynamic systems proposed by the second author in 2009, which is based on a sufficiently high order of Taylor series and multiple-precision data with sufficiently many significant digits, plus a convergence/reliability check. The CNS reduces truncation and round-off error so greatly that numerical noise is negligible over a long enough interval of time, and thus more periodic orbits of the three-body system can be found. As pointed out by Montgomery in 1998, each periodic orbit in the real space of the three-body system corresponds to a closed curve on the so-called "shape sphere", which is characterized topologically by its so-called "free group element". The averaged period of an orbit is equal to the period of the orbit divided by the length of the corresponding free group element. These 695 families suggest that there should exist a quasi Kepler's third law: the square of the average period times the cube of the total kinetic and potential energy is approximately a constant. This generalized Kepler's third law reveals that three-body systems have something in common, which might deepen our understanding and enrich our knowledge of the three-body system. "The discovery of the more than 600 new periodic orbits is mainly due to advances in computer science and the use of the new strategy of numerical simulation for chaotic dynamic systems, namely the CNS," said the two scientists. It should be emphasized that 243 of the new periodic orbits were found only by means of the CNS; in other words, if traditional algorithms in double precision had been used, about 40% of the new periodic orbits would have been lost. This indicates the novelty and originality of the Clean Numerical Simulation, since any new method must bring something completely new/different. As shown in Figure 1, many pictures of these newly found periodic orbits of the three-body system are beautiful and elegant, like modern paintings. "We are shocked and captivated by the perfection of them," said the two scientists.
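In symbols (a reconstruction from the prose above, not a formula quoted from the paper): with T the period of an orbit, L_f the length of its free group element, and E the total kinetic-plus-potential energy (negative for a bound system, hence the absolute value), the quasi Kepler's third law reads:

```latex
\bar{T} = \frac{T}{L_f}, \qquad \bar{T}^{\,2}\,|E|^{3} \approx \text{const.}
```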
See the article: XiaoMing Li and ShiJun Liao, More than six hundred new families of Newtonian periodic planar collisionless three-body orbits, Sci. China-Phys. Mech. Astron. 60, 129511 (2017), doi: 10.1007/s11433-017-9078-5
In deciding how best to meet the world’s growing needs for energy, the answers depend crucially on how the question is framed. Looking for the most cost-effective path provides one set of answers; including the need to curtail greenhouse-gas emissions gives a different picture. Adding the need to address looming shortages of fresh water, it turns out, leads to a very different set of choices. That’s one conclusion of a new study led by Mort Webster, an associate professor of engineering systems at MIT, published in the journal Nature Climate Change. The study, he says, makes clear that it is crucial to examine these needs together before making decisions about investments in new energy infrastructure, where choices made today could continue to affect the water and energy landscape for decades to come. The intersection of these issues is particularly critical because of the strong contribution of the electricity-generation industry to overall greenhouse-gas emissions, and the strong dependence of most present-day generating systems on abundant supplies of water. Furthermore, while power plants are a strong contributor to climate change, one expected result of that climate change is a significant change of rainfall patterns, likely leading to regional droughts and water shortages. Surprisingly, Webster says, this nexus is a virtually unexplored area of research. “When we started this work,” he says, “we assumed that the basic work had been done, and we were going to do something more sophisticated. But then we realized nobody had done the simple, dumb thing” — that is, looking at the fundamental question of whether assessing the three issues in tandem would produce the same set of decisions as looking at them in isolation. The answer, they found, was a resounding no. “Would you build the same things, the same mix of technologies, to get low carbon emissions and to get low water use?” Webster asks. “No, you wouldn’t.” In order to balance dwindling water resources against the growing need for electricity, a quite different set of choices would need to be made, he says — and some of those choices may require extensive research in areas that currently receive little attention, such as the development of power-plant cooling systems that use far less water, or none at all. Even where the needed technologies do exist, decisions on which to use for electricity production are strongly affected by projections of future costs and regulations on carbon emissions, as well as future limits on water availability. For example, solar power is not currently cost-competitive with other sources of electricity in most locations — but when balanced against the need to reduce emissions and water consumption, it may end up as the best choice, he says. “You need to use different cooling systems, and potentially more wind and solar energy, when you include water use than if the choice is just driven by carbon dioxide emissions alone,” Webster says. His study focused on electricity generation in the year 2050 under three different scenarios: purely cost-based choices; with a requirement for a 75 percent reduction in carbon emissions; or with a combined requirement for emissions reduction and a 50 percent reduction in water use. To deal with the large uncertainties in many projections, Webster and his co-authors used a mathematical simulation in which they tried 1,000 different possibilities for each of the three scenarios, varying each of the variables randomly within the projected range of uncertainty. 
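The approach described here — drawing each uncertain input at random within its projected range, a thousand times per scenario — is a standard Monte Carlo scenario analysis. A minimal Python sketch of that idea follows; the variable names, ranges and decision rule are illustrative stand-ins, not the study's actual model:

```python
import random

# Illustrative Monte Carlo scenario analysis: draw every uncertain
# input uniformly within an assumed range, 1000 times per scenario.
# All names and ranges below are hypothetical, not from the study.

N_SAMPLES = 1000

UNCERTAIN_INPUTS = {
    "gas_price": (2.0, 8.0),           # $/MMBtu, assumed range
    "solar_capex": (800.0, 2000.0),    # $/kW, assumed range
    "water_availability": (0.5, 1.0),  # fraction of today's supply
}

def draw_inputs():
    """One random draw of every uncertain input."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in UNCERTAIN_INPUTS.items()}

def choose_mix(inputs, scenario):
    """Stand-in for the real capacity-planning model: pick a
    generation mix given the input draw and scenario constraints."""
    if scenario == "cost_only":
        return "coal-heavy" if inputs["gas_price"] > 5.0 else "gas-heavy"
    if scenario == "low_carbon":
        return "nuclear-heavy"
    return "wind/solar with dry cooling"  # low carbon + low water

for scenario in ("cost_only", "low_carbon", "low_carbon_low_water"):
    mixes = [choose_mix(draw_inputs(), scenario) for _ in range(N_SAMPLES)]
    most_common = max(set(mixes), key=mixes.count)
    print(scenario, "->", most_common)
```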
Some conclusions showed up across hundreds of simulations, despite the uncertainties. Based on cost alone, coal would generate about half of the electricity, whereas under the emissions-limited scenario that would drop to about one-fifth, and under the combined limitations, it would drop to essentially zero. While nuclear power would make up about 40 percent of the mix under the emissions-limited scenario, it plays almost no role at all in either the cost-alone or the emissions-plus-water scenarios. “We’re really targeting not just policymakers, but also the research community,” Webster says. Researchers “have thought a lot about how do we develop these low-carbon technologies, but they’ve given much less thought to how to do so with low amounts of water,” he says. While there has been some study of the potential for air-cooling systems for power plants, so far no such plants have been built, and research on them has been limited, Webster says. Now that they have completed this initial study, Webster and his team will look at more detailed scenarios about “how to get from here to there.” While this study looked at the mix of technologies needed in 2050, in future research they will examine the steps needed along the way to reach that point. “What should we be doing in the next 10 years?” he asks. “We have to look at the implications all together.”
There is a wide array of learning difficulties, and many children (between 4 and 6%) suffer from them. They are not due to a lack of intelligence, to unfavorable socioeconomic circumstances, or to a psychoaffective problem. One of the important factors in the development of learning disabilities is a lack of awareness of the appropriate articulatory or physical gesture. This entails a disturbance of short-term memory, a prerequisite for a normal learning process. These difficulties include dyslexia (having to do with reading), dysorthography (the relation of sounds to written letters), and dyspraxia (the use and coordination of learned gestures). Also included among these difficulties are dysphasia (for spoken language) and dyscalculia (concerning mathematical functions and numbers). The TOMATIS Method operates on the plasticity of the neural circuits involved in the decoding and analysis of sounds, as well as on those involved in motricity, balance, and coordination. As such, the TOMATIS Method can help children develop compensatory strategies to deal with and manage their learning difficulties and language disorders. The TOMATIS Method does not eliminate these problems altogether, but it helps the person manage them better and thus effectively overcome them. SOLISTEN Group: Equipped with special headphones, SOLISTEN plays specially processed and preselected music to stimulate the auditory integration system. By reproducing the electronic gate, this stimulation ensures the accurate integration of acoustic information and helps the brain to better receive, select and process this information.
- Designed for occupational therapists, speech and language therapists, physical therapists and special-needs teachers.
- Can be used either by individuals or by small groups.
- Listening program of 2-hour sessions per day during two periods of 30 days.
- No technical knowledge required to implement it.
- A specific 3-day training course.
- Year-long mentoring and assistance.
The term Arisierung, or "Aryanization", was coined by the National Socialists to describe the process whereby Jewish people were ousted from their jobs and from working life in general. "Aryanization" encompassed both illegal as well as state-sanctioned measures such as dismissal, debarment from practising a profession, restrictions on engaging in commercial activities and the transfer of rights and property to non-Jewish Germans, sometimes under duress. Up to 1937 seemingly unsystematic, isolated anti-Semitic actions took place which already posed a serious threat to the livelihood of the majority of Jews still living in Germany. However, in 1938 there was a radical worsening of the situation. First, all Jewish assets were registered. Then party entities, together with the authorities, used this information to initiate a wide-scale pseudo-legal transfer of Jewish enterprises to non-Jewish owners. After the Pogrom Night in November 1938 this process was speeded up by means of additional force and a large number of new legal regulations. By New Year 1939 most Jewish enterprises in Germany and Austria had already either been "Aryanized" or closed down. Whereas in the earlier years Jewish proprietors were at least able to realize part of the value of their property when they "sold" it, as things stood in 1938 "Aryanization" was increasingly taking on the nature of state-organized expropriation.
Here is the game of asexual reproduction… Life persists on Earth because of a remarkable phenomenon: reproduction. In living organisms it occurs in two ways, viz. asexual (one parent) and sexual (two parents). Here I am going to tell you how reproduction occurs when there is only one parent. Yes, we are going to learn about asexual reproduction. As there is only one parent involved, the newborn organism has exactly the same characters as its parent (it is a clone). Normally amitotic or mitotic cell division occurs, and the rate of reproduction is high in this type of reproduction. Asexual reproduction occurs in several ways. Let's learn!
1. Fission – It is of two types, viz. binary and multiple fission.
- Binary fission is much like a normal cell division. It occurs in unicellular organisms and is the simplest type of asexual reproduction. Now guys, see the image (Fig A); it can explain it to you in a better way.
- In multiple fission (Fig C), many organisms are formed from a single organism. In certain unicellular organisms, under unfavorable conditions, a cyst forms around the cell; nuclear division then occurs repeatedly inside it. This type of fission has been observed in amoeba, plasmodium etc.
2. Budding – A bulb-like projection appears on the body of an organism reproducing by budding. This bud grows and later separates from the parent body to form a new organism. Budding occurs mainly in yeast, hydra, sponges etc.
3. Fragmentation and regeneration – An organism breaks into many parts and each part then forms a new organism. Regeneration is the healing of a particular part of the body by cell division. Fragmentation occurs in many organisms such as sea stars, fungi, plants, worms and annelids.
4. Sporulation (Fig E) – I hope you are enjoying reading this ;). Spores are the reproductive bodies here; they help in the formation of new organisms. Reproduction occurs by the spreading of spores, which are formed in sporangia. Under favorable conditions the sporangia burst and the spores are spread through the air. Each spore later develops into a new organism. This type of reproduction occurs in many organisms such as fungi (mucor and rhizopus). You know, spores are of two types, viz. zoospores and aplanospores.
- Zoospores are motile, as they have flagella.
- Aplanospores are non-motile, as they lack flagellar extensions.
5. Vegetative Propagation – A special type of reproduction in plants. It occurs by natural and artificial methods. In the natural method, a part of the plant other than the sexual part (the flower) gives rise to a new plant. Artificial methods include cutting, layering and grafting. I think even this picture can explain the whole story in a better way 🙂
- Layering is done in raspberries, gooseberries etc.
Well guys! Your doubts and queries are always welcome; if you have any issue or doubt, you can post it here. If you want to learn more about reproduction you can jump into this link. And I am Anjali Ahuja (Biology mentor) signing off for now.
Throughout the 18th and 19th centuries lead production reached a peak, and Britain became the main producer of lead in the world. During the 19th century miners struck 'bargains' with landowners so that prospecting for ore was of mutual benefit. This effectively meant the miners became self-employed and had a great incentive to find viable lead veins and exploit the deposits. Major advances in the harnessing of water technology, in the form of water wheels, meant that some degree of mechanisation was possible. However, mining in the Dales was always hard manual labour, often in very remote areas, carried out with pick and shovel by men, women and children in dirty and often dangerous conditions. Miners rarely became wealthy. The precarious nature of prospecting meant that supplements to their income were needed, and so most miners and their families also turned their hand to farming for food; even hand-knitting was used to generate extra income. The men often knitted on the long walks to and from the levels and smelting areas, so that time was not wasted. By the turn of the 20th century the Dales mines could no longer compete with cheaper imports, particularly from Spain, and most mines closed. The last working mine in Swaledale finally closed in 1912. Many of the mineworkers moved away from the Dales in search of work. Some moved to the industrial mill towns, but others emigrated and continued mining in their new countries. Today the remains of the lead mining industry scar the landscape of most dales to some degree, but they are particularly prominent in Swaledale, Arkengarthdale, Wensleydale and Wharfedale.
Ruined remains of a stone-built smelt mill in wild moorland surroundings
Farming in the Middle Ages
Written by Simon Newman
Farming in the Middle Ages was done by peasants and serfs. Peasant farmers made just enough money to live on, while serfs had no rights and were all but slaves to the lords whose land they lived on. Some serf farmers eventually earned rights in exchange for back-breaking work seven days a week and on-command service to their lord.
Farming Methods and Tools
Lands were farmed using a three-field agricultural system. One field was for the summer crop, another for the winter crop, and the third lay fallow, or uncultivated, each year. The fallow land was left to regain nutrients for the next year. Farmers had only a rudimentary knowledge of fertilizers. Thus, each year only an average of two-thirds of a farmer's land was usually cultivated; the other third lay fallow. The average yield of an acre of farmland in the Middle Ages was eight to nine bushels of grain. Some farmers did have methods for fertilizing their soil. A common fertilization technique in the Middle Ages was called marling: farmers spread clay containing lime carbonate onto their soil, restoring the nutrients needed to grow crops. Farmers also used manure as fertilizer, which they got from the livestock they raised. There were not many tools used for farming, and the tools available were rather crude. The wooden ploughs used for farming in the Middle Ages barely scratched the ground. Grain was cut with a sickle and grass mown with a scythe. It took an average of five men per day to collect a two-acre harvest. Harrowing, or burying seeds, was done with a hand tool resembling a large rake. As scientific breeding had not yet begun, farm animals were small and often unhealthy. A full-grown bull was only slightly larger than a modern calf, and the fleece of an entire sheep weighed an average of two ounces. Other common livestock included sheep, pigs, cows, goats and chickens. The most important livestock animal, the ox, was unavailable to most farmers. Oxen were referred to as "beasts of burden" because of the amount of physical labor they could handle that humans could not; horses were also sometimes called "beasts of burden." Villages or towns often pooled money together to buy a few oxen because they were so vital to completing important farm work. The oxen were rotated between members of the community, who looked after each other and made sure that, especially during ploughing and harvesting time, important farm work was always finished by everyone. Common crops produced in the Middle Ages included wheat, beans, barley, peas and oats. Most farmers had a spring and a fall crop. The spring crop often produced barley and beans, while the fall crop produced wheat and rye. The wheat and rye were used for bread or sold to make money. The oats were usually used to feed livestock, and the barley was often used for beer. Farmers used a crop rotation system which is still used today. The way crop rotation works is that different crops are planted on the same field in alternating years. For instance, one year the farmers may plant oats and the next year they plant beans. Because these two crops use different nutrients, the nutrients used by one crop (say oats) are absorbed while that crop is growing and are used up when the oats finish growing.
The next year, the farmers plant beans in that field, because beans use up different nutrients in the soil. Because those nutrients were not used up in that field the previous year, the field is primed for the beans. Farming in the Middle Ages was controlled by the weather. One night of bad frost could mean a whole year of bad crops. Certain rituals and procedures also had to be performed throughout the year to ensure a satisfactory crop. A farmer's crop, no matter the season, always had to be monitored. A farmer's year:
- In January, farmers hoped for rain. They focused on making and repairing tools as well as repairing fences.
- In February, farmers hoped for rain. They focused on carting manure and marl.
- In March, farmers hoped for a dry month with no severe frosts. They focused on the ploughing and spreading of manure.
- In April, farmers hoped for a mixture of rain and sunshine. They focused on sowing the spring seeds and harrowing them.
- In May, farmers hoped for a mixture of rain and sunshine. They focused on digging ditches and started their first ploughing of the fallow fields.
- In June, farmers hoped for dry weather. They focused on hay making, sheep shearing, and did a second ploughing of the fallow fields.
- In July, farmers hoped for a month in which the first half was dry and the second half was rainy. They focused on hay making, sheep shearing, and crop weeding.
- In August, farmers hoped for warm, dry weather. They focused on harvesting.
- In September, farmers hoped for rain. They focused on threshing, ploughing and pruning fruit trees.
- In October, farmers hoped for dry weather with no severe frosts. They focused on their last ploughing of the year.
- In November, farmers hoped for a mixture of rain and sunshine. They focused on collecting acorns for pigs.
- In December, farmers hoped for a mixture of rain and sunshine. They focused on making and repairing tools and slaughtering livestock.
Women's role in farming in the Middle Ages
Farmers' wives often helped raise the smaller livestock, such as chickens. These livestock were then killed and eaten by the family or possibly sold for extra money. Farmers' wives also prepared and preserved all of the family's meals. They made useful household food items such as butter and cheese as well. Some farmers' wives also earned extra money for the family by spinning thread or learning another "stay-at-home" trade, such as brewing ale.
Ancient Life in Kansas Rocks, part 24 of 27
Pliohippus represented a stage in the development of the modern horse from an animal no larger than an average-sized modern dog. Pliohippus was about the size of a pony, and differed from earlier "horses" in having only a single toe. Here, at lower right, is a molar from this extinct horse. (Ogallala Formation, Pliocene) Two kinds of fossil elephants are commonly found in Kansas, the mastodon and the woolly mammoth. Remains of these extinct animals are commonly found in gravels and bottom lands and other unconsolidated earth. They are easily distinguished from each other by the appearance of their teeth. Mastodon molars, at the left, have large conical cusps and are low-crowned and rooted, whereas the mammoths and other true elephants have molars, right, with 12 to 30 high, thin, transverse enamel ridges or crests, high-crowned and rootless. Mammoth teeth acted like millstones, with varying hardnesses across the grinding surfaces. Mastodons were smaller than mammoths, not exceeding a height of 9.5 feet at the shoulder. They ranged over the entire United States and Canada, and became extinct in post-glacial time, possibly even within the span of written history. They were hairy, with a heavy undercoat of wool. Since pottery and charcoal, evidence of campsites of early man, have been discovered in levels of earth below, and hence older than, those at which mastodon remains have been found, it is probable that mastodons were hunted by men. Mammoths were the largest North American land mammals ever to have lived, growing to a height exceeding 13.5 feet at the shoulder, and were the only true elephants to have been native to North America. The best known of them ranged along the front of the glaciers and southward as far as Texas and Florida. Some have been found frozen (the flesh still edible) in frozen gravels in Siberia, and it has been determined that they ate grasses in summer and needles and twigs from coniferous trees in winter. Pictures of them have been found on the walls of the cave dwellings of Cro-Magnon man in Europe. They are called "woolly" because of their dark brown to black hair, up to 20 inches long, over a dense layer of wool up to 12 inches thick. (Pleistocene, eastern Kansas)
One thing we've all learned since white sharks started frequenting the waters around Cape Cod is that we don't know nearly as much about these fascinating creatures as one might think. Add this to the list: We don't know how quickly they grow, or how long they live. We've had working estimates. But new research by scientists at Woods Hole Oceanographic Institution, Northeast Fisheries Science Center, and Massachusetts Department of Marine Fisheries suggests that our understanding is in need of serious revision. Biologists have traditionally estimated sharks' ages by counting growth rings inside their vertebrae, much the way tree rings are used. But it's never been entirely clear that sharks' growth rings were annual – that one and only one new ring was produced each year. The new study tested that idea and found that, indeed, the rings weren't always annual. Just as humans grow rapidly early in life, then slow down and eventually stop, white shark growth rings appeared to be annual for up to forty years. After that point, rings appeared to be laid down less frequently. The upshot is that we've been significantly underestimating how long white sharks live, and overestimating how rapidly they grow (at least later in life). That has ramifications for shark ecology and fishery management, although the nature of those ramifications is still a bit murky. If female white sharks reach sexual maturity at, say, age ten and bear young every other year for the rest of their lives, that could be good news. But if, instead, it turns out that white sharks mature much later in life, it could mean current management strategies are based on a falsely optimistic idea of how quickly they can replenish their populations. How Dr. Li Ling Hamady and her colleagues figured this out is possibly even more fascinating than the result itself. Hydrogen bomb testing in the 1950s and early '60s released radioactive carbon into the atmosphere. It then made its way into the ocean, and into the plants and animals living there. Hamady used vertebrae from a handful of sharks with known times of death and estimated ages indicating they'd lived through the period of testing. She then looked to see if radiocarbon showed up in the bands that would correspond to those years, assuming each ring corresponded to a year. Call it the scientific version of turning lemons into lemonade.
To provide a global population of 9.6 billion with healthy and nutritious food and eliminate global hunger and malnutrition, food systems must simultaneously produce more food, improve livelihoods and reduce food wastage. According to the United Nations, food production will need to nearly double by 2050 to feed the world.1 At the same time environmental impacts will need to be addressed, including ecosystem degradation, high greenhouse gas emissions and water scarcity. Yet agricultural productivity is growing at a slower pace than ever before, and soil fertility and the nutritional value of foods are declining.2 Arable lands and key resources are also becoming increasingly scarce. There is no silver bullet that will solve these issues. Instead, a wide range of solutions will be needed across the food value chain. These include reducing food waste, promoting agricultural efficiencies, technological innovation and urban farming. Changes to people's dietary preferences and consumption patterns around the world will also have a role to play.
- According to the UN, we currently produce enough food for everyone on the planet to have an adequate diet, but poor distribution means that 795 million people are hungry while some 1.4 billion people are overweight or obese.
- One-third of the food produced today is lost or wasted at the production, post-harvest and processing stages of the food chain. 1
- If food waste and the amount of cereals fit for human consumption that are fed to livestock were halved, an extra 2.75 billion people could be fed. 2
- Wasting food costs the UK £12.5 billion each year.
Changing consumption habits
- Dietary preferences are changing around the world, with developing-world economies expected to see 80% growth in the meat sector by 2022. 1 China, for example, saw meat consumption increase by 63% between 1985 and 2009, a trend which seems likely to continue. 2
- The developing world is also where more than 80% of growth in global demand for field crops, fibre and beverage crops, meat and forest products – including timber – will occur over the next 15 years, according to the OECD. 3
- By 2050, consumption of meat and dairy is expected to have risen by 76% and 65% respectively against a 2005–07 baseline, compared with 40% for cereals. 4
- 1. Friends of the Earth Europe (2014, Jan). Meat Atlas: Facts and figures about the animals we eat.
- 2. Australian Department of Agriculture (2012).
- 3. Organisation for Economic Co-operation and Development (OECD) and Food and Agriculture Organization of the United Nations (FAO), 2013.
- 4. FAO projections for 2050 from FAO (2012).
- Food is a source of potential conflict as well as innovation, and changes in both our production and consumption patterns are essential for addressing upcoming challenges and creating sustainable food systems. As such, the development and large-scale use of 'green-tech' methods that support ecosystems and watersheds – such as agroecology, intercropping and integrated pest management systems – will play a key role in improving long-term agricultural productivity and global nutrition.
- Scientific evidence of climate vulnerability, extreme weather, disasters and supply chain volatility has cemented the need for businesses to act in a unified way across value chains.
A variety of large organisations involved in these food value chains have now recognised that their business models depend upon a reliable base of farmers producing consistent agricultural products, and are setting sustainable agriculture goals (and following through on them) to ensure a more secure future. However, this will need to happen across the sector to make a significant difference to the global outlook. - Increasing transparency around food supply chains, coupled with health and food quality concerns, mean middle class consumers around the world are likely to demand cleaner, healthier production systems in future. This could accelerate the push towards high levels of transparency in global food supply chains, enabled by advances in monitoring technology.
In this article we're going to show you a cool way to play, talk and learn with your child. Then we'll show you a cool way to play, talk and learn with your child. Next we'll show you a cool way to play, talk, and learn with your child. And after that, we'll show you a cool way to play, talk, and learn with your child. Notice a pattern here? You ought to, because this article is all about patterns – and how they can help your preschool or early elementary school student start building some serious math skills! The idea that patterns help prepare children for math might not sound surprising. After all, educators have long known that the predictable relationships found in patterns are similar to the predictable relationships found between numbers. And there's a well-documented connection between academic achievement and skill with patterns – it's been shown that children with a better understanding of patterns tend to perform better in math classes, and that kids who aren't doing so hot in math can substantially increase their scores by training with patterns. This is why so many children's book and TV show characters not-so-subtly stop whatever it is they're doing just to say things like, "Oh look! The stripes on this swarm of angry bumblebees go yellow, black, yellow, black!" People know patterns help kids in school. (Although they probably won't help them in a swarm of angry bumblebees.) But here's something you may not know about patterns. According to recent research from Vanderbilt University, children are capable of working with patterns in much more complicated ways than we usually ask of them. And these more sophisticated uses of patterns are what can really get your child primed for learning! So give your kid's pattern prowess a boost by trying out some of these fun ways to play, talk and learn with patterns. They start out simple, and then get increasingly complex. But don't worry – your kids can totally handle it!
LEVEL 1: Copy or Extend a Pattern
This is the simplest and most common way we present patterns to kids. Just create a pattern – like a row of cards that alternates red, blue, red, blue – and then ask your child to copy it ("I made a pattern. Can you make the same pattern?") or extend it ("I made a pattern. Can you continue my pattern?"). It's important to realize that kids could complete these activities by merely duplicating what's in front of them without actually understanding how the pattern works. So be sure to ask questions and discuss your little one's performance, so that you'll know exactly what your child knows at this stage.
LEVEL 2: Transfer a Pattern
Ready to take your pattern play up a notch? Make a pattern using one set of objects – like fork, fork, spoon, fork, fork, spoon – and then have your child duplicate your pattern using totally different objects – like sock, sock, underpants, sock, sock, underpants. Kids who can do this are showing you some sophisticated pattern skills, since they've had to determine the sequence that defines the pattern (A, A, B or Same, Same, Different), and then apply it in a new way.
LEVEL 3: Practice Pattern Unit Recognition
For kids to become true pattern pros, they need to be able to identify the smallest units of a pattern. To help your child accomplish this, try building a tall tower of blocks stacked in a pattern – like orange, orange, yellow, orange, orange, yellow, and so on.
Then ask your child, “What’s the smallest tower you could make that still has this pattern in it?” (An orange, orange, yellow stack of three blocks is the correct answer, of course!) Asking your child to “point to where the pattern starts over again” has the same basic effect. For children to answer these questions correctly, they must first understand that what makes a pattern is a repeating unit, and then they must accurately isolate that unit. Pretty complex stuff, kiddos. LEVEL 4: Engage in High-Level Pattern Practice Has your child mastered all the pattern basics above? Don’t stop now! Add a few wrinkles to how you interact with patterns, and they can start promoting more than just math skills. - Develop critical thinking. Help your child learn to talk about the patterns you see in more sophisticated ways, moving from shallow statements (“it’s red, red, blue”) to deeper explanations (“it’s two same and one different”). - Build memory. Have your child study a pattern, and then try to reproduce it from memory after you take the pattern away. - Stimulate creativity. Encourage your child to make patterns out of stuff you have lying around the house – like LEGOs, crackers, books, things they draw themselves or your household pets. What are some ways that you like to play with patterns with your kids? Share your ideas in the comments! Practice patterns with these activities:
In a future shaped by climate change, where will ocean life still thrive? Are there heat-resistant organisms that can survive a warmer ocean? A recent study from Stanford University published this week in the Proceedings of the National Academy of Sciences opens a window into how some reef-building corals can live in unusually high temperatures, and may hold a key to species survival for organisms around the world. “If we can find populations most likely to resist climate change and map where they are, then we can protect them,” said Stephen Palumbi, director of Stanford University’s Hopkins Marine Station and leader of the research team. “It’s of paramount importance, because climate change is here.” Coral reefs are crucial sources of fisheries, aquaculture and storm protection for about 1 billion people worldwide. These highly productive ecosystems are constructed by reef-building corals: tiny animals that grow to form colonies so big they can be seen from space. But overfishing and pollution, plus rising temperatures and acidity, have destroyed half of the world’s corals during the past 20 years. The growing threat of climate change makes it imperative to understand how corals respond to extreme temperatures and other environmental stresses. Corals are tropical animals that live in perpetual danger of overheating. If the temperature goes up just 1-2 degrees, they begin to suffer. Yet there are patches of live and healthy coral that break the rules. “Researchers in the US National Park of American Samoa discovered something amazing,” Palumbi recounts. “Some of the reefs there heat up to lethal temperatures in the summer. But corals there surprised everyone by thriving, not dying.” Palumbi, lead author Daniel Barshis, a Stanford postdoctoral scholar, and other researchers have spent the last five years mapping the heat-resistant corals in Samoa. They found what they call the world’s strongest corals near the Ofu runway, and they are using the Ofu reef as a natural laboratory to discover the mechanism that corals use to overcome temperature limits. Recent advances in DNA sequencing technology and the first coral genome sequence let the group chart how the corals change the way they use their genome during heat stress, and what makes a heat-resistant coral. Heat-resistant and heat-sensitive corals had a similar reaction to experimental heat: hundreds of genes “changed expression” or turned on to reduce and repair damage. However, the heat-resistant corals showed an unexpected pattern. “About sixty heat stress genes were already turned on even before the experiment began,” Barshis explained. These genes are “frontloaded” by heat-resistant corals – already turned on and ready to work even before the heat stress began. “It's like having already-charged batteries in your flashlight before a hurricane arrives,” Palumbi said, “instead of going out to get them in the storm.” The findings show that DNA sequencing can offer broad insights into the differences that may allow some organisms to persist longer amid future changes to global climate. “We’re going to put a lot of effort into protecting coral reefs, but what happens if we wake up in 30 years and all our efforts are in vain because those corals have succumbed to climate change?” Barshis asked. As with strong corals, finding species most likely to endure climate change – “resilience mapping” – is the first step toward protecting them, Palumbi said. “Some of the solutions that we’re looking for must already be out there in the world.”
The Western Arctic The western Arctic lies at the northern edge of Alaska, west of the Arctic Refuge and Prudhoe Bay, bounded by the Arctic Ocean to the north. The western Arctic accounts for 35 million acres, the majority of which comprise the 23.5-million-acre National Petroleum Reserve-Alaska, the largest unprotected block of land in the federal land system. The western Arctic is an area of varied topography that ranges from coastal lagoons in the north to rugged mountains in the south. Its varied ecosystems and habitats support a diversity of fish and wildlife, including large populations of grizzly and polar bear, muskox and caribou, arctic fox and wolves, seals and bowhead whales, and several species of anadromous fish. The coastal lagoons and plain, tundra wetlands, and lakes provide extremely valuable nesting, staging, feeding, and molting areas for millions of waterfowl, sea and shorebirds, including yellow-billed loons, Pacific black brant, spectacled and Steller's eiders, and sandpipers. Local residents, mainly Inupiat, who have lived there for centuries, continue to live close to the land in this region, depending on wildlife and other resources to sustain their families. For the last several millennia and before European settlers began arriving in the nineteenth century, the western Arctic was home to plentiful wildlife, Inupiat Eskimos, and other Arctic peoples. In 1923, President Warren Harding decided to set aside a portion of the western Arctic as an oil reserve for the United States Navy, calling it the Naval Petroleum Reserve No. 4. Since that time, however, no oil development has occurred in the reserve and only sporadic exploration activity has affected the region. In 1976, Congress recognized the unique wilderness and wildlife values of the reserve and transferred its management to the Department of the Interior. The area was also renamed the National Petroleum Reserve-Alaska. Congress provided special protection for sensitive ecosystems in the area, including Teshekpuk Lake and the Colville River. Congress then directed the Secretary of the Interior to study the many values of the area and put off making any decisions about the future of the reserve. Since 1976, Congress has allowed some limited oil leasing in the area (however, all leases granted in the 1980s have expired) and considered bills that would turn the reserve into a national wildlife refuge, but no final action was taken. Now, even though Congress has never resolved the question of protection for the key regions within the reserve, the Bush administration is pursuing aggressive development of the reserve, threatening these pristine public lands and the wildlife and people depending on them. The various legal actions taken by Earthjustice and its clients in the western Arctic focus on protecting the most sensitive areas, and their key values, from oil and gas development. These areas include:
- Teshekpuk River Region, including the Dease Inlet-Meade River
- Colville River Watershed
- Ikpikpuk River Region
How is poverty measured in the United States? The two federal poverty measures in the U.S. Each year, the U.S. Census Bureau counts people in poverty with two measures. Both the official and supplemental poverty measures are based on estimates of the level of income needed to cover basic needs. Those who live in households with earnings below those incomes are considered to be in poverty. Both the official and supplemental poverty measures are annual estimates based on a sampling of U.S. households. In 2015, the Current Population Survey (CPS) Annual Social and Economic Supplement (ASEC) was sent to about 95,000 U.S. households across the 50 states and the District of Columbia. Since this is a household survey, the sample excludes many who might otherwise be considered to be in poverty. The sample excludes those who are homeless and not living in shelters. It also excludes military personnel who do not live with at least one civilian adult, as well as people in institutions such as prisons, long-term care hospitals and nursing homes. The official poverty measure The official poverty measure has been used to estimate the national poverty rate from 1959 onward. The measure is used to create income thresholds that determine how many people are in poverty. Income thresholds by the official poverty measure are established by tripling the inflation-adjusted cost of a minimum food diet in 1963 and adjusting for family size, composition and the age of the householder. The Census Bureau also provides data using ratios that compare the income levels of people or families with their poverty threshold: - A household income above 100% of their poverty threshold is considered “above the poverty level.” - Income above 100% but below 125% of poverty is considered “near poverty.” - Households with incomes at or below 100% are considered “in poverty.” - Household incomes below 50% of their poverty threshold are considered to be in “severe” or “deep poverty.” The official poverty measure provides guidance for government poverty policy and programs. The official measure thresholds are the basis for the U.S. Department of Health and Human Services poverty guidelines, which determine government program eligibility. The supplemental poverty measure The supplemental poverty measure provides a more complex statistical understanding of poverty by including money income from all sources, including government programs, and an estimate of real household expenditures. This information is valuable, but this measure’s thresholds are not the basis for government program income eligibility. The measure was developed by a 2010 government technical working group. In 2011, its first year of use, it showed that 16 percent of Americans lived in poverty during 2010, compared to 15.1 percent from the official poverty measure. This measure also shows the effect that a number of safety net programs have on poverty rates. In 2015, for example, Social Security reduced poverty overall by 8.3 percentage points. Refundable tax credits reduced poverty by about 2.9 percentage points, with the largest reduction among children under 18 years of age. Importantly, the supplemental poverty measure showed a wider variation of poverty from state to state. For example, it found that over a three-year average from 2013-15, California had a poverty rate of 15 percent by the official measure. By the supplemental measure, California's poverty rate was 20.6 percent, the highest in the nation.
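To make the income-to-threshold ratios above concrete, here is a minimal sketch in Python (our own illustration, not Census Bureau code; the bands are treated as mutually exclusive here even though "near poverty" households are also above the poverty level, and real thresholds vary by family size, composition and householder age):

```python
# Classify a household by the ratio of its income to its poverty
# threshold, following the categories described above.
# Illustrative only: bands are treated as mutually exclusive.

def poverty_category(income: float, threshold: float) -> str:
    ratio = income / threshold
    if ratio < 0.5:
        return "severe or deep poverty"
    elif ratio <= 1.0:
        return "in poverty"
    elif ratio < 1.25:
        return "near poverty"
    else:
        return "above the poverty level"

print(poverty_category(12_000, 26_000))  # severe or deep poverty
print(poverty_category(30_000, 26_000))  # near poverty
```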
For more information: Proctor, Bernadette D. et al. 2015. “Income and Poverty in the United States: 2015.” U.S. Census Bureau. Renwick, Trudi, et al. 2015. “The Supplemental Poverty Measure: 2015.” U.S. Census Bureau. U.S. Census Bureau. 2016 (updated). “How the Census Bureau Measures Poverty.” U.S. Dept. of Health and Human Services. 2015. “Frequently Asked Questions Related to the Poverty Guidelines and Poverty.”
Extending approximately 650 km from east to west, the Nullarbor Plain runs from the south-western coast of South Australia to south-eastern Western Australia. It is bordered by the Great Australian Bight, where it ends in steep cliffs. Its maximum altitude is about 200 m. Rainfall on the Nullarbor Plain is less than 250 mm a year and there is little surface water, as most of it drains through porous limestone to form sink holes. The water, in turn, flows into underground caverns, where subterranean lakes are formed. Discoveries of fossil bones and shells indicate that at one time the plain was part of the sea bed. The Nullarbor is crossed by the Trans-Australian Railway (the train which traverses the route from Perth to Sydney is known as the Indian Pacific, after the two oceans it joins), which contains the world's greatest length of straight track (479 km). The first European to cross the Nullarbor Plain from east to west was Edward Eyre in 1841. The trek in the other direction was made by Alexander and John Forrest in 1870. Despite a widespread belief that the plain's name is Aboriginal, it actually comes from the Latin for "no tree" (nullus arbor).
Mangrove forests habitat in the coastal zone of Galapagos Galapagos Islands mangrove forests Extending along the shores of many islands, one finds forests of mangroves of four species: red, black, white and button. A rich concentration of nutrients and plankton flows in and out with the tides, making mangrove forests important breeding and nursery grounds for fishes and invertebrates. They are also used as nesting sites by many birds. Mangrove swamps consist of a variety of salt-tolerant trees and shrubs that thrive in shallow and muddy saltwater or brackish waters. Mangroves can easily be identified by their root system. These roots have been specially adapted to their conditions by extending above the water. Vertical branches, pneumatophores, act as aerating organs, filtering the salt out and allowing the leaves to receive fresh water. Mangroves are thought to have originated in the Far East; then, over millions of years, the plants and seeds floated west across the ocean to the Galapagos Islands. Mangroves live within specific zones in their ecosystem. Depending on the species, they occur along the shoreline or in sheltered bays, while others are found further inland in estuaries. Mangroves also vary in height depending on species and environment. The Galapagos is home to 4 types of mangroves: Black Mangrove (Avicennia germinans) has the most salt-tolerant leaves of all the mangroves, equipped with special salt-extracting glands. Trees grow to 65 ft (20 m) in height; the long spreading branches are covered by a dark brown bark. Leaves grow in pairs, leathery in texture with a narrow oval shape. The top of the leaf is dark green and the bottom is pale with hairs, often coated with salt. The trees' yellow flowers grow in clusters, developing into a green lima-bean-shaped fruit. Black mangroves have a carpet of short aerial roots or pneumatophores surrounding the base of the tree. Red Mangrove (Rhizophora mangle) is the most common in the Galapagos, named for its reddish wood. This species is used around the world as a source of charcoal and tannins for leather working. Trees grow to 72 ft (22 m) in height, yet red mangroves can also be seen as small bushes. The thick leathery leaves grow in pairs, with a dark green leaf above and a pale yellow leaf below. Red mangroves have yellow flowers that grow in groups of 2 or 3. Red mangroves can be seen growing near the low tide zone as well as at higher elevations mixed with other mangrove species. Button Mangrove or Buttonwood (Conocarpus erecta) is not a true mangrove, yet this tree is usually found at the higher mangrove elevations. They have dark gray bark and leaves which are either oval, leathery and smooth green, or sharply pointed with salt glands at the base. Buttonwoods have green flowers that mature into a round purple fruit. White Mangrove (Laguncularia racemosa) grows into a shrub with aerial roots close to the water. They thrive in areas with infrequent tidal flooding. Leaves are smooth, oblong and light green in color with notched tips.
Pinkeye (also called conjunctivitis) is redness and swelling of the conjunctiva, the mucous membrane that lines the eyelid and eye surface. The lining of the eye is usually clear. If irritation or infection occurs, the lining becomes red and swollen. See pictures of a normal eye and an eye with conjunctivitis. Pinkeye is very common. It usually is not serious and goes away in 7 to 10 days without medical treatment. Common symptoms of pinkeye are: - Eye redness (hyperemia). - Swollen, red eyelids. - More tearing than usual. - Feeling as if something is in the eye (foreign-body sensation). - An itching or burning feeling. - Mild sensitivity to light (photophobia). - Drainage from the eye. Most cases of pinkeye are caused by: - Infections caused by viruses or bacteria. - Dry eyes from lack of tears or exposure to wind and sun. - Chemicals, fumes, or smoke (chemical conjunctivitis). Viral and bacterial pinkeye are contagious and spread very easily. Since most pinkeye is caused by viruses, for which there is usually no medical treatment, preventing its spread is important. Poor hand-washing is the main cause of the spread of pinkeye. Sharing an object, such as a washcloth or towel, with a person who has pinkeye can spread the infection. For tips on how to prevent the spread of pinkeye, see the Prevention section of this topic. People with infectious pinkeye should not go to school or day care, or go to work, until symptoms improve. - If the pinkeye is caused by a virus, the person can usually return to day care, school, or work when symptoms begin to improve, typically in 3 to 5 days. Medicines are not usually used to treat viral pinkeye, so it is important to prevent the spread of the infection. Pinkeye caused by a herpes virus, which is rare, can be treated with an antiviral medicine. Home treatment of viral pinkeye symptoms can help you feel more comfortable while the infection goes away. - If the pinkeye is caused by bacteria, the person can usually return to day care, school, or work 24 hours after an antibiotic has been started if symptoms have improved. Prescription antibiotic treatment usually kills the bacteria that cause pinkeye. Pinkeye may be more serious if you: - Have a condition that decreases your body's ability to fight infection (impaired immune system). - Have vision in only one eye. - Wear contact lenses. Red eye is a more general term that includes not only pinkeye but also many other problems that cause redness on or around the eye, not just the lining. Pinkeye is the main cause of red eye. Red eye has other causes, including: - Foreign bodies, such as metal or insects. For more information, go to the topic Objects in the Eye. - Scrapes, sores, or injury to or infection of deeper parts of the eye (for example, uveitis, iritis, or keratitis). For more information, go to the topic Eye Injuries. - Glaucoma. For more information, go to the topics Eye Problems, Noninjury or Glaucoma. - Infection of the eye socket and areas around the eye. For more information, go to the topic Eye Problems, Noninjury. Swollen, red eyelids may also be caused by styes, a lump called a chalazion, inflammation of the eyelid (blepharitis), or lack of tears (dry eyes). For more information, go to the topics Styes and Chalazia or Eyelid Problems (Blepharitis). Use the Check Your Symptoms section to decide if and when you should see a doctor. Health Tools help you make wise health decisions or take action to improve your health.
Home treatment for pinkeye will help reduce your pain and keep your eye free of drainage. If you wear contacts, remove them and wear glasses until your symptoms have gone away completely. Thoroughly clean your contacts and storage case. Cold compresses or warm compresses (whichever feels best) can be used. If an allergy is the problem, a cool compress may feel better. If the pinkeye is caused by an infection, a warm, moist compress may soothe your eye and help reduce redness and swelling. Warm, moist compresses can spread infection from one eye to the other. Use a different compress for each eye, and use a clean compress for each application. When cleaning your eye, wipe from the inside (next to the nose) toward the outside. Use a clean surface for each wipe so that drainage being cleaned away is not rubbed back across the eye. If tissues or wipes are used, make sure they are put in the trash and not allowed to sit around. If washcloths are used to clean the eye, put them in the laundry right away so that no one else picks them up or uses them. After wiping your eye, wash your hands to prevent the pinkeye from spreading. After pinkeye has been diagnosed: - Take steps to prevent the spread of pinkeye by following the instructions in the Prevention section of this topic. - Do not go to day care, school, or work until the pinkeye has improved: - If the pinkeye is caused by a virus, the person can usually return to day care, school, or work when symptoms begin to improve, typically in 3 to 5 days. Medicines are not usually used to treat viral pinkeye, so preventing its spread is important. Home treatment of the symptoms will help you feel more comfortable while the infection goes away. - If the pinkeye is caused by bacteria, the person can usually return to day care, school, or work after the infection has been treated for 24 hours with an antibiotic and symptoms are improving. Prescription antibiotic treatment usually kills the bacteria that cause pinkeye. - Use medicine as directed. Medicine may include eyedrops and eye ointment. See a picture of inserting eyedrops or inserting eye ointment. For pinkeye related to allergies, antihistamines may help relieve your symptoms. Don't give antihistamines to your child unless you've checked with the doctor first. Symptoms to Watch For During Home Treatment Use the Check Your Symptoms section to evaluate your symptoms if any of the following occur during home treatment: Pinkeye is spread through contact with the eye drainage, which contains the virus or bacteria that caused the pinkeye. Touching an infected eye leaves drainage on your hand. If you touch your other eye or an object when you have drainage on your hand, the virus or bacteria can be spread. The following tips help prevent the spread of pinkeye. Wash your hands before and after: - Touching the eyes or face. - Using medicine in the eyes. Other precautions: - Do not share eye makeup. - Do not use eye makeup until the infection is fully cured, because you could reinfect yourself with the eye makeup products. If your eye infection was caused by bacteria or a virus, throw away your old makeup and buy new products. - Do not share contact lens equipment, containers, or solutions. - Do not wear contact lenses until the infection is cured. Thoroughly clean your contacts before wearing them again. - Do not share eye medicine. - Do not share towels, linens, pillows, or handkerchiefs.
Use clean linens, towels, and washcloths daily. - Wash your hands and wear gloves if you are looking into someone else's eye for a foreign object or helping someone else apply an eye medicine. - Wear eye protection when in the wind, heat, or cold to prevent eye irritation. - Wear safety glasses when working with chemicals. Preparing For Your Appointment To prepare for your appointment, see the topic Making the Most of Your Appointment. You can help your doctor diagnose and treat your condition by being prepared to answer the following questions: - What are your main symptoms? - How long have you had your symptoms? - Have you had any vision changes, increased pain in the eye, or increased sensitivity to light? - Have you had this problem before? If so, do you know what caused the problem at the time? How was it treated? - Do you wear contact lenses or eyeglasses? - Does anyone in your family or at your workplace have signs of an eye infection, such as drainage from the eye or red and swollen eyes? - Have you been exposed to fumes or chemicals? - What home treatment measures have you tried? Did they help? - What prescription or nonprescription medicines have you tried? Did they help? - Do you have any health risks?
Author: Jan Nissl, RN, BS
Editor: Susan Van Houten, RN, BSN, MBA
Associate Editor: Tracy Landauer
Primary Medical Reviewer: William M. Green, MD - Emergency Medicine
Primary Medical Reviewer: Steven L. Schneider, MD - Family Medicine
Specialist Medical Reviewer: Christopher J. Rudnisky, MD, FRCSC - Ophthalmology
Specialist Medical Reviewer: Adam Husney, MD - Family Medicine
Last Updated: December 6, 2009
Work Step by Step
The probability of an event occurring at least once is equal to 1 minus the probability that the event does not occur:
P(event happening at least once) = 1 - P(event does not happen)
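For a quick worked example (our own numbers, not the textbook's): the probability of rolling at least one six in four rolls of a fair die is 1 - P(no six in four rolls) = 1 - (5/6)^4 ≈ 1 - 0.482 = 0.518.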
In this course we will explore how algebra and algebraic equations can be applied in real-world, career-based situations. This class will look at state and national standards and apply them with a hands-on approach as we use critical thinking and problem-solving skills to tackle mathematical problems. Students will use classroom materials and the ALEKS math program to be able to understand and interpret mathematical concepts. This yearlong class will focus on the fundamentals of algebra. This course will cover: - Data analysis - One- and two-step algebraic equations - Coordinate graphing - Linear equations - Nonlinear equations - Logical reasoning The aim of the class is to give the student a solid mathematical base in algebra to work from, enabling quicker and easier mathematical manipulations both inside and outside of the classroom. The students will learn the skills to logically solve problems, manipulate algebraic expressions, graphically display and understand data, and apply these skills to analyze real-life situations.
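To give a flavor of the kind of problem covered (an illustrative example of ours, not taken from the course materials): a two-step equation such as 3x + 5 = 20 is solved by undoing each operation in turn: subtract 5 from both sides to get 3x = 15, then divide both sides by 3 to get x = 5.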
APPENDIX A: Weaponry and Wartime Experience The Infantryman’s Weapons: His Rifle During the War of 1812, Canadian militiamen used a smooth-bored, muzzle-loaded musket, usually a Brown Bess, with an extremely limited range and a level of accuracy that left much to be desired. A British officer of the period left this description of the effectiveness of this weapon: The soldier's musket, if it is not too badly calibrated, which is very often the case, can strike a man at a distance of 80 yards and even up to 100 yards. But a soldier has to be very unlucky even to be wounded at a distance of 150 yards, this on condition that his adversary aims well. As for firing on a man at a distance of 200 yards, you might as well aim at the moon hoping to strike it. Moreover, the Brown Bess permitted only two shots a minute, or, occasionally, in the hands of an extremely well-trained soldier, three. A century later, during the First World War, the descendant of the militiaman of 1812 found himself on the battlefields of Europe with a rifle that was much easier to load, with remarkably improved accuracy and considerably increased range. With its rifled barrel, the Canadian infantryman's Short Magazine Lee-Enfield rifle (S.M.L.E.) could fire at a range of over 2,000 yards (1,830 m) at an average rate of 10 shots a minute, even 15 in the hands of a highly skilled shooter. In fact the S.M.L.E. was not an invention in itself but the result of a series of technological innovations that emerged primarily in the second half of the 19th century. At about the midpoint of the century, the new industrial ability to rifle gun barrels contributed to the spread of such rifles. Spiral grooves cut into the rifle bore imparted a rotating movement to the projectile that persisted throughout its trajectory, delivering both greater accuracy and greater range. The early 1850s also saw the development of the self-contained metal cartridge with a central percussion unit containing powder, bullet and primer. This invention helped to make breech loading more common during the 1860s: Gone was the lengthy and inconvenient process of forcing the bullet into the rifle's bore with a metal rod driven home with a mallet. Soon projectiles would take on a cylindro-conical shape that made them even more effective. At this stage in its development, the rifle still contained only a single shot: Each cartridge had to be inserted manually. During the last quarter of the century a more rapidly firing rifle was developed. It had a locked breech that combined the cartridge chamber with the firing system. Both simple and strong, the lever locking made it possible to clear, open, extract and eject the spent case, load a new cartridge, lock the system and arm the firing pin. The subsequent appearance of the clip magazine containing several cartridges successively introduced into the breech by a spring system gave birth to the repeating rifle. Finally, during the 1890s the Swedish chemist Alfred Nobel invented cordite, a smokeless powder containing nitroglycerine, a powerful explosive that was immediately adopted as a propellant for projectiles. This invention was the last in the list of refinements characterizing the rifle used by troopers in the First World War, a weapon that had become vastly more deadly than its predecessor of the early 19th century.
At Scissett Middle School we believe that education is a holistic process encompassing the whole child; our inherent ethos is dedicated to making every experience a learning or enriching one. It is important that our pupils become valuable and fully rounded members of society who treat others with respect and tolerance, regardless of background or belief. Our aim is for every member of our school to embrace and promote the fundamental human values of democracy, the rule of law, individual liberty, mutual respect and tolerance for those of different faiths and beliefs. We expect all our children to understand the importance of these values and leave our school prepared for life in modern Britain. Within the English Department, we believe that all we do is driven by cultural values and reflects the ever-changing world in which we live. As such, we aim to select suitable high-quality literature and read, write and discuss topics of local, national and global importance that shape our environment. Within the English curriculum, we explore these five strands in some of the following ways:
Democracy
We use discursive and persuasive arguments to debate, discuss and examine the power of words (both written and spoken) and images to influence and change opinion. Students are given a variety of opportunities to discuss and debate throughout all units of work. Occasionally, they are given the opportunity to ‘vote with their feet’, which allows students to develop their own viewpoint independently and justify it with the power of their words.
The rule of law
Through discussion and examination of selected texts, pupils will gain a greater understanding of the law, past and present. Pupils may examine arguments for and against capital punishment whilst they study the text Holes, or animal rights whilst studying a range of animal poetry, and compare and contrast laws that differ between countries. Additionally, literature may be studied that gives rise to similar discussion: for example, should George, a key character in the novel Of Mice and Men by John Steinbeck, be imprisoned for the shooting of Lennie, or could it be considered an act of compassion? Students will also get the opportunity to explore the theme of justice within Shakespeare’s King Lear.
Individual liberty
Students are encouraged to explore and discuss texts which examine individual liberty and the right to hold personal beliefs and opinions. For instance, war poetry is studied in depth and the civil liberties of those involved in conflicts around the world, past and present, are also explored. Another example is explored within the non-fiction unit the Supernatural. Students will write to a real-life person who is planning on going to live on the planet Mars. In their letters, students will give their opinion on this matter, but they will treat it with sensitivity and mutual respect.
Mutual respect
Pupils are taught to respect the cultures and beliefs of others through the media of poetry, narrative, journalism and other text types. They are taught and encouraged to think and write with empathy and recognise that behaviour has its consequences. It is crucial, also, in speaking and listening exercises that children learn to listen and respond appropriately and respectfully and expect this in return. Within the novel A Monster Calls, students are able to empathise with the main character, who experiences many struggles throughout the novel.
Whole-class guided reading lessons in year 6 also encourage a respectful and cooperative learning environment in which students share and nurture their skills and love of reading.
Tolerance of those of different faiths and beliefs
Tolerance is promoted by the varied text types selected. Acceptance of others is fundamental to being able to empathise, which is a key skill of English. Poetry from other cultures examines a number of issues of tolerance, and texts such as Holes and Of Mice and Men explore topics such as racism, discrimination and equality. ‘It is impossible to teach English without constant reference, implicit or explicit, to the values embedded in language and literary culture. NATE (National Association for the Teaching of English) believes that the subject should be seen not merely in instrumental terms but as a cultural study in which questions of values are constantly brought into focus for open discussion by reference, both to the enduring texts of literature, and to the emerging texts of contemporary media.’ Tom Rank, for NATE, 4 February 2015
One of the rare and brief bursts of cosmic radio waves that have puzzled astronomers since they were first detected nearly 10 years ago has finally been tied to a source: an older dwarf galaxy more than 3 billion light-years from Earth. Fast radio bursts, which flash for just a few milliseconds, created a stir among astronomers because they seemed to be coming from outside our galaxy, which means they would have to be very powerful to be seen from Earth, and because none of those first observed were ever seen again. A repeating burst was discovered in 2012, however, providing an opportunity for a team of researchers to repeatedly monitor its area of the sky with the Karl Jansky Very Large Array in New Mexico and the Arecibo radio dish in Puerto Rico, in hopes of pinpointing its location. Thanks to the development of high-speed data recording and real-time data analysis software by a University of California, Berkeley, astronomer, the VLA last year detected a total of nine bursts over a period of a month, sufficient to locate it within a tenth of an arcsecond. Subsequently, larger European and American radio interferometer arrays pinpointed it to within one-hundredth of an arcsecond, within a region about 100 light-years in diameter. Deep imaging of that region by the Gemini North Telescope in Hawaii turned up an optically faint dwarf galaxy that the VLA subsequently discovered also continuously emits low-level radio waves, typical of a galaxy with an active nucleus perhaps indicative of a central supermassive black hole. The galaxy has a low abundance of elements other than hydrogen and helium, suggestive of a galaxy that formed during the universe’s middle age. The origin of a fast radio burst in this type of dwarf galaxy suggests a connection to other energetic events that occur in similar dwarf galaxies, said co-author and UC Berkeley astronomer Casey Law, who led development of the data-acquisition system and created the analysis software to search for rapid, one-off bursts. Extremely bright exploding stars, called superluminous supernovae, and long gamma ray bursts also occur in this type of galaxy, he noted, and both are hypothesized to be associated with massive, highly magnetic and rapidly rotating neutron stars called magnetars. Neutron stars are dense, compact objects created in supernova explosions, seen mostly as pulsars, because they emit periodic radio pulses as they spin. “All these threads point to the idea that in this environment, something generates these magnetars,” Law said. “It could be created by a superluminous supernova or a long gamma-ray burst, and then later on, as it evolves and its rotation slows down a bit, it produces these fast radio bursts as well as continuous radio emission powered by that spin-down. Later on in life, it looks like the magnetars we see in our galaxy, which have extremely strong magnetic fields but rotate more like ordinary pulsars.” In that interpretation, he said, fast radio bursts are like the tantrums of a toddler. This is only one theory, however. There are many others, though the new data rule out several suggested explanations for the source of these bursts. “We are the first to show that this is a cosmological phenomenon. It’s not something in our backyard. And we are the first to see where this thing is happening, in this little galaxy, which I think is a surprise,” Law said. “Now our objective is to figure out why that happens.”
What Is Errorless Learning? Currently, the educational system, both formal and informal, is based on the method of trial and error. Both in schools and in homes, children are encouraged to try things and learn from their mistakes. However, there’s a method that promotes a more accurate acquisition of knowledge from the beginning. Its name is errorless learning. This methodology is commonly used in adults with brain damage, as experts have shown that making mistakes makes learning far more difficult for them. Nevertheless, this could be a valid alternative in children too. Why may the trial and error method not be appropriate? We must bear in mind that every time we perform an action, our brains establish certain neuronal connections. Whenever we repeat the action, such associations are strengthened and become more accessible to us. By way of an example, every time a child points to a circle and says “circle”, they’re reaffirming the mental association between the object and the word. In this way, it becomes easier and easier for them to identify and define this geometrical shape. But what happens when the child makes a mistake? What happens if they get confused and say “square” when they point to the circle? In principle, an erroneous neuronal connection would be established. However, if the same mistake isn’t repeated, then that connection will lose its strength and there won’t be a problem. However, if the same mistake is made frequently, then the erroneous association will become stronger, and it’s more likely to occur again and again. In addition, the trial and error method can have a negative emotional impact on the child. Repeatedly failing at the same exercise or task can diminish a child’s self-esteem and their perception of their capabilities. The child may also end up rejecting that particular activity or developing negative emotions associated with the teacher or class. For example, a child who didn’t learn how to solve a math problem correctly from the beginning is likely to keep repeating the initial errors, especially if they’ve tried to do it several times using the wrong method. This will only increase their frustration and decrease their motivation in that subject. What is errorless learning? Errorless learning, on the other hand, advocates instilling correct learning right from the start. In this way, only the correct neural connection is established and reinforced: we avoid repeating the error, and the child avoids the frustration and negative feelings. To achieve an accurate performance from the start, you need to follow certain guidelines. How to implement errorless learning Avoid asking the child open-ended questions that could lead to mistakes. For example, it wouldn’t be appropriate to show them three different colored paints and ask them to point out the yellow one. Nor should we ask them what a particular object is called. At least, not until the child has developed sufficient knowledge. - Instead, it’s preferable that we teach them right from the start only the right answers and the right actions and sequences to take. In this way, we would simply have to show them the yellow paint and repeat the word “yellow” until the child reinforces the association. This way, there’s no room for error: the method focuses on establishing the only valid association. - The same thing happens if we want to teach something more complex, such as a sequence of actions. We have to focus on showing it in a clear and slow way from the beginning.
For example, to solve a mathematical problem, we’ll have to sit next to them and guide them, step by step, through the whole process, without letting them get the wrong answer. By repeating the correct sequence, the desired learning will take place. - It’s important to prevent errors as much as possible, but, if they do occur, then we just have to ignore them and focus on the correct way of doing it. This method’s main objective is to reinforce the appropriate approaches to learning. - Finally, it’s vital to ensure that the tasks are appropriate to the child’s abilities. It’s also advisable to teach them in a personalized way and encourage motivation and positive reinforcement. An educational alternative In short, any healthy child has the cognitive skills necessary to detect his or her own mistakes and learn from them. Nevertheless, this alternative is very effective for children suffering from an Autism Spectrum Disorder (ASD) and, in general, whenever we want to avoid frustration and demotivation in our children. All cited sources were thoroughly reviewed by our team to ensure their quality, reliability, currency, and validity. The bibliography of this article was considered reliable and of academic or scientific accuracy. - Melo, R. M. D., Hanna, E. S., & Carmo, J. D. S. (2014). Aprendizaje sin error y la discriminación aprendizaje. Temas em Psicologia, 22(1), 207-222. - Briceño, M. T. (2009). El uso del error en los ambientes de aprendizaje: una visión transdisciplinaria. Revista de teoría y didáctica de las ciencias sociales, (14), 9-28.
A set of 5 color-by-number sheets that can be used to practice mental math. These color-by-number sheets are a fun way for students to practice using mental math to solve operations. How to use this resource in your classroom: - Have each student choose a color-by-number sheet. - Call out two numbers and an operation. Students must do the calculation mentally and color in the answer if it appears on their sheet. - The student who colors in their picture first wins! To make it a bit more challenging, write a number sentence on the board with a missing term and have your students determine the unknown, e.g. 42 + ___ = 61.
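(For that example, students can find the unknown mentally by working backwards: 61 - 42 = 19, so the missing term is 19.)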
Steam should be available at the point of use in the correct quantity, at the correct pressure, clean, dry and free from air and other incondensable gases. This tutorial explains why this is necessary, and how steam quality is assured. The correct quantity of steam must be made available for any heating process to ensure that a sufficient heat flow is provided for heat transfer. Similarly, the correct flowrate must also be supplied so that there is no product spoilage or drop in the rate of production. Steam loads must be properly calculated and pipes must be correctly sized to achieve the flowrates required. Steam should reach the point of use at the required pressure and provide the desired temperature for each application, or performance will be affected. The correct sizing of pipework and pipeline ancillaries will ensure this is achieved. However, even if the pressure gauge is correctly displaying the desired pressure, the corresponding saturation temperature may not be available if the steam contains air and/or incondensable gases. Air is present within the steam supply pipes and equipment at start-up. Even if the system were filled with pure steam the last time it was used, the steam would condense at shutdown, and air would be drawn in by the resultant vacuum. When steam enters the system it will force the air towards either the drain point, or to the point furthest from the steam inlet, known as the remote point. Therefore steam traps with sufficient air venting capacities should be fitted to these drain points, and automatic air vents should be fitted to all remote points. However, if there is any turbulence the steam and air will mix and the air will be carried to the heat transfer surface. As the steam condenses, an insulating layer of air is left behind on the surface, acting as a barrier to heat transfer. In a mixture of air and steam, the presence of air will cause the temperature to be lower than expected. The total pressure of a mixture of gases is made up of the sum of the partial pressures of the components in the mixture. This is known as Dalton’s Law of Partial Pressures. The partial pressure is the pressure exerted by each component if it occupied the same volume as the mixture:

Total pressure = Partial pressure of steam + Partial pressure of air

Note: This is a thermodynamic relationship, so all pressures must be expressed in bar a. Consider a steam/air mixture made up of ¾ steam and ¼ air by volume, with a total pressure of 4 bar a. To determine the temperature of the mixture:

Partial pressure of steam = ¾ x 4 bar a = 3 bar a
Partial pressure of air = ¼ x 4 bar a = 1 bar a

Therefore the steam only has an effective pressure of 3 bar a as opposed to its apparent pressure of 4 bar a. The mixture would only have a temperature of 134 °C (the saturation temperature at 3 bar a) rather than the expected saturation temperature of 144 °C (the saturation temperature at 4 bar a). A short code sketch of this calculation appears at the end of this tutorial. This phenomenon is not only of importance in heat exchange applications (where the heat transfer rate increases with an increase in temperature difference), but also in process applications where a minimum temperature may be required to achieve a chemical or physical change in a product. For instance, a minimum temperature is essential in a steriliser in order to kill bacteria. Air can also enter the system in solution with the boiler feedwater. Make-up water and condensate, exposed to the atmosphere, will readily absorb nitrogen, oxygen and carbon dioxide: the main components of atmospheric air. When the water is heated in the boiler, these gases are released with the steam and carried into the distribution system. Atmospheric air consists of 78% nitrogen, 21% oxygen and 0.03% carbon dioxide, by volume analysis.
However, the solubility of oxygen is roughly twice that of nitrogen, whilst carbon dioxide has a solubility roughly 30 times greater than oxygen! This means that ‘air’ dissolved in the boiler feedwater will contain much larger proportions of carbon dioxide and oxygen: both of which cause corrosion in the boiler and the pipework. The feedtank is typically maintained at a temperature no less than 80 °C so that oxygen and carbon dioxide can be liberated back to the atmosphere, as the solubility of these dissolved gases decreases with increasing temperature. The concentration of dissolved carbon dioxide is also kept to a minimum by demineralising and degassing the make-up water at the external water treatment stage. The concentration of dissolved gas in the water can be determined using Henry’s Law. This states that the mass of gas that can be dissolved by a given volume of liquid is directly proportional to the partial pressure of the gas. This is only true, however, if the temperature is constant and there is no chemical reaction between the liquid and the gas. Layers of scale found on pipe walls may be either due to the formation of rust in older steam systems, or to a carbonate deposit in hard water areas. Other types of dirt which may be found in a steam supply line include welding slag and badly applied or excess jointing material, which may have been left in the system when the pipework was initially installed. These fragments will have the effect of increasing the rate of erosion in pipe bends and the small orifices of steam traps and valves. For this reason it is good engineering practice to fit a pipeline strainer (as shown in Figure 2.4.2). This should be installed upstream of every steam trap, flowmeter, pressure reducing valve and control valve. Steam flows from the inlet A through the perforated screen B to the outlet C. While steam and water will pass readily through the screen, dirt will be arrested. The cap D can be removed, allowing the screen to be withdrawn and cleaned at regular intervals. When strainers are fitted in steam lines, they should be installed on their sides so that the accumulation of condensate and the problem of waterhammer can be avoided. This orientation will also expose the maximum strainer screen area to the flow. A layer of scale may also be present on the heat transfer surface, acting as an additional barrier to heat transfer. Layers of scale are often a result of either incorrect chemical feedwater treatment or periods of peak load. The rate at which this layer builds up can be reduced by careful attention to the boiler operation and by the removal of any droplets of moisture. Incorrect chemical feedwater treatment and periods of peak load can cause priming and carryover of boiler feedwater into the steam mains, leading to chemical and other material being deposited on to heat transfer surfaces. These deposits will accumulate over time, gradually reducing the efficiency of the plant. In addition to this, as the steam leaves the boiler, some of it must condense due to heat loss through the pipe walls. Although these pipes may be well insulated, this process cannot be completely eliminated. The overall result is that steam arriving at the plant is relatively wet, and the droplets of moisture carried along with the steam can erode pipes, fittings and valves, especially if velocities are high. It has already been shown that the presence of water droplets in steam reduces the actual enthalpy of evaporation, and also leads to the formation of scale on the pipe walls and heat transfer surface.
The droplets of water entrained within the steam can also add to the resistant film of water produced as the steam condenses, creating yet another barrier to the heat transfer process. A separator in the steam line will remove moisture droplets entrained in the steam flow, and also any condensate that has gravitated to the bottom of the pipe. In the separator shown in Figure 2.4.3 the steam is forced to change direction several times as it flows through the body. The baffles create an obstacle for the heavier water droplets, while the lighter dry steam is allowed to flow freely through the separator. The moisture droplets run down the baffles and drain through the bottom connection of the separator to a steam trap. This will allow condensate to drain from the system, but will not allow the passage of any steam. As steam begins to condense due to heat losses in the pipe, the condensate forms droplets on the inside of the walls. As they are swept along in the steam flow, they then merge into a film. The condensate then gravitates towards the bottom of the pipe, where the film begins to increase in thickness. The build-up of droplets of condensate along a length of steam pipework can eventually form a slug of water (as shown in Figure 2.4.4), which will be carried at steam velocity along the pipework (25 - 30 m/s). This slug of water is dense and incompressible, and when travelling at high velocity, has a considerable amount of kinetic energy. (As a rough illustration: a 5 kg slug moving at 30 m/s carries ½ x 5 x 30² = 2,250 joules, all of which must go somewhere when the slug is stopped.) The laws of thermodynamics state that energy cannot be created or destroyed, but simply converted into a different form. When obstructed, perhaps by a bend or tee in the pipe, the kinetic energy of the water is converted into pressure energy and a pressure shock is applied to the obstruction. Condensate will also collect at low points, and slugs of condensate may be picked up by the flow of steam and hurled downstream at valves and pipe fittings. These low points might include a sagging main, which may be due to inadequate pipe support or a broken pipe hanger. Other potential sources of waterhammer include the incorrect use of concentric reducers and strainers, or inadequate drainage before a rise in the steam main. Some of these are shown in Figure 2.4.5. The noise and vibration caused by the impact between the slug of water and the obstruction is known as waterhammer. Waterhammer can significantly reduce the life of pipeline ancillaries. In severe cases the fitting may fracture with an almost explosive effect. The consequence may be the loss of live steam at the fracture, creating a hazardous situation. The installation of steam pipework is discussed in detail in Block 10, Steam Distribution.
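As promised above, here is a minimal code sketch of the Dalton's Law example. The four-point steam table below is an illustrative assumption of ours; real work should use full steam table data.

```python
# Effective steam pressure and temperature in a steam/air mixture
# (Dalton's Law). The mini steam table is illustrative only; use
# proper steam table data (e.g. IAPWS-IF97) for real calculations.

SATURATION_TEMP_C = {1.0: 99.6, 2.0: 120.2, 3.0: 133.5, 4.0: 143.6}  # bar a -> degC

def effective_steam_conditions(total_pressure_bar_a, steam_fraction_by_volume):
    """Return (partial pressure of steam, approximate saturation temperature)."""
    p_steam = total_pressure_bar_a * steam_fraction_by_volume
    # Crude nearest-point lookup, purely for illustration:
    nearest = min(SATURATION_TEMP_C, key=lambda p: abs(p - p_steam))
    return p_steam, SATURATION_TEMP_C[nearest]

p, t = effective_steam_conditions(4.0, 0.75)  # the 3/4 steam, 1/4 air example
print(f"Effective steam pressure: {p} bar a, saturation temperature ~{t} degC")
# -> Effective steam pressure: 3.0 bar a, saturation temperature ~133.5 degC
```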
If we take the equivalent circuit of an SCR and add another external terminal, connected to the base of the top transistor and the collector of the bottom transistor, we have a device known as a silicon-controlled switch, or SCS: (Figure below) The Silicon-Controlled Switch (SCS) This extra terminal allows more control to be exerted over the device, particularly in the mode of forced commutation, where an external signal forces it to turn off while the main current through the device has not yet fallen below the holding current value. Note that the motor is in the anode gate circuit in Figure below. This is correct, although it doesn’t look right. The anode lead is required to switch the SCS off. Therefore the motor cannot be in series with the anode. SCS: Motor start/stop circuit, an equivalent circuit with two transistors. When the “on” pushbutton switch is actuated, the voltage applied between the cathode gate and the cathode forward-biases the lower transistor’s base-emitter junction, turning it on. The top transistor of the SCS is ready to conduct, having been supplied with a current path from its emitter terminal (the SCS’s anode terminal) through resistor R2 to the positive side of the power supply. As in the case of the SCR, both transistors turn on and maintain each other in the “on” mode. When the lower transistor turns on, it conducts the motor’s load current, and the motor starts and runs. The motor may be stopped by interrupting the power supply, as with an SCR, and this is called natural commutation. However, the SCS provides us with another means of turning off: forced commutation by shorting the anode terminal to the cathode. If this is done (by actuating the “off” pushbutton switch), the upper transistor within the SCS will lose its emitter current, thus halting current through the base of the lower transistor. When the lower transistor turns off, it breaks the circuit for base current through the top transistor (securing its “off” state), and the motor stops. The SCS will remain in the off condition until such time that the “on” pushbutton switch is re-actuated. - A silicon-controlled switch, or SCS, is essentially an SCR with an extra gate terminal. - Typically, the load current through an SCS is carried by the anode gate and cathode terminals, with the cathode gate and anode terminals sufficing as control leads. - An SCS is turned on by applying a positive voltage between the cathode gate and cathode terminals. It may be turned off (forced commutation) by applying a negative voltage between the anode and cathode terminals, or simply by shorting those two terminals together. The anode terminal must be kept positive with respect to the cathode in order for the SCS to latch.
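To summarize the latching behaviour described above, here is a small behavioural sketch in Python (a state-machine illustration of ours, not an electrical model of the device):

```python
# Behavioural sketch of an SCS latch (illustration only, not a circuit model).

class SCS:
    def __init__(self):
        self.conducting = False  # the latched "on" state

    def press_on(self):
        # A positive voltage between cathode gate and cathode turns the
        # lower transistor on; the two transistors then hold each other on.
        self.conducting = True

    def press_off(self):
        # Forced commutation: shorting anode to cathode starves the upper
        # transistor of emitter current and the latch collapses.
        self.conducting = False

    def check_holding_current(self, above_holding_value):
        # Natural commutation: the latch also drops out if the main
        # current falls below the holding current value.
        if not above_holding_value:
            self.conducting = False
        return self.conducting

motor_switch = SCS()
motor_switch.press_on()
print(motor_switch.check_holding_current(True))   # True: motor runs
motor_switch.press_off()
print(motor_switch.check_holding_current(True))   # False: motor stopped
```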
Corn bran is a food product made from the tough outer layer of corn. Like the brans derived from other grain crops, it is very high in fiber, and it can be used in a wide variety of ways. Many commercial food producers use this substance as a filler in their foods, and to reduce the caloric value of snack foods. It can also be used in home cooking to increase the fiber content of various foods and to add texture. Grains have three parts: the bran, the endosperm, and the germ. The bran is the hard outer shell which protects the grain from the elements. Inside the bran is the endosperm, the bulk of the grain, with the nutrient-rich germ at one end of the grain. In the event that grain is allowed to develop into a seedling, the bran will eventually split open to allow the roots and leaves of the baby plant to emerge. Basic white flour, including white corn flour, is made from the endosperm alone. The endosperm has a soft, mild flavor, but it tends to be lower in fiber and nutrients than the grain as a whole. When the germ is included, the nutritional value increases, and when the bran is included to make whole grain flour, the fiber content also rises. Corn bran can be processed and sold independently, or simply left on the corn as it is ground into flour. Many corn products such as grits include the bran and the germ for extra nutritional value, and some finer-ground corn flour products may be made with the bran intact as well. When whole grain corn flour is used in a recipe, it tends to be more coarse than white corn flour, with a more complicated texture and flavor. When plain corn bran is added to a recipe, it greatly increases the fiber content. It can be used in things like cereal, chips, snack bars, and so forth to up the fiber. Because it is largely indigestible, it has a minimal impact on calorie count, so foods designed for dieters are often made with bran to keep the calories low and the food filling. Corn bran is also low in carbohydrates, which is useful for cooks who want to reduce the carbohydrate content of their foods; many low carb foods use this substance to add a corn flavor.
This story speaks of the importance of giving. When hard times fall on his land, Buddha reaches out to the wealthy, asking them to help feed the poor. The rich people grumble and refuse until a young, well-to-do girl steps forward and offers to take her bowl from house to house to be filled for those less fortunate than herself. Supriya succeeds, and many in the land fill her bowl and their own to give to the poor. Katherine Scholes begins this informative piece by describing the multi-faceted nature of the word "peace" and what it can mean to different people at different times. Then she provides concrete ways that each of us can be a peacemaker. Agree/disagree statements challenge students to think critically about their knowledge of a topic, theme or text. The strategy exposes students to the major ideas in a text before reading—engaging their thinking and motivating them to learn more. It also requires them to reconsider their original thinking after reading the text and to use textual evidence to support and explain their thinking.
Water is an important part of life; it is what makes life possible. Yet when it flows in huge quantities, it becomes a problem. Floods occur when the supply of water in a water body such as a stream or a river exceeds its capacity. Every water body, whether a pond, stream, lake or river, has a certain capacity to contain water. If this capacity is exceeded, the water starts to flow over the water body's borders and causes a flood. A flood can be very destructive, and the damage it causes may be irreparable. Many major floods damage entire cities and towns; a large flood can drown as much as eighty to ninety percent of a city. Most cities have some mechanism in place to avoid or reduce flood damage. Some of these techniques are relatively simple; others are more complicated and involve a number of different steps. The residents of towns in danger of flooding need to be trained to handle floods and to avoid the damage caused by them. Places located near water bodies such as rivers or streams are at the greatest risk of flooding, because their proximity to the flooding site makes them more vulnerable. This is why people prefer to live away from flood-prone sites, why the price of real estate near rivers and other flood-prone sites is much lower than that of faraway sites, and why people often move from places near rivers to places farther away. People also choose to train themselves in flood damage reduction; classes are available that teach different ways to reduce or avoid the dangers caused by flooding. Dams and other barriers can be built to keep overflowing water out. Dams are a very effective measure for keeping flood water out: they are successful at controlling routine floods and overflows in eighty to ninety percent of cases. In the rare chance of their failure, other means can be devised. Other measures to control flood damage are less effective, but they have to be employed from time to time because the primary flood control measures are often insufficient, particularly in cases of severe flooding. Severe flooding causes more damage than moderate flooding because it involves a greater volume of water.
The Responsive Classroom approach to teaching is composed of a set of well-designed practices intended to create safe, joyful, and engaging classroom and school communities. The emphasis is on helping students develop their academic, social, and emotional skills in a learning environment that is developmentally responsive to their strengths and needs. In order to be successful in and out of school, students need to learn a set of social and emotional competencies—cooperation, assertiveness, responsibility, empathy, and self-control—and a set of academic competencies—academic mindset, perseverance, learning strategies, and academic behaviors. The Responsive Classroom approach is informed by the work of educational theorists and the experiences of exemplary classroom teachers. Six principles guide this approach:
- Teaching social and emotional skills is as important as teaching academic content.
- How we teach is as important as what we teach.
- Great cognitive growth occurs through social interaction.
- How we work together as adults to create a safe, joyful, and inclusive school environment is as important as our individual contribution or competence.
- What we know and believe about our students—individually, culturally, developmentally—informs our expectations, reactions, and attitudes about those students.
- Partnering with families—knowing them and valuing their contributions—is as important as knowing the children we teach.
Core classroom practices include:
- Morning Meeting—Everyone in the classroom gathers in a circle for twenty to thirty minutes at the beginning of each school day and proceeds through four sequential components: greeting, sharing, group activity, and morning message.
- Establishing Rules—Teacher and students work together to name individual goals for the year and establish rules that will help everyone reach those goals.
- Energizers—Short, playful, whole-group activities that are used as breaks in lessons.
- Quiet Time—A brief, purposeful and relaxed time of transition that takes place after lunch and recess, before the rest of the school day continues.
- Closing Circle—A five- to ten-minute gathering at the end of the day that promotes reflection and celebration through participation in a brief activity or two.
Morning Meeting is an engaging way to start each day, build a strong sense of community, and set children up for success socially and academically. Each morning, students and teachers gather together in a circle for twenty to thirty minutes and interact with one another during four purposeful components:
- Greeting: Students and teachers greet one another by name.
- Sharing: Students share information about important events in their lives. Listeners often offer empathetic comments or ask clarifying questions.
- Group Activity: Everyone participates in a brief, lively activity that fosters group cohesion and helps students practice social and academic skills (for example, reciting a poem, dancing, singing, or playing a game).
- Morning Message: Students read and interact with a short message written by their teacher. The message is crafted to help students focus on the work they’ll do in school that day.
Phonics worksheets and printables: learning the alphabet and how to recognize letters is the first step to literacy, but true reading fluency doesn't take shape until children master phonics. Understanding the sounds each letter makes and learning consonant blends are among an array of topics covered in our printable phonics worksheets. Printable phonics worksheets for kids. Color phonics worksheets. Today we are very excited to introduce our new phonics coloring worksheets for word families, which give kids practice finding and reading words with common phonics spelling rules, plus a chance to color creatively too. Check out our different sets of worksheets that help kids practice and learn phonics skills like beginning sounds, rhyming and more. Assonance occurs when vowel sounds rhyme, as in "brown cow," "green leaf" or "blond hog"; for consonant sounds, Color Phonics uses italic letters to indicate a different sound. You are not required to register in order to use this site. These free phonics worksheets may be used independently and without any obligation to make a purchase, though they work well with the excellent phonics DVD and phonics audio CD programs developed by Rock 'N Learn. Colouring worksheets featuring the popular Jolly Phonics characters Bee, Inky and Snake. Color Phonics is a patented, comprehensive phonics program. It is also a new pronunciation guide which associates vowel sounds with fourteen specific assonant colors rather than with confusing diacritical marks.
Vitamin D is a fat-soluble vitamin that is present in many foods and can also be produced when the skin is exposed to the ultraviolet rays in sunlight. This vitamin is known to be involved in multiple functions in the bones, autoimmune diseases, cell growth, immune function, and neurovascular function. Deficiency of Vitamin D is reportedly widespread across the world, and it can occur for various reasons such as:
- Inadequate sun exposure
- Reduced dietary intake
- Malabsorption syndrome
Deficiency of Vitamin D is very common among people worldwide, and it has become a condition that requires immediate action, as it can impair the functioning of many organs and cause abnormalities in the levels of other important hormones and nutrients. Vitamin D plays a central role in the musculoskeletal system, and any fluctuation in its level has a major impact on bone health. Deficiency of Vitamin D not only affects the quality of bone but also increases the risk of fractures. It is associated with diseases affecting bone health such as rickets, osteoporosis and osteomalacia. Deficiency of Vitamin D also leads to abnormalities in calcium and phosphorus levels and in bone metabolism. It increases osteoclastic activity, which weakens the bones and decreases bone mineral density, further resulting in conditions such as osteoporosis. Abnormalities in phosphorus levels lead to increased phosphorus secretion and hence cause a mineralization defect in the skeleton. People who have specific muscle weakness, muscle aches and pain have been found to have Vitamin D deficiency. The deficiency is more common in the elderly, for whom even a small fall can lead to a fracture. Numerous studies have suggested that Vitamin D is associated with the functioning of the immune system and that its deficiency can lead to an impaired immune system. Immune responses are of two types: the innate response and the adaptive response. The innate response acts against pathogens; in this response, macrophages and monocytes provide defense against the invading pathogens. In contrast, the adaptive response is a pathogen-specific response triggered by the activation of antigen-presenting cells in jawed vertebrates. Vitamin D is involved in the differentiation of monocytes into macrophages, and it plays a vital role in the functioning of the innate immune response by helping kill pathogens. Destruction of self tissues by the immune system causes different types of autoimmune diseases, and Vitamin D is known to have immunomodulatory properties. Vitamin D plays an important role in reducing the risk of autoimmune diseases and cancer, as many immunomodulatory processes are controlled by it. Vitamin D is actively involved in multiple metabolic pathways, and there is evidence to suggest that its deficiency is associated with an increased risk of cardiovascular diseases such as arterial hypertension, dyslipidemia, obesity, myocardial infarction, stroke and coronary artery disease. The association between Vitamin D and high blood pressure involves the relationship between Vitamin D levels and the activity of the renin-angiotensin-aldosterone system. It is established that Vitamin D plays an active role in coronary artery disease through different mechanisms.
Researchers have shown that Vitamin D protects vessel walls against damage caused by inflammation. It does this by increasing the expression of anti-inflammatory cytokines and decreasing the expression of pro-inflammatory molecules. It has been shown that Vitamin D deficiency greatly increases the risk for many neurological diseases such as Alzheimer's disease, Parkinson's disease, depression, autism, epilepsy and multiple sclerosis. Vitamin D has important effects during brain development; these effects are linked to its antioxidative mechanism, its enhancement of nerve activity, and its mood-stabilizing effect. Vitamin D not only regulates neuronal differentiation and the production of factors involved in nerve growth but also serves as a neuroprotective agent. Deficiency of Vitamin D is related to numerous problems: inadequate Vitamin D decreases the production of the substances responsible for limiting infarct size and accelerating neuronal regrowth. Vitamin D is considered an essential nutrient for the human body, and it is required by every system of the body, including the digestive system. According to studies, most patients suffering from diseases like celiac disease, short bowel syndrome, ulcerative colitis and many other intestinal problems are found to lack Vitamin D. In fact, severe deficiency of Vitamin D is found in people suffering from acute pancreatitis, a condition caused by inflammation of the pancreas, the organ that produces digestive enzymes and a number of hormones. In the case of chronic pancreatitis, patients experience wastage of Vitamin D as a consequence of malabsorption and diarrhea.
In this activity, students will use their knowledge of the periodic table and periodic trends to add fictional elements to a periodic table based on their properties. Once the elements are in the correct places, they will reveal a hidden message. This review activity will help students prepare for a summative assessment such as a unit test or final exam. By the end of this activity, students should be able to
- Demonstrate their understanding of the periodic table and trends by placing fictional elements based on their properties.
This activity supports students’ understanding of
- Periodic table
- Periodic trends
Teacher Preparation: 5 minutes
Lesson: 20-30 minutes
- Student document
- No specific safety precautions need to be observed for this activity.
- This activity was highlighted in the ChemFun section of the May 2021 issue of Chemistry Solutions.
- The game was designed to give students practice demonstrating their understanding of the placement of elements in the periodic table based on their properties.
- This activity includes 20 fictional elements that will be placed in the first four rows of the periodic table. Only elements in columns 1, 2, and 13–18 are used.
- This activity can be used as a summative assessment during a unit about the periodic table or as a review before a unit test or exam.
- An Answer Key document has been included for teacher reference.
- Teachers can easily change the hidden message by changing the element symbols given in the clues. If changes are made, just be sure that no symbol appears more than once. (A short code sketch of the final unscrambling step appears after this activity.)
For the Student
Use your knowledge of the periodic table and trends to add fictional elements to a periodic table based on their properties. Once the elements are in the correct places, they will reveal a hidden message. Once you have placed all of the elements into the table correctly, write out the symbols in order of increasing atomic mass and read the hidden message.
- Um is in the 3rd period and has three electrons in its highest p-orbitals.
- St is a gas that does not belong to a specific family.
- Ea, Nd and Dy are all alkali metals.
- F, En, and K are alkaline earth metals.
- A and Rb are halogens.
- Ma, U and R are all elements that have a full octet.
- Me and Ex both form -2 ions.
- The ionization energy trend for the elements that form +1 ions is Ea < Nd < Dy.
- The electronegativity value for En is greater than that of K, but less than that of F.
- A has a smaller atomic radius than Rb.
- R is the noble gas with the largest atomic radius.
- Me has a smaller value for ionization energy than Ex.
- Th, Or and E are all located in the second period.
- Ma has one less proton than Nd.
- Jo would form a +3 cation and is a metal.
- The electronegativity value of Th is less than that of E but greater than that of Or.
- Ys can only form a +4 ion.
Write the hidden message in the space below.
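As a teacher-side illustration only (not part of the student activity), the final unscrambling step can be sketched in a few lines of Python. The symbol-to-number placements below are invented placeholders, not the answer key:

```python
# Once each fictional element has a position, sort by atomic number
# (which tracks increasing atomic mass in a well-behaved fictional table)
# and concatenate the symbols to reveal the message.
placed = {"H": 1, "I": 2, "D": 3}  # symbol -> atomic number; hypothetical data
message = "".join(sym for sym, z in sorted(placed.items(), key=lambda kv: kv[1]))
print(message)  # -> "HID"
```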
Polygons are plane figures formed by a closed series of rectilinear segments, e.g. triangles and rectangles.
1. Sum of all the angles of a polygon with n sides = (n-2)π
2. Sum of all exterior angles = 360°
3. Number of sides = 360°/exterior angle
Classification of polygons –
A triangle is a polygon having three sides.
1. Area = 1/2 × base × height
2. Area = √(s(s-a)(s-b)(s-c)), where s = (a+b+c)/2
3. Area = rs (where r is the in-radius and s the semi-perimeter)
4. Area = 1/2 × product of two sides × sine of the included angle
5. Area = abc/4R, where R = circumradius
Congruency of Triangles:
1. SAS congruency: If two sides and the included angle of one triangle are equal to two sides and the included angle of another, the two triangles are congruent.
2. ASA congruency: If two angles and the included side of one triangle are equal to two angles and the included side of another, the triangles are congruent.
3. AAS congruency: If two angles and the side opposite one of the angles are equal to the corresponding angles and side of another triangle, the triangles are congruent.
4. SSS congruency: If three sides of one triangle are equal to three sides of another triangle, the two triangles are congruent.
5. SSA congruency: If two sides and the angle opposite the greater side of one triangle are equal to the two sides and the angle opposite the greater side of another triangle, the triangles are congruent.
Similarity of Triangles:
1. AAA similarity: If the corresponding angles of two triangles are equal, the triangles are similar.
2. SSS similarity: If the corresponding sides of two triangles are proportional, they are similar.
3. SAS similarity: If one pair of corresponding sides of two triangles are proportional and the included angles are equal, the two triangles are similar.
For an equilateral triangle of side a:
1. Height h = a√3/2
2. Area = √3a²/4
3. R (circumradius) = 2h/3 = a/√3
4. r (in-radius) = h/3 = a/(2√3)
5. In an equilateral triangle the orthocentre, incentre, circumcentre and centroid coincide.
For an isosceles triangle with base b and equal sides a: Area = (b/4)√(4a² – b²)
1. Median: A line joining the mid-point of a side of a triangle to the opposite vertex is called a median.
- A median divides a triangle into two parts of equal area.
- The point where the three medians meet is called the centroid of the triangle.
- The centroid of a triangle divides each median in the ratio 2:1.
2. Altitude: A perpendicular drawn from any vertex to the opposite side is called an altitude.
- The point where all the altitudes meet is called the orthocentre of the triangle.
3. Perpendicular bisector: A line that is perpendicular to a side and bisects it is the perpendicular bisector of that side.
- The point at which the perpendicular bisectors of the sides meet is called the circumcentre.
- The circumcentre is the centre of the circle that circumscribes the triangle.
- The lines bisecting the interior angles of a triangle are the angle bisectors of that triangle.
- The angle bisectors meet at a point called the incentre.
- The angle formed by any side at the incentre is always 90° more than half of the angle opposite that side.
For two circles with centres O1 and O2 and radii r1 and r2:
1. Length of the direct common tangent = √[(Distance between centres)² – (r1 – r2)²] = √[(O1O2)² – (r1 – r2)²]
2. Length of the transverse common tangent = √[(Distance between centres)² – (r1 + r2)²] = √[(O1O2)² – (r1 + r2)²]
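As a quick sanity check, the following Python sketch evaluates each of the triangle area formulas above for a 3-4-5 right triangle; every formula should return the same value, 6:

```python
import math

# 3-4-5 right triangle: legs 3 and 4, hypotenuse 5.
a, b, c = 3.0, 4.0, 5.0
s = (a + b + c) / 2                       # semi-perimeter

area_base_height = 0.5 * 3 * 4            # 1/2 x base x height
area_heron = math.sqrt(s * (s - a) * (s - b) * (s - c))
r = area_heron / s                        # in-radius from Area = r*s
area_inradius = r * s
R = (a * b * c) / (4 * area_heron)        # circumradius from Area = abc/4R
area_circum = a * b * c / (4 * R)
angle_C = math.pi / 2                     # angle between the two legs
area_sine = 0.5 * a * b * math.sin(angle_C)

for v in (area_base_height, area_heron, area_inradius, area_circum, area_sine):
    assert abs(v - 6.0) < 1e-9            # every formula gives area 6
```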
Question 1: If each interior angle of a regular polygon is 108°, the number of sides of the polygon is
Solution: Interior angle = 108°, so exterior angle = 180° – 108° = 72°. Number of sides = 360°/exterior angle = 360°/72° = 5.
Question 2: The ratio of the angles of a triangle is 2:3:5. Find the smallest angle of the triangle.
Solution: If the angles are 2x, 3x and 5x, then 2x + 3x + 5x = 180°, so 10x = 180° and x = 18°. Hence the smallest angle = 18° × 2 = 36°.
Question 3: Two medians AD and BE of ∆ABC intersect at O at right angles. If AD = 9 cm and BE = 6 cm, then the length of BD is
Solution: O is the centroid, which divides each median in the ratio 2:1. So AO:OD = 2:1. AD = 3 units -> 9 cm, so 1 unit -> 3 cm and OD = 3 cm. BE = 3 units -> 6 cm, so BO = 4 cm. ∆BOD is a right-angled triangle:
BD² = BO² + OD² = 4² + 3² = 16 + 9 = 25
BD = 5 cm
Question 4: The side AB of parallelogram ABCD is produced to E in such a way that BE = AB. DE intersects BC at Q. The point Q divides BC in the ratio
Solution: According to the question, AD || BC and AB || DC.
∠1 = ∠2 (alternate angles)
∠3 = ∠4 (alternate angles)
and ∠BEQ is common.
By the AAA property the triangles are similar: ∆EQB ∼ ∆EDA.
So EB/EA = EQ/ED = QB/AD. Since AD = BC and EA = 2EB, 1/2 = QB/BC, so BQ = QC.
Hence Q divides BC in the ratio 1:1.
Question 5: In ∆ABC, AB = AC, and BA is produced to D such that AC = AD. Then ∠BCD is
Solution: According to the question, ABC is an isosceles triangle, so ∠ACB = ∠B = θ, and ∠CAD = ∠ACB + ∠B = 2θ (an exterior angle of a triangle is equal to the sum of the opposite interior angles). Since AC = AD, ∆ADC is also an isosceles triangle. In ∆ADC, ∠A + ∠ACD + ∠D = 180°, so 2∠ACD = 180° – 2θ (since ∠ACD = ∠D), giving ∠ACD = 90° – θ. Therefore ∠BCD = θ + 90° – θ = 90°.
Question 6: If O is the circumcentre of ∆PQR, ∠QOR = 110° and ∠OPR = 25°, then ∠PRQ is
Solution: If O is the circumcentre then OP = OR = OQ. Since ∠OPR = 25°, ∠PRO = 25°. In ∆OQR, ∠OQR + ∠ORQ + ∠QOR = 180°, so 2∠ORQ = 180° – 110° and ∠ORQ = 35°. So ∠PRQ = ∠PRO + ∠ORQ = 25° + 35° = 60°.
Question 7: In ∆ABC, DE || AC, where D and E are points on AB and CB respectively. If AB = 20 cm and AD = 8 cm, then BE : CE is
Solution: AB = 20 cm and AD = 8 cm, so BD = 12 cm. Since DE || AC, ∠A = ∠D and ∠C = ∠E, and ∠B is common. By the AAA property, ∆ABC ∼ ∆DBE; therefore BD/AD = BE/CE = 12/8 = 3/2. Hence BE : CE = 3:2.
Question 8: The angle between the internal bisectors of two angles ∠B and ∠C of a triangle is 110°; then ∠A is
Solution: The internal bisectors of the angles intersect at the incentre I, so ∠BIC = 110°. The angle formed by any side at the incentre is always 90° more than half the angle opposite that side, so ∠BIC = 90° + (1/2)∠A, giving (1/2)∠A = 110° – 90° = 20° and ∠A = 20° × 2 = 40°.
Question 9: The distance between two parallel chords, each of length 8 cm, in a circle of diameter 10 cm is
Solution: AB = CD = 8 cm, radius = 10/2 = 5 cm.
OB² = OM² + MB²
5² = OM² + 4²
OM² = 25 – 16, so OM = 3 cm
MN = 2 × OM = 2 × 3 = 6 cm
Question 10: The radii of two concentric circles are 12 cm and 13 cm. If a chord of the greater circle is a tangent to the smaller circle, then the length of that chord is:
Solution: According to the question, AO = 13 cm and OD = 12 cm.
AO² = DO² + AD²
13² = 12² + AD²
AD² = 169 – 144 = 25, so AD = 5 cm
AB = 2 × AD = 10 cm
Question 11: Two tangents are drawn at the extremities of diameter AB of a circle with centre O. If a tangent to the circle at point C intersects the other two tangents at Q and R, then the measure of ∠QOR is
Solution: According to the question,
in ∆OCR and ∆RBO:
OC = OB (radii)
RC = RB (tangents from the same external point)
OR is common.
By the SSS property the triangles are congruent: ∆OCR ≅ ∆RBO. Similarly, ∆OCQ ≅ ∆QAO.
Then ∠COR = ∠ROB = x and ∠AOQ = ∠COQ = y, and 2x + 2y = 180°, so x + y = 90°. Hence ∠QOR = 90°.
Question 12: Two equal circles whose centres are O and O' intersect each other at points A and B. If OO' = 24 cm and AB = 10 cm, then the radius of each circle is
Solution: AB = 10 cm, so AC = BC = 5 cm, and OC = CO' = 12 cm. In the right-angled triangle ∆ACO:
OA² = OC² + AC² = 12² + 5² = 144 + 25 = 169
OA = 13 cm
Question 13: The distance between the centres of two circles of radii 6 cm and 3 cm is 15 cm. The length of the transverse common tangent to the circles is:
Solution: Length of the transverse common tangent
= √[(Distance between centres)² – (r1 + r2)²]
= √[15² – (6 + 3)²]
= √(225 – 81) = 12 cm
Question 14: If the distance between the two points (0, -5) and (x, 0) is 13 units, then the value of x is:
Solution: We know that (Distance)² = (x2 – x1)² + (y2 – y1)². So
13² = (x – 0)² + (0 – (-5))²
169 = x² + 25
x = 12 units
The circle computations above can be checked numerically, as in the sketch below.
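A minimal Python check of Questions 12 and 13, using only the numbers given in the questions:

```python
import math

# Question 12: equal circles, OO' = 24 (so OC = 12), AB = 10 (so AC = 5).
radius = math.hypot(24 / 2, 10 / 2)   # Pythagoras in triangle ACO
assert radius == 13.0

# Question 13: transverse common tangent, d = 15, r1 = 6, r2 = 3.
d, r1, r2 = 15, 6, 3
t = math.sqrt(d**2 - (r1 + r2)**2)
assert t == 12.0
```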
A US Food and Drug Administration ruling this month bans the use of triclosan, triclocarban and 17 other antiseptics from household soaps because they have not been shown to be safe or even to have any benefit. About 40% of soaps use at least one of these chemicals, and the chemicals are also found in toothpaste, baby pacifiers, laundry detergents and clothing. They appear in some lip glosses, deodorants and pet shampoos. The current FDA action bans antiseptics like triclosan in household soaps only. It does not apply to other products like antiseptic gels designed to be used without water, antibacterial toothpaste or the many fabrics and household utensils in which antibacterials are embedded. Data suggest that the toothpastes are very effective for people suffering from gum disease, although it is not clear if they provide substantial benefits for those who don’t have gingivitis. The FDA is currently evaluating the use of antibacterials in gels and will rule on how those products should be handled once the data are in. Although antibacterials are still in products all around us, the current ban is a significant step forward in limiting their use. As microbiologists who study a range of chemicals and microbes, we will explain why we don’t need to kill all the bacteria. We also will explain how antibacterial soaps may even do harm by contributing to antibiotic-resistant strains of bacteria that can be dangerous. Bacteria are everywhere in the environment and almost everywhere in our bodies, and that is mostly good. We rely on bacteria in our guts to provide nutrients and to signal to our brains, and some bacteria on our skin help protect us from harmful pathogens. Some bacteria present in soil and animal waste can cause infections if they are ingested, however, and washing is important to prevent bacteria from spreading to places where they can cause harm. Washing properly with soap and water removes these potential pathogens. If soap and water are sufficient, why were antibacterials like triclosan and triclocarban added in the first place? Triclosan was introduced in 1972. These chemicals were originally used for cleaning solutions, such as before and during surgeries, where removing bacteria is critical and exposure for most people is short. Triclosan and triclocarban may be beneficial in these settings, and the FDA ruling does not affect healthcare or first aid uses of the chemicals. In the 1990s, manufacturers started to incorporate triclosan and triclocarban into products for the average consumer, and many people were attracted by claims that these products killed more bacteria. Now antibacterial chemicals can be found in many household products, from baby toys to fabrics to soaps. Laboratory tests show the addition of these chemicals can reduce the number of bacteria in some situations. However, studies in a range of environments, including urban areas in the United States and squatter settlements in Pakistan, have shown that the inclusion of antibacterials in soap does not reduce the spread of infectious disease. Because the goal of washing is human health, these data indicate that antibacterials in consumer soaps do not provide any benefit. What’s the downside to having antibacterials in soap? It is potentially huge, both for those using it and for society as a whole. One concern is whether the antibacterials can directly harm humans.
Triclosan had become so prevalent in household products that in 2003 a nationwide survey of healthy individuals found it in the urine of 75% of the 2,517 people tested. Triclosan has also been found in human plasma and breast milk. Most studies have not shown any direct toxicity from triclosan, but some animal studies indicate that triclosan can disrupt hormone systems. We do not know yet whether triclosan affects hormones in humans. Another serious concern is the effect of triclosan on antibiotic resistance in bacteria. Bacteria evolve resistance to nearly every threat they face, and triclosan is no exception. Triclosan isn’t used to treat disease, so why does it matter if some bacteria become resistant? Some of the common mechanisms that bacteria use to evade triclosan also let them evade antibiotics that are needed to treat disease. When triclosan is present in the environment, bacteria that have these resistance mechanisms grow better than bacteria that are still susceptible, so the number of resistant bacteria increases. Not only are bacteria adaptable, they are also promiscuous. Genes that let them survive antibiotic treatment are often found on pieces of DNA that can be passed from one bacterium to another, spreading resistance. These mobile pieces of DNA frequently carry several different resistance genes, making the bacteria that contain them resistant to many different drugs. Bacteria that are resistant to triclosan are more likely to also be resistant to unrelated antibiotics, suggesting that the prevalence of triclosan can spread multi-drug resistance. As resistance spreads, we will not be able to kill as many pathogens with existing drugs. Antibiotics were introduced in the 1940s and revolutionized the way we lead our lives. Common infections and minor scrapes that could be fatal became easily treatable. Surgeries that were once unthinkable due to the risk of infection are now routine. However, bacteria are becoming stronger due to decades of antibiotic use and misuse. New drugs will help, but if we do not protect the antibiotics we have now, more people will die from infections that used to be easily treated. Removing triclosan from consumer products will help protect antibiotics and limit the threat of toxicity from extended exposure, without any adverse effect on human health. The FDA ruling is a welcome first step to cleansing the environment of chemicals that provide little health value to most people but pose significant risk to individuals and to public health. To a large extent, this ruling is a victory of science over advertising. This article originally appeared on The Conversation
Gunpowder, also commonly known as black powder to distinguish it from modern smokeless powder, is the earliest known chemical explosive. It consists of a mixture of sulfur, carbon (in the form of charcoal) and potassium nitrate (saltpeter). The sulfur and carbon act as fuels while the saltpeter is an oxidizer. Gunpowder has been widely used as a propellant in firearms, artillery, rocketry, and pyrotechnics, including use as a blasting agent for explosives in quarrying, mining, and road building.

Gunpowder is classified as a low explosive because of its relatively slow decomposition rate and consequently low brisance. Low explosives deflagrate (i.e., burn at subsonic speeds), whereas high explosives detonate, producing a supersonic shockwave. Ignition of gunpowder packed behind a projectile generates enough pressure to force the shot from the muzzle at high speed, but usually not enough force to rupture the gun barrel. It thus makes a good propellant, but is less suitable for shattering rock or fortifications with its low-yield explosive power. Nonetheless it was widely used to fill fused artillery shells (and used in mining and civil engineering projects) until the second half of the 19th century, when the first high explosives were put into use.

Gunpowder is one of the Four Great Inventions of China. Originally developed by the Taoists for medicinal purposes, it was first used for warfare around 904 AD. It spread throughout most parts of Eurasia by the end of the 13th century. Its use in weapons has declined due to smokeless powder replacing it, and it is no longer used for industrial purposes due to its relative inefficiency compared to newer alternatives such as dynamite and ammonium nitrate/fuel oil.

A simple, commonly cited chemical equation for the combustion of gunpowder is:
2 KNO₃ + S + 3 C → K₂S + N₂ + 3 CO₂
A balanced, but still simplified, equation is:
10 KNO₃ + 3 S + 8 C → 2 K₂CO₃ + 3 K₂SO₄ + 6 CO₂ + 5 N₂
(An atom-count check of this balanced equation appears in the sketch at the end of this passage.)

The exact percentages of ingredients varied greatly through the medieval period as the recipes were developed by trial and error, and needed to be updated for changing military technology. Gunpowder does not burn as a single reaction, so the byproducts are not easily predicted. One study showed that it produced (in order of descending quantities) 55.91% solid products (potassium carbonate, potassium sulfate, potassium sulfide, sulfur, potassium nitrate, potassium thiocyanate, carbon, ammonium carbonate), 42.98% gaseous products (carbon dioxide, nitrogen, carbon monoxide, hydrogen sulfide, hydrogen, methane) and 1.11% water.

Gunpowder made with less expensive and more plentiful sodium nitrate instead of potassium nitrate (in appropriate proportions) works just as well. However, it is more hygroscopic than powders made from potassium nitrate. Muzzleloaders have been known to fire after hanging on a wall for decades in a loaded state, provided they remained dry. By contrast, gunpowder made with sodium nitrate must be kept sealed to remain stable.

Gunpowder releases 3 megajoules per kilogram and contains its own oxidant. This is lower than TNT (4.7 megajoules per kilogram) or gasoline (47.2 megajoules per kilogram; gasoline, however, requires an oxidant, so an optimized gasoline and O₂ mixture contains 10.4 megajoules per kilogram).

Gunpowder is a low explosive: it does not detonate, but rather deflagrates (burns quickly). This is an advantage in a propellant device, where one does not desire a shock that would shatter the gun and potentially harm the operator; however, it is a drawback when an explosion is desired.
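As a quick check that the simplified balanced equation above is indeed atom-balanced, here is a short Python sketch; it counts atoms only and says nothing about energetics or the messier real combustion products listed earlier:

```python
from collections import Counter

# Atom-count check for: 10 KNO3 + 3 S + 8 C -> 2 K2CO3 + 3 K2SO4 + 6 CO2 + 5 N2
def atoms(coeff, formula):
    # formula is a dict of element -> atoms per molecule
    return Counter({el: coeff * n for el, n in formula.items()})

left = (atoms(10, {"K": 1, "N": 1, "O": 3})   # 10 KNO3
        + atoms(3, {"S": 1})                   # 3 S
        + atoms(8, {"C": 1}))                  # 8 C
right = (atoms(2, {"K": 2, "C": 1, "O": 3})    # 2 K2CO3
         + atoms(3, {"K": 2, "S": 1, "O": 4})  # 3 K2SO4
         + atoms(6, {"C": 1, "O": 2})          # 6 CO2
         + atoms(5, {"N": 2}))                 # 5 N2
assert left == right  # every element balances
```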
When an explosion is desired, the propellant (and most importantly, the gases produced by its burning) must be confined. Since gunpowder contains its own oxidizer and additionally burns faster under pressure, its combustion is capable of bursting containers such as a shell, grenade, or improvised "pipe bomb" or "pressure cooker" casings to form shrapnel.

In quarrying, high explosives are generally preferred for shattering rock. However, because of its low brisance, gunpowder causes fewer fractures and results in more usable stone compared to other explosives, making it useful for blasting slate, which is fragile, or monumental stone such as granite and marble. Gunpowder is well suited for blank rounds, signal flares, burst charges, and rescue-line launches. It is also used in fireworks for lifting shells, in rockets as fuel, and in certain special effects.

Combustion converts less than half the mass of gunpowder to gas; most of it turns into particulate matter. Some of it is ejected, wasting propelling power, fouling the air, and generally being a nuisance (giving away a soldier's position, generating fog that hinders vision, etc.). Some of it ends up as a thick layer of soot inside the barrel, where it also is a nuisance for subsequent shots and a cause of jamming in an automatic weapon. Moreover, this residue is hygroscopic, and with the addition of moisture absorbed from the air it forms a corrosive substance. The soot contains potassium oxide or sodium oxide that turns into potassium hydroxide or sodium hydroxide, which corrodes wrought iron or steel gun barrels. Gunpowder arms therefore require thorough and regular cleaning to remove the residue.

The first confirmed reference to what can be considered gunpowder in China occurred in the 9th century AD during the Tang dynasty, first in a formula contained in the Taishang Shengzu Jindan Mijue (太上聖祖金丹秘訣) in 808, and then about 50 years later in a Taoist text known as the Zhenyuan miaodao yaolüe (真元妙道要略). The Taishang Shengzu Jindan Mijue mentions a formula composed of six parts sulfur to six parts saltpeter to one part birthwort herb. According to the Zhenyuan miaodao yaolüe, "Some have heated together sulfur, realgar and saltpeter with honey; smoke and flames result, so that their hands and faces have been burnt, and even the whole house where they were working burned down." Based on these Taoist texts, the invention of gunpowder by Chinese alchemists was likely an accidental byproduct of experiments seeking to create the elixir of life. This experimental-medicine origin is reflected in its Chinese name huoyao (Chinese: 火药/火藥; pinyin: huŏ yào), which means "fire medicine".

Saltpeter was known to the Chinese by the mid-1st century AD and was primarily produced in the provinces of Sichuan, Shanxi, and Shandong. There is strong evidence of the use of saltpeter and sulfur in various medicinal combinations. A Chinese alchemical text dated 492 noted that saltpeter burns with a purple flame, providing a practical and reliable means of distinguishing it from other inorganic salts, thus enabling alchemists to evaluate and compare purification techniques; the earliest Latin accounts of saltpeter purification are dated after 1200.

The earliest chemical formula for gunpowder appeared in the 11th-century Song dynasty text Wujing Zongyao (Complete Essentials from the Military Classics), written by Zeng Gongliang between 1040 and 1044. The Wujing Zongyao provides encyclopedic references to a variety of mixtures that included petrochemicals—as well as garlic and honey.
A slow match for flame-throwing mechanisms using the siphon principle and for fireworks and rockets is mentioned. The mixture formulas in this book do not contain enough saltpeter to create an explosive, however; being limited to at most 50% saltpeter, they produce an incendiary. The Essentials was written by a Song dynasty court bureaucrat, and there is little evidence that it had any immediate impact on warfare; there is no mention of its use in the chronicles of the wars against the Tanguts in the 11th century, and China was otherwise mostly at peace during this century. However, gunpowder had already been used for fire arrows since at least the 10th century. Its first recorded military application dates to the year 904, in the form of incendiary projectiles.

In the following centuries various gunpowder weapons such as bombs, fire lances, and the gun appeared in China. Explosive weapons such as bombs have been discovered in a shipwreck off the shore of Japan dated from 1281, during the Mongol invasions of Japan. By 1083 the Song court was producing hundreds of thousands of fire arrows for their garrisons. Bombs and the first proto-guns, known as "fire lances", became prominent during the 12th century and were used by the Song during the Jin-Song Wars. Fire lances were first recorded to have been used at the Siege of De'an in 1132 by Song forces against the Jin. In the early 13th century the Jin utilized iron-casing bombs. Projectiles were added to fire lances, and re-usable fire lance barrels were developed, first out of hardened paper, and then metal. By 1257 some fire lances were firing wads of bullets. In the late 13th century metal fire lances became 'eruptors', proto-cannons firing co-viative projectiles (mixed with the propellant, rather than seated over it with a wad), and by 1287 at the latest had become true guns, the hand cannon.

The Muslims acquired knowledge of gunpowder some time between 1240 and 1280, by which point the Syrian Hasan al-Rammah had written recipes, instructions for the purification of saltpeter, and descriptions of gunpowder incendiaries. That this knowledge arrived from China is implied by al-Rammah's usage of "terms that suggested he derived his knowledge from Chinese sources" and his references to saltpeter as "Chinese snow" (Arabic: ثلج الصين thalj al-ṣīn), fireworks as "Chinese flowers" and rockets as "Chinese arrows". However, because al-Rammah attributes his material to "his father and forefathers", al-Hassan argues that gunpowder had become prevalent in Syria and Egypt by "the end of the twelfth century or the beginning of the thirteenth". In Persia saltpeter was known as "Chinese salt" (Persian: نمک چینی namak-i chīnī) or "salt from Chinese salt marshes" (نمک شوره چینی namak-i shūra-yi chīnī).

Hasan al-Rammah included 107 gunpowder recipes in his text al-Furusiyyah wa al-Manasib al-Harbiyya (The Book of Military Horsemanship and Ingenious War Devices), 22 of which are for rockets. If one takes the median of 17 of these 22 compositions for rockets (75% nitrates, 9.06% sulfur, and 15.94% charcoal), it is nearly identical to the modern reported ideal recipe of 75% potassium nitrate, 10% sulfur, and 15% charcoal. Al-Hassan claims that in the Battle of Ain Jalut of 1260, the Mamluks used against the Mongols "the first cannon in history", with a formula nearly matching the ideal composition ratios for explosive gunpowder.
Other historians urge caution regarding claims of Islamic firearms use in the 1204–1324 period, as late medieval Arabic texts used the same word for gunpowder, naft, that they used for an earlier incendiary, naphtha. Khan claims that it was the invading Mongols who introduced gunpowder to the Islamic world, and cites Mamluk antagonism towards early musketeers in their infantry as an example of how such weapons were not always met with open acceptance in the Middle East. Similarly, the refusal of their Qizilbash forces to use firearms contributed to the Safavid rout at Chaldiran in 1514.

The musket appeared in the Ottoman Empire by 1465. In 1598, Chinese writer Zhao Shizhen described Turkish muskets as being superior to European muskets. The Chinese military book Wu Pei Chih (1621) later described Turkish muskets that used a rack-and-pinion mechanism, which was not known to have been used in European or Chinese firearms at the time. The state-controlled manufacture of gunpowder by the Ottoman Empire, through early supply chains to obtain nitre, sulfur and high-quality charcoal from oaks in Anatolia, contributed significantly to its expansion between the 15th and 18th centuries. It was not until later in the 19th century that the syndicalist production of Turkish gunpowder was greatly reduced, which coincided with the decline of its military might.

Some sources mention possible gunpowder weapons being deployed by the Mongols against European forces at the Battle of Mohi in 1241. Professor Kenneth Warren Chase credits the Mongols with introducing gunpowder and its associated weaponry into Europe. However, there is no clear route of transmission, and while the Mongols are often pointed to as the likeliest vector, Timothy May points out that "there is no concrete evidence that the Mongols used gunpowder weapons on a regular basis outside of China", while also noting that "the Mongols used the gunpowder weapon in their wars against the Jin, the Song and in their invasions of Japan."

The earliest Western accounts of gunpowder appear in texts written by English philosopher Roger Bacon in 1267 called Opus Majus and Opus Tertium. The oldest written recipes in Europe were recorded under the name Marcus Graecus or Mark the Greek between 1280 and 1300 in the Liber Ignium, or Book of Fires. Records show that, in England, gunpowder was being made in 1346 at the Tower of London; a powder house existed at the Tower in 1461; and in 1515 three King's gunpowder makers worked there. Gunpowder was also being made or stored at other Royal castles, such as Portchester. The English Civil War (1642–1645) led to an expansion of the gunpowder industry, with the repeal of the Royal Patent in August 1641.

In late 14th-century Europe, gunpowder was improved by corning, the practice of drying it into small clumps to improve combustion and consistency. During this time, European manufacturers also began regularly purifying saltpeter, using wood ashes containing potassium carbonate to precipitate calcium from their dung liquor, and using ox blood, alum, and slices of turnip to clarify the solution.

During the Renaissance, two European schools of pyrotechnic thought emerged, one in Italy and the other at Nuremberg, Germany. In Italy, Vannoccio Biringuccio, born in 1480, was a member of the guild Fraternita di Santa Barbara but broke with the tradition of secrecy by setting down everything he knew in a book titled De la pirotechnia, written in the vernacular.
It was published posthumously in 1540, with nine editions over 138 years, and was also reprinted by MIT Press in 1966. By the mid-17th century fireworks were used for entertainment on an unprecedented scale in Europe, being popular even at resorts and public gardens. With the publication of Deutliche Anweisung zur Feuerwerkerey (1748), methods for creating fireworks were sufficiently well-known and well-described that "Firework making has become an exact science."

In 1774 Louis XVI ascended to the throne of France at age 20. After he discovered that France was not self-sufficient in gunpowder, a Gunpowder Administration was established; to head it, the lawyer Antoine Lavoisier was appointed. Although from a bourgeois family, after his degree in law Lavoisier became wealthy from a company set up to collect taxes for the Crown; this allowed him to pursue experimental natural science as a hobby. Without access to cheap saltpeter (controlled by the British), for hundreds of years France had relied on saltpetremen with royal warrants, the droit de fouille or "right to dig", to seize nitrate-bearing soil and demolish walls of barnyards, without compensation to the owners. This caused farmers, the wealthy, or entire villages to bribe the petermen and the associated bureaucracy to leave their buildings alone and the saltpeter uncollected. Lavoisier instituted a crash program to increase saltpeter production, revised (and later eliminated) the droit de fouille, researched the best refining and powder manufacturing methods, instituted management and record-keeping, and established pricing that encouraged private investment in works. Although saltpeter from new Prussian-style putrefaction works had not been produced yet (the process taking about 18 months), in only a year France had gunpowder to export. A chief beneficiary of this surplus was the American Revolution. By careful testing and adjusting of the proportions and grinding time, powder from mills such as at Essonne outside Paris became the best in the world by 1788, and inexpensive.

Two British physicists, Andrew Noble and Frederick Abel, worked to improve the properties of gunpowder during the late 19th century. This formed the basis for the Noble-Abel gas equation for internal ballistics.

The introduction of smokeless powder in the late 19th century led to a contraction of the gunpowder industry. After the end of World War I, the majority of the British gunpowder manufacturers merged into a single company, "Explosives Trades limited", and a number of sites were closed down, including those in Ireland. This company became Nobel Industries Limited, and in 1926 became a founding member of Imperial Chemical Industries. The Home Office removed gunpowder from its list of Permitted Explosives, and shortly afterwards, on 31 December 1931, the former Curtis & Harvey's Glynneath gunpowder factory at Pontneddfechan, in Wales, closed down; it was demolished by fire in 1932. The last remaining gunpowder mill at the Royal Gunpowder Factory, Waltham Abbey, was damaged by a German parachute mine in 1941 and never reopened. This was followed by the closure of the gunpowder section at the Royal Ordnance Factory, ROF Chorley (the section was closed and demolished at the end of World War II) and of ICI Nobel's Roslin gunpowder factory, which closed in 1954. This left ICI Nobel's Ardeer site in Scotland as the sole gunpowder factory in Great Britain; it too closed in October 1976.
Gunpowder and gunpowder weapons were transmitted to India through the Mongol invasions of India. The Mongols were defeated by Alauddin Khalji of the Delhi Sultanate, and some of the Mongol soldiers remained in northern India after their conversion to Islam. It was written in the Tarikh-i Firishta (1606–1607) that Nasiruddin Mahmud, the ruler of the Delhi Sultanate, presented the envoy of the Mongol ruler Hulegu Khan with a dazzling pyrotechnics display upon his arrival in Delhi in 1258. Nasiruddin Mahmud tried to express his strength as a ruler and to ward off any Mongol attempt similar to the Siege of Baghdad (1258). Firearms known as top-o-tufak also existed in many Muslim kingdoms in India by as early as 1366. From then on the employment of gunpowder warfare in India was prevalent, with events such as the Siege of Belgaum in 1473 by Sultan Muhammad Shah Bahmani.

The shipwrecked Ottoman admiral Seydi Ali Reis is known to have introduced the earliest type of matchlock weapons, which the Ottomans used against the Portuguese during the Siege of Diu (1531). After that, a diverse variety of firearms, large guns in particular, became visible in Tanjore, Dacca, Bijapur, and Murshidabad. Guns made of bronze were recovered from Calicut (1504), the former capital of the Zamorins.

The Mughal emperor Akbar mass-produced matchlocks for the Mughal Army. Akbar is personally known to have shot a leading Rajput commander during the Siege of Chittorgarh. The Mughals began to use bamboo rockets (mainly for signalling) and employ sappers: special units that undermined heavy stone fortifications to plant gunpowder charges. The Mughal emperor Shah Jahan is known to have introduced much more advanced matchlocks; their designs were a combination of Ottoman and Mughal designs. Shah Jahan also countered the British and other Europeans in his province of Gujarāt, which supplied Europe with saltpeter for use in gunpowder warfare during the 17th century. Bengal and Mālwa participated in saltpeter production. The Dutch, French, Portuguese, and English used Chhapra as a center of saltpeter refining.

Ever since the founding of the Sultanate of Mysore by Hyder Ali, French military officers had been employed to train the Mysore Army. Hyder Ali and his son Tipu Sultan were the first to introduce modern cannons and muskets; their army was also the first in India to have official uniforms. During the Second Anglo-Mysore War, Hyder Ali and Tipu Sultan unleashed the Mysorean rockets against their British opponents, effectively defeating them on various occasions. The Mysorean rockets inspired the development of the Congreve rocket, which the British widely utilized during the Napoleonic Wars and the War of 1812.

Cannons were introduced to Majapahit when Kublai Khan's Chinese army under the leadership of Ike Mese sought to invade Java in 1293. The History of Yuan mentions that the Mongols used cannons (Chinese: pao) against Daha forces. Cannons were used by the Ayutthaya Kingdom in 1352 during its invasion of the Khmer Empire. Within a decade large quantities of gunpowder could be found in the Khmer Empire. By the end of the century firearms were also used by the Trần dynasty. Even though the knowledge of making gunpowder-based weapons had been present since the failed Mongol invasion of Java, and the predecessor of firearms, the pole gun (bedil tombak), was recorded as being used by Java in 1413 (p. 245), the knowledge of making "true" firearms came much later, after the middle of the 15th century.
It was brought by the Islamic nations of West Asia, most probably the Arabs. The precise year of introduction is unknown, but it may be safely concluded to be no earlier than 1460 (p. 23). Before the arrival of the Portuguese in Southeast Asia, the natives already possessed primitive firearms, the Java arquebus. Portuguese influence on local weaponry, particularly after the capture of Malacca (1511), resulted in a new type of hybrid-tradition matchlock firearm, the istinggar. Portuguese and Spanish invaders were unpleasantly surprised and even outgunned on occasion. Circa 1540, the Javanese, always alert for new weapons, found the newly arrived Portuguese weaponry superior to that of the locally made variants.

Majapahit-era cetbang cannons were further improved and used in the Demak Sultanate period during the Demak invasion of Portuguese Malacca. During this period, the iron for manufacturing Javanese cannons was imported from Khorasan in northern Persia. The material was known by the Javanese as wesi kurasani (Khorasan iron). When the Portuguese came to the archipelago, they referred to these guns as Berço, which was also used to refer to any breech-loading swivel gun, while the Spaniards called them Verso.

By the early 16th century, the Javanese were already producing large guns locally; some of them survive to the present day and are dubbed "sacred cannon" or "holy cannon". These cannons varied from 180- to 260-pounders, weighing between 3 and 8 tons, with lengths of 3–6 m. Javanese bronze breech-loaded swivel guns, known as cetbang, or erroneously as lantaka, were used widely by the Majapahit navy as well as by pirates and rival lords. Following the decline of the Majapahit, particularly after the Paregreg civil war (1404–1406) (pp. 174–175), the consequent decline in demand for gunpowder weapons caused many weapon makers and bronze-smiths to move to Brunei, Sumatra, Malaysia and the Philippines, leading to widespread use, especially in the Makassar Strait, and to the near-universal use of the swivel gun and cannons in the Nusantara archipelago.

Saltpeter harvesting was recorded by Dutch and German travelers as being common in even the smallest villages, collected from the decomposition process of large dung hills specifically piled for the purpose. The Dutch punishment for possession of non-permitted gunpowder appears to have been amputation. Ownership and manufacture of gunpowder were later prohibited by the colonial Dutch occupiers. According to Colonel McKenzie, quoted in Sir Thomas Stamford Raffles' The History of Java (1817), the purest sulfur was supplied from a crater of a mountain near the Straits of Bali.

On the origins of gunpowder technology, historian Tonio Andrade remarked, "Scholars today overwhelmingly concur that the gun was invented in China." Gunpowder and the gun are widely believed by historians to have originated in China due to the large body of evidence that documents the evolution of gunpowder from a medicine to an incendiary and explosive, and the evolution of the gun from the fire lance to a metal gun, whereas similar records do not exist elsewhere. As Andrade explains, the large amount of variation in gunpowder recipes in China relative to Europe is "evidence of experimentation in China, where gunpowder was at first used as an incendiary and only later became an explosive and a propellant...
in contrast, formulas in Europe diverged only very slightly from the ideal proportions for use as an explosive and a propellant, suggesting that gunpowder was introduced as a mature technology."

However, the history of gunpowder is not without controversy. A major problem confronting the study of early gunpowder history is ready access to sources close to the events described. Often the first records potentially describing the use of gunpowder in warfare were written several centuries after the fact, and may well have been colored by the contemporary experiences of the chronicler. Translation difficulties have led to errors or loose interpretations bordering on artistic licence. Ambiguous language can make it difficult to distinguish gunpowder weapons from similar technologies that do not rely on gunpowder. A commonly cited example is a report of the Battle of Mohi in Eastern Europe that mentions a "long lance" sending forth "evil-smelling vapors and smoke", which has been variously interpreted by different historians as the "first gas attack upon European soil" using gunpowder, "the first use of cannon in Europe", or merely a "toxic gas" with no evidence of gunpowder.

It is difficult to accurately translate original Chinese alchemical texts, which tend to explain phenomena through metaphor, into modern scientific language with rigidly defined terminology in English. Early texts potentially mentioning gunpowder are sometimes marked by a linguistic process in which semantic change occurred. For instance, the Arabic word naft transitioned from denoting naphtha to denoting gunpowder, and the Chinese word pào changed in meaning from trebuchet to cannon. This has led to arguments on the exact origins of gunpowder based on etymological foundations. Science and technology historian Bert S. Hall makes the observation that, "It goes without saying, however, that historians bent on special pleading, or simply with axes of their own to grind, can find rich material in these terminological thickets."

Another major area of contention in modern studies of the history of gunpowder is its transmission. While the literary and archaeological evidence supports a Chinese origin for gunpowder and guns, the manner in which gunpowder technology was transferred from China to the West is still under debate. It is unknown why the rapid spread of gunpowder technology across Eurasia took place over several decades whereas other technologies such as paper, the compass, and printing did not reach Europe until centuries after they were invented in China.

Gunpowder is a granular mixture of:
- a nitrate, typically potassium nitrate (KNO₃), which supplies oxygen for the reaction;
- charcoal, which provides carbon and other fuel for the reaction;
- sulfur, which, while also serving as a fuel, lowers the temperature of ignition and increases the speed of combustion.
Potassium nitrate is the most important ingredient in terms of both bulk and function because the combustion process releases oxygen from the potassium nitrate, promoting the rapid burning of the other ingredients. To reduce the likelihood of accidental ignition by static electricity, the granules of modern gunpowder are typically coated with graphite, which prevents the build-up of electrostatic charge.

Charcoal does not consist of pure carbon; rather, it consists of partially pyrolyzed cellulose, in which the wood is not completely decomposed. In this it differs from pure carbon: whereas charcoal's autoignition temperature is relatively low, carbon's is much greater. Thus a gunpowder composition containing pure carbon would burn similarly to a match head, at best.

The current standard composition for the gunpowder manufactured by pyrotechnicians was adopted as long ago as 1780.
Proportions by weight are 75% potassium nitrate (known as saltpeter or saltpetre), 15% softwood charcoal, and 10% sulfur. These ratios have varied over the centuries and by country, and can be altered somewhat depending on the purpose of the powder. For instance, lower-power grades of black powder, unsuitable for use in firearms but adequate for blasting rock in quarrying operations, are called blasting powder rather than gunpowder; their standard proportions are 70% nitrate, 14% charcoal, and 16% sulfur. Blasting powder may also be made with cheaper sodium nitrate substituted for potassium nitrate, with proportions as low as 40% nitrate, 30% charcoal, and 30% sulfur. In 1857, Lammot du Pont solved the main problem of using cheaper sodium nitrate formulations when he patented DuPont "B" blasting powder. After manufacturing grains from press-cake in the usual way, his process tumbled the powder with graphite dust for 12 hours. This formed a graphite coating on each grain that reduced its ability to absorb moisture. Neither the use of graphite nor of sodium nitrate was new. Glossing gunpowder corns with graphite was already an accepted technique in 1839, and sodium nitrate-based blasting powder had been made in Peru for many years using the sodium nitrate mined at Tarapacá (now in Chile). Also, in 1846, two plants were built in south-west England to make blasting powder using this sodium nitrate. The idea may well have been brought from Peru by Cornish miners returning home after completing their contracts. Another suggestion is that it was William Lobb, the plant collector, who recognised the possibilities of sodium nitrate during his travels in South America. Lammot du Pont would have known about the use of graphite and probably also knew about the plants in south-west England. In his patent he was careful to state that his claim was for the combination of graphite with sodium nitrate-based powder, rather than for either of the two individual technologies. French war powder in 1879 used the ratio 75% saltpeter, 12.5% charcoal, 12.5% sulfur. English war powder in 1879 used the ratio 75% saltpeter, 15% charcoal, 10% sulfur. The British Congreve rockets used 62.4% saltpeter, 23.2% charcoal and 14.4% sulfur, but the British Mark VII gunpowder was changed to 65% saltpeter, 20% charcoal and 15% sulfur. The explanation for the wide variety in formulation relates to usage. Powder used for rocketry can use a slower burn rate, since it accelerates the projectile for a much longer time, whereas powders for weapons such as flintlocks, cap-locks, or matchlocks need a higher burn rate to accelerate the projectile over a much shorter distance. Cannons usually used lower burn-rate powders, because most would burst with higher burn-rate powders. Besides black powder, there are other historically important types of gunpowder. "Brown gunpowder" is cited as composed of 79% nitre, 3% sulfur, and 18% charcoal per 100 parts of dry powder, with about 2% moisture. Prismatic Brown Powder is a large-grained product that the Rottweil Company introduced in 1884 in Germany and that the British Royal Navy adopted shortly thereafter. The French navy adopted a fine, 3.1-millimetre, non-prismatic grained product called Slow Burning Cocoa (SBC) or "cocoa powder". These brown powders reduced the burning rate even further by using as little as 2 percent sulfur and charcoal made from rye straw that had not been completely charred, hence the brown color. The quoted recipes are simple mass percentages, so they convert directly into batch quantities, as the sketch below shows. 
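As a purely illustrative aside, the following Python sketch (the function and dictionary names are ours, not from any powder-making source) converts the percentage formulations cited in this article into ingredient masses for a chosen batch size:

```python
# Illustrative only: converts the percentage recipes quoted above into
# component masses. The formulation names are labels of our own choosing.

FORMULATIONS = {
    # formulation: (saltpeter %, charcoal %, sulfur %)
    "standard gunpowder (1780)": (75.0, 15.0, 10.0),
    "blasting powder": (70.0, 14.0, 16.0),
    "French war powder (1879)": (75.0, 12.5, 12.5),
    "English war powder (1879)": (75.0, 15.0, 10.0),
    "Congreve rockets": (62.4, 23.2, 14.4),
    "British Mark VII": (65.0, 20.0, 15.0),
}

def batch_masses(name: str, batch_kg: float) -> dict:
    """Return the mass in kg of each ingredient for a batch of batch_kg."""
    saltpeter, charcoal, sulfur = FORMULATIONS[name]
    total = saltpeter + charcoal + sulfur
    # Sanity check: each recipe should account for the whole batch.
    assert abs(total - 100.0) < 1e-9, f"percentages for {name} must sum to 100"
    return {
        "potassium nitrate": batch_kg * saltpeter / 100.0,
        "charcoal": batch_kg * charcoal / 100.0,
        "sulfur": batch_kg * sulfur / 100.0,
    }

if __name__ == "__main__":
    for ingredient, kg in batch_masses("Congreve rockets", 50.0).items():
        print(f"{ingredient}: {kg:.1f} kg")
```

For a 50 kg batch of the Congreve rocket formulation, for example, this prints 31.2 kg of potassium nitrate, 11.6 kg of charcoal and 7.2 kg of sulfur.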
Lesmok powder was a product developed by DuPont in 1911, one of several semi-smokeless products in the industry containing a mixture of black powder and nitrocellulose powder. It was sold to Winchester and others primarily for .22 and .32 small calibers. Its advantage was that it was believed at the time to be less corrosive than the smokeless powders then in use. It was not understood in the U.S. until the 1920s that the actual source of corrosion was the potassium chloride residue from potassium chlorate-sensitized primers. The bulkier black powder fouling disperses primer residue better; failure to mitigate primer corrosion by dispersion caused the false impression that nitrocellulose-based powder caused corrosion. Lesmok had some of the bulk of black powder for dispersing primer residue, but somewhat less total bulk than straight black powder, thus requiring less frequent bore cleaning. It was last sold by Winchester in 1947. The development of smokeless powders, such as cordite, in the late 19th century created the need for a spark-sensitive priming charge, such as gunpowder. However, the sulfur content of traditional gunpowders caused corrosion problems with Cordite Mk I, and this led to the introduction of a range of sulfur-free gunpowders of varying grain sizes. They typically contain 70.5 parts of saltpeter and 29.5 parts of charcoal. Like black powder, they were produced in different grain sizes. In the United Kingdom, the finest grain was known as sulfur-free mealed powder (SMP). Coarser grains were numbered as sulfur-free gunpowder (SFG n): 'SFG 12', 'SFG 20', 'SFG 40' and 'SFG 90', for example, where the number represents the smallest BSS sieve mesh size that retained no grains. Sulfur's main role in gunpowder is to decrease the ignition temperature. A sample reaction for sulfur-free gunpowder, idealising the charcoal as C7H4O, would be: 6 KNO3 + C7H4O → 3 K2CO3 + 4 CO2 + 2 H2O + 3 N2. The term black powder was coined in the late 19th century, primarily in the United States, to distinguish prior gunpowder formulations from the new smokeless and semi-smokeless powders. Semi-smokeless powders featured bulk volume properties that approximated black powder but produced significantly less smoke and fewer combustion products. Smokeless powder has different burning properties (pressure versus time) and can generate higher pressures and work per gram; this can rupture older weapons designed for black powder. Smokeless powders ranged in color from brownish tan to yellow to white. Most of the bulk semi-smokeless powders ceased to be manufactured in the 1920s. The original dry-compounded powder used in 15th-century Europe was known as "Serpentine", either a reference to Satan or to a common artillery piece that used it. The ingredients were ground together with a mortar and pestle, perhaps for 24 hours, resulting in a fine flour. Vibration during transportation could cause the components to separate again, requiring remixing in the field. Also, if the quality of the saltpeter was low (for instance, if it was contaminated with highly hygroscopic calcium nitrate), or if the powder was simply old (due to the mildly hygroscopic nature of potassium nitrate), it would need to be re-dried in humid weather. The dust from "repairing" powder in the field was a major hazard. Loading cannons or bombards before the powder-making advances of the Renaissance was a skilled art. Fine powder loaded haphazardly or too tightly would burn incompletely or too slowly. 
Typically, the breech-loading powder chamber in the rear of the piece was filled only about half full, the serpentine powder neither too compressed nor too loose, a wooden bung was pounded in to seal the chamber from the barrel when assembled, and the projectile was placed on top. A carefully determined empty space was necessary for the charge to burn effectively. When the cannon was fired through the touchhole, turbulence from the initial surface combustion caused the rest of the powder to be rapidly exposed to the flame. The advent of much more powerful and easier-to-use corned powder changed this procedure, but serpentine was used with older guns into the 17th century. For propellants to oxidize and burn rapidly and effectively, the combustible ingredients must be reduced to the smallest possible particle sizes and be as thoroughly mixed as possible. Once mixed, however, for better results in a gun, makers discovered that the final product should be in the form of individual dense grains that spread the fire quickly from grain to grain, much as straw or twigs catch fire more quickly than a pile of sawdust. In late-14th-century Europe and China, gunpowder was improved by wet grinding: a liquid, such as distilled spirits, was added during the grinding together of the ingredients, and the moist paste was dried afterwards. The principle of wet mixing to prevent the separation of dry ingredients, invented for gunpowder, is used today in the pharmaceutical industry. It was discovered that if the paste was rolled into balls before drying, the resulting gunpowder absorbed less water from the air during storage and traveled better. The balls were then crushed in a mortar by the gunner immediately before use, with the old problem of uneven particle size and packing causing unpredictable results. If the right size particles were chosen, however, the result was a great improvement in power. Forming the damp paste into corn-sized clumps by hand or with the use of a sieve, instead of into larger balls, produced a product after drying that loaded much better, as each tiny piece provided its own surrounding air space that allowed much more rapid combustion than a fine powder. This "corned" gunpowder was from 30% to 300% more powerful; an example is cited in which 15 kilograms (34 lb) of serpentine was needed to shoot a 21-kilogram (47 lb) ball, but only 8.2 kilograms (18 lb) of corned powder. Because the dry powdered ingredients must be mixed and bonded together for extrusion and cut into grains to maintain the blend, size reduction and mixing are done while the ingredients are damp, usually with water. After 1800, instead of forming grains by hand or with sieves, the damp mill-cake was pressed in molds to increase its density and extract the liquid, forming press-cake. The pressing took varying amounts of time, depending on conditions such as atmospheric humidity. The hard, dense product was broken again into tiny pieces, which were separated with sieves to produce a uniform product for each purpose: coarse powders for cannons, finer-grained powders for muskets, and the finest for small handguns and priming. Inappropriately fine-grained powder often caused cannons to burst before the projectile could move down the barrel, due to the high initial spike in pressure. Mammoth powder with large grains, made for Rodman's 15-inch cannon, reduced the pressure to only 20 percent of what ordinary cannon powder would have produced. 
In the mid-19th century, measurements determined that the burning rate within a grain of black powder (or a tightly packed mass) is about 6 cm/s (0.20 feet/s), while the rate of ignition propagation from grain to grain is around 9 m/s (30 feet/s), over two orders of magnitude faster. Modern corning first compresses the fine black powder meal into blocks with a fixed density (1.7 g/cm³). In the United States, gunpowder grains were designated F (for fine) or C (for coarse). Grain diameter decreased with a larger number of Fs and increased with a larger number of Cs, ranging from about 2 mm (1⁄16 in) for 7F to 15 mm (9⁄16 in) for 7C. Even larger grains were produced for artillery bore diameters greater than about 17 cm (6.7 in). The standard DuPont Mammoth powder developed by Thomas Rodman and Lammot du Pont for use during the American Civil War had grains averaging 15 mm (0.6 in) in diameter with edges rounded in a glazing barrel. Other versions had grains the size of golf and tennis balls for use in 20-inch (51 cm) Rodman guns. In 1875 DuPont introduced Hexagonal powder for large artillery, which was pressed using shaped plates with a small center core; about 38 mm (1 1⁄2 in) in diameter, like a wagon-wheel nut, the center hole widened as the grain burned. By 1882 German makers also produced hexagonal-grained powders of a similar size for artillery. By the late 19th century, manufacturing focused on standard grades of black powder, from Fg (used in large-bore rifles and shotguns) through FFg (medium and small-bore arms such as muskets and fusils) and FFFg (small-bore rifles and pistols) to FFFFg (extremely small bores, short pistols, and most commonly priming flintlocks). A coarser grade for use in military artillery blanks was designated A-1. These grades were sorted on a system of screens: oversize was retained on a mesh of 6 wires per inch, A-1 on 10 wires per inch, Fg on 14, FFg on 24, FFFg on 46, and FFFFg on 60. Fines designated FFFFFg were usually reprocessed to minimize explosive dust hazards. In the United Kingdom, the main service gunpowders were classified RFG (rifle grained fine), with a diameter of one or two millimeters, and RLG (rifle grained large), for grain diameters between two and six millimeters. Gunpowder grains can alternatively be categorized by mesh size, defined as the smallest BSS sieve mesh size that retains no grains; recognized grain sizes are Gunpowder G 7, G 20, G 40, and G 90. Owing to the large market for antique and replica black-powder firearms in the US, modern black powder substitutes such as Pyrodex, Triple Seven and Black Mag3 pellets have been developed since the 1970s. These products, which should not be confused with smokeless powders, aim to produce less fouling (solid residue) while maintaining the traditional volumetric measurement system for charges. Claims that these products are less corrosive have, however, been controversial. New cleaning products for black-powder guns have also been developed for this market. For the most powerful black powder, meal powder, wood charcoal is used. The best wood for the purpose is Pacific willow, but others such as alder or buckthorn can be used. In Great Britain between the 15th and 19th centuries, charcoal from alder buckthorn was greatly prized for gunpowder manufacture; cottonwood was used by the American Confederate States. The ingredients are reduced in particle size and mixed as intimately as possible. 
Originally, this was done with a mortar and pestle or a similarly operating stamping mill, using copper, bronze or other non-sparking materials, until supplanted by the rotating ball mill principle with non-sparking bronze or lead grinding media. Historically, a marble or limestone edge-runner mill, running on a limestone bed, was used in Great Britain; however, by the mid-19th century this had changed to either an iron-shod stone wheel or a cast iron wheel running on an iron bed. The mix was dampened with alcohol or water during grinding to prevent accidental ignition. This also helps the extremely soluble saltpeter to mix into the microscopic pores of the very high-surface-area charcoal. Around the late 14th century, European powdermakers first began adding liquid during grinding to improve mixing, reduce dust, and with it the risk of explosion. The powder-makers would then shape the resulting paste of dampened gunpowder, known as mill cake, into corns, or grains, to dry. Not only did corned powder keep better because of its reduced surface area, but gunners also found that it was more powerful and easier to load into guns. Before long, powder-makers standardized the process by forcing mill cake through sieves instead of corning powder by hand. The improvement was based on reducing the surface area of a higher-density composition. At the beginning of the 19th century, makers increased density further by static pressing. They shoveled damp mill cake into a two-foot-square box, placed this beneath a screw press, and reduced it to half its volume. "Press cake" had the hardness of slate. They broke the dried slabs with hammers or rollers, and sorted the granules with sieves into different grades. In the United States, Eleuthere Irenee du Pont, who had learned the trade from Lavoisier, tumbled the dried grains in rotating barrels to round the edges and increase durability during shipping and handling. (Sharp grains rounded off in transport, producing fine "meal dust" that changed the burning properties.) Another advance was the manufacture of kiln charcoal by distilling wood in heated iron retorts instead of burning it in earthen pits. Controlling the temperature influenced the power and consistency of the finished gunpowder. In 1863, in response to high prices for Indian saltpeter, DuPont chemists developed a process using potash or mined potassium chloride to convert plentiful Chilean sodium nitrate to potassium nitrate; the underlying reaction is sketched below. The following year (1864), the Gatebeck Low Gunpowder Works in Cumbria (Great Britain) started a plant to manufacture potassium nitrate by essentially the same chemical process. This is nowadays called the "Wakefield Process", after the owners of the company. It would have used potassium chloride from the Staßfurt mines, near Magdeburg, Germany, which had recently become available in industrial quantities. During the 18th century, gunpowder factories became increasingly dependent on mechanical energy. Despite mechanization, production difficulties related to humidity control, especially during pressing, were still present in the late 19th century. A paper from 1885 laments that "Gunpowder is such a nervous and sensitive spirit, that in almost every process of manufacture it changes under our hands as the weather changes." Pressing times to the desired density could vary by a factor of three depending on the atmospheric humidity. 
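The text does not spell out the chemistry of this conversion; as a reconstruction (standard inorganic chemistry, not a claim about the exact plant practice), it is the double-displacement reaction between sodium nitrate and potassium chloride. Because sodium chloride's solubility barely changes with temperature while potassium nitrate's rises steeply, NaCl can be removed from the hot liquor and KNO3 crystallised out on cooling:

\[
\mathrm{NaNO_3\,(aq)} + \mathrm{KCl\,(aq)} \longrightarrow \mathrm{KNO_3\,(aq)} + \mathrm{NaCl\,(s)}
\]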
The United Nations Model Regulations on the Transport of Dangerous Goods and national transportation authorities, such as the United States Department of Transportation, classify gunpowder (black powder) for shipment as a Group A: Primary explosive substance, because it ignites so easily. Complete manufactured devices containing black powder are usually classified for shipment as Group D: Secondary detonating substance, or black powder, or article containing secondary detonating substance (such as a firework or a class D model rocket engine), because they are harder to ignite than loose powder. As explosives, they all fall into Class 1. Besides its use as a propellant in firearms and artillery, black powder's other main use has been as a blasting powder in quarrying, mining, and road construction (including railroad construction). During the 19th century, outside of war emergencies such as the Crimean War or the American Civil War, more black powder was used in these industrial roles than in firearms and artillery. Dynamite gradually replaced it for those uses. Today, industrial explosives for such uses are still a huge market, but most of the market is in newer explosives rather than black powder. Beginning in the 1930s, gunpowder or smokeless powder was used in rivet guns, stun guns for animals, cable splicers and other industrial construction tools. The "stud gun", a powder-actuated tool, drove nails or screws into solid concrete, a function not possible with hydraulic tools, and remains an important tool in various industries, though the cartridges now usually use smokeless powders. Industrial shotguns have been used to eliminate persistent material rings in operating rotary kilns (such as those for cement, lime, and phosphate) and clinker in operating furnaces, and commercial tools make the method more reliable. Gunpowder has also occasionally been employed for purposes other than weapons, mining, fireworks and construction.
Paul Benioff (born 1930) is a US physicist who wrote a paper in 1980 imagining the feats computing might achieve if it could harness quantum mechanics, where the word quantum refers to the tiniest amount of something needed to interact with something else – it's basically the world of atoms and sub-atomic particles. Benioff's imagination helped give rise to the phrase 'quantum computing', a term heralding how the storage and manipulation of information at the sub-atomic level could usher in computing feats far beyond those of 'classical' computers. Benioff was, coincidentally, writing about a concept that Russian mathematician Yuri Manin (born 1937) was outlining at much the same time; Manin talked up the promise of quantum computing that same year in his book Computable and Uncomputable. Since then, others such as US physicist Richard Feynman (1918-1988) have promoted the potential of computing grounded in the concept of 'superposition', when matter can be in different states at the same time. Quantum computing is built on manipulating the superposition of the qubit, the name of its computational unit. Qubits, which are often atoms, electrons or protons, are said to be in the 'basis states' of 0 and 1 at the same time when in superposition, whereas a computational unit in classical computing can only be 0 or 1. This qubit characteristic, on top of the ability of qubits to engage with qubits that are not physically connected (a characteristic known as entanglement), is what proponents say gives quantum computers the theoretical ability to calculate millions of possibilities in seconds, something far beyond the power of the transistors powering classical computers. In 2012, five years after Canada's privately owned D-Wave built the world's first rudimentary (28-qubit) quantum computer, US physicist and academic John Preskill (born 1953) devised the term 'quantum supremacy' to describe how quantum machines could one day make classical computers look archaic. In October last year, a long-awaited world first arrived. NASA and Google claimed to have attained quantum supremacy when something not "terribly useful" was computed "in seconds" that "would have taken even the largest and most advanced supercomputers thousands of years". The pair were modest, noting that their computation on a 53-qubit machine meant they were only able "to do one thing faster, not everything faster". Yet IBM peers dismissed their claim as "grandiosity" anyway, saying one of IBM's supercomputers could have done the same task in two and a half days. Nonetheless, most experts agreed the world had edged closer to the transformative technology. Hundreds of millions of dollars are pouring into research because advocates claim that quantum computing promises simulations, searches, encryptions and optimisations that will lead to advancements in artificial intelligence, communications, encryption, finance, medicine, space exploration, even traffic flows, to name just some areas. No one questions that practical quantum computing has the potential to change the world. But the hurdles to accomplishing a leap built on finicky qubits in superposition, on entanglement, and on 'error correction' are formidable; error correction is the term for overcoming the 'decoherence' caused by derailed qubits, which cannot be identified as out of whack while they are in superposition. There's no knowing when, or if, a concept reliant on mastering so many tricky variables will eventuate. While incremental advancements will be common, the watershed breakthrough could prove elusive for a while yet. 
To be clear, quantum computing is expected to work alongside classical computers, not replace them. Quantum computers are large machines that require their qubits to be kept near absolute zero (minus 273 degrees Celsius), so don't expect them in your smartphones or laptops. And rather than the large number of relatively simple calculations done by classical computers, quantum computers are only suited to a limited number of highly complex problems with many interacting variables, such as the modelling of climate, traffic, molecules and economies, where classical computers fall short. Quantum computing would come with drawbacks too. The most-flagged disadvantage is the warning that a quantum computer could quickly crack the encryption that protects classical computers. Another concern is that quantum computing's potential would add to global tensions if one superpower gains an edge – China is investing heavily and in 2017 claimed to have used quantum techniques to create hack-free communications. In the commercial world, the same applies if one company dominates. Like artificial intelligence, quantum computing has had its 'winters' – periods when its challenges smothered the excitement and research dropped off. That points to the biggest qualification about today's optimism about quantum computing: that it might take a long time to get beyond today's rudimentary levels, where quantum machines are no more powerful than classical supercomputers and can't do practical things. But if quantum computing becomes mainstream, a new technological era will have started.
Problems to solve
Lisbon, the capital of Portugal, is snarled in traffic. Why not use a quantum algorithm to find the best route? That's what Volkswagen and D-Wave did in November. Their algorithm calculated the best way for buses to skirt traffic along a flexible route between stops. D-Wave CEO Vern Brownell said the pilot program "could be historic" because it was the "first time a quantum computer has been used [on] a real-time workload". D-Wave's quantum computer that tackled Lisbon's congestion was built to solve such optimisation problems. The many variables associated with traffic, and the different interactions or constraints between those variables, are said to be beyond the ability of classical computers to solve within a useful time frame – in this case, before the bus trip is over. Quantum computing's theoretical advantages are that a quantum computer can process all the states a qubit can have at once, and that its computational power increases exponentially with each additional qubit. For three qubits there are eight states to work with simultaneously, for four there are 16, for 10 there are 1,024, and for 20 there are 1,048,576 states, as Wired calculates (the short sketch after this section reproduces the arithmetic). Brownell says that, as quantum computers are probabilistic by nature, they team well with AI, which is based on probabilistic models. And already they are being paired to address problems, as shown when Woodside Energy in November signed an AI and quantum computing contract with IBM to develop an 'intelligent plant'. The dual aims of the deal are, first, to reduce corrosion-driven maintenance costs that amount to A$1 billion a year and, second, to protect the company from cyberattack. The quantum algorithms would help to optimise the flow of hydrocarbon fluids around its facilities while protecting computer systems from hackers, even those who might one day be armed with quantum computers. The prospect of quantum computing excites many industries. 
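As a purely illustrative aside (ours, not the article's), the qubit arithmetic quoted above, along with Dyakonov's 10^300 figure cited further below, can be reproduced in a few lines of Python:

```python
import math

# Each extra qubit doubles the number of basis states a register can hold
# in superposition: n qubits span 2**n states (the figures Wired cites).
for n in (3, 4, 10, 20):
    print(f"{n} qubits -> {2**n:,} states")
# 3 qubits -> 8 states
# 4 qubits -> 16 states
# 10 qubits -> 1,024 states
# 20 qubits -> 1,048,576 states

# Dyakonov's point, quoted later in the article: a machine at the low end
# of the "useful" range (about 1,000 qubits) is described by 2**1000
# continuous amplitudes, roughly 10**301 parameters, beyond 10**300.
print(f"2**1000 is about 10**{1000 * math.log10(2):.0f}")
```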
Aeroplane and satellite manufacturers think that quantum computing will lead to sturdier and lighter alloys for their products. Battery makers hope quantum simulations will help develop batteries that will outperform lithium-ion ones. Pharmaceutical companies reckon that quantum grunt can devise medicines (compounds) to tackle untreatable diseases. They suggest that drugs and vaccines could be brought to market much faster and more cheaply by using quantum computers to model molecules in ways that are impossible using classical computers. And so on for climate-change solutions, financial modelling and many other areas. More intriguing, perhaps, is that quantum computers could help provide answers to science's most fundamental abstract questions. The restart of the European Organisation for Nuclear Research's more powerful Large Hadron Collider under the French-Swiss border, scheduled for 2020, is likely to boost the number of proton collisions per second by 150%. That's a problem because, before it was shut down in 2018, the collider's output of about 300 gigabytes of data every second already needed to be divided between 170 computing centres in 42 countries for processing. To process the looming data torrent, scientists will need 50 to 100 times more computing power than they have at their disposal today. Such blockages to research explain the urgency for quantum computation. Advocates say that, in time, up to half of existing computing workloads could be executed by quantum devices, which would help a world running up against the limits of 'Moore's Law', the observation that the speed and capability of classical computers doubled every couple of years. But quantum computers could come with mischief too. A big problem was flagged in 1994 when US mathematician Peter Shor (born 1959) published an algorithm that, if handed to a quantum computer, could crack in seconds the encryption, or maths puzzles, that protect classical computers. Many fear that rudimentary quantum computers could attain this ability, which would mean that quantum's disadvantages could precede its touted benefits.
Adherents and doubters
Michelle Simmons (born 1967), professor of quantum physics at the University of New South Wales, was named 2018 Australian of the Year and in 2019 was appointed an Officer of the Order of Australia for services to quantum computing. In July last year, adding further to her prestige, Simmons's team of researchers announced a leap that will "provide a route to the realisation" of quantum computing. The innovation was the world's first two-qubit gate between phosphorus donor electrons in silicon, which Simmons described as a "massive result, perhaps the most significant of my career". Simmons's team is said to follow a unique approach that requires not only the placement of individual atom qubits in silicon but also all the associated circuitry to initialise, control and read out the qubits at the nanoscale – a concept of such precision that it was thought impossible. The researchers not only brought the qubits to just 13 nanometres, or 13 one-billionths of a metre, apart, but also engineered all the control circuitry with sub-nanometre precision – for comparison, the width of a human hair is 60,000 nanometres. Such are the technicalities of the advancements needed to inch the world towards quantum computation. Mikhail Dyakonov (born 1940) is a Russian professor of physics who works at the University of Montpellier in France. He has spent decades studying quantum and condensed-matter physics. 
Such are his achievements that his name is attached to physical phenomena such as a spin-relaxation mechanism, a plasma-wave instability and surface waves. He has won prizes for physics in France, Russia and the US. He is perhaps the world's most credible naysayer on whether quantum computation will meet the optimism that surrounds it. "The proposed strategy relies on manipulating with high precision an unimaginably huge number of variables" is the summary of the case against quantum computing he made in 2018 in IEEE Spectrum, the magazine of the Institute of Electrical and Electronics Engineers. Dyakonov explains that while a conventional computer with N bits at any given moment must be in one of its 2^N possible states, the state of a quantum computer with N qubits is described by the values of the 2^N quantum amplitudes, which are continuous parameters (ones that can take on any value, not just a 0 or a 1). This is where the hoped-for power of the quantum computer comes from, "but it is also the reason for its great fragility and vulnerability", he says. Experts estimate that between 1,000 and 100,000 qubits are needed for a useful quantum computer, he says. But the number of continuous parameters describing the state of such an effective quantum computer at any given moment is at least 10^300. How big is that number, asks Dyakonov? "It is much, much greater than the number of subatomic particles in the observable universe." Then there are the effects of errors. In a classical computer, errors happen when transistors are switched off when they are supposed to be on, and vice versa. Error-correction programs within a classical computer can override these mistakes. "Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system? My answer is simple. No, never," Dyakonov says. The vast number of scientists backed by hundreds of millions of dollars and some of the world's biggest governments, organisations and companies expect to prove Dyakonov wrong by turning the theoretical musings of Benioff, Manin and others into a new technological era.
By Michael Collins, Investment Specialist
Paul Benioff. 'The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines.' Journal of Statistical Physics, Volume 22, Issue 5, pages 563 to 591. May 1980. ui.adsabs.harvard.edu/abs/1980JSP....22..563B/abstract
Feynman proposed a basic model for quantum computing in his 1981 speech 'Simulating physics with computers', which was published in the International Journal of Theoretical Physics in 1982 and can be found at: link.springer.com/article/10.1007%2FBF02650179
Manin talked up the potential of quantum computing in his 1980 book Computable and Uncomputable.
See D-Wave. 'About us'. dwavesys.com/our-company/meet-d-wave
John Preskill. California Institute of Technology. 'Quantum computing and the entanglement frontier.' 13 November 2012. arxiv.org/pdf/1203.5813.pdf
NASA. News release. 'Google and NASA achieve quantum supremacy.' 24 October 2019. nasa.gov/feature/ames/quantum-supremacy. The article 'Quantum supremacy using a programmable superconducting processor', published by Nature on 23 October 2019, can be found at: nature.com/articles/s41586-019-1666-5
Financial Times. 'Rivals rubbish Google's claim of quantum supremacy.' 23 September 2019. ft.com/content/cede11e0-dd51-11e9-9743-db5a370481bc
See World Economic Forum. 'Quantum leap: Why the next wave of computers will change the world.' 29 October 2019. 
weforum.org/agenda/2019/10/quantum-computers-next-frontier-classical-google-ibm-nasa-supremacy/
The challenge is that superpositions are only possible if the qubit's value is not measured, because taking a measurement collapses the superposition to a 0 or a 1. The Catch-22 is how to tell whether a qubit is in error if you cannot observe its state. See Quanta Magazine. 'The era of quantum computing is here. Outlook: cloudy.' 24 January 2018. quantamagazine.org/the-era-of-quantum-computing-is-here-outlook-cloudy-20180124/
See Newsweek. 'How China is using quantum physics to take over the world and stop hackers.' 30 October 2017. newsweek.com/china-using-quantum-physics-take-over-world-695026
D-Wave website. 'ZDNet: Forget quantum supremacy: This quantum-computing milestone could be just as important.' 11 December 2019. dwavesys.com/media-coverage/zdnet-forget-quantum-supremacy-quantum-computing-milestone-could-be-just-important
D-Wave concedes a quantum computer generally only comes up with a "good enough answer swiftly enough to use" due to errors associated with quantum mechanics. But that is better than a classical computer's correct answer arriving after the bus trip is over.
WIRED. 'Inside the high-stakes race to make quantum computers work.' 31 October 2019. wired.co.uk/article/quantum-computers-ibm-cern
ZDNet. 'Woodside Energy signs AI and quantum computing deal with IBM.' 12 November 2019. zdnet.com/article/woodside-energy-signs-ai-and-quantum-computing-deal-with-ibm/
See WIRED. 'Inside the high-stakes race to make quantum computers work.' Ibid.
The algorithm essentially finds the prime factors of a given integer. Peter Shor. 'Algorithms for quantum computation: Discrete logarithms and factoring.' Proceedings, 35th Annual Symposium on Foundations of Computer Science, held from 20 to 22 November 1994. ieeexplore.ieee.org/document/365700
See Arthur Herman, director of the Hudson Institute's Quantum Alliance Initiative. 'The quantum computing threat to American security.' The Wall Street Journal. 10 November 2019. wsj.com/articles/the-quantum-computing-threat-to-american-security-11573411715. Pointing out this flaw, which could prove a problem in quantum computing's experimental stage, stirred much interest in quantum computing.
See Michelle Simmons's Wikipedia profile. en.wikipedia.org/wiki/Michelle_Simmons
Michelle Simmons and others. Nature. 'A two-qubit gate between phosphorus donor electrons in silicon.' Published 17 July 2019. nature.com/articles/s41586-019-1381-2
The Australian. 'Quantum discovery could change our lives.' 18 July 2019. theaustralian.com.au/higher-education/quantum-discovery-could-change-our-lives/news-story/6577b81984b3869954e212db3bb4117f
University of New South Wales newsroom. '200 times faster than ever before: the speediest quantum operation yet.' 18 July 2019. newsroom.unsw.edu.au/news/science-tech/200-times-faster-ever-speediest-quantum-operation-yet
See Mikhail Dyakonov's Wikipedia profile. en.wikipedia.org/wiki/Mikhail_Dyakonov
Mikhail Dyakonov. 'The case against quantum computing.' IEEE Spectrum. 15 November 2018. spectrum.ieee.org/computing/hardware/the-case-against-quantum-computing. The Institute of Electrical and Electronics Engineers, which claims 400,000 members, calls itself the world's largest technical professional organisation for the advancement of technology.
The People's Party, or Populist Party, was a 19th-century American political group composed mainly of Southern white farmers hoping to counteract the political dominance of the wealthy. The Populist movement was heavily concentrated in Georgia in the 1890s after declining cotton prices threatened economic stability. In the late 1800s, farmers suffered repeated setbacks, including droughts in the Midwest and increasing reliance on moneylenders. The National Farmers' Alliance and the Colored Farmers' Alliance formed to advocate for agrarian rights. They believed farmers were disadvantaged by a commerce system that favored the industries responsible for their mounting debts, such as banking and rail companies. The Alliance appealed to the federal government, aiming to restore the floundering cotton industry by proposing relief plans and trade reforms that would drive inflation. Lack of government support spurred the Populist movement, which gained widespread visibility through the 1892 presidential campaign of its candidate James B. Weaver. The Populists attracted white and black followers from the South and Midwest and worked to undermine Democratic influence by polarizing voters. The party's decline began when Populist leader and vice-presidential candidate Tom Watson tried to recruit more black voters. Watson sparked resentment among white farmers by advocating for reform of prison programs that targeted blacks, including a system that allowed mining companies to lease convicts. Repeated losses and alleged Democratic corruption at the polls prevented many Populist leaders from obtaining high political office. After 1896, Populism continued to die out because many Democrats who had sympathized with the Populists were threatened by the party's challenge to white control.
According to Webster’s New World Dictionary of the American Language, an ally is someone “joined with another for a common purpose.” Being an ally to lesbian, gay, bisexual, transgender, queer, intersex, and asexual (LGBTQIA) individuals is the process of working to develop individual attitudes, institutions, and a culture in which LGBTQIA people feel they are valued. This work is motivated by an enlightened self-interest to end homophobia, biphobia, transphobia, heterosexism, and cisgenderism (J. Jay Scott and Vernon Wall, 1991). An ally is a person who works both to facilitate the development of all students around issues of sexual orientation, gender identity, and gender expression and to improve the experience of LGBTQIA people. Allies can identify as lesbian, gay, bisexual, transgender, cisgender, intersex, queer, questioning, or heterosexual. The University of Illinois has several Ally networks. Allies are invited to join any and all that seem appropriate. And if there isn’t a group that fits you, please talk to Leslie Morrow about starting one. Persons affiliated with the ally network can be identified by the Ally Network posters. This network includes queer-friendly and queer-identified faculty, staff and students who provide safe space and support for the LGBTQIA campus community. An ally to LGBTQIA individuals is a person who:
- Believes that it is in their self-interest to be an ally to LGBTQIA individuals.
- Has worked to develop an understanding of LGBTQIA issues, and works to be comfortable with their knowledge of gender identity and sexual orientation.
- Is comfortable saying the words “gay,” “lesbian,” “bisexual,” and “transgender.”
- Works to understand how patterns of oppression operate, and is willing to identify oppressive acts and challenge the oppressive behaviors of others.
- Works to be an ally to all oppressed groups.
- Finds a way that feels personally congruent to confront/combat homophobia, transphobia, heterosexism, and cisgenderism.
- Similar to how an LGBTQIA person “comes out of the closet,” an ally “comes out” as an ally by publicly acknowledging her/his support for LGBTQIA people and issues.
- Chooses to align with LGBTQIA individuals, and represents their needs, especially when they are unable to do so themselves.
- Expects to make some mistakes and does not give up when things become discouraging.
- Promotes a sense of community with LGBTQIA individuals, and teaches others about the importance of these communities. Encourages others to also provide advocacy.
- Is aware that they may be called the same names and be harassed in ways similar to those whom they are defending. Whenever possible, a heterosexual ally avoids “credentializing,” which involves disclosing their heterosexual identity in order to avoid negative or unpleasant assumptions or situations.
- Works to address and confront individuals without being defensive, sarcastic, or threatening.
Benefits of Being an Ally
- You open yourself up to the possibility of close relationships with an additional 10% of the world.
- You become less locked into gender-role stereotypes.
- You increase your ability to have close and loving relationships with friends of all genders.
- You have opportunities to learn from, teach, and have an impact on a population with whom you might not otherwise interact.
- You may make a profound difference in the life of someone you love who finds something positive in their sexual and gender identity. 
Four Steps to Becoming an Ally to LGBTQIA People - Awareness/Accessing Resources: Become aware of who you are and how you are different from and similar to LGBTQIA people. Such awareness can be gained through conversations with LGBTQIA individuals, reading about LGBTQIA people and their lives, attending awareness building workshops and meetings, and by self-examination. - Knowledge/Education: Become educated on the issues, knowing facts, statistics, laws, policies and culture of LGBTQIA people. - Creating an Open and Supportive Environment: Encourage and promote an atmosphere of respect. Acknowledge, appreciate and celebrate differences among individuals and within groups. Be a safe and open person to talk with. Join one of the campus Ally Networks. - Take Action: Teach, share your knowledge. Action is the only way to change society as a whole. Stand up for and fight for human rights.
Smart labels may one day use nanotechnology to determine whether perishables, such as food and makeup, are still safe to use. Nanotechnology is advancing rapidly in ways that allow us to fight disease, detect pollution, and make materials that are stronger, lighter, and more resistant than ever before. Among the diseases it can help fight are food-borne illnesses. Spoiled and contaminated food can lead to food-borne illnesses such as Salmonella infection and botulism. Such food-borne illnesses affect millions of people and lead to up to 3,000 deaths a year. Now, researchers are developing nanotechnology that can help us choose food that is safe and fresh. Researchers presenting at the 254th National Meeting and Exposition of the American Chemical Society are developing new nanotechnology that will detect whether food or makeup is spoiled or contaminated with bacteria, as reported recently in Science. One of the ways it can do so is by detecting free radicals, particles that result from the process of oxidation. Oxidation is what leads to bananas and apples turning black, for example. Similar technologies for determining whether food is spoiled exist already, but they function by using a liquid that moves through specialized channels on a large card. This new technology, on the other hand, would fit onto a piece of paper the size of a postage stamp, meaning it could be added to packaging as a smart label or used as a quick test for food. This would provide much clearer guidance about the safety of food than the ‘best by’ or expiration dates often printed on packages, and could reduce the incidence of food-borne illnesses in restaurants by making assessments of food safety faster and more objective. These new sensors have many potential applications beyond determining whether food or makeup is spoiled. The same kinds of sensors are used to detect antioxidants in tea and wine, and could one day be used to identify potential medicinal plants. This could be particularly useful on scientific expeditions to remote areas, where researchers could carry these sensors and determine whether plants have medicinal properties without having to bring back and analyze large quantities of samples, a costly and time-consuming process.
Written by C. I. Villamil
Sixth Grade Writing Standards
Writing standards for sixth grade define the knowledge and skills needed for writing proficiency at this grade level. By understanding 6th grade writing standards, parents can be more effective in helping their children meet grade level expectations.
What is 6th Grade Writing?
Sixth grade students are expected to produce cohesive, coherent, and error-free multi-paragraph essays on a regular basis. Sixth-graders write essays of increasing complexity containing formal introductions, ample supporting evidence, and conclusions. Students select the appropriate form and develop an identifiable voice and style suitable for the writing purpose and the audience. Sixth grade student writing should demonstrate a command of standard American English and writing skills such as organizing ideas, using effective transitions, and choosing precise wording. Sixth-graders use every phase of the writing process and continue to build their knowledge of writing conventions, as well as how to evaluate writing and conduct research. The following writing standards represent what states* typically specify as 6th grade benchmarks in writing proficiency:
Grade 6: Writing Process
Sixth grade writing standards focus on the writing process as the primary tool to help children become independent writers. In Grade 6, students are taught to use each phase of the process as follows:
- Prewriting: In grade 6, students generate ideas and organize information for writing by using such prewriting strategies as brainstorming, graphic organizers, notes, and logs. Students choose the form of writing that best suits the intended purpose and then make a plan for writing that prioritizes ideas and addresses purpose, audience, main idea, and logical sequence.
- Drafting: In sixth grade, students develop drafts by categorizing ideas, organizing them into paragraphs, and blending paragraphs within larger units of text. Writing exhibits the students’ awareness of the audience and purpose. Students analyze the language techniques of professional authors (e.g., point of view, establishing mood) to enhance their use of descriptive language and word choices.
- Revising: In sixth grade, students revise selected drafts by elaborating, deleting, combining, and rearranging text. Other grade 6 revision techniques include adding transitional words, incorporating sources directly and indirectly into writing, using generalizations where appropriate, and connecting the conclusion to the beginning (e.g., use of the circular ending). Goals for revision include improving coherence, progression, and the logical support of ideas by focusing on the organization and consistency of ideas within and between paragraphs. Students also evaluate drafts for use of voice, point of view, and language techniques (e.g., foreshadowing, imagery, simile, metaphor, sensory language, connotation, denotation) to create a vivid expression of ideas.
- Editing: Students edit their writing based on their knowledge of grammar and usage, spelling, punctuation, and other features of polished writing, such as clarity, varied sentence structure, and word choice (e.g., eliminating slang and selecting more precise verbs, nouns, and adjectives). Students also proofread using reference materials, a word processor, and other resources.
- Publishing: Sixth graders frequently refine selected pieces to “publish” for intended audiences. 
Published pieces use appropriate formatting and graphics (e.g., tables, drawings, charts, graphs) when applicable to enhance the appearance of the document. Use of technology: Sixth grade students use available technology to support aspects of creating, revising, editing, and publishing texts. Students compose documents with appropriate formatting by using word-processing skills and principles of design (e.g., margins, tabs, spacing, columns, page orientation).
Grade 6: Writing Purposes
In sixth grade, students write to express, discover, record, develop, and reflect on ideas. They problem-solve and produce texts of at least 500 to 700 words. Specifically, 6th grade standards in writing stipulate that students write in the following forms:
- Narrative: Students write narrative accounts that establish a point of view, setting, and plot (including rising action, conflict, climax, falling action, and resolution). Writing should employ precise sensory details and concrete language to develop plot and character, and use a range of narrative devices (e.g., dialogue, suspense, and figurative language) to enhance style and tone.
- Expository: Students write to describe, explain, compare and contrast, and problem-solve. Essays should engage the interest of the reader and include a thesis statement, supporting details, and introductory, body, and concluding paragraphs. Students use a variety of organizational patterns, including organization by category, spatial order, order of importance, and climactic order.
- Research Reports: Students pose relevant questions with a scope narrow enough to be thoroughly covered. Writing supports the main idea or ideas with facts, details, examples, and explanations from multiple authoritative sources (e.g., speakers, periodicals, online information searches), and includes a bibliography.
- Persuasive: Students write to influence, such as to persuade, argue, and request. In grade 6, persuasive compositions should state a clear position, support the position with organized and relevant evidence, and anticipate and address reader concerns and counterarguments.
- Creative: Students write to entertain, using a variety of expressive forms (e.g., short play, song lyrics, historical fiction, limericks) that employ figurative language, rhythm, dialogue, characterization, plot, and/or appropriate format.
- Responses to Literature: Sixth grade students develop an interpretation exhibiting careful reading, understanding, and insight. Writing shows organization around clear ideas, premises, or images, supported by examples and textual evidence.
In addition, sixth graders choose the appropriate form for their own purpose for writing, including journals, letters, editorials, reviews, poems, presentations, narratives, and instructions.
Grade 6: Writing Evaluation
Sixth grade students learn to respond constructively to others’ writing and to determine whether their own writing achieves its purposes. In Grade 6, students also apply criteria to evaluate writing and analyze published examples as models for writing. Writing standards recommend that each student keep and review a collection of his or her own written work to determine its strengths and weaknesses and to set goals as a writer. In addition, sixth grade students evaluate the purposes and effects of film, print, and technology presentations. Students assess how language, medium, and presentation contribute to meaning. 
Grade 6: Written English Language Conventions
Students in sixth grade are expected to write with more complex sentences, capitalization, and punctuation. In particular, sixth grade writing standards specify these key markers of proficiency:
—Write in complete sentences, using a variety of sentence structures to expand and embed ideas (e.g., simple, compound, and complex sentences; parallel structure, such as similar grammatical forms or juxtaposed items).
—Employ effective coordination and subordination of ideas to express complete thoughts.
—Use explicit transitional devices.
—Correctly employ Standard English usage, including subject-verb agreement, pronoun referents, and the eight parts of speech (noun, pronoun, verb, adverb, adjective, conjunction, preposition, interjection). Ensure that verbs agree with compound subjects.
—Use verb tenses appropriately and consistently, such as present, past, future, perfect, and progressive.
—Identify and properly use indefinite pronouns.
—Use adjectives (comparative and superlative forms) and adverbs appropriately to make writing vivid or precise.
—Use prepositional phrases to elaborate written ideas.
—Use conjunctions to connect ideas meaningfully.
—Use regular and irregular plurals correctly.
—Write with increasing accuracy when using pronoun case, as in “He and they joined him.”
—Punctuate correctly to clarify and enhance meaning, such as using hyphens, semicolons, colons, possessives, and sentence punctuation.
—Use correct punctuation for clauses (e.g., dependent and independent clauses), appositives and appositive phrases, and in cited sources, including quotations for exact words from sources.
—Write with increasing accuracy when using apostrophes in contractions such as doesn’t and possessives such as Maria’s.
—Capitalize correctly to clarify and enhance meaning.
—Sixth graders pay particular attention to capitalization of major words in titles of books, plays, movies, and television programs.
—Use knowledge of spelling rules, orthographic patterns, generalizations, prefixes, suffixes, and roots, including Greek and Latin root words.
—Spell frequently misspelled words correctly (e.g., their, they’re, there).
—Write with accurate spelling of root words such as drink, speak, read, or happy; inflections such as those that change tense or number; suffixes such as -able or -less; and prefixes such as re- or un-.
—Write with accurate spelling of contractions and syllable constructions, including closed, open, consonant before -le, and syllable-boundary patterns.
—Understand the influence of other languages and cultures on the spelling of English words.
—Use resources to find correct spellings and spell accurately in final drafts.
—Write fluidly and legibly in cursive or manuscript as appropriate.
Grade 6: Research and Inquiry
In sixth grade, students select and use reference materials and resources as needed for writing, revising, and editing final drafts. Students learn how to gather information systematically and use writing as a tool for research and inquiry in the following ways:
- Search out multiple texts to complete research reports and projects.
- Organize prior knowledge about a topic in a variety of ways, such as by producing a graphic organizer.
- Formulate a research plan, take notes, and apply evaluative criteria (e.g., relevance, accuracy, organization, validity, publication date) to select and use appropriate resources.
- Frame questions for research. Evaluate their own research and raise new questions for further investigation. 
- Select and use a variety of relevant and authoritative sources and reference materials (e.g., experts, periodicals, online information, dictionaries, encyclopedias) to aid in writing.
- Summarize and organize ideas gained from multiple sources in useful ways, such as outlines, conceptual maps, learning logs, and timelines.
- Use organizational features of electronic text (e.g., bulletin boards, databases, keyword searches, e-mail addresses) to locate information.
- Follow accepted formats for writing research, including documenting sources.
- Explain and demonstrate an understanding of the importance of ethical research practices, including the need to avoid plagiarism, and know the associated consequences.
Sixth Grade Writing Tests
In some states, sixth graders take standardized writing assessments, either with pencil and paper or, increasingly, on a computer. Students are given questions about grammar and mechanics, as well as a timed essay-writing exercise in which they must write an essay in response to one of several 6th grade writing prompts. While tests vary, some states test at intervals throughout the year, each time asking students to respond to a different writing prompt that requires a different form of writing (i.e., narrative, expository, persuasive). Another type of question tests whether students know how to write a summary statement in response to a reading passage. Students are also given classroom-based sixth grade writing tests and writing portfolio evaluations. State writing assessments are correlated to state writing standards. These standards-based tests measure what students know in relation to what they’ve been taught. If students do well on school writing assignments, they should do well on such a test. Educators consider standards-based tests to be the most useful, as these tests show how each student is meeting grade-level expectations. These assessments are designed to pinpoint where each student needs improvement and help teachers tailor instruction to fit individual needs. State departments of education often include information on writing standards and writing assessments on their websites, including sample questions.
Writing Test Preparation
The best writing test preparation in sixth grade is simply encouraging your child to write, raising awareness of the written word, and offering guidance on writing homework. Tips for 6th grade test preparation include talking about the different purposes of writing as you encounter them, such as those of letters, recipes, grocery lists, instructions, and menus. By becoming familiar with 6th grade writing standards, parents can offer more constructive homework support. Remember, the best writing help for kids is not to correct their essays, but to offer positive feedback that prompts them to use the strategies of the writing process to revise their own work.
Time4Writing Online Writing Courses Support 6th Grade Writing Standards
Time4Writing is an excellent complement to sixth grade writing curriculum. Developed by classroom teachers, Time4Writing targets the fundamentals of writing. Students build writing skills and deepen their understanding of the writing process by working on standards-based, grade-appropriate writing tasks under the individual guidance of a certified teacher. Writing on a computer inspires many students, even reluctant writers. Learn more about Time4Writing online courses for sixth grade. 
For more information about general learning objectives for sixth grade students, including math and language arts, please visit Time4Learning.com.

*K-12 writing standards are defined by each state. Time4Writing relies on a representative sampling of state writing standards, notably from Florida, Texas, and California, as well as on the standards published by nationally recognized education organizations, such as the National Council of Teachers of English and the International Reading Association.

You've been exploring the writing standards for sixth grade.
A study published today in the journal Evolution explains why hummingbird feathers are so iridescent; that is, why they shimmer in the light and shift as you look at the birds from different angles. Other birds like ducks and grackles have iridescent feathers, of course, but hummingbirds take the trait to another level. Chad Eliason, a postdoctoral researcher at the Field Museum in Chicago, and an international team of colleagues conducted the largest-ever optical study of hummingbird feathers. They examined the feathers of 35 species with transmission electron microscopes and compared them with the feathers of other brightly colored birds, like green-headed Mallards, to look for differences in their makeup.

The key difference, the researchers say, is a set of structures called melanosomes in hummingbird feathers. Ducks have log-shaped melanosomes without any air inside, but hummingbirds' melanosomes are pancake-shaped and contain lots of tiny air bubbles. The flattened shape and air bubbles of hummingbird melanosomes create a more complex set of surfaces. When light glints off those surfaces, it bounces off in a way that produces iridescence. The researchers also found that the different traits that make hummingbird feathers special — like melanosome shape and the thickness of the feather lining — are traits that evolved separately, allowing hummingbirds to mix and match a wider variety of traits. It's kind of like how you can make more outfit combinations with three shirts and three pairs of pants than you can with three dresses. All in all, hummingbird feathers are super complex, and that's what makes them so much more colorful than other birds.

"A good analogy would be like a soap bubble," says co-author Matthew Shawkey of Belgium's University of Ghent. "If you just look at a little bit of soap, it's going to be colorless. But if you structure it the right way, if you spread it out really thin to form the shell of a bubble, you get those shimmering rainbow colors around the edges. It works the same way with melanosomes: with the right structure, you can turn something colorless into something really colorful."

More wonderful questions to explore

And, the authors note, this project opens the door to a greater understanding of why hummingbirds develop the specific colors that they do. "Not all hummingbird colors are shiny and structural. Some species have drab plumage, and in many species, the females are less colorful than the males," notes co-author Rafael Maia, a biologist and data scientist at Instacart. "In this paper we describe a model of how all these variations can be achieved within feathers," says co-author Juan Parra from Colombia's Universidad de Antioquia. "Now other wonderful questions appear. For example, if it is possible to display a wide variety of colors, why are many hummingbirds green? Whether this reflects historical events, predation, or female variation in preferences are still open and challenging questions."

"This study sets the stage for really understanding how color patterns are developed. Now that we have a better idea of how feather structure maps to color, we can really parse out which genes are underlying those really crazy colors in birds," says Eliason.
You get to know the major kinds of chemical reactions, including polymerization, hydrolysis, combustion, replacements, and redox. Experiment with some electrochemical reactions as well. After completing this tutorial, you will be able to complete the following:

What happens to the zinc and the copper in the oxidation reaction? ~ In the oxidation reaction, the oxidation state of zinc increases from 0 to 2+, while the oxidation state of copper decreases from 2+ to 0. For every mole of zinc, two moles of electrons are transferred to the copper during the reaction.

What is the difference between an oxidation reaction and a reduction reaction? ~ In an oxidation reaction, the oxidation state of a substance increases by releasing electrons. In a reduction reaction, the oxidation state decreases because electrons are gained.

In a redox reaction, what role does the reducing agent play? ~ In a redox reaction, the reducing agent causes reduction by providing electrons to another substance. The reducing agent contains the atom that shows an increase in oxidation number.

In a redox reaction, what role does the oxidizing agent play? ~ In a redox reaction, the oxidizing agent is the substance that causes the oxidation of another substance by accepting its electrons. The oxidizing agent contains the atoms that show a decrease in oxidation number.

Approximate Time: 2 Minutes
Pre-requisite Concepts: Students should be familiar with the terms chemical reaction, chemistry, and copper.
Type of Tutorial: Animation
Key Vocabulary: chemical reaction, chemistry, copper
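To make these roles concrete, the zinc-copper reaction described above can be split into its two half-reactions, written here in plain notation as a standard textbook illustration:

    Overall:   Zn(s) + Cu2+(aq) -> Zn2+(aq) + Cu(s)
    Oxidation: Zn -> Zn2+ + 2e-        (zinc loses electrons, so zinc is the reducing agent)
    Reduction: Cu2+ + 2e- -> Cu        (the copper ion gains electrons, so Cu2+ is the oxidizing agent)

The two electrons released by each zinc atom are exactly the two electrons gained by each copper ion, which is why two moles of electrons are transferred per mole of zinc.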
Basics of the Relationship between Major and Minor Scales on the Guitar

Every piece of music on the guitar has a tonal center called a tonic. The tonic is the primary pitch or chord that everything else revolves around. It's where a piece of music sounds resolved or complete and usually where the music begins and ends. Generally speaking, the tonic also determines a song's key. There are two basic types of music tonalities and keys: major and minor. If a piece of music centers on a major chord, then it's considered to be in a major key. If music centers on a minor chord, it's a minor key. For instance, if a song centers on a G chord, you say it's in the key of G.

Traditionally, music has been taught as being in either the major or minor scale. The good news is if you know the major scale, then you also know the minor scale. The minor scale is drawn from the 6th degree of the major scale. Start any major scale on its 6th degree and you have a minor scale. For example, the 6th degree in the G major scale is E. The E minor scale is simply the notes of G major starting on E, as you see here:

G major scale: G-A-B-C-D-E-F#
E minor scale: E-F#-G-A-B-C-D

The relationship between the major and minor scales (and between the 1st and 6th chords) is often described as being relative. For example, in the key of G, I and vi are G and Em. G major is the relative major of E minor, and E minor is the relative minor of G major. This relative relationship holds true in all keys. In the key of C, for example, the I chord is C major and the vi chord is A minor. They, too, are relative major and minor chords and scales. In written music, relative major and minor keys actually share the same key signature.

Just as you use G major scale notes to play the E minor scale by starting on the 6th degree, you use G major scale chords to play in the key of E minor. The following list shows the chords for both the G major scale and its relative minor, E minor. Notice how the E minor scale features the very same chords, starting on the 6th degree:

G major: G, Am, Bm, C, D, Em, F#dim
E minor: Em, F#dim, G, Am, Bm, C, D

The major scale chords are represented with Roman numerals that look like this: I-ii-iii-IV-V-vi-vii°. Rearrange the major scale with the 6th degree in the first position and you get this sequence: i-ii°-III-iv-v-VI-VII.
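Because the relative minor is just the major scale rotated to start at its 6th degree, the relationship is easy to verify in a few lines of code. Here is a minimal Python sketch (the names are illustrative):

    G_MAJOR = ["G", "A", "B", "C", "D", "E", "F#"]

    def relative_minor(major_scale):
        # Same seven notes, restarted from the 6th degree (index 5).
        return major_scale[5:] + major_scale[:5]

    print(relative_minor(G_MAJOR))
    # ['E', 'F#', 'G', 'A', 'B', 'C', 'D'] -- the E minor scale

The same rotation works for the chord list, which is why the key of E minor uses exactly the chords of the key of G.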
Venus shining brightly in the twilight sky

Twilight is the time after sunset, and before sunrise, when the sky remains bright, providing an ambient deep-blue illumination. It is caused by the scattering of sunlight around the Earth's day-night boundary by particles in the Earth's upper atmosphere, the same process which gives rise to the blue appearance of the sky in daytime.

Different types of twilight

The apparent brightness of the sky during the hours of twilight depends on many factors, including atmospheric conditions and altitude, but most importantly on the distance of the Sun beneath the horizon. For this reason, the degree of twilight is customarily defined in terms of the angular distance of the Sun below the horizon. Three classes of twilight are defined: civil twilight is that when the Sun is less than 6° below the horizon; nautical twilight is that when the Sun is between 6° and 12° below the horizon; and astronomical twilight is that when the Sun is between 12° and 18° below the horizon. When the Sun is more than 18° below the horizon, there is said to be astronomical darkness.

The names of the various classes of twilight give some indication of their history. During civil twilight, the sky remains quite obviously bright, even to a casual observer. Only a few of the brightest stars may be visible to the naked eye. The beginning and end of civil twilight approximate the times that a lay person might describe as dawn and dusk. During nautical twilight, many of the night sky's brightest stars are visible, though an observer at a dark location, for example at sea, will still be able to make out a residual background glow, especially in the direction where the Sun lies below the horizon. Observations of the Moon and bright planets are possible, though most deep sky objects will be lost in the glow of the sky. During astronomical twilight, the sky is not perceptibly bright to the naked eye, even from a dark site, but the residual glow of the sky is sufficient to affect the faintest objects that can be seen through a telescope. Bright and non-diffuse deep sky objects – in particular bright open clusters – will typically be observable, but faint diffuse objects such as galaxies may be more difficult to make out.

The times for twilight shown on the homepage of this website refer to astronomical twilight.
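Since the classes are defined purely by the Sun's angular distance below the horizon, classifying a given moment is a simple threshold test. A minimal Python sketch of the definitions above (the function name is illustrative):

    def twilight_class(sun_altitude_deg):
        # Altitude is negative when the Sun is below the horizon.
        depression = -sun_altitude_deg
        if depression <= 0:
            return "day (Sun above the horizon)"
        elif depression < 6:
            return "civil twilight"
        elif depression < 12:
            return "nautical twilight"
        elif depression < 18:
            return "astronomical twilight"
        else:
            return "astronomical darkness"

    print(twilight_class(-8))   # nautical twilight
    print(twilight_class(-20))  # astronomical darkness

Computing the Sun's actual altitude for a given time and place requires an ephemeris, which is beyond this sketch.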
Influenza causes serious illness among millions of people each year, resulting in 250,000 to 500,000 deaths. Those most at risk include infants younger than six months, because they cannot be vaccinated against the disease. Now, in a new study with mice, researchers have identified a naturally occurring protein that, when added to the flu vaccine, may offer protection to babies during their first months of life.

Says Michael Sherman, professor emeritus of child health at the University of Missouri: "Influenza vaccine works by stimulating a person's immune system to make antibodies that attack the flu virus. However, infants younger than six months do not make antibodies when given flu vaccine. This is because the immune systems of these very young babies do not respond to the adjuvant, or additive, within the vaccine that boosts the body's immune response when confronted with a virus."

The adjuvant used in most vaccines is aluminum hydroxide, or ALUM. ALUM is an additive that essentially acts as an irritant to attract white blood cells called neutrophils to the vaccination site. Neutrophils secrete the protein lactoferrin, which works with the immune system to impede the virus's ability to survive in the body.

Lactoferrin vs. Aluminum Hydroxide

However, in both premature and term infants, ALUM doesn't make immature immune cells work better. In this very young group, only the smaller amount of naturally occurring lactoferrin found near the vaccination site improves the immune response. "It is well documented that infants obtain protection against certain infections from nutrients found in breast milk," Sherman says. "Lactoferrin is the major protein in a mother's milk and boosts her infant's immune system to fight infection. In theory, we felt that we could create a vaccine by replacing ALUM with lactoferrin as an additive."

To test their hypothesis, researchers studied mice vaccinated with either the adjuvant ALUM or lactoferrin. The mice, whose ages approximated those of human infants younger and older than six months, received the H1N1 influenza virus. As reported in the journal Biochemical and Biophysical Research Communications, lactoferrin worked slightly better than ALUM as an adjuvant and also provided four to five times the protection against influenza, compared to the control group that received an influenza vaccine without an adjuvant.

"Currently, the best protection for neonatal babies is to vaccinate the mother and all those who will have close contact with the infant," Sherman says. "Our recent study was meant to test the possibility of creating a safe and effective flu vaccine for very high-risk premature infants. Now that we have, we feel that the use of a natural protein would make immunization not only possible but more accepted."

The researchers will next study lactoferrin's ability to prevent secondary infections such as pneumonia, as well as the possibility that the protein could be used as an adjuvant in other vaccines.

Michael P. Sherman, et al. "Lactoferrin acts as an adjuvant during influenza vaccination of neonatal mice." Biochemical and Biophysical Research Communications, Volume 467, Issue 4, 27 November 2015, Pages 766–770.

Image: Bill McConkey, Wellcome Images
Terri Noland, Vice President of Learning Ally, gave a webinar earlier this school year. I finally found the time to watch. Here is what she presented:

Stories leave endorphins in the brain. This can motivate a struggling reader. Students need to work on skills, but we must still give them grade-level content. Sometimes that means audio books (Learning Ally provides human-read audiobooks) or graphic novels. (We also have access to 'high interest - low vocabulary' books – google that phrase and see all that is available.) Reading achievement is directly linked to motivation – which one causes the other is not clear.

Terri presented these research-based strategies:
1. Provide access to audio books.
2. Model reading and reading behaviors. (Use the 5-Word Rule: reading the first page of a book, if the student cannot read or understand the first 5 words, he or she cannot read it independently. Further, if the struggling reader can understand the book but not read it, he can enjoy it as an audio book.)
3. Read aloud. Many students consider this their favorite part of the school day. Reading aloud allows you to provide your child with a variety of content.
4. Incorporate goal setting. Help the child create personal and manageable goals.
5. Provide access to a wide array of materials. They say that a classroom library should have 7 books per student and a school library should have 20 books per student. At home, we must provide children a variety of reading materials.
6. Create time and space – a worthy goal is 20 minutes per day.
7. Provide opportunity for self-selection – a must.
8. Allow time for discussion.
9. Reading has to be relevant.
10. Provide specific feedback, such as "I really like how you do..." rather than "Good job."
Juvenile macular degeneration is the term for several inherited eye diseases -- including Stargardt's disease, Best disease, and juvenile retinoschisis -- that affect children and young adults. These rare diseases cause central vision loss that may begin in childhood or young adulthood. Unfortunately, there is no treatment available for these diseases, which are caused by gene mutations passed down in families. Visual aids, adaptive training, and other types of assistance can help young people with vision loss remain active. Researchers continue to look for ways to prevent and treat juvenile macular degeneration (JMD).

Genetic counseling can help families understand these eye disorders and sort out the risks of passing them on to their children. Counseling also helps families understand how their loved one's vision is affected. These diseases damage the macula, which is the tissue in the center of the retina at the back of the eye. The macula provides our sharp, central vision so we can do things like read and drive. It also allows us to see color and helps us recognize faces. (Age-related macular degeneration is a leading cause of vision loss in older adults.) Below is an overview of some of these hereditary eye diseases that lead to juvenile macular degeneration.

Stargardt disease is the most common form of juvenile macular degeneration. It's named after German ophthalmologist Karl Stargardt, who discovered it in 1901. Stargardt disease affects about one in 10,000 children in the U.S. Although the disease starts before age 20, a person may not notice vision loss until age 30 to 40.

Signs of Stargardt disease. The condition can be diagnosed by yellow-white spots that appear in and around the macula. If the spots appear throughout the back of the eye, then it is called fundus flavimaculatus. These deposits are an abnormal buildup of a fatty substance produced during normal cell activity.

Stargardt disease symptoms. Symptoms include difficulty reading and gray or black spots in the central vision. Loss of vision occurs gradually at first and affects both eyes. Once vision reaches 20/40, the disease progresses more rapidly, eventually reaching 20/200, which is legal blindness. Some people lose vision to 10/200 very quickly over a few months. Most people will have vision loss ranging from 20/100 to 20/400 by age 30 to 40.
Have students answer the following questions in their spirals: "How do you argue? What does it mean to argue? What is evidence?" This will get the students to start thinking about the skills needed to argue. I will have the students share their thoughts with their Shoulder Partners and then as a class.

I do not want to get too heavy into the particulars of writing an argumentative essay; I just want to get the students' feet wet with doing the skill. So, to begin, I will take all of their thoughts from the advanced organizer and use them to guide our lesson. First of all, I will define argumentative writing. I want the students to have a basic understanding of what it is and the vocabulary used within the concept. To do this, I will display the Argumentative Writing power point and have the students copy down the definitions onto the next blank page in their spiral. I will go over argument, claim, and evidence. These three terms are tier 3, or content specific, vocabulary words, so the completion of the task is dependent upon the understanding of these terms.

Next, I will ask the students how they design a good argument to get something they want from their parents. What do they say to get their parents to understand their point of view and to see they are right? I want the students to see the process we use to argue. First, we state our claim. Then, we provide evidence for that claim and explain why that evidence supports our claim. This is the process we take when we are writing an argument as well. First, we state our claim (thesis); then, we provide evidence (from text) to support our claim; and lastly, we explain our evidence and how it supports our claim. I will have the students copy down these notes into their spiral so they have them to use as a reference when they are writing.

Next, I want to show the students a model for the writing. Because our writing today will be a shorter piece and because it will be the students' first time doing argumentative writing, I want to provide them with a good model. I will display the piece of writing, but also provide them with a copy. This is a student sample of the task we will have to write later in the lesson. First, I will read the prompt to the students: "How has the Gibb Street Garden changed the perspective on life of one of the characters from the novel Seedfolks? Provide text evidence to support your claim." Next, I will ask a student to restate the prompt, explaining what they have to do. I do this to ensure the students understand what the question is asking and, if there are multiple parts to the prompt, that they understand they need to address them as well. I will then read through the student sample aloud. I will not mark anything; I will just read through it. Then, I will ask the students to help me identify and analyze the argument. Is there a claim made? We will underline the claim and clarify what it says. Then, I will ask the students to identify and label the evidence. We will underline the evidence and discuss what it states. Finally, I will ask the students to underline the explanation of the evidence. I will ask them why this piece is important. I often see students provide evidence but forget to explain that evidence and how it supports the claim. This is key in argumentative writing, or writing in general. This process will allow the students to see how an argumentative piece is constructed. It will hopefully provide them with a good model when they go to write their own.
We will wrap up with discussion about the piece and whether or not it does a good job answering the prompt.

It is time for the students to get to work! I will have the students respond to the same prompt I used during instruction: "How has the Gibb Street Garden changed one of the characters in the novel?" There are many characters to choose from within the novel, so they have plenty of options left. It also provides them with a concrete example to use, because this is a newer skill and/or concept.

I will have the students begin by brainstorming. I want them to remember that brainstorming is an important step in writing and should always be used to help us prepare for the task. To do this, I will have them use loose-leaf paper to list at least three characters they want to consider for the piece. Then, I will have them brainstorm the changes each one of those characters went through. This will allow them to see which character is their best option. It also gets them practicing with the text to provide and look for evidence.

Finally, once I have approved their brainstorming list, I will have them begin their rough drafts. I may need to provide assistance with starting their introductions. I am going to see how they do, and if I notice it is more than one or two students, I may stop to teach a mini-lesson on introductions. The students all have writing experience, so I am going on the assumption that they know how if they are pushed to do it. I will allow the students time to draft. As they are working, I will monitor their work, provide assistance, and check for word choice, sentence structure, etc. The more guidance I can give them in the drafting process, the easier my life is when editing! I'll have the students work on this piece for homework.

To help the students process their own learning and to assess their understanding, I will have them complete a Closure Slip. I want the students to gain an understanding of argumentative writing: can they explain the steps needed to build an argument? I am expecting the students will be able to identify the basic steps. This will help me when deciding what path I need to take to further and deepen their skills.
Measurement of Mass: Mass is a basic property of matter. It does not depend on the temperature, pressure or location of the object in space. The SI unit of mass is the kilogram (kg). While dealing with atoms and molecules, the kilogram is an inconvenient unit. In this case, there is an important standard unit of mass, called the unified atomic mass unit (u), which has been established for expressing the mass of atoms:

1 unified atomic mass unit = 1 u = (1/12) of the mass of an atom of the carbon-12 isotope (including the mass of its electrons) = 1.66 × 10⁻²⁷ kg

Range of Masses: The masses of the objects we come across in the universe vary over a very wide range. These may vary from the tiny mass of an electron, of the order of 10⁻³⁰ kg, to the huge mass of the known universe, about 10⁵⁵ kg.

Measurement of Time: To measure any time interval we need a clock. We now use an atomic standard of time, which is based on the periodic vibrations produced in a cesium atom. This is the basis of the cesium clock, sometimes called an atomic clock, used in the national standards. Such standards are available in many laboratories. In the cesium atomic clock, the second is taken as the time needed for 9,192,631,770 vibrations of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom. The vibrations of the cesium atom regulate the rate of this cesium atomic clock just as the vibrations of a balance wheel regulate an ordinary wristwatch or the vibrations of a small quartz crystal regulate a quartz wristwatch.
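Both definitions reduce to simple arithmetic, as this minimal Python sketch shows (constants rounded as in the text above):

    U_TO_KG = 1.66e-27                      # 1 unified atomic mass unit, in kg
    CESIUM_VIBRATIONS_PER_SECOND = 9_192_631_770

    def atomic_mass_to_kg(mass_in_u):
        # Convert a mass expressed in u to kilograms.
        return mass_in_u * U_TO_KG

    print(atomic_mass_to_kg(12))            # one carbon-12 atom: ~1.99e-26 kg

    # Duration of a single cesium vibration: the reciprocal of the count
    # that defines one second, roughly 1.09e-10 s.
    print(1 / CESIUM_VIBRATIONS_PER_SECOND)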
Differential Equations For Dummies

To confidently solve differential equations, you need to understand how the equations are classified by order, how to distinguish between linear, separable, and exact equations, and how to identify homogeneous and nonhomogeneous differential equations. Learn the method of undetermined coefficients to work out nonhomogeneous differential equations.

Classifying Differential Equations by Order

The most common classification of differential equations is based on order. The order of a differential equation simply is the order of its highest derivative. You can have first-, second-, and higher-order differential equations.

First-order differential equations involve derivatives of the first order, such as in this example:

y' + y = x

Second-order differential equations involve derivatives of the second order, such as in these examples:

y'' + y' = 5x
y'' - 2y' + y = 0

Higher-order differential equations are those involving derivatives higher than the second order (big surprise on that clever name!). Differential equations of all orders can use the y' notation, like this:

y''' + 2y'' - y' = 5x

Distinguishing among Linear, Separable, and Exact Differential Equations

You can distinguish among linear, separable, and exact differential equations if you know what to look for. Keep in mind that you may need to reshuffle an equation to identify it.

Linear differential equations involve only derivatives of y and terms of y to the first power, not raised to any higher power. (Note: This is the power the derivative is raised to, not the order of the derivative.) For example, this is a linear differential equation because it contains only derivatives raised to the first power:

y' + 3y = sin x

Separable differential equations can be written so that all terms in x and all terms in y appear on opposite sides of the equation. Here's an example:

y' = xy

which can be written like this with a little reshuffling:

(1/y) dy = x dx

Exact differential equations are those where you can find a function whose partial derivatives correspond to the terms in a given differential equation.

Defining Homogeneous and Nonhomogeneous Differential Equations

In order to identify a nonhomogeneous differential equation, you first need to know what a homogeneous differential equation looks like. You also often need to solve one before you can solve the other.

Homogeneous differential equations involve only derivatives of y and terms involving y, and they're set to 0, as in this equation:

y'' + 4y' + 3y = 0

Nonhomogeneous differential equations are the same as homogeneous differential equations, except they can have terms involving only x (and constants) on the right side, as in this equation:

y'' + 4y' + 3y = 2x

You also can write nonhomogeneous differential equations in this format: y'' + p(x)y' + q(x)y = g(x). The general solution of this nonhomogeneous differential equation is

y = c1y1(x) + c2y2(x) + yp(x)

In this solution, c1y1(x) + c2y2(x) is the general solution of the corresponding homogeneous differential equation:

y'' + p(x)y' + q(x)y = 0

And yp(x) is a specific solution to the nonhomogeneous equation.

Using the Method of Undetermined Coefficients

If you need to find particular solutions to nonhomogeneous differential equations, then you can start with the method of undetermined coefficients. Suppose you face the following nonhomogeneous differential equation:

y'' + y = x

The method of undetermined coefficients notes that when you find a candidate solution, y, and plug it into the left-hand side of the equation, you end up with g(x).
Because g(x) is only a function of x, you can often guess the form of yp(x), up to arbitrary coefficients, and then solve for those coefficients by plugging yp(x) into the differential equation. This method works because you're dealing only with g(x), and the form of g(x) can often tell you what a particular solution looks like.
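As a worked illustration with the equation y'' + y = x above: g(x) = x suggests trying a candidate of the same form, yp = Ax + B, where A and B are the undetermined coefficients. Since yp'' = 0, plugging in gives yp'' + yp = Ax + B, and matching this against g(x) = x forces A = 1 and B = 0, so yp = x. The corresponding homogeneous equation y'' + y = 0 has the general solution c1 cos x + c2 sin x, so the general solution of the nonhomogeneous equation is y = c1 cos x + c2 sin x + x.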
PLO's: Features (Writing and Representing) - Uses adjectives for description and uses strategies to spell unfamiliar words.

THEME: ADJECTIVES AND KEY WORDS

1) Listen to track U6 16, then number and repeat pages 90 and 91, S1.
2) Pass to the front and choose one card. Each card will show one use of adjectives.
3) Each team will explain the use they chose and post examples on the board with Post-its.
4) Use the adjectives and the key words on page 94 to make sentences in your notebook.
5) Underline the adjectives in orange, the nouns in blue, and the verbs in red.

PURPOSE: Describe using adjectives and use strategies to spell unfamiliar words (key words).

1) Make a Pic Collage to represent the adjectives and key words.
2) Publish it on Facebook, YouTube, a blog, or Twitter.

1) Answer pages 84, 85 and 93.
An inversion is a situation in which the layers of the atmosphere do not act normally, inhibiting normal weather processes and often trapping smog, smoke, and clouds close to the ground. The most common form of inversion is a temperature inversion, although inversions can take other forms as well. Essentially, an inversion can be thought of as a flip in the natural order of things, suppressing convection and other processes which allow air to cycle across the earth.

In normal conditions, hot air close to the ground slowly rises upwards, pushing through a layer of cooler air. When a temperature inversion occurs, cooler air gathers near the ground, with a layer of hot air pressing down on top of it. This forces clouds, smog, and pollution to be trapped near the ground, because they cannot waft upwards, and a temperature inversion can sometimes break explosively, with severe thunderstorms or tornadoes.

One classic form of temperature inversion is the marine inversion, caused by cool air from the surface of the ocean being pushed onto shore. Marine inversions explain why many coastlines around the world are foggy. Inversions also commonly appear in valleys, where warm air presses down on cooler air in the valley. Since many urban areas are in valleys or near the ocean, they often suffer from extreme pollution made worse by inversions.

A weather inversion does not just impact the weather in the surrounding area. Inversions can also affect human health, as in the case of an inversion which traps pollution, and they can also impair visibility by forcing heavy cloud cover close to the ground. Inversions can also play funny tricks with radio signals and sounds; radio signals are often stronger during an inversion, for example, and the heavy fog characteristic of marine inversions can do peculiar things to noises, making them seem further or closer than they really are.

Inversions ultimately resolve themselves, sometimes quite abruptly, and sometimes they appear and disappear several times over the course of a day. In other instances, an inversion may hover for several days, often leading to concerns about air quality and potentially dangerous weather conditions. Capping inversions, in which a layer of hot air traps a layer of cooler air, are notorious in the Midwest, because when the cap finally breaks, a huge amount of energy can be released, resulting in severe weather. The "cool" air in such inversions is often actually quite warm, so these inversions can feel very oppressive and tense until they finally disappear.
Part 2: Inverse Variation

Inverse variations are excellent vehicles for investigating nonlinear functions. A number of real-world phenomena are described by inverse variations, and they are typically the first functions that students encounter that do not cross either axis on a graph.

An inverse variation is a situation in which one quantity increases while another quantity decreases -- such as the number of diners and serving size for a given amount of food, or speed and travel time for a given distance. The product of the quantities remains constant; that is, as one quantity doubles, the other quantity is cut in half. A caterer who takes a watermelon to a picnic knows that each person will receive more watermelon if there are fewer attendees, but each person will receive less watermelon if there are more attendees. That's because the amount of watermelon for each person varies inversely as the number of attendees. The more people, the less each person gets. A truck driver knows that driving at 75 miles per hour will get her to her destination faster than driving at 65 mph, because time is inversely proportional to speed. As her speed increases, her travel time decreases.

- The length (l) varies inversely as the width (w) for a rectangle of constant area (A); that is, A = lw.
- The depth (h) of oil in a cylinder varies inversely as the area of the cylinder's base (B); that is, as the cylinder becomes narrower, the oil becomes deeper, or V = Bh.

Inverse variation: When the ratio of one variable to the reciprocal of the other is constant (i.e., when the product of the two variables is constant), one of them is said to vary inversely as the other; that is, when y = c/x, or xy = c, y is said to vary inversely as x. (Source: James, Robert C. and Glenn James. Mathematics Dictionary (5th edition). New York: Chapman & Hall, 1992)

Two objects that vary inversely are also said to "vary indirectly" or to be "inversely proportional."

Alternative definition: One quantity is inversely proportional to another when the product of the two quantities is constant. An inverse proportion can be described by an equation of the form xy = k, where k is the constant of proportionality. The equation of an inverse proportion can also be written in the form y = k/x. (Source: SIMMS Integrated Mathematics: A Modeling Approach Using Technology; Level 1, Volume 2. Simon & Schuster Custom Publishing, 1996)

Role in the Curriculum

Inverse variation provides a rich curricular complement to direct variation. As teacher Peggy Lynn says in the Workshop 7 video, "I like teaching these two topics in the same context because of their relationship to each other." The constant of proportionality in a direct variation represents a quotient; by contrast, the constant of proportionality in an inverse variation represents a product. "Division and multiplication go hand in hand, so the students can relate to that," Peggy says. Direct variation and inverse variation are related topics, and it makes sense to study them in parallel. Because they have striking differences, the contrast allows students to gain a deeper understanding of various functions.

"Students should have experience in modeling situations and relationships with nonlinear functions," according to the PSSM. Inverse variation allows students to consider nonlinear functions. The graph of an inverse variation never crosses the x-axis or the y-axis, nor does it pass through the origin.
"[Students] should connect their experiences with linear functions to their developing understandings of proportionality, and they should learn to distinguish linear relationships from nonlinear ones," the PSSM states. When teaching inverse variation - as with direct variation and other activities involving mathematical modeling - asking students to gather data helps to spark their interest. Students are required to think more when investigating a phenomenon using a hands-on approach, though they often don't realize they're learning because they're having fun. In addition, when students are exposed to "messy data," they must make thoughtful decisions in order to identify functions that fit the data well enough to be useful in making predictions. Making sound mathematical decisions is the basis of effective modeling, so providing opportunities for students to make choices helps to develop their analytical abilities. For real-world explorations involving inverse variation, it will be necessary to collect enough data to make the nonlinear pattern obvious. Too few points may result in a pattern that appears to be linear. Once sufficient data have been collected, students can use tables and graphs to represent the data. Finally, students should compare direct variation with indirect variation, illuminating the differences and highlighting the similarities. For instance, they might describe the relationship between the general equations y = kx and . They should recognize that the constant of proportionality in the direct variation is a quotient of the variables, while the constant of proportionality in the inverse variation is a product. Or they might consider the graphs, since a direct variation is linear and passes through the origin, while an inverse variation is a curve with no x- or y-intercepts. Making these comparisons will allow students to understand the differences within a family of functions.
Learn how the application and direction of force applied to objects demonstrate increased efficiency with simple machines. Discover how an inclined plane helps exert a mechanical advantage, and how a wedge, lever, and pulley are designed around the concepts of input and output forces. Establish engineering skills at the basic level by learning the design of simple machines.

A simple machine helps make work easier by changing the amount or direction of the force being applied. There are six main types of simple machines that can be used to increase efficiency or change the direction of a force.

Inclined Plane: An inclined plane is a flat sloped surface used to produce a mechanical advantage by allowing the user to exert their input force over a longer distance.

Wedge: Wedges are simple machines that are thick at one end and taper to a thin edge. A wedge can be thought of as two inclined planes back to back.

Screw: Screws are composed of an inclined plane wrapped around a cylinder. The closer the threads are on a screw, the greater the mechanical advantage.

Lever: A lever is a bar that pivots, or rotates, on a fixed point. Levers can be used to change the direction of the input force.

Wheel and Axle: The wheel and axle is a simple machine that consists of two cylindrical objects fastened together. The larger of the two is called the wheel and the other the axle.

Pulley: Pulleys are simple machines made up of a grooved wheel with a rope or cable wrapped around it. A pulley can be used to change the direction of the input force or produce a mechanical advantage.
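Mechanical advantage has a simple quantitative form: the ideal mechanical advantage (IMA) is how many times the machine multiplies the input force, ignoring friction. A minimal Python sketch for two of the machines above, using the standard ideal formulas:

    def inclined_plane_ima(slope_length, rise_height):
        # Exerting the input force over the longer slope multiplies it
        # by slope_length / rise_height.
        return slope_length / rise_height

    def lever_ima(effort_arm, load_arm):
        # A lever multiplies force by the ratio of its arm lengths
        # about the pivot.
        return effort_arm / load_arm

    print(inclined_plane_ima(6.0, 1.5))  # 4.0 -- a 6 m ramp rising 1.5 m
    print(lever_ima(2.0, 0.5))           # 4.0 -- effort arm 2 m, load arm 0.5 m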
The Destructive Power of Water

Many of the features of the Earth's surface have been formed by the cutting and eroding action of moving water. When you think about how hard rock is compared to water, it's easy to believe that it must have taken hundreds of thousands or even millions of years for water to shape the land. This ignorance about how rapidly water cuts rock has cost people their lives.

In April 1987 a 300-foot section of the 540-foot-long New York Thruway bridge collapsed into Schoharie Creek. The moving waters of the creek had created turbulence around the bridge's pilings. Within a few years, this turbulence cut away the rock in which the pilings were anchored, and with nothing to hold it up, the bridge collapsed. In June 1987 a section of the 2,800-foot-long Clearwater Pass Bridge in Florida dropped 10 inches. Divers sent down to inspect the bridge pilings found that more than 10 feet of rock had been scoured away from the bridge's pilings.

Similar instances of moving water cutting away solid rock in a short period of time can be found near dams. When many of these structures were built, it was assumed that it took tens of thousands of years for water to erode solid rock. But experience has now shown us that water is able to do in a few years or even a few hours what scientists once thought took thousands or millions of years. Another lesson to be learned is that we don't need tens of thousands of years to form the water-carved features of our Earth!
HIV stands for Human Immunodeficiency Virus. The virus can only survive in human cells. Once HIV enters the body, it infects white blood cells (CD4 cells) and begins to weaken a person's immune system, leaving them unable to fight disease and infection.

HIV is transmitted through five body fluids:
- Blood
- Semen (including pre-ejaculate)
- Vaginal fluid
- Anal fluid
- Breast milk

HIV cannot be transmitted through:
- Casual contact, such as hugging or shaking hands
- Sharing food, drinks, or utensils
- Saliva, sweat, or tears

HIV Testing

Approximately 80,500 Canadians have been diagnosed with HIV, and 1 in 5 people living with the illness don't know their status. Because HIV often doesn't show any noticeable symptoms for many years after it enters the body, many people do not know they have been infected. Testing is a critical tool in addressing HIV. Regular screening, as a routine part of personal health care, can significantly help reduce the number of new infections in the community. Learn more about testing on our Testing page.

HIV Treatment

There is no cure for HIV, but with proper treatment and care there's no reason why you can't thrive, achieve your aspirations, and live a long and happy life. You are not alone. Find out where to turn next, who can help, and what you need to know by visiting our Treatment page.

HIV vs. AIDS

People often get confused between HIV and AIDS. HIV is something that can be measured. It is a virus that enters the body and weakens the immune system. You can be tested for HIV. AIDS stands for Acquired Immunodeficiency Syndrome. An AIDS diagnosis is a combination of being HIV positive and having one or more opportunistic infections, such as cancer. AIDS is a diagnosis, a medical term to identify illness. HIV and AIDS are two very different things. With adequate support and treatment to keep the immune system strong, people can live well with HIV for many years.
Our capacity for complex social interactions is a defining feature of humanity, but how did it evolve? It seems like it would have been a slow, gradual process, but a new statistical model suggests something very unusual happened 52 million years ago.

The last few decades have seen remarkable strides in our understanding of social evolution in species like bees and birds. In these species, it appears that complex social structures develop gradually in many steps. Once-solitary individuals will first pair off with each other, or live with a small group of their own offspring. It's from these initial, family-like units that eventually larger and much more complex societies emerge, and the key point is that you can't go straight from one individual doing its own thing to an entire complex, interconnected society. And yet, according to new research by Susanne Shultz and her team at Oxford, the ancestors of humans and most other primates likely really did go from solitary to social virtually - and, as it turns out, literally - overnight.

Of course, as they themselves point out, "social behaviors do not fossilize", so it isn't necessarily easy to track the deep origins of our own social evolution. They used a mix of modern observations and statistical modelling to give us our best idea yet of where the human capacity for social interaction came from. Until now, a common explanation for primate social structure has been the surrounding environment, in that local food scarcity will force individuals to band together to pool resources and survive. The problem with that, according to Shultz and her team, is that modern primates live in the same social groups regardless of where they live. It's conceivable that ancient primates really did band together because of food scarcity, but if that were the only reason underpinning these complex societies, it seems very strange that primates like baboons and macaques would live together in exactly the same way whether food is plentiful or scarce.

Instead, the researchers looked for another mechanism that could cause primates to become more social in as few evolutionary steps as possible. They also built a statistical model to simulate what might happen if the most recent common ancestor of all today's monkeys and apes - a creature that lived roughly 50 million years ago - were to start banding together either in pairs or groups. To their surprise, the statistical model indicated that it actually made far more sense for this ancient primate to go straight from solitary to loosely affiliated groups of both genders, skipping pairs entirely. That's unlike the evolution of social groups we've seen in other species, but it makes sense when you consider the most likely motivation behind this switch.

Around 52 million years ago, our evolutionary ancestor switched from being a largely nocturnal creature to one that was active during the day. While a creature would want to be on its own at night for maximum sneakiness, a small primate would enjoy safety in numbers during the day as the best way to stay safe from advancing predators. Pairs wouldn't be much use against a large predator, which explains why primates - uniquely, as far as we can tell - went straight to forming large groups. Of course, not all primates live in such groups - some monkeys, for instance, live in pairs, while gorillas organize themselves in harems with a single adult male and lots of females.
The researchers believe both of these arrangements came much later, around 16 million years ago. They argue that the loose affiliations that sprang up 52 million years ago quickly developed into more stable societies and, in turn, into cooperative behaviors. This evolutionary jump from solitude to society may well have been a key factor in the evolution of many uniquely primate traits, including our own advanced intelligence. So, basically, everything we humans have now might well be because, 52 million years ago, some primates decided they'd had enough of working nights.
Oftentimes, physicians are like detectives, following clues and putting together observations and notes to figure out what is wrong and how it can be fixed. Students will discuss various diagnostic tools and techniques used by doctors to diagnose their patients. They'll discuss simple and complex methods and modern tools (such as CT scans and MRIs) used by medical professionals. Additional activities include sharing articles and assigning articles for next time, organ research, Inspiring Stories, and journal writing.

- Students will see how doctors diagnose patients using diagnostic tools and resources.
- Students will be aware of different methods and machines doctors in the U.S. use every day to diagnose patients.

- In what ways are doctors like detectives? Examples: Both look for clues, both solve problems
- What kind of low-tech tools do doctors use to diagnose patients? Examples: their eyes, hands, ears, stethoscope, hearing test
- What kind of high-tech tools do doctors use to get clues? Examples: X-rays, blood tests
- Name some easy, non-invasive ways to figure out what is wrong. Examples: taking a patient history, doing a strep test, looking at a rash
- Name more difficult, invasive ways to figure out what is wrong. Examples: surgery, biopsy, endoscopy (camera inserted down the throat)

Part 1: High Tech Scans

Print out the images from the PowerPoint or show the images on a computer screen or projector.
- Which part of the body is the image showing? Why do you think so? (Allow several guesses before revealing the correct answer.)
- Why are images like these helpful to doctors? Which images were best and easiest to read? Example: they allow doctors to see inside the body without invasive surgery.

Part 2: Not So Gross Anatomy

Give each student, or pairs of students, a Not So Gross Anatomy body outline, organ sheet, glue stick, and scissors. Have students cut out the organs and glue them where they think they belong. When everyone has completed their guesses, pass out or show the answer key; then review the names of the body parts. Discuss the following:
- How did you know where to place the organs?
- Were any organs unfamiliar?
- Which ones were surprising?
Hand out a prize for the most accurate Not So Gross Anatomy body, if possible. Collect name badges from students.

- Students share articles with the group: Have the students assigned to bring in articles share the story with the group. Have the student tell why they chose the article. Ask the group for their thoughts about the topic.
- Further organ research: Have students research the different organs in the library or on the internet to learn more about their role in the body and what function(s) they perform.
- Inspiring Stories: Story of the week: Adam Aponte, MD
- Journal writing: Have the students write about how they felt seeing the various scans and have them imagine how they'd use these types of tools in their own practice. Have them imagine why they would or would not use these tools as opposed to other diagnostic methods, considering, for example, the cost of the machine or the cost of the co-pay to the patient.
- Article to share with the group next time: Assign one or two students to find an interesting article having to do with medicine or being a physician. Have the students prepare to share what they've read and facilitate a short discussion with the group about the article or topic during the next Premed Club meeting.
2009 Winter Outlooks for Temperature (left) and Precipitation (right) issued by NOAA's Climate Prediction Center on October 15, 2009.

Bitter cold temperatures and blizzards of historic proportions prompted the questions: Why were there so many historic snowstorms in the mid-Atlantic region this winter? Are they evidence that global warming isn't happening? No, the globe is warming. But the real story behind the mid-Atlantic's winter isn't about climate change, it's about climate variability. Climate variability, the term scientists use, explains why record-breaking snowstorms and global warming can coexist. In fact, many of the weather events observed this winter help to confirm our understanding of the climate system, including links between weather and climate.
Difference – E. coli vs Klebsiella pneumoniae

E. coli and Klebsiella pneumoniae are two types of bacteria which act as causative organisms for various infections in our body. Humans are constantly exposed to various types of pathogenic microorganisms. Some of these organisms might not do any harm to us, but most of them, in a lethal number, can result in various pathological conditions, affecting almost all the systems in our body. E. coli and Klebsiella pneumoniae are two such organisms, and although their sites of invasion and the diseases they cause may vary, many people tend to confuse these two bacteria, probably due to a lack of knowledge about what they are and their pathological significance in the human body. The main difference between E. coli and Klebsiella pneumoniae is their site of infection; E. coli invades the gastrointestinal system and urinary tract, while Klebsiella pneumoniae targets the respiratory system.

This article explores in detail,
1. What is E. coli? How does it Spread? Diseases Caused by E. coli, Treatment and Prevention
2. What is Klebsiella pneumoniae? How does it Spread? Diseases Caused by Klebsiella pneumoniae, Treatment and Prevention
3. Difference Between E. coli and Klebsiella pneumoniae

What is E. coli

E. coli is a part of the natural bacterial flora living in human intestines, with several strains differing in their pathogenicity and genetic make-up. Most of these strains are harmless to humans except for a few, including O157:H7, which are highly invasive, affecting humans in dangerous ways, with the potential of causing anemia, renal failure, and even death.

E. coli can enter the human body through food and water which has been contaminated with stools of an infected person. For example, food items like meat, milk or dairy products and raw fruits and vegetables are most likely to get contaminated with this bacterial pathogen due to unsafe or unhygienic ways of preparation and consumption. Other than that, it can spread through direct contact, especially when a person doesn't wash his hands after defecation – the residual bacteria can enter someone else's body by touching contaminated objects.

Patients with an E. coli infection will usually experience bloody diarrhea, abdominal cramps, loss of appetite, nausea and vomiting, which might be associated with a mild to moderate fever, often appearing 2-3 days after the initial exposure. Children are more likely to get severely ill than adults. Patients with weakened immune systems, including those with alcoholism, diabetes mellitus or malignancies, usually experience more severe attacks than the rest.

Your doctor will take a complete history from you about any exposure to contaminated food or water, travel history or any direct contact with an infected person. He will also assess you to elicit signs such as abdominal tenderness due to ongoing infection, which will then be followed by investigations to isolate E. coli bacteria in a stool sample – a stool culture.

It is important to carry out immediate treatment, especially to relieve pain and replace fluid and electrolyte losses, and specific antibiotic therapy can be initiated once the diagnosis is confirmed. Untreated patients can become dehydrated; therefore, a high fluid intake is always recommended. Patients with ongoing fluid losses will be given fluids intravenously, since the main concern is to prevent shock from loss of fluid and electrolyte imbalances.
Practicing safe food preparation techniques is the key to preventing E. coli infections.
- Proper food handling and avoiding consumption of raw fruits and vegetables unless hygienically washed and cleaned.
- Washing hands before preparing food and eating.
- Washing hands after using the washroom.
- Avoiding cross-contamination by using properly cleaned utensils.
- Avoiding non-pasteurized dairy products, especially milk.
- Using boiled and chlorinated water.
- Staying away from food preparation when you have diarrhea or vomiting associated with any of the signs and symptoms of an ongoing infection.

What is Klebsiella pneumoniae

This is a type of Gram-negative bacterium which is non-motile, encapsulated and facultatively anaerobic in nature, with a rod-like shape. It is also able to ferment lactose on a medium of MacConkey agar. Although Klebsiella pneumoniae is a part of the normal flora in the oral cavity, skin and colon, it can result in pathological changes in the lungs in case of aspiration or inhalation. It is known to spread from one person to another as a nosocomial infection and can easily invade the alveoli of the lungs, leading to blood-mixed sputum in affected individuals.

As far as the clinical significance of this pathogenic organism is concerned, the corresponding infections are usually seen in patients with poor immunity or some sort of immune suppression. In fact, most affected individuals fall into the category of old or middle-aged men with chronic debilitating illnesses such as diabetes, chronic alcoholism and its consequent chronic liver disease, chronic obstructive pulmonary disease (COPD), steroid therapy and renal failure. Moreover, patients who are being treated in an ICU setting are also at a high risk of getting infected by Klebsiella pneumoniae, which accounts for more than 30% of ICU deaths that occur as a result of hospital-acquired pneumonia.

Other common respiratory conditions caused by Klebsiella include bronchopneumonia and bronchitis, which ultimately increase the susceptibility to other lung conditions such as lung abscess, cavity formation, empyema and pleural adhesion. It can also result in thrombophlebitis, urinary tract infection, cholecystitis, upper respiratory tract infections, wound infections, osteomyelitis, meningitis and bacteremia, which can eventually end up in septicemia, invading the blood.

The presentation of patients affected by Klebsiella pneumoniae may vary from one person to another depending on the primary pathology and the strength of the immune system, yet the mortality rate is said to be very high even with treatment with specific antibiotics.

According to the latest research studies, this particular species of bacteria is found naturally in soil, with the capacity to fix nitrogen in anaerobic conditions, and is known to play a major role in increasing the harvest of various crops, including paddy and herbs.

Difference Between E. coli and Klebsiella pneumoniae

Both these types are a part of the natural flora in our body, but their rapid increase in number due to various reasons can result in severe complications which can even lead to death if not managed promptly and properly. The major difference between E. coli and Klebsiella pneumoniae lies in the fact that they invade different sites and act in different ways as far as the basic pathophysiology is concerned.
E. coli is an organism that mostly invades our gastrointestinal system (specifically the colon) and urinary tract (giving rise to UTIs). Klebsiella pneumoniae usually targets the respiratory system – the alveoli of the lungs.

Signs and Symptoms

The signs and symptoms of these bacterial invasions can vary depending on the affected sites, but in immunosuppressed patients both infections can appear at once, which may require extensive investigations to establish the diagnosis.

“EscherichiaColi NIAID” by Rocky Mountain Laboratories, NIAID, NIH – NIAID (Public Domain) via Commons Wikimedia
“Klebsiella pneumoniae 01” (Public Domain) via Commons Wikimedia
It's a basic idea, but it makes a whole lot of sense: native plants are better for native birds than introduced flora. More specifically, because these trees and shrubs have evolved with the local wildlife, they harbor more insects or yield more berries and fruit than non-native plants, providing greater amounts of food for certain critters. This seemingly obvious idea has been buttressed by years of research by Doug Tallamy, whose published work has shown that these plants host many more caterpillars, and that yards with more native vegetation host more native-bird species. But somewhat surprisingly, there hasn't been much in the way of dedicated studies linking this previous research to the diet of a particular bird species.

Now, a new analysis in Biological Conservation, released online this month, shows that yards filled with native vegetation do indeed offer more food for nesting birds than non-indigenous species. In a two-year survey of Carolina Chickadees around Washington, D.C., scientists connected songbird diets to the plants they source their food from. The results clearly support Tallamy's previous work showing that native gardens are packed with caterpillars and other insects during the time when many avians are breeding.

“Quantifying insects as bird food is difficult,” Desiree Narango, the University of Delaware PhD student who led the research, says. (Tallamy was a co-author on the paper.) To start, she and her team catalogued the origin of each tree and shrub species around 97 suburban homes, selected through the Smithsonian’s Neighborhood Nestwatch Project. They then scoured the leaves of 16 plants at each site for caterpillars and continued to track which of the flora received the most visits from chickadees. They also kept tabs on nest building on and near the sites throughout the chickadees' breeding window, which typically falls between April and early June in the region.

After analyzing the data, Narango found that Carolina Chickadees nested more often in yards with an abundance of native trees than in yards with more introduced species. Oaks, cherries, elms, and maples were among the top performers because they housed the most moth and sawfly larvae – important food sources for birds trying to rear young. And when it may take 6,000 to 9,000 caterpillars in a season to raise a brood of five chickadees (as previous studies have shown), the importance of natives becomes even more apparent.

“Carolina Chickadees are a model species because they’re generalist foragers,” Narango says – meaning they’ll scrounge for food most anywhere. By gauging their preferences, she was able to get a sense of what plants other common suburban songbirds might lean toward.

Most of the vegetation Narango searched was introduced, and thus held one caterpillar or fewer per plant. But in native trees like oaks, she found scores of larvae – often 20 or more in the space of five minutes. These numbers match Tallamy’s “Lepidoptera index,” which ranks different types of plants by the diversity of caterpillars they foster. For example, the list holds that some oaks have up to 534 species of moths and butterflies (recently updated to 557); Prunus like wild cherry and plum can yield up to 456 species; and maples support up to 297 species. While the non-native cousins of some of these trees do support some larvae and other food, they aren’t nearly as productive. Unrelated, introduced species are even worse.
“Eighty-six percent of the country is privately owned, so when you create landscapes out of [introduced] Bradford pear and crape myrtle, there are almost no caterpillars,” Tallamy says. “That’s not just the end of reproduction for chickadees, but of all the birds out there that need those insects.”

This study, combined with Tallamy’s index, has the potential to be a huge tool for education by helping people make more informed planting choices, Roarke Donnelly, director of the environmental studies program at Oglethorpe University, says. But there’s one problem: finding nurseries that offer natives. “When I go looking for plants I know birds use, I can’t find them, either as seed or seedling,” Donnelly says. “I don’t think growers know there’s a burgeoning demand for this. We have to hook them up with residents – there’s huge potential.”

Narango also believes that her results provide convincing evidence that planting native is in a bird lover’s best interests. “The trees [our color-banded chickadees] were going to were covered in warblers, tanagers, and orioles,” she says. “They’re basically telling us what these other birds want.”

Correction: This article has been updated to state that 86 percent, not 82 percent, of the country is privately owned.
What is the outcome of genetic drift between two subpopulations?

Genetic drift is a change in the frequencies of particular alleles by chance alone. Genetic drift, together with changes in gene flow imposed by isolating mechanisms, acts as an agent of speciation, and it can result in evolutionary divergence. Due to genetic drift, the variance between small subpopulations increases with time: allele frequencies fluctuate randomly in each subpopulation, and this fluctuation is more rapid and more severe in smaller subpopulations.
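To make the size dependence concrete, here is a minimal Wright-Fisher-style drift simulation sketched in C. The population sizes, generation count, and starting frequency are illustrative choices, not values from the answer above. Each generation, every gene copy is drawn at random according to the current allele frequency; rerunning the program (or varying the seed) shows the frequency wandering much further from its starting value in the small subpopulation than in the large one.

```c
#include <stdio.h>
#include <stdlib.h>

/* One Wright-Fisher generation: each of the n_copies gene copies in the
 * next generation is a copy of allele "A" with probability p (the current
 * frequency of A). Returns the new frequency of A. */
static double next_generation(double p, int n_copies) {
    int count = 0;
    for (int i = 0; i < n_copies; i++)
        if ((double)rand() / RAND_MAX < p)
            count++;
    return (double)count / n_copies;
}

int main(void) {
    srand(42);                          /* fixed seed so the run is repeatable */
    int sizes[] = {20, 2000};           /* gene copies per subpopulation (illustrative) */
    for (int s = 0; s < 2; s++) {
        double p = 0.5;                 /* both subpopulations start at 50% */
        for (int g = 0; g < 100; g++)   /* simulate 100 generations of drift */
            p = next_generation(p, sizes[s]);
        printf("subpopulation of %4d gene copies: frequency of A after 100 generations = %.3f\n",
               sizes[s], p);
    }
    return 0;
}
```

Averaged over many runs, the frequency in the 20-copy subpopulation scatters widely (often fixing at 0 or 1), while the 2000-copy subpopulation stays close to 0.5 – exactly the increase in between-subpopulation variance described above.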
Computer-assisted instruction is the use of instructional material presented by a computer. Since the advent of microcomputers in the 1970s, computer use in schools has become widespread, from primary schools through the university level and in some preschool programs. Instructional computers either present information or fill a tutorial role, testing the student for comprehension. By providing one-on-one interaction and producing immediate responses to input answers, computers allow students to demonstrate mastery and learn new material at their own pace. A disadvantage is that computerized instruction cannot extend the lesson beyond the limits of the programming.
Fuel cell cars are known for their ability to use hydrogen gas as fuel. When hydrogen is mixed with pure oxygen, it generates electricity, and the by-product is pure water. Hydrogen fuel cells are a proven technology and can be considered a truly green implementation, although that still depends on whether the oxygen and hydrogen are themselves produced by green techniques.

Hydrogen is the lightest and simplest element in the universe. It boils at -253 degrees Celsius, and it can be stored in an internal tank. When it combines with oxygen, H2O is formed, which, as we know, is the chemical composition of pure water. When oxygen and hydrogen combine, heat and electrical energy are generated. The electrical energy can be fed to the car’s drive motor. It should be noted that a single cell produces barely one volt, so a car requires a few hundred such cells stacked together to deliver enough power to propel the car. Nissan uses an auxiliary source of power, so the fuel cell stack can be supplemented when the car accelerates. The battery can also store energy recovered from the brake regeneration system. Nissan’s system is based on a co-axial motor that delivers 280 Nm and 120 PS. Early fuel cell cars could operate for 300 miles depending on energy usage, and their maximum speed could reach 150 km per hour. The efficiency and performance of fuel cell cars have continued to increase in recent years.

In general, consumers look for cars that can provide them with a reasonable driving range. In essence, hydrogen is a highly efficient source of energy with a large energy density by mass. However, its energy density by volume is poor, so fuel cell cars require strong hydrogen tanks that can withstand high pressure; this allows the fuel cell to deliver a significant amount of energy. Early hydrogen tanks could withstand only 35 MPa, but newer cars can withstand about 70 MPa. By increasing the pressure, it isn’t necessary to install larger tanks, which makes the car lighter. The hydrogen tank has an internal aluminium liner, while carbon-fiber-reinforced plastic is used on the outer layer. In general, we should make sure that the car uses high-elasticity, high-strength carbon fiber.

It should also be noted that some issues can occur when the car operates in very cold conditions. Fuel cells only work when they are humid inside. When water is no longer present inside, the electrochemical process cannot be sustained, and no more water can be produced inside the cell to keep the process going. If we leave the car outside and expose it to a snowy winter night, the water inside the cells will freeze. As a result, the car will no longer be able to move, and in some cases this can cause permanent damage to the fuel cells. However, some car makers have come up with new solutions that allow the car to start even when the cell temperature is minus 30 degrees Celsius.
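Returning to the cell-stacking point above, here is a rough, illustrative sizing calculation. The target voltage and the per-cell operating voltage are assumed values, not manufacturer figures; under load a cell delivers noticeably less than its roughly 1 V open-circuit value:

\[
N_{\text{cells}} \approx \frac{V_{\text{stack}}}{V_{\text{cell}}} = \frac{300\ \text{V}}{0.7\ \text{V}} \approx 430,
\]

which is consistent with the “few hundred stacked cells” figure quoted above.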
About Purple Milkweed: Purple milkweed can be found in woodland areas and prairies, or near streams or marshy areas. At one time, the silk from this plant’s seed pods was spun for fabric or used for stuffing pillows; in World War II, school children gathered the silk to provide a cheap filling for soldiers’ life jackets. Commercial attempts to make use of this abundant plant included the manufacture of paper, fabric, lubricant, fuel, and rubber; eventually these proved impractical and were abandoned. Though this plant is toxic to most animals, butterflies are immune to the plant’s poison and actually become rather poisonous themselves as protection from predators.

Purple Milkweed Germination: In late fall, direct sow just below the surface in full sun or partial shade and rich, moist soil. This plant also tolerates well-drained or rocky soil. Plant three seeds together every 15-18 inches. Germination will take place in the spring, after the last frost. When the seedlings appear, thin to the strongest plant; seedlings usually do not survive transplanting, since they resent any disturbance of their roots. For spring planting, mix the seeds with moist sand and refrigerate for 30 days before direct sowing.

Growing Purple Milkweed Seeds: Young plants should be watered until they become established; when grown from seed, plants may take up to three years to produce flowers. Mature plants can tolerate some drought but grow best with regular watering, especially if grown in full sun. Though not invasive, this plant will eventually spread by rhizomes and forms colonies in the wild. The flowers attract many bees and butterflies, including swallowtails, red admirals, and hairstreaks. Deer avoid this plant.

Harvesting Purple Milkweed: This makes a striking cut flower. Cut the stems long, choosing flowers that have just opened. Keep in mind that the milky sap is mildly toxic and can irritate the skin.

Saving Purple Milkweed Seeds: After the plant finishes flowering, 3-4” narrow pods will form. Be sure to harvest the pods before they split and the silky fluff carries the seeds away on the wind. As soon as the seeds inside the pod ripen to their mature brown color, remove the pods and spread them out to dry. Split open the pods and take out the silky seed material. Remove the fluff from the seeds. Store the seeds in a cool, dry place.

Detailed Purple Milkweed Info:
Origin: US Native
Duration: Perennial
Bloom Time: Summer
Height: 24-36 inches
Spacing: 15-18 inches
Light: Full Sun to Part Shade
Soil Moisture: Medium
USDA Zone: 3-9b
Seeds Per Oz: 4,800
Description: Produces a plant with narrow 4-6” pointed leaves and deep, rosy pink flower clusters.
Invasive Plants - Let's Stop These Silent Invaders

Our water, wildlife, and economy are threatened by invasive species. Travelers from faraway lands, these species have no predators here in Oregon, so they spread quickly. Native plants and animals are pushed out, and entire ecosystems and agricultural areas can be seriously degraded or destroyed.

When invasive plant species spread, they often create monocultures – areas without biodiversity, dominated by a single species. Monocultures of species such as English ivy and yellow starthistle are poor habitat for wildlife, and they can decimate rangeland used for domestic animals like cattle. Invasive plant species also reduce water quality because their roots do not trap and filter water as well as diverse native plants.

Invasive plants and animals travel by air, land, and sea. Some of the most damaging species in the United States, such as zebra mussels, are carried by ships, but many others are spread on people's shoes, clothes, and luggage. Some are brought here as garden plants, food, or household pets, but they can outcompete natives and spread at an incredible rate when they make their way to the wild. Each arrival is a new threat to the healthy, diverse tapestry of life that makes Oregon unique.

You Can Help

Remove and Report

Learn and Share

Check out SOLVE's Invasive and Native Plant Guide for information on common invasives in your area and the best native plants to replace them with. Click on your ecoregion and explore the other resources below. Find out more about the invasive and native species in your area and spread the word!

Explore More Resources

The National Wildlife Federation has a great introduction to invasive species.

Cooperative Weed Management Areas, or CWMAs, are a partnership between landowners, government agencies and local organizations working to manage and prevent the spread of invasive plants.

King County, Washington, has a great invasive species prevention program, and much of the information is relevant here in Oregon as well!

Let's Pull Together is an organization that runs annual invasive removal volunteer projects in Central and Southern Oregon.

The No Ivy League, which started in Portland's Forest Park, has good resources on how to remove invasive English ivy.

Oregon Association of Conservation Districts: Local Soil and Water Conservation Districts, including the East Multnomah and West Multnomah districts in the Portland area, are on the front lines of the fight against invasive plants and have many great resources.

The Oregon Department of Agriculture's Noxious Weed Program has good information on invasive plants, including profiles of the most common invaders.

Oregon Invasive Species Council: Read up on Oregon's 100 most dangerous invasives and get the latest information on invasive plants and animals. You can also report an invasive species sighting by filling out the online report form or calling 1-866-INVADER.

Watch OPB's award-winning documentary, The Silent Invasion.

USDA Forest Service: Invasive Plants and Animals

Garden Smart Guide

In 2008, SOLVE partnered with the Oregon Invasive Species Council, Oregon Public Broadcasting, The Nature Conservancy, and other community organizations to create more awareness about the problem of invasive plants and animals. One product of this partnership was OPB's award-winning documentary The Silent Invasion. Another was the Garden Smart Oregon guide.
This booklet, developed in association with The Nature Conservancy, Portland BES, Oregon Sea Grant and the Oregon Association of Nurseries, highlights the plants that are most likely to cause problems in our yards along with several suggested alternative non-invasive plants that are unlikely to escape into the natural environment. You can download the guide here or request a paper copy by contacting SOLVE at 503-844-9571 or [email protected].
The art created in Greece during the fifth century B.C. established the standards to which all Western art has aspired well into our own times. Indeed, the word "classical," when used either specifically or figuratively, usually refers to those ideals of beauty and proportion developed on the Greek mainland more than four hundred years before the birth of Christ. Copied by the Romans, who revered the art of their Greek subjects, and "rediscovered" during the fourteenth and fifteenth centuries in what came to be known as a "renaissance" or rebirth of classical culture, the works bequeathed us by the Greeks—or in many instances by their Roman imitators—still influence the art we make and the ideals by which we judge it.

Although the art of the ancient Greeks may be said to have reached its apogee in Athens in the fifth century B.C., it had, in fact, been developing for at least four thousand years. The Greeks settled and traded over a wide area, and eventually, under Alexander the Great, they moved into the Near East as conquerors. Thus they were able to assimilate and transform the art of many indigenous cultures. Once the Romans subjugated Greece, they, too, embarked on their own process of assimilation and transformation, on the one hand faithfully copying Greek art, and on the other, subtly transforming that art into one that more appropriately served first, republican taste, and later, imperial needs.

Greece and Rome presents the Metropolitan Museum's collections of classical art, which range from early Cycladic pieces—dating from about 2700 B.C.—to works created in Rome at the time of the conversion to Christianity of the emperor Constantine in A.D. 312. To be sure, this picture of the classical world is only a partial one. Greek painting, for example, has been largely lost to history, and certainly many of the best Greek and Roman works reside in other museums, or, in the case of architecture, still stand throughout the Mediterranean world. Yet the collections of the Metropolitan do contain many of the finest examples of Cycladic, Cypriot, Attic, East Greek, archaic, geometric, and classical Greek art as well as of the art created by the Etruscans and in republican and imperial Rome.

Among the important examples of Greek art presented in this volume are the Cycladic Harp Player, made in about 2700 B.C.; Cypriot sarcophagi from the fifth century B.C.; an Attic kouros from the sixth century B.C.; a lekythos attributed to the Amasis Painter from about 540 B.C.; the famous calyx krater by Euphronios from about 515 B.C.; Roman copies of mid-fifth-century Greek statues such as the Wounded Warrior and the Diadoumenos; and a splendid gold phiale thought to be from the fourth century B.C. Roman art is represented by examples of late republican wall painting, silver, and glass, and by portrait busts or statues of her emperors, their consorts and relatives, as well as of anonymous citizens—giving us a broad picture of the styles and attitudes favored during Rome's long history. In addition to portraiture, Roman art is represented by the famous wall paintings from Boscotrecase, architectural elements from Domitian's palace, marble funerary altars and sarcophagi, and utilitarian and luxury items in terracotta, glass, gold, and silver.
5.3 Read Aloud Teacher Set The C.I.A. units of study authored by Sarah Collinge expose all students to longer, more complex texts in instructional read-aloud. Each unit teaches a new approach to reading chapter books that motivates readers. This approach is outlined in the acronym C.I.A., which stands for collect, interpret, and apply. C.I.A. Unit of Study: Historical Fiction – Chains, 5.3 is the third unit in a series of five designed for fifth grade. Within this unit, you are provided with scripted lessons for instructional read-aloud that incorporate reading, writing, language, listening, and speaking standards. In addition, Sarah provides sample student work to use in guiding both your instruction and assessment. Look inside the guide. When you utilize all five units in your classroom, you will be exposing students to a variety of genres and making reading, writing, and social studies connections across the school year. In addition, you will be explicitly teaching Common Core State Standards. You will be amazed at your students’ level of engagement as they master grade-level standards and read quality literature.
In an attempt to mitigate the environmental impact of 3D printing, several organizations have taken to creating recycled filament, made not only from failed prints but from water bottles and other garbage. Inexpensive filament extruders are also available to allow makers to make their own filament from recyclable materials. Not only does recycled filament help the environment, but it also helps 3D printer users to save money and be more self-sufficient, making the technology more viable in remote communities. 3D printer manufacturer re:3D has been working on making their Gigabot 3D printer capable of printing with recycled materials, for the purpose of helping those in remote communities to become more self-sufficient. In a paper entitled “Fused Particle Fabrication 3-D Printing: Recycled Materials’ Optimization and Mechanical Properties,” a team of researchers used an open source prototype Gigabot X 3D printer to test and optimize recycled 3D printing materials.
Milkweed is vital for the monarch’s life cycle. It’s the only plant monarch caterpillars eat. These caterpillars hatch from eggs laid on the plant before consuming its leaves. However, not just any kind of milkweed will do. The key is this: You must plant milkweed native to your area.

The reason? Planting non-native types of milkweed risks monarch butterfly health. In many areas, non-native, tropical milkweed survives through the winter, allowing Ophryocystis elektroscirrha (OE), a parasite that can be found on monarchs and milkweed, to build up to dangerous levels. On the other hand, with native milkweed, the parasite dies with the plant in the winter, ensuring that new milkweed grows with less risk from the parasite when monarch butterflies return in the spring.

You can locate vendors near you to purchase milkweed. Remember, local vendors do not always equal local seeds -- ask about the origin of milkweed seeds and plants before you purchase them. Another option, if you have milkweed in your area, is to harvest the plant yourself. Pro tip: To harvest seeds at the right time, make sure their pods pop open under light pressure. These guides can help you identify milkweed native to your area.

When should you plant milkweed? Ideally, the best time to plant milkweed seeds is in the fall, so the cold temperatures and moisture that come with winter stimulate germination. You can also plant milkweed in the springtime. However, milkweed seeds planted in the spring need to first be put in soil or moist paper towels and placed in the fridge to simulate the effects of winter. This process is called artificial stratification. If you are starting your seeds indoors, you should begin growing the plants 4 to 8 weeks before moving them outside. No matter how long winters last in your region, just make sure to wait until after the last frost before transitioning the plants outdoors. If you are using potted milkweeds, plant them after the last frost so that they do not die before the monarch’s mating season.

You should also know where and how to plant milkweed. Best growing practices suggest milkweeds be planted in the sunniest parts of your yard or garden. If you have a choice of soil, most milkweed species thrive in light, well-drained soils with seeds planted a quarter-inch deep. Make sure you check your seed packets or ask your local nursery for special instructions on the type of milkweed you are planting, as there are some exceptions. Since milkweed is a perennial plant, you won’t need to replant it every year. You can harvest the seeds from your new plants and grow them in other parts of your yard or garden if you desire.

One final point: If you live north of Santa Barbara within 5 miles of the California coast, do not plant milkweed. Instead, plant nectar-rich flowers that match these areas’ natural vegetation and the monarch’s migration habits.

2. Grow Nectar-Rich Flowers
WHAT IS ATOPIC DERMATITIS (AD)?

Atopic dermatitis, also known as eczema, is a chronically relapsing skin disorder with an immunologic basis. The severity of atopic dermatitis ranges from mild to severe. It usually appears in newborns or very young children, but it may last until they reach adolescence or adulthood. In the most serious cases, it may affect the normal growth and development of a child. Moreover, children whose parents suffer from this skin disorder are more likely to inherit eczema. Eczema causes the skin to itch and become red and scaly. Still, a newborn rash does not necessarily mean that your baby has this skin condition.

DIFFERENT TRIGGERS CAN MAKE ATOPIC DERMATITIS (ECZEMA) WORSE, INCLUDING
- environmental stress,
- allergies, and

Atopic Dermatitis Treatment Consists Of:
- adequate skin hydration,
- avoidance of allergenic precipitants,
- topical anti-inflammatory medications,
- systemic antihistamines, and
- antibiotic coverage of secondary infections.

HOW TO PREVENT IRRITATION?

One of the most important things you can do is to prevent irritation before it happens.

- Moisturizing: Your child’s daily treatment plan must include moisturizing; moisturizer needs to be applied at least once or twice a day.
- Avoid irritants: Patients who are sensitive to abrasive fabrics or to chemicals in bath soaps and detergents should wear soft fabrics, i.e., 100% cotton clothing, and take short baths with mild, fragrance-free body cleansers.
- Ask your child to avoid scratching: Scratching the affected area makes the rash worse and leads to infection.
- Avoid certain triggers: Avoid overheating, sweating, and stress if they trigger the symptoms.
- Ask your pediatrician whether your child has allergies: Allergies to food, pets, or pollen can make eczema much worse. If an allergy is the cause of your child’s eczema, avoiding the trigger is the only solution.

Chronic Skin Disorder

Always remember that atopic dermatitis is a chronic skin disorder. Therefore, it requires ongoing supervision by you, your child, and your child’s pediatrician. If the patient’s atopic dermatitis treatment does not show improvement, discuss your concerns with your child’s pediatrician.
In this course you will define your own data types in C, and use the newly created types to more efficiently store and process your data. Many programming languages provide a number of built-in data types to store things such as integers, decimals, and characters in variables, but what if you wanted to store more complex data? Defining your own data types in C allows you to more efficiently store and process data such as a customer's name, age and other relevant data, all in one single variable! You will also gain experience with programming concepts that are foundational to any programming language. Why learn C and not another programming language? Did you know that smartphones, your car’s navigation system, robots, drones, trains, and almost all electronic devices have some C-code running under the hood? C is used in any circumstance where speed and flexibility are important, such as in embedded systems or high-performance computing. At the end of this short course, you will reach the fifth milestone of the C Programming with Linux Specialization, unlocking the door to a career in computer engineering. Your job Outlook: - Programmers, developers, engineers, managers, and related industries within scientific computing and data science; - Embedded systems such as transportation, utility networks, and aerospace; - Robotics industry and manufacturing; - IoT (Internet of Things) used in smart homes, automation, and wearables. - IEEE, the world’s largest technical professional organization for the advancement of technology, ranks C as third of the top programming languages of 2021 in demand by employers. (Source: IEEE Spectrum) This course has received financial support from the Patrick & Lina Drahi Foundation.
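As a taste of the "define your own data types" idea the course blurb describes, here is a minimal sketch in C. The Customer type, its field names, and their sizes are invented for illustration; they are not taken from the course materials:

```c
#include <stdio.h>
#include <string.h>

/* A user-defined type (struct) that groups a customer's related data
 * in one single variable, instead of separate loose variables. */
typedef struct {
    char   name[50];   /* customer's name */
    int    age;        /* customer's age in years */
    double balance;    /* account balance */
} Customer;

int main(void) {
    Customer c;                        /* one variable holds all the fields */
    strcpy(c.name, "A. Customer");
    c.age = 42;
    c.balance = 1024.50;
    printf("%s, age %d, balance %.2f\n", c.name, c.age, c.balance);
    return 0;
}
```

Grouping related fields this way also means the whole record can be passed to functions, stored in arrays, or written to files as a single unit, which is the efficiency the course description alludes to.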
What is a Mechanism?

A mechanism is a mechanical device used to transfer or transform motion, force, or energy. Traditional rigid-body mechanisms consist of rigid links connected at movable joints. Whatever machines you see in your day-to-day life have some underlying mechanisms that govern their motion to produce the desired output.

For example, a NUTCRACKER works on the principle of MECHANICAL ADVANTAGE based on the LEVER MECHANISM. This mechanism transfers energy from the input to the output. Since energy is conserved between the input and output (neglecting friction losses), the output force may be much larger than the input force, but the output displacement is much smaller than the input displacement.

As mentioned earlier, these mechanisms involve relative motion of rigid bodies, often at very high speeds, which causes various problems like frictional losses, non-linear behavior, non-uniform heat dissipation, etc. Thus modern-day scientists are working on mechanisms that more or less achieve the same motion and force transfer with the help of flexible links and linkless joints. These two terms have tremendous significance when it comes to the emerging and exciting field of COMPLIANT MECHANISMS. So let's see what these wonders are and what they are capable of.

A compliant mechanism also transfers or transforms motion, force, or energy. Unlike rigid-link mechanisms, however, compliant mechanisms gain at least some of their mobility from the deflection of flexible members rather than from movable joints only. Not all the links of a mechanism need to be flexible for it to be termed a compliant system, but some important links must be flexible. Fully compliant mechanisms are very unstable and unreliable.

Compliant mechanisms rely upon elastic deformation to perform their function of transmitting and/or transforming motion and force. From an overall perspective that considers performance, manufacturability, economy of material, scalability to micro and nano sizes, adaptability to smart actuation and embedded sensors, resistance to wear, etc., compliant mechanisms are preferable over rigid-body mechanisms.

A large number of compliant mechanisms are constructed of rigid links interconnected by flexure hinges designed to undergo lower levels of rotation than traditional revolute joints. A relatively small number of compliant mechanisms have compliant links, in addition to the flexure hinges, designed to undergo large deformation.

Currently available design techniques for compliant mechanisms can be grouped broadly into the following three categories, based on the methods used as well as the type of mechanisms created using them:

- Flexural pivot-based compliant mechanisms.
- Flexible beam-based compliant mechanisms.
- Fully compliant, elastic continua.

Flexural pivot-based compliant mechanisms

Flexural pivot-based designs use narrow sections connecting relatively rigid segments. Thus, compliance is lumped into a few portions of the mechanism. They are usually of monolithic construction. They can be systematically designed by starting either from an available rigid-body linkage or from an intuitively conceived linkage. So in this type, mainly the joints are flexible, connecting rigid links with each other. The biggest problem faced in this type is that force and moment transfer is very limited. Compliant mechanisms of this type are often restricted to a small range of motion. Their applications are in precision instrumentation and many consumer products.
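To put numbers on the nutcracker's force-displacement trade-off described at the top of this section, here is an illustrative lever calculation (the dimensions and hand force are assumed values, not taken from any real design). With the hand applied 20 cm from the hinge and the nut seated 4 cm from it,

\[
MA = \frac{l_{\text{in}}}{l_{\text{out}}} = \frac{20\ \text{cm}}{4\ \text{cm}} = 5,
\qquad
F_{\text{out}} = MA \times F_{\text{in}} = 5 \times 50\ \text{N} = 250\ \text{N},
\]

while the jaw at the nut moves only one fifth as far as the hand, so the input and output work, \(F \cdot d\), stay equal – exactly the energy-conservation argument made above.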
Flexible beam-based compliant mechanisms

Flexible beam-based compliant designs extend the range of motion because the slender, beam-like segments are designed to undergo large deformations. These are not always of monolithic construction, as they may have some rigid segments and kinematic joints; thus, they are sometimes only partially compliant. Unlike in flexural pivot-based designs, the compliance is distributed in flexible beam-based designs.

Design of Compliant Mechanisms

The first step in the design of a compliant mechanism is to establish a kinematically functional design that generates the desired output motion when subjected to prescribed input forces. This is called topological synthesis. Although the size and shape of individual elements can be optimized to a certain extent in this stage, local constraints such as stress and buckling constraints cannot be imposed while the topology is being determined. Once a feasible topology is established, performance constraints can be imposed during the following stage, in which size and shape optimization are performed. In summary, the two stages are:

- Topology synthesis – generation of a functional design in the form of a feasible topology, starting from input/output force and motion specifications.
- Size and shape optimization – to meet performance requirements such as maximum stress, motion amplification or force amplification, etc.

The general design procedure involves systematic methods of designing compliant mechanisms starting from functional specifications. The first step is deriving the topology of a compliant mechanism given the desired input forces and output displacements. The next is optimizing the size and shape of the various elements of the compliant mechanism in order to satisfy the prescribed mechanical or geometric advantage, stress constraints, size constraints, etc.

Advantages of Compliant Mechanisms

There are a number of reasons why a compliant mechanism may be considered for use in a particular application. One advantage of compliant mechanisms is the potential for a dramatic reduction in the total number of parts required to accomplish a specified task. Some mechanisms may be manufactured from an injection-moldable material and constructed as one piece. For example, consider the fully compliant crimping mechanism shown in the figure below, along with its pseudo-rigid-body model. Due to symmetry, only half the mechanism is shown. The number of components required for the compliant mechanism is considerably smaller than for the rigid mechanism. The reduction in part count may simplify manufacturing and reduce manufacturing and assembly time and cost.

It is possible to realize a significant reduction in weight by using compliant mechanisms over their rigid-body counterparts. This may be a significant factor in aerospace and other applications. Compliant mechanisms have also benefited companies by reducing the weight and shipping costs of consumer products.

The reduction in the total number of parts and joints offered by compliant mechanisms is a significant advantage in the fabrication of micromechanisms. Compliant micromechanisms may be fabricated using technology and materials similar to those used in the fabrication of integrated circuits.

Disadvantages of Compliant Mechanisms

Perhaps the largest challenge is the relative difficulty in analyzing and designing compliant mechanisms. Knowledge of mechanism analysis methods and of the deflection of flexible members is required.
The combination of the two bodies of knowledge in compliant mechanisms requires not only an understanding of both, but also an understanding of the interactions of the two in a complex system. Since many of the flexible members undergo large deflections, linearized beam equations are no longer valid. Nonlinear equations must be used that account for the geometric nonlinearities caused by large deflections.

I believe the field of compliant mechanisms is still in its nascent stage, and a lot of research and quality work needs to be done on the fabrication of proper materials and the design of flexible linkages. On the brighter side, some of the greatest and brightest minds of the country are working in this field to expand its applications. Dr. G. K. Ananthasuresh from IISc Bangalore and Dr. Anupam Saxena from IIT Kanpur are among the few who have contributed a great deal to this field. I consider myself lucky to have had the opportunity to meet both of them personally here at NITK when they visited our college last semester for a two-day workshop on Kinematics and Mechanisms conducted by Dr. Somasekhar Rao sir.

Thank You
A new study investigates the navigation capabilities of bats from birth to maturity

For the first time in history, researchers at Tel Aviv University tracked fruit bats from birth to maturity, in an attempt to understand how they navigate when flying long distances. The surprising results: fruit bats, just like humans, build a visual cognitive map of the space around them, making use of conspicuous landmarks. In this case, bat pups from Tel Aviv University came to know the city by looking for large, unique structures such as the Azrieli Towers, the Dizengoff Center, etc. The groundbreaking study was conducted by Prof. Yossi Yovel, together with students Amitai Katz, Lee Harten, Aya Goldstein and Michal Handel from the Sensory Perception and Cognition Laboratory at the Department of Zoology. The paper was published in July 2020 as the cover story of the prestigious Science Magazine.

“How animals are able to navigate over long distances is an ancient riddle,” explains Prof. Yovel. “Bats are considered world champions of navigation: they fly dozens of kilometers in just a few hours, and then come back to the starting point. For this study we used tiny GPS devices – the smallest in the world, developed by our team – in an experiment never attempted before: tracking bat pups from the moment they spread their wings until they reach maturity, in order to understand how their navigation capabilities develop. No such study has ever been conducted on any living creature, and the findings are very interesting.”

The researchers monitored 22 fruit bat pups born in a colony raised at TAU – from infancy to maturity, tracking them as they scoured the city for food. The results show that Tel Aviv bats navigate the space around them in much the same way as the city’s human inhabitants.

“Bats use their sonar to navigate over short distances – near a tree, for example,” says Prof. Yovel. “The sonar doesn’t work over greater distances. For these, fruit bats use their vision. Altogether we mapped about 2,000 bat flight-nights in Tel Aviv. We found that bats construct a mental map: they learn to identify and use salient visual landmarks such as the Azrieli Towers, the Reading Power Station and other distinct features that serve as visual indicators. The most distinct proof of this map lies in their ability to perform shortcuts. Like humans, bats at some stage get from one point to another via direct new routes not previously taken. Since we knew the flight history of each bat since infancy, we could always tell when a specific bat took a certain shortcut for the first time. We discovered that when taking new, unknown routes the bats flew above the buildings. Sending up drones to the altitude and location where a bat had been observed, we found that the city’s towers were clearly visible from this high angle. Here is another amazing example of how animals make use of manmade features.”
On September 19, 1861, a steamboat caught fire and sank in the Gulf of Mexico, two nautical miles from the Yucatán port town of Sisal. There were dozens of confirmed casualties, passengers and crew alike. But the full death toll will likely never be known, because the enslaved Indigenous Maya people held on the ship were never counted in the first place—they were simply listed as cargo. Archaeologists from Mexico’s Sub-Directorate of Underwater Archaeology (Subdirección de Arqueología Subacuática, or SAS) announced recently that they’d identified the underwater remains of this ship, La Unión. Between 1855 and 1861, the Havana-based vessel brought, on average, 25 to 30 enslaved Maya people from Mexico to Cuba every month. The enslaved persons were then sold upon arrival in Havana. The shipwreck was first found in 2017, after researchers found an 1861 document in the Yucatán state archive describing the fire and the approximate spot where it occurred. Local fishermen, who had heard about the wreck in oral retellings, also helped guide the researchers toward the search area. In tribute to one of these fishermen, the researchers temporarily named the shipwreck “Adalio,” after his grandfather. While it was clear that the team had something significant on their hands, it took three years of interdisciplinary research to confirm that “Adalio” was, in fact, La Unión. It is now the first ship ever discovered known to have carried enslaved Maya people. Helena Barba Meinecke, director of the Yucatán Peninsula division of the SAS, outlined her team’s research process in an email. One key clue that “Adalio” might really be La Unión was that its technological and skeletal components—the propulsion machine, boiler, axles, paddle wheels, and chimney—dated to the first era of steamboat technology (1837–1860), and La Unión began operating in 1855. In addition, the archaeologists found that the ship’s boilers had exploded and that its wood had been damaged by fire. Perhaps most importantly, the location of the wreck matched what was reported in contemporary accounts and documentation. Perhaps the eeriest find, however, was brass cutlery used by first-class passengers on La Unión, who would have been unaware of the enslaved people on board. The cutlery was also branded with the name of the shipping company that owned La Unión. The enslavement of Mexico’s Indigenous population began during the so-called Caste War of Yucatán, a long-running conflict that lasted from 1847 to 1901. Promised, and then denied, tax relief in exchange for military service—as they saw private estates rise throughout formerly public lands—Maya communities on the Yucatán Peninsula rebelled against Mexico’s European-descended government, and sustained enormous casualties in the process. According to the University of North Carolina, the combination of death and desertion cut the peninsula’s population in half within just a few years, by 1850. In a brutal 1848 decree, Meinecke writes, the Yucatán governor ordered the expulsion of all Maya captured in combat. They would be deported to Cuba, still a Spanish colony at the time, to toil in the island’s sugarcane plantations. It was irrelevant to these officials that Mexico had officially abolished slavery in 1829. Indeed, one illegal tactic put to use during the Caste War was the deployment of enganchadores. 
Sent with fraudulent documents into Maya communities ravaged by the violence, these kidnappers led people to believe that they would be settled on uninhabited Cuban land and live as farmers—though their true destination was a life of slavery. As late as October 30, 1860, La Unión was actually caught at sea carrying 29 enslaved Maya, including children as young as seven years old. Even this, however, failed to stop the trade. It wasn’t until the fire of September 1861, four months after President Benito Juárez issued a decree against further kidnappings, that the government crackdown became sufficient to prevent the deportations, even if the violence would continue in Mexico for decades to come. Like other researchers of slavery, Meinecke points out a major gap in the otherwise rich historical record: In most cases, the identities of those who were enslaved remain unknown. At the same time, she writes, Maya descendants have been identified in various locales throughout Cuba, including Havana, Camagüey, and Pinar del Río, to name a few places. Meinecke is hopeful that continued engagement with these descendants, and the recording of their oral histories, might one day reveal just who their ancestors were.
By comparing different types of remote atomic clocks, physicists at the National Institute of Standards and Technology (NIST) have performed the most accurate test ever of a key principle underlying Albert Einstein’s famous theory of general relativity, which describes how gravity relates to space and time.

The NIST result, made possible by continual improvements in the world’s most accurate atomic clocks, yields a record-low, exceedingly small value for a quantity that Einstein predicted to be zero.

As described in a Nature Physics paper posted online June 4, NIST researchers used the solar system as a laboratory for testing Einstein’s thought experiment involving Earth as a freefalling elevator. Einstein theorized that all objects located in such an elevator would accelerate at the same rate, as if they were in a uniform gravitational field – or no gravity at all. Moreover, he predicted, these objects’ properties relative to each other would remain constant during the elevator’s free-fall.

In their experiment, the NIST team regarded Earth as an elevator falling through the Sun’s gravitational field. They compared recorded data on the “ticks” of two types of atomic clocks located around the world to show they remained in sync over 14 years, even as the gravitational pull on the elevator varied during the Earth’s slightly off-kilter orbit around the Sun. Researchers compared data from 1999 to 2014 for a total of 12 clocks – four hydrogen masers (microwave lasers) in the NIST time scale along with eight of the most accurate cesium fountain atomic clocks operated by metrology laboratories in the United States, the United Kingdom, France, Germany and Italy.

The experiment was designed to test a prediction of general relativity, the principle of local position invariance (LPI), which holds that in a falling elevator, measures of nongravitational effects are independent of time and place. One such measurement compares the frequencies of electromagnetic radiation from atomic clocks at different locations. The researchers constrained the violation of LPI to a value of 0.00000022 plus or minus 0.00000025 – that is, (2.2 ± 2.5) × 10⁻⁷ – the most minuscule number yet, consistent with general relativity’s predicted result of zero and corresponding to no violation. This means the ratio of hydrogen to cesium frequencies remained the same as the clocks moved together in the falling elevator.

This result has five times less uncertainty than NIST’s best previous measurement of the LPI violation, translating to five times greater sensitivity. That earlier 2007 result, from a 7-year comparison of cesium and hydrogen atomic clocks, was 20 times more sensitive than the previous tests.

The latest measurement advance is due to improvements in several areas, namely more accurate cesium fountain atomic clocks, better time transfer processes (which enable devices at different locations to compare their time signals), and the latest data for computing the position and velocity of Earth in space, NIST’s Bijunath Patla said.

“But the main reason we did this work was to highlight how atomic clocks are used to test fundamental physics; in particular, the foundations of general relativity,” Patla said. “This is the claim made most often when clockmakers strive for better stability and accuracy.
We tie together tests of general relativity with atomic clocks, note the limitations of the current generation of clocks, and present a future outlook for how next-generation clocks will become very relevant.” Further limits on LPI are unlikely to be obtained using hydrogen and cesium clocks, the researchers say, but experimental next-generation clocks based on optical frequencies, which are much higher than the frequencies of hydrogen and cesium clocks, could offer much more sensitive results. NIST already operates a variety of these clocks based on atoms such as ytterbium and strontium. Because so many scientific theories and calculations are intertwined, NIST researchers used their new value for the LPI violation to calculate variations in several fundamental “constants” of nature, physical quantities thought to be universal and widely used in physics. Their results for the light quark mass were the best ever, while results for the fine structure constant agreed with previously reported values for any pair of atoms. The work was funded in part by the National Aeronautics and Space Administration. Paper: N. Ashby, T.E. Parker and B.R. Patla. 2018. A null test of general relativity based on a long-term comparison of atomic transition frequencies. Nature Physics. June 4. Advance Online Publication. DOI: 10.1038/s41567-018-0156-2
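For readers who want the quantitative shape of the test described above, one standard parameterization (this is the usual textbook form, not a formula quoted from the NIST paper) writes the frequency ratio of two co-located clocks of different types as

\[
\frac{\nu_{\mathrm{H}}(t)}{\nu_{\mathrm{Cs}}(t)} \;\propto\; 1 + \left(\beta_{\mathrm{Cs}} - \beta_{\mathrm{H}}\right)\frac{\Delta U(t)}{c^{2}},
\]

where \(\Delta U(t)\) is the changing solar gravitational potential sampled by the Earth along its slightly eccentric orbit and \(c\) is the speed of light. LPI requires the clock-dependent parameters \(\beta\) to vanish, so the record-low number quoted above can be read as a bound on the difference \(\beta_{\mathrm{Cs}} - \beta_{\mathrm{H}}\).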
Anteaters, Sloths and Armadillos

Of the 21 species of armadillo, the largest is the giant armadillo, which is 91.5 cm (3 ft) in length. It has up to 100 peg-like teeth – twice as many as most mammals – which are shed when the animal reaches adulthood. The smallest species, the fairy armadillo, is less than 15 cm (6 in) long. Armadillos give birth to up to four young. The nine-banded armadillo, from North America, gives birth to quadruplets of the same sex.

Armadillos are encased in body armour formed by separate plates made of bone. Soft skin links the plates together, giving them flexibility. In most species the plates cover only the upper part of the body. If threatened, some species, such as the three-banded armadillo, roll into a ball, while others make for their burrow or dig themselves into the ground. Armadillos have large curved claws. They use them to dig into the ground to make burrows, to escape predators and to find food. The giant armadillo’s middle claw is the largest claw in the animal kingdom, measuring 18 cm (7 in) around the curve.

There are four species of anteater. The giant anteater lives in grasslands. The other three species live in forests and have prehensile (grasping) tails with which they hang from trees. Anteaters have long snouts and tongues that enable them to collect the termites and ants on which they feed. They locate their prey with their acute sense of smell. Their fore claws are so large that they need to walk on their knuckles. The claws are used to break open termite nests and for defense. If threatened, anteaters rear up on their hind legs and try to rip their opponent with their claws. A female anteater gives birth to a single young. The young anteater travels on its mother’s back for the first year of its life, by which time it is almost half the size of its mother.

Scientific Name: Myrmecophaga tridactyla

Adapted to living upside down, sloths hang by their claws from the branches of trees. They can rotate their heads through a 270-degree angle, allowing them to keep their head upright while their body remains inverted. They eat, mate, give birth and spend their entire life-cycle upside down. Sloths’ hair lies in the opposite direction from other animals’ to allow rain to run off. Only when asleep do they adopt a more normal position, by squatting in the fork of a tree. There are seven species of sloth. All are herbivorous.

Sloths are very slow movers. They rarely descend to the ground, as they can only just stand but cannot walk; they drag themselves along with their claws. In water, though, they are good swimmers. Due to the high humidity levels in the rainforest, infestations of green algae grow within a sloth’s fur and cover its coat. This acts as camouflage and makes the sloth less conspicuous. As the seasons change, the algae change color to match the color of the trees.

There are seven species of pangolin, or scaly anteater. They have much in common with the edentates, but they belong to a different order called the Pholidota. They are covered with scales attached to the skin. Some species have a long, prehensile tail that is used to grasp branches and also to lash out at predators. They feed on termites, ants and larvae, which they catch with their long tongues.
This course is for high school, college (pre-university), and university (undergraduate and postgraduate) students who wish to improve their understanding of moles and molarity.

On this course we will look at moles (not the animal) and at their relationship to molarity and Avogadro's constant. We will also look at how to calculate the number of moles in a solution, the molarity of a solution, and how many grams of a compound we would need to make up a solution of a given molarity. An understanding of these subjects is crucial for any scientist who is going to be working in a lab.

I have over 16 years of experience teaching on undergraduate biomedical science degrees, and I have worked as a research bioscientist for over 25 years. Every day I perform the types of calculations covered in the courses offered at Math4Biosciences, and through my teaching I know how, and where, students struggle with the maths they need to do to pass their classes and to carry out their research.

Course contents:
- Introduction to Molarity (1:27)
- Molarity – M, mM, µM, and nM (2:22)
- Molarity – Worked Examples – Introduction (1:31)
- Molarity Calculations – Example 1 (0:49)
- Molarity Calculations – Example 2
- Molarity Calculations – Example 3
- Molarity Calculations – Example 4
- Molarity Calculations – Example 5
- Molarity Calculations – Example 6
- Molarity Calculations – Example 7

Frequently Asked Questions

I'm Nick and I put this course together...

When I was at school, and at university, I struggled with some of what I called the 'chemistry maths', that is, the sort of maths you have to do to work out moles and molarity, dilutions, and percentage solutions. These concepts, and calculations, I just didn't find easy. I couldn't see the point, I couldn't grasp their importance, and at times I thought my teachers and lecturers were just torturing me with these calculations for their amusement.

What really helped me was a book, now long out of print, that covered all the basics (and some of the not so basic) that I needed for the courses I was taking. It put a lot of what I was trying to do in context, and helped me through the tricky maths with worked examples and exercises.

I really didn't get to grips with 'chemistry maths' until I was working as a research scientist. Then it all suddenly became clear. If you get the maths wrong, then the solutions in your experiment are wrong, and nothing works. It's important.

When I started teaching I found that students were still struggling with 'chemistry maths', and over the years I have delivered lectures and labs in which we have gone over the maths, and I have even posted numerous (hundreds of) maths questions online for my students to try. However, what was missing was a book like the one I had when I was a student. So I decided to write one.... It never saw the light of day, and I realised that a book was not the way to go, as it wasn't interactive and wasn't much of an improvement on what I had 30 years earlier. Hence this course, and Maths4Biosciences, came about....

This course, moles and molarity, forms the basic cornerstone of a lot of biology and chemistry, and so it is an important subject; hopefully this course will help you understand the key concepts and calculations.
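As a flavour of the calculations the course covers, here is one worked example (the compound and target values are my own illustrative choices, not taken from the lessons). To prepare 250 mL of a 0.5 M solution of NaCl (molar mass 58.44 g/mol):

\[
m = C \times V \times M_r = 0.5\ \text{mol L}^{-1} \times 0.250\ \text{L} \times 58.44\ \text{g mol}^{-1} \approx 7.3\ \text{g}.
\]

So you would weigh out about 7.3 g of NaCl and dissolve it in water to a final volume of 250 mL.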
Overview of the Civil War

The reign of Charles I, beginning in 1625, deteriorated into civil war and regicide. But the republic set up in his place was ousted by military rule under Oliver Cromwell. Then in 1660 the monarchy was restored under Charles II.

- By June 1649 England was a Commonwealth. What had happened to the King and House of Lords by that time?
- Discover how Parliament won the first civil war against the king despite being torn apart by its own internal divisions.
- The Long Parliament wanted to dismantle the structures of Personal Rule. What measures, some of them drastic, did they take?
- A statue of Oliver Cromwell stands outside the Houses of Parliament. Was he really the great defender of parliamentary rule?
- The King and Parliament had been arguing for months. But what tipped the country over into civil war in 1642?
- Charles II was restored to the throne. What happened to bring this change about?
- Discover more about how historians have interpreted the events of the Civil War over the centuries.
- Learn about the Interregnum reforms to the Commons, many of which were not seen again until the 19th century.
- Learn about the first two experiments in republican rule in England and Oliver Cromwell's military coup.
- Learn how relations between Charles I and Parliament started off badly in the first few years of his reign.
Influenza, commonly known as “the flu”, is an infectious disease caused by the influenza virus. Symptoms can be mild to severe. The most common symptoms include: a high fever, runny nose, sore throat, muscle pains, headache, coughing, and feeling tired. These symptoms typically begin two days after exposure to the virus and most last less than a week. The cough, however, may last for more than two weeks. In children there may be nausea and vomiting, but these are not common in adults. Nausea and vomiting occur more commonly in the unrelated infection gastroenteritis, which is sometimes inaccurately referred to as “stomach flu” or “24-hour flu”. Complications of influenza may include viral pneumonia, secondary bacterial pneumonia, sinus infections, and worsening of previous health problems such as asthma or heart failure.

Usually, the virus is spread through the air from coughs or sneezes. This is believed to occur mostly over relatively short distances. It can also be spread by touching surfaces contaminated by the virus and then touching the mouth or eyes. A person may be infectious to others both before and during the time they are sick. The infection may be confirmed by testing the throat, sputum, or nose for the virus.

Influenza spreads around the world in a yearly outbreak, resulting in about three to five million cases of severe illness and about 250,000 to 500,000 deaths. In the northern and southern parts of the world, outbreaks occur mainly in winter, while in areas around the equator outbreaks may occur at any time of the year. Death occurs mostly in the young, the old and those with other health problems. Larger outbreaks known as pandemics are less frequent. In the 20th century three influenza pandemics occurred: Spanish influenza in 1918, Asian influenza in 1957, and Hong Kong influenza in 1968, each resulting in more than a million deaths. The World Health Organization declared an outbreak of a new type of influenza A/H1N1 to be a pandemic in June of 2009. Influenza may also affect other animals, including pigs, horses and birds.

Frequent hand washing reduces the risk of infection because the virus is inactivated by soap. Wearing a surgical mask is also useful. Yearly vaccination against influenza is recommended by the World Health Organization for those at high risk. The vaccine is usually effective against three or four types of influenza. It is usually well tolerated. A vaccine made for one year may not be useful in the following year, since the virus evolves rapidly. Antiviral drugs such as the neuraminidase inhibitor oseltamivir, among others, have been used to treat influenza. Their benefits in those who are otherwise healthy do not appear to be greater than their risks. No benefit has been found in those with other health problems.
Oracy is the ability to speak clearly and confidently, adapting tone and style to suit the needs of the audience. It is a key skill in the wider world and prepares children for the future world of interviews and interactions. In our school, a key focus of oracy is precision of language. Children are expected to speak in full sentences and use Standard English. Filler phrases such as 'like', 'basically' and 'literally' add no value to sentences and are actively discouraged. Class discussions are a daily feature, enabling children to practise making concise points and summaries. Children are supported in structuring their responses through 'stem sentences' (e.g. "I think that… because…"). In addition, attention to volume and body language enables children to engage with others: eye contact, straight backs, and hands (or objects) kept away from faces mean children speak clearly and project their voices. In Upper Key Stage 2, these skills are practised and demonstrated through debating. Children are taught how to run, participate in and engage with lively and interactive debates. They learn to listen to and share differing opinions in a safe environment. Debates also promote our school rules, "We are good listeners" and "We are respectful", and our British Value of tolerance. In Year 6, children have the opportunity to take part in a debating tournament with other schools, run by Latymer Upper School.
British economic and political interest in India began in the 17th century, when the East India Company established trading posts there. Eventually the British took full control of Indian political and economic affairs, acting more as governors than traders on the Indian sub-continent. This had an effect on trading, culture and government affairs in India.
Beginnings of Imperialism
At first, the ruling Mughal Dynasty in India was able to keep the traders under close scrutiny. Beginning around 1707, however, the dynasty collapsed into dozens of small states. In 1757, the East India Company defeated Indian troops at the Battle of Plassey. The East India Company became the foremost power in India, and India became the "crown jewel of the British Empire." India became increasingly valuable to British interests after a railway network was built there. The railroads were used to transport raw products from the inner parts of the Indian sub-continent to the ports, and manufactured goods made at the ports were transported back to the inner zones. Nearly all the materials used in manufacturing were produced on plantations, including tea, cotton, opium and coffee. In particular, the British shipped opium to China in exchange for tea that was sold in England. The trade goods had an enormous impact on Indian politics. The Crimean War of the 1850s, for example, cut off supplies of Russian exports to Scotland; in turn, exports from the Indian province of Bengal increased. The U.S. Civil War boosted cotton production in India. The British further replaced India's political aristocracy with a bureaucratic military apparatus adept at maintaining law and order. This led to a reduction in fiscal overheads, leaving a larger share of national product available to the British while simultaneously stripping self-governance rights and natural products from the Indian people. As the economy grew, so too did Indian infrastructure. However, the British held most of the political and economic power, and they used it to restrict Indian-owned industries, including cotton textiles. This led to a loss of self-sufficiency for many locals and, in the late 1800s, India experienced a severe famine. Beyond economic concerns, the British had a more-or-less hands-off policy when it came to religious and social customs in India. However, the number of British missionaries increased during the imperial era, with hopes of spreading Western Christianity. Many of the British officials working in India were racist, which poisoned the political climate. Indians who worked with British officials for administrative purposes were often portrayed by the British as disloyal or deceitful to their Indian brethren. Resentment against the British mounted in the mid-1800s. In southern India, for example, the British and the French allied with opposing political factions to extract Indian goods for their respective domestic uses. A strong sense of nationalism began to take hold. In 1857, Indian soldiers -- called sepoys -- came to believe that the cartridges of their rifles were greased with pork and beef fat. This mattered because to use the cartridges, the user had to bite off the ends -- a religious concern for Hindu and Muslim sepoys, who were forbidden to eat these meats. This led to the Sepoy Mutiny, when 85 soldiers refused to use the cartridges. The soldiers were jailed by the British, and on May 10, 1857 the sepoys marched to Delhi. Once there, they were joined by other soldiers and eventually they captured the city.
Decline and Political Regeneration
The Sepoy Mutiny spread to much of northern India, sparking an intense battle between British forces and the Indian soldiers. It took the East India Company more than a year to regain control. However, the event weakened Britain's political position. Growing nationalism led to the founding of the Indian National Congress in 1885 and then the Muslim League in 1906. Both groups called for self-government. During the 1930s, the British slowly enacted legal changes and the Indian National Congress began to win many political victories. Among those campaigning for Indian nationalism was Gandhi, a civil rights leader who advocated non-violent civil disobedience. India finally gained full independence from the United Kingdom in 1947 and became a republic in 1950; the design of the Indian constitution and the parliamentary system of government was influenced by Great Britain. To this day, India remains part of the Commonwealth.
MOVIE: "GLOBAL DIMMING"
When we look at the increase in global temperatures, there is always that conspicuous absence of warming from the 1940s to the 1970s. Scientists have since discovered that there was a decline in sunlight reaching the earth during this period; they called it "Global Dimming". Since there is no direct correlation with CO2 during this period, some climate deniers use this data to refute the impact of CO2 on global temperature. However, many factors affect warming, including changes in incoming solar radiation and the reflective properties of sulfate aerosols that are released with the burning of fossil fuels. Only by combining all these factors can we account for the rise in global temperature across the different decades – and even the lack of warming in the mid-20th century. This movie provides a detailed explanation of how the release of sulfate aerosols reduced incoming solar radiation and ironically kept the globe from warming. It wasn't until the Clean Air Act of 1970 that global warming resumed. Even today, it is not clear how much the current pollutant haze may be offsetting the total warming that would ensue in the absence of air pollution. Hence, we need to tackle both the release of CO2, which causes global warming, and pollutant emissions – both of which result from the burning of fossil fuels. Watch the movie "Global Dimming" and read the following arguments at skepticalscience.com as to why there was no mid-20th-century warming. https://www.skepticalscience.com/global-cooling-mid-20th-century-advanced.htm Combine what you learn from this movie with the arguments to create a Personal Movie Discussion Board Essay/Blog for others to read. Use separate paragraphs for each item and keep your responses organized. As part of your essay/blog, include the following:
- Describe four different data sets from the movie that led scientists to the discovery of global dimming, and explain how the Clean Air Act helped to eliminate this dimming.
- Explain how the 3-day period after 9/11 and the related absence of airplane contrails were a factor in the discovery of global dimming. Also describe how researchers determined the temperature change during this 3-day period.
- Explain how the global radiation budget is affected by sulfate aerosols compared to CO2. In your response, describe how CO2 affects outgoing infrared radiation, describe how sulfate aerosols affect incoming solar radiation, and explain what happens when we reduce pollution emissions and airplane contrails.
- Discuss the three most interesting things that struck you while watching the film. Explore your thoughts on these three things in depth. (100-200 words total minimum)
NO PLAGIARISM PLEASE! TurnItIn checker is used.
Sexual orientation, an enduring emotional, romantic, sexual or affectional attraction to another person, exists along a continuum from exclusive homosexuality to exclusive heterosexuality and includes various forms of bisexuality. Independent of sexual orientation are persons identified as transgender, meaning their gender identity (male, female, transgender, neither or both) or gender expression does not match their assigned sex. Regardless of these differences in definition, persons with these distinctions (often referred to collectively by the acronym LGBT, i.e., lesbian, gay, bisexual, transgender) are often subject to the same violations of their human rights. Sexual orientation is a relatively recent notion in human rights law and practice and a highly controversial subject in politics. Lesbians, gays and bisexuals do not claim any "special" or "additional" rights, only the same rights as those of all other persons. The main principles guiding the rights approach to sexual orientation relate to equality and non-discrimination. While the human rights legal framework does not refer directly to discrimination based on sexual orientation, it does prohibit discrimination on the basis of sex. In 1993 the UN Commission on Human Rights declared that the prohibition against sex discrimination in the International Covenant on Civil and Political Rights (ICCPR) includes discrimination on the basis of sexual preference.
Rights at Stake
Lesbian, gay, bisexual and transgender (LGBT) persons are frequently denied – either by law or by practice – basic civil, political, social and economic rights. The following violations have been documented in all parts of the world:
- Equality in rights and before the law: through special criminal provisions or practices based on sexual orientation, many countries deny lesbians, gays and bisexuals equality in rights and before the law. Laws often maintain a higher age of consent for same-sex relations than for opposite-sex relations.
- The right to non-discrimination and freedom from violence: this is usually denied by the omission of sexual orientation from anti-discrimination laws and constitutional provisions, or from their enforcement.
- The right to life: in some states the death penalty is applicable for sodomy and other same-sex behaviours.
- The right to be free from torture or cruel, inhuman or degrading treatment: police practices in investigations or detention often violate this right in the case of lesbians, gays, bisexuals and transgender persons.
- Arbitrary arrest: in a number of countries, individuals suspected of having a homosexual or bisexual identity are subject to arbitrary arrest.
- The freedom of movement: in many countries same-sex relationships are not recognized and partners are denied entry.
- The right to a fair trial is often affected by the prejudices of judges and other law enforcement officials.
- The right to privacy is denied by the existence of "sodomy laws" applicable to lesbians, gays, bisexuals and transgender persons, even where the relation is in private between consenting adults.
- The rights to free expression and free association may be denied explicitly by law, or denied in practice because of a homophobic social climate.
- The practice of religion is usually restricted, especially in the case of religions advocating against homosexuality.
- The right to work is the most affected of the economic rights, with many lesbians, gays, bisexuals and transgender persons being fired because of their sexual orientation or discriminated against in employment policies and practices.
- The rights to social security, assistance and benefits – and, as a result, the right to an adequate standard of living – are affected when, for example, persons have to disclose the identity of their spouse.
- The right to physical and mental health is in conflict with discriminatory policies and practices, some physicians' homophobia, the lack of adequate training for health care personnel regarding sexual orientation issues, and the general assumption that patients are heterosexual.
- The right to form a family is denied by governments that do not recognize same-sex families and deny them the rights otherwise granted by the state to heterosexual families. Children can also be denied protection against separation from parents based on a parent's sexual orientation. In some countries lesbian, gay and bisexual couples and individuals are not allowed to adopt a child, even the child of their same-sex partner.
- The right to education: lesbian, gay, bisexual and transgender students may be denied education because of an unsafe climate created by peers or educators in schools.
The core legal obligations of States with respect to protecting the human rights of LGBT people include obligations to:
- Protect individuals from homophobic and transphobic violence.
- Prevent torture and cruel, inhuman and degrading treatment.
- Repeal laws criminalizing homosexuality.
- Prohibit discrimination based on sexual orientation and gender identity.
- Safeguard freedom of expression, association and peaceful assembly for all LGBT people.
Related Human Rights Documents
- International Covenant on Civil and Political Rights (ICCPR, 1966)
Voltage references are electronic devices that provide a fixed output voltage. They provide a constant voltage regardless of external factors. This includes the load on the device, temperature or fluctuations in the power supply. Learn more in our Complete Guide to Voltage References. Voltage references have different characteristics depending on their purpose. Voltage references used in laboratory applications are designed to have extremely high precision and accuracy, whereas those used as regulators for computer power supplies are much cheaper but less precise. There are many different kinds of voltage references, generally categorised by type, tolerance, rated voltage, reference voltage and rated current. The most common tolerances are ±2%, ±1% and ±0.5%, but voltage references are available even up to ±40%. Voltage reference ICs come in a standard semiconductor package, such as PDIP, SO, SOIC and SOT-23. The pin count may also be combined with the package type, for example SO-8. Voltage references can be used for a variety of different precision measurement and control systems such as power supplies in personal computers and analogue-to-digital converters. They are also used in scientific applications and in medical equipment where voltage variations need to be tested regularly. Voltage references are also used in battery-powered devices.
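To make the tolerance figures concrete, here is a minimal sketch (not from the source) of what a tolerance rating implies about the band in which a reference's output may fall; the function name and the 2.5 V example part are hypothetical.

```python
# Minimal sketch: the output band implied by a voltage reference's tolerance.
# The 2.5 V nominal value and function name are illustrative assumptions.
def output_band(nominal_v: float, tolerance_pct: float) -> tuple[float, float]:
    """Return the (min, max) output voltage for a given percentage tolerance."""
    delta = nominal_v * tolerance_pct / 100.0
    return nominal_v - delta, nominal_v + delta

# A hypothetical 2.5 V reference at the common +/-0.5% grade:
low, high = output_band(2.5, 0.5)
print(f"{low:.4f} V to {high:.4f} V")  # 2.4875 V to 2.5125 V
```

The same arithmetic explains why laboratory-grade parts are specified so tightly: at ±0.5% a 2.5 V reference may already sit 12.5 mV off nominal, which would swamp a precision measurement.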
“The ‘boundary layer’ is everything.”
New RJL Hawke Post-Doctoral Fellow, Dr Bishakhdatta Gayen, is referring to the millimetre-thick boundary where salt and heat from the ocean meet the base of the ice shelf around the Antarctic continent. Salt, in direct contact with the basal ice surface, triggers ice melt by lowering the freezing point of water — just as salt on the road on a cold winter’s day melts ice. The fresh meltwater is often unstable and turbulent, producing tiny eddies (whirlpools of water) within the boundary layer, which are thought to boost melting by transferring heat to the ice surface from the surrounding ocean. These complex boundary layer dynamics and heat transfer processes are the focus of Dr Gayen’s research over the next two years, with the aim of developing a simple mathematical relationship between melt rate and turbulence, which can be scaled up to represent ocean-wide processes in models. “In ocean models the boundary layer is poorly represented and errors in this representation propagate all the way up to the measurement of ice melting and sea level change,” Dr Gayen said. “If you really want to know how the ocean affects ice sheet melting you have to understand it at the millimetre scale of the boundary layer, because this is the layer that affects melting first. If you can resolve this smallest scale, then the larger ones will automatically be correct.” To resolve this small scale Dr Gayen will attempt to define the boundary layer — the physics that create it and its physical characteristics, including thickness. To do this he will use a supercomputer to solve a series of mathematical equations that describe the movement of fluid, heat and salinity, building mathematical relationships between these characteristics that can then be used to predict ice melt under defined conditions. Data generated by these ice melt simulations will be validated by experiments in the ANU’s Geophysical Fluid Dynamics laboratory. Within a tank in the laboratory, a large block of ice will be placed in contact with salty water. The temperature, salinity and movement of the water can then be modified to test their effect on the melt rate of the ice, and the results compared to the supercomputer predictions. From this, Dr Gayen aims to develop a “parameterisation” — a mathematical model representing the boundary layer. “This parameterisation will be able to be scaled up in ocean models, to predict the rate of ice melt under different environmental conditions,” he said. Dr Gayen will also test his parameterisation under different conditions that further drive ice melt. These include the slope of the ice shelf, and the effect of tides, currents, upwelling water and internal ocean waves (waves that oscillate horizontally and vertically within water masses). Finally, Dr Gayen will be able to validate his parameterisation of the boundary layer by assessing it against field observations obtained on the Amery Ice Shelf and elsewhere in Antarctica. On the Amery Ice Shelf, for example, scientists from the Australian Antarctic Division and the Antarctic Climate and Ecosystems Cooperative Research Centre have run a multi-year drilling project, deploying instruments through boreholes in the ice to measure changes in ocean temperature, salinity, water movement and the melting of ice beneath the shelf (Australian Antarctic Magazine 31: 18–19, 2016). “The Amery drilling program has measured the properties of the boundary layer next to the ice face.
So we can feed those conditions and my parameterisation into ocean models and evolve them,” Dr Gayen said. “This work will provide a knowledge base for improvements in the representation of Antarctic processes in ocean models. This in turn will lead to more accurate projections of future Antarctic ice melt and sea level changes.”
Australian Antarctic Division
*The RJL Hawke Postdoctoral Fellowship was named in honour of former Australian Prime Minister Bob Hawke, acknowledging his contribution to protecting the Antarctic environment. The fellowship is awarded on the basis of scientific excellence to early-career doctoral graduates to pursue policy-relevant science aligned with the Australian Antarctic Science Plan.
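As a rough illustration of what a "parameterisation" amounts to, here is a toy sketch (my own, not Dr Gayen's model): a simple function mapping two boundary-layer quantities, turbulence strength and the water's temperature excess above its local freezing point, to a melt rate. The functional form and the coefficient are assumptions for illustration only.

```python
# Toy sketch of a melt-rate parameterisation (illustrative only; not the
# actual model under development). Melt is taken to scale with turbulent
# heat transfer: turbulence strength times thermal driving.
def melt_rate(friction_velocity: float, thermal_driving: float,
              transfer_coeff: float = 1e-2) -> float:
    """Melt rate (arbitrary units) from boundary-layer conditions.

    friction_velocity: a measure of turbulence at the ice base (m/s)
    thermal_driving:   water temperature minus local freezing point (deg C)
    transfer_coeff:    invented tuning constant standing in for the physics
    """
    return transfer_coeff * friction_velocity * thermal_driving

# In this toy model, doubling the turbulence doubles the predicted melt:
print(melt_rate(friction_velocity=0.005, thermal_driving=0.5))
print(melt_rate(friction_velocity=0.010, thermal_driving=0.5))
```

The point of the research is precisely to replace invented constants like `transfer_coeff` with relationships derived from millimetre-scale simulation and tank experiments.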
Welcome to Shark Fact Friday, a (mostly) weekly blog post all about unique sharks and what makes them so awesome. This week’s post is about a new genetics technique that could revolutionize shark science. One of the basic questions of most shark science and all shark management is where different shark species actually live. Scientists can answer this question in several ways: they can catch and document sharks using longlines or gillnets, deploy underwater cameras, conduct visual surveys by scuba diving or snorkeling, or rely on data from fishermen. However, these techniques have one thing in common: they require observing sharks in the water. What if I told you that there is a new technique that identifies which sharks are in the water at a given location, without ever seeing them? Sounds like magic, right? It sure seems like magic, but it’s just science and genetics at work! Thanks to TV crime shows, most people are familiar with how to get DNA from a human – swabbing a cheek or pulling DNA from a strand of hair or skin. For a shark, DNA analysis normally requires taking a tiny bit of a fin. But this new technique, called environmental DNA or eDNA, skips that step entirely. Instead, it relies on DNA in the water. Fish naturally shed DNA fragments into their environment in the form of scales, poop, skin and bodily fluids. These fragments usually degrade in a matter of days, so if scientists collect water and sequence the DNA in it, they can get a snapshot of what critters were in that environment recently. And that’s exactly how eDNA works! A recent study successfully used eDNA to identify at least 21 shark species across the Caribbean and the Pacific. The scientists sampled different types of locations, ranging from protected areas and remote islands where sharks are left alone to places where lots of humans live and shark fishing is more common. Interestingly, but probably not surprisingly, remote locations and protected areas showed the highest diversity of shark species. All of this information was gleaned from just water samples – no hooks, no nets, and no expensive underwater cameras! This technique is relatively new, so it will be interesting and exciting to see how it is used in the future, especially since eDNA can detect so many different species. A colleague of mine said it best: “That’s like some Star Trek stuff!”
Research shows that infants and toddlers acquire language at a much higher rate than older children because of the rapid brain development that occurs in the early years. When working with young children virtually, it is imperative to acknowledge and support language development through healthy interactions between the child, parent and teacher. According to the Linguistic Society, children acquire language through interactions with parents, teachers and other children. To support the overall language development of young children when conducting remote learning, teachers and parents should be aware of exactly what is being communicated. It’s important! As a guide, here are five ways to enhance preschool language development while conducting virtual learning:
Caregiver to child: “Good morning to you (child’s name). Today is Wednesday and it’s bright and sunny outside. Can you see the sun from your window? Can you show me?”
This post describes eight Narrow Reading techniques that have significantly enhanced my students’ vocabulary and reading skills. As explained in previous posts, Narrow Reading is a powerful technique based on a simple idea: getting your students to go over and over the same text through a range of comprehension tasks may be tedious for them, whereas by creating several reading passages (I tend to use three to six) that are very similar in topic, structure, vocabulary and patterns, you can recycle the same target linguistic features across a wider range of texts, allowing for more variety. In my experience, Narrow Reading texts are most effective when they:
- are near-identical in terms of patterns;
- contain comprehensible input (90% accessible in meaning without resorting to dictionaries or extra-textual help);
- are relatively short (very short for absolute beginners, of course, as shown in figure 1 below).
Figure 1 – Example of Narrow Reading texts for absolute beginners of English
The activities I usually ask my students to perform on Narrow Reading texts are different from the typical ‘true or false’, ‘who, where, what, when’ or other classical comprehension questions, because such tasks often encourage skimming and scanning, educated guesswork and picking out details, rather than processing texts in a more thorough and meticulous way. Skimming and scanning, educated guesswork and inferencing are obviously very important skills, which should be fostered in the L2 classroom. However, I want my students to process the texts in their entirety, paying attention to as much text as possible, in order to intensify their exposure to the vocabulary and patterns I intend to recycle. Hence, what I have done over the years is try to come up with tasks which, whilst being engaging and involving problem-solving, aim to get them to do just that. In sum, the main aim of Narrow Reading tasks is to ‘trick’ the students into processing what is basically the same text over and over again, whilst in fact making them read up to six different passages. In this sense, they are possibly one of the most effective recycling tools ever, allowing L2 teachers to expose their learners to the core items in their syllabi many times over throughout the academic year.
2. Eight effective Narrow Reading techniques
The eight techniques described below are Narrow Reading tasks that I carry out in my lessons day in, day out, and that my students enjoy. Obviously, they are contextualised in the topic at hand.
1. Spot the differences – This is a Narrow Reading activity which typically involves 3 to 6 texts (the more the better) of around 100 words that are completely identical apart from a few key details. The task is for the students to spot the details in each text which are different from all the other texts. So if text A in line 3 reads ‘she is tall’, all the other texts will read at the same line ‘she is short’ or ‘she is average height’. Obviously, you can make it into a competition under time constraints with the right group. The rationale for the activity is to trick the students into reading the same texts three to six times over (thereby recycling the same lexis, patterns and grammar) whilst giving them a task which requires them to pay attention to the slightest detail in order to find the differences. As a follow-up you can do a ‘Spot the differences’ listening task in which you re-use the same texts (changing the target details, of course) and read them out to the class.
Since the focus will be on modelling, you will read the texts at modelling speed, not near-native speed. Same rationale: getting them to listen to the same text and patterns over and over again.
Figure 2 – ‘Spot the differences’ (French example)
2. Bad translation – ‘Bad translation’ is another very effective Narrow Reading technique I use a lot. It consists of a set of very similar texts (typically 3 or 4) and their respective translations. The task is for the students to spot four or five mistakes the teacher deliberately made in the translation, to lay emphasis on certain vocabulary or structures. This forces the students to process the target language texts in great detail and to learn vocabulary incidentally as they do so. With this task you are again tricking the students into re-reading the same sort of text, patterns and vocabulary several times over, but with the added benefit of the L1 translation, which may result in some learning of new vocabulary in the process. The same texts can be recycled as a follow-up by placing gaps in the target language texts or in the translations. The technique can also be turned into a listening activity in which the students are provided with the translation and listen to the teacher as he reads the target language text.
3. Summaries – In this activity, the students are once again given 3 to 6 texts on the same topic, not identical but very similar in structure and language content. You summarise each text in 40-50 words in the L1 or L2, depending on the students’ level. The task is for the students to find which summary matches which text. To make the task more challenging, you may want to add an extra summary or two as distractors.
4. Picture – Select an image from the internet or the textbook in use which refers to the topic at hand or to specific grammar structures or patterns you want to recycle. Then create three or more Narrow Reading texts which describe the picture in detail. Make sure, though, that one text and one text only is a 100% accurate description of the picture, whilst the others each have one or two details which do not match the picture. The task is for the students to identify the only text that matches the picture in every single detail. This task kills two birds with one stone in that not only does it enhance the students’ vocabulary and reading skills, but it can also be used to prepare MFL GCSE students for the oral photocard task by modelling useful language and approaches to that task.
5. Questions – This Narrow Reading technique requires a bit more work. I created it in order to focus my students not only on the content of the target texts, but also on understanding L2 questions. After creating 3 to 6 texts that are very similar in content and structure, write a set of ten or more questions in the target language, making sure that each text contains the answer to all of the questions you created but one. The students’ task is to find the one question that does not apply to each specific text (i.e. there is no answer to that question in the text).
6. Overgeneralizations – This is reminiscent of ‘Spot the differences’. After creating the texts you write ten or more statements in the target language about them which are true of all the texts except one (e.g. ‘All the people in the texts play a sport’). The students’ task is to find, for each of the statements, the one text it does not apply to. The statements could be in English or in the L2.
7. List – Create as many Narrow Reading texts as you can on the same topic. Then compile a long list of details in the L1 or L2 taken from the various texts and display it on the board. The task is for the students to match the information on the list to the text it comes from.
8. The most / The least – After creating the texts, write a number of gapped sentences such as: the most positive person is…; the sportiest person is…; the biggest house is…; the person who visited the most places is… . Students are tasked with filling the gaps. This technique has the added benefit of drilling in superlatives, a structure that KS3 students of French find quite difficult to acquire.
More traditional activities, classics such as true-or-false and other comprehension-question tasks, cloze, sorting information into categories (linguistic or semantic), ‘Find the French for the following’, etc., can of course follow, and are indeed desirable, as they truly enhance the power of Narrow Reading tasks. The risk is staying on the same texts a bit too long, which may disengage some students. Narrow Listening tasks, recycling the same texts used for Narrow Reading by changing a few details here and there, are another very effective alternative, depending on the level of the students. Some classes may cope with receiving the same input aurally, some may not. You may have to shorten the texts and simplify the task when adapting them for listening purposes. You will also have to read them at modelling speed, rather than at native or near-native pace. Any other vocabulary tasks or games drilling in the language you embedded in the Narrow Reading texts will be useful before engaging in production. Structured and semi-structured ‘pushed-output’ written and then oral tasks, in which the students are asked to re-use the same language patterns and vocabulary found in the Narrow Reading texts, would obviously be the icing on the cake.
4. Concluding remarks
Narrow Reading tasks constitute an effective and engaging way to increase exponentially the exposure your students get, through the written medium, to the target patterns, vocabulary and grammar structures that you want them to acquire. Such tasks are powerful because they do not ask students simply to pick out details in response to who, where and what questions, which may encourage some of them to skim and scan through texts in search of clues prompting educated guesses or inferences; instead they ‘trick’ the students into processing the texts more closely and thoroughly, whilst giving them a problem to solve. This makes enhancing exposure to the target items more effective and engaging. To date, Narrow Reading tasks have not been used much in published instructional materials, because they are not a well-known technique and are time-consuming to make. You often find clusters of texts on similar topics that share some linguistic features; however, Narrow Reading texts are most effective, in my experience of using them for over a decade, when they are extremely similar in structure, repeat the same patterns over and over again, and when the tasks associated with them have a problem-solving component and even a competitive element. Mounting research evidence shows Narrow Reading texts do enhance students’ vocabulary. Moreover, we know from masses of empirical studies that high-frequency exposure to the same patterns (syntactic, morphological, phonological, etc.) sensitizes students to them, thereby facilitating acquisition.
November 11 - 1620 - The Mayflower Compact
The Mayflower Compact was one of the first governing documents in England's American colonies, establishing a basic framework and social contract among the people who sailed on the Mayflower and established the Plymouth Colony. The people on board the Mayflower, however, were not in the situation they had expected to be in when they reached America. The Mayflower's voyage was organized by a group of Separatists, people who wanted to separate themselves from the Church of England because they believed it was too corrupt. Yet they could not establish a full colony by themselves. Instead, they had other people, whom they referred to as "Strangers" while calling themselves "Saints," join them on their journey. The voyage was problematic, with a planned second ship, the Speedwell, unable to make it into open seas. The delay meant they faced serious gale winds in the North Atlantic, which made the trip rough. Then they arrived well north of their intended target of Jamestown. With their ship forced to stay at Cape Cod, the men of the Mayflower realized they needed to agree to some kind of compact, as they would winter on board the anchored Mayflower.
Preparing food is an incredibly delicate process, which makes it surprising that many still take it for granted. Food's vulnerability to contamination is already high, and the risks multiply if you ignore a single concept: the 'temperature danger zone'. The danger zone is a legitimate concern in the culinary field, so much so that it's taught in My Food Safety's food handling training courses.
What's this Danger Zone?
There's a specific range of temperatures in which food is most vulnerable to harmful bacterial growth. This is the touted 'danger zone', which spans 5-60 degrees Celsius. It's extremely easy for bacteria to grow in food within the danger zone, which is why cooking and frozen storage are the only two points in time when food is technically safest.
Why the Danger Zone, Per Se?
Bacteria are extremely prolific in terms of reproduction. A single bacterium can spawn trillions of its mates in just 24 hours, since bacteria double in number every twenty minutes under the right conditions. The temperature range within the danger zone provides these "right conditions" for bacteria to thrive: ample food, moisture, oxygen and a favourable temperature. In other words, food can literally be kept too hot or too cold for bacteria to survive.
Traversing the Danger Zone
Food hygienists say that the fridge should be kept below 5 degrees Celsius to make sure the environment is too cold for bacteria to pose a real threat. When storing food in the fridge, items should not be stacked too closely, to allow better air circulation. When it comes to freshly cooked food, its temperature must be brought down as fast as possible: as soon as steaming stops, experts recommend that food be divided into small portions and placed inside the fridge. Hot food should also be kept and served hot, ideally at 60 degrees or hotter. Lastly, there's the two-hour/four-hour rule, which details what should be done with food after cooking. For the first two hours, food must be consumed or moved back outside the 5-60 degree range. Between two and four hours, it must be consumed immediately. Should more than four hours pass, the food must be thrown away.
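To summarise the rule, here is a minimal sketch (not from any food-safety body; the function name and wording are illustrative, and the thresholds follow the article's 5-60 degree description) of the two-hour/four-hour decision:

```python
# Minimal sketch of the two-hour/four-hour rule described above.
# Function name and return strings are illustrative assumptions.
def two_four_hour_rule(hours_in_danger_zone: float) -> str:
    """Advice for food that has spent the given time at 5-60 degrees C."""
    if hours_in_danger_zone < 2.0:
        return "use now, or move back outside the 5-60 C range"
    if hours_in_danger_zone < 4.0:
        return "consume immediately"
    return "throw away"

for hours in (1.0, 3.0, 5.0):
    print(f"{hours} h in the danger zone: {two_four_hour_rule(hours)}")
```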
KTD Literature will be working on developing rhyming words using Scrabble tiles. My plan is to turn this into a race between two groups to see who can come up with the most rhyming words for a variety of sounds in three-minute intervals.
1TD spent last week reading three short chapter books and putting post-it notes anywhere one character changed another character. This week, those changes will be recorded on three different webs.
2TD is finally done with their amazing renditions of A Giraffe and a Half. The kids did a great job of working together on this book. Our next adventure will be with the novel Frindle by Andrew Clements. We will be reading this book together and recording how different characters change in a variety of categories.
3PA will spend the week creating a plan, based on The Girl Who Owned a City, of where in Brookdale they would relocate to create their own city. The kids will also be creating a visual to show the connections between this novel and The Green Book.
4PA will create their impressionist watercolor paintings of The Secret Garden in three stages of development. Some of the symbolism and metaphors found in the text will also find their way into the paintings. This should be a fun week for the 4th graders.
5PA is finishing up reading two award-winning novels written in verse, The Crossover and Brown Girl Dreaming. Teams were given the four standards 5PA uses in the 2nd quarter for understanding literature. The only direction they were given was to show their mastery of the standards in some way. It can be anything they think of: a challenge. This week, groups will be developing their way of showing mastery of the following standards:
RL.6.2: Determine a theme or central idea of a text and how it is conveyed through particular details; provide a summary of the text distinct from personal opinions or judgments.
RL.6.3: Describe how a particular story's or drama's plot unfolds in a series of episodes as well as how the characters respond or change as the plot moves toward a resolution.
RL.6.4: Determine the meaning of words and phrases as they are used in a text, including figurative and connotative meanings; analyze the impact of a specific word choice on meaning and tone.
RL.6.5: Analyze how a particular sentence, chapter, scene, or stanza fits into the overall structure of a text and contributes to the development of the theme, setting, or plot.
On Wednesday, we will have our first Socratic Seminar Tug-o-War comparing Ray Bradbury to all other writers to develop our understanding of how text structure impacts themes in literature.
Multiplication Sheet 1-12 Printable
In learning multiplication, the printable Multiplication Table Chart 1-12, as posted above, can be used by children to learn and memorise any multiplication fact from 1 to 12. A complete set of free printable multiplication times tables for 1 to 12 covers the facts up to 12 × 12 = 144, including individual facts worksheets and 1-12 mixed multiplication facts. These times table worksheets are colourful, appropriate for Kindergarten through 5th Grade, and each practice sheet uses a funny theme to attract early learners.
The collection includes several formats:
- Missing Factors 1 to 12 (Horizontal Questions - Full Page): a basic worksheet designed to help kids practise missing factors for 1 through 12, with multiplication questions that change each time you visit.
- Multiply by 12 (Horizontal Questions - Full Page): a basic worksheet designed to help kids practise multiplying by 12. Each printable page displays a full sheet of horizontal or vertical multiplication questions.
- Multiplying 1 to 12 by 1 (A): a math worksheet from the Multiplication Worksheets page at Math-Drills.com.
- Times Tables and Advanced Times Tables (2-12) worksheets, along with basic facts, cubes, horizontal, quiz and repeated-practice formats, and multiplication target circles.
- Mixed multiplication and division worksheets, multiplication array worksheets, fun multiplication greeting-card worksheet makers, multiplication drills (1-10 and 1-12), counting-based worksheets, multiplication circle worksheets and 2-digit multiplication worksheets, plus the wide range of free printable multiplication worksheets at Math Salamanders.
These materials support the Common Core standard 3.OA.7: fluently multiply and divide within 100, using strategies such as the relationship between multiplication and division (e.g. knowing that 8 × 5 = 40, one knows 40 ÷ 5 = 8) and properties of operations.
As an aside on notation, the matrix product C = AB has entries c_ik = sum over j of a_ij b_jk; writing a_ij b_jk with the summation sign implied is the Einstein summation convention, commonly used in both matrix and tensor analysis. For matrix multiplication to be defined, the dimensions of the matrices must satisfy (n × m)(m × p) = (n × p).
You can also use the worksheet generator to create your own multiplication facts worksheets, which you can then print or forward. The tables worksheets are ideal for the 3rd grade.
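For anyone who would rather generate such a chart than print one, here is a minimal sketch (not from the source site) that prints a 1-12 multiplication grid of the kind described above:

```python
# Minimal sketch: print a 1-12 multiplication grid, up to 12 x 12 = 144.
def multiplication_grid(n: int = 12) -> str:
    header = "  x " + "".join(f"{col:5d}" for col in range(1, n + 1))
    rows = [header]
    for row in range(1, n + 1):
        rows.append(f"{row:4d}" + "".join(f"{row * col:5d}" for col in range(1, n + 1)))
    return "\n".join(rows)

print(multiplication_grid())
```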
Analysis of Variance (ANOVA): Why look at variance if we're interested in means?
If you aren't familiar with a procedure called "Analysis of Variance" (ANOVA), it's basically used to compare multiple group means against each other and determine whether they are different or not. We can determine how similar or dissimilar multiple groups' means are from one another by asking the question, "How much of the difference is due to groups, as opposed to individual differences?" For example, we might want to claim that winter temperatures differ depending on whether you're in Illinois, Indiana or Ohio. However, over the course of ten days we find that the average temperatures were 30, 31 and 30.5 degrees Fahrenheit, respectively. Are they really that different? That is when statistical tests come in handy. So, how does ANOVA work? This article won't be so much about how to run an ANOVA or an F-test, but rather about the logic behind it; understanding the basic logic has always aided me in remembering when and why to use each model, as well as understanding the limitations of each model -- hopefully this can do the same for you!
In statistics, variance is basically how spread out your data is. Let's say that you are interested in how anxious individuals at your school are. You might look at a sample of your fellow schoolmates (from different majors, and not necessarily your own friends) and administer an anxiety battery (basically a survey) which will yield anxiety scores. If the anxiety scores across all your participants are similar, we say that there is small variance (a narrower distribution). However, if the anxiety scores vastly differ from person to person, then we say that we have large variance (a wider distribution).
However, why should we analyze variances when we're interested in comparing means (averages)? Consider for a moment a psychologist who is interested in whether exercising or meditating is more effective in reducing anxiety. She can run an experiment and randomly assign people into two groups: 1) an exercise group and 2) a meditation group. At the end, she can look at how much anxiety scores decreased, on average, in the two groups. We can ask ourselves two questions. First, was one more successful than the other at reducing anxiety, or were both equally successful? In other words, did being in a particular group lead to a different outcome (amount of anxiety reduction)? Or were the differences just due to individual differences? The first question refers to variance between groups -- how different were the groups from one another in the outcome (anxiety reduction)? The second question refers to overall variance between individuals -- how different were people from one another (regardless of group)? If much more of the difference in anxiety reduction is due to which group you were assigned to, compared to random individual differences, then we say that there is a significant difference between the two groups in the average amount of anxiety reduction. To wrap things up, ANOVA compares the amount of variation between groups to the amount of variation between individuals, allowing us to determine whether groups are actually different or not, on average. The formal inference test is the F-test, and like other inference tests, we'll obtain a test statistic (in our case, F) and a p-value.
In fact, the F-statistic is constructed by dividing the average amount of variance (differences) between groups by the average amount of variance (differences) between people -- we're looking at a ratio! Let's take this apart piece by piece. In the numerator (the top half of the fraction), we have MSA, which is short for "mean square for factor A." In other words, our factor A is our treatment groups! Do you remember what they are? Essentially, to get this value, we look at each group's average amount of anxiety reduction. On average, how much anxiety was reduced in the meditation group? And on average, how much anxiety was reduced in the exercise group? Now, let's look at how each of these group averages differs from the overall, grand reduction in anxiety (collapsed across all groups). That, in essence, is what the MSA is trying to capture -- how much of the difference in anxiety reduction was due to treatment? Then, in the denominator (the bottom half of the fraction), we have the MSE, which is short for "mean squared error." This is the variability in how much each individual's anxiety score decreased, compared with the decrease typical of their group. For example, you might have experienced a reduction in anxiety of 3 points, I might have experienced a reduction of 1 point, and on average, everyone in our group decreased by 2 points. That means that you were one point above the average and I was one point below. So, in essence, the MSE is trying to capture how much variability, on average, is seen between individuals beyond what the groups explain. However, this leads to a drawback. If we choose to use an ANOVA to compare 3+ group means, we cannot identify how the groups differ from one another. For example, we cannot tell from this test whether group 1 was greater than both groups 2 and 3; we can only detect whether there is a difference at all between the groups. But don't worry -- there are other tests available for this. So, this concludes our logical breakdown of ANOVA. As we've seen, it's often difficult and ambiguous to just look at averages alone. We saw this in our initial example with weather, and then again with treatments. Statistics can prove useful in this way by incorporating other important factors, such as variance, to help us make more disciplined judgements. And understanding the basic logic behind each statistical model was always helpful for me in determining which statistical test to use -- hopefully this helps you become more confident in choosing ANOVA when it best fits your purpose!
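To ground the ratio in something runnable, here is a minimal sketch of a one-way ANOVA on two made-up groups of anxiety-reduction scores (the numbers are invented for illustration); it computes MSA and MSE by hand and cross-checks the resulting F-statistic against SciPy's built-in routine:

```python
# Minimal one-way ANOVA sketch on hypothetical anxiety-reduction scores.
import numpy as np
from scipy import stats

exercise = np.array([3.0, 2.0, 4.0, 3.5, 2.5])    # invented example data
meditation = np.array([1.0, 2.0, 1.5, 2.5, 1.0])  # invented example data
groups = [exercise, meditation]

grand_mean = np.mean(np.concatenate(groups))

# Numerator: mean square for the group factor (MSA) --
# how far each group mean sits from the grand mean.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
df_between = len(groups) - 1
msa = ss_between / df_between

# Denominator: mean squared error (MSE) --
# how far individuals sit from their own group's mean.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = sum(len(g) for g in groups) - len(groups)
mse = ss_within / df_within

f_stat = msa / mse
p_value = stats.f.sf(f_stat, df_between, df_within)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

# Cross-check with SciPy's built-in one-way ANOVA:
print(stats.f_oneway(exercise, meditation))
```

If being in the exercise group rather than the meditation group explains much more of the spread than person-to-person noise does, MSA dwarfs MSE, F grows large, and the p-value shrinks, which is exactly the ratio logic described above.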