On February 22, 2017, NASA announced that seven Earth-sized planets had been discovered around an ultra-cool dwarf star named TRAPPIST-1, located about 39 light-years from Earth. What’s more, three of them orbit their star in the habitable zone. An international team of astronomers led by the Swiss astronomer Vincent Bourrier of the Observatoire de l’Université de Genève then used the NASA/ESA Hubble Space Telescope to estimate whether there might be water on the planets of the TRAPPIST-1 system. On August 31, 2017, the team announced that their findings suggest that “the outer planets of the system might still harbor substantial amounts of water”, including the three planets within the habitable zone of the star: TRAPPIST-1e, f, and g. This result lends further weight to the possibility that these planets may indeed be habitable.

To study the amount of ultraviolet radiation received by the individual planets of the system, the team used the Space Telescope Imaging Spectrograph (STIS) on Hubble, which combines a camera with a spectrograph(1) and covers a wide range of wavelengths from the near-infrared region into the ultraviolet. Why ultraviolet? Team leader Bourrier explains: “Ultraviolet radiation is an important factor in the atmospheric evolution of planets. As in our own atmosphere, where ultraviolet sunlight breaks molecules apart, ultraviolet starlight can break water vapor in the atmospheres of exoplanets into hydrogen and oxygen.” Lower-energy ultraviolet radiation drives a chemical reaction called photodissociation (a process in which a chemical compound is broken down by photons) and breaks up water molecules. The resulting products (hydrogen and oxygen) can then escape into space, while XUV radiation (higher-energy ultraviolet rays) and X-rays heat the upper atmosphere of a planet.
Hydrogen in particular, being very light, can escape an exoplanet’s atmosphere and be detected around the planet. If we can detect hydrogen around an exoplanet, it may be a possible indicator of atmospheric water vapor. The observed amount of ultraviolet radiation emitted by TRAPPIST-1 indeed suggests that the planets could have lost gigantic amounts of water over the course of their history – especially the two innermost planets of the system, TRAPPIST-1b and TRAPPIST-1c, which receive the largest amount of ultraviolet energy. Co-author of the study Julien de Wit, of MIT, USA, explains: “Our results indicate that atmospheric escape may play an important role in the evolution of these planets.”

Water on the habitable-zone planets of TRAPPIST-1

According to the scientists, the inner planets, which receive a high amount of ultraviolet, could have lost more than 20 Earth-oceans-worth of water during the last eight billion years. But the outer planets of the system, including the three planets in the habitable zone (TRAPPIST-1e, f and g), should have lost much less water, so there could still be water on their surfaces. The calculated water loss rates, as well as geophysical water release rates, also favor the idea that the outermost, more massive planets retain their water. However, with the currently available data and telescopes, no final conclusion can be drawn on the water content of the planets orbiting TRAPPIST-1. Bourrier summarizes: “While our results suggest that the outer planets are the best candidates to search for water with the upcoming James Webb Space Telescope(2), they also highlight the need for theoretical studies and complementary observations at all wavelengths to determine the nature of the TRAPPIST-1 planets and their potential habitability.”

(1) A spectrograph is an instrument that separates light into a frequency spectrum and records the signal using a camera.
The term was first used in July 1876 by Dr. Henry Draper, who invented the earliest version of the device and used it to take several photographs of the spectrum of Vega.

(2) Scheduled to launch in October 2018, the James Webb Space Telescope (JWST) is a space telescope that is part of NASA’s Next Generation Space Telescope program, developed in coordination between NASA, the European Space Agency, and the Canadian Space Agency. It will be located near the Earth-Sun L2 Lagrangian point. In celestial mechanics, the Lagrangian points are positions in an orbital configuration of two large bodies where a small object affected only by gravity can maintain a stable position relative to the two large bodies. The Lagrange points mark positions where the combined gravitational pull of the two large masses provides precisely the centripetal force required to orbit with them. There are five such points, labeled L1 to L5, all in the orbital plane of the two large bodies. The first three lie on the line connecting the two large bodies; the last two, L4 and L5, each form an equilateral triangle with the two large bodies. These latter two points are stable, which implies that objects can orbit around them in a rotating coordinate system tied to the two large bodies. Several planets have minor planets near their L4 and L5 points (trojans) with respect to the Sun, with Jupiter in particular having more than a million of these. Artificial satellites have been placed at L1 and L2 with respect to the Sun and Earth, and to the Earth and the Moon, for various purposes, and the Lagrangian points have been proposed for a variety of future uses in space exploration.
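The Lagrange-point description above can be made concrete with a standard back-of-the-envelope calculation: the distance of the Sun-Earth L1 and L2 points from Earth is approximately the Hill radius, r ≈ R(m/3M)^(1/3). The sketch below uses commonly cited textbook values for the Earth-Sun distance and mass ratio; the formula and the constants are standard references, not taken from the article itself.

```java
// Rough estimate of how far the Sun-Earth L2 point lies from Earth,
// using the Hill-radius approximation r ~ R * (m / 3M)^(1/3).
// The constants below are assumed textbook values, not from the article.
public class LagrangeDistance {
    public static void main(String[] args) {
        double R = 1.496e8;          // mean Earth-Sun distance in km (1 AU)
        double massRatio = 3.003e-6; // Earth mass divided by Sun mass
        double r = R * Math.cbrt(massRatio / 3.0);
        System.out.printf("L1/L2 distance from Earth: about %.2e km%n", r);
        // comes out near 1.5 million km, the region where JWST operates
    }
}
```

This is only the leading-order approximation; the exact L1 and L2 distances differ slightly from each other and from the Hill radius.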
The League of Nations

It was Wilson's hope that the final treaty, drafted by the victors, would be even-handed, but the passion and material sacrifice of more than four years of war caused the European Allies to make severe demands. Persuaded that his greatest hope for peace, a League of Nations, would never be realized unless he made concessions, Wilson compromised somewhat on the issues of self-determination, open diplomacy, and other specifics. He successfully resisted French demands for the entire Rhineland, and somewhat moderated that country's insistence upon charging Germany the whole cost of the war. The final agreement (the Treaty of Versailles), however, provided for French occupation of the coal- and iron-rich Saar Basin, and a very heavy burden of reparations upon Germany.

[Photo caption: The "Big Four" at the Paris Peace Conference in 1919, following the end of World War I. They are, seated from left, Prime Minister Vittorio Orlando of Italy, Prime Minister David Lloyd George of Great Britain, Premier Georges Clemenceau of France, and President Woodrow Wilson of the United States. Despite strenuous efforts, Wilson was unable to persuade the U.S. Senate to agree to American participation in the new League of Nations established in the aftermath of the war. (The National Archives)]

In the end, there was little left of Wilson's proposals for a generous and lasting peace but the League of Nations itself, which he had made an integral part of the treaty. Displaying poor judgment, however, the president had failed to involve leading Republicans in the treaty negotiations. Returning with a partisan document, he then refused to make concessions necessary to satisfy Republican concerns about protecting American sovereignty. With the treaty stalled in a Senate committee, Wilson began a national tour to appeal for support. On September 25, 1919, physically ravaged by the rigors of peacemaking and the pressures of the wartime presidency, he suffered a crippling stroke.
Critically ill for weeks, he never fully recovered. In two separate votes -- November 1919 and March 1920 -- the Senate once again rejected the Versailles Treaty and with it the League of Nations. The League of Nations would never be capable of maintaining world order. Wilson's defeat showed that the American people were not yet ready to play a commanding role in world affairs. His utopian vision had briefly inspired the nation, but its collision with reality quickly led to widespread disillusion with world affairs. America reverted to its instinctive isolationism.
A number of different processes form a complex mix of energy, water and moving air to produce our everyday weather and long-term climate.

As the Earth's surface warms, energy is emitted back into the atmosphere, much as the hob of an electric cooker radiates heat. But if that were all that happened, the Earth's surface would be frozen, with an average temperature of around -18 °C - too cold to support life. Instead, gases in the Earth's atmosphere absorb some of the outgoing energy and return part of it to the Earth's surface. These gases (water vapour, carbon dioxide, methane, nitrous oxide, ozone and some others) act like a blanket by trapping some of the heat. The greater the concentration of these atmospheric gases, the more effectively they return energy to the Earth's surface, trapping even more heat and warming the Earth rather like a greenhouse. That is why this process is known as the greenhouse effect.

At any one time, the atmosphere contains many travelling weather systems with variable winds. When these winds are averaged over many years, a well-defined pattern of large-scale 'cells' of circulation appears. These cells help to explain some of the different climate zones across the world. The largest cells (named Hadley cells, after English meteorologist George Hadley) extend from the Equator to 30-40° latitude. Here, warm, water-laden air rises, condensing to form a broken line of thunderstorms, sustaining the world's tropical rainforests. From the tops of these storms, air flows towards higher latitudes, where it sinks to produce high-pressure regions with hot, dry air - the world's deserts. Out-flowing air from these higher latitudes forms the trade winds that blow towards the Equator over the ocean. At the opposite extreme, the smallest and weakest cells are the polar cells, extending from 60-70° latitude to the poles. Here, the air is very dry and stable - Antarctica is the driest continent on Earth.
The cold air sinks and flows away from the poles. In between, in the mid-latitudes - where the UK is located - warm, moist air from the subtropics meets cold, dry air from high latitudes, bringing the unsettled wet weather typical of the temperate zones.

When water falls as precipitation, it may fall back into the oceans, lakes or rivers, or it may end up on land. The oceans hold about 97% of the Earth's water, while the remaining 3% is the freshwater so essential for life. About 78% of freshwater is frozen in the ice sheets of Antarctica and Greenland; 21% is stored in sediments and rocks below the Earth's surface; and less than 1% falls as precipitation and is found in rivers, lakes and streams. Eventually, nearly all of the water that falls on land finds its way back to the ocean, affecting the temperature, saltiness and density of different ocean regions. Colder, saltier water sinks in the oceans while warmer, less salty water rises. This overturning of the oceans creates warm and cold currents in different parts of the world and plays a significant part in determining the climate.

Weather is the temperature, wind and precipitation (rain, hail, sleet and snow) that we experience every day. Weather systems are constantly circulating within the Earth's atmosphere, so what you see today may be different tomorrow. The approximately 23° tilt of the Earth's axis also causes the atmospheric circulation cells to shift and the seasons to change. Yet weather follows identifiable patterns in different regions and over time. This is known as climate. Changeable conditions are a feature of the British weather - a topic that is often used to break the ice in conversations. Nowadays, however, we need to consider what may happen around the world over the next century, not just the next few days.

Last updated: 26 September 2013
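The -18 °C figure quoted above for an Earth without greenhouse gases follows from a simple energy balance: sunlight absorbed per unit surface area, S(1 - a)/4, equals the thermal radiation emitted, σT⁴. The sketch below uses commonly cited textbook values for the solar constant and Earth's albedo; these numbers are assumptions, not taken from the article.

```java
// Effective (greenhouse-free) temperature of the Earth from the
// Stefan-Boltzmann energy balance: S(1 - a)/4 = sigma * T^4.
// Constants are assumed textbook values, not from the article.
public class EffectiveTemperature {
    public static void main(String[] args) {
        double S = 1361.0;       // solar constant at Earth, W/m^2
        double albedo = 0.3;     // fraction of sunlight Earth reflects
        double sigma = 5.670e-8; // Stefan-Boltzmann constant, W/m^2/K^4
        double tKelvin = Math.pow(S * (1 - albedo) / (4 * sigma), 0.25);
        System.out.printf("Effective temperature: %.1f K (%.1f C)%n",
                tKelvin, tKelvin - 273.15);
        // roughly 255 K, i.e. about -18 C, matching the figure in the text
    }
}
```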
In this introductory Java programming course, you will be introduced to powerful concepts such as functional abstraction, the object-oriented programming (OOP) paradigm and Application Programming Interfaces (APIs). Examples and case studies will be provided so that you can implement simple programs on your own or collaborate with peers. Emphasis is put on immediate feedback and on having a fun experience. Programming knowledge is not only useful for programming today's devices, such as computers and smartphones. It also opens the door to computational thinking, i.e. the application of computing techniques to everyday processes. This edition is an improved version of the course released in April 2015.

1. From the Calculator to the Computer
The first section introduces basic programming concepts, such as values and expressions, as well as making decisions when implementing algorithms and developing programs.

2. State Transformation
The second section introduces state transformation, including the representation of data and programs as well as conditional repetition.

3. Functional Abstraction
The third section addresses the organization of code in a program through methods, which are invoked to carry out a task and return a result as an answer. Recursion, as a powerful mechanism in the invocation of methods, is also covered in this section.

4. Object Encapsulation
The fourth section introduces the object-oriented programming (OOP) paradigm, which enables the modeling of complex programs in Java through objects and classes. The concept of inheritance as the basis for reusing code and simplifying programs in Java is also studied in this section.

The last section studies the reuse of code through third-party classes that are already developed and that we can incorporate into our programs to perform specific actions, reducing the number of lines that we need to code.
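As an illustration of the functional abstraction and recursion ideas described for the third section, here is a minimal Java sketch: a method that carries out a task, returns a result, and invokes itself on a smaller input. This example is illustrative only and is not taken from the course materials.

```java
// A method organizes code around a task and returns a result; recursion
// lets the method invoke itself on smaller inputs until a base case.
public class Factorial {
    // factorial(n) = n * factorial(n - 1), with factorial(0) = 1
    static long factorial(int n) {
        if (n <= 1) {
            return 1L;               // base case: stops the recursion
        }
        return n * factorial(n - 1); // recursive invocation
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
    }
}
```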
During the American Civil War, the Confederate States of America consisted of the governments of 11 Southern states that seceded from the Union in 1860-61, carrying on all the affairs of a separate government and conducting a major war until defeated in the spring of 1865. Convinced that their way of life, based on slavery, was irretrievably threatened by the election of President Abraham Lincoln (November 1860), the seven states of the Deep South (Alabama, Florida, Georgia, Louisiana, Mississippi, South Carolina and Texas) seceded from the Union during the following months. When the war began with the firing on Fort Sumter (April 12, 1861), they were joined by four states of the upper South (Arkansas, North Carolina, Tennessee and Virginia). Formed in February 1861, the Confederate States of America was a republic composed of eleven Southern states that seceded from the Union in order to preserve slavery, states’ rights, and political liberty for whites. Its conservative government, with Mississippian Jefferson Davis as president, sought a peaceful separation, but the United States refused to acquiesce in the secession. The war that ensued started at Fort Sumter, South Carolina, on April 12, 1861, and lasted four years. It cost the South nearly 500,000 men killed or wounded out of a population of 9 million (including 3 million slaves) and $5 billion in treasure. The Confederacy’s eastern military fortunes went well for the first two years, with major victories at First Manassas (Bull Run), ‘Stonewall’ Jackson’s Valley Campaign, and the Seven Days’ Battles, where Gen. Robert E. Lee took command of the main eastern army in June 1862 and cleared Virginia of federal troops by September. His invasion of Maryland was checked at Sharpsburg (Antietam) in mid-September, and he returned to Virginia, where he badly defeated federal forces at Fredericksburg and Chancellorsville. The main western Confederate forces, commanded by Generals Albert Sidney Johnston, P. G. T.
Beauregard, and Braxton Bragg, suffered defeats at Forts Henry and Donelson and Shiloh in Tennessee, and at Corinth, Mississippi, but they held that flank through 1862. Davis formed his government at the first Confederate capital in Montgomery, Alabama. The Confederacy’s Permanent Constitution provided for a presidential item veto, debating seats for cabinet members, and six-year terms for the president and vice president (the president was ineligible for successive terms); it prohibited the foreign slave trade and forbade Congress from levying a protective tariff, giving bounties, or making appropriations for internal improvements. After initial problems, Davis’s government grew stronger as he learned to use executive power to consolidate control of the armed forces and manpower distribution. But some Southern governors resisted Davis’s centralization and tried to keep their men and resources at home. Although Davis used authority effectively, the insistence on preserving states’ rights plagued him constantly. Vice President Alexander H. Stephens, an early dissident, for example, sulked in his native Georgia and finally urged its secession from the Confederacy. But nothing gave the government more trouble than its poverty. There was only $27 million worth of specie in the Confederacy, and money remained scarce. A federal blockade gradually shrank Southern foreign trade and drained financial reserves. Christopher G. Memminger, the treasury secretary, followed conservative policies. A campaign to raise funds through a domestic loan in February 1861 lagged; a $50 million loan drive launched in May did little better. Finally Congress resorted to a ‘produce loan,’ which allowed planters to pledge produce as security for bonds. Although initially popular, this expedient also failed. The next resort, paper money, stimulated inflation, and on April 24, 1863, Congress passed the toughest tax law ever seen in the South.
Rates were increased, an income tax was authorized, and a profits tax was imposed on farm products; farmers and planters were subjected to a tax-in-kind, which required them to contribute one-tenth of their annual crop yield to the government. This unpopular law did not solve the financial problems, however. In mid-1863, Memminger proposed taking one-third of the currency out of circulation. Congress resisted, but finally, in February 1864, it passed a funding act that created a brief drop in inflation, which soon yielded to a price-and-money spiral that presaged bankruptcy. An 1863 foreign loan for $15 million through the Erlanger Bank in France realized only about $9 million in purchasing power. Then the government resorted to such desperate measures as impressment of private produce, livestock, machinery, and transportation equipment, which brought limited relief to the armies but endless enmity for what was seen as a ‘despotic’ government. The failure to tax land, cotton, and slaves earned cries of ‘a rich man’s war and a poor man’s fight’ and sapped morale behind the lines. The Confederacy never won the loyalty of the black population. Some free blacks volunteered for Southern ranks but were rejected. Federal invaders liberated slaves, and fear of insurrections sapped Southern strength in the last two war years. Keeping the ranks of the armies filled became difficult as casualties mounted and enthusiasm faded. In April 1862, Congress, on the advice of Davis, passed the first draft law in American history, which took into Confederate service all white men between eighteen and thirty-five. Liberal exemptions (including one white exemption for every twenty slaves owned) weakened the law. But the courts upheld it and most people accepted it as necessary, an attitude that persisted even after February 1864, when the age limits were extended to seventeen and fifty and substitutes were prohibited. 
In March 1865 blacks finally were enrolled in Confederate ranks, but very few served. Taxation, impressment, and conscription -- these were the hallmarks of a tough administration. President Davis pursued centralization much as Abraham Lincoln did; laissez-faire policies could not win a modern war. The lessons learned in management, sacrifice, fortitude, and logistics would change the South permanently. Supplying and moving the armed forces became the main work of many in the South, and new methods of procurement, storage, and distribution were developed. Railroads were essential to the mass movement of men and matériel, of ordnance and medicine, and of civilian refugees from occupied areas. Congress passed laws nationalizing rail lines, sequestering space on blockade runners, and controlling commerce. Industrial development had lagged in the antebellum South, and now Congress encouraged industrialization by siphoning manpower and money to companies producing war goods. A minor industrial miracle occurred in the Confederacy: a nation with minuscule manufacturing capacity acquired foundries, powder works, rolling mills, and arsenals enough to sustain nearly a million troops, and ships enough to scare American merchantmen. The chief of ordnance, Gen. Josiah Gorgas, a Pennsylvanian and genius of logistics, supplied Rebel munitions to the end. Gorgas, an advocate of blockade running, oversaw the building of small, fast ships capable of eluding federal coastal patrols. Blockade running was a very successful venture: at least 600,000 rifles were imported, plus large quantities of cannon, saltpeter, lead, clothing, coffee, and medicines. Highly profitable, blockade running produced heroes, villains, and millionaires, and it sustained the Rebels. Davis’s foreign policy centered on gaining recognition by Great Britain and France. Napoleon III wanted a Confederate victory but hesitated to act without the British.
Many Britons sympathized with the Confederates, but the working class supported Lincoln’s Emancipation Proclamation. Judah P. Benjamin, the Confederate secretary of state, hoped that an embargo on ‘King Cotton’ would force help from textile-producing countries. But each time recognition was almost at hand, military reverses chilled prospects. The issue remained with the Rebel soldiers: when they won, independence came close; when they lost, nothing else mattered. And they lost almost steadily after the first terrible week of July 1863. Defeats at Gettysburg and Vicksburg cost fifty thousand men and seventy thousand arms. After that week, long retreats began -- in the East through the Wilderness, Spotsylvania, Cold Harbor, and Petersburg; in the West from Chickamauga, Lookout Mountain/Missionary Ridge, and Atlanta to Franklin and Nashville, Tennessee -- ending in Lee’s surrender at Appomattox and Joseph E. Johnston’s at Durham Station, North Carolina. Sustained for a while by Davis’s offensive-defensive strategy, Confederate armies were finally defeated by attrition, the country behind them exhausted and drained. The surprise is not that they lost but that they persisted for four arduous years.
Agriculture is the basic sector of economic and human development, from the satisfaction of primary food requirements to the import and export of products. Contemporary agricultural practice now faces a new challenge. Extreme temperatures, drought and frosts have become more and more frequent, and shifting monsoon periods and the process of desertification are severely affecting agricultural productivity. At the same time, it has been estimated that agriculture is responsible for 14% of global greenhouse gas emissions. Agriculture is entering a vicious circle in which climate change negatively affects agricultural productivity, and agricultural production is itself a factor in climate change. New technological frontiers have made agriculture an important part of the solution, through the mitigation of a significant share of global emissions. Some 70% of this mitigation potential could be realized in developing countries. Imagine: agriculture as a resource not just for food, but also for mitigating the negative effects of climate change. Food security and climate change can be addressed together by transforming agriculture and adopting practices and policies that also safeguard the natural resource base for future generations. In a formula: climate-smart agriculture. This project is not just science fiction. International organizations such as FAO, IFAD and WFP have been promoting and financing programmes for the research and implementation of these new technologies. Programmes sponsored by IOs and NGOs have focused on testing and implementing new production systems that bring many factors into the cycle of sustainable development.

Soil and nutrient management

The availability of nitrogen and other nutrients is essential to increase yields.
The use of methods and practices that increase organic nutrient inputs, such as composting manure and crop residues, more precise matching of nutrients with plant needs, and controlled-release and deep-placement technologies, has a double effect. On the one hand, these practices increase productivity; on the other, they reduce the need for synthetic fertilizers, which, owing to cost and access, are often unavailable to smallholders and which very often worsen the quality of the final product.

Water harvesting and use

Improved water harvesting and retention systems (such as pools, dams, pits and retaining ridges) and water-use efficiency (irrigation systems) are fundamental for increasing production and addressing the increasing irregularity of rainfall patterns. At the same time, these water management systems often have minimal impact on the environment.

Pest and disease control

While human beings face many difficulties in adapting their practices to climate change, pathogens seem to adapt easily to new environmental conditions. It has been estimated that the distribution, incidence and intensity of animal and plant pests and epidemics are increasing in developing areas, owing to higher temperatures and humidity levels. To face new diseases and pests, it seems essential not just to mitigate climate change effects, but also to rely on biodiversity. A growing variety of pests can be confronted by turning to seed variety: the use of new, pest-resistant seed types and tilling practices may be one of the immediate solutions. Undoubtedly, the main root of this issue is climate change, which seems to be an unrelenting phenomenon, above all in its implications for countries’ development. It seems, then, that the solution should be sought on the one hand in the mitigation of climate change, and on the other in the use of new practices that can adapt production to new environmental conditions. The most immediate solution has appeared to be just one: genetic resources.
Science has today reached a stage at which genetic make-up can determine a plant’s or animal’s tolerance to shocks such as temperature extremes, droughts, flooding, and pests and diseases. It may also regulate the length of the growing season or production cycle and the response to inputs such as fertilizer, water and feed. Besides, the use of new seeds can improve the quality of the final product, since they can prevent the massive use of pesticides. Think of drought-tolerant maize, flood-tolerant rice, rice tolerant of drought, weeds and pests, or even biofortified crops, bred to be rich in nutrients.

Not all these solutions are immune from criticism. Modern technologies and advances in the agricultural sector, such as inorganic fertilizers, pesticides, feeds, supplements, high-yielding varieties, and land management and irrigation techniques, increase production considerably. However, in certain circumstances these practices and techniques have caused ecological damage, degradation of soils, unsustainable use of resources, outbreaks of pests and diseases, and health problems for both livestock and humans. The result is exactly the opposite of what was expected: lower yields, outbreaks of pests and diseases, degradation or depletion of natural resources, and forest encroachment. In addition, many production systems in developing countries are, owing to a lack of finance, resources, knowledge and capacity, well below the potential yield that could be achieved.

A viable path

New seeds are not necessarily genetically modified; they may simply be imported from one area and planted in another. Besides, the same result offered by genetically modified seeds can often be reached through simple practices of cultivation and irrigation. For example, water management practices are essential for the prevention of malaria. Traditionally, malaria prevention efforts have relied on pesticides or pharmaceutical drugs.
As mosquitoes develop resistance to pesticides and drugs lose their effectiveness against the malarial parasite, countries could instead develop infrastructure for water management practices that curb malaria at its source: the stagnant pools of water common in irrigated agriculture.

Another, often underrated, solution is agroforestry. The use of trees and shrubs in agricultural systems helps to tackle the triple challenge of securing food security, mitigating climate change, and reducing the vulnerability and increasing the adaptability of agricultural systems to climate change. On the productivity front, trees in the farming system represent a source of income and diversify production. In the field of climate change mitigation, trees can diminish the effects of extreme weather events, such as heavy rains, droughts and wind storms. They can improve soil fertility and soil moisture by increasing soil organic matter, and at the same time prevent erosion, stabilize soils, raise infiltration rates and halt land degradation.

Research and international programmes provide many examples of “smart” solutions and many successful experiments. Why, then, is the solution not working? Looking at progress in the field and taking into consideration all the financing and programmes started by IOs and NGOs, one may ask why a high rate of scarcity and food insecurity still persists in many countries. The real problem seems to lie in weak political support. There are three pillars to consider in the development issue: (1) farmers; (2) politicians and the government; (3) researchers. The first are holders of practical knowledge, but in the face of unpredictable phenomena like desertification or unexpected floods, they can find themselves powerless. The last, whose knowledge and discoveries cannot substitute for, but rather complement, the practice of farmers, are often relegated to university departments and research laboratories.
Existing knowledge, technologies and inputs do not easily reach farmers, especially in developing countries. The connection between research and smallholder farmers should be established at the government level through policies, infrastructure and considerable investment to build the financial and technical capacity of farmers. More productive and resilient agriculture will need better management of natural resources, such as land, water, soil and genetic resources, through practices such as conservation agriculture, integrated pest management, agroforestry and sustainable diets. At the same time, a viable solution cannot do without political and financial support. States and IOs should cooperate to guarantee smallholders access to inputs, know-how and new technologies, whose implementation can not only foster development but also mitigate the negative externalities of food production. Smart. Isn’t it?
Most astronomers claim that comets are the remains of the solar system’s formation some 4.6 billion years ago. The quandary for those who accept an old age for the solar system and universe is this: As comets circle the sun, heat and other processes cause them to disintegrate. Because they shed so much material during each pass around the sun, comets could not possibly last for millions (much less billions) of years. In fact, there is no way that comets could survive much longer than about 100,000 years.1 This evidence is powerfully consistent with their relatively recent creation, as the biblical chronology shows.
Sleep apnea is a disorder that leads people to stop breathing intermittently throughout the night. Loud snoring can also indicate sleep apnea. Waking with shortness of breath and gasping for air, or a sense of choking, may also be experienced by people with sleep apnea. There are three types of sleep apnea: obstructive sleep apnea is the most common and occurs when the muscles in the back of the throat relax and cause a partial closure of the airway. Central sleep apnea is characterized by unclear communication signals between the brain and the muscles required to regulate normal breathing patterns. Complex sleep apnea syndrome, also called treatment-emergent central sleep apnea, is a combination of both obstructive and central sleep apnea. Since the warning signs and symptoms of obstructive and central sleep apnea tend to overlap, complex sleep apnea is more difficult to identify. If you suspect you have sleep apnea, talk to your doctor; he or she can diagnose the condition with a series of screening questions and various sleep monitor applications. Anyone can have sleep apnea, including children, though certain factors increase the risk. Excess weight is a primary factor, though it is possible to have sleep apnea and not be overweight. People who are obese are four times more likely to have sleep apnea than people who are not. Other risk factors include: being male, alcohol and sedative use, nasal congestion, a large neck circumference, smoking, and family history. Common warning signs include:
- Loud snoring
- Breathing cessation
- Morning headache
- Sore throat
A solar eclipse occurs when the moon passes between the Sun and the Earth so that the Sun is fully or partially covered. This can only happen during a new moon, when the Sun and Moon are in conjunction as seen from the Earth. At least two and up to five solar eclipses can occur each year on Earth, with between zero and two of them being total eclipses. Total solar eclipses are nevertheless rare at any location because during each eclipse totality exists only along a narrow corridor in the relatively tiny area of the Moon's umbra. A total solar eclipse is a spectacular natural phenomenon and many people travel to remote locations to observe one. The 1999 total eclipse in Europe helped to increase public awareness of the phenomenon, as illustrated by the number of journeys made specifically to witness the 2005 annular eclipse and the 2006 total eclipse. The recent solar eclipse of January 26, 2009, was an annular eclipse (see below), while the solar eclipse of July 22, 2009 was a total solar eclipse. In ancient times, and in some cultures today, solar eclipses have been attributed to supernatural causes. Total solar eclipses can be frightening for people who are unaware of their astronomical explanation, as the Sun seems to disappear in the middle of the day and the sky darkens in a matter of minutes.
The topic Theory of the Earth is discussed in the following articles: ...have not changed during the history of the Earth were articulated by the 18th-century Scottish geologist James Hutton, who in 1785 presented his ideas—later published in two volumes as Theory of the Earth (1795)—at meetings of the Royal Society of Edinburgh. In this work Hutton showed that the Earth had a long history and that this history could be interpreted in terms of... Hutton summarized his views and provided ample observational evidence for his conclusions in a work published in two volumes, Theory of the Earth, in 1795. A third volume was partly finished at the time of Hutton’s death. geological history of the Earth TITLE: Earth sciences SECTION: Earth history according to Werner and James Hutton ...for the driving force of subterranean heat. Hutton viewed great angular unconformities separating sedimentary sequences as evidence for past cycles of sedimentation, uplift, and erosion. His Theory of the Earth, published as an essay in 1788, was expanded to a two-volume work in 1795. John Playfair, a professor of natural philosophy, defended Hutton against the counterattacks of the... ...this general concept was articulated, was probably the most important geologic concept developed out of rational scientific thought of the 18th century. The publication of Hutton’s two-volume Theory of the Earth in 1795 firmly established him as one of the founders of modern geologic thought.
August 26, 2004. A weekly feature provided by scientists at the Hawaiian Volcano Observatory.

Do Hawaiian eruptions pose a threat to aircraft? As we work to increase monitoring capabilities on our restless neighbor Mauna Loa Volcano, the question is worth considering.

The threat posed by ash injected into the atmosphere by explosive eruptions is so well known that seven centers have been established to monitor it worldwide. Jet engines run hot enough to melt any volcanic ash they ingest. Engine parts get coated and openings get clogged, resulting in the complete shutdown of the affected engine. This is of enough concern to commercial airlines that the ash-threat centers maintain vigil, detecting and tracking volcanic ash clouds in order to redirect air traffic. It remains one of the goals of the USGS to improve aircraft safety from the threat of volcanic ash.

Hawaiian eruptions are most often effusive and erupt lava, but they can also be explosive. Kīlauea had a series of ash-producing eruptions between 500 and 200 years ago and, most recently, in 1924. Anecdotal evidence suggests that Mauna Loa erupted ash in 1868. Obviously, explosive eruptions of Hawaiian volcanoes are much less frequent than lava-producing eruptions, but they do happen. Over the last several thousand years, Kīlauea has erupted explosively about as often as has Mount St. Helens. Therefore the probability of an ash-producing eruption in the Hawaiian Islands is low--about the same as it is for Mount St. Helens.

Explosive Hawaiian eruptions are easily capable of putting ash into the atmosphere at all elevations at which commercial aircraft fly. The ash produced by at least one of the Kīlauea events 200-500 years ago is believed to have reached altitudes of 9 km (30,000 feet) or more. One of the last eruptions in this series, in 1790, produced an ash column that probably topped 5 km (16,000 feet). Of course, these events were slightly before aircraft were perfected, so those eruptions posed no threat.
The most recent ash-producing eruption of Kīlauea, in 1924, deposited significant amounts of ash 40 km (25 miles) away. In the unlikely event that we do experience an explosive eruption, the threat to aircraft will be defined by how wind carries the ash and gas. Normal trade winds would carry most of this ash west of the Big Island, possibly affecting air traffic to the South Pacific and South America. If the ash column rises above about 6 km (20,000 feet), ash would get into the upper wind pattern and be carried to the northeast. Kona winds would also carry ash clouds to the north. Ash dispersal to the north could disrupt normal inter-island and mainland air traffic lanes.

In terms of everyday operations, explosive Hawaiian eruptions pose infrequent but significant threats to aircraft. Effusive eruptions, which are much more frequent in Hawai'i, also produce airborne particles, but at much lower densities than explosive eruptions. The only incident of aircraft problems due to Hawaiian eruptions was the crash of a Bell 206 helicopter in November 1992 in the crater of Puʻu ʻŌʻō. The helicopter, which was carrying a film crew from Paramount Pictures, flew through the volcanic gas plume. The plume is known to be highly corrosive and low in oxygen, and the helicopter's engine failed as a result of ingesting volcanic gas. The threats posed to aircraft by effusive eruptions are just as severe as those posed by explosive eruptions, but only for the area immediately around the vent or vents. If you were wondering who would pilot a helicopter through the plume, rest assured that no local pilot would agree to do it. The film company brought in a pilot from the mainland to get what they needed. The helicopter made a hard landing inside the crater of Puʻu ʻŌʻō, and all inside were eventually rescued. And--you guessed it--this event was made into a TV movie.

Eruptive activity at Puʻu ʻŌʻō continues.
Lava in the Banana flow, which breaks out of the Mother's Day lava tube a short distance above Pulama pali, has been visible between the pali and Paliuli for the past several weeks. The viewing during darkness has been good but distant. Eruptive activity in Puʻu ʻŌʻō's crater is weak, with sporadic minor spattering. No earthquakes were reported felt on the island during the week ending August 25.

Mauna Loa is not erupting. The summit region continues to inflate slowly. Seismic activity was notably high for the fifth week in a row, with 31 small earthquakes recorded in the summit area. The activity was lower than during the previous week, however, when 80 earthquakes were recorded. Most of the earthquakes are of long-period type and deep, 40 km (25 miles) or more.

Updated: August 31, 2004 (pnf)
W/C 19th March 2018 In English we have continued to look at Flood and to write a disaster story. The children have thoroughly enjoyed the illustrations in the book and using different skills to tell a disaster story. For Maths, we have been looking at time. We have looked at how many days are in a year, how many seconds are in a minute, and so on. We even learnt a little poem to help us remember how many days are in each month! In Science, we continued to look at plants. We set up an experiment to see how plants drink water and whether the flowers and celery would change colour if we put food colouring in the water. The children are very excited to find out if the experiment works or not. Year 3 had a very exciting trip this week too. We went to Longleat! The children had an amazing time looking at the different animals around the park and they had a great educational talk with some keepers. The children got to find out about different animals that live in the rainforest to link with our topic of Extreme Survival. The wonderful keepers even got out all the different animals for the children to hold, stroke and look at. The armadillo was definitely Armstrong class's favourite animal from the workshop. See below some pictures from our week.
Renal Tubular Acidosis

Your body's cells use chemical reactions to carry out tasks such as turning food into energy and repairing tissue. These chemical reactions generate acids. But too much acid in the blood (acidosis) can disturb many bodily functions. Healthy kidneys help maintain acid-base balance by excreting acids into the urine and returning bicarbonate (an alkaline, or base, substance) to the blood. This "reclaimed" bicarbonate neutralizes much of the acid that is created when food is broken down in the body. Renal tubular acidosis (RTA) is a disease that occurs when the kidneys fail to excrete acids into the urine, which causes a person's blood to remain too acidic. Without proper treatment, chronic acidity of the blood leads to growth retardation, kidney stones, bone disease, and progressive renal failure.

One researcher, pediatric neurologist Donald Lewis, has theorized that Charles Dickens may have been describing a child with RTA when he created the character of Tiny Tim in his famous story, "A Christmas Carol." Tiny Tim's small stature, malformed limbs, and periods of weakness are all possible consequences of the chemical imbalance caused by RTA. Among the evidence cited to support this theory is the fact that Tiny Tim's condition, while fatal in one scenario, is reversible when Scrooge pays for medical treatments, which in those times would likely have included sodium bicarbonate and sodium citrate, which are alkaline agents that would neutralize the acid in Tiny Tim's blood. Whether the literary diagnosis of Tiny Tim is correct or not, the good news is that medical treatment can indeed reverse the effects of RTA.

To diagnose RTA, your doctor will check the acid-base balance in samples of your blood and urine. If the blood is more acidic than it should be and the urine less acidic than it should be, RTA may be the reason, but additional information is needed first to rule out other causes.
If RTA is suspected, additional information about the sodium, potassium, and chloride levels in the urine and the potassium level in the blood will help identify which of the three types of RTA you have. In all cases, the first goal of therapy is to neutralize acid in the blood, but different treatments may be needed to address the different underlying causes of acidosis.

At one time, doctors divided RTA into four types.
- Type 1 is also called classic distal RTA. "Distal," which means distant, refers to the point in the urine-forming tube where the defect occurs. It is relatively distant from the point where fluid from the blood enters the tiny tube (or tubule) that collects fluid and wastes to form urine.
- Type 2 is called proximal RTA. The word "proximal," which means near, indicates that the defect is closer to the point where fluid and wastes from the blood enter the tubule.
- Type 3 is rarely used as a classification today because it is now thought to be a combination of type 1 and type 2.
- Type 4 RTA is caused by another defect in the distal tubule, but it is different from classic distal RTA and proximal RTA because it results in high levels of potassium in the blood instead of low levels. Either low potassium (hypokalemia) or high potassium (hyperkalemia) can be a problem because potassium is important in regulating heart rate.

This disorder may be inherited as a primary disorder or may be one symptom of a disease that affects many parts of the body. Researchers have now discovered the abnormal gene responsible for the inherited form.
More often, however, classic distal RTA is a complication of diseases that affect many organ systems (systemic diseases), like autoimmune disorders such as Sjögren's syndrome. Other diseases and conditions associated with distal RTA include hyperparathyroidism, a hereditary form of deafness, analgesic nephropathy, rejection of a transplanted kidney, renal medullary cystic disease, obstructive uropathy, and chronic urinary tract infections.

A major consequence of classic distal RTA is low blood-potassium. The level drops if the kidneys excrete potassium into urine instead of returning it to the blood supply. Since potassium helps regulate nerve and muscle health and heart rate, low levels can cause extreme weakness, cardiac arrhythmias, paralysis, and even death. Untreated distal RTA causes growth retardation in children and progressive renal and bone disease in adults. Restoring normal growth and preventing kidney stones, another common problem in this disorder, are the major goals of therapy. If acidosis is corrected with sodium bicarbonate or sodium citrate, then low blood-potassium, salt depletion, and calcium leakage into urine will be corrected. Alkali therapy also helps decrease the development of kidney stones. Potassium supplements are rarely needed except in infants, since alkali therapy prevents the kidney from excreting potassium into the urine.

This form of RTA occurs most frequently in children as part of a disorder called Fanconi's syndrome. The symptoms of Fanconi's syndrome include high levels of glucose, amino acids, citrate, and phosphate in the urine, as well as vitamin D deficiency and low blood-potassium. Proximal RTA can also result from inherited disorders that disrupt the body's normal breakdown and use of nutrients. Examples include the rare disease cystinosis (in which cystine crystals are deposited in bones and other tissues), hereditary fructose intolerance, and Wilson's disease. Proximal RTA also occurs in patients treated with ifosfamide, a drug used in chemotherapy.
A few older drugs, such as acetazolamide or outdated tetracycline, can also cause proximal RTA. In adults, proximal RTA may complicate diseases like multiple myeloma, or it may occur in people who experience chronic rejection of a transplanted kidney.

When possible, identifying and correcting the underlying causes are important steps in treating the acquired forms of proximal RTA. The diagnosis is based on the chemical analysis of blood and urine samples. Children with this disorder would likely receive large doses of oral alkali, such as sodium bicarbonate or potassium citrate, to treat acidosis and prevent bone disorders, kidney stones, and growth failure. Correcting acidosis and low potassium levels restores normal growth patterns, allowing bone to mature while preventing further renal disease. Vitamin D supplements may also be needed to help prevent bone problems.

This form of RTA is most often referred to as type 4. It occurs when blood levels of the hormone aldosterone are low or when the kidneys do not respond to it. Aldosterone directs the kidneys to regulate the levels of sodium, potassium, and chloride in the blood. Type 4 RTA is distinguished by a high blood-potassium level. Hyperkalemic distal RTA may result from sickle cell disease, urinary tract obstruction, lupus, or amyloidosis.

Aldosterone's action may be impeded by drugs, including
- diuretics used to treat congestive heart failure, such as spironolactone or eplerenone
- blood pressure drugs called angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs)
- the antibiotic trimethoprim
- an agent called heparin that keeps blood from clotting
- the antibiotic pentamidine
- a class of painkillers called nonsteroidal anti-inflammatory drugs
- some immunosuppressive drugs used to prevent transplant rejection

For people who produce aldosterone but cannot use it, researchers have now identified the genetic basis for their body's resistance to the hormone.
To treat type 4 RTA successfully, patients may require alkaline agents to correct acidosis as well as medication to lower the potassium in their blood. If treated early, most people with RTA will not develop permanent kidney failure. Therefore, the goal is early recognition and adequate therapy, which will need to be maintained and monitored throughout the patient's lifetime.

The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) conducts and supports research into many kinds of kidney disease, including renal tubular acidosis. NIDDK-supported researchers are exploring the genetic and molecular mechanisms that control acid-base regulation in the kidney. These studies will point the way to more effective treatments for RTA.

- American Association of Kidney Patients
- American Kidney Fund
- National Kidney Foundation
A hernia is a condition in which an organ pushes through an opening in the muscle or tissue that normally holds it in place. There are four types of hernia that typically occur in the human body:1

• Inguinal Hernia
This occurs when the intestines push through the inguinal canal, a tubular passage found near the groin area. The canal helps hold up the testicles in men, and the uterus in women.

• Umbilical Hernia
When the intestines pass through the abdominal wall, a bulge near the belly button occurs. Children younger than 6 months old are the largest demographic for this condition, although it usually goes away by the time they're a year old.

• Incisional Hernia
This type of hernia can emerge after a recent abdominal surgery. The intestines may push through the incision scar, or the weakened tissue surrounding it.

• Hiatal Hernia
A hiatal hernia occurs when a part of your stomach protrudes upward through the hiatus, which is an opening in your diaphragm.

This guide will focus on hiatal hernia. The condition can occur in anyone, from unborn children to those approaching middle and senior age. Its symptoms should not be ignored, because they can be very uncomfortable and potentially dangerous in the long run.2,3

The Causes of Hiatal Hernia
• Injury or trauma: A strong blow to the central chest area can damage your digestive organs, causing the diaphragm to loosen and allowing the stomach to move up.
• Birth defect: It's possible that some people are born with a large hiatus, which will need to be addressed to prevent the hernia from worsening.
• Constant pressure: Certain actions that steadily exert pressure on your chest, such as vomiting, coughing and straining during bowel movements, can cause a hiatal hernia.

Symptoms of Hiatal Hernia to Watch Out For
Most of the time, hiatal hernias are minor and asymptomatic, and you can live a full life without needing medical treatment at all.
However, those who do develop complications typically display symptoms of gastro-esophageal reflux disease (GERD).5 It is a condition where stomach acid rises to your esophagus, causing symptoms such as:6,7 • Heartburn: An uncomfortable, burning feeling in the chest that usually occurs after eating due to the stomach acid climbing up the esophagus. • Sour taste: There’s a chance the stomach acid can reach the back of your tongue, causing you to experience a very sour taste. Learn All About Hiatal Hernia in This Guide Even if your hiatal hernia is asymptomatic, there’s still a chance you can develop complications if you don’t follow certain preventive measures. This guide will show you massages, dietary practices and other methods to help you deal with hiatal hernia.
Act
Plays are often divided into 'acts'. These are the major divisions within a play. Plays generally have between one and five acts. A very short play is sometimes referred to as a 'one-acter'. Acts are typically made up of different scenes.

Apron
This is the name given to the part of the stage which juts out into the auditorium. In a proscenium arch theatre this would tend to be the part of the stage which is on the auditorium side of the curtain.

Audition
When a director is casting a show, s/he often has a formal audition so that s/he can get an idea of the type of talent that is available for the production. Sometimes the actors are required to learn a monologue from the play or are teamed up with another actor to perform a scene. Occasionally directors ask performers to prepare a monologue of their choice. It is a good idea to have a few monologues prepared which can demonstrate that you can work in a variety of dramatic styles (eg a Shakespearean and a contemporary piece). If you need help selecting a monologue there are many books available to help you.

Auditorium
The space within the theatre where the audience sits (or stands) for the duration of a performance. Sometimes it is referred to as the 'house'.

Backstage
This refers to the area in the theatre which is unseen by the audience. It includes the space in the wings as well as the dressing rooms.

Blackout
This term is used to describe a moment during a performance when all of the stage lights are turned off.

Blacks
This term is often used to describe the 'costume' worn by the technical crew during a performance. Black clothes are worn because this is the colour that will be least obtrusive during a performance and it allows the stagehands to move set pieces on stage without distracting the audience.

Blocking
Blocking is usually a major part of the rehearsal process. It refers to the process of arranging the moves of the actors on stage. Often the stage manager will write down the blocking in the prompt book.

Box set
This is often used in the production of naturalistic plays.
It describes a set which is a 'realistic' room with three walls, and it is as though the fourth wall has been removed so that audience members feel as though they are observing real action.

Break a leg
There is a superstition which suggests that it is bad luck to wish an actor "good luck" prior to a performance so the term "break a leg" is commonly used in its place.

Bump-in
This is the process of preparing the theatre for a particular production. It includes building the set, introducing props and costumes, and rigging the lights.

Bump-out
This is the process of dismantling the set at the conclusion of a production. It includes the removal of all set pieces, costumes, and lighting.

Call
A "call" is the name given to the time that a performer is required to be at the theatre. Actors are often told that their "call" for a rehearsal or performance is at a particular time. For example you may be told that "Tomorrow night's call is 7pm".

Cast
The performing members of a theatre troupe are referred to as the 'cast'.

Casting
This is the name given to the process of selecting actors to play the different roles in a play. Sometimes this requires actors to participate in an audition.

Company
This refers to the cast, the crew, and other people who are connected with a show.

Cue
The directive given to technical people to do something during the performance. This includes sound and lighting cues. Sometimes this is a verbal instruction given by the Stage Manager. For example, "Bring up the house lights". But other times it might be a visual cue taken from the stage action. For example, "When the actor crosses the stage play the telephone sound effect".

Cue to cue
This is the process that is often adopted during a technical run. It means that most of the dialogue and action are omitted and the cast jumps between technical cues and entrances/exits so that the lighting and sound cues may be perfected. This is sometimes referred to as "topping and tailing".
Curtain call
When a performance has finished, the actors often acknowledge the audience's applause by coming on to the stage and bowing.

Dialogue
This is the term used to describe the parts of a play text when there is more than one character talking. Conversations between characters are referred to as dialogue. [This is in contrast to a monologue].

Downstage
This refers to the area on the stage that is closest to the audience.

Dress rehearsal
This is a full run of a performance. It is often the final rehearsal prior to opening night. All elements of the performance (blocking, lighting, music, etc) are presented as they are meant to be in the final performance.

Flat
A flat is a versatile set piece. It is usually a rectangular frame that is covered with fabric or plywood. Most theatre companies have a range of stock pieces that are used in many different productions. For this reason flats are often black so that they can be used in different plays. Sometimes flats are painted with background images for a particular production.

Fourth wall
This refers to the idea that an imaginary fourth wall has been removed from the set of a play so that the audience can watch the action. This concept is often used with regard to naturalism.

Front of House
The front of house is generally regarded as any areas that are accessible to audience members. This includes the foyer, the bar, the box office, and the auditorium.

Interval
This is a short break in the performance. A play can have multiple intervals if there are large set changes that are required. However, there is usually only one interval of 15 to 20 minutes. It is typically positioned mid-way through the performance.

Mark out
This is usually one of the jobs for the Stage Manager. S/he uses masking tape on the rehearsal room floor to indicate the plan of the flats, set pieces, and props so that the actors can get a feel for the space in which they will be performing.

Matinee
A matinee show is usually one that takes place in the morning or afternoon rather than in the evening.
It is derived from a Latin word meaning "of the morning", but in Australia it's more commonly an afternoon performance.

Monologue
A monologue is a long speech by a single character that is uninterrupted by the other characters on stage. If the character is alone on stage when presenting a monologue, the speech is called a soliloquy.

Preset
This is the term that is used to describe the process of putting all props, lighting, and set pieces in their correct location before the start of the play.

Props
This is a term in its own right but it comes from the word 'properties'. It refers to any items that cannot really be considered to be scenery or costumes. For example, an actor may need to bring a gun on to the stage in a particular scene. Actors are usually responsible for their own props although sometimes the stage manager keeps them safe in a central backstage location.

Proscenium arch
This is the name given to the 'frame' which goes around the performance space in traditional nineteenth-century style theatres. In some theatres it actually looks like a gilded picture frame. The audience looks through this frame into the dramatic world being created on stage.

Rake
This is a performance space that slopes up towards the rear of the stage. This was a common feature of theatres in the past but nowadays it is more common for stages to be flat and the auditorium to be raked. This improves the sightlines from most seats in the auditorium.

Run
This jargon term is used in two different ways in the theatre. Firstly, it describes the length of the season of a particular production. For example "Our production of Hamlet runs for three weeks". Secondly, it describes a rehearsal where a part of the play is practiced. For example "Tomorrow's rehearsal will start with a run of Act IV of Hamlet".

Set
When this term is used as a VERB, it refers to the process of preparing the stage for the start of a production. For example, "I have set the props in position for the start of Act I".
When it is used as a NOUN, it refers to the complete stage setting for the production or for a particular scene. For example, "The set for Act I is a bedroom".

SM
This is the accepted abbreviation for the Stage Manager.

Stage Left/Stage Right
These are the most common stage directions used in the theatre. They always refer to the stage from the actor's perspective. That is, when an actor stands on stage and looks into the audience, it is his or her left and right.

Tech
This is an abbreviation for the "technical rehearsal". People will say things like "the tech went till 3am". The term can also be used to describe a member of the production crew. For example, "The lighting tech will preset the lighting state for Act I by 7:45pm". [Sometimes this person is referred to as "the techie"]

Technical Rehearsal/Tech Run
This is a rehearsal that is specifically focussed on the technical aspects of the production including the lighting, the set, and the sound effects or music. Sometimes the actors are required to wear costumes so that they can practice fast costume changes if necessary.

Theatre in the round
This is a form of theatre where the audience surrounds the performance space.

Upstage
This refers to the part of the stage that is the furthest distance from the audience. It also describes the process of "upstaging" another actor, which is when one actor moves around to pull the audience's focus from the primary action of the scene.
- A new study from the Department of Pathology at Case Western Reserve University School of Medicine shows that the infectious version of prion proteins, the main culprits behind the human form of mad cow disease or variant Creutzfeldt-Jakob Disease (vCJD), are not destroyed by digestive enzymes found in the stomach. Furthermore, the study finds that the infectious prion proteins, also known as prions, cross the normally stringent intestinal barrier by riding piggyback on ferritin, a protein normally absorbed by the intestine and abundantly present in a typical meat dish. The study appears in the Dec. 15 issue of the Journal of Neuroscience. - Prions are a modified form of normal proteins, the prion proteins, which become infectious and accumulate in the nervous system causing fatal neurodegenerative disease. Variant CJD results from eating contaminated beef products from cattle infected with mad cow disease. To date, 155 cases of confirmed and probable vCJD in the world have been reported, and it is unclear how many others are carrying the infection. - According to the study's senior author Neena Singh, M.D., Ph.D., associate professor of pathology, little is known about the mechanism by which prions cross the human intestinal barrier, which can be a particularly difficult obstacle to cross. - "The mad cow epidemic is far from over, and the continuous spread of a similar prion disease in the deer and elk population in the U.S. raises serious public health concerns," said Singh. "It is therefore essential to understand how this disease is transmitted from one species to another, especially in the case of humans where the infectious prions survive through stages of cooking and digestion." - Using brain tissues infected with the spontaneously occurring version of CJD which is also caused by prions, the researchers simulated the human digestive process by subjecting the tissue to sequential treatment with digestive fluids as found in the human intestinal tract. 
They then studied how the surviving prions are absorbed by the intestine using a cell model. The prions were linked with ferritin, a cellular protein that normally binds excess cellular iron to store it in a soluble, non-toxic form within the cell. - "Since ferritin shares considerable similarity between species, it may facilitate the uptake of prions from distant species by the human intestine," said Singh. "This important finding provides insight into the cellular mechanisms by which infectious prions ingested with contaminated food cross the species barrier, and provides the possibility of devising practical methods for blocking its uptake," she said. "If we can develop a method of blocking the binding of prions to ferritin, we may be able to prevent animals from getting this disease through feed, and stop the transmission to humans." - Currently, Singh's group is checking whether prions from distant species such as deer and elk can cross the human intestinal barrier. - The study was supported by the National Institutes of Health.
Bayes' Theorem is a simple mathematical formula used for calculating conditional probabilities. It figures prominently in subjectivist or Bayesian approaches to epistemology, statistics, and inductive logic. Subjectivists, who maintain that rational belief is governed by the laws of probability, lean heavily on conditional probabilities in their theories of evidence and their models of empirical learning. Bayes' Theorem is central to these enterprises both because it simplifies the calculation of conditional probabilities and because it clarifies significant features of the subjectivist position. Indeed, the Theorem's central insight — that a hypothesis is confirmed by any body of data that its truth renders probable — is the cornerstone of all subjectivist methodology. - 1. Conditional Probabilities and Bayes' Theorem - 2. Special Forms of Bayes' Theorem - 3. The Role of Bayes' Theorem in Subjectivist Accounts of Evidence - 4. The Role of Bayes' Theorem in Subjectivist Models of Learning - Academic Tools - Other Internet Resources - Related Entries The probability of a hypothesis H conditional on a given body of data E is the ratio of the unconditional probability of the conjunction of the hypothesis with the data to the unconditional probability of the data alone. (1.1) Definition. The probability of H conditional on E is defined as PE(H) = P(H & E)/P(E), provided that both terms of this ratio exist and P(E) > 0. To illustrate, suppose J. Doe is a randomly chosen American who was alive on January 1, 2000. According to the United States Centers for Disease Control, roughly 2.4 million of the 275 million Americans alive on that date died during the 2000 calendar year. Among the approximately 16.6 million senior citizens (age 75 or greater) about 1.36 million died. The unconditional probability of the hypothesis that our J. Doe died during 2000, H, is just the population-wide mortality rate P(H) = 2.4M/275M = 0.00873. To find the probability of J.
Doe's death conditional on the information, E, that he or she was a senior citizen, we divide the probability that he or she was a senior who died, P(H & E) = 1.36M/275M = 0.00495, by the probability that he or she was a senior citizen, P(E) = 16.6M/275M = 0.06036. Thus, the probability of J. Doe's death given that he or she was a senior is PE(H) = P(H & E)/P(E) = 0.00495/0.06036 = 0.082. Notice how the size of the total population factors out of this equation, so that PE(H) is just the proportion of seniors who died. One should contrast this quantity, which gives the mortality rate among senior citizens, with the "inverse" probability of E conditional on H, PH(E) = P(H & E)/P(H) = 0.00495/0.00873 = 0.57, which is the proportion of deaths in the total population that occurred among seniors. Here are some straightforward consequences of (1.1): - Probability. PE is a probability function. - Logical Consequence. If E entails H, then PE(H) = 1. - Preservation of Certainties. If P(H) = 1, then PE(H) = 1. - Mixing. P(H) = P(E)PE(H) + P(~E)P~E(H). The most important fact about conditional probabilities is undoubtedly Bayes' Theorem, whose significance was first appreciated by the British cleric Thomas Bayes in his posthumously published masterwork, "An Essay Toward Solving a Problem in the Doctrine of Chances" (Bayes 1764). Bayes' Theorem relates the "direct" probability of a hypothesis conditional on a given body of data, PE(H), to the "inverse" probability of the data conditional on the hypothesis, PH(E). (1.2) Bayes' Theorem. PE(H) = [P(H)/P(E)] PH(E) In an unfortunate, but now unavoidable, choice of terminology, statisticians refer to the inverse probability PH(E) as the "likelihood" of H on E. It expresses the degree to which the hypothesis predicts the data given the background information codified in the probability P. In the example discussed above, the condition that J. Doe died during 2000 is a fairly strong predictor of senior citizenship. 
Indeed, the equation PH(E) = 0.57 tells us that 57% of the total deaths occurred among seniors that year. Bayes' theorem lets us use this information to compute the "direct" probability of J. Doe dying given that he or she was a senior citizen. We do this by multiplying the "prediction term" PH(E) by the ratio of the total number of deaths in the population to the number of senior citizens in the population, P(H)/P(E) = 2.4M/16.6M = 0.144. The result is PE(H) = 0.57 × 0.144 = 0.082, just as expected. Though a mathematical triviality, Bayes' Theorem is of great value in calculating conditional probabilities because inverse probabilities are typically both easier to ascertain and less subjective than direct probabilities. People with different views about the unconditional probabilities of E and H often disagree about E's value as an indicator of H. Even so, they can agree about the degree to which the hypothesis predicts the data if they know any of the following intersubjectively available facts: (a) E's objective probability given H, (b) the frequency with which events like E will occur if H is true, or (c) the fact that H logically entails E. Scientists often design experiments so that likelihoods can be known in one of these "objective" ways. Bayes' Theorem then ensures that any dispute about the significance of the experimental results can be traced to "subjective" disagreements about the unconditional probabilities of H and E. When both PH(E) and P~H(E) are known an experimenter need not even know E's probability to determine a value for PE(H) using Bayes' Theorem. (1.3) Bayes' Theorem (2nd form). PE(H) = P(H)PH(E) / [P(H)PH(E) + P(~H)P~H(E)] In this guise Bayes' theorem is particularly useful for inferring causes from their effects since it is often fairly easy to discern the probability of an effect given the presence or absence of a putative cause. 
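The J. Doe arithmetic above is easy to check directly from the quoted population counts; a minimal sketch in Python (figures in millions, as in the example):

```python
# Population counts from the example, in millions.
pop = 275.0            # Americans alive on January 1, 2000
deaths = 2.4           # deaths during 2000
seniors = 16.6         # Americans aged 75 or greater
senior_deaths = 1.36   # seniors who died during 2000

p_h = deaths / pop               # P(H), the population-wide mortality rate
p_e = seniors / pop              # P(E), the probability of senior status
p_h_and_e = senior_deaths / pop  # P(H & E)

p_h_given_e = p_h_and_e / p_e    # the "direct" probability P_E(H)
p_e_given_h = p_h_and_e / p_h    # the "inverse" probability P_H(E)

# Bayes' Theorem (1.2): P_E(H) = [P(H)/P(E)] * P_H(E)
assert abs(p_h_given_e - (p_h / p_e) * p_e_given_h) < 1e-12

print(round(p_h, 5), round(p_e, 5))                  # 0.00873 0.06036
print(round(p_h_given_e, 3), round(p_e_given_h, 2))  # 0.082 0.57
```

Note that the identity holds exactly: the factor P(H)/P(E) simply rescales the inverse probability back to the direct one.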
For instance, physicians often screen for diseases of known prevalence using diagnostic tests of recognized sensitivity and specificity. The sensitivity of a test, its "true positive" rate, is the fraction of times that patients with the disease test positive for it. The test's specificity, its "true negative" rate, is the proportion of healthy patients who test negative. If we let H be the event of a given patient having the disease, and E be the event of her testing positive for it, then the test's sensitivity and specificity are given by the likelihoods PH(E) and P~H(~E), respectively, and the "baseline" prevalence of the disease in the population is P(H). Given these inputs about the effects of the disease on the outcome of the test, one can use (1.3) to determine the probability of disease given a positive test. For a more detailed illustration of this process, see Example 1 in the Supplementary Document "Examples, Tables, and Proof Sketches". Bayes' Theorem can be expressed in a variety of forms that are useful for different purposes. One version employs what Rudolf Carnap called the relevance quotient or probability ratio (Carnap 1962, 466). This is the factor PR(H, E) = PE(H)/P(H) by which H's unconditional probability must be multiplied to get its probability conditional on E. Bayes' Theorem is equivalent to a simple symmetry principle for probability ratios. (1.4) Probability Ratio Rule. PR(H, E) = PR(E, H) The term on the right provides one measure of the degree to which H predicts E.
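Form (1.3) can be sketched for such a screening test; the prevalence, sensitivity, and specificity below are illustrative placeholders, not figures from the article:

```python
# Hypothetical screening test (all three inputs are made-up values).
prevalence = 0.01    # P(H): baseline rate of the disease
sensitivity = 0.95   # P_H(E): true-positive rate
specificity = 0.90   # P_~H(~E): true-negative rate

false_positive = 1.0 - specificity   # P_~H(E)

# Bayes' Theorem, 2nd form (1.3):
# P_E(H) = P(H)P_H(E) / [P(H)P_H(E) + P(~H)P_~H(E)]
posterior = (prevalence * sensitivity) / (
    prevalence * sensitivity + (1.0 - prevalence) * false_positive)

print(round(posterior, 3))  # 0.088
```

Even with a fairly accurate test, a positive result leaves the disease improbable at this low prevalence; the second form makes the influence of the baseline rate P(H) explicit.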
If we think of P(E) as expressing the "baseline" predictability of E given the background information codified in P, and of PH(E) as E's predictability when H is added to this background, then PR(E, H) captures the degree to which knowing H makes E more or less predictable relative to the baseline: PR(E, H) = 0 means that H categorically predicts ~E; PR(E, H) = 1 means that adding H does not alter the baseline prediction at all; PR(E, H) = 1/P(E) means that H categorically predicts E. Since P(E) = PT(E) where T is any truth of logic, we can think of (1.4) as telling us that The probability of a hypothesis conditional on a body of data is equal to the unconditional probability of the hypothesis multiplied by the degree to which the hypothesis surpasses a tautology as a predictor of the data. In our J. Doe example, PR(H, E) is obtained by comparing the predictability of senior status given that J. Doe died in 2000 to its predictability given no information whatever about his or her mortality. Dividing the former "prediction term" by the latter yields PR(H, E) = PH(E)/P(E) = 0.57/0.06036 = 9.44. Thus, as a predictor of senior status in 2000, knowing that J. Doe died is more than nine times better than not knowing whether he or she lived or died. Another useful form of Bayes' Theorem is the Odds Rule. In the jargon of bookies, the "odds" of a hypothesis is its probability divided by the probability of its negation: O(H) = P(H)/P(~H). So, for example, a racehorse whose odds of winning a particular race are 7-to-5 has a 7/12 chance of winning and a 5/12 chance of losing. To understand the difference between odds and probabilities it helps to think of probabilities as fractions of the distance between the probability of a contradiction and that of a tautology, so that P(H) = p means that H is p times as likely to be true as a tautology.
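Both computations in this passage can be replayed in a few lines, using the article's rounded figures:

```python
# Probability ratio for the J. Doe example, from the rounded figures above.
p_e_given_h = 0.57   # P_H(E)
p_e = 0.06036        # P(E)
pr = p_e_given_h / p_e       # PR(H, E) = PR(E, H) = P_H(E)/P(E)
assert round(pr, 2) == 9.44  # "more than nine times better"

# Converting bookmakers' odds to probabilities: 7-to-5 on winning.
win, lose = 7, 5
p_win = win / (win + lose)    # P(H) = 7/12
odds = p_win / (1.0 - p_win)  # O(H) = P(H)/P(~H) = 7/5
assert abs(odds - win / lose) < 1e-12
```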
In contrast, writing O(H) = [P(H) − P(F)]/[P(T) − P(H)] (where F is some logical contradiction) makes it clear that O(H) expresses this same quantity as the ratio of the amount by which H's probability exceeds that of a contradiction to the amount by which it is exceeded by that of a tautology. Thus, the difference between "probability talk" and "odds talk" corresponds to the difference between saying "we are two thirds of the way there" and saying "we have gone twice as far as we have yet to go." The analogue of the probability ratio is the odds ratio OR(H, E) = OE(H)/O(H), the factor by which H's unconditional odds must be multiplied to obtain its odds conditional on E. Bayes' Theorem is equivalent to the following fact about odds ratios: (1.5) Odds Ratio Rule. OR(H, E) = PH(E)/P~H(E) Notice the similarity between (1.4) and (1.5). While each employs a different way of expressing probabilities, each shows how its expression for H's probability conditional on E can be obtained by multiplying its expression for H's unconditional probability by a factor involving inverse probabilities. The quantity LR(H, E) = PH(E)/P~H(E) that appears in (1.5) is the likelihood ratio of H given E. In testing situations like the one described in Example 1, the likelihood ratio is the test's true positive rate divided by its false positive rate: LR = sensitivity/(1 − specificity). As with the probability ratio, we can construe the likelihood ratio as a measure of the degree to which H predicts E. Instead of comparing E's probability given H with its unconditional probability, however, we now compare it with its probability conditional on ~H. LR(H, E) is thus the degree to which the hypothesis surpasses its negation as a predictor of the data. Once more, Bayes' Theorem tells us how to factor conditional probabilities into unconditional probabilities and measures of predictive power. 
The odds of a hypothesis conditional on a body of data is equal to the unconditional odds of the hypothesis multiplied by the degree to which it surpasses its negation as a predictor of the data. In our running J. Doe example, LR(H, E) is obtained by comparing the predictability of senior status given that J. Doe died in 2000 to its predictability given that he or she lived out the year. Dividing the former "prediction term" by the latter yields LR(H, E) = PH(E)/P~H(E) = 0.567/0.0559 = 10.14. Thus, as a predictor of senior status in 2000, knowing that J. Doe died is more than ten times better than knowing that he or she lived. The similarities between the "probability ratio" and "odds ratio" versions of Bayes' Theorem can be developed further if we express H's probability as a multiple of the probability of some other hypothesis H* using the relative probability function B(H, H*) = P(H)/P(H*). It should be clear that B generalizes both P and O since P(H) = B(H, T) and O(H) = B(H, ~H). By comparing the conditional and unconditional values of B we obtain the Bayes' Factor: BR(H, H*; E) = BE(H, H*)/B(H, H*) = [PE(H)/PE(H*)]/[P(H)/P(H*)]. We can also generalize the likelihood ratio by setting LR(H, H*; E) = PH(E)/PH*(E). This compares E's predictability on the basis of H with its predictability on the basis of H*. We can use these two quantities to formulate an even more general form of Bayes' Theorem. (1.6) Bayes' Theorem (General Form) BR(H, H*; E) = LR(H, H*; E) The message of (1.6) is this: The ratio of probabilities for two hypotheses conditional on a body of data is equal to the ratio of their unconditional probabilities multiplied by the degree to which the first hypothesis surpasses the second as a predictor of the data. The various versions of Bayes' Theorem differ only with respect to the functions used to express unconditional probabilities (P(H), O(H), B(H)) and in the likelihood term used to represent predictive power (PR(E, H), LR(H, E), LR(H, H*; E)).
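The likelihood-ratio figure can be recomputed from the raw population counts, along with a check that the Odds Ratio Rule (1.5) really does reduce to the likelihood ratio; a sketch:

```python
pop, deaths = 275.0, 2.4             # millions
seniors, senior_deaths = 16.6, 1.36  # millions

p_e_given_h = senior_deaths / deaths                          # P_H(E)
p_e_given_not_h = (seniors - senior_deaths) / (pop - deaths)  # P_~H(E)
lr = p_e_given_h / p_e_given_not_h   # LR(H, E), roughly ten

# Odds Ratio Rule (1.5): O_E(H)/O(H) = P_H(E)/P_~H(E)
p_h_given_e = senior_deaths / seniors
p_h = deaths / pop
odds_ratio = (p_h_given_e / (1.0 - p_h_given_e)) / (p_h / (1.0 - p_h))
assert abs(odds_ratio - lr) < 1e-9

print(round(lr, 1))  # 10.1
```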
In each case, though, the underlying message is the same: conditional probability = unconditional probability × predictive power. (1.2) – (1.6) are multiplicative forms of Bayes' Theorem that use division to compare the disparities between unconditional and conditional probabilities. Sometimes these comparisons are best expressed additively by replacing ratios with differences. The following table gives the additive analogue of each ratio measure.
|Multiplicative measure||Additive analogue|
|PR(H, E) = PE(H)/P(H)||PD(H, E) = PE(H) − P(H)|
|OR(H, E) = OE(H)/O(H)||OD(H, E) = OE(H) − O(H)|
|BR(H, H*; E) = BE(H, H*)/B(H, H*)||BD(H, H*; E) = BE(H, H*) − B(H, H*)|
We can use Bayes' theorem to obtain additive analogues of (1.4) – (1.6), which are here displayed along with their multiplicative counterparts:
|(1.4)||PR(H, E) = PR(E, H) = PH(E)/P(E)||PD(H, E) = P(H) [PR(E, H) − 1]|
|(1.5)||OR(H, E) = LR(H, E) = PH(E)/P~H(E)||OD(H, E) = O(H) [OR(H, E) − 1]|
|(1.6)||BR(H, H*; E) = LR(H, H*; E) = PH(E)/PH*(E)||BD(H, H*; E) = B(H, H*) [BR(H, H*; E) − 1]|
Notice how each additive measure is obtained by multiplying H's unconditional probability, expressed on the relevant scale, P, O or B, by the associated multiplicative measure diminished by 1. While the results of this section are useful to anyone who employs the probability calculus, they have a special relevance for subjectivist or "Bayesian" approaches to statistics, epistemology, and inductive inference. Subjectivists lean heavily on conditional probabilities in their theory of evidential support and their account of empirical learning. Given that Bayes' Theorem is the single most important fact about conditional probabilities, it is not at all surprising that it should figure prominently in subjectivist methodology. Subjectivists maintain that beliefs come in varying gradations of strength, and that an ideally rational person's graded beliefs can be represented by a subjective probability function P.
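The additive forms PD(H, E) = P(H)[PR(E, H) − 1] and OD(H, E) = O(H)[OR(H, E) − 1] can be verified numerically; a sketch using the J. Doe figures (any coherent probability assignment would serve equally well):

```python
pop, deaths, seniors, senior_deaths = 275.0, 2.4, 16.6, 1.36  # millions

p_h, p_e = deaths / pop, seniors / pop
p_h_given_e = senior_deaths / seniors
p_e_given_h = senior_deaths / deaths

# Additive form of (1.4): PD(H, E) = P(H) [PR(E, H) - 1]
pd = p_h_given_e - p_h
pr_eh = p_e_given_h / p_e
assert abs(pd - p_h * (pr_eh - 1.0)) < 1e-12

# Additive form of (1.5): OD(H, E) = O(H) [OR(H, E) - 1]
o_h = p_h / (1.0 - p_h)
o_h_given_e = p_h_given_e / (1.0 - p_h_given_e)
od = o_h_given_e - o_h
assert abs(od - o_h * (o_h_given_e / o_h - 1.0)) < 1e-12
```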
For each hypothesis H about which the person has a firm opinion, P(H) measures her level of confidence (or "degree of belief") in H's truth. Conditional beliefs are represented by conditional probabilities, so that PE(H) measures the person's confidence in H on the supposition that E is a fact. One of the most influential features of the subjectivist program is its account of evidential support. The guiding ideas of this Bayesian confirmation theory are these: - Confirmational Relativity. Evidential relationships must be relativized to individuals and their degrees of belief. - Evidence Proportionism. A rational believer will proportion her confidence in a hypothesis H to her total evidence for H, so that her subjective probability for H reflects the overall balance of her reasons for or against its truth. - Incremental Confirmation. A body of data provides incremental evidence for H to the extent that conditioning on the data raises H's probability. The first principle says that statements about evidentiary relationships always make implicit reference to people and their degrees of belief, so that, e.g., "E is evidence for H" should really be read as "E is evidence for H relative to the information encoded in the subjective probability P". According to evidence proportionism, a subject's level of confidence in H should vary directly with the strength of her evidence in favor of H's truth. Likewise, her level of confidence in H conditional on E should vary directly with the strength of her evidence for H's truth when this evidence is augmented by the supposition of E. It is a matter of some delicacy to say precisely what constitutes a person's evidence, and to explain how her beliefs should be "proportioned" to it. Nevertheless, the idea that incremental evidence is reflected in disparities between conditional and unconditional probabilities only makes sense if differences in subjective probability mirror differences in total evidence. 
An item of data provides a subject with incremental evidence for or against a hypothesis to the extent that receiving the data increases or decreases her total evidence for the truth of the hypothesis. When probabilities measure total evidence, the increment of evidence that E provides for H is a matter of the disparity between PE(H) and P(H). When odds are used it is a matter of the disparity between OE(H) and O(H). See Example 2 in the supplementary document "Examples, Tables, and Proof Sketches", which illustrates the difference between total and incremental evidence, and explains the "baserate fallacy" that can result from failing to properly distinguish the two. It will be useful to distinguish two subsidiary concepts related to total evidence. - The net evidence in favor of H is the degree to which a subject's total evidence in favor of H exceeds her total evidence in favor of ~H. - The balance of total evidence for H over H* is the degree to which a subject's total evidence in favor of H exceeds her total evidence in favor of H*. The precise content of these notions will depend on how total evidence is understood and measured, and on how disparities in total evidence are characterized. For example, if total evidence is given in terms of probabilities and disparities are treated as ratios, then the net evidence for H is P(H)/P(~H). If total evidence is expressed in terms of odds and differences are used to express disparities, then the net evidence for H will be O(H) − O(~H). Readers may consult Table 3 (in the supplementary document) for a complete list of the possibilities. As these remarks make clear, one can interpret O(H) either as a measure of net evidence or as a measure of total evidence. To see the difference, imagine that 750 red balls and 250 black balls have been drawn at random and with replacement from an urn known to contain 10,000 red or black balls. 
Assuming that this is our only evidence about the urn's contents, it is reasonable to set P(Red) = 0.75 and P(~Red) = 0.25. On a probability-as-total-evidence reading, these assignments reflect both the fact that we have a great deal of evidence in favor of Red (namely, that 750 of 1,000 draws were red) and the fact that we also have some evidence against it (namely, that 250 of the draws were black). The net evidence for Red is then the disparity between our total evidence for Red and our total evidence against Red. This can be expressed multiplicatively by saying that we have seen three times as many red draws as black draws, which is just to say that O(Red) = 3. Alternatively, we can use O(Red) as a measure of the total evidence by taking our evidence for Red to be the ratio of red to black draws, rather than the total number of red draws, and our evidence for ~Red to be the ratio of black balls to red balls, rather than the total number of black draws. While the decision whether to use O as a measure of total or net evidence makes little difference to questions about the absolute amount of total evidence for a hypothesis (since O(H) is an increasing function of P(H)), it can make a major difference when one is considering the incremental changes in total evidence brought about by conditioning on new information. Philosophers interested in characterizing correct patterns of inductive reasoning and in providing "rational reconstructions" of scientific methodology have tended to focus on incremental evidence as crucial to their enterprise. When scientists (or ordinary folk) say that E supports or confirms H what they generally mean is that learning of E's truth will increase the total amount of evidence for H's truth. Since subjectivists characterize total evidence in terms of subjective probabilities or odds, they analyze incremental evidence in terms of changes in these quantities.
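The urn numbers make the probability/odds bookkeeping easy to state; a minimal sketch:

```python
red_draws, black_draws = 750, 250

p_red = red_draws / (red_draws + black_draws)  # P(Red) = 0.75
o_red = p_red / (1.0 - p_red)                  # O(Red) = 3.0

# The same odds, read as the ratio of the evidence counts themselves:
assert o_red == red_draws / black_draws == 3.0
```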
On such views, the simplest way to characterize the strength of incremental evidence is by making ordinal comparisons of conditional and unconditional probabilities or odds. (2.1) A Comparative Account of Incremental Evidence. Relative to a subjective probability function P, - E incrementally confirms (disconfirms, is irrelevant to) H if and only if PE(H) is greater than (less than, equal to) P(H). - H receives a greater increment (or lesser decrement) of evidential support from E than from E* if and only if PE(H) exceeds PE*(H). Both these equivalences continue to hold with probabilities replaced by odds. So, this part of the subjectivist theory of evidence does not depend on how total evidence is measured. Bayes' Theorem helps to illuminate the content of (2.1) by making it clear that E's status as incremental evidence for H is enhanced to the extent that H predicts E. This observation serves as the basis for the following conclusions about incremental confirmation (which hold so long as 1 > P(H), P(E) > 0). (2.1a) If E incrementally confirms H, then H incrementally confirms E. (2.1b) If E incrementally confirms H, then E incrementally disconfirms ~H. (2.1c) If H entails E, then E incrementally confirms H. (2.1d) If PH(E) = PH(E*), then H receives more incremental support from E than from E* if and only if E is unconditionally less probable than E*. (2.1e) Weak Likelihood Principle. E provides incremental evidence for H if and only if PH(E) > P~H(E). More generally, if PH(E) > PH*(E) and P~H(~E) ≥ P~H*(~E), then E provides more incremental evidence for H than for H*. (2.1a) tells us that incremental confirmation is a matter of mutual reinforcement: a person who sees E as evidence for H invests more confidence in the possibility that both propositions are true than in either possibility in which only one obtains. (2.1b) says that relevant evidence must be capable of discriminating between the truth and falsity of the hypothesis under test. 
(2.1c) provides a subjectivist rationale for the hypothetico-deductive model of confirmation. According to this model, hypotheses are incrementally confirmed by any evidence they entail. While subjectivists reject the idea that evidentiary relations can be characterized in a belief-independent manner — Bayesian confirmation is always relativized to a person and her subjective probabilities — they seek to preserve the basic insight of the H-D model by pointing out that hypotheses are incrementally supported by evidence they entail for anyone who has not already made up her mind about the hypothesis or the evidence. More precisely, if H entails E, then PE(H) = P(H)/P(E), which exceeds P(H) whenever 1 > P(E), P(H) > 0. This explains why scientists so often seek to design experiments that fit the H-D paradigm. Even when evidentiary relations are relativized to subjective probabilities, experiments in which the hypothesis under test entails the data will be regarded as evidentially relevant by anyone who has not yet made up his mind about the hypothesis or the data. The degree of incremental confirmation will vary among people depending on their prior levels of confidence in H and E, but everyone will agree that the data incrementally supports the hypothesis to at least some degree. Subjectivists invoke (2.1d) to explain why scientists so often regard improbable or surprising evidence as having more confirmatory potential than evidence that is antecedently known. While it is not true in general that improbable evidence has more confirming potential, it is true that E's incremental confirming power relative to H varies inversely with E's unconditional probability when the value of the inverse probability PH(E) is held fixed. If H entails both E and E*, say, then Bayes' Theorem entails that the least probable of the two supports H more strongly.
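The entailment fact PE(H) = P(H)/P(E) can be illustrated with a toy four-world distribution in which H entails E (the probabilities are arbitrary illustrative values):

```python
# Joint distribution over truth-value assignments to (H, E).
# Since H entails E, the world (H, ~E) carries probability 0.
p = {("H", "E"): 0.2, ("~H", "E"): 0.3, ("~H", "~E"): 0.5, ("H", "~E"): 0.0}
assert abs(sum(p.values()) - 1.0) < 1e-12

p_h = p[("H", "E")] + p[("H", "~E")]  # P(H) = 0.2
p_e = p[("H", "E")] + p[("~H", "E")]  # P(E) = 0.5
p_h_given_e = p[("H", "E")] / p_e     # P_E(H) = 0.4

# When H entails E: P_E(H) = P(H)/P(E), which exceeds P(H) whenever P(E) < 1.
assert abs(p_h_given_e - p_h / p_e) < 1e-12
assert p_h_given_e > p_h
```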
For example, even if heart attacks are invariably accompanied by severe chest pain and shortness of breath, the former symptom is far better evidence for a heart attack than the latter simply because severe chest pain is so much less common than shortness of breath. (2.1e) captures one core message of Bayes' Theorem for theories of confirmation. Let's say that H is uniformly better than H* as a predictor of E's truth-value when (a) H predicts E more strongly than H* does, and (b) ~H predicts ~E more strongly than ~H* does. According to the weak likelihood principle, hypotheses that are uniformly better predictors of the data are better supported by the data. For example, the fact that little Johnny is a Christian is better evidence for thinking that his parents are Christian than for thinking that they are Hindu because (a) a far higher proportion of Christian parents than Hindu parents have Christian children, and (b) a far higher proportion of non-Christian parents than non-Hindu parents have non-Christian children. Bayes' Theorem can also be used as the basis for developing and evaluating quantitative measures of evidential support. The results listed in Table 2 entail that all four of the functions PR, OR, PD and OD agree with one another on the simplest question of confirmation: Does E provide incremental evidence for H? (2.2) Corollary. Each of the following is equivalent to the assertion that E provides incremental evidence in favor of H: PR(H, E) > 1, OR(H, E) > 1, PD(H, E) > 0, OD(H, E) > 0. Thus, all four measures agree with the comparative account of incremental evidence given in (2.1). Given all this agreement it should not be surprising that PR(H, E), OR(H, E) and PD(H, E) have all been proposed as measures of the degree of incremental support that E provides for H. While OD(H, E) has not been suggested for this purpose, we will consider it for reasons of symmetry.
Some authors maintain that one or another of these functions is the unique correct measure of incremental evidence; others think it best to use a variety of measures that capture different evidential relationships. While this is not the place to adjudicate these issues, we can look to Bayes' Theorem for help in understanding what the various functions measure and in characterizing the formal relationships among them. All four measures agree in their conclusions about the comparative amount of incremental evidence that different items of data provide for a fixed hypothesis. In particular, they agree ordinally about the following concepts derived from incremental evidence: - The effective increment of evidence that E provides for H is the amount by which the incremental evidence that E provides for H exceeds the incremental evidence that ~E provides for H. - The differential in the incremental evidence that E and E* provide for H is the amount by which the incremental evidence that E provides for H exceeds the incremental evidence that E* provides for H. Effective evidence is a matter of the degree to which a person's total evidence for H depends on her opinion about E. When PE(H) and P~E(H) (or OE(H) and O~E(H)) are far apart the person's belief about E has a great effect on her belief about H: from her point of view, a great deal hangs on E's truth-value when it comes to questions about H's truth-value. A large differential in incremental evidence between E and E* tells us that learning E increases the subject's total evidence for H by a larger amount than learning E* does. Readers may consult Table 4 (in the supplement) for quantitative measures of effective and differential evidence. The second clause of (2.1) tells us that E provides more incremental evidence than E* does for H just in case the probability of H conditional on E exceeds the probability of H conditional on E*. 
It is then a simple step to show that all four measures of incremental support agree ordinally on questions of effective evidence and of differentials in incremental evidence. (2.3) Corollary. For any H, E* and E with positive probability, the following are equivalent: - E provides more incremental evidence than E* does for H - PR(H, E) > PR(H, E*) - OR(H, E) > OR(H, E*) - PD(H, E) > PD(H, E*) - OD(H, E) > OD(H, E*) The four measures of incremental support can disagree over the comparative degree to which a single item of data incrementally confirms two distinct hypotheses. Example 3, Example 4, and Example 5 (in the supplement) show the various ways in which this can happen. All the differences between the measures have ultimately to do with (a) whether the total evidence in favor of a hypothesis should be measured in terms of probabilities or in terms of odds, and (b) whether disparities in total evidence are best captured as ratios or as differences. Rows in the following table correspond to different measures of total evidence. Columns correspond to different ways of treating disparities. Similar tables can be constructed for measures of net evidence and measures of balances in total evidence. See Table 5A in the supplement. We can use the various forms of Bayes' Theorem to clarify the similarities and differences among these measures by rewriting each of them in terms of likelihood ratios. This table shows that there are two differences between each multiplicative measure and its additive counterpart. First, the likelihood term that appears in a given multiplicative measure is diminished by 1 in its associated additive measure. Second, in each additive measure the diminished likelihood term is multiplied by an expression for H's probability: P(H) or O(H), as the case may be. The first difference marks no distinction; it is due solely to the fact that the multiplicative and additive measures employ a different zero point from which to measure evidence. 
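That potential disagreement across hypotheses is easy to exhibit. In the sketch below two hypotheses predict the same evidence equally well (PH(E) = 0.8 in both cases, with P(E) = 0.4 fixed) but start from different priors; the probability ratio treats them alike while the probability difference favors the likelier one (all numbers are illustrative):

```python
def pr_and_pd(p_h, p_e_given_h, p_e):
    """Return (PR, PD) for hypothesis H given evidence E."""
    p_h_given_e = p_h * p_e_given_h / p_e   # Bayes' Theorem (1.2)
    return p_h_given_e / p_h, p_h_given_e - p_h

pr1, pd1 = pr_and_pd(p_h=0.20, p_e_given_h=0.8, p_e=0.4)
pr2, pd2 = pr_and_pd(p_h=0.05, p_e_given_h=0.8, p_e=0.4)

assert abs(pr1 - pr2) < 1e-9  # ratio measure: equal incremental support
assert pd1 > pd2              # difference measure: likelier hypothesis gains more
```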
If we settle on the point of probabilistic independence PE(H) = P(H) as a natural common zero, and so subtract 1 from each multiplicative measure, then equivalent likelihood terms appear in both columns. The real difference between the measures in a given row concerns the effect of unconditional probabilities on relations of incremental confirmation. Down the right column, the degree to which E provides incremental evidence for H is directly proportional to H's probability expressed in units of P(T) or P(~H). In the left column, H's probability makes no difference to the amount of incremental evidence that E provides for H once PH(E) and either P(E) or P~H(E) are fixed. In light of Bayes' Theorem, then, the difference between the ratio measures and the difference measures boils down to one question: Does a given piece of data provide a greater increment of evidential support for a more probable hypothesis than it does for a less probable hypothesis when both hypotheses predict the data equally well? The difference measures answer yes, the ratio measures answer no. Bayes' Theorem can also help us understand the difference between rows. The measures within a given row agree about the role of predictability in incremental confirmation. In the top row the incremental evidence that E provides for H increases linearly with PH(E)/P(E), whereas in the bottom row it increases linearly with PH(E)/P~H(E). Thus, when probabilities measure total evidence what matters is the degree to which H exceeds T as a predictor of E, but when odds measure total evidence it is the degree to which H exceeds ~H as a predictor of E that matters. The central issue here concerns the status of the likelihood ratio. While everyone agrees that it should play a leading role in any quantitative theory of evidence, there are conflicting views about precisely what evidential relationship it captures. There are three possible interpretations.
- Probability as total evidence reading
- Odds as total evidence reading
- Likelihoodist reading

On the first reading there is no conflict whatsoever between using probability ratios and using likelihood ratios to measure evidence. Once we get clear on the distinctions between total evidence, net evidence and the balance of evidence, we see that each of PR(H, E), LR(H, E) and LR(H, H*; E) measures an important evidential relationship, but that the relationships they measure are importantly different. When odds measure total evidence neither PR(H, E) nor LR(H, H*; E) plays a fundamental role in the theory of evidence. Changes in the probability ratio for H given E only indicate changes in incremental evidence in the presence of information about changes in the probability ratio for ~H given E. Likewise, changes in the likelihood ratio for H and H* given E only indicate changes in the balance of evidence in light of information about changes in the likelihood ratio for ~H and ~H* given E. Thus, while each of the two functions can figure as one component in a meaningful measure of confirmation, neither tells us anything about incremental evidence when taken by itself.

The third view, "likelihoodism," is popular among non-Bayesian statisticians. Its proponents deny evidence proportionism. They maintain that a person's subjective probability for a hypothesis merely reflects her degree of uncertainty about its truth; it need not be tied in any way to the amount of evidence she has in its favor. It is likelihood ratios, not subjective probabilities, which capture the "scientifically meaningful" evidential relations. Here are two classic statements of the position.

All the information which the data provide concerning the relative merits of two hypotheses is contained in the likelihood ratio of the hypotheses on the data.
(Edwards 1972, 30)

The 'evidential meaning' of experimental results is characterized fully by the likelihood function… Reports of experimental results in scientific journals should in principle be descriptions of likelihood functions. (Birnbaum 1962, 272)

On this view, everything that can be said about the evidential import of E for H is embodied in the following generalization of the weak likelihood principle:

The "Law of Likelihood". If H implies that the probability of E is x, while H* implies that the probability of E is x*, then E is evidence supporting H over H* if and only if x exceeds x*, and the likelihood ratio, x/x*, measures the strength of this support. (Hacking 1965, 106-109; Royall 1997, 3)

The biostatistician Richard Royall is a particularly lucid defender of likelihoodism (Royall 1997). He maintains that any scientifically respectable concept of evidence must analyze the evidential impact of E on H solely in terms of likelihoods; it should not advert to anyone's unconditional probabilities for E or H. This is supposed to be because likelihoods are both better known and more objective than unconditional probabilities. Royall argues strenuously against the idea that incremental evidence can be measured in terms of the disparity between unconditional and conditional probabilities. Here is the gist of his complaint:

Whereas [LR(H, H*; E)] measures the support for one hypothesis H relative to a specific alternative H*, without regard either to the prior probabilities of the two hypotheses or to what other hypotheses might also be considered, the law of changing probability [as measured by PR(H, E)] measures support for H relative to a specific prior distribution over H and its alternatives... The law of changing probability is of limited usefulness in scientific discourse because of its dependence on the prior probability distribution, which is generally unknown and/or personal.
Although you and I agree (on the basis of the law of likelihood) that given evidence supports H over H*, and H** over both H and H*, we might disagree about whether it is evidence supporting H (on the basis of the law of changing probability) purely on the basis of our different judgments of the prior probabilities of H, H*, and H**. (Royall 1997, 10-11, with slight changes in notation)

Royall's point is that neither the probability ratio nor the probability difference will capture the sort of objective evidence required by science because their values depend on the "subjective" terms P(E) and P(H), and not just on the "objective" likelihoods PH(E) and P~H(E). Whether one agrees with this assessment will be a matter of philosophical temperament, in particular of one's willingness to tolerate subjective probabilities in one's account of evidential relations. It will also depend crucially on the extent to which one is convinced that likelihoods are better known and more objective than ordinary subjective probabilities. Cases like the one envisioned in the law of likelihood, where hypotheses deductively entail a definite probability for the data, are relatively rare. So, unless one is willing to adopt a theory of evidence with a very restricted range of application, a great deal will turn on how easy it is to determine objective likelihoods in situations where the predictive connection from hypothesis to data is itself the result of inductive inferences.

However one comes down on these issues, though, there is no denying that likelihood ratios will play a central role in any probabilistic account of evidence. In fact, the weak likelihood principle (2.1e) encapsulates a minimal form of Bayesianism to which all parties can agree. This is clearest when it is restated in terms of likelihoods.

(2.1e) The Weak Likelihood Principle.
(expressed in terms of likelihood ratios) If LR(H, H*; E) ≥ 1 and LR(~H, ~H*; ~E) ≥ 1, with one inequality strict, then E provides more incremental evidence for H than for H* and ~E provides more incremental evidence for ~H than for ~H*.

Likelihoodists will endorse (2.1e) because the relationships described in its antecedent depend only on inverse probabilities. Proponents of both the "probability" and "odds" interpretations of total evidence will accept (2.1e) because satisfaction of its antecedent ensures that conditioning on E increases H's probability and its odds strictly more than those of H*. Indeed, the weak likelihood principle must be an integral part of any account of evidential relevance that deserves the title "Bayesian". To deny it is to misunderstand the central message of Bayes' Theorem for questions of evidence: namely, that hypotheses are confirmed by data they predict. As we shall see in the next section, this "minimal" form of Bayesianism figures importantly in subjectivist models of learning from experience.

Subjectivists think of learning as a process of belief revision in which a "prior" subjective probability P is replaced by a "posterior" probability Q that incorporates newly acquired information. This process proceeds in two stages. First, some of the subject's probabilities are directly altered by experience, intuition, memory, or some other non-inferential learning process. Second, the subject "updates" the rest of her opinions to bring them into line with her newly acquired knowledge. Many subjectivists are content to regard the initial belief changes as sui generis and independent of the believer's prior state of opinion. However, as long as the first phase of the learning process is understood to be non-inferential, subjectivism can be made compatible with an "externalist" epistemology that allows for criticism of belief changes in terms of the reliability of the causal processes that generate them.
It can even accommodate the thought that the direct effect of experience might depend causally on the believer's prior probability.

Subjectivists have studied the second, inferential phase of the learning process in great detail. Here immediate belief changes are seen as imposing constraints of the form "the posterior probability Q has such-and-such properties." The objective is to discover what sorts of constraints experience tends to impose, and to explain how the person's prior opinions can be used to justify the choice of a posterior probability from among the many that might satisfy a given constraint. Subjectivists approach the latter problem by assuming that the agent is justified in adopting whatever eligible posterior departs minimally from her prior opinions. This is a kind of "no jumping to conclusions" requirement. We explain it here as a natural result of the idea that rational learners should proportion their beliefs to the strength of the evidence they acquire.

The simplest learning experiences are those in which the learner becomes certain of the truth of some proposition E about which she was previously uncertain. Here the constraint is that all hypotheses inconsistent with E must be assigned probability zero. Subjectivists model this sort of learning as simple conditioning, the process in which the prior probability of each proposition H is replaced by a posterior that coincides with the prior probability of H conditional on E.

(3.1) Simple Conditioning. If a person with a "prior" such that 0 < P(E) < 1 has a learning experience whose sole immediate effect is to raise her subjective probability for E to 1, then her post-learning "posterior" for any proposition H should be Q(H) = PE(H).

In short, a rational believer who learns for certain that E is true should factor this information into her doxastic system by conditioning on it.
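On a finite space of "worlds," simple conditioning amounts to a one-line renormalization. The following sketch is my own illustration (the worlds, numbers, and helper names are assumptions, not the entry's):

```python
# Propositions are sets of labelled worlds; P maps each world to its prior probability.
P = {"w1": 0.2, "w2": 0.3, "w3": 0.4, "w4": 0.1}

def condition(P, E):
    """Simple conditioning (3.1): Q(w) = P(w)/P(E) for w in E, else 0."""
    pE = sum(p for w, p in P.items() if w in E)
    return {w: (p / pE if w in E else 0.0) for w, p in P.items()}

def prob(dist, A):
    """Probability of proposition A under distribution dist."""
    return sum(p for w, p in dist.items() if w in A)

E = {"w1", "w2"}              # learned for certain; P(E) = 0.5
H = {"w2", "w3"}              # some hypothesis; P(H) = 0.7
Q = condition(P, E)

assert abs(prob(Q, E) - 1) < 1e-12               # E is now certain
assert abs(prob(Q, H) - 0.3 / 0.5) < 1e-12       # Q(H) = P(H & E)/P(E) = PE(H)
```

Representing propositions as sets keeps the defining identity Q(H) = PE(H) a two-line check against any hypothesis.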
Though useful as an ideal, simple conditioning is not widely applicable because it requires the learner to become absolutely certain of E's truth. As Richard Jeffrey has argued (Jeffrey 1987), the evidence we receive is often too vague or ambiguous to justify such "dogmatism." On more realistic models, the direct effect of a learning experience will be to alter the subjective probability of some proposition without raising it to 1 or lowering it to 0. Experiences of this sort are appropriately modeled by what has come to be called Jeffrey conditioning (though Jeffrey's preferred term is "probability kinematics").

(3.2) Jeffrey Conditioning. If a person with a prior such that 0 < P(E) < 1 has a learning experience whose sole immediate effect is to change her subjective probability for E to q, then her post-learning posterior for any H should be Q(H) = qPE(H) + (1 − q)P~E(H).

Obviously, Jeffrey conditioning reduces to simple conditioning when q = 1. A variety of arguments for conditioning (simple or Jeffrey-style) can be found in the literature, but we cannot consider them here. There is, however, one sort of justification in which Bayes' Theorem figures prominently. It exploits connections between belief revision and the notion of incremental evidence to show that conditioning is the only belief revision rule that allows learners to correctly proportion their posterior beliefs to the new evidence they receive. The key to the argument lies in marrying the "minimal" version of Bayesianism expressed in (2.1e) to a very modest "proportioning" requirement for belief revision rules.

(3.3) The Weak Evidence Principle. If, relative to a prior P, E provides at least as much incremental evidence for H as for H*, and if H is antecedently more probable than H*, then H should remain more probable than H* after any learning experience whose sole immediate effect is to increase the probability of E.
This requires an agent to retain his views about the relative probability of two hypotheses when he acquires evidence that supports the more probable hypothesis more strongly. It rules out obviously irrational belief revisions such as this: George is more confident that the New York Yankees will win the American League Pennant than he is that the Boston Red Sox will win it, but he reverses himself when he learns (only) that the Yankees beat the Red Sox in last night's game. Combining (3.3) with minimal Bayesianism yields the following:

(3.4) Consequence. If a person's prior is such that LR(H, H*; E) ≥ 1, LR(~H, ~H*; ~E) ≥ 1, and P(H) > P(H*), then any learning experience whose sole immediate effect is to raise her subjective probability for E should result in a posterior such that Q(H) > Q(H*).

On the reasonable assumption that Q is defined on the same set of propositions over which P is defined, this condition suffices to pick out simple conditioning as the unique correct method of belief revision for learning experiences that make E certain. It picks out Jeffrey conditioning as the unique correct method when learning merely alters one's subjective probability for E. The argument for these conclusions makes use of the following two facts about probabilities.

(3.5) Lemma. If H and H* both entail E and P(H) > P(H*), then LR(H, H*; E) = 1 and LR(~H, ~H*; ~E) > 1.

(3.6) Lemma. Simple conditioning on E is the only rule for revising subjective probabilities that yields a posterior with the following properties for any prior such that P(E) > 0:
- Q(E) = 1.
- Ordinal Similarity. If H and H* both entail E, then P(H) ≥ P(H*) if and only if Q(H) ≥ Q(H*).

From here the argument for simple conditioning is a matter of using (3.4) and (3.5) to establish ordinal similarity. Suppose that H and H* entail E and that P(H) > P(H*). It follows from (3.5) that LR(H, H*; E) = 1 and LR(~H, ~H*; ~E) > 1.
(3.4) then entails that any learning experience that raises E's probability must result in a posterior with Q(H) > Q(H*). Thus, Q and P are ordinally similar with respect to hypotheses that entail E. If we go on to suppose that the learning experience raises E's probability to 1, then (3.6) guarantees that Q arises from P by simple conditioning on E.

The case for Jeffrey conditioning is similarly direct. Since the argument for ordinal similarity did not depend at all on the assumption that Q(E) = 1, we have really established

(3.7) Corollary.
• If H and H* entail E, then P(H) > P(H*) if and only if Q(H) > Q(H*).
• If H and H* entail ~E, then P(H) > P(H*) if and only if Q(H) > Q(H*).

So, Q is ordinally similar to P both when restricted to hypotheses that entail E and when restricted to hypotheses that entail ~E. Moreover, since dividing by positive numbers does not disturb ordinal relationships, it also follows that QE is ordinally similar to P when restricted to hypotheses that entail E, and that Q~E is ordinally similar to P when restricted to hypotheses that entail ~E. Since QE(E) = 1 and Q~E(~E) = 1, (3.6) then entails:

(3.8) Consequence. For every proposition H, QE(H) = PE(H) and Q~E(H) = P~E(H).

It is easy to show that (3.8) is necessary and sufficient for Q to arise from P by Jeffrey conditioning on E. Subject to the constraint Q(E) = q, it guarantees that Q(H) = qPE(H) + (1 − q)P~E(H). The general moral is clear. The basic Bayesian insight embodied in the weak likelihood principle (2.1e) entails that simple and Jeffrey conditioning on E are the only rational ways to revise beliefs in response to a learning experience whose sole immediate effect is to alter E's probability. While much more can be said about simple conditioning, Jeffrey conditioning and other forms of belief revision, these remarks should give the reader a sense of the importance of Bayes' Theorem in subjectivist accounts of learning and evidential support.
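Jeffrey conditioning (3.2) and Lemma (3.5) can both be spot-checked on a toy model. The sketch below is mine, not the entry's (the finite "worlds" representation, the numbers, and the helper names are illustrative assumptions):

```python
# Worlds carry prior probabilities; propositions are sets of worlds.
P = {"w1": 0.2, "w2": 0.3, "w3": 0.4, "w4": 0.1}
E = {"w1", "w2"}                      # P(E) = 0.5

def jeffrey(P, E, q):
    """Jeffrey conditioning (3.2): Q(w) = q*P(w|E) + (1-q)*P(w|~E)."""
    pE = sum(p for w, p in P.items() if w in E)
    return {w: (q * p / pE if w in E else (1 - q) * p / (1 - pE))
            for w, p in P.items()}

Q = jeffrey(P, E, q=0.8)              # experience raises P(E) from 0.5 to 0.8
assert abs(sum(Q.values()) - 1) < 1e-12

# At q = 1 the rule reduces to simple conditioning: Q(w) = P(w)/P(E) on E, 0 off E.
Q1 = jeffrey(P, E, q=1.0)
assert abs(Q1["w1"] - 0.4) < 1e-12 and Q1["w3"] == 0.0

# Lemma (3.5): H and H* both entail E (are subsets of E), with P(H) > P(H*);
# then LR(H, H*; E) = 1 while LR(~H, ~H*; ~E) > 1.
H, H_star = {"w2"}, {"w1"}            # P(H) = 0.3 > P(H*) = 0.2

def lik(hyp, data):
    """P_hyp(data): probability of data conditional on hyp."""
    return sum(P[w] for w in hyp & data) / sum(P[w] for w in hyp)

notE = set(P) - E
LR_E = lik(H, E) / lik(H_star, E)                             # = 1
LR_notE = lik(set(P) - H, notE) / lik(set(P) - H_star, notE)  # > 1
assert LR_E == 1.0 and LR_notE > 1
```

On this model the likelihood ratio given E is exactly 1 (both hypotheses make E certain), while the likelihood ratio of the negations given ~E strictly exceeds 1, just as the lemma requires.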
Though a mathematical triviality, the Theorem's central insight — that a hypothesis is supported by any body of data it renders probable — lies at the heart of all subjectivist approaches to epistemology, statistics, and inductive logic.

- Armendt, B. 1980. "Is There a Dutch Book Argument for Probability Kinematics?", Philosophy of Science 47, 583-588.
- Bayes, T. 1764. "An Essay Toward Solving a Problem in the Doctrine of Chances", Philosophical Transactions of the Royal Society of London 53, 370-418. [Facsimile available online: the original essay with an introduction by his friend Richard Price]
- Birnbaum, A. 1962. "On the Foundations of Statistical Inference", Journal of the American Statistical Association 53, 259-326.
- Carnap, R. 1962. Logical Foundations of Probability, 2nd edition. Chicago: University of Chicago Press.
- Chihara, C. 1987. "Some Problems for Bayesian Confirmation Theory", British Journal for the Philosophy of Science 38, 551-560.
- Christensen, D. 1999. "Measuring Evidence", Journal of Philosophy 96, 437-61.
- Dale, A. I. 1989. "Thomas Bayes: A Memorial", The Mathematical Intelligencer 11, 18-19.
- ----- 1999. A History of Inverse Probability, 2nd edition. New York: Springer-Verlag.
- Earman, J. 1992. Bayes or Bust? Cambridge, MA: MIT Press.
- Edwards, A. W. F. 1972. Likelihood. Cambridge: Cambridge University Press.
- Glymour, C. 1980. Theory and Evidence. Princeton: Princeton University Press.
- Hacking, I. 1965. Logic of Statistical Inference. Cambridge: Cambridge University Press.
- Hájek, A. 2003. "Interpretations of the Probability Calculus", in the Stanford Encyclopedia of Philosophy (Summer 2003 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2003/entries/probability-interpret/>
- Hammond, P. 1994. "Elementary non-Archimedean Representations of Probability for Decision Theory and Games," in P. Humphreys, ed., Patrick Suppes: Scientific Philosopher, vol.
1., Dordrecht: Kluwer Publishers, 25-62.
- Harper, W. 1976. "Rational Belief Change, Popper Functions and Counterfactuals," in W. Harper and C. Hooker, eds., Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, vol. I. Dordrecht: Reidel, 73-115.
- Hartigan, J. A. 1983. Bayes Theory. New York: Springer-Verlag.
- Howson, C. 1985. "Some Recent Objections to the Bayesian Theory of Support", British Journal for the Philosophy of Science 36, 305-309.
- Jeffrey, R. 1987. "Alias Smith and Jones: The Testimony of the Senses", Erkenntnis 26, 391-399.
- ----- 1992. Probability and the Art of Judgment. New York: Cambridge University Press.
- Joyce, J. M. 1999. The Foundations of Causal Decision Theory. New York: Cambridge University Press.
- Kahneman, D. and Tversky, A. 1973. "On the Psychology of Prediction", Psychological Review 80, 237-251.
- Kaplan, M. 1996. Decision Theory as Philosophy. Cambridge: Cambridge University Press.
- Levi, I. 1985. "Imprecision and Indeterminacy in Probability Judgment", Philosophy of Science 53, 390-409.
- Maher, P. 1996. "Subjective and Objective Confirmation", Philosophy of Science 63, 149-174.
- McGee, V. 1994. "Learning the Impossible," in E. Eells and B. Skyrms, eds., Probability and Conditionals. New York: Cambridge University Press, 179-200.
- Mortimer, H. 1988. The Logic of Induction, Ellis Horwood Series in Artificial Intelligence. New York: Halsted Press.
- Nozick, R. 1981. Philosophical Explanations. Cambridge: Harvard University Press.
- Rényi, A. 1955. "On a New Axiomatic Theory of Probability", Acta Mathematica Academiae Scientiarium Hungaricae 6, 285-335.
- Royall, R. 1997. Statistical Evidence: A Likelihood Paradigm. New York: Chapman & Hall/CRC.
- Skyrms, B. 1987. "Dynamic Coherence and Probability Kinematics", Philosophy of Science 54, 1-20.
- Sober, E. 2002. "Bayesianism — its Scope and Limits", in Swinburne (2002), 21-38.
- Spohn, W. 1986.
"The Representation of Popper Measures", Topoi 5, 69-74.
- Stigler, S. M. 1982. "Thomas Bayes' Bayesian Inference", Journal of the Royal Statistical Society, Series A 145, 250-258.
- Swinburne, R. 2002. Bayes' Theorem. Oxford: Oxford University Press (published for the British Academy).
- Talbott, W. 2001. "Bayesian Epistemology", in the Stanford Encyclopedia of Philosophy (Fall 2001 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2001/entries/epistemology-bayesian/>
- Teller, P. 1976. "Conditionalization, Observation, and Change of Preference", in W. Harper and C.A. Hooker, eds., Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science. Dordrecht: D. Reidel.
- van Fraassen, B. 1999. "A New Argument for Conditionalization", Topoi 18, 93-96.
- Williamson, T. 2000. Knowledge and its Limits. Oxford: Oxford University Press.

- Fitelson, B. 2001. Studies in Bayesian Confirmation Theory, Ph.D. Dissertation, University of Wisconsin. [Preprint in PDF available online] (750K download)
- Bayes' Original Essay (in PDF) (UCLA Statistics Department/History of Statistics)
- A Short Biography of Thomas Bayes (University of St. Andrews, MacTutor History of Mathematics Archive)
- The International Society for Bayesian Analysis (ISBA)
New Year's Resolution Time Capsules
Refresh Your Students' Goals With A New Beginning
- Grades: 1–2, 3–5, 6–8

When students come back from break, it can be difficult to get them refocused after the excitement of the holidays. In some ways, I treat January as a new beginning. My students and I reflect on what we have accomplished so far in the school year and make plans for the rest of our year together. Part of our plan includes the students making resolutions.

I start my lesson by asking students, “What is a resolution?” They soon learn that a resolution is a promise that you make to yourself. I then read aloud some of the resolutions made by my students in previous years. This gives my current students some specific ideas about making resolutions. I follow this up with a discussion about how there are different kinds of resolutions. I ask my students to make two PERSONAL resolutions, two resolutions that involve FAMILY OR FRIENDS, and two resolutions that involve SCHOOL.

Students share their top two resolutions with the class before we put them in our “Resolution Time Capsule.” I decorate a shoebox with New Year’s Eve decorations and have each student ceremoniously place their resolutions into the box. I explain to the students that we will not open the box until the end of the year to see if we have accomplished our goals. When the end of the year comes around, students are given their resolutions from the box and are asked to write a reflective piece of writing about how far they have come or what things they might still need to work on. This is the final piece of writing that is placed in their third grade portfolio.

In other years, I have had students use Print Shop in our computer lab to create posters on which they type their resolutions. This is nice because the final posters can be used to create a bulletin board in your classroom where students are reminded of their resolutions every day.

Students can make their resolutions into a poster.
What It Is

A phosphorus blood test is done to assess phosphorus levels in the blood. Phosphorus, a mineral obtained mostly from food, helps:
- form healthy bones and teeth
- process energy in the body
- support muscle and nerve functioning

Why It's Done

Doctors may order a phosphorus test to help diagnose or monitor any of several conditions, including:
- kidney disorders (to assess whether the kidneys are excreting or retaining too much phosphorus)
- gastrointestinal and nutritional disorders (to look for problems with intestinal absorption or malnutrition)
- calcium and bone problems (because calcium and phosphorus work closely together in the body, and the levels of one can yield important information about the other)

Kids tend to have higher phosphorus levels than adults, mainly because their bones are still growing. When there's not enough phosphorus, bone growth and other body functions may be affected. When there's too much, it can be a sign of conditions that affect the balance of minerals in the body.

No special preparations are needed for this test. However, certain drugs — especially antacids, laxatives, and diuretics — might alter the test results, so tell your doctor about any medications your child is taking. On the day of the test, having your child wear a short-sleeve shirt can make things easier for the technician who will be drawing the blood.

A health professional will usually draw the blood from a vein. For an infant, the blood may be obtained by puncturing the heel with a small needle (lancet). If the blood is being drawn from a vein, the skin surface is cleaned with antiseptic, and an elastic band (tourniquet) is placed around the upper arm to apply pressure and cause the veins to swell with blood. A needle is inserted into a vein (usually in the arm inside of the elbow or on the back of the hand) and blood is withdrawn and collected in a vial or syringe. After the procedure, the elastic band is removed.
Once the blood has been collected, the needle is removed and the area is covered with cotton or a bandage to stop the bleeding. Collecting blood for this test will only take a few minutes.

What to Expect

Either method (heel or vein withdrawal) of collecting a sample of blood is only temporarily uncomfortable and can feel like a quick pinprick. Afterward, there may be some mild bruising, which should go away in a few days.

Getting the Results

The blood sample will be processed by a machine. The results are commonly available after a few hours or the next day. If phosphorus levels are found to be either elevated or deficient, further testing may be necessary to determine what's causing the problem and how to treat it.

The phosphorus test is considered a safe procedure. However, as with many medical tests, some problems can occur with having blood drawn:
- fainting or feeling lightheaded
- hematoma (blood accumulating under the skin causing a lump or bruise)
- pain associated with multiple punctures to locate a vein

Helping Your Child

Having a blood test is relatively painless. Still, many children are afraid of needles. Explaining the test in terms your child can understand might help ease some of the fear. Allow your child to ask the technician any questions he or she might have. Tell your child to try to relax and stay still during the procedure, as tensing muscles and moving can make it harder and more painful to draw blood. It also may help if your child looks away when the needle is being inserted into the skin.

If You Have Questions

If you have questions about the phosphorus test, speak with your doctor.

Reviewed by: Steven Dowshen, MD
Date reviewed: March 2011

Note: All information is for educational purposes only. For specific medical advice, diagnoses, and treatment, consult your doctor. © 1995-2015 The Nemours Foundation/KidsHealth. All rights reserved.
Artifact Gallery -- Basket There is a whole group of Ancestral Pueblo people called the Basketmakers because of their superior basket making skills. The basket pictured, most likely dating from A.D. 450-750, shows the intricacy of woven patterns created by people in the Mesa Verde region as they began to transition from a hunter-gatherer to an agricultural lifestyle. Not only were baskets used for collecting seeds, nuts, fruits, and berries, but they were sometimes coated with pitch on the inside, which allowed them to hold water and tolerate heat. Baskets were also used for cooking, as an alternative to roasting food over hot coals. People could heat stones in the fire and then drop them into the baskets. Seeds were parched or roasted by placing warm stones in with the seeds and then shaking them together.
Let’s get started with a definition of ADHD and some symptoms:

According to Mayo Clinic, ADHD is a chronic condition that affects millions of children and often persists into adulthood. ADHD includes a combination of problems, such as difficulty sustaining attention, hyperactivity and impulsive behavior. Children with ADHD also may struggle with low self-esteem, troubled relationships and poor performance in school. While treatment won’t cure ADHD, it can help a great deal with symptoms. Treatment typically involves medications and behavioral interventions. Early diagnosis and treatment can make a big difference in outcome.

Signs and symptoms of ADHD may include:
- Difficulty paying attention
- Frequently daydreaming
- Difficulty following through on instructions and apparently not listening
- Frequently has problems organizing tasks or activities
- Frequently forgetful and loses needed items, such as books, pencils or toys
- Frequently fails to finish schoolwork, chores or other tasks
- Easily distracted
- Frequently fidgets or squirms
- Difficulty remaining seated and seemingly in constant motion
- Excessively talkative
- Frequently interrupts or intrudes on others’ conversations or games
- Frequently has trouble waiting for his or her turn

Most of these symptoms are perfectly normal in young children, especially boys. So how is a mom to know what is normal boy behavior and what is more likely ADHD? The answer is time. When your 8, 9, or 10 year old boy is still acting without thinking, forgetting simple directions, constantly fidgeting or talking, then it’s much more likely that his behavior goes beyond typical boy behavior. Sometimes, ADHD can include all of the above, plus some.
A few more signs of ADHD that may affect your child: - Difficulty keeping powerful emotions (good or bad) in check - Difficulty shifting focus - Making careless mistakes I only have five days for this series, so these are the major topics I’ll be discussing the rest of the week — I hope you’ll come back and join us as we explore these issues.
Hummingbirds fly. That sounds redundant but includes what they do NOT do. Almost all hummingbirds do not walk or hop[4] as other birds do. A female hummingbird will not even stand up and rotate her place to care for her eggs. She would rather fly. She will lift up off her nest, shift her position and hover back down. Hummingbird feet allow them to perch and balance, but their strength is flight.[2]

Hummingbird flight is more diverse than other kinds of flight. There are evolutionary strengths that allow hummingbirds to fly forward, fly in reverse, and maneuver sideways as breezes blow the flowers. A scientific test showed that hummingbirds are able to hover for as much as 50 minutes in one place – a strength that allows them to gather nectars tirelessly.[2] Hummingbirds also stop and accelerate instantly, lift straight up and down and pivot while hovering. Hummingbirds can even escape quickly from a flower by flipping into a backwards somersault where they briefly fly in reverse while upside down![7]

Hummingbird wings are affectionately called “hands” because the wing bone structure is all hand bone.[12] The elbow and wrist joints of hummingbirds are rigid, so the wing does not bend or fold in the middle but remains straight out from the body in flight.[10] When a hummingbird flies forward, its wings beat up and down in a very slight circling motion. The wing is constructed of mostly primary feathers, and the tilt and rotation at the shoulder allow the up-stroke to propel the hummingbird forward with as much force as the down-stroke. This means that a hummingbird has two thrust strokes to every one of another bird. In other birds, the upstroke is a passive stroke – designed only to lift the wing into position.[7]

In order to hover, a hummingbird's wings move back and forth horizontally... drawing a narrow but elegant figure eight in the air with each full stroke.
The stroke is continuous – like a Möbius strip – which is the symbol of infinity.[1] You could repeat the hovering motion of hummingbird wings by holding your own arms straight out from the body and parallel to the floor. Turn your palms down. Imitate hummingbird wings by sweeping your arms forward with the thumbs leading the way. To continue into a hovering back-stroke, your hands roll up and over. The whole arm sweeps backward with your palms to the sky, thumbs still leading the way. One full cycle of a hummingbird wing beat is completed when your arms reach back behind the body as far as they can go - and then the thumbs roll up and flip over again.[10] Your palms are turned down, arms sweeping forward into the next stroke of infinity flight.

This unusual wing posture and movement is sustained by very strong breast muscles. An evolution particular to the hummingbird has made the muscles that elevate the wing (the up-stroke) as strong as the muscles that depress the wing (the down-stroke). Add amazing rotation to the hummingbird shoulder joint (nearly 180°) and you have a forward and backward moving wing with precision control.[7] The balance of muscle strength is exactly what allows every hummingbird to hover. The propulsion of the forward stroke is nullified by the reverse propulsion of the backstroke.[2] There is enough force to cause lift but direction is kept in stasis until the hummingbird chooses otherwise.[8]

It is the double wing-stroke that provides abrupt swiftness and the characteristic “hum” of hummingbirds. For years it was thought that the humming hummingbird would beat its wings faster than other birds. Now we know that this is an illusion. When you compensate for body weight and wing length, hummingbird wings often move slower than other birds'. There are times when the hummingbird wing beat lives up to our expectation.
When the male ruby-throated hummingbird is mating, its wings will accelerate and have been measured at as much as 200 beats per second during an aerial display. This is the highest beats-per-second of any bird – but only briefly.8 Another illusion is that hummingbirds fly faster than most other birds. Appearances account for this misunderstanding. Because they are so small, when hummingbirds fly you can barely see them, and this seems very fast. The fastest flying bird is the peregrine falcon – also known as the duck hawk – which drops down on its prey at 175 mph10 and has been measured at 160-180 mph.12 A scientific study measured the forward speed of a ruby-throated hummingbird at 27 mph, and other reports have clocked hummingbirds in the wild at 40 mph and even 60 mph during courtship. It takes only 1/500th of a second for a hummingbird to complete a wing beat cycle, and only three cycles for hummingbird flight to occur.10 It is no wonder that hummingbirds are fearless. It is no wonder that hummingbirds come into our gardens and sometimes hover within arm's reach. Their aerial finesse is perfect protection and grace. There are researchers who believe that hummingbirds did not evolve from other birds (swifts) but developed their own bodies for specialized flight. These scientists have created a separate order just for hummingbirds called "Trochiliformes," which recognizes the hummingbird as the highest evolution of all the non-perching birds.8
This wonderful and exciting lesson is an amazing way to review days of the week, October holidays and numbers from 0-31 in a single lesson plan. The lesson includes:
• Calendario de octubre del 2017
• Practice Worksheet - Writing prompt for classroom birthdays, events and holiday dates in Spanish.
1- Begin by reviewing the days of the week. Have them work on the vocabulary worksheet of the days of the week.
2- Chant numbers using the 0-31 vocab sheet.
3- Review days of the week and numbers from 1-31 by using day-number combinations. Students just have to look at the calendar and follow. For example, domingo uno, lunes dos... continue all the way to 31.
4- Go over holidays, and have them write them in small print on the calendar.
5- Have them write all birthdays in October (Cumpleaños) and school October events.
6- Have them practice dates. Write classroom birthdays and events on the board.
BONUS - Three Interactive Activities
1- Interactive Activity 1 - Complete "Label the Skull" - Review parts of the face. Great Day of the Dead preparation.
2- Interactive Activity 2 - Review Spanish colors by coloring "La Lechuza".
3- Interactive Activity 3 - Christopher Columbus Bookmark. Great homework and class-starter activity. Have students color the bookmark and encourage them to bring their favorite book to class. Have them complete the prompt "Mi libro favorito es…" Write choices on the board. Tally all the votes and select the Class Favorite.
**La Fecha y los Días de la Semana = You can do an in-depth unit to teach students how to write dates in Spanish. Awesome unit.
**Los Días de la Semana y los Meses - Have fun with these amazing word searches. Make it a game and have students compete for homework passes.
**100 days - Numbers Practice - You can celebrate 100 días en la escuela. Practice numbers from 0 to a million. Students love listening to big numbers being called out and guessing them.
Please click on my store "El Jaguar" so you can follow me and be the first one to be notified of new lessons and sales.
LONDON (Thomson Reuters Foundation) - More than two centuries after the British colonization of Australia, the High Court of Australia decided in 1992 that the common law could recognise the land and water rights of Aboriginal peoples under traditional law and custom. Known as the 'Mabo' case, this judgment overturned the long-held view that when Australia was settled it was 'terra nullius' or 'practically unoccupied'. The judgment paved the way for intense negotiation and discussion between Aboriginal people, government and business interests, and a year later the Federal Parliament passed the Commonwealth Native Title Act 1993. The law sets out the way Australia's indigenous people can seek native title, based on a litigation process but with an emphasis on agreement. So far, the majority of determinations have been made by consent. Native title in Australia is underpinned by recognition that:
- Australian common law will recognise the native title of indigenous people, which can be protected under that law.
- When the British Crown took possession of each of the Australian colonies (now a federation of states and territories), it acquired sovereignty over them. This means that in some cases, native title can be wholly or partly extinguished by laws or executive grants (such as Crown land or perpetual leases) that might be inconsistent with native title.
- For native title to be recognised, indigenous people must show that they have had a continuing connection with the land and waters in question and hold these interests under traditional law and custom.
- The nature of native title under common law is that it is 'communal' in character and cannot be bought or sold. However, it can be surrendered to the Crown and can also be transmitted from one group to another according to traditional law and custom.
- The Native Title Act also created the National Native Title Tribunal, which hears claims and acts as a mediation and arbitration body.
(Reporting by Paola Totaro, Editing by Belinda Goldsmith; Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, that covers humanitarian news, women's rights, trafficking, property rights and climate change. Visit news.trust.org)
Temperature, humidity shape snow crystals Source: Marcia Politovich, National Center for Atmospheric Research As snow crystals form, they take on a six-sided, or hexagonal, shape – but with what seems like an infinite number of variations on being six-sided. The temperature at which a crystal forms, and to a lesser extent the humidity of the air, determine the basic shape. The graphic above shows, in a general way, the kinds of snow crystals that form at various temperature ranges. The many things that happen to snow crystals as they fall, such as collisions, partial melting and colliding with water drops that freeze to them, create even more shapes. This is why irregular crystals with no easily identifiable form are the most common. Sometimes crystals are a combination of more than one form. For example, hollow columns that form in air colder than -8°F could grow thin plates on one or both ends as they fall through warmer air. While most people refer to shapes like those in the graphic above as snowflakes, flakes are really made of many snow crystals that have stuck together. Snow crystals form hexagonal shapes because of the way the two hydrogen atoms that join with an oxygen atom to form a water molecule attract the oxygen atoms of other water molecules. You will find more information on snow crystals in Chapter 7 of the USA TODAY Weather Book, Second Edition, by Jack Williams, published in 1997 by Vintage Books. You will find a great deal of information about snow crystals in The Snowflake: Winter's Secret Beauty by Kenneth Libbrecht, photos by Patricia Rasmussen, published in 2003 by Voyageur Press, Inc. Libbrecht's SnowCrystals.com Web site is one of the best online sources of information. Wilson Alwyn Bentley, a Vermont farmer and self-taught photographer, pioneered snow-crystal photography during four decades of work that began in the late 19th century. Ice crystals in the air create various kinds of rings and splotches of light, such as sun dogs.
The USATODAY.com Understanding sky color and phenomena page has information on these, and links to photos that will help you understand them. When you go to this page, scroll down to the "If it's not a rainbow, what is it?" headline. Our Resources: Winter weather page has links to information on ice and snow, the weather that creates them, and the consequences of winter weather.
Messenger Detects Water On Mercury As the closest planet to the sun, Mercury might seem like a pretty hot place, but the Messenger probe has detected a massive deposit of frozen water on the surface of the planet: Mercury is as cold as ice. Indeed, Mercury, the closest planet to the Sun, possesses a lot of ice — 100 billion to one trillion tons — scientists working with NASA’s Messenger spacecraft reported on Thursday. Sean C. Solomon, the principal investigator for Messenger, said there was enough ice there to encase Washington, D.C., in a frozen block two and a half miles deep. That is a counterintuitive discovery for a place that also ranks among the hottest in the solar system. At noon at the equator on Mercury, the temperature can hit 800 degrees Fahrenheit. But near Mercury’s poles, deep within craters where the Sun never shines, temperatures dip to as cold as minus 370. “In these planetary bodies, there are hidden places, as it were, that can have interesting things going on,” said David J. Lawrence, a senior scientist at the Johns Hopkins University Applied Physics Laboratory working on the Messenger mission. The findings appear in a set of three papers published Thursday on the Web site of the journal Science. The ice could be an intriguing science target for a future robotic lander or even a resource for astronauts in the far future. Planetary scientists had strong hints of the ice a couple of decades ago when telescopes bounced radio waves off Mercury and the reflections were surprisingly bright. But some researchers suggested the craters could be lined with silicate compounds or sulfur, which might also be highly reflective. The Messenger spacecraft, which swung into orbit around Mercury in March 2011 and has completed its primary mission, took a closer look by counting particles known as neutrons that are flying off the planet. High-energy cosmic rays break apart atoms, and the debris includes neutrons.
But when a speeding neutron hits a hydrogen atom, which is almost the same weight, it comes to almost a complete stop, just as the cue ball in billiards transfers its momentum when it hits another ball. Water molecules contain two hydrogen atoms, and thus when Messenger passed over ice-rich areas, the number of neutrons dropped. The ice is almost pure water, which indicates that it arrived within the last few tens of millions of years, possibly from a comet that smacked into Mercury. Dr. Solomon said several young craters on the surface of Mercury could be candidates for such an impact. Could there possibly be organic compounds in that frozen water? If a comet was the source of the water as theorized, then it’s certainly a possibility. Perhaps someday we’ll find out.
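The billiard-ball analogy can be made quantitative. A standard elastic-collision result (textbook physics, not something taken from the Messenger papers) gives the fraction of a neutron's kinetic energy transferred to a struck nucleus in a head-on collision, and shows why hydrogen, with nearly the neutron's own mass, stops it so effectively:

```python
# Standard head-on elastic-collision result: a particle of mass m1 transfers
# a fraction 4*m1*m2 / (m1 + m2)**2 of its kinetic energy to a struck mass m2.
def energy_transfer_fraction(m1, m2):
    """Fraction of kinetic energy transferred in a head-on elastic collision."""
    return 4 * m1 * m2 / (m1 + m2) ** 2

M_NEUTRON = 1.0087  # masses in atomic mass units

print(round(energy_transfer_fraction(M_NEUTRON, 1.0078), 3))  # hydrogen: ~1.0
print(round(energy_transfer_fraction(M_NEUTRON, 28.09), 3))   # silicon: ~0.13
```

Against hydrogen the neutron gives up essentially all of its energy, just as the cue ball stops dead; against a heavier nucleus like silicon it keeps most of its speed, which is why a drop in escaping neutrons signals buried hydrogen.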
Over the past week or so we’ve covered a good number of blackface Halloween costumes. They’re always wicked, and it’s not just because racism makes people feel bad. Over at The Grio, Blair L.M. Kelley gives a brief history of blackface: Blackface minstrelsy first became nationally popular in the late 1820s, when white male performers portrayed African-American characters using burnt cork to blacken their skin. Wearing tattered clothes, the performances mocked black behavior, playing racial stereotypes for laughs. Although Jim Crow was probably born in the folklore of the enslaved in the Georgia Sea Islands, one of the most famous minstrel performers, a white man named Thomas “Daddy” Rice, brought the character to the stage for the first time. Rice said that on a trip through the South he met a runaway slave who performed a signature song and dance called Jump Jim Crow. Rice’s performances – with blackened skin and drawn-on, distended, blood-red lips surrounded by white paint – were said to be just Rice’s attempt to depict the realities of black life. Jim Crow grew to be minstrelsy’s most famous character; in the hands of Rice and other performers, Jim Crow was depicted as a runaway: “the wheeling stranger” and “traveling intruder.” The gag in Jim Crow performances was that Crow would show up and disturb white passengers in otherwise peaceful first-class rail cars, hotels, restaurants, and steamships. Jim Crow performances served as an object lesson about the dangers of free black people – so much so that the segregated spaces first created in northern states in the 1850s were popularly called Jim Crow cars. Jim Crow became synonymous with white desires to keep black people out of white, middle-class spaces.
If you were given the equation x + 2 = 4, it probably wouldn’t take you long to figure out that x = 2. No other number will substitute for x and make that a true statement. If the equation were x^2 + 2 = 4, you would have two answers: √2 and -√2. But if you were given the inequality x + 2 < 4, there are an infinite number of solutions. To describe this infinite set of solutions, you would use interval notation, and provide the boundaries of the range of numbers constituting a solution to this inequality. Use the same procedures you use when solving equations to isolate your unknown variable. You can add or subtract the same number on both sides of the inequality, just as with an equation. In the example x + 2 < 4, you could subtract two from both the left and right side of the inequality and get x < 2. Multiply or divide both sides by the same positive number, just as you would in an equation. If 2x + 5 < 7, first you would subtract five from each side to get 2x < 2. Then divide both sides by 2 to get x < 1. Switch the inequality if you multiply or divide by a negative number. If you were given 10 - 3x > -5, first subtract 10 from both sides to get -3x > -15. Then divide both sides by -3, leaving x on the left side of the inequality and 5 on the right. But you’d need to switch the direction of the inequality: x < 5. Use factoring techniques to find the solution set of a polynomial inequality. Suppose you were given x^2 - x < 6. Set your right side equal to zero, as you would when solving a polynomial equation. Do this by subtracting 6 from both sides. Because this is subtraction, the inequality sign does not change: x^2 - x - 6 < 0. Now factor the left side: (x+2)(x-3) < 0. This will be a true statement when either (x+2) or (x-3) is negative, but not both, because the product of two negative numbers is a positive number. Only when x is > -2 but < 3 is this statement true. Use interval notation to express the range of numbers making your inequality a true statement.
The solution set describing all numbers between -2 and 3 is expressed as: (-2,3). For the inequality x + 2 < 4, the solution set includes all numbers less than 2. So your solution ranges from negative infinity up to (but not including) 2 and would be written as (-inf, 2). Use brackets instead of parentheses to indicate that either or both of the numbers serving as boundaries for the range of your solution set are included in the solution set. So if x + 2 is less than or equal to 4, 2 would be a solution to the inequality, in addition to all the numbers less than 2. The solution to this would be written as: (-inf, 2]. If the solution set were all numbers between -2 and 3, including -2 and 3, the solution set would be written as: [-2,3].
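The sign-flip rule and the factored solution set above can be sanity-checked numerically. This short Python sketch (the helper name `satisfies` is mine, purely illustrative) tests sample points against each inequality:

```python
def satisfies(f, xs):
    """Return the subset of sample points xs for which the inequality f holds."""
    return [x for x in xs if f(x)]

samples = [-4, -2, 0, 2, 3, 4, 6]

# x + 2 < 4  ->  solution set (-inf, 2)
print(satisfies(lambda x: x + 2 < 4, samples))      # → [-4, -2, 0]

# 10 - 3x > -5  ->  dividing by -3 flips the sign, giving x < 5
print(satisfies(lambda x: 10 - 3*x > -5, samples))  # → [-4, -2, 0, 2, 3, 4]

# x^2 - x < 6  ->  (x+2)(x-3) < 0, true only on the open interval (-2, 3)
print(satisfies(lambda x: x*x - x < 6, samples))    # → [0, 2]
```

Note that the boundary points themselves (2 in the first case, -2 and 3 in the last) fail the strict inequalities, which is exactly why they are excluded with parentheses rather than brackets.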
The solving of Raven’s matrices is a problem faced by many computer science and psychology majors. Raven’s matrices are one of the popular tests of human IQ – what we may equate to human intelligence. Raven’s tests consist of a matrix of visual objects that are manipulated between pairs, with the image missing in the last pair; this is the one that needs to be determined from a set of multiple-choice options. The key in most Raven’s problems is to determine the transformation of an object, or a group of objects, in order to determine what the last object should be. As shown in “2×1 Basic Problem 02”, image A shows a small circle, which image B shows as a large circle. This is your first clue – what changed from image A to B: the size of the same shape. With that clue you would then infer the same from C to ?. The small square in C should likewise change to a large square, so the obvious answer becomes option 6. In a 2×1 problem there is no need for correlation and grouping of boxes because the problem is in a lateral form, meaning A to B and C to ?. But when considering a 2×2 problem your reasoning needs to be altered. 2×2 problems require that you perform correlation and grouping of the problem space, meaning you need to determine which figures correlate with which other figures before determining the transformations. If you look at “2×2 Basic Problem 02”, one can say the fill changed from A to B row-wise, but one can also say the shape changed from A to C column-wise, and both would be correct – but the choice may affect the outcome. This is the basis of this project: to build an artificial intelligence agent that can smartly apply reasoning and logic to solve a set of Raven’s matrix tests, in particular 2×2 matrices. When building an AI agent to solve these tests, it is important to first determine how the input for the test would be passed to the agent.
Fortunately for this project the inputs are done via textual representation, as shown in the illustration “2×1 Basic Problem 02”. All the visual tests have already been decomposed into a textual representation; this is parsed by the calling application and sent to the agent via objects that represent the problem set – the entire set of figures and objects contained in the problem, including the answer options. The agent implements its solving methodology in five stages. Stage one groups figures together by measuring correlative correctness. Stage two employs a smart generator to generate frames – referred to as “Comparison Sheets” throughout the paper. Stage three uses a tester to compare these comparison sheets for correlative correctness. Stage four works in concert with stage three by comparing extra non-intuitive observable traits, used mostly as tie-breakers, and stage five compares the scores from the tester and picks the highest score as the answer. We begin our journey by first establishing some basic axioms: all comparisons and most of the operations are done on pairs of objects; figures are the boxes (as in the illustrations) A, B, C, 1, 2, etc.; objects or shapes are the actual items inside the figures (boxes). The AI agent will first attempt to correlate figures and score them to determine the grouping – A to B row-wise vs. C to ?, or A to C column-wise vs. B to ?, or both. This correlation is done by looking at the attributes of each object in figure A, then comparing these attributes one by one to all the attributes of the objects in the other two figures, B and C, while scoring for correlative correctness and shape consistency. So a square in A, a square in B and a triangle in C means that A and B are more correlated than A and C, because A->B maintains the shape: square. Once this is determined, the AI agent can conclude that it needs to determine transformations from A to B and then infer these on C to ?.
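As a rough illustration of the stage-one scoring just described, here is a hypothetical sketch in Python; the attribute names, weights and data layout are my own guesses, not the project's actual representation:

```python
# Hypothetical sketch of stage-one correlation scoring: attributes are compared
# one by one, with shape consistency weighted more heavily (weights are mine).
def correlation_score(fig_a, fig_b):
    """Score how well the objects in two figures correlate, attribute by attribute."""
    score = 0
    for obj_a, obj_b in zip(fig_a, fig_b):
        for attr, value in obj_a.items():
            if obj_b.get(attr) == value:
                score += 3 if attr == "shape" else 1
    return score

A = [{"shape": "square", "fill": "no", "size": "small"}]
B = [{"shape": "square", "fill": "yes", "size": "large"}]
C = [{"shape": "triangle", "fill": "no", "size": "small"}]

# A->B keeps the shape (square), so it out-scores A->C despite other matches.
print(correlation_score(A, B), correlation_score(A, C))  # → 3 2
```

With this weighting the agent would group A with B row-wise and go on to infer the A-to-B transformations on C to ?.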
Once the AI agent determines the objects that need to be grouped, it renames them in working memory using their ordinals for ease of processing, naming the first figure 1 and the second figure 2. This removes the static naming of A & B and makes the agent more dynamic. This solves the problem of correlation and grouping. The agent then continues by first observing the transformations that occur between the objects from 1 to 2 (A to B in this case). It uses these observable transformations to build a “comparison sheet” in working memory. It then looks at the remaining figure in the question (C in this case) and renames it to 1 in working memory for ease of comparison processing. The agent then builds comparison sheets for C (now known as 1) against every answer option (C to 1, C to 2, C to 3, etc.), which it also stores in working memory. Armed with all this knowledge in working memory, it compares the comparison sheet from 1 to 2 (A to B) with the sheets from 1 (C) to N[1,2,3,4,5,6], one at a time, and scores those transformations and attributes that match exactly with 1 to 2 (A to B). Apart from that, it also looks at a few other non-intuitive observable traits – e.g. did all the objects change to another type of object, did the location change, are all the objects consistent between the question and the proposed answer – and compares and scores those also. Finally the scores are ranked, and the pair (C to ?) with the highest correlation score gets elected as the most likely answer option. Comparison sheets are generated by the smart generator from observables and non-intuitive traits. If you look at “Comparison sheet from A to B”, you will notice the renaming of the figures and underlying objects to their ordinals 1, 2, 3, etc. This example in particular shows that there is one object in each of figure 1 and figure 2, denoted by 1.1 and 2.1. It also shows that three transformations were detected and added to the sheet, prefixed with “tf-”.
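A comparison sheet of the kind described above might be sketched like this in Python; the `tf-` key prefix mirrors the paper's convention, but the function and data layout are hypothetical:

```python
# Hypothetical sketch of stage two: build a "comparison sheet" recording the
# transformations observed between a pair of figures (layout is illustrative).
def build_comparison_sheet(fig1, fig2):
    """Record observed transformations between two figures as tf- entries."""
    sheet = {}
    for name in sorted(set(fig1) | set(fig2)):
        o1, o2 = fig1.get(name, {}), fig2.get(name, {})
        sheet["tf-shape_changed-" + name] = o1.get("shape") != o2.get("shape")
        if "angle" in o1 and "angle" in o2:
            sheet["tf-angle_delta-" + name] = (o2["angle"] - o1["angle"]) % 360
    sheet["tf-count_changed"] = len(fig1) != len(fig2)
    return sheet

# One object per figure, renamed to its ordinal as described in the text.
fig1 = {"1": {"shape": "square", "angle": 0}}
fig2 = {"1": {"shape": "circle", "angle": 90}}
print(build_comparison_sheet(fig1, fig2))
```

The same routine can then be run for C against every answer option, producing the sheets that stage three compares against the A-to-B sheet.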
So between figures 1 and 2, the angle changed by -270 and the shape changed. Also notice that the type of shape for the shape change was not noted on the transformation, as this has no bearing on the answer; what is important here is that, because of the shape change, the tester can infer that the shape in the answer must be different from the shape in the question. For this demonstration I will use a longer sheet below and will also refer to the figures and objects by their original names for ease of explanation. Stage three is where the smart tester takes the “Comparison Sheets” and compares them against each other, scoring them for correctness. It does this by comparing the sheet from A to B with those from C to N[1,2,3,4,5,6]. So A.Z.fill: no on sheet (A to B) should match A.Z.fill on sheet (C to 6), and so forth. In stage four the smart tester uses logic and deeper reasoning to infer the answer. For example, if “tf-count_changed = no” then the number of objects in C should be the same as the number of objects in the answer. Furthermore, if “tf-count_changed = yes” then tf-objects_added and tf-objects_deleted are consulted to infer the quantity of objects expected in the answer. If there is a change in angle between figures, the smart tester does not compare explicit angles between objects on the sheets but instead compares the angle difference between the comparing objects, and also tries to infer what the new angle should be. For example, if the angle between related objects changes from 45 to 135, the tester infers that the answer should also reflect a 90-degree angle change. These intuitive checks also augment the score by adding 1 for every positive test result. In the final stage (stage five), the tester ranks all scores from the previous stage and takes the highest one. In this question, option #6 had the highest score of 9.
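The angle-difference inference in stage four can be sketched as follows; the option names and scoring here are illustrative assumptions, not the agent's actual code:

```python
# Hypothetical sketch of the stage-four angle check: the tester compares angle
# *differences* between paired objects, never absolute angles.
def angle_change(a, b):
    """Rotation from angle a to angle b, normalized to 0-359 degrees."""
    return (b - a) % 360

# A -> B rotated from 45 to 135: a 90-degree change.
ab_delta = angle_change(45, 135)

# Score each candidate answer: +1 if it reflects the same change from C.
candidates = {"option_1": angle_change(0, 90), "option_2": angle_change(0, 180)}
scores = {name: int(delta == ab_delta) for name, delta in candidates.items()}
print(ab_delta, scores)  # → 90 {'option_1': 1, 'option_2': 0}
```

Comparing deltas rather than raw angles is what lets the same rule apply no matter what orientation the C figure starts in.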
In summary, the agent uses the generate-and-test method to solve these problems and employs production rules to create a smart tester and a smart generator. With this methodology the agent was able to solve 65% of the 20 Basic problem tests in 2 seconds. The only thing that would increase the processing time is the size of the problem: if the problem contains many objects, the agent would take a little longer (nanoseconds), but this is not noticeable to the user, because the agent solves the problems in a procedural fashion. If we compare the agent’s reasoning to human cognition: like human cognition, the agent uses observations and forms base conclusions from these observations. It then augments these conclusions based on other tests, just as we do when we look at these problems. As humans, we tend first to try to figure out what forms our base comparison group, A & B or A & C; the agent does that also. Once we determine that the group is, let’s say, A to B, we then try to figure out what’s different between them – what the transformations are; the agent models this using the analogy of “Comparison Sheets”. We then tend to look at the answers, compare them with the remaining figure (C) and determine which one of the answers most closely resembles the transformations from A to B. The option with the closest correlation is normally the one we choose; the agent does the same. It even goes as far as to keep a second possible answer, but unfortunately there is no option for a second guess in this project. However, there are weaknesses to my design. One of these weaknesses is that the agent does not use long-term memory of past questions; it relies only on its production rules and working memory between the smart generator and tester to determine the answer. This I hope to change in the future by storing the chosen answer to each problem, and the problem itself, so the agent can look them up.
Furthermore, there are two problems where the agent scored all the answer options the same, and there was no perceivable tie-breaker, so by default it chose the last option. This, I believe, is caused by the ordinal naming of the objects in the figures. Conversely, the strength of this design is the modularity with which it is implemented, using definitive stages. There is a clear distinction between the stages and what should be passed between them. The functionality of each stage can be improved independently without adversely affecting the other stages, because of the modular design and the use of the “Comparison Sheet” that is passed between the modules. I believe that, given more time, the agent can be improved with long-term memory, better object correlations and deeper knowledge and analysis of shapes and changes in angles. Most of all, this was an excellent project that kept me on the edge of my seat, my fingers glued to the keyboard punching out code to make my agent smarter and more efficient – with some moments of pulling out my hair and wanting to throw the computer out the window – but moreover it made me think more deeply about how we think and how we use knowledge and reasoning to solve problems as humans.
When Charles Darwin listened to music, he asked himself, what is it for? Philosophers had pondered the mathematical beauty of music for thousands of years, but Darwin wondered about its connection to biology. Humans make music just as beavers build dams and peacocks show off their tail feathers, he reasoned, so music must have evolved. What drove its evolution was hard for him to divine, however. “As neither the enjoyment nor the capacity of producing musical notes are faculties of the least direct use to man in reference to his ordinary habits of life, they must be ranked among the most mysterious with which he is endowed,” Darwin wrote in 1871. Today a number of scientists are trying to solve that mystery by looking at music right where we experience it: in the brain. They are scanning the activity that music triggers in our neurons and observing how music alters our biochemistry. But far from settling on a single answer, the researchers are in a pitched debate over music. Some argue that it evolved in our ancestors because it allowed them to have more children. Others see it as merely a fortunate accident of a complex brain. In many ways music appears to be hardwired in us. Anthropologists have yet to discover a single human culture without its own form of music. Children don’t need any formal training to learn how to sing and dance. And music existed long before modern civilization. In 2008 archaeologists in Germany discovered the remains of a 35,000-year-old flute. Music, in other words, is universal, easily learned, and ancient. That’s what you would expect of an instinct that evolved in our distant ancestors. Darwin himself believed that music evolved as a primordial love song. In other species, males make rhythmic grunts, screeches, and chirps to attract mates. 
“Musical tones and rhythm were used by the half-human progenitors of man, during the season of courtship, when animals of all kinds are excited by the strongest passions,” he proposed in The Descent of Man. And today, 139 years later, some scientists still sign on to this interpretation. Dean Falk of the School for Advanced Research in Santa Fe, New Mexico, and Ellen Dissanayake of the University of Washington at Seattle accept the idea that a predisposition to music is hardwired, but they think Darwin misunderstood its primary function. They suggest that music evolved not only to serve love but also to soothe its aftermath. Mothers coo to their babies in a melodious singsong sometimes called motherese, a behavior that is unique to humans. Motherese is much the same in all cultures; its pitches are higher and its tempo slower than adult speech. What’s more, motherese is important for forming bonds between mother and child. Falk and Dissanayake argue that the fundamentals of music first arose because it helped form these bonds; once the elements of music were laid down, adults were able to enjoy it as well. A third faction holds that music evolved not from any one-on-one experience but as a way to bring groups together. Robin Dunbar, a psychologist at the University of Oxford, is now running experiments to test the idea that music evolved to strengthen the emotional bonds in small groups of hominids. Dunbar has spent much of his career studying bands of primates. One of the most important things they do to keep the peace is groom one another. Grooming triggers the primate brain’s hypothalamus to release endorphins, neurotransmitters that ease pain and promote a feeling of well-being. Our early ancestors may have engaged in similar behavior. As humans evolved, though, they started congregating in larger groups. By the time the average group size hit about 150, grooming was no longer practical. 
Music evolved, Dunbar proposes, because it could do what grooming could no longer do. Large gatherings of people could sing and dance together, strengthening their bonds. In a few studies, researchers have found that listening to music can raise the level of endorphins in the bloodstream, just as grooming can. Recently, Dunbar and his colleagues ran experiments to learn more about music’s soothing effects. If music was important for forging social bonds, then performing music (not just listening to it) might release endorphins too. Dunbar and his colleagues studied people who played music or danced together in church groups, samba classes, drumming circles, and the like. After the performances, the scientists made an indirect measure of the endorphin levels in the performers’ bodies, putting blood pressure cuffs on people’s arms and inflating them until the subjects complained of pain. (Since endorphins kill pain, a higher pain threshold indicates elevated levels of the compounds.) The researchers then repeated the procedure with employees of a musical instrument store who listened passively to constant background music. People who actively moved their bodies to music--dancers, drummers, and so on--had elevated pain thresholds, but no such effect showed up among those who merely listened. Aniruddh Patel, an expert on music and the brain at the Neurosciences Institute in La Jolla, California, finds Dunbar’s research unconvincing. If music evolved as a substitute for grooming, he notes, then you would expect that people with social impairments would have trouble with music. Those with autism have no trouble perceiving music, however. In fact, psychologist Rory Allen of Goldsmiths, University of London, has found that they have the same physical responses to emotional music that typical people do. In rejecting music as an evolutionary adaptation, Patel carries on an old tradition. 
William James, the pioneering psychologist, declared in 1890 that music was “a mere incidental peculiarity of the nervous system.” Rather than evolving as some essential adaptation, it “entered the mind by the back stairs,” James wrote. Harvard psychologist Steven Pinker echoed this view in his 1997 best-selling book, How the Mind Works. “As far as biological cause and effect are concerned, music is useless,” he declared. Music is a by-product of how we communicate with each other--nothing more than “auditory cheesecake,” in Pinker’s words. In the 13 years since Pinker coined that fetching phrase, neuroscientists such as Patel have collected evidence that supports the auditory cheesecake hypothesis, but only up to a point. When Patel and his colleagues examined the parts of the brain that handle different aspects of music--tone, rhythm, and so on--they found that there is no special lobe uniquely dedicated to those particular jobs. It looks as if music is riding the coattails of other parts of the brain that evolved for other functions. In a chapter of the recent book Emerging Disciplines, Patel describes how that borrowing process might work. Listening to the tones in instrumental music, for example, activates language regions of the brain that also process words and syntax. Those regions may make sense of tones by parsing melodies almost as if they were sentences. To keep a beat, Patel’s research suggests, we co-opt the brain network that links our hearing and the control of our muscles. This network’s main job is to allow us to learn new words. When babies start learning to speak, all words are just arbitrary sounds. To match their own vocalizations to the words they hear, they need a way to precisely adjust their tongue and vocal cords to mimic the sounds of those words. As adults, we can use this connection between hearing and muscles to keep a beat--but that is merely a side effect of being able to imitate sound. 
To explore these ideas, researchers are looking at animals that have the same skills. Vocal learning is rare in the animal kingdom. Only a few groups of birds and mammals can do it. Even our closest chimpanzee relatives can’t. Keeping a beat is rare as well. In recent experiments, Hugo Merchant and his colleagues at the National Autonomous University of Mexico tried to train rhesus monkeys to tap a button in sync with a metronome. The monkeys failed, even after thousands of trials. Intriguingly, some birds can master rhythm. Since 2008 Patel and his colleagues have been studying a cockatoo named Snowball. He can dance to any music with a strong beat, although he seems particularly fond of Cyndi Lauper and the Backstreet Boys. Patel doesn’t think it is a coincidence that Snowball belongs to a lineage of birds that excel at vocal learning. Like us, Snowball may be borrowing his vocal learning equipment to dance. Patel concludes that music is a cultural invention, not an evolutionary adaptation. Scientists are scanning the activity that music triggers in our neurons and observing how music alters our biochemistry. Regardless of how it arose long ago, music can exert a powerful effect on the brain right now. The brains of longtime musicians are transformed by years of practice, much as playing basketball or juggling can rewire the brain. In the past couple of years, neuroscientists have discovered that simply listening to music can change the brain too. Last year Sylvain Moreno of York University in Toronto and his colleagues showed that giving third graders nine months of music classes improved their ability to read. In a 2008 study, Finnish psychologists had stroke patients spend two months listening to music. Six months later, the patients had better verbal memory and attention than stroke victims who had not had music therapy. Some victims of stroke lose the ability to speak, and for these people music can have an especially great benefit. 
In a treatment called melodic intonation therapy, stroke patients practice singing short sentences as they tap out the rhythm. Gradually they increase the length of the sung sentences until reaching the point where they can start to speak. Gottfried Schlaug, a neuroscientist at Harvard Medical School, has found that melodic intonation therapy creates profound changes in the brain. In particular, it thickens a bundle of nerve fibers called the arcuate fasciculus, an information highway crucial for using language. So music may take advantage of the circuits that evolved for vocal learning, but once people invented it, Patel suggests, music efficiently spread from culture to culture because of its emotional appeal. Music proved to be a valuable tool to bring people together in ritual chants, tapping parts of the brain that normally detect emotions in other people's speech. Music also proved to be a great aid for memory, and so people used it when performing religious ceremonies and reciting epic tales like The Odyssey. Darwin had a hard time figuring out what music was good for, but our ancestors apparently had no trouble at all. Copyright 2010 Carl Zimmer
Students locate the literary devices used in Martin Luther King Jr.'s "I Have a Dream" speech. In this figurative language lesson plan, students first distinguish between similes, metaphors, analogies, personification, etc. Students watch a video of Dr. King's speech and work in groups to locate any figurative language included in the speech. Students create a presentation to share with the class what they learned.
Health Effects of Uranium What Types of Possible Health Effects Have Been Examined? Uranium is both a chemical and a radioactive material. Uranium's chemical toxicity is the principal health concern of DU exposure because some forms of uranium can potentially cause damage in the kidneys. A few people have developed signs of kidney disease after intake of large amounts of uranium, but this has not been found in those Service members with the greatest DU exposures from embedded DU fragments. Uranium's radiological hazards generally are of less concern, because both natural and depleted uranium (which is 40% less radioactive than the natural form) are only weakly radioactive. Although there is a chance of developing cancer from a radioactive material, no human cancer of any type has been seen as a result of exposure to either natural or depleted uranium. More detailed information on the chemical effects and radiation effects is included in other sections of this website. How is DoD Evaluating the Effects of Depleted Uranium on Health? In 1993, the DoD and the VA instituted a medical surveillance program for depleted uranium exposures occurring during the 1991 Gulf War. Since then the VA Medical Center in Baltimore, through its DU Medical Follow-up Program, has been evaluating almost 80 survivors of friendly fire incidents involving DU during the 1991 Gulf War. They are invited for comprehensive medical evaluations every two years. About one-fourth currently have embedded fragments of depleted uranium, and many have marked elevations of uranium in their urine. To date, there have been no adverse clinical effects noted in these individuals related to DU; specifically, there has been no kidney damage, leukemia, bone or lung cancer, or other uranium-related health effects. No babies born to this group have had birth defects. The VA plans to continue monitoring these individuals indefinitely. What do Other Organizations Say About the Health Effects of DU?
Scientific agencies outside the DoD and the VA have reviewed the evidence and determined that DU poses minimal risk to human health. The Department of Health and Human Services' Agency for Toxic Substances and Disease Registry, the RAND Corporation, the Institute of Medicine, the United Kingdom Royal Society, the European Commission, and the World Health Organization have all completed studies and concluded that there is no evidence that DU causes cancer. NATO reported no relationship between exposure to DU in Europe and health problems potentially attributable to radioactivity. The organization stated: The United Kingdom has stated that there is no reliable scientific or medical evidence to link DU with ill health of veterans of conflict in either the Persian Gulf or Balkans, or of people living in these regions. Many independent reports have been published and have failed to detect a relationship between DU exposure and illness, and none has found widespread DU contamination sufficient to impact the health of the general population or deployed personnel. In a 1999 study conducted by the RAND Corporation, the authors stated: "(N)o evidence is documented in the literature of cancer or any other negative health effect related to the radiation received from exposure to depleted or natural uranium, whether inhaled or ingested, even at very high doses." Other Important Publications
223 Physics Lab: Sample Lab 223 & 224 Lab Overview | Return to Physics Labs A typical simple pendulum consists of a heavy pendulum bob (mass = m) suspended from a light string. It is generally assumed that the mass of the string is negligible. If the bob is pulled away from the vertical with some angle θ₀, and released so that the pendulum swings within a vertical plane, the period of the pendulum is given as:

T = 2π√(L/g) [1 + (1/16)θ₀² + (11/3072)θ₀⁴ + ...]   (Equation 1)

where L is the length of the pendulum and g is the acceleration due to earth's gravity. Note that only the first three terms of the infinite series are given in Equation 1. The period is defined as the time required for the pendulum to complete one oscillation. That is, if the pendulum is released at some point, P, the period is defined as the time required for the pendulum to swing along its path and return to point P. The above formula for the pendulum's period is greatly simplified if we limit θ₀ to small values. If θ₀ is small, we can approximate the period of the pendulum with a first-order expression, which, in the case of our simple pendulum, is

T = 2π√(L/g)   (Equation 2)

Note that the period in this expression is independent of the pendulum's mass as well as the initial angle, θ₀. It is important to understand that the above equation is valid only in the small angle approximation. - Determine the maximum angle for which the first-order expression (Equation 2) for the period of a simple pendulum (Equation 1) is valid. In other words, ascertain the cutoff angle for when the small angle approximation fails. - Use a simple pendulum to determine the value of g, the acceleration due to earth's gravity. Equipment and setup - (Figure 1.) The pendulum stand, clamp and bob. - (Figure 2.) The pendulum string is secured with the pendulum clamp. Notice that it is not necessary to tie knots in the string. - (Figure 3.) The pendulum bob is an aluminum rod. The bob's center of mass is marked. - (Figure 4.) A protractor is used to determine the initial angle that the pendulum string makes with the vertical. - (Figures 5, 6 & 7.)
As the pendulum swings, a stopwatch or computer timing device is used to measure the pendulum's period. - (Figures 6 & 7.) The computer timing devices shown in these figures are found on our physics lab web page (Figure 6) and also in the Lab Programs folder found on the computer desktops of each laboratory computer (Figure 7). - (Figure 8.) The meter sticks are located in the window well at the front of the classroom. [Click on images to enlarge.] Hints and Cautions - Do not tie knots in the pendulum string. Instead, use the pendulum clamp to set the pendulum to the desired length. - Clemson Physics Lab Tutorials - Using significant figures - Using Excel - Graphing data - Adding a trendline to an Excel plot - Fitting multiple curves (trendlines) to one data set - Using error bars in Excel Lab Report Template Each lab group should download the Lab Report Template and fill in the relevant information as you perform the experiment. Each person in the group should print out the Questions section and answer them individually. Since each lab group will turn in an electronic copy of the lab report, be sure to rename the lab report template file. The naming convention is [Table Number][Short Experiment Name].doc. For example, the group at lab table #5 working on the Ideal Gas Law experiment would rename their template file as "5 Gas Law.doc". These Nudge Questions are to be answered by your group and checked by your TA as you do the lab. They should be answered in your lab notebook. Objective 1 Nudges - What method did you use to determine the initial angle of the pendulum? - What is the uncertainty of the timing device? - What is the uncertainty of your measurement of the period of the pendulum? - What steps did you take to decrease the uncertainty in the measurement of the period? - What is the uncertainty in the length measurement? Objective 2 Nudges - What method will you use to determine the cut-off angle?
- Would the length of the pendulum affect the uncertainty of the period and length measurements? - What parameters will you graph in order to measure g? - Did your best-fit line fall within the data points' error bars? Was this what you expected? - What initial angle will you use when working on the second objective? These Questions are also found in the Lab Report Template. They must be answered by each individual of the group. This is not a team activity. Each person should attach their own copy to the lab report just prior to handing in the lab to your TA. - Describe how the pendulum's period is affected if the bob's mass is doubled. Halved. Assume the period is independent of the initial angle. - Draw a free-body diagram of the pendulum at the top of this page. You may ignore friction forces. Write down the force that drives the system, that is, the force along the direction of motion. - A simple pendulum has a mass of 0.750 kg and a length of 0.500 m. What is the tension in the string when the pendulum is at an angle of 20°? - Show that for small angles, the driving force in Question 2 becomes F = -mgθ. Use the fact that sin θ ≈ θ at small angles and the image of the pendulum at the top of this page to help you. - Compare the pendulum's restoring force to the restoring force of the simple harmonic motion of an oscillating mass on a spring. The experiment was performed and a sample write-up was "graded" by the laboratory curator. The write-up and the curator's comments are available for you to view and refer back to when you write future lab reports. Data, Results and Graphs The data for this sample experiment is found in the sample lab report. Answers to Questions The questions for this sample experiment are found in the sample lab report. As of now, there are no associated with this experiment. If you have a question or comment, send an e-mail to the Lab Coordinator: 223 & 224 Lab Overview | Return to Physics Labs
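The cutoff-angle question in Objective 1 can also be explored numerically. The sketch below is mine, not part of the lab handout; the value of g and the class and method names are assumptions. It compares the first-order period, T = 2π√(L/g), against the three-term series of Equation 1 to see where the small-angle approximation starts to drift:

```java
public class PendulumPeriod {
    static final double G = 9.81; // m/s^2, assumed local value of g

    // First-order (small-angle) period, Equation 2: T = 2*pi*sqrt(L/g)
    static double firstOrder(double length) {
        return 2 * Math.PI * Math.sqrt(length / G);
    }

    // Three-term series, Equation 1:
    // T = T0 * (1 + theta^2/16 + 11*theta^4/3072), theta in radians
    static double threeTerm(double length, double thetaDeg) {
        double t = Math.toRadians(thetaDeg);
        return firstOrder(length) * (1 + t * t / 16 + 11 * Math.pow(t, 4) / 3072);
    }

    public static void main(String[] args) {
        double L = 0.500; // pendulum length in meters (example value)
        for (int deg = 5; deg <= 60; deg += 5) {
            double err = (threeTerm(L, deg) - firstOrder(L)) / firstOrder(L);
            System.out.printf("theta = %2d deg, relative error = %.4f%n", deg, err);
        }
    }
}
```

At 20°, for example, the series correction is roughly (0.35 rad)²/16 ≈ 0.8%, so the first-order formula is still good to about 1% at that angle; where you draw the cutoff depends on the precision of your timing measurements.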
Digestive Topics: Colonoscopy A colonoscopy is a test in which a doctor looks directly into the last part of the intestine with a narrow bendable tube mounted with a camera to find out why children have diarrhea, bleeding and stomach pain. Download the GIKids Colonoscopy Fact Sheet to learn why your child may need a colonoscopy, how to prepare for a colonoscopy and what to expect after a colonoscopy. Comic Strip - How to prepare for a colonoscopy - Bowel Prep - Courtesy of Harpreet Pall, MD, St. Christopher's Hospital for Children What is a colonoscopy? Colonoscopy is a test in which the gastroenterologist looks directly at the lining of the lower intestine (called “colon” or “large intestine”) with a narrow bendable tube mounted with a camera and light. This lubricated instrument is inserted into the sedated patient via the anus and can travel to just above the colon, into the lowest part of the small intestine. Tiny tissue samples (biopsies) are usually taken during the examination, and the doctor can also remove growths (polyps) using the scope. Why might a child need a colonoscopy? The most common conditions leading to colonoscopy in children are: blood in the stool, diarrhea of unknown cause, abdominal pain that might be due to intestinal inflammation, and follow-up of a chronic condition involving the lining of the intestine. What happens before and after the test? In the days right before the test, your child will need to take oral medicine to flush out all the stool, and will also need to drink lots of clear fluids. It is important that during this time he/she does not eat any solid food or drink any liquids that you can’t see through. For 2 or more hours right before the colonoscopy, your child cannot have anything to eat or drink, as this would make it less safe to have the sedation/anesthesia (sleeping medication) needed for the test. You will be given more detailed instructions on all of the above by your child’s doctor or nurse.
After the test, the doctor will tell you what was seen with the scope, and may have pictures of your child’s intestine to show you. You will get biopsy results later. Once your child is awake and drinking liquids, he/she can go home and start eating as before. A few children feel sick after the test and may be watched a little longer until they feel better. What are the risks of colonoscopy? Colonoscopy is a safe procedure, but does have some small risks. In general these are: a hole made in the intestinal wall, excessive bleeding, problems from the sleeping medications given for tests, or infections. Your gastroenterologist will go over these and any other risks related to your own child’s situation. What should we watch for after the colonoscopy? Your child may have a little blood in the stool for a day or so, and this is ok. There may be discomfort from gas in the intestine left over from the test, which will pass with time, as the child lets it out. However, if your child has any of the following, you should call your doctor or go to the emergency department: abdominal pain for more than an hour, an abdomen that is big and hard, bleeding more than about a spoonful, bleeding that continues beyond the second day, fever, or repeated throwing up. - Endoscopic Pictures of the Colon - National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) - American Gastroenterological Association - American Society for Gastrointestinal Endoscopy (ASGE) - Endoscopy Guide for Children - Guía de Endoscopia para los niños en español - School age General tips on preparing children for procedures: IMPORTANT REMINDER: This information from the North American Society for Pediatric Gastroenterology, Hepatology and Nutrition (NASPGHAN) is intended only to provide general information and not as a definitive basis for diagnosis or treatment in any particular case. It is very important that you consult your doctor about your specific condition. Updated October 2011
“Resilience is the process of adapting well in the face of adversity, trauma, tragedy, threats or significant sources of stress — such as family and relationship problems, serious health problems or workplace and financial stressors. It means “bouncing back” from difficult experiences. Research has shown that resilience is ordinary, not extraordinary. People commonly demonstrate resilience. One example is the response of many Americans to the September 11, 2001 terrorist attacks and individuals’ efforts to rebuild their lives. Being resilient does not mean that a person doesn’t experience difficulty or distress. Emotional pain and sadness are common in people who have suffered major adversity or trauma in their lives. In fact, the road to resilience is likely to involve considerable emotional distress. Resilience is not a trait that people either have or do not have. It involves behaviors, thoughts and actions that can be learned and developed in anyone.” -The American Psychological Association “Resilience is the power to adapt well to adversity. It is the process of coping with and managing tragedy and crisis in your life. It is ‘bouncing back’ from hard times, whether these be national disasters, such as the current financial crisis, a hurricane, or a terrorist attack, or personal disasters such as bankruptcy, divorce, or the death of a loved one. Research since September 11th suggests that resilience may be much more common than we thought. Although certain forms of temperament may be inherited that may help people to be more resilient in a crisis, and although certain forms of psychiatric or cognitive disorders may interfere with the learning of these skills, most of what makes up resilience is learned and can be taught. This is especially true of one of the key components of resilience: Optimism.
Being optimistic does not mean that we look at the world through rose-colored glasses or that we avoid pain or do not experience intense emotions when going through a crisis. Just the opposite. Resilient individuals are aware of their feelings and are able to discharge and manage them as well as deal with and manage others in a crisis. Resilience does not involve avoiding one’s feelings; it involves confronting and managing them. Being able to use thinking as a way of managing emotion is a major part of resilience.” -From Duct Tape Isn’t Enough by Dr. Ron Breazeale
binomial (bĪˌnōˈmēəl), polynomial expression (see polynomial) containing two terms, for example, x + y. The binomial theorem, or binomial formula, gives the expansion of the nth power of a binomial (x + y) for n = 1, 2, 3, …, as follows:

(x + y)^n = x^n + nx^(n−1)y + [n(n−1)/(1·2)]x^(n−2)y^2 + [n(n−1)(n−2)/(1·2·3)]x^(n−3)y^3 + … + y^n

where the ellipsis (…) indicates a continuation of terms following the same pattern. For example, using the formula and reducing fractions, one obtains (x + y)^5 = x^5 + 5x^4y + 10x^3y^2 + 10x^2y^3 + 5xy^4 + y^5. The coefficients 1, n, n(n − 1)/1·2, etc., of x and y may also be found from an array known as Pascal's triangle (for Blaise Pascal), formed by adding adjacent numbers to find the number below them as follows:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1

The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
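The Pascal's-triangle rule described above (each entry is the sum of the two adjacent entries in the row above) translates directly into a short program. This sketch, with names of my own choosing, computes the coefficients of (x + y)^n that way:

```java
public class BinomialCoefficients {
    // Row n of Pascal's triangle, built by repeatedly adding adjacent entries.
    static long[] pascalRow(int n) {
        long[] row = new long[n + 1];
        row[0] = 1;
        for (int i = 1; i <= n; i++) {
            // fill right-to-left so one array suffices for each new row
            for (int j = i; j >= 1; j--) {
                row[j] += row[j - 1];
            }
        }
        return row;
    }

    public static void main(String[] args) {
        // coefficients in the expansion of (x + y)^5
        for (long k : pascalRow(5)) System.out.print(k + " ");
        System.out.println();
        // prints: 1 5 10 10 5 1
    }
}
```

pascalRow(5) yields 1, 5, 10, 10, 5, 1, matching the coefficients of (x + y)^5 in the expansion above.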
Sounding rockets, or research rockets, are data-collecting spacecraft carrying scientific instruments to conduct short experiments during sub-orbital flight. They are typically used to test and calibrate satellite and spacecraft instrumentation, and fly for less than 30 minutes. Efficient and cost-effective, sounding rockets are small enough to launch from remote or temporary sites, and their experiments can be developed in about six months. The rockets are divided into two parts: the scientific payload, which carries the instruments for experimentation and data collection, and the rocket motor, which propels the rocket into space and separates from the payload after launch. Data collected by sounding rockets are transferred to researchers on the ground during the flight via telemetry, which is similar to how a radio system works. The payload remains in space for five to 20 minutes to conduct the experiment, and then returns to Earth under a parachute and is collected for future use. NASA currently uses 15 types of sounding rockets, which range in height from seven to 65 feet and can launch from 30 to over 800 miles into space.
The Way of the Java/Objects of Arrays

Objects of Arrays

In the previous chapter, we worked with an array of objects, but I also mentioned that it is possible to have an object that contains an array as an instance variable. In this chapter I am going to create a new object, called a Deck, that contains an array of Cards as an instance variable. The class definition looks like this:

class Deck {
    Card[] cards;

    public Deck (int n) {
        cards = new Card[n];
    }
}

The name of the instance variable is cards to help distinguish the Deck object from the array of Cards that it contains. Here is a state diagram showing what a Deck object looks like with no cards allocated. As usual, the constructor initializes the instance variable, but in this case it uses the new command to create the array of cards. It doesn't create any cards to go in it, though. For that we could write another constructor that creates a standard 52-card deck and populates it with Card objects:

public Deck () {
    cards = new Card[52];
    int index = 0;
    for (int suit = 0; suit <= 3; suit++) {
        for (int rank = 1; rank <= 13; rank++) {
            cards[index] = new Card (suit, rank);
            index++;
        }
    }
}

Notice how similar this method is to buildDeck, except that we had to change the syntax to make it a constructor. To invoke it, we use the new command:

Deck deck = new Deck ();

Now that we have a Deck class, it makes sense to put all the methods that pertain to Decks in the Deck class definition. Looking at the methods we have written so far, one obvious candidate is printDeck (Section printdeck). Here's how it looks, rewritten to work with a Deck object:

public static void printDeck (Deck deck) {
    for (int i = 0; i < deck.cards.length; i++) {
        Card.printCard (deck.cards[i]);
    }
}

The most obvious thing we have to change is the type of the parameter, from Card[] to Deck.
The second change is that we can no longer use deck.length to get the length of the array, because deck is a Deck object now, not an array. It contains an array, but it is not, itself, an array. Therefore, we have to write deck.cards.length to extract the array from the Deck object and get the length of the array. For the same reason, we have to use deck.cards[i] to access an element of the array, rather than just deck[i]. The last change is that the invocation of printCard has to say explicitly that printCard is defined in the Card class. For some of the other methods, it is not obvious whether they should be included in the Card class or the Deck class. For example, findCard takes a Card and a Deck as arguments; you could reasonably put it in either class. As an exercise, move findCard into the Deck class and rewrite it so that the first parameter is a Deck object rather than an array of Cards. For most card games you need to be able to shuffle the deck; that is, put the cards in a random order. In Section random we saw how to generate random numbers, but it is not obvious how to use them to shuffle a deck. One possibility is to model the way humans shuffle, which is usually by dividing the deck in two and then reassembling the deck by choosing alternately from each deck. Since humans usually don't shuffle perfectly, after about 7 iterations the order of the deck is pretty well randomized. But a computer program would have the annoying property of doing a perfect shuffle every time, which is not really very random. In fact, after 8 perfect shuffles, you would find the deck back in the same order you started in. For a discussion of that claim, see http://www.wiskit.com/marilyn/craig.html or do a web search with the keywords "perfect shuffle". A better shuffling algorithm is to traverse the deck one card at a time, and at each iteration choose two cards and swap them. Here is an outline of how this algorithm works.
To sketch the program, I am using a combination of Java statements and English words that is sometimes called pseudocode:

for (int i = 0; i < deck.cards.length; i++) {
    // choose a random number between i and deck.cards.length
    // swap the ith card and the randomly-chosen card
}

The nice thing about using pseudocode is that it often makes it clear what methods you are going to need. In this case, we need something like randomInt, which chooses a random integer between the parameters low and high, and swapCards, which takes two indices and switches the cards at the indicated positions. You can probably figure out how to write randomInt by looking at Section random, although you will have to be careful about possibly generating indices that are out of range. You can also figure out swapCards yourself. The only tricky thing is to decide whether to swap just the references to the cards or the contents of the cards. Does it matter which one you choose? Which is faster? I will leave the remaining implementation of these methods as an exercise to the reader. Now that we have messed up the deck, we need a way to put it back in order. Ironically, there is an algorithm for sorting that is very similar to the algorithm for shuffling. This algorithm is sometimes called selection sort because it works by traversing the array repeatedly and selecting the lowest remaining card each time. During the first iteration we find the lowest card and swap it with the card in the 0th position. During the ith, we find the lowest card to the right of i and swap it with the ith card. Here is pseudocode for selection sort:

for (int i = 0; i < deck.cards.length; i++) {
    // find the lowest card at or to the right of i
    // swap the ith card and the lowest card
}

Again, the pseudocode helps with the design of the helper methods. In this case we can use swapCards again, so we only need one new one, called findLowestCard, that takes an array of cards and an index where it should start looking.
Once again, I am going to leave the implementation up to the reader. How should we represent a hand or some other subset of a full deck? One good choice is to make a Deck object that has fewer than 52 cards. We might want a method, subdeck, that takes an array of cards and a range of indices, and that returns a new array of cards that contains the specified subset of the deck:

public static Deck subdeck (Deck deck, int low, int high) {
    Deck sub = new Deck (high - low + 1);
    for (int i = 0; i < sub.cards.length; i++) {
        sub.cards[i] = deck.cards[low + i];
    }
    return sub;
}

The length of the subdeck is high-low+1 because both the low card and high card are included. This sort of computation can be confusing, and lead to "off-by-one" errors. Drawing a picture is usually the best way to avoid them. Because we provide an argument with the new command, the constructor that gets invoked will be the first one, which only allocates the array and doesn't allocate any cards. Inside the for loop, the subdeck gets populated with copies of the references from the deck. The following is a state diagram of a subdeck being created with the parameters low=3 and high=7. The result is a hand with 5 cards that are shared with the original deck; i.e. they are aliased. I have suggested that aliasing is not generally a good idea, since changes in one subdeck will be reflected in others, which is not the behavior you would expect from real cards and decks. But if the objects in question are immutable, then aliasing can be a reasonable choice. In this case, there is probably no reason ever to change the rank or suit of a card. Instead we will create each card once and then treat it as an immutable object. So for Cards aliasing is a reasonable choice. As an exercise, write a version of findBisect that takes a subdeck as an argument, rather than a deck and an index range. Which version is more error-prone? Which version do you think is more efficient?
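The randomInt and swapCards helpers left as exercises in the shuffling section might be sketched as follows. This is one possible implementation, not the book's, and it uses a plain int array as a stand-in for the array of Cards so it can run on its own; swapping elements of the int array mirrors swapping Card references:

```java
import java.util.Random;

public class Shuffler {
    static Random random = new Random();

    // random integer between low and high, inclusive
    static int randomInt(int low, int high) {
        return low + random.nextInt(high - low + 1);
    }

    // swap the elements (standing in for Card references) at positions i and j
    static void swap(int[] a, int i, int j) {
        int temp = a[i];
        a[i] = a[j];
        a[j] = temp;
    }

    // the shuffle from the pseudocode: one pass, swapping each position
    // with a randomly chosen position at or to its right
    static void shuffle(int[] a) {
        for (int i = 0; i < a.length; i++) {
            swap(a, i, randomInt(i, a.length - 1));
        }
    }

    public static void main(String[] args) {
        int[] deck = new int[52];
        for (int i = 0; i < 52; i++) deck[i] = i;
        shuffle(deck);
        System.out.println(java.util.Arrays.toString(deck));
    }
}
```

Swapping the references rather than the contents is the cheaper choice here: each swap copies two references instead of copying card data, and the Card objects themselves never change.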
Shuffling and dealing In Section shuffle I wrote pseudocode for a shuffling algorithm. Assuming that we have a method called shuffleDeck that takes a deck as an argument and shuffles it, we can create and shuffle a deck:

Deck deck = new Deck ();
shuffleDeck (deck);

Then, to deal out several hands, we can use subdeck:

Deck hand1 = subdeck (deck, 0, 4);
Deck hand2 = subdeck (deck, 5, 9);
Deck pack = subdeck (deck, 10, 51);

This code puts the first 5 cards in one hand, the next 5 cards in the other, and the rest into the pack. When you thought about dealing, did you think we should give out one card at a time to each player in the round-robin style that is common in real card games? I thought about it, but then realized that it is unnecessary for a computer program. The round-robin convention is intended to mitigate imperfect shuffling and make it more difficult for the dealer to cheat. Neither of these is an issue for a computer. This example is a useful reminder of one of the dangers of engineering metaphors: sometimes we impose restrictions on computers that are unnecessary, or expect capabilities that are lacking, because we unthinkingly extend a metaphor past its breaking point. Beware of misleading analogies. In Section sorting, we saw a simple sorting algorithm that turns out not to be very efficient. In order to sort n items, it has to traverse the array n times, and each traversal takes an amount of time that is proportional to n. The total time, therefore, is proportional to n². In this section I will sketch a more efficient algorithm called mergesort. To sort n items, mergesort takes time proportional to n log n. That may not seem impressive, but as n gets big, the difference between n² and n log n can be enormous. Try out a few values of n and see. The basic idea behind mergesort is this: if you have two subdecks, each of which has been sorted, it is easy (and fast) to merge them into a single, sorted deck.
Try this out with a deck of cards: Form two subdecks with about 10 cards each and sort them so that when they are face up the lowest cards are on top. Place both decks face up in front of you. Compare the top card from each deck and choose the lower one. Flip it over and add it to the merged deck. Repeat step two until one of the decks is empty. Then take the remaining cards and add them to the merged deck. The result should be a single sorted deck. Here's what this looks like in pseudocode:

public static Deck merge (Deck d1, Deck d2) {
    // create a new deck big enough for all the cards
    Deck result = new Deck (d1.cards.length + d2.cards.length);

    // use the index i to keep track of where we are in
    // the first deck, and the index j for the second deck
    int i = 0;
    int j = 0;

    // the index k traverses the result deck
    for (int k = 0; k < result.cards.length; k++) {
        // if d1 is empty, d2 wins; if d2 is empty, d1 wins;
        // otherwise, compare the two cards

        // add the winner to the new deck
    }
    return result;
}

The best way to test merge is to build and shuffle a deck, use subdeck to form two (small) hands, and then use the sort routine from the previous chapter to sort the two halves. Then you can pass the two halves to merge to see if it works. If you can get that working, try a simple implementation of mergeSort:

public static Deck mergeSort (Deck deck) {
    // find the midpoint of the deck
    // divide the deck into two subdecks
    // sort the subdecks using sortDeck
    // merge the two halves and return the result
}

Then, if you get that working, the real fun begins! The magical thing about mergesort is that it is recursive. At the point where you sort the subdecks, why should you invoke the old, slow version of sort? Why not invoke the spiffy new mergeSort you are in the process of writing? Not only is that a good idea, it is necessary in order to achieve the performance advantage I promised. In order to make it work, though, you have to add a base case so that it doesn't recurse forever.
A simple base case is a subdeck with 0 or 1 cards. If mergesort receives such a small subdeck, it can return it unmodified, since it is already sorted. The recursive version of mergesort should look something like this:

  public static Deck mergeSort (Deck deck) {
    // if the deck is 0 or 1 cards, return it
    // find the midpoint of the deck
    // divide the deck into two subdecks
    // sort the subdecks using mergeSort
    // merge the two halves and return the result
  }

As usual, there are two ways to think about recursive programs: you can think through the entire flow of execution, or you can make the "leap of faith." I have deliberately constructed this example to encourage you to make the leap of faith.

When you were using sortDeck to sort the subdecks, you didn't feel compelled to follow the flow of execution, right? You just assumed that the sortDeck method would work because you already debugged it. Well, all you did to make mergeSort recursive was replace one sort algorithm with another. There is no reason to read the program differently.

Well, actually you have to give some thought to getting the base case right and making sure that you reach it eventually, but other than that, writing the recursive version should be no problem. Good luck!

pseudocode: A way of designing programs by writing rough drafts in a combination of English and Java.

helper method: Often a small method that does not do anything enormously useful by itself, but which helps another, more useful, method.
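Once you have attempted the exercise yourself, you may want to compare your work against a finished version. The sketch below is one possible solution, using plain int arrays instead of the book's Deck and Card classes so that it is self-contained and runnable; the structure of merge and the recursive mergeSort follows the pseudocode above:

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Merge two already-sorted arrays into one sorted result.
    public static int[] merge(int[] d1, int[] d2) {
        int[] result = new int[d1.length + d2.length];
        int i = 0;   // position in d1
        int j = 0;   // position in d2
        for (int k = 0; k < result.length; k++) {
            if (i >= d1.length) {                // d1 is empty: d2 wins
                result[k] = d2[j++];
            } else if (j >= d2.length) {         // d2 is empty: d1 wins
                result[k] = d1[i++];
            } else if (d1[i] <= d2[j]) {         // otherwise, compare the two "cards"
                result[k] = d1[i++];
            } else {
                result[k] = d2[j++];
            }
        }
        return result;
    }

    // Recursive mergesort: base case, split at the midpoint, sort halves, merge.
    public static int[] mergeSort(int[] deck) {
        if (deck.length <= 1) {
            return deck;                         // 0 or 1 cards: already sorted
        }
        int mid = deck.length / 2;
        int[] left = Arrays.copyOfRange(deck, 0, mid);
        int[] right = Arrays.copyOfRange(deck, mid, deck.length);
        return merge(mergeSort(left), mergeSort(right));
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{7, 2, 9, 4, 1})));
        // prints [1, 2, 4, 7, 9]
    }
}
```

Note how the base case guarantees termination: each recursive call works on a strictly smaller array, so every branch eventually reaches length 0 or 1.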
Alliteration Activity Pack

Have fun with funky phrases with this activity! There are two different resources included in this pack that target alliteration. Alternate between a story-time session and a train-car word activity!

- 18 Alliteration Story Cards
- Alliteration Train Cards (A-Z)

How To Prepare and Use:

- Alliteration Story Cards
  - Option 1: Have your kiddo read the story cards, or read them to your kiddo. Talk about what makes each story an alliteration.
  - Option 2: Cut the sentence off of the story card, and have your child either match the sentences to the stories, or have them identify the alliterative words in the artwork.
- Alliteration Train
  - Cut out the train sections.
  - Place the letters and word cards in front of your kiddo, and have them match the words to the correct letter to create a full train.
By Jackie Whiting

Students frequently ask: can you help me find a source that's not biased? When they ask that question we know what they mean, but what it shows us is that students need to learn that 1) there are degrees of bias and 2) everyone has bias, so 3) there is no such thing as an unbiased source. Instead, we need to teach students to recognize what a text creator's bias is and how or whether that bias negates the usefulness of that source for the student's purpose. Today we worked with a class of grade 11 students doing research for an in-depth research paper. The focus of the class unit is on the relationship between socioeconomic status and educational experience, so this topic will frame the research questions the students are seeking to answer. To facilitate the students' resource selection and understanding of the impact of bias on source credibility, we worked with the class to unpack an editorial from the "Room for Debate" section of the New York Times responding to the question: "Is School Reform Hopeless?" We scaffolded this exercise to help students begin to understand their own biases on this topic and how their bias will influence how they understand what they read and how they convey what they ultimately write. We selected one of the editorials and provided the students just the conclusion to that text. We selectively removed words from the paragraph and asked students to replace the blanks with whatever word they each thought would best convey the meaning of the paragraph. When they completed this exercise individually, we asked them to work with 2 or 3 other students in the class to compile their words on one document and compare how they each completed the paragraph and how their choice of words changed the meaning of the paragraph. The pictures below are of the excerpted paragraph with the students' words on post-it notes.
Here is an example of a phrase with blanks to be filled: ...too many are climbing stairwells with broken handrails and missing steps, tripping and falling as they ________ to keep up, while others are _________ up on elevators... In one group students said: struggling to keep up, while others racing up. trying to keep up, while others rising up. attempting to keep up, while others moving up. The students were able to see that racing implies competition, rising implies progress and maybe increase in status, while moving is more passive. They were surprised that none of those were the words that the author used but they couldn't think of another word to use. The actual sentence is: "...too many are climbing stairwells with broken handrails and missing steps, tripping and falling as they work to keep up, while others are zooming up on elevators..." Certainly working implies a conscious sense of purpose and purposefulness to the effort that is not reflected in struggle, try or attempt. Work may also imply a degree of success and ability absent in those other terms. Zooming also has a very different connotation than the words the students chose, particularly in contrast to working. So, we asked students to compare their bias with that of the author and consider how differing opinions might influence their assessment of the source's credibility. For the next phase of this exercise, we provided the students with the rest of the editorial where we had highlighted words or phrases and added questions to invite students to discuss the writer's choice of word and how those words affected the meaning of her editorial. Here is an example paragraph: "In addition to attending to these basic survival needs, schools have to attract experienced teachers and leaders with the right sensibilities and training to educate youth from diverse social and cultural backgrounds. 
Successful school districts also enhance youth development through extracurricular activities and additional enrichment. When families cannot afford costly after-school programs, personal tutors and experiential summer vacations, effective school-communities invest in programs to offset these opportunity gaps." Here are the questions we posed corresponding to each of the highlighted phrases:

- What does this phrase imply? (basic survival needs)
- What do you think these are? (right sensibilities)
- How is this different than education? (youth development)
- What other gaps have you heard of? (opportunity gaps)

As they shared their conclusions and questions, the students raised questions like: what does equity mean? One student said it meant equality. At that point, we directed the class to the Allsides Dictionary to see how it defines equity and the cartoon it uses to distinguish "equity" from "equality". We think this resource is incredibly valuable to students as they learn to navigate the information they encounter and develop information literacy -- particularly in the face of fake news!
This lesson focuses the students' attention on the similarities and differences between the ancient Greeks and Romans. Did one society influence the other? What role did democracy play in governing these societies? How did these societies influence the development of the United States government? This lesson requires students to analyze and compare the limited powers of medieval kings. This lesson encourages students to identify the rights of property and due process outlined in the Magna Carta, and relate them to the rights of American citizens today ensured by the United States Bill of Rights. Learn about the philosophies of Machiavelli and Hobbes and how they may have influenced ideals mentioned in the First Amendment. In this lesson, the students will review antecedent documents and describe the rights of citizens under the Petition of Right. Students will also analyze the connection between economics and government. This lesson focuses on the Toleration Act of 1689. In small groups, students will analyze the segments of this document, rewrite a modern version providing an explanation of the religious freedom granted to citizens, and draw connections between the Toleration Act of 1689 and the Bill of Rights within the United States Constitution. This lesson will explore the economic, political and religious motivations of explorations by various countries during the time period of 1450 to 1650. This lesson focuses on the hardships and the accomplishments of various explorers in their quest to find the riches of Asia and to discover and claim new land for their country. This lesson reviews navigation technology that was available to early explorers. The students will have an opportunity to construct an astrolabe, quadrant or compass, discuss the similarities and differences between these tools, and how they compare to navigation technology today. This lesson examines the global impact, for better or worse, of the Columbian Exchange.
First, students will build schema and geography skills by completing a map activity. Next, teachers will broaden that schema by introducing key terms and basic background information. Then, working cooperatively, students will dig deeper through a guided, structured research activity. Finally, students will use all of this information to construct a five-paragraph essay describing in detail some of the changes that were brought about by the Columbian Exchange. In this lesson, students will work in small groups to research the culture and political system of five Iroquois tribes. Using this information, they will plan and conduct a council meeting to discuss possible solutions to problems each tribe is facing. In this lesson, students will analyze the Declaration of Independence and identify the three main points of the document. Students will then compare and contrast the Declaration of Independence with the Mayflower Compact. In this lesson, students will explore the ideals the Founding Fathers used to create the Constitution of the United States. Through role play, students will identify John Winthrop, describe how his beliefs about society impacted the Puritans of the Massachusetts Bay Colony, and how those ideals in turn connect to the Constitution of the United States. Here students will have an opportunity to gain a better understanding of the various motives of early colonists to begin a new life and the challenges they faced. How did each group of colonists cope with these challenges? Did all colonists face the same types of challenges? Students will be able to answer these questions and be able to compare the decisions various societies made to promote a better life for all citizens. What does it mean to have religious freedom? In this lesson, students will compare and contrast the rights of Chapters 16 and 17 of the West Jersey Charter with the First and Seventh Amendments of the Bill of Rights.
In this lesson, students will examine how freedom of religion is important to a free society today and as a motivation to the early colonists of the 1700's. Students will also have an opportunity to review and analyze the Primary Source, Charter of Privileges, 1701. In this lesson, students in small groups will research information on and construct a timeline of the French and Indian War and the Seven Years War. Based upon their research, students will respond to the question, How did the French and Indian War impact the American Colonies? This lesson introduces students to two new Revolutionary War topics while reinforcing information learned in previous lessons. The lesson employs the ARTIST teaching method to help students analyze a primary source document. Throughout the lesson, students will engage in individual, partner, and whole group activities in order to master the objectives. Students should have demonstrated adequate understanding of the Boston Tea Party and the Coercive Acts before beginning this lesson. This lesson reinforces prior knowledge and encourages critical thinking. First, students combine prior knowledge with secondary source analysis to gain a better understanding of the circumstances that led to Shays’ Rebellion. Then, students will use all of this information to draw their own conclusions about the overall importance and impact of this event. Conduct this lesson after students have been exposed to the weaknesses of the Articles of Confederation and the Annapolis Convention. With this lesson, students examine both secondary and primary sources to discover both the causes of and the events that occurred on the evening of March 5, 1770. The lesson employs “Frame of Reference,” a strategy in which students examine different views of an event, promoting critical thinking skills such as the ability to detect bias and to make reasoned judgments based on evidence.
A cooperative learning strategy called “Think-Pair-Share” is used to facilitate the lesson. In this lesson, students will analyze President George Washington's quotes and speeches. What was his decision about continuing on as President of the United States? What advice did he give in his Farewell Address regarding federal government and foreign relations? Has his advice been applied by other Presidents of the United States? In this lesson, students will define and discuss the differences between rights, privileges and licenses. Using quotes from James Madison and Thomas Jefferson, students will analyze the argument for including a Bill of Rights in the United States Constitution. In this lesson, students will investigate the American System and gain an understanding of what role the United States government had in the economy during the 1800s. How did the Federalists and Democratic-Republicans differ in their point of view on this matter? Is the American System still in effect today? In this lesson, students will determine why Napoleon Bonaparte decided to sell France's land within the United States. What prompted him to sell such a large area of land for a low price? The students will identify and locate current states that were a part of the Louisiana Purchase. How did this purchase of land affect the growth of the United States? Using the information researched, students will create a story/newscast about the Louisiana Purchase from the perspective of either an American or French news reporter. In this lesson, students will analyze President Thomas Jefferson's decision on the Embargo Act of 1807. Was the Embargo Act a good idea? What challenges were encountered? Was it successful? How does the Embargo Act of 1807 connect to the United States' dependence on Mideast oil-producing companies today? In this lesson, students will identify who was involved in the War of 1812. What events and motives led to this war? What was the outcome of this war?
Students will develop a classroom presentation based upon the information they gather. In this lesson, students will describe the Monroe Doctrine, presented by President James Monroe to Congress in 1823, and the events that led to the development of this document. Students will discuss the purpose of the Monroe Doctrine and whether or not they agree with its position. In this lesson, students will identify slavery as the cause of debate which led to the Missouri Compromise of 1820. How will slavery be addressed within new states admitted to the Union? What were the terms of the Missouri Compromise? Was it successful in preventing a confrontation over slavery? Students will read background information on the Missouri Compromise and color code a map to outline the Missouri Territory and the Union as it was in 1820. The students will take part in a classroom role play to help understand how lawmakers need to work toward compromise between differing points of view in order for change to take place. In this lesson, students will use Primary Sources to analyze how the development of industry in the north changed life in the United States just prior to the Civil War. Were most people for or against the growth of industry? How does the industry of the 19th century compare to industry today, both in the United States and globally? In this lesson, students will be able to gain an understanding of the routine and conditions of life for the Lowell Mills girls. How was their day structured? What were their responsibilities? What were the conditions of the factories the girls endured? The students will use their writing skills to develop a narrative describing life in the Lowell Mills. In this lesson, students will explore the controversy surrounding President Andrew Jackson’s Indian Removal policy. Using primary sources, students will read the words of those involved, and compare the viewpoints of Jackson, Cherokee leaders, and other leading politicians.
After analyzing and assessing these viewpoints, students will place the legacy of Indian Removal, which culminated in the Trail of Tears, in the history of United States relations with Native Americans. Through the use of Primary Sources, students will have an opportunity to explore the growth of industry in the northern United States prior to the onset of the Civil War. In this lesson, students will explore the conditions of daily life that slaves endured and determine if these conditions were cause for slaves to rebel against their owners or be more compliant. Students will use creative writing skills to develop a letter to an abolitionist newspaper from a fictitious slave describing the horrible conditions of slavery. In this lesson, students will have the difficult task of devising an original plan to appease multiple parties on the issue of slavery and how new territories will address slavery during the mid 1800s. Students will then review Henry Clay's 1850 Resolution, examine the terms of the Compromise of 1850 and determine what was gained and lost as a result of the Compromise of 1850. In this lesson, students will examine the origins and history of the Know-Nothing Party. Students will also investigate and debate how immigration was perceived by political and religious leaders and society during the mid to late 1800s. As an introduction to this lesson, students will analyze the lyrics of Follow the Drinking Gourd. Students will work in small groups to complete web quests on a given aspect of the Underground Railroad and determine whether the Fugitive Slave Act positively or negatively impacted various people during this time period. In this lesson, students will examine how the issue of slavery was being handled by the United States government and the reaction of people in both the north and south.
The students will review the terms of the Kansas Nebraska Act along with political cartoons and determine if the Kansas Nebraska Act was the best possible solution at that time. Using their creative writing skills, students will create a newspaper article highlighting aspects of history that revolve around the development of the Kansas Nebraska Act. In this lesson, students will investigate the purpose of a Presidential Inaugural Address. Students will analyze Lincoln's First Inaugural Address and how Lincoln structured his speech to reflect his beliefs while trying to preserve the delicate nature of the country during that turbulent time. Students will have an opportunity to develop their own inaugural speech from Lincoln's perspective. How radical were the Radical Republicans of the 1860s? In this lesson, students will analyze their ideals and proposals and determine how radical they were during that time period as well as develop a comparison to today's Republicans. Throughout history, times of great conflict tend to bring out multiple perspectives among citizens and politicians alike. Can the conflict be resolved peacefully? Is declaring war the only option? How will the government bolster support for a lengthy war effort? In this lesson, students will investigate how the Peace Democrats of the 1860s, commonly called Copperheads, responded during the Civil War and Reconstruction Eras. Can a connection be drawn between the reactions to the crisis situation of the 1860s and those of more recent history? In this lesson, students will assess the support President Lincoln received from the Democrats regarding the Civil War. What was the point of view of the War Democrats versus the Peace Democrats? What events led former Union General George McClellan to change his support of the war? With an election on the horizon in 1864, would George McClellan become a Democratic nominee?
Students representing both the War Democrats and Peace Democrats will role play a meeting to discuss their support of the Civil War and the nomination of George McClellan as the Democratic candidate in the election of 1864. How did the British react to secession and the Civil War conflict evolving within the United States? In this lesson, students will examine the events of the Trent Affair and determine whether the British supported the Union or Confederacy, or if they took more of a neutral position during this time. How was the United States Civil War perceived by European countries? This lesson provides students an opportunity to explore how the French government reacted to the United States Civil War. Did they favor the efforts of the Union or did they favor the ideals of the Confederacy and why? Students will research this information and also analyze political cartoons illustrating these issues. In this lesson, students will examine the events of the Battle of Gettysburg and determine why this particular battle was a pivotal event in the Civil War. In this lesson, students will analyze President Abraham Lincoln's Gettysburg Address. Students will identify the Gettysburg Address as Lincoln's most famous speech. Students should pay specific attention to the way in which Lincoln used past and current events to develop future goals for the nation and use this writing method to develop an original speech in honor of Veterans Day. In this lesson, students will have an opportunity to analyze the events of the Battle of Fredericksburg and write a newspaper article about the historic event from the perspective of either a northern or southern newspaper reporter. In this lesson, students will analyze and discuss the terms of the Emancipation Proclamation. What was the purpose of this document? Did it free all slaves?
Students will answer these questions as well as create maps outlining the United States prior to the Civil War, how it changed during the Civil War, and the impact of the Emancipation Proclamation on the southern and border states. Reflecting upon the economic, social, and political differences between northern and southern states which eventually led to secession and the Civil War, students will compare and contrast the United States Constitution with the Confederate Constitution. Students will have an opportunity to discuss if any of their findings came as a surprise. How did the Transcontinental Railroad change the west? In this lesson, students will investigate the contributions of immigrant groups to the building of the railroad and explore how geography and technology impacted the construction. The Homestead Act was a key factor in westward migration. What were the requirements for land ownership? What challenges did migrants face? Students will investigate the impact of the Homestead Act on the development of the West and discuss the lifestyle of migrants. What was life like for Irish immigrants? In this lesson, students will investigate the factors that led to Irish immigration to America. They will compare their lives to the life of an immigrant child. What were the factors that contributed to Italian immigration to America? Students will investigate the push/pull factors and compare Italian immigration to America to Italian immigration to other countries. What were the contributions of Jewish immigrants? Students will investigate the factors that led to Jewish immigration to America and their contributions to culture and society. What led to the rise of labor unions? In this lesson, students will examine the factors that contributed to the formation of labor unions and the response of industry and business. What factors contributed to the rise of urban street gangs?
In this lesson, students investigate the factors that led to the rise of urban street gangs and compare late 19th century gangs to gangs of today. How did society respond to the rise of urban street gangs? In this lesson, students will investigate the rise of police forces in America and debate the role of the police in society. Why did political machines develop? In this lesson, students will investigate the factors that led to the rise of political machines in the mid to late 19th century and their role in society. They will discuss the strengths and weaknesses of the political machine system. How did John D. Rockefeller create Standard Oil? In this lesson, students will research the steps in the creation of Standard Oil and investigate Rockefeller’s contributions to American industrial growth. They will examine why Standard Oil was considered a monopoly. What were the positive and negative effects of the steel industry? In this lesson, students will investigate the life and contributions of Andrew Carnegie and discuss how his life illustrates the accomplishments and problems of the Industrial Revolution in America. What is the relationship between banks and American industry? In this lesson, students will investigate the role and function of the banking industry. They will research the contributions of J.P. Morgan and participate in a banking simulation. In this lesson, students will identify and analyze Natural Rights as outlined in the Declaration of Independence. Did these rights apply to all citizens? Did they exclude certain members of society? The students will examine the purpose of the Seneca Falls Convention of 1848 and the challenges women faced along the path to equal rights as American citizens. In this lesson, students will analyze the terms of the Open Door policy and how it impacted the relationship between China and the United States. 
In this lesson, students will identify the major accomplishments as well as challenges of Woodrow Wilson's presidency. The students will compare and contrast the views of the 20th and 21st century progressive political parties. In this lesson, students will identify and analyze the factors that propelled American military involvement in World War I. In this lesson, students will identify and analyze President Woodrow Wilson's position for United States neutrality during the onset of World War I and how his position changed in 1917. In this lesson, students will evaluate the American military involvement in World War I that led to the Allied victory. In this lesson, students will review President Woodrow Wilson's Fourteen Points, a plan for international cooperation and preserving peace among nations, and how President Wilson responded to those who opposed his plan, such as Senator Henry Cabot Lodge. In this lesson, students will examine the economic impact World War I had on American society and compare it to the result of past American wars. In this lesson, students will identify and analyze President Theodore Roosevelt's accomplishments and use research evidence to determine his role as a "Founding Father" of the United States of America. In this lesson, students will examine and evaluate the motives and results of Madison Grant's ideas on conservation and immigration, and how racial or cultural differences among citizens impact society. In this lesson, students will explore the new era of technology that catapulted the use of radio for entertainment and as a way to reach citizens with news broadcasts. In this lesson, students will explore the economic system of the federal government during the 1920's and discuss how individuals manage family budgets. Students will also identify Andrew Mellon and his role in the federal budget of the 1920's. In this lesson, students will explore the cultural aspects of the 1920's.
Students will identify the major sporting events of the time period and develop a creative news broadcast announcing the outcome of a particular sporting event. Through the use of Primary Sources, students will analyze the Good Neighbor Policy instituted by President Franklin D. Roosevelt during the 1930's. In this lesson, students will investigate the causes and effects of the Great Depression. Students will explore the purposes of the stock market and various ways people manage and invest money. In this lesson, students will identify and analyze measures the United States government took to counteract the Great Depression, specifically the New Deal. Students will compare and contrast Hoover's and Roosevelt's responses to the Great Depression and use Primary Sources to explore how government can promote economic prosperity. In this lesson, students will research and identify Hitler's acts of aggression prior to the onset of World War II and analyze the events that led to, and the outcome of, the Munich Conference. This lesson requires students to use geography skills to identify and label the European countries that were under German control by the summer of 1941. By researching songs of the time period, the students will examine what life was like for either American or European citizens during World War II. In this lesson, students will research and identify the who, what, when, how and why pertaining to the bombing of Pearl Harbor in 1941 and how the nation reacted to the news of this event. Compare our nation's reaction to the bombing of Pearl Harbor with the reaction to the events of 9/11. How would you describe the mood of the people of the United States? In this lesson, students will determine the historical significance of the Battle of Midway. What events led to this battle and what was the outcome?
The students will create a journal on the Battle of Midway from a given perspective outlining facts as well as personal thoughts that person may have had about the events before, during and after the battle. This lesson provides students with an opportunity to examine how propaganda posters were used to rally American support for World War II. Students will research the event of D-Day. To bring an end to World War II, United States President Harry Truman made the final decision to drop atomic bombs on Hiroshima and Nagasaki in Japan. In this lesson, students will research the history of the atomic bomb: creating the bomb, the decision to use the bomb, and the results of using the bomb. The students will create and debate a list of the pros and cons of the decision to use the atomic bomb. Using the information they have gathered from the lesson, students, in small groups, will create a newspaper article outlining the events surrounding the bombing of Japan. In this lesson, students will be able to describe the Atomic Age. What events led to the creation of the atomic bomb? How was nuclear energy developed into a weapon? Who invented it? What purpose did it serve? The students will research and debate the United States' decision to use the atomic bomb in 1945, using evidence to support their point of view. Along with having great power comes the responsibility of using it wisely. In this lesson, students will review the Theory of Relativity and how it helped to formulate the energy source for nuclear weapons. Why would nuclear weapons be necessary and so greatly coveted by various governments? What would be the destructive outcome of using these weapons? Students will explore which countries have the scientific technology to generate nuclear weapons and why good judgment and responsibility are important in having access to such power. In this lesson, students will explore the events that led to the Korean War in 1950.
Which countries engaged in the war and how was the United States specifically involved? In this lesson, students will identify the historical significance of July 20, 1969 and the crew members of Apollo XI. The students will track the history of United States space flight and research information to develop a biography on one of the crew members. What was the Civil Rights movement? Who was involved and what did they try to accomplish? What challenges did they encounter? In this lesson, students will use Primary Sources and research to explore the answers to these very important questions. In this lesson, students will be able to explain the development and purpose of the United Nations and how the United Nations interceded in the conflict that occurred in the Middle East during the late 20th and early 21st century. Imagine your city suddenly being divided by a wall you could not cross. This lesson explores the reasons why the Berlin Wall was established by the Soviets in 1961 and the impact it had upon German citizens as well as the reaction around the world, particularly the United States. In this lesson, students will explore the events that led to the beginning of the Gulf War in the early 1990s. Why and how did the United States engage in this war? In this lesson, students will identify and discuss the structure and beliefs of Al Qaeda. How do they differ from our culture and what may have prompted their hostility toward the United States?
English Yeomen in the 16th Century Social and Economic Status, Compiled by D. B. Scudder, Reprinted from Scudder Searches, volume V, no. 2, (Summer 1993) In the 15th century, when Thomas of Salem and his brother, the Rev. Henry of Collingbourne Ducis, and their parents were born, the yeoman occupied an important position in the rural middle class. The term “yeoman” first appeared in the 14th century following the Black Death (bubonic plague). Yeomen were extolled in ballads as “independent” (i.e., not subservient), skilled in the use of the “English long bow” (made from the yew, from which the term is derived), and characterized as of “hearty good nature.” The early ballads of Robin Hood identify Robin as a yeoman, although later versions describe him as a disguised member of the gentry. Yeomen carried the field at the battle of Agincourt in 1415, where Henry V won a major victory against the French, slaughtering countless French knights with their powerful bows and well-placed arrows. Originally, a yeoman could have been a farmer, an armed retainer of an aristocrat, or most any other countryman of the “middling classes.” Later, however, a freehold land ownership requirement was added. As the Middle Ages drew to a close during the 16th century, yeomen were more numerous, wealthier, and more important than in any other age, before or after. A constant motif in the literature of the time was that the yeoman is the best type of Englishman, holding society together, neither clinging to the high, nor despising his poorer neighbors, hearty, hospitable, and fearless. Today the English countryside is not only full of Elizabethan mansions but also of the more modest houses of Tudor or early Stuart architecture which once were the manor houses of small gentry or the seats of prosperous freehold yeomen. It was a great age for the rural middle class. Yeomen were one rung on the social ladder below the landowning gentry, who included most of the Lords of the Manors.
The gentry, as “gentlemen,” were entitled to bear (coats of) arms and, along with most of the clergy, were addressed as “Mr.” But, the differences between the “small” gentry and the prosperous yeoman were slight, and there was considerable mobility between the two groups, downward as well as upward. As the Reverend William Harrison, writing in 1577, puts it: For the most part, yeomen are farmers to the gentleman, but many are able and do buy the lands of unthrifty gentlemen, sending their sons to the schools and universities, and to the Inns of Court [English law school], or otherwise leaving them sufficient lands whereupon they may live without labor, [and] do make them by those means to become gentlemen. While the prestige of being a yeoman was below that of being a “gentleman,” it was not far below, and it was still well above that of the bulk of the rural population. Thus, it is small wonder that many of our colonial ancestors, if they could legitimately make the claim, identified themselves as yeomen in their wills. David B. Scudder, Scudder Searches, volume V, no. 2, (Summer 1993), 8. Charlesdrakew, “Cottage in Bignor, West Sussex, England,” 2009, public domain, https://commons.wikimedia.org/wiki/File:Bignor_cottage.jpg.
A "corn" is a small circular thickened lesion in the skin of the foot. It usually forms due to repeated pressure on the skin, such as the rubbing of a shoe. The name "corn" comes from its resemblance to a kernel of corn. A corn is different from a callus in that it has a central core of hard material. People with foot deformities, such as hammertoes, often suffer from corns because the tops of the bent toes rub against the tops of shoes. There are a number of treatment options for corns. When corns get hard enough to cause pain, a foot and ankle surgeon will recommend the treatment option most appropriate for you. However, if the underlying cause of the corn is not treated or removed, the corn may return. It is important to avoid trying to remove a corn at home or using medicated corn pads, as serious infection may occur. To learn more, listen to the Corns and Hammertoes podcast.
In a small clinical trial led by the Johns Hopkins Bloomberg School of Public Health, researchers say that a promising single-dose dengue vaccine, developed by scientists at the National Institutes of Health, was 100 percent effective in preventing human volunteers from contracting the virus, the most prevalent mosquito-borne virus in the world. The findings, published March 16 in Science Translational Medicine, could be the final puzzle piece in developing a vaccine that is effective against dengue, which infects nearly 400 million people across more than 120 countries each year. While most of those who are infected with dengue survive with few or no symptoms, more than two million people annually develop what can be a dangerous dengue hemorrhagic fever, which kills more than 25,000 people each year. Preventing dengue has been a particular challenge. A three-dose vaccine called Dengvaxia received limited licensure in 2016 in Mexico, the Philippines, and Brazil. That vaccine produced antibodies against dengue in a clinical trial and protected against dengue during the first year after vaccination. But two years after vaccination, children who were under the age of nine when they received the vaccine were hospitalized for dengue at a significantly higher rate than those who received the placebo. For this reason, the researchers, led by Dr. Anna P. Durbin, an associate professor in international health at the Bloomberg School, were concerned that measuring antibodies alone may not truly indicate the ability of the vaccine to protect against dengue.
Culture-based education (CBE), and more specifically Hawaiian culture-based education (HCBE), is a key lever to achieving Kamehameha Schools' (KS) Vision 2040 of a thriving lāhui. We believe that HCBE instills confidence and resiliency in Native Hawaiian learners to improve the well-being of the lāhui. An HCBE system engages Native Hawaiian learners to reach positive socio-emotional and academic outcomes. For that reason, KS is committed to creating and promoting an HCBE system where all students, Native Hawaiian learners in particular, will thrive and reach their full potential. CBE is grounded in the foundational values, norms, knowledge, beliefs, practices, experiences, and language of an (indigenous) culture. It “places significance on Native language; place-based and experiential learning; cultural identity; holistic well-being; and personal connections and belonging to family, community, and ancestors” (Alcantara, Keahiolalo, and Peirce, 2016). The literature base for CBE describes five basic elements that comprise this approach: Language, Family & Community, Context, Content, and Data & Accountability. In HCBE, the five elements of CBE are applied specifically from a Native Hawaiian perspective. For example, HCBE practitioners strive to incorporate ʻŌlelo Hawaiʻi (Hawaiian language) in the classroom and involve family and community in the development of Hawaiian-centered curricula relevant to learners. By sustaining the values, traditions, and language of Hawaiʻi through HCBE, we hope to see Native Hawaiians grow in success and contribute to their communities both locally and globally. This HCBE collection includes exclusively research-focused resources that explore CBE and HCBE in varying contexts. Users should make their own assessments of the quality of the data from these sources. It is our hope that these resources will support your journey to ʻimi naʻauao, or seek wisdom, and so strengthen the lāhui.
If you would like a research study to be included in this collection, please email us at [email protected].
Life on Earth does not enjoy change, and climate change is something it likes least of all. Every aspect of an organism’s life depends on climate, so if that variable changes, everything else changes too – the availability of food and water, the timing of migration or hibernation, even the ability of bodily systems to keep running. Species can adapt to gradual changes in their environment through evolution, but climate change often moves too quickly for them to do so. It’s not the absolute temperature, then, but the rate of change that matters. Woolly mammoths and saber-toothed tigers thrived during the Ice Ages, but if the world were to shift back to that climate overnight, we would be in trouble. Put simply, if climate change is large enough, quick enough, and on a global scale, it can be the perfect ingredient for a mass extinction. This is worrying, as we are currently on the cusp of a potentially devastating period of global warming, one that we are causing. Will our actions cause a mass extinction a few centuries down the line? We can’t tell the future of evolution, but we can look at the past for reference points. There have been five major extinction events in the Earth’s history, which biologists refer to as “The Big Five”. The Ordovician-Silurian, Late Devonian, Permian-Triassic, Late Triassic, Cretaceous-Tertiary…they’re a bit of a mouthful, but all five happened before humans were around, and all five are associated with climate change. Let’s look at a few examples. The most recent extinction event, the Cretaceous-Tertiary (K-T) extinction, is also the most well-known and extensively studied: it’s the event that killed the dinosaurs. Scientists are quite sure that the trigger for this extinction was an asteroid that crashed into the planet, leaving a crater near the present-day Yucatan Peninsula of Mexico. Devastation at the site would have been massive, but it was the indirect, climatic effects of the impact that killed species across the globe.
Most prominently, dust and aerosols kicked up by the asteroid became trapped in the atmosphere, blocking and reflecting sunlight. As well as causing a dramatic, short-term cooling, the lack of sunlight reaching the Earth inhibited photosynthesis, so many plant species became extinct. This effect was carried up the food chain, as first herbivorous, then carnivorous, species became extinct. Dinosaurs, the dominant life form during the Cretaceous Period, completely died out, while insects, early mammals, and bird-like reptiles survived, as their small size and scavenging habits made it easier to find food. However, life on Earth has been through worse than this apocalyptic scenario. The largest extinction in the Earth’s history, the Permian-Triassic extinction, occurred about 250 million years ago, right before the time of the dinosaurs. Up to 95% of all species on Earth were killed in this event, and life in the oceans was particularly hard-hit. It took 100 million years for the remaining species to recover from this extinction, nicknamed “The Great Dying”, and we are very lucky that life recovered at all. So what caused the Permian-Triassic extinction? After the discovery of the K-T crater, many scientists assumed that impact events were a prerequisite for extinctions, but that probably isn’t the case. We can’t rule out the possibility that an asteroid aggravated existing conditions at the end of the Permian period. However, over the past few years, scientists have pieced together a plausible explanation for the Great Dying. It points to a trigger that is quite disturbing, given our current situation – global warming from greenhouse gases. In the late Permian, a huge expanse of active volcanoes existed in what is now Siberia. They covered 4 million square kilometres, which is fifteen times the area of modern-day Britain (White, 2002). Over the years, these volcanoes pumped out massive quantities of carbon dioxide, increasing the average temperature of the planet. 
However, as the warming continued, a positive feedback kicked in: ice and permafrost melted, releasing methane that was previously safely frozen in. Methane is a far stronger greenhouse gas than carbon dioxide – over 100 years, it traps approximately 21 times more heat per unit mass (IPCC AR4). Consequently, the warming became much more severe. When the planet warms a lot in a relatively short period of time, a particularly nasty condition can develop in the oceans, known as anoxia. Since the polar regions warm more than the equator, the temperature difference between latitudes decreases. As global ocean circulation is driven by this temperature difference, ocean currents weaken significantly and the water becomes relatively stagnant. Without ocean turnover, oxygen doesn’t get mixed in – and it doesn’t help that warmer water can hold less oxygen to begin with. As a result of this oxygen depletion, bacteria in the ocean begin to produce hydrogen sulfide (H2S). That’s what makes rotten eggs smell bad, and it’s actually poisonous in large enough quantities. So if an organism wasn’t killed off by abrupt global warming, and was able to survive without much oxygen in the ocean (or didn’t live in the ocean at all), it would probably soon be poisoned by the hydrogen sulfide being formed in the oceans and eventually released into the atmosphere. The Permian-Triassic extinction wasn’t the only time anoxia developed. It may have been a factor in the Late Triassic extinction, as well as smaller extinctions between the Big Five. Overall, it’s one reason why a warm planet tends to be less favourable to life than a cold one, as a 2008 study in the UK showed. The researchers examined 520 million years of data on fossils and temperature reconstructions, which encompasses almost the entire history of multicellular life on Earth.
They found that high global temperatures were correlated with low levels of biodiversity (the number of species on Earth) and high levels of extinction, while cooler periods enjoyed high biodiversity and low extinction. Our current situation is looking worse by the minute. Not only is the climate changing, but it’s changing in the direction that could be the least favourable to life. We don’t have volcanic activity anywhere near the scale of the Siberian Traps, but we have a source of carbon dioxide that could be just as bad: ourselves. And worst of all, we could prevent much of the coming damage if we wanted to, but political will is disturbingly low. How bad will it get? Only time, and our decisions, will tell. A significant number of the world’s species will probably become extinct. It’s conceivable that we could cause anoxia in the oceans, if we are both irresponsible and unlucky. It wouldn’t be too hard to melt most of the world’s ice, committing ourselves to an eventual sea level rise in the tens of metres. These long-range consequences would take centuries to develop, so none of us has to worry about experiencing them. Instead, they would fall to those who come after us, who would have had no part in causing – and failing to solve – the problem.
Mayhew et al. (2008). A long-term association between global temperature and biodiversity, origination and extinction in the fossil record. Proceedings of the Royal Society: Biological Sciences, 275: 47-53.
Twitchett (2006). The paleoclimatology, paleoecology, and paleoenvironmental analysis of mass extinction events. Paleogeography, Paleoclimatology, Paleoecology, 234(2-4): 190-213.
White (2002). Earth’s biggest “whodunnit”: unravelling the clues in the case of the end-Permian mass extinction. Philosophical Transactions of the Royal Society: Mathematical, Physical, & Engineering Sciences, 360: 2963-2985.
Benton and Twitchett (2003). How to kill (almost) all life: the end-Permian extinction event. Trends in Ecology & Evolution, 18(7): 358-365.
What are social pensions? Social pensions are a proven means of reducing old-age poverty and supporting multi-generational households. The term ‘pension’ is widely used to describe a range of cash income, mainly for older people, including both non-contributory and contributory cash transfers of various kinds. The term ‘social pension’ is used to refer to non-contributory pensions funded via national governments, usually through taxes. Universal non-contributory pensions are distinguished from those that are means-tested. Universal pensions are unconditionally available to all. Means-tested pensions are targeted to the poor, and are conditional on tests of earnings, income or assets. Old-age social pensions improve income security in later years by providing cash transfers in old age. They assist the poorest people, can regenerate local economies and redistribute wealth, and can also improve the nutritional status of the young, support school attendance and improve the health of all household members. Impact of old-age social pensions Old-age social pensions have a range of economic, social and health benefits: - Pensions reduce individual poverty – enabling the poorest older people to pay for basic necessities such as food. - Reducing old-age poverty contributes to overall poverty reduction – through not only alleviating immediate poverty but also reducing chronic poverty and promoting higher living standards over the longer term.[i] - Pensions reduce household poverty – as income is often pooled within multi-generational households. - Cash transfers to older people stimulate the local economy – as recipients invest in basic necessities from local stores, as well as income generation and acquisition of productive assets. - Children benefit when grandparents have a pension – as research shows that older people consistently invest their money in the health and education of dependents.
- Pensions improve family cohesion and the status of older people – as they may provide an incentive for different generations to live together, and older people have some control over their income, which can contribute to a more equitable distribution of resources. - Pensions make older people feel independent – as they promote a sense of security and dignity in older people who would otherwise depend on family members. - Pensions enable older people to pay for health care, medicines and associated costs such as transport to health centres – costs which can account for as much as three quarters of the income of the poorest older people.[ii] - Pensions mean that older people can afford to eat – otherwise a significant proportion of older people would not be able to afford regular meals, adversely affecting nutritional intake and health. [i] Devereux, S, ‘Can social safety nets reduce chronic poverty?’, Development Policy Review 20:5, 2002. [ii] Randel, J et al. (eds), The ageing and development report: poverty, independence and the world’s older people, London, HelpAge International, 1999.
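The universal versus means-tested distinction described above can be sketched as a pair of payout rules. The pension age, benefit amount, and income cap below are hypothetical figures invented purely for illustration; real schemes vary widely by country.

```python
# Sketch of the two non-contributory pension designs contrasted above.
# All thresholds are hypothetical, not taken from any real scheme.

PENSION_AGE = 65        # hypothetical age of eligibility
MONTHLY_BENEFIT = 50.0  # hypothetical flat benefit per month
INCOME_CAP = 120.0      # hypothetical means-test ceiling on monthly income


def universal_pension(age):
    """Universal design: unconditionally available to everyone above pension age."""
    return MONTHLY_BENEFIT if age >= PENSION_AGE else 0.0


def means_tested_pension(age, monthly_income):
    """Means-tested design: additionally conditional on a test of income."""
    if age >= PENSION_AGE and monthly_income <= INCOME_CAP:
        return MONTHLY_BENEFIT
    return 0.0


# A 70-year-old with income above the cap receives the universal benefit
# but is excluded by the means test; below the cap, both designs pay out.
```

The point of the sketch is that the universal rule needs only an age check, while the means-tested rule also requires verifying income, which is one practical reason the two designs differ in administrative cost and coverage.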
Know the Warning Signs and Be Prepared In a landslide, masses of rock, earth or debris move down a slope. Debris and mud flows are rivers of rock, earth, and other debris saturated with water. They develop when water rapidly accumulates in the ground, during heavy rainfall or rapid snowmelt, changing the earth into a flowing river of mud or “slurry.” Landslides can flow rapidly, striking with little or no warning at avalanche speeds. They also can travel several miles from their source, growing in size as they pick up trees, boulders, cars and other materials. What to do BEFORE a landslide Landslides occur in all U.S. states and territories and can be caused by a variety of factors including earthquakes, storms, volcanic eruptions, and by human modification of land. Landslides can occur quickly, often with little notice, and the best way to prepare is to stay informed about changes in and around your home that could signal that a landslide is likely to occur. Learn to recognize landslide warning signs - Changes occur in your landscape, such as patterns of storm-water drainage on slopes (especially the places where runoff water converges), land movement, small slides, flows, or progressively leaning trees. - Doors or windows stick or jam for the first time. - New cracks appear in plaster, tile, brick, or foundations. - Outside walls, walks, or stairs begin pulling away from the building. - Slowly developing, widening cracks appear on the ground or on paved areas such as streets or driveways. - Underground utility lines break. - Bulging ground appears at the base of a slope. - Water breaks through the ground surface in new locations. - Fences, retaining walls, utility poles, or trees tilt or move. - A faint rumbling sound that increases in volume is noticeable as the landslide nears. - The ground slopes downward in one direction and may begin shifting in that direction under your feet.
- Unusual sounds, such as trees cracking or boulders knocking together, might indicate moving debris. - Collapsed pavement, mud, fallen rocks, and other indications of possible debris flow can be seen when driving (embankments along roadsides are particularly susceptible to landslides). Practical steps to prepare you, your family and your home The following are things you can do to protect yourself, your family and your property from the effects of a landslide or debris flow: - Learn about the hazards in your community. Check out The Oregon Department of Geology and Mineral Industries Interactive Landslide Map and the Oregon HazVu to see if you or your home could be impacted by a landslide. - Build an emergency kit and make a family communication plan. - Prepare for landslides by following proper land-use procedures - avoid building near steep slopes, close to mountain edges, near drainage ways or along natural erosion valleys. - Become familiar with the land around you. Learn whether debris flows have occurred in your area by contacting local emergency management officials. Slopes where debris flows have occurred in the past are likely to experience them in the future. - Get a ground assessment of your property. - Consult a professional for advice on appropriate preventative measures for your home or business, such as flexible pipe fittings, which can better resist breakage. - Protect your property by planting ground cover on slopes and building retaining walls. - In mudflow areas, build channels or deflection walls to direct the flow around buildings. Be aware, however, if you build walls to divert debris flow and the flow lands on a neighbor's property, you may be liable for damages. - If you are at risk from a landslide talk to your insurance agent. Debris flow may be covered by flood insurance policies from the National Flood Insurance Program (NFIP). 
What to do DURING a landslide Listen to local officials Learn about the emergency plans that have been established in your area by your local government. In any emergency, always listen to the instructions given by local emergency management officials. - If you must travel, check the Oregon Department of Transportation TripCheck website or call 5-1-1. - During a severe storm, stay alert and awake. Many deaths from landslides occur while people are sleeping. - Listen to local news stations on a battery-powered radio for warnings of heavy rainfall. - Listen for unusual sounds that might indicate moving debris, such as trees cracking or boulders knocking together. - Move away from the path of a landslide or debris flow as quickly as possible. The danger from a mudflow increases near stream channels and with prolonged heavy rains. Mudflows can move faster than you can walk or run. Look upstream before crossing a bridge and do not cross the bridge if a mudflow is approaching. - Avoid river valleys and low-lying areas. - If you are near a stream or channel, be alert for any sudden increase or decrease in water flow and notice whether the water changes from clear to muddy. Such changes may mean there is debris flow activity upstream so be prepared to move quickly. - Curl into a tight ball and protect your head if escape is not possible. What to do AFTER a landslide - Go to a designated public shelter if you have been told to evacuate or you feel it is unsafe to remain in your home. Text SHELTER plus your ZIP code to 43362 (4FEMA) to find the nearest shelter in your area (example: shelter 12345). - Stay away from the slide area. There may be danger of additional slides. - Listen to local radio or television stations for the latest emergency information. - Watch for flooding, which may occur after a landslide or debris flow. Floods sometimes follow landslides and debris flows because they may both be started by the same event. 
- Check for injured and trapped persons near the slide, without entering the direct slide area. Direct rescuers to their locations. - Look for and report broken utility lines and damaged roadways and railways to appropriate authorities. Reporting potential hazards will get the utilities turned off as quickly as possible, preventing further hazard and injury. - Check the building foundation, chimney, and surrounding land for damage. Damage to foundations, chimneys, or surrounding land may help you assess the safety of the area. - Replant damaged ground as soon as possible since erosion caused by loss of ground cover can lead to flash flooding and additional landslides in the near future. - Seek advice from a geotechnical expert for evaluating landslide hazards or designing corrective techniques to reduce landslide risk. A professional will be able to advise you of the best ways to prevent or reduce landslide risk, without creating further hazard. - If you see a landslide, report it to the Oregon Department of Geology and Mineral Industries. The Oregon Department of Geology and Mineral Industries began creating a landslide inventory following the 1996 and 1997 storm events and they continue to update the inventory. Find additional information on how to plan and prepare for a landslide or debris flow emergency and learn about available resources by visiting the following websites:
In this article we will discuss:- 1. Introduction to Keynesian Theory 2. Features of Keynesian Theory of Employment 3. Assumptions 4. Variables 5. Summary 6. Determination of Equilibrium Level 7. Theory of Income and Output 8. Keynesian Model 9. Policy Implications 10. Criticisms. Introduction to Keynesian Theory: Keynes was the first to develop a systematic theory of employment in his book, The General Theory of Employment, Interest and Money (1936). The classical and the neoclassical economists almost neglected the problem of unemployment. They regarded unemployment as a temporary phenomenon and assumed that there is always a tendency towards full employment. It was Keynes who led a vigorous and systematic attack on the traditional theory of employment and replaced it with a more general and more realistic theory. Keynes’ main criticism of the classical theory was on the following two grounds: (a) The classical prediction that full-employment equilibrium will be achieved in the long run was not acceptable to Keynes, who wanted to solve the short-run problem of unemployment. According to Keynes, in the long run there is no problem; in the long run, we are all dead. (b) Keynes criticised the classical assumption of a self-regulating economy. The great depression of the 1930s led Keynes to believe that full-employment equilibrium in the economy would not be automatically achieved in the short period, and that government intervention was necessary to tackle the problem of the economy. Keynes’ theory of employment is called the effective demand theory of employment. According to this theory, unemployment arises due to a deficiency of effective demand, and the way to remove unemployment is to raise effective demand.
Features of Keynesian Theory of Employment: The following are the main features of the Keynesian theory of employment which determine its basic nature: (i) It is a general theory in the sense that- (a) it deals with all levels of employment, whether it is full employment, widespread unemployment or some intermediate level; (b) it explains inflation as readily as it does unemployment, because basically both situations are a matter of the volume of employment, and (c) it relates to changes in employment and output in the economic system as a whole. (ii) Keynesian theory of employment is a short-run theory which attempts to analyse the short-run phenomenon of unemployment. Keynes assumed as constant all those strategic variables which remain stable and change very little in the short run. (iii) Keynesian theory is based on empirical foundations and has important policy implications. (iv) Keynes did not have much faith in the policy of laissez faire and automatic adjustment of the economic system. On the contrary, he advocated government intervention to reform the capitalist system. (v) In this theory, Keynes gave money an especially important role in the determination of employment and output in the economic system as a whole. Assumptions of the Theory: Keynesian theory of employment is based on the following assumptions: (i) Keynes confines his analysis to the short period. (ii) He assumes that there is perfect competition in the market. (iii) He carries out his analysis in a closed economy, ignoring the effect of foreign trade. (iv) His analysis is a macro-economic analysis, i.e., it deals with aggregates. (v) He assumes the operation of the law of diminishing returns or increasing costs. (vi) The government is assumed to have no part to play either as a taxer or a spender, i.e., the fiscal operations of the government are not explicitly recognised. (vii) He assumes that labour has money illusion.
It means that a worker feels better when his wages double even when prices also double, thus leaving his real wage unchanged. Variables of the Theory: The variables used by Keynes in his theory can be broadly divided into three groups: 1. Given Elements: First, there are variables which have been assumed as given because they change so slowly that their effects in the short run can be ignored. They are- (a) the quality and quantity of labour and capital stock; (b) techniques of production; (c) degree of competition; (d) consumer tastes; (e) the structure of the society. 2. Independent Variables (or Causes): Independent variables are the behaviour patterns of the society. In other words, they represent the basic functions or relationships. There are four independent variables: (i) The consumption function; (ii) The investment function or the marginal efficiency of investment schedule; (iii) The liquidity preference function; (iv) The quantity of money fixed by the monetary authority. All these variables are stated in wage units. 3. Dependent Variables (or Effects): The dependent variables of the Keynesian system are- (a) the level of employment, output and income, and (b) the rate of interest. Keynes makes the rate of interest an independent variable. But, according to Hansen, the rate of interest is a determinate, and not a determinant. The rate of interest and national income are mutually determined by the above-mentioned four independent variables. Summary of Keynesian Theory of Employment: Keynesian theory of employment, as developed in the General Theory, is outlined in Chart-1. The main propositions of the theory are given below: (i) Total employment = total output = total income. As employment increases, output and income also increase proportionately. (ii) Volume of employment depends upon effective demand.
(iii) Effective demand, in turn, is determined by the aggregate supply function (representing the costs of entrepreneurs) and the aggregate demand function (representing the receipts of entrepreneurs). It is determined at the point where aggregate demand and aggregate supply are equal. (iv) Keynes assumed the aggregate supply function as given in the short period and regarded aggregate demand as the most important element in his theory. (v) The aggregate demand function is governed by consumption expenditure and investment expenditure. (vi) Consumption expenditure depends upon the size of income and the propensity to consume. Consumption expenditure is fairly stable in the short period because the propensity to consume does not change quickly. (vii) Investment expenditure is governed by the marginal efficiency of capital (i.e., the profitability of capital) and the rate of interest. Unlike consumption expenditure, investment expenditure is highly unstable. (viii) The marginal efficiency of capital is determined by the supply price of capital assets on the one hand and the prospective yield on the other. Prospective yield, in turn, depends upon future expectations. This explains why the marginal efficiency of capital, and hence investment expenditure, fluctuates. (ix) The rate of interest is a monetary phenomenon and is determined by the demand for money (liquidity preference) and the quantity of money. Liquidity preference depends upon three motives- the transactions motive, the precautionary motive, and the speculative motive. The quantity of money is regulated by the monetary authority. (x) The essence of the whole theory of employment is that employment (= output = income) depends upon effective demand. Effective demand expresses itself in the total spending of the community, i.e., consumption expenditure and investment expenditure. A fundamental principle is that as the income of the community increases, consumption will increase, but by less than the increase in income.
Thus, in order to increase the level of employment, investment must be increased. Investment must be high enough to fill the gap between income and consumption. (xi) The original Keynesian analysis considers private consumption and private investment expenditure only and does not take into account government expenditure. But, in modern times, government expenditure is also a significant determinant of effective demand. Government expenditure is considered the most effective weapon to fight unemployment.

Determination of Equilibrium Level of Employment: The central problem of the General Theory is- what determines the level of employment? Keynes’ answer is- effective demand. Effective demand is the logical starting point of Keynes’ theory of employment. Effective demand means desire plus the ability and willingness to buy, i.e., actual expenditure. Effective demand depends upon the aggregate demand function and the aggregate supply function. The aggregate demand function represents the different amounts of money which the entrepreneurs expect to get from the sale of output at varying levels of employment. Or, to put it differently, the aggregate demand function reveals planned or intended expenditure at different levels of income. The aggregate demand schedule (AD curve in Figure-7) slopes upward to the right, indicating that as the expected sale proceeds increase, a greater number of workers will be employed. The AD curve flattens at the later stages of employment because the marginal propensity to consume declines as income increases. The aggregate supply function represents the different amounts of money which the entrepreneurs must get from the sale of output at varying levels of employment. Or, stated in a different way, the aggregate supply function represents the different levels of income (and thus output and employment) which the entrepreneurs will supply at different levels of expenditure.
The aggregate supply schedule (AS curve in Figure-7) also slopes upward to the right, indicating that at higher levels of employment the expected minimum sale proceeds increase. After the full-employment level is reached (i.e., after point F), the AS curve becomes perfectly inelastic (a vertical straight line), which shows that employment cannot increase further even if the minimum expected sale proceeds increase. The equilibrium level of employment is determined at the point of intersection between the aggregate demand function and the aggregate supply function. This is also the point of effective demand. Aggregate supply represents the costs, while aggregate demand represents the expected receipts, of the entrepreneurs. So long as receipts are greater than costs, employment will continue to increase. This process will go on till receipts become equal to costs. No employment will be offered to the workers if costs are greater than receipts. In Figure-7, point E is the point of effective demand, where the AD curve and the AS curve intersect each other. ON is the equilibrium level of employment. At this level, aggregate demand (receipts) is equal to aggregate supply (costs). At the ON employment level, the entrepreneurs maximise their profits and have no tendency either to increase or decrease employment. At no other level of employment will the economy be in equilibrium. For example, at the ON1 level of employment, the expected receipts are greater than the expected costs (AN1 > BN1). This will induce entrepreneurs to increase employment. Similarly, at the ONf employment level, expected costs exceed expected receipts (FNf > GNf). Such a level of employment will not be offered, because it will involve losses. (i) The equilibrium level of employment as represented by the point of effective demand (point E) does not necessarily indicate a full-employment equilibrium. As is clear from Figure-7, there exists NNf amount of unemployment at the point of effective demand E.
Keynes’ main contribution is the demonstration that less-than-full employment equilibrium is possible and, in a capitalist economy, this is the normal situation. In such an economy, investment is generally inadequate to fill the gap between income and consumption. (ii) The aggregate supply function (being given in the short period) cannot be manipulated and thus is not of much practical significance. In order to attain the full-employment level ONf (or to remove unemployment NNf), aggregate demand must be raised from the AD curve to the AD1 curve. Thus, the Keynesian theory of employment may be more properly called the aggregate demand theory of employment. The Keynesian theory of employment is also called the theory of income and output. The point of effective demand, which gives the equilibrium level of employment, also indicates the equilibrium level of national income and output. Effective demand manifests itself in the spending of income, or the flow of total expenditure in the economy. The flow of expenditure determines the flow of income, because one man’s expenditure is another man’s income. The flow of expenditure also represents the value of total output, because the total value of national output is just the same thing as the total expenditure made and the total income received by the community. Total expenditure, which represents the total demand for goods and services, comprises consumption expenditure and investment expenditure. To meet this demand, workers are employed to produce consumer goods and investment goods. Thus, effective demand (E.D.) = total employment (N) = total output (O) = total income (Y) = expenditure on consumption goods (C) + expenditure on investment goods (I), or ED = N = O = Y = C + I. Thus the level of effective demand determines the general level of income, output and employment in a capitalist economy.
At the point of effective demand, aggregate supply [i.e., the total value of all final goods and services produced (Y)] is equal to aggregate demand [i.e., total planned expenditure on final goods and services (C + I)]. At this equilibrium level, the economy as a whole produces that level of output, generates that level of income and employs that quantity of labour which is the most profitable. This most profitable level of output, income and employment depends primarily on aggregate demand. Aggregate supply adjusts itself to aggregate demand. Thus, the important implication of the Keynesian theory is that demand creates its own supply. This is just the reverse of Say’s law of markets, which states that supply creates its own demand. Thus, the point of effective demand represents the economy’s general equilibrium level at which- (i) aggregate supply (total income) = aggregate demand (total expenditure): Y = C + I … (1); (ii) total saving = total investment: S = I … (2). (Since total saving is equal to total income minus total consumption (S = Y – C), Y = C + I can be written as Y – C = I, or S = I.) Figure-8 illustrates the determination of the equilibrium level of income (or output or employment). The C-line represents the consumption function. Consumption is an increasing function of income, i.e., C = f(Y). The C + I line represents aggregate demand, or consumption plus investment expenditure. Keynes believed that a considerable amount of investment is autonomous (i.e., independent of income). Therefore, the C + I line is parallel to the C-line, the difference indicating investment expenditure. SS (the 45° line) is the aggregate supply schedule, which indicates that at a given level of expected total expenditure (C + I), an exactly equal level of income (Y) will be offered. That is why the SS line represents Y = C + I and the equilibrium lies on this line. The economy’s equilibrium is at point E, which is also the point of effective demand.
At this equilibrium point, (i) total income = total expenditure: Y = C + I, or (OY = CY + EC); (ii) total saving = total investment: S = I, or (EC = EC). Keynes’ theory of employment can be summed up in terms of an equational model as developed by Oscar Lange. The basic equations of the model are: M = L(i, Y) … (1). The amount of money which people hold (M) is a function (L) of the rate of interest (i) and income (Y). There is an inverse relationship between i and M, but Y and M move in the same direction. L represents the liquidity preference function. C = F(Y, i) … (2). Consumption (C) is a function (F) of income (Y) and the rate of interest (i). C and Y rise and fall together. About the relationship between C and i, Keynes was not certain. I = F(i, C) … (3). Investment (I) is a function (F) of the rate of interest (i) and consumption (C). Given the marginal efficiency of capital, I rises as the rate of interest (i) falls, and falls as the rate of interest rises. Again, given the state of expectations, the marginal efficiency of capital rises as C rises, and falls as C falls. F signifies the investment function. Y = C + I … (4). Income (Y) is equal to consumption (C) plus investment (I). M can be taken as given, since it is determined by the monetary authorities of a country. Thus, we are left with four unknowns (Y, C, I and i) and an equal number of equations. The system is then determinate, i.e., the values of all the unknowns can be worked out with the help of the following four diagrams in Figure-9. Let us start with the initial equilibrium position when income is Y0 (Rs. 6000), the amount of money is M0 (Rs. 3000) and the rate of interest is i0 (3%). The Y0 curve is the liquidity preference schedule at the Y0 income level (Figure-9A). With the rate of interest at 3% and income Rs. 6000, consumption will be C0 (Rs. 4000). The i0 curve is the consumption function at the 3% rate of interest (Figure-9B). With consumption at Rs. 4000 and the rate of interest at 3%, investment will be I0 (Rs. 2000).
The C0 curve is the investment function at the consumption level Rs. 4000 (Figure-9C). Figure-9D shows that the economy is in equilibrium, i.e., income (Rs. 6000) is equal to consumption (Rs. 4000) plus investment (Rs. 2000). The 45° line shows Y = C + I. If, for example, C + I is not Rs. 6000 but Rs. 8000, then income will rise to Rs. 8000. How would the system behave in order to reach a new equilibrium position? With income at Rs. 8000, the liquidity preference function rises to Y1 and, given the quantity of money Rs. 3000, the rate of interest rises to i1 (4%) in Figure-9A. With the rate of interest at 4%, the consumption function shifts down to i1; but because of the higher income (Rs. 8000), consumption rises to C1 (Rs. 4500) in Figure-9B. With consumption at Rs. 4500, the investment function shifts upward to C1. At consumption Rs. 4500 and the rate of interest 4%, investment is I1 (Rs. 3500) in Figure-9C. Thus, the economy reaches a new and higher equilibrium level, because income (Rs. 8000) = consumption (Rs. 4500) + investment (Rs. 3500) in Figure-9D. Thus, if one knows the shape of the functions (i.e., the liquidity preference function, the consumption function and the investment function) and the value of any one of the variables (M, C, I and i), then the changes in the whole system as a result of a change in one variable can be worked out.

Keynesian theory of employment has the following policy implications: I. Reform of Capitalism: Keynesian theory has demonstrated that in a capitalist economy, unemployment, and not full employment, is the normal situation. But as a remedial measure, Keynes did not suggest a complete reconstruction of the capitalist society on a socialistic pattern. He wanted to preserve and reform capitalism, rather than to replace capitalism by socialism. II. Government Intervention: Keynes had no faith in the policy of laissez-faire and showed that the state of full employment is not automatically achieved.
He recommended state intervention to raise effective demand in order to increase the level of employment in the economy. In order to increase the volume of employment, effective demand, i.e., consumption and investment expenditure, must be increased. III. Redistribution of Income: Keynes suggested that the propensity to consume can be raised by the redistribution of income from the rich (with a low propensity to consume) to the poor (with a high propensity to consume). Such a redistribution of income can be achieved through progressive taxation. IV. Monetary Policy not Reliable: Employment can be increased by increasing the quantity of money (i.e., a cheap money policy), because this will reduce the rate of interest and increase private investment. But Keynes did not consider a cheap money policy a reliable means of promoting private investment in a situation of depression and unemployment. After all, the monetary authorities can only make money available to a businessman at a cheaper rate; they cannot compel him to increase investment if he is pessimistic about the future prospects of business. V. Public Works Programme: Keynes laid maximum emphasis on public investment because of the unstable nature of private investment. He suggested that the government can remove unemployment by starting public works and employing the unemployed in them. Employment in this case will increase many times over because of the operation of the multiplier. VI. Objective of Full Employment: The present-day popularity of the objective of full employment is also attributed to Keynes. There is hardly any nation, planned or unplanned, which has not accepted full employment as the ultimate goal of its economic policy.
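The multiplier invoked under the public works programme can be sketched numerically: with a marginal propensity to consume b, each rupee of new public investment ultimately raises income by k = 1/(1 − b) rupees, because each round of spending passes on the fraction b as new income. The MPC and the investment figure below are assumptions for illustration:

```python
# Sketch of the investment multiplier k = 1 / (1 - MPC).
# The MPC value and the size of the public investment are assumed.

def multiplier(mpc):
    """Sum of the geometric spending rounds 1 + mpc + mpc**2 + ..."""
    return 1.0 / (1.0 - mpc)

mpc = 0.75            # assumed marginal propensity to consume
delta_i = 100.0       # assumed increase in public investment

k = multiplier(mpc)
delta_y = k * delta_i  # total increase in income across all spending rounds
print(f"k = {k:.0f}, income rises by {delta_y:.0f}")
# k = 4, income rises by 400
```

The same figure can be reached by summing the successive spending rounds 100 + 75 + 56.25 + …, which converges to 400.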
Though Keynes revolutionised modern economic thinking, his analysis has some inherent weaknesses: (i) Keynesian theory is not a complete theory of employment in the sense that it does not provide a comprehensive treatment of unemployment. (a) It deals only with cyclical unemployment and ignores other forms of unemployment, such as frictional unemployment, technological unemployment, etc. (b) It does not tell us how to secure full and fair employment. (ii) There exists no direct and determinable relationship between effective demand and the volume of employment. It all depends upon the relationship between wage rates, prices and the money supply. Moreover, in modern times, most countries are facing the problem of stagflation (i.e., unemployment with inflation). (iii) Keynesian theory assumes perfect competition, which is not a very realistic assumption. Keynes completely ignored the problems of monopoly. (iv) Keynesian theory deals with a short-run phenomenon. It pays no attention to the long-run problems of a dynamic economy. (v) Keynesian economics is static in nature. It ignores the time lags in the behaviour of economic variables. However, the post-Keynesians have filled this gap by providing a truly dynamic analysis. (vi) Keynesian theory is a purely macro-economic theory which deals with aggregates. Micro-economic problems have been completely ignored. (vii) Keynes assumes a closed economy. In this way, his analysis does not take into account the impact of international trade on the growth of employment and income in the economy. (viii) Keynesian economics is, by and large, depression economics. It is the product of the Great Depression of the 1930s and attempts to suggest measures to solve the problem of unemployment. It pays little attention to inflationary situations. (ix) It is basically a capitalistic theory. It examines the determinants of employment in a free-enterprise economy.
Though Keynes suggested government intervention and controlled capitalism, his theory fails to deal with a socialist economic system. (x) Keynesian theory is not applicable in underdeveloped countries. Keynes deals with the problem of cyclical unemployment, whereas the underdeveloped countries face the problems of chronic unemployment and disguised unemployment. As a remedial measure, Keynes suggested the expansion of aggregate demand and the discouragement of saving, while the underdeveloped countries need curbs on spending and increases in saving for capital formation and for the large-scale investment required to break the vicious circle of poverty. In short, the Keynesian theory is not general; it is not applicable in all places and at all times. As Harris has remarked- “Those who seek universal truths, applicable in all places at all times, had better not waste their time on the General Theory.”
What Are the Origins of Egyptology?

Today, Egyptology – the study of ancient Egyptian history, culture, and language – is a worldwide discipline studied and taught at major universities on nearly every continent. It has evolved from an esoteric pursuit known only to elites in a handful of schools and museums in Europe into something far more global and accessible to a wider range of people, and it has come to influence many aspects of modern society. The very definition of Egyptology, and of what makes one an Egyptologist, has also changed over the last 200 years, because the field involves a variety of sub-disciplines that include, but are not limited to, archaeology, art history, history/chronology, and philology. Essentially, Egyptology is a modern study that can trace its roots to the Enlightenment of the eighteenth century. It was during the Enlightenment, when people began to question the governments they lived under and the religions they followed, that the idea of studying older, venerable cultures became popular. Enlightenment scholars saw ideal forms of government in ancient Athens and Rome, and as they looked further back, they recognized that the even older cultures of Mesopotamia and Egypt also had much to offer. It was in the milieu of the Enlightenment, and during the Napoleonic Wars that followed in the early nineteenth century, that most scholars pinpoint the origins of Egyptology. The seminal event of this period was the discovery and subsequent decipherment of the Rosetta Stone, which allowed modern scholars to read the enigmatic hieroglyphic script of the ancient Egyptian language, thereby making the plethora of Egyptian texts readable. Once the texts became readable, ancient Egyptian chronology became clearer and the nuances of pharaonic civilization became accessible to the modern world.
As much as the discovery and decipherment of the Rosetta Stone represented a watershed moment in the history of Egyptology, the march toward understanding the pharaohs began hundreds of years earlier and continued long after scholars translated the text on the legendary stone.

Early Interest in Ancient Egypt

Although the ancient Egyptians wrote about their own history, the first true critical analysis of ancient Egyptian history was conducted by the early Greek and Roman historians and geographers. The fifth-century BC Greek historian Herodotus is perhaps best known for the in-depth treatment he gave to pharaonic history in Book II of The Histories, which influenced others, such as Diodorus and Strabo, to follow with their own observations of the Nile Valley. The accuracy of the classical accounts of ancient Egyptian history varied widely. The further back in time the accounts went, the more likely it was that the chronologies were garbled and facts were simply wrong. These problems stem directly from the fact that even the most educated Greeks and Romans never took the time to learn the ancient Egyptian language, so they were often forced to rely on the Egyptian priests for translations and explanations of texts. The priests were only human, which meant that some parts of Egyptian history were sacrificed in favor of others they believed more important. The classical historians were able to examine events closer to their own period more critically, though, because many of those events had already been written about in Greek. The Hebrews had a long-lasting love-hate relationship with ancient Egypt that was chronicled in many books of the Old Testament, including both books of Chronicles, both books of Kings, and most notably Exodus.
The Egyptians are the enemies of the Hebrews throughout most of the book of Exodus, but they later developed friendly relations with the kingdoms of Israel and Judah and even came to the latter’s aid against the Assyrians in 701 BC at the Battle of Eltekeh, as described in 2 Kings 19:9-10. But the Hebrews’ interest in ancient Egypt extended only to how it affected their kingdom and religion; it does not even approach the flawed, yet critical, nature of the classical historians’ treatment of Egypt. Although the biblical accounts of ancient Egypt were not written in an objective, academic manner, they did keep the spirit of ancient Egypt alive in places where little concrete knowledge of pharaonic culture existed. During the European Middle Ages, pilgrims who followed in the wake of crusaders often brought along travel itineraries that were published in Europe by monks and other members of the Roman Catholic Church. High on the list of any of the medieval travel guides were the Pyramids of Giza, which were referred to at the time as the “Granaries of Joseph” because most believed the structures had served as granaries rather than tombs. The medieval European interest in ancient Egypt was almost entirely based on the Bible, but there were others who took a more esoteric view of pharaonic civilization. For instance, an Englishman named John Sanderson had six hundred pounds worth of mummies shipped from Egypt to England – not so that he could display them in his mansion or in one of the world’s first museums, but to have them ground up into powder. Sanderson – like many other Europeans at the time – believed that mummy powder was efficacious for cuts and other injuries. While medieval Europeans viewed ancient Egyptian civilization through the lens of the Bible, with some emphasis on the culture’s more arcane aspects, the people who lived in the pyramids’ shadows also offered their own explanations for the once seemingly great but lost civilization.
The Muslim Arabs who conquered Egypt in AD 642 saw ancient Egyptian monuments, and particularly the Pyramids of Giza, as simultaneously “monuments of ignorance” – and therefore an affront to Islam – but also as sources of wisdom and power. During the Middle Ages in the Middle East, a number of fictional tales were written in Arabic and Persian in which the Pyramids of Giza played a central role. In one legend, an Egyptian king named Surid was said to have built the pyramids both as a tomb – which was the actual purpose of pyramids – and as a repository of ancient wisdom. In another Islamic legend, the pyramids were said to be the tombs of ancient Yemeni kings. Many medieval Islamic sources also give credence to the legendary figure Hermes Trismegistus and recount how he ordered the construction of the pyramids to preserve ancient esoteric knowledge from floods. Although medieval Muslims were able to correctly deduce that the pyramids were tombs, their lack of understanding of the ancient Egyptian language kept them from grasping the depth of pharaonic civilization. The curiosity that Europeans felt toward ancient Egypt during the Middle Ages began to evolve into a genuine desire to view pharaonic culture more objectively during the Renaissance. While Renaissance artists were influenced by Greek models to create some of the finest pieces of work in the history of Western Civilization, some scholars began looking at ancient Egypt from beyond the perspective of the Bible. By the fifteenth century, most educated Europeans knew that the pyramids had been used as tombs, not granaries as previously believed. Interest in ancient Egypt began to permeate some of Europe’s oldest universities, but the key to understanding all aspects of pharaonic culture – the language – was still missing.
Some Renaissance scholars were able to correctly surmise that the enigmatic hieroglyphic script contained both phonetic and idiomatic elements, but it might as well have been a script from another planet, because its decipherment still remained far out of reach.

The Enlightenment and Ancient Egypt

The first legitimate attempts to understand ancient Egyptian civilization objectively came during the period known as the Enlightenment. Many are familiar with the political aspects of the Enlightenment put forth by the seventeenth-century writer John Locke and the eighteenth-century writers Voltaire and Jean-Jacques Rousseau, but just as important are the cultural changes of the period. Enlightenment philosophers, historians, and philologists all began to study pre-Hellenic ancient civilizations without the veneer of the Bible. Although they viewed ancient Egypt and the other ancient Near Eastern civilizations as exotic and “other,” these early modern scholars all had a will to understand ancient peoples objectively. It was from within this intellectual milieu that the modern study known as Egyptology took its first true steps. If one were to identify Egyptology’s first true patron, it would be none other than the conqueror Napoleon Bonaparte. Napoleon is best known for his rise and fall as a military commander and dictator over much of Europe, which in many ways demonstrates that the French-Corsican commander eschewed many of the Enlightenment’s ideas about democracy and representational government. Although it is true that Napoleon only used the political ideas of the Enlightenment when they were to his advantage, he was a firm believer in the cultural aspects of the Enlightenment discussed above. Napoleon’s conquest brought the French to Egypt, which they occupied from 1798 to 1801. Even before he invaded Egypt, Napoleon was awed by Egypt’s legacy, so he brought 167 scholars, known as savants, from the Commission of the Sciences and Arts with him during the initial invasion.
The savants studied all aspects of Egypt, from its flora and fauna to its history, and compiled all of their findings in a multi-volume work known as the Description de l’Égypte. The volumes of interest to the proto-Egyptologists of the time were labeled Antiquités, and contained numerous drawings of the monuments with accompanying French text. Despite the strides that Napoleon’s scholars were quickly making, understanding the ancient Egyptian language remained a stumbling block that needed to be overcome.

The Discovery and Decipherment of the Rosetta Stone

The break that the world needed came in mid-July 1799 in the small village of Rosetta on the Mediterranean coast. According to accounts from the period, the key to understanding the ancient Egyptian language – the Rosetta Stone – was discovered by French soldiers who were clearing away a wall for a fort. The Rosetta Stone, which was ensconced in the wall, was immediately recognized as something important, so it was spirited away for the savants to study. The French knew that the stone – what Egyptologists call a “stela,” as it commemorated an important historical event – was important because it contained fifty-four lines of Greek text, which they could read, along with fourteen lines of unreadable hieroglyphic text and thirty-two lines of the equally unreadable demotic Egyptian script. But before the French could dedicate any serious research to the Rosetta Stone, Egypt was captured by the British in 1801. With victory went the spoils of war, and under Article XVI of “The Capitulation of Alexandria,” the French were forced to relinquish the Rosetta Stone and various other Egyptian antiquities. The British then promptly moved the Rosetta Stone to the British Museum in London, where it still sits today. Although the British had physical possession of the Rosetta Stone, this did not stop French scholars from studying the enigmatic inscriptions, because many copies had been made on folios.
In many ways, the race to decipher the Rosetta Stone became a microcosm of the wars being fought by the British and French for control of Europe – the victor would assume a special place in history and would also capture a certain amount of pride for his country. The first translations of the Greek lines were done by Reverend Stephen Weston in London in 1802. Attempts were then made to decipher the demotic, but when it was learned that it was just a cursive form of the hieroglyphic script, the focus turned to the undamaged hieroglyphic lines. Despite the great initial progress made on the Rosetta Stone, the vital hieroglyphic lines sat untranslated for several years until two men – one English and the other French – engaged each other in one of the greatest academic competitions in history. A few years after the Rosetta Stone was brought to London, a young polymath named Thomas Young (1773-1829) took up the challenge. He knew that the liturgical language of the Coptic Orthodox Church in Egypt was the modern successor to the ancient Egyptian language, and that therefore any understanding of Egyptian would come through Coptic. He also correctly determined that some signs in the hieroglyphic script were phonetic (alphabetic), while others were idiomatic (non-alphabetic). With this knowledge, in 1814, Young was able to decipher the cartouches (the names of kings written inside an oval ring) of King Ptolemy and Queen Berenike and partially compile a list of alphabetic signs, but he was unable to translate the entire stone. While Young was laboring away in England, across the Channel in France an equally impressive polyglot named Jean-François Champollion (1790-1832) worked just as furiously to decipher the enigmatic script. Drawing on his background in the Semitic languages of Arabic and Hebrew, Champollion was able to complete a usable translation of the Rosetta Stone’s hieroglyphic lines in 1822.
Although there were later found to be problems with some of Champollion’s translations and his theories on Egyptian grammar, his work provided the basis for the modern Egyptological understanding of the ancient Egyptian language and writing. After the Rosetta Stone With Champollion’s decipherment of the Rosetta Stone, both scholars and rogues began to flood Egypt in order to become rich and/or famous by rediscovering ancient treasures. Both Britain and France dispatched large numbers of agents to acquire the best pieces for their burgeoning museums in what became a war over ancient Egyptian culture that continues on today to some degree. By the middle of the nineteenth century, German scholars led by Karl Richard Lepsius conducted archaeological expeditions into the Nile Valley and by the end of the century the Americans got into the act. George Reisner is responsible for some of the first American Egyptological expeditions, but many see James Henry Breasted as the father of American Egyptology. Although the axis of modern Egyptology is still centered on the United Kingdom, France, Germany, and the United States, there are programs in countries such as Argentina and Japan. The process by which Egyptology became a modern discipline is a long one, but one can point to the discovery and decipherment of the Rosetta Stone as being the true starting point. Without understanding the ancient Egyptian language, much of what is known today about Egyptian history and culture would still be covered by shrouds of mystery. - Krebsbach, Jared. “Herodotus, Diodorus, and Manetho: An Examination of the Influence of Egyptian Historiography on the Classical Historians.” New England Classical Journal. 41 (2014) pgs. 98-99 - Ried, Donald Malcom. Whose Pharaohs? Archaeology, Museums, and Egyptian National Identity from Napoleon to World War I. (Los Angeles: University of California Press, 2002), p. 24 - Dykstra, Darell. 
“Pyramids, Prophets, and Progress: Ancient Egypt in the Writings of ʿAli Mubārak.” Journal of the American Oriental Society 114 (1994), pgs. 57-58 - Curran, Brian A. “The Renaissance Afterlife of Ancient Egypt (1400-1650).” In The Wisdom of Ancient Egypt: Changing Visions through the Ages. Edited by Peter Ucko and Timothy Champion. (London: University of London Press, 2003), p. 103 - Curran, p. 108 - Outram, Dorinda. The Enlightenment. (Cambridge: Cambridge University Press, 1995), p. 63 - Jeffreys, David. “Introduction – Two Hundred Years of Ancient Egypt: Modern History and Ancient Archaeology.” In Views of Ancient Egypt Since Napoleon Bonaparte: Imperialism, Colonialism, and Modern Appropriations. Edited by David Jeffreys. (Walnut Creek, California: Left Coast Press, 2011), p. 2 - Andrews, Carol. The British Museum Book of the Rosetta Stone. (London: British Museum Press, 1985), p. 9 - Andrews, p. 13 - Reid, p. 41 - Griffith, F. “The Decipherment of the Hieroglyphs.” Journal of Egyptian Archaeology 37 (1951), pg. 41 - Jeffreys, pgs. 3-4
Although much of the learning that happens in a Montessori classroom requires specific Montessori materials, there are activities that can be practiced at home – Dry Pouring is one such exercise. All that is needed for a lesson is a tray, two identical jugs and a dry ingredient (beads, rice, grain, lentils etc.). The idea behind Dry Pouring is to make the daily task of transferring a material or liquid from one vessel into another inviting rather than daunting, and indirectly the exercise will assist fine motor and problem-solving skills, too. It’s one of the lessons that children enjoy most in a Montessori classroom; pouring the chosen materials from one jug into another is great fun, and little ones usually take great pride in mastering the skill. Have a look at the tutorials below for a quick introduction to a couple of Dry Pouring exercises: It’s important to note that as this is a pouring exercise, the vessels used should have a spout to enable success in the activity (as highlighted in the video). Something else you may have noticed is that the exercise starts off simply (pouring materials from one jug into the other using both left and right hands) and, as little ones become expert pourers, gets more complicated – incorporating extra vessels and smaller objects until finally, children are ready to engage with Liquid Pouring as an activity. If you’d like further information about this tutorial, feel free to contact us at [email protected]. We’re happy to answer any questions. Image Attribution: Thebrilliantchild.blogspot.co.uk
It is commonly thought that the free market and the laws of supply and demand are concepts that could only have emerged in a modern economic system. But is that assumption true? According to the latest research paper co-authored by JU researchers and published in The Economic Journal, the exchange of goods in the pre-Roman period might have been more intensive than previously thought. In the field of economics, the concept of a market economy is largely considered a modern phenomenon. Influential economists such as Karl Marx and Max Weber, for example, argued that although markets existed in antiquity, economies in which structures of production and distribution responded to the laws of supply and demand developed only as recently as the 19th century. A recent study by an international team of researchers led by Dr hab. Adam Izdebski from the JU Institute of History uses palynology – the study of pollen remains extracted from cored sediments – to challenge this belief and provide evidence for an integrated market economy existing in ancient Greece. Market integration began earlier than assumed Using publicly available data from the European Pollen Database, as well as data from other investigators, researchers analysed pollen assemblages from 115 samples taken from six sites in southern Greece to measure landscape change. Using radiocarbon dating to tie their measurements to historical time, researchers followed the change in percentage values for individual plant taxa between 1000 BCE and 600 CE and observed a decrease in pollen from cereals, a staple of the ancient Greek diet, during a period of apparent population growth. This decrease occurred at the same time as an increase in the proportion of olive and vine pollen. These trends raise an important question: why would local producers choose to plant olives and vines instead of cereal grains, when the demand for this staple food must have been high and mounting?
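The percentage-based approach described above (computing each taxon’s share of the pollen in a dated sample and following that share through time) can be sketched in a few lines of Python. The sample dates and counts below are invented purely for illustration; they are not the study’s data or code.

```python
# Sketch: convert raw pollen counts per dated sample into percentage
# values per taxon, as in percentage-based palynological analysis.
# All numbers below are made up for illustration.

samples = [
    # (approximate date, {taxon: raw pollen grain count})
    (-1000, {"cereal": 120, "olive": 30,  "vine": 10, "other": 340}),
    (-500,  {"cereal": 90,  "olive": 70,  "vine": 25, "other": 315}),
    (100,   {"cereal": 60,  "olive": 110, "vine": 40, "other": 290}),
]

def taxon_percentages(counts):
    """Return each taxon's share of the total pollen sum, in percent."""
    total = sum(counts.values())
    return {taxon: 100.0 * n / total for taxon, n in counts.items()}

for date, counts in samples:
    pct = taxon_percentages(counts)
    print(date, {t: round(p, 1) for t, p in pct.items()})
```

With real counts, following the cereal share across dated samples in this way would surface the kind of decrease (and the matching olive/vine increase) the researchers observed.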
In the current study, researchers argue that pollen data from southern Greece reveals an export economy based on cash cropping as early as the Archaic period, primarily through olive cultivation. Although archaeological evidence from these periods documents the movement of goods, quantifiable data on market integration and structural changes in agricultural production have been very limited. ‘In this paper’, says lead author Adam Izdebski, ‘we introduce pollen records as a new source of quantitative data in ancient economic history’. From mud to markets: Integrated scientific approaches reveal an integrated ancient economy Before arriving at their conclusions, the researchers compared the trends they observed in the pollen data with three other sources of data. First, they observed a decrease in pollen from uncultivated landscapes corresponding with each increase in settlement numbers. This correlation between the number of settlements and the exploitation of the land supports the methodology of the study and indicates the potential of palynology for future studies in a variety of scientific disciplines. Researchers then looked for evidence of increased trade activity in Mediterranean shipwrecks, which are routinely used to estimate maritime trade and overall economic activity. After restricting their search to wrecks from the appropriate period and region, scientists observed trends in shipwrecks consistent with the trends found in cereal, olive, and vine pollen. Both sources of data suggest an economic boom in the 1st and 2nd centuries CE, a decline in the 4th and 5th centuries, and a smaller boom in the 6th century. Finally, researchers examined trends in the presence of large-scale oil and wine presses in the Mediterranean.
The presence of these machines, although not located in Greece, indicates a pattern of broad economic trends in the region and changing incentives for the production of large quantities of olive oil and wine. Again, the researchers found that trends in archaeological findings of oil and wine presses were consistent with trends in cereal, olive, and vine pollen. The emergence of integrated markets and capitalist economies in the early modern era is believed to lie at the roots of the Anthropocene, the current epoch in which humanity has become a major geological force. The current study shows that structural developments of the kind that occurred on a large scale through European colonization from the 15th century onward were already possible several thousand years earlier. Source: press release
An artist’s impression of the temperate rainforest in West Antarctica 90 million years ago. Image: By Alfred-Wegener-Institut/J. McKay (Creative Commons licence) An ice-free polar forest once flourished, helped by enough heat and ample greenhouse gas. It could come back. Many millions of years ago, the southern continent wasn’t frozen at all, but basked in heat balmy enough for an ice-free polar forest to thrive. And ancient pre-history could repeat itself. Climate scientists can tell you what the world could be like were today’s greenhouse gas concentrations to triple – which they could do if humans go on clearing tropical forests and burning fossil fuels. They know because, 90 million years ago, the last time carbon dioxide levels in the atmosphere went past the 1200 ppm (parts per million) mark, sea levels were 170 metres higher than today and the world was so warm that dense forests grew in what is now Antarctica. At latitude 82° South, a region where the polar night lasts for four months, there was no icecap. Instead, the continental rocks were colonised by conifer forest, with a mix of tree ferns and an understorey of flowering shrubs. Even though at that latitude the midday sun would have been relatively low in the sky, and the forests would have had to survive sustained winter darkness for a dozen weeks or more, average temperatures would have matched those of modern-day Tasmania, a good 2°C warmer than modern Germany. “Even during months of darkness, swampy temperate forests were able to grow close to the South Pole, revealing an even warmer climate than we expected” German and British researchers report in the journal Nature that they took a closer look at a sequence of strangely-coloured mudstone in a core drilled 30 metres below the sea floor, off West Antarctica. The section of sediment had been preserved from the mid-Cretaceous, around 90 million years ago, in a world dominated by dinosaurs.
By then, the first mammals may have evolved, the grasses were about to emerge, and seasonal flowering plants had begun to colonise a planet dominated for aeons by evergreens. And in the preserved silt were pollens, spores, tangled roots and other plant material so well preserved that the researchers could not just identify the plant families, but even take a guess at parallels with modern forests. Before their eyes was evidence of something like the modern rainforests of New Zealand’s South Island, but deep inside the Antarctic Circle. “The preservation of this 90 million-year-old forest is exceptional, but even more surprising is the world it reveals,” said Tina van de Flierdt, of Imperial College London. “Even during months of darkness, swampy temperate forests were able to grow close to the South Pole, revealing an even warmer climate than we expected.” British rain levels Somewhere between 115 and 85 million years ago, the whole world was a lot hotter: in the tropics temperatures reached 35°C, and the average temperature of that part of the Antarctic was 13°C – at least two degrees higher than the average temperature for modern Germany. Average temperatures in summer went up to 18.5°C, and the water temperatures in the swamps and rivers tipped 20°C, only 900 km from the then South Pole. Modern Antarctica is classed as desert, with minimal precipitation: then it would have seen 1120 mm a year. People from southwestern Scotland or parts of Wales would have felt at home. It is an axiom of earth science that the present is the key to the past: if such forests can flourish at existing temperatures today, then the same must have been true in the deep past.
So climate scientists from the start have taken a close interest in the evidence of intensely warm periods in the fossil record: a mix of plant and animal remains, the ratio of chemical isotopes preserved in rock, and even the air bubbles trapped in deep ice cores can help them reconstruct the temperatures, the composition of the atmosphere and the rainfall of, for example, the warmest periods of the Pliocene, when carbon dioxide levels in the atmosphere tipped the 1000 ppm mark, and average planetary temperatures rose by 9°C. Prehistoric encore approaching? In the past century, atmospheric CO2 levels have swollen from 285 ppm to more than 400 ppm, and the planetary thermometer has already crept up by 1°C above the level for most of human history. If human economies continue burning fossil fuels at an ever-increasing rate, the conditions that prevailed 56 million years ago could return by 2159. The Cretaceous evidence will help climate scientists calibrate their models of a world in which greenhouse gas emissions go on rising. “Before our study, the general assumption was that the global carbon dioxide concentration in the Cretaceous was roughly 1000 ppm,” said Johann Klages, of the Alfred Wegener Institute centre for polar and marine research in Germany, who led the study. “But in our model-based experiments, it took concentration levels of 1120 to 1680 ppm to reach the average temperatures back then in Antarctica.” – Climate News Network About the Author Tim Radford is a freelance journalist. He worked for The Guardian for 32 years, becoming (among other things) letters editor, arts editor, literary editor and science editor. He won the Association of British Science Writers award for science writer of the year four times. He served on the UK committee for the International Decade for Natural Disaster Reduction. He has lectured about science and the media in dozens of British and foreign cities. 
Book by this Author: Science that Changed the World: The untold story of the other 1960s revolution by Tim Radford. Climate Adaptation Finance and Investment in California by Jesse M. Keenan This book serves as a guide for local governments and private enterprises as they navigate the uncharted waters of investing in climate change adaptation and resilience. It serves not only as a resource guide for identifying potential funding sources but also as a roadmap for asset management and public finance processes. It highlights practical synergies between funding mechanisms, as well as the conflicts that may arise between varying interests and strategies. While the main focus of this work is on the State of California, this book offers broader insights for how states, local governments and private enterprises can take those critical first steps in investing in society’s collective adaptation to climate change. Available On Amazon Nature-Based Solutions to Climate Change Adaptation in Urban Areas: Linkages between Science, Policy and Practice by Nadja Kabisch, Horst Korn, Jutta Stadler, Aletta Bonn This open access book brings together research findings and experiences from science, policy and practice to highlight and debate the importance of nature-based solutions to climate change adaptation in urban areas. Emphasis is given to the potential of nature-based approaches to create multiple benefits for society. The expert contributions present recommendations for creating synergies between ongoing policy processes, scientific programmes and practical implementation of climate change and nature conservation measures in global urban areas. Available On Amazon A Critical Approach to Climate Change Adaptation: Discourses, Policies and Practices by Silja Klepp, Libertad Chavez-Rodriguez This edited volume brings together critical research on climate change adaptation discourses, policies, and practices from a multi-disciplinary perspective.
Drawing on examples from countries including Colombia, Mexico, Canada, Germany, Russia, Tanzania, Indonesia, and the Pacific Islands, the chapters describe how adaptation measures are interpreted, transformed, and implemented at grassroots level and how these measures are changing or interfering with power relations, legal pluralism and local (ecological) knowledge. As a whole, the book challenges established perspectives of climate change adaptation by taking into account issues of cultural diversity, environmental justice and human rights, as well as feminist or intersectional approaches. This innovative approach allows for analyses of the new configurations of knowledge and power that are evolving in the name of climate change adaptation. Available On Amazon From The Publisher: Purchases on Amazon go to defray the cost of bringing you InnerSelf.com, MightyNatural.com, and ClimateImpactNews.com at no cost and without advertisers that track your browsing habits. Even if you click on a link but don't buy these selected products, anything else you buy in that same visit on Amazon pays us a small commission. There is no additional cost to you, so please contribute to the effort. You can also use this link to shop on Amazon at any time so you can help support our efforts.
What is the BALLYA Trichinosis Test? The BALLYA Trichinosis Test is manufactured by BALLYA. It is a rapid test for the detection of trichinosis in pigs – a lateral flow assay based on gold immunochromatography. What is Trichinosis? Trichinosis is a zoonotic disease caused by Trichinella spiralis. People are infected by live Trichinella larvae in raw or undercooked food. The main clinical manifestations are gastrointestinal symptoms, fever, eyelid edema and muscle pain. Humans are infected by eating raw or undercooked pork or other animal meat. Cysts in skeletal muscle can survive for 57 days at −12 °C, and can survive for 2 to 3 months in carrion. Inadequate grilling or rinsing is not enough to kill the encysted larvae. In addition, fecal transmission among animals has received some attention, and such transmission between people is not impossible; the feces excreted within 4 hours after infection are the most infectious. More than 100 animal species are susceptible to Trichinella, including pigs, dogs, cats, mice, foxes, wolves and wild boars. Humans are also susceptible and can develop serious disease. Infection of pigs with Trichinella spiralis is mainly caused by eating undercooked swill containing Trichinella spiralis, waste meat residues and scraps, and is mainly found in grazing pigs. What is Trichinella spiralis? The male worm is 1.4 to 1.6 mm long and flatter at the front than at the rear, with the anus at the end and a large copulatory appendage on each side. Females are about twice the length of males and also have the anus at the end. The vulva is located near the esophagus. The female’s single uterus is filled with developing eggs at the rear, while the front contains fully developed larvae. Trichinellosis life cycle Female Trichinella worms survive for about six weeks, during which time they can produce up to 1,500 larvae. When a spent female dies, she leaves the host.
Larvae can enter the circulatory system and migrate around the host’s body looking for muscle cells to encapsulate in. Migration and encapsulation of the larvae cause fever and pain, produced by the host’s inflammatory response. In some cases, accidental migration to specific organs and tissues can cause myocarditis and encephalitis, and can lead to death. When larvae invade the muscles, the muscles become acutely inflamed, with degeneration of muscle cells, tissue congestion, and bleeding. In the later period, a muscle biopsy or post-mortem muscle examination reveals pale muscles with white nodules the size of a pinpoint on the cut surface. Microscopic examination reveals the cysts of the worm, each containing a larva bent in the shape of a folding knife, with a peripheral capsule formed of connective tissue. When the adult invades the intestinal epithelium, it causes inflammation of the intestinal mucosa: mucosal hypertrophy, edema, infiltration of inflammatory cells, increased exudation, intestinal contents filled with mucus, mucosal bleeding spots, and occasional ulcers. What is swine trichinosis? Mildly infected pigs usually show no symptoms, or only mild enteritis. In severe infections, body temperature rises and there is diarrhea and blood in the stool; sometimes vomiting, loss of appetite and rapid weight loss, with death in about half a month, or the disease becomes chronic. After infection, the larvae enter the muscles and cause acute muscle inflammation, pain, and fever, sometimes with difficulty swallowing, chewing or walking, and eyelid edema. The symptoms disappear after about a month, and pigs that have resisted the disease become long-term carriers. It is difficult to make a diagnosis based on clinical data alone.
The larvae produced by Trichinella are not excreted with the feces; although Trichinella cysts or larvae are occasionally found in the host’s feces, they are extremely difficult to find, so fecal examination is not suitable for this disease. If the disease is suspected, it can only be diagnosed by examining the muscles for the worms. Cut a small piece of tongue muscle, press it flat, and observe it under a microscope. The cyst that parasitizes striated muscle has a two-layered wall, with the larva curled inside like a folding knife. The cyst is about 0.3 mm wide and about 0.4 mm long, appearing to the naked eye as white pinpoint specks. Serological tests can also be applied, using enzyme-linked immunosorbent assays, indirect hemagglutination inhibition tests, intradermal tests, and precipitation tests to check whether Trichinella-specific antibodies are elevated in the serum. If they are, the disease can be confirmed. How to treat swine trichinosis? There is no specific treatment for this disease. Prothioimidazole, thiabendazole or tomidazole can be tried, at 25-40 mg per kilogram of body weight per day, orally in two or three divided doses; one course of treatment lasts 5-7 days and can kill adults and muscle larvae. How to prevent trichinosis? The premise of preventing the disease is to improve citizens’ awareness of safety and health, which is the key to prevention. On this basis, maintain good public hygiene, strengthen breeding management, and burn or bury animal corpses. Pig farmers are prohibited from feeding pigs with meat-washing water, to prevent the disease; for safety, pig farmers should regularly check and deworm their animals and pay attention to personal hygiene; health and quarantine departments should strengthen quarantine.
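The dosing arithmetic above (25-40 mg per kilogram of body weight per day, given orally in two or three divided doses) can be made explicit with a small helper. This is only an illustration of the arithmetic, not veterinary guidance, and the function names are invented:

```python
def daily_dose_range_mg(body_weight_kg, low_mg_per_kg=25, high_mg_per_kg=40):
    """Total daily dose range in mg for a given body weight,
    using the 25-40 mg/kg/day range quoted in the text."""
    return body_weight_kg * low_mg_per_kg, body_weight_kg * high_mg_per_kg

def per_dose_mg(total_daily_mg, doses_per_day):
    """Split the total daily amount into equal oral doses."""
    return total_daily_mg / doses_per_day

# Example: a 50 kg pig dosed three times a day.
low, high = daily_dose_range_mg(50)
print(low, high)                                  # total mg per day, low and high end
print(per_dose_mg(low, 3), per_dose_mg(high, 3))  # mg per single dose
```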
Once sick pigs or contaminated meat are found, dispose of them in strict accordance with the Food Sanitary and Quarantine Regulations and the Animal Sanitary and Quarantine Regulations; pig houses and pig farms should try to eliminate rats and prevent pigs from swallowing dead carcasses and other animal remains, to reduce the chance of infection and transmission. Which animals can be detected by the BALLYA Trichinosis test? The BALLYA Trichinosis test can be applied to all kinds of pigs. The characteristics of this product are short detection time, simple operation and low price. Significance of testing Trichinosis In order to ensure the safety of consumers, reduce the economic losses of farm owners, prevent sick pigs from entering the market, and treat sick pigs in time, the Trichinosis Test can play a key role. It helps ensure the safety of farm owners and consumers and reduce unnecessary losses. Components of the BALLYA Trichinosis Test: BALLYA Trichinosis Test, 20 cassettes; Swab, 20 pcs; PE Gloves, 1 packet; Sample Buffer, 20 vials; Mini Pipette; Disposable micropipette tips (optional); Kit Instruction, 1 pcs. How to use the BALLYA Trichinosis Test? Whole Blood or Serum Sample 1. Add 2 drops of whole blood or serum into the buffer solution test tube. 2. Cover the lid and shake. 3. Take out the card and place it on a flat desk. 4. Draw up the sample and carefully add 4 drops into the sample well. 5. Read the result at 10 minutes. Results read after 15 minutes are invalid. Flesh Tissue Sample 1. Cut 1 g of flesh tissue, with no fat. 2. Cut up the flesh and add it into the buffer solution test tube. 3. Cover the lid and shake. 4. Take out the card and place it on a flat desk. 5. Draw up the sample and carefully add 4 drops into the sample well. 6. Stand for 10 minutes at room temperature and read the result. Results read after 15 minutes are invalid. Limitations of the BALLYA Trichinosis Test? The BALLYA Trichinosis Test is a qualitative test kit. It is for screening purposes only.
If there are positive or suspected cases, other detection methods such as ELISA, PCR or qPCR should be used for further confirmation. The pig industry has developed rapidly, and people’s demand for pork has increased significantly. In order to protect consumers’ meat safety and health, swine trichinellosis, as one of the diseases that must be inspected for in pig slaughter and quarantine, is of great safety significance. The BALLYA Trichinosis Test provided by BALLYA can effectively detect whether pig trichinellosis is present. The kit is not only simple to operate, but also has a short test time and high accuracy, allowing the veterinarian to respond accordingly. Where to buy the BALLYA Trichinosis Test? Get a free quote now and enjoy a 10% discount!
This lesson is aimed at helping those pupils who, after passing through level one (Primary 1 and 2), reach levels two and three (Primary 3 – 6) without knowing how to solve a simple equation in Mathematics. As a result they fail their exams – not because they are dull. Some teachers also teach Mathematics hurriedly, without taking some basic principles into consideration. This class of pupils is merely slow to understand and needs gradual guidance in calculation. This blog addresses the difficulties faced by these pupils at all levels. Parents and teachers are advised to use this lesson for a better result in the performance of slow learners.
What are eating disorders and disordered eating? Eating disorders are serious mental illnesses that also affect physical health. The most common eating disorders are: - anorexia nervosa, which is when someone tries to lose more weight than is healthy and has a distorted body image - bulimia nervosa, which is when someone eats very large amounts of food and then gets rid of the food – for example, by vomiting or using laxatives - avoidant restrictive food intake disorder (ARFID), which is when someone eats only a small range or amount of food and doesn't get all the nutrients they need - binge eating disorder, which is when someone eats very large amounts of food and feels distressed about their eating, but doesn’t try to get rid of the food. Disordered eating is behaviour that isn’t quite as severe or regular as the behaviour in anorexia nervosa, bulimia nervosa or binge eating disorder. Disordered eating can be just as serious as the other eating disorders, and it needs treatment too. Someone with disordered eating might be at risk of developing an eating disorder. Although girls are most at risk of eating disorders, boys can develop them too. Boys sometimes go untreated for longer because parents and health professionals aren’t looking for body image and eating problems in boys. Red flags for eating disorders Changes in your child’s eating habits, mood, behaviour, physical health and appearance can be red flags for eating disorders. Note that you don’t have to be ‘thin’ to have an eating disorder. In fact, rapid weight loss in teenagers of any size can be a sign of an eating disorder. Food and eating habits You might notice that your child: - prepares food for others, but doesn’t eat it - cuts down on portion sizes or shows other signs of highly limited eating and dieting - cuts out ‘junk food’ or major food groups like meat or dairy - loses weight or goes up and down in weight. 
You might notice that your child seems anxious or irritable, particularly around mealtimes. You might notice that your child: - avoids social activities, particularly ones that involve food - goes to the bathroom or toilet straight after meals - vomits or uses laxatives - exercises too much, particularly while alone in the bedroom. Friends, teachers or coaches might tell you that something doesn’t seem right with your child. Physical health and appearance You should also be concerned if you notice physical changes in your child, including: - irregular periods in your daughter, or her periods stopping altogether - tiredness or lack of energy all the time - complaints about being cold all the time, even in warm weather - faintness or dizziness - soft downy hair growing on your child’s face, arms or torso - hair loss from your child’s head. Swollen or puffy cheeks, damaged teeth or gums, and sores on the knuckles or hands might be signs that teenagers are making themselves vomit. Talking with your child about disordered eating and eating disorders If you notice any of the red flags above, you need to talk with your child and a health professional as soon as you can. If you just think that something isn’t right about the way your child is eating or behaving around food, trust your judgment and talk with your child. It’s important to be sensitive, caring and non-judgmental when you talk with your child about food, weight and body image, but it could be a tricky conversation. You might feel really worried, and your child might get angry and say that there isn’t a problem. Even if this happens, try to stay calm and send the message that you’re concerned about your child’s health and wellbeing, not your child’s weight and appearance. You might need to say that you think your child needs to see a health professional. If you’re not sure how to talk about these issues, you could first visit your GP or mental health professional and ask for help. 
Contacting a support organisation for eating disorders is another option. If your child has an eating disorder, your love and support will be very important in helping your child get better. Getting help for eating disorders If you’re worried about your child’s eating habits, it’s a good idea to take your child to see a GP or mental health professional as soon as possible. If possible, try to find a health professional who has experience in eating disorders. Your GP can refer your child if necessary. Early intervention for disordered eating can stop problem eating turning into a more severe eating disorder. It might save your child from intensive treatment and a very long recovery. Also, it might be easier to get your child to see a health professional now rather than later. Support services for eating disorders For adolescent eating disorders support services in your state, contact your specialist children’s hospital. For information about support and treatment services for eating disorders, you can also contact: - Butterfly Foundation, Australia’s national foundation for eating disorders - InsideOut, Australia’s national institute for eating disorders. If you’re concerned about an eating disorder or body image issue, you can get free support from a qualified counsellor by calling Butterfly Foundation’s national helpline on 1800 334 673, 8 am-midnight, seven days a week. You can also contact the helpline using email or webchat. Why teenagers can be at risk of disordered eating and eating disorders We don’t know why some children develop eating disorders. But adolescence can be a risky time for teenagers treating their bodies in unhealthy ways. During adolescence, your child’s body and your child’s brain grow and develop very quickly. There are lots of changes going on in the way your child thinks, feels and relates to people. Many teenagers are more aware of body image. At the same time, your child needs more of the right kinds of food. 
But it can be harder to keep up with teenage nutritional needs because they’re growing so fast. Lifestyle and food habits might change as your child begins to eat more meals and snacks away from home. And this is also a time when young people are more aware of and influenced by media messages and information at school about health, obesity and dieting. So you might notice some changes in your child’s eating habits and attitudes towards food, including: - eating at random times and/or skipping meals - eating more convenience foods and high-energy sugary snacks and drinks - being more aware of information about ‘healthy’ eating, obesity and diets - experimenting with dieting and restrictive eating – that is, not eating certain foods or food groups. The combination of all these things can lead some teenagers to develop eating habits that aren’t good for their growing bodies. Other risk factors for eating disorders We can’t link eating disorders to a particular gene, environment or personality type. But there are some factors that can put young people at higher risk of developing an eating disorder. These risk factors include:
During the eighteenth century, British North American colonists experienced many economic, social, and political changes. In an attempt to expand the empire, the British adopted mercantilist policies to tie the colonies to the mother country. Through a series of Navigation Acts, the British pushed the colonies into a trade network that proved beneficial to most participants. The colonies produced raw materials and exchanged them for goods manufactured in the mother country. Such economic growth caused an increase in the colonists’ standard of living. To reinforce mercantilism, the British attempted to extend their political control over the colonies. Under the political system that gave power to the colonial governor and the colonial assembly, the colonists concluded they had certain political rights, including the right to protest policies they did not like. The American colonists also experienced social changes stemming from the Great Awakening, a wave of religious revivalism, and the Enlightenment, a period of intellectual development promoting personal improvement and social betterment. Both led to positive developments in American society during the eighteenth century. They caused the American colonists to be distrustful of institutionalized authority, yet favorably disposed to education and the instruction of educators. Moreover, the Enlightenment caused America’s educated elite to be suspicious of any attempt to shackle their minds or erode the rights of English citizens. Although different in their goals, the Great Awakening and the Enlightenment had similar motivations, largely in the way they revealed the fundamental pragmatism and practicality of the American people. The attempt to expand the empire did not just affect internal colonial policy. The British wanted to eliminate France and Spain from the New World. Metacom’s War centered on tensions between New England settlers and the Wampanoags as the number of settlers increased.
Bacon’s Rebellion focused on concerns about the availability of land in Virginia as more indentured servants survived their terms of service and looked to obtain their own plots. However, the remaining wars, King William’s War (1689-1697), Queen Anne’s War (1702-1713), and King George’s War (1744-1748), stemmed from the tensions between the European powers. Many colonists paid a high price for their participation in these wars. Their losses certainly lent themselves to a feeling that the colonists had made significant sacrifices for England, and therefore deserved equal and fair treatment as citizens of the British crown. British attempts to expand their power in North America ultimately paved the way for the revolution.
What is a homograph? Homographs are words that have the same spelling but different meanings, whether they’re pronounced the same or not. Bass (the fish, rhymes with class) and bass (the instrument, rhymes with ace) are homographs. But so are bark (the sound a dog makes) and bark (the covering of a tree). These two senses of bark can also be considered homophones. You can learn more about the difference in the next section. There are many homographs in English, including many commonly used words, which can make things confusing, even for native speakers. What’s the difference between homograph, homophone, and homonym? Homograph, homophone, and homonym all start with homo-, which means “same.” The -graph in homograph means “written.” Homographs are words that are written the same—meaning they always have the same spelling—but have different meanings. Homographs can be pronounced the same or not. For example, tear (rhymes with ear) and tear (rhymes with air) are homographs. So are bear (the animal) and bear (the verb meaning “to carry”). The -phone in homophone means “sound.” Homophones are words that sound the same but have different meanings, whether they’re spelled the same or not. There, their, and they’re are homophones. Bear (the animal) and bare (meaning “uncovered” or “empty”) are homophones. So are bear (the animal) and bear (the verb meaning “to carry”). As you can see, the two senses of bear can be considered both homographs and homophones. When words are both homographs and homophones—meaning they have both the same spelling and the same pronunciation, but different meanings—they can be called homonyms. The -nym in homonym means “name.” The word homonym can also be used as a synonym (there’s that -nym again) for either homophone or homograph. 
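The three-way distinction above (same spelling, same sound, or both) can be sketched as a small comparison. This is only an illustration of the definitions in this entry; the pronunciation labels ("TEER", "TAIR", "BAIR") are rough made-up tags, not real phonetic transcriptions.

```python
def classify(word_a, word_b):
    """Return which of homograph/homophone/homonym apply to two word senses.

    Each word sense is a (spelling, pronunciation) pair.
    """
    (spell_a, sound_a), (spell_b, sound_b) = word_a, word_b
    labels = []
    if spell_a == spell_b:
        labels.append("homograph")   # written the same
    if sound_a == sound_b:
        labels.append("homophone")   # sound the same
    if spell_a == spell_b and sound_a == sound_b:
        labels.append("homonym")     # same spelling AND same sound
    return labels

# tear (crying) vs tear (ripping): same spelling, different sound
assert classify(("tear", "TEER"), ("tear", "TAIR")) == ["homograph"]
# bear (animal) vs bare: same sound, different spelling
assert classify(("bear", "BAIR"), ("bare", "BAIR")) == ["homophone"]
# bear (animal) vs bear (to carry): same spelling and sound, so all three apply
assert classify(("bear", "BAIR"), ("bear", "BAIR")) == ["homograph", "homophone", "homonym"]
```

This mirrors the note in the entry that the two senses of bear can be considered homographs, homophones, and therefore homonyms.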
Overall, knowing what the word homograph means is a lot less important than making sure you use homographs properly so people can understand what you mean. What are real-life examples of homographs? Homographs can be a source of confusion, especially when they’re used out of context. Yep. I’m a firm believer that folks can tell the difference between homographs given proper context. Shoot a bow, take a bow, tie a bow, a cellist’s favorite bow, the bow plunged beneath the waves. — Sean (@DailyChef7) November 27, 2018 There's not a ton of homographs I mix up regularly, but for some reason I most always read "polish" (like shoes) as "Polish" (like sausage) — Finty Prasandhoff (@thynctank) February 23, 2016 Which of the following word pairs are homographs?
A. air and heir
B. play and play
C. flu and flew
D. fly and flew
Spatial reasoning is a basic skill in geometry, and the relation between early spatial thinking and later mathematical development has been documented for decades. This knowledge should be developed in children at the earliest possible age. The American National Research Council (National Research Council, 2006) considers spatial thinking a basic skill that can be learned and formally taught to all students using well-designed tools, technologies, and curricula. Children are able to exceed their programme expectations if they have adequate opportunities to learn. There is a need to introduce young children to tasks that relate to a more dynamic and transformational approach to geometry. The advent of technology gives teachers an excellent opportunity to present geometry more dynamically than ever before. As teachers, we should use modern methods, including information technology and computer software. Children are currently exposed to a wide range of technological devices, such as electronic games and other software intended for both entertainment and information. Using technology to teach young children seems to be an obvious way to make learning geometry pleasant for them. In that sense, a child should work and cooperate with other children and teachers, while teaching should also cover a wide range of approaches, including play. NeoTrie appears to meet all these conditions! NeoTrie was tested as part of math lessons at the Primary School in Żernica (Poland) in the 2017/18 academic year. In the 2018/19 academic year, newly implemented tools were used, shared, and tested in schools in Spain, the Netherlands, France, and other countries within the Scientix pilot project, in cooperation with didactics and mathematics researchers from the University of Almería. The first step was to prepare the NeoTrie lessons. 
The second step was to record, during the lesson, some important information, such as:
- The date;
- The topic of the lesson;
- The number and age of the pupils;
- A description of the activities performed using NeoTrie;
- The necessary tools;
- The advantages and disadvantages.
These recommendations provide guidelines and suggestions on how to organize a lesson using NeoTrie in the most effective way. This article presents lessons on triangles and segments in a prism, as well as lessons introducing the concept of fractions and comparing and adding fractions. It was observed that during the lessons with NeoTrie the pupils were focused, disciplined, mobilized to work, and satisfied with the tasks they successfully performed. They clearly found classes with NeoTrie more attractive than those using other didactic methods. You will find more information about the program and its use during lessons in the article published in “Psychology, Society & Education“. Did you try the activities with your students? Share your experiences in the comments! You can also find examples in the attached files. Author: Grazyna Morga Official page of NeoTrie VR: https://sites.google.com/ual.es/neotrie-forum/ The project on Facebook: https://www.facebook.com/neotrie/ National Research Council (2006). Learning to think spatially: GIS as a support system in the K-12 curriculum. Washington, DC: National Academy Press
A series circuit is one with all the loads in a row; there is only one path for the electricity to flow. If this circuit was a string of light bulbs, and one blew out, the remaining bulbs would turn off. Unlike in series circuits, a charge in a parallel circuit encounters a single voltage drop during its path through the external circuit. The current through a given branch can be predicted using the Ohm's law equation, with the voltage drop across the resistor and the resistance of the resistor. A parallel circuit has certain characteristics and basic rules: a parallel circuit has two or more paths for current to flow through, and voltage is the same across each component of the parallel circuit.
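The rules above can be checked numerically. A minimal sketch, with example component values (a 12 V source and three branch resistors) assumed purely for illustration:

```python
def branch_currents(voltage, resistances):
    """Each branch sees the full source voltage, so I = V / R per branch (Ohm's law)."""
    return [voltage / r for r in resistances]

def total_resistance(resistances):
    """Equivalent resistance of parallel resistors: 1/Req = sum of 1/Ri."""
    return 1 / sum(1 / r for r in resistances)

v = 12.0                 # volts (example value)
rs = [4.0, 6.0, 12.0]    # ohms (example values)

currents = branch_currents(v, rs)   # 3 A, 2 A, and 1 A
req = total_resistance(rs)          # 2 ohms equivalent

# The branch currents add up to the total current drawn from the source
# (Kirchhoff's current law): 3 + 2 + 1 = 6 A = 12 V / 2 ohms.
assert abs(sum(currents) - v / req) < 1e-9
```

Note how the smallest resistor carries the largest current, while every branch sees the same 12 V drop.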
At this point in time, one of the greatest mysteries in astronomy is where short, dramatic bursts of radio light seen across the universe, known as Fast Radio Bursts (FRBs), are originating from. Although FRBs last for only a thousandth of a second, there are now hundreds of records of these enigmatic sources. However, from these records, the precise location is known for just four FRBs – they are said to be ‘localised’. In 2016, one of these four sources was observed to repeat, with bursts originating from the same region in the sky, in a non-predictable way. This resulted in researchers drawing distinctions between FRBs where only a single burst of light was observed (‘non-repeating’) and those where multiple bursts of light were observed (‘repeating’). “The multiple flashes that we witnessed in the first repeating FRB arose from very particular and extreme conditions inside a very tiny (dwarf) galaxy,” says Benito Marcote, from the Joint Institute for VLBI ERIC and lead author of the current study. “This discovery represented the first piece of the puzzle but it also raised more questions than it solved, such as whether there was a fundamental difference between repeating and non-repeating FRBs. Now, we have localised a second repeating FRB, which challenges our previous ideas on what the source of these bursts could be.” On 19th June 2019, eight telescopes from the European VLBI Network (EVN) simultaneously observed a radio source known as FRB 180916.J0158+65. This source was originally discovered in 2018 by the CHIME telescope in Canada, which enabled the team, led by Marcote, to conduct a very high resolution observation with the EVN in the direction of FRB 180916.J0158+65. During five hours of observations the researchers detected four bursts, each lasting for less than two thousandths of a second. 
The resolution reached through the combination of the telescopes across the globe, using a technique known as Very Long Baseline Interferometry (VLBI), meant that the bursts could be precisely localised to a region of approximately only seven light years across. This localisation is comparable to an individual on Earth being able to distinguish a person on the Moon. With this location the team were able to conduct observations with one of the world’s largest optical telescopes, the 8-m Gemini North on Mauna Kea in Hawaii. Examining the environment around the source revealed that the bursts originated from a spiral galaxy (named SDSS J015800.28+654253.0), located half a billion light years from Earth – specifically, from a region of that galaxy where star formation is prominent. “The found location is radically different from the previously located repeating FRB, but also different from all previously studied FRBs,” explains Kenzie Nimmo, PhD student at the University of Amsterdam. “The differences between repeating and non-repeating fast radio bursts are thus less clear and we think that these events may not be linked to a particular type of galaxy or environment. It may be that FRBs are produced in a large zoo of locations across the Universe and just require some specific conditions to be visible.” While the current study casts doubt on previous assumptions, this FRB is the closest to Earth ever localised, allowing astronomers to study these events in unparalleled detail. “We hope that continued studies will unveil the conditions that result in the production of these mysterious flashes. Our aim is to precisely localize more FRBs and, ultimately, understand their origin,” concludes Jason Hessels, corresponding author on the study, from the Netherlands Institute for Radio Astronomy (ASTRON) and the University of Amsterdam.
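The "person on the Moon" comparison can be checked with small-angle arithmetic. The figures below (a person of roughly 1.7 m, the Moon at about 384,400 km) are assumed for illustration; the FRB numbers come from the text.

```python
import math

# Small-angle approximation: theta ~= size / distance (in radians).
RAD_TO_MAS = 180 / math.pi * 3600 * 1000  # radians -> milliarcseconds

# FRB region: ~7 light-years across at ~half a billion light-years.
frb_theta = 7 / 500e6 * RAD_TO_MAS        # ~2.9 milliarcseconds

# A ~1.7 m person at the Moon's distance (~384,400 km = 3.844e8 m).
person_theta = 1.7 / 3.844e8 * RAD_TO_MAS  # ~0.9 milliarcseconds

# Both angles are of order a milliarcsecond, so the analogy holds.
assert 1 < frb_theta < 10
assert 0.1 < person_theta < 10
```

Angles this small are far beyond single-dish radio telescopes, which is why combining telescopes across continents (VLBI) was needed for the localisation.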
How do viruses get their name? Viruses are named based on their genetic structure to facilitate the development of diagnostic tests, vaccines and medicines. Virologists and the wider scientific community do this work, so viruses are named by the International Committee on Taxonomy of Viruses (ICTV). Where do coronaviruses come from? Coronaviruses are often found in bats, cats and camels. The viruses live in these animals, usually without making them ill. Sometimes these viruses then spread to different animal species. The viruses may change (mutate) as they transfer to other species. Eventually, a virus can jump from an animal species and begin to infect humans. In the case of COVID-19, the first people infected in Wuhan, China are thought to have contracted the virus at a food market that sold meat, fish and live animals. Although researchers don’t know exactly how people were infected, they already have evidence that the virus can be spread directly from person to person through close contact. What is the meaning of COVID-19? COVID-19 is a disease caused by a new strain of coronavirus. ‘CO’ stands for corona, ‘VI’ for virus, and ‘D’ for disease. Formerly, this disease was referred to as ‘2019 novel coronavirus’ or ‘2019-nCoV.’ Is COVID-19 caused by a virus or by bacteria? FACT: The coronavirus disease (COVID-19) is caused by a virus, NOT by bacteria. The virus that causes COVID-19 is in a family of viruses called Coronaviridae. Antibiotics do not work against viruses. Some people who become ill with COVID-19 can also develop a bacterial infection as a complication. In this case, antibiotics may be recommended by a health care provider. There is currently no licensed medication to cure COVID-19. If you have symptoms, call your health care provider or COVID-19 hotline for assistance. Can the coronavirus disease spread through feces? The risk of catching the COVID-19 virus from the faeces of an infected person appears to be low. 
There is some evidence that the COVID-19 virus may lead to intestinal infection and be present in faeces. Approximately 2-10% of cases of confirmed COVID-19 disease presented with diarrhoea (2-4), and two studies detected COVID-19 viral RNA fragments in the faecal matter of COVID-19 patients (5,6). However, to date only one study has cultured the COVID-19 virus from a single stool specimen (7). There have been no reports of faecal-oral transmission of the COVID-19 virus. Can the coronavirus survive on surfaces? It is not certain how long the virus that causes COVID-19 survives on surfaces, but it seems likely to behave like other coronaviruses. A recent review of the survival of human coronaviruses on surfaces found large variability, ranging from 2 hours to 9 days (11). The survival time depends on a number of factors, including the type of surface, temperature, relative humidity and specific strain of the virus. How long have coronaviruses existed? The most recent common ancestor (MRCA) of all coronaviruses is estimated to have existed as recently as 8000 BCE, although some models place the common ancestor as far back as 55 million years or more, implying long-term coevolution with bat and avian species. What is the official name of the coronavirus disease? ICTV announced “severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)” as the name of the new virus on 11 February 2020. Can COVID-19 spread in hot and humid climates? Are CT scans helpful for diagnosing COVID-19? Along with laboratory testing, chest CT scans may be helpful to diagnose COVID-19 in individuals with a high clinical suspicion of infection. Why is the Delta variant dangerous? The variant, first identified in India, is the most contagious yet and, among those not yet vaccinated, may trigger serious illness in more people than other variants do, say scientists tracking the spread of infection. Are the elderly more vulnerable to the coronavirus disease? 
The COVID-19 pandemic is impacting the global population in drastic ways. In many countries, older people are facing the most threats and challenges at this time. Although all age groups are at risk of contracting COVID-19, older people face a significant risk of developing severe illness if they contract the disease, due to physiological changes that come with ageing and potential underlying health conditions. What is the risk of dying for older people? Over 95% of these deaths occurred in those older than 60 years. More than 50% of all fatalities involved people aged 80 years or older. Reports show that 8 out of 10 deaths are occurring in individuals with at least one comorbidity, in particular those with cardiovascular disease, hypertension and diabetes, but also with a range of other chronic underlying conditions. How does COVID-19 spread? Current evidence suggests that the virus spreads mainly between people who are in close contact with each other, typically within 1 metre (short-range). A person can be infected when aerosols or droplets containing the virus are inhaled or come directly into contact with the eyes, nose, or mouth. How many different human coronaviruses are there? Six species of human coronaviruses are known, with one species subdivided into two different strains, making seven strains of human coronaviruses altogether. Is coronavirus a disease? Coronavirus disease (COVID-19) is an infectious disease caused by a newly discovered coronavirus. Is coronavirus disease zoonotic? All available evidence for COVID-19 suggests that SARS-CoV-2 has a zoonotic source. Can smoking waterpipes spread the coronavirus disease? Smoking waterpipes, also known as shisha or hookah, often involves the sharing of mouth pieces and hoses, which could facilitate the transmission of the COVID-19 virus in communal and social settings. Do smokers get more severe symptoms of COVID-19 if infected? 
Smoking any kind of tobacco reduces lung capacity, increases the risk of many respiratory infections, and can increase the severity of respiratory diseases. COVID-19 is an infectious disease that primarily attacks the lungs. Smoking impairs lung function, making it harder for the body to fight off coronaviruses and other respiratory diseases. Available research suggests that smokers are at higher risk of developing severe COVID-19 outcomes and death. Does the coronavirus create stigmas in the population? Stigma occurs when people negatively associate an infectious disease, such as COVID-19, with a specific population. In the case of COVID-19, there are an increasing number of reports of public stigmatization against people from areas affected by the epidemic. Unfortunately, this means that people are being labelled, stereotyped, separated, and/or experience loss of status and discrimination because of a potential negative affiliation with the disease. What is the difference between people who have asymptomatic or pre-symptomatic COVID-19? Both terms refer to people who do not have symptoms. The difference is that ‘asymptomatic’ refers to people who are infected but never develop any symptoms, while ‘pre-symptomatic’ refers to infected people who have not yet developed symptoms but go on to develop symptoms later. Has COVID-19 been detected in drinking water supplies? The COVID-19 virus has not been detected in drinking-water supplies, and based on current evidence, the risk to water supplies is low. Does COVID-19 cause heart problems? In a small number of severe cases, COVID-19 may cause inflammation of the heart muscle (myocarditis) and heart lining (pericarditis). Myocarditis and pericarditis can be caused by other viral infections, not just COVID-19. How severe is the coronavirus disease? Most people infected with the COVID-19 virus will experience mild to moderate respiratory illness and recover without requiring special treatment. 
Older people, and those with underlying medical problems like cardiovascular disease, diabetes, chronic respiratory disease, and cancer, are more likely to develop serious illness. Is smoking dangerous during the COVID-19 pandemic? Current evidence suggests that the severity of COVID-19 disease is higher among smokers. Smoking impairs lung function, making it more difficult for the body to fight off respiratory disease due to the new coronavirus. Tobacco users have a higher risk of being infected with the virus through the mouth while smoking cigarettes or using other tobacco products. If smokers contract the COVID-19 virus, they face a greater risk of getting a severe infection as their lung health is already compromised. Should children wear a mask during the COVID-19 pandemic? Can masks prevent the transmission of COVID-19? What are the known coronaviruses that can infect people? Human coronaviruses are capable of causing illnesses ranging from the common cold to more severe diseases such as Middle East respiratory syndrome (MERS, fatality rate ~34%). SARS-CoV-2 is the seventh known coronavirus to infect people, after 229E, NL63, OC43, HKU1, MERS-CoV, and the original SARS-CoV. Does wearing a mask mean you can have close contact with people during the COVID-19 pandemic? Wearing a mask does not mean you can have close contact with people. For indoor public settings such as busy shopping centres, religious buildings, restaurants, schools and public transport, you should wear a mask if you cannot maintain physical distance from others. Can COVID-19 be transmitted through food? There is currently no evidence that people can catch COVID-19 from food. The virus that causes COVID-19 can be killed at temperatures similar to that of other known viruses and bacteria found in food.
Bias is a term used to describe a tendency or preference towards a particular perspective, ideology, or result. Often bias is most apparent when the tendency demonstrated interferes with the ability to be impartial, unprejudiced, or objective. Simply put, bias pertains to the choices that we would make if we woke up one morning and were given absolute power to make any changes that we could without repercussion. Bias reflects what we hold as important in our world based on what we believe is acceptable. Biases are difficult to manage or change for the following reason: “People do not like to admit that they are wrong, even if provided evidence to the contrary”. Bias is perpetuated through many channels in our society. It can be: Cultural– interpreting or judging actions based on one’s culture. For example, some countries cite American tourists as being some of the worst travelers in the world because we expect other countries to make unreasonable attempts to cater to our needs in respect to signage, menus, and amenities placed in hotel rooms. Ethnic or racial– an example of this is nationalism. Nationalism refers to the devotion to the interests or culture of one’s nation. International soccer violence in the past few years has escalated due to the rise in nationalistic thought equating superiority to winning. Since nations identify heavily with their soccer teams, losing is seen as a cultural deficit. Geographic– the best example of this can probably be seen in the way we perceive persons living in different parts of the United States. The west coast is seen as a region where people are more “laid back”, while the east coast is seen as “fast paced”. Northerners are generally believed to be more educated than Southerners. Media– real or perceived bias of journalists and news producers in the mass media in the selection of events reported and how they are covered. 
MSNBC, for example, is seen by many as a “liberal” outlet for news, while Fox is seen by others as a “conservative” outlet for news. Gender– this relates to sexism. Historically, women have been subjected to an ideal which tells them that they need to stay at home, be docile, and follow the lead of men. Personal– bias that results in personal gain. Often, people hire based on a perceived comfort level with an individual, even though they may not know the person intimately. For instance, hiring a person because they happen to have the same passion for golf that you do, knowing you would like to have a potential golfing partner on staff. Religious– bias for or against religion, faith or beliefs. Not allowing for alternative testing for a student who is celebrating a religious holiday because of the teacher’s belief that the student’s religion is flawed is an example of this.
Methanosarcina is a genus of euryarchaeote archaea that produce methane. These single-celled organisms are the only known anaerobic methanogens that produce methane using all three metabolic pathways for methanogenesis. They live in diverse environments where they can remain safe from the effects of oxygen, whether on the earth's surface, in groundwater, in deep sea vents, or in animal digestive tracts. Methanosarcina grow in colonies. The amino acid pyrrolysine was first discovered in a Methanosarcina species, M. barkeri. Primitive versions of hemoglobin have been found in M. acetivorans, suggesting the microbe or an ancestor of it may have played a crucial role in the evolution of life on Earth. Species of Methanosarcina are also noted for unusually large genomes. M. acetivorans has the largest known genome of any archaeon. According to a theory published in 2014, Methanosarcina may have been largely responsible for the worst extinction event in the Earth's history, the Permian–Triassic extinction event. The theory suggests that acquisition of a new metabolic pathway via gene transfer followed by exponential reproduction allowed the microbe to rapidly consume vast deposits of organic carbon in marine sediments, leading to a sharp buildup of methane and carbon dioxide in the Earth's oceans and atmosphere that killed 90% of the world's species. This theory could better explain the observed carbon isotope level in period deposits than other theories such as volcanic activity. Methanosarcina has been used in waste water treatment since the mid-1980s. Researchers have sought ways to use it as an alternative power source. Methanosarcina are the only known anaerobic methanogens that produce methane using all three known metabolic pathways for methanogenesis. Most methanogens make methane from carbon dioxide and hydrogen gas. Others utilize acetate in the acetoclastic pathway. 
In addition to these two pathways, species of Methanosarcina can also metabolize methylated one-carbon compounds through methylotrophic methanogenesis. Such one-carbon compounds include methylamines, methanol, and methyl thiols. Methanosarcina are the world's most diverse methanogens in terms of ecology. They are found in environments such as landfills, sewage heaps, deep sea vents, deep subsurface groundwater, and even in the gut of many different ungulates, including cows, sheep, goats, and deer. Methanosarcina have also been found in the human digestive tract. M. barkeri can withstand extreme temperature fluctuations and go without water for extended periods. It can consume a variety of compounds or survive solely on hydrogen and carbon dioxide. It can also survive in low pH environments that are typically hazardous for life. Noting its extreme versatility, biologist Kevin Sowers postulated that M. barkeri could even survive on Mars. Methanosarcina grow in colonies and show primitive cellular differentiation. In 2002, the amino acid pyrrolysine was discovered in M. barkeri by Ohio State University researchers. Earlier research by the team had shown that a gene in M. barkeri had an in-frame amber (UAG) codon that did not signal the end of a protein, as would normally be expected. This behavior suggested the possibility of an unknown amino acid which was confirmed over several years by slicing the protein into peptides and sequencing them. Pyrrolysine was the first amino acid discovered since 1986, and the 22nd overall. It has subsequently been found throughout the family Methanosarcinaceae as well as in a single bacterium, Desulfitobacterium hafniense. Both M. acetivorans and M. mazei have exceptionally large genomes. As of August 2008, M. acetivorans possessed the largest sequenced archaeal genome with 5,751,492 base pairs. The genome of M. mazei has 4,096,345 base pairs. 
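The three pathways named above can be summarized by their net reactions. These are standard textbook stoichiometries (shown here for methanol as the example methylated compound), not figures taken from this article:

```latex
% Net reactions for the three methanogenesis pathways
% (standard textbook stoichiometry, added for illustration):
\begin{align*}
\text{hydrogenotrophic:} \quad & \mathrm{CO_2 + 4\,H_2 \rightarrow CH_4 + 2\,H_2O} \\
\text{acetoclastic:}     \quad & \mathrm{CH_3COOH \rightarrow CH_4 + CO_2} \\
\text{methylotrophic:}   \quad & \mathrm{4\,CH_3OH \rightarrow 3\,CH_4 + CO_2 + 2\,H_2O}
\end{align*}
```

Each reaction balances carbon, hydrogen, and oxygen, and each yields methane as the energy-conserving end product, which is what makes Methanosarcina unusual in carrying out all three.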
Methanosarcina cell membranes are made of relatively short lipids, primarily of C25 hydrocarbons and C20 ethers. The majority of other methanogens have C30 hydrocarbons and a mixture of C20 and C40 ethers. Role in early development of life on Earth In 2004, two primitive versions of hemoglobin were discovered in M. acetivorans and another archaeon, Aeropyrum pernix. Known as protoglobins, these globins bind with oxygen much as hemoglobin does. In M. acetivorans, this allows for the removal of unwanted oxygen which would otherwise be toxic to this anaerobic organism. Protoglobins thus may have created a path for the evolution of later lifeforms which are dependent on oxygen. Following the Great Oxygenation Event, once there was free oxygen in Earth's atmosphere, the ability to process oxygen led to widespread radiation of life, and is one of the most fundamental stages in the evolution of Earth's lifeforms. Inspired by M. acetivorans, a team of Penn State researchers led by James G. Ferry and Christopher House proposed a new "thermodynamical theory of evolution" in 2006. Observing that M. acetivorans converts carbon monoxide into acetate, the scientists hypothesized that early "proto-cells" attached to minerals could have similarly used primitive enzymes to generate energy while excreting acetate. The theory thus sought to unify the "heterotrophic" theory of early evolution, where the primordial soup of simple molecules arose from non-biological processes, and the "chemoautotrophic" theory, where the earliest lifeforms created most simple molecules. The authors observed that though the "debate between the heterotrophic and chemotrophic theories revolved around carbon fixation", in actuality "these pathways evolved first to make energy. Afterwards, they evolved to fix carbon." 
The scientists further proposed mechanisms which would have allowed the mineral-bound proto-cell to become free-living and for the evolution of acetate metabolism into methane, using the same energy-based pathways. They speculated that M. acetivorans was one of the first lifeforms on Earth, a direct descendent of the early proto-cells. The research was published in Molecular Biology and Evolution in June 2006. Role in the Permian–Triassic extinction event In December 2012, it was hypothesized that Methanosarcina's methane production may have been the cause of the Permian–Triassic extinction event, in which an estimated 90% of all life on Earth went extinct. A study conducted by Chinese and American researchers supports that hypothesis. Using genetic analysis of about 50 Methanosarcina genomes, the team concluded that the microbe likely acquired the ability to efficiently consume acetate using acetate kinase and phosphoacetyl transferase roughly 240 ± 41 million years ago, about the time of the extinction event 252 million years ago. The genes for these enzymes may have been acquired from a cellulose-degrading bacterium via gene transfer. The scientists concluded that these new genes, combined with widely available organic carbon deposits in the ocean and a plentiful supply of nickel, allowed Methanosarcina populations to increase dramatically. Under their theory, this led to the release of abundant methane as waste. Then, some of the methane would have been broken down into carbon dioxide by other organisms. The buildup of these two gases would have caused oxygen levels in the ocean to decrease dramatically, while also increasing acidity. Terrestrial climates would simultaneously have experienced rising temperatures and significant climate change from the release of these greenhouse gases into the atmosphere. 
It is possible the buildup of carbon dioxide and methane in the atmosphere eventually caused the release of hydrogen sulfide gas, further stressing terrestrial life. The team's findings were published in the Proceedings of the National Academy of Sciences in March 2014. Earlier theories on the cause of the Permian–Triassic extinction event include volcanic activity, global climate change, and an asteroid impact. The microbe theory's proponents argue that it would better explain the observed rapid, but continual, rise in carbon isotope level in period sediment deposits than a volcano, which would cause a spike followed by a slow decline. The microbe theory suggests that volcanic activity played a different role - supplying the nickel which Methanosarcina required as a cofactor. Thus, the microbe theory holds that Siberian volcanic activity was a catalyst for, but not the direct primary cause of the mass extinction. Use by humans In 1985, Shimizu Construction developed a bioreactor that uses Methanosarcina to treat waste water from food processing plants and paper mills. The water is fed into the reactor where the microbes break down the waste particulate. The methane produced by the bacteria is then used to power the reactor, making it cheap to run. In tests, Methanosarcina reduced the waste concentration from 5,000–10,000 parts per million (ppm) to 80–100 ppm. Further treatment was necessary to finish the cleansing process. According to a 1994 report in Chemistry and Industry, bioreactors utilizing anaerobic digestion by Methanothrix soehngenii or Methanosarcina produced less sludge byproduct than aerobic counterparts. Methanosarcina reactors operate at temperatures ranging from 35 to 55 °C and pH ranges of 6.5-7.5. Researchers have sought ways to utilize Methanosarcina's methane-producing abilities more broadly as an alternative power source. In December 2010, University of Arkansas researchers successfully spliced a gene into M. 
acetivorans that allowed it to break down esters. They argued that this would allow it to more efficiently convert biomass into methane gas for power production. In 2011, it was shown that most methane produced during decomposition at landfills comes from M. barkeri. The researchers found that the microbe can survive in low pH environments and that it consumes acid, thereby raising the pH and allowing a wider range of life to flourish. They argued that their findings could help accelerate research into using archaea-generated methane as an alternate power source. - Galagan, J. E.; Nusbaum, C.; Roy, A.; Endrizzi, M. G.; MacDonald, P.; Fitzhugh, W.; Calvo, S.; Engels, R.; Smirnov, S.; Atnoor, D.; Brown, A.; Allen, N.; Naylor, J.; Stange-Thomann, N.; Dearellano, K.; Johnson, R.; Linton, L.; McEwan, P.; McKernan, K.; Talamas, J.; Tirrell, A.; Ye, W.; Zimmer, A.; Barber, R. D.; Cann, I.; Graham, D. E.; Grahame, D. A.; Guss, A. M.; Hedderich, R.; Ingram-Smith, C. (2002). "The Genome of M. Acetivorans Reveals Extensive Metabolic and Physiological Diversity". Genome Research 12 (4): 532–542. doi:10.1101/gr.223902. PMC 187521. PMID 11932238. - Will Dunham (March 31, 2014). "Methane-spewing microbe blamed in Earth's worst mass extinction". Reuters. Retrieved March 31, 2014. - "Methane-Belching Bugs Inspire A New Theory Of The Origin Of Life On Earth". Space Daily. May 15, 2006. - Michael Schirber (July 14, 2009). "Wanted: Easy-Going Martian Roommates". Space Daily. - "Researchers ID Microbe Responsible for Methane from Landfills" (Press release). North Carolina State University - Raleigh. April 6, 2011. - "Science Notebook". The Washington Post. May 27, 2002. p. A09. - "New Amino Acid Discovered". Applied Genetics 22 (11). June 2002. - Ian Kerman. "Methanosarcina barkeri". Retrieved Apr 9, 2014. - G. D. Sprott; C. J. Dicaire; G. B. Patel. "The ether lipids of Methanosarcina mazei and other Methanosarcina species, compared by fast atom bombardment mass spectrometry". 
Retrieved Apr 9, 2014. - "Oldest Hemoglobin ancestors Offer Clues to Earliest Oxygen-based Life" (Press release). The National Science Foundation. April 20, 2004. - "Scientists find primitive hemoglobins". UPI. April 20, 2004. - Sara Reardon (14 December 2012). "Permian mass extinction triggered by humble microbe". New Scientist (2895). - Rothman, D. H.; Fournier, G. P.; French, K. L.; Alm, E. J.; Boyle, E. A.; Cao, C.; Summons, R. E. (2014-03-31). "Methanogenic burst in the end-Permian carbon cycle". Proceedings of the National Academy of Sciences 111 (15): 5462–7. doi:10.1073/pnas.1318106111. PMC 3992638. PMID 24706773. - Steve Connor (March 31, 2014). "Volcanoes? Meteors? No, the worst mass extinction in history - The Great Dying - could have been caused by microbes having sex". The Independent. Retrieved March 31, 2014. - Laura Dattaro (March 31, 2014). "Biggest Extinction in Earth's History Caused By Microbes, Study Shows". The Weather Channel. Retrieved March 31, 2014. - "Shimizu develops cheap, easy waste water treatment technique". The Japan Economic Journal. June 18, 1985. Chemicals & Textiles section, page 17. - "Anaerobic Bioreactors Becoming Economical". Water Technology 2 (4). July 1994. - "Researchers Engineer New Methane-production Pathway in Microorganism" (Press release). University of Arkansas. December 8, 2010.
NYU neuroscientist Elizabeth A. Phelps and colleagues at Yale University find that the left amygdala responds to cognitive representations of fear
Although people learn about potentially dangerous events through hard experience (a given dog is dangerous because it once bit you), we often learn about such events through communication (a given dog is dangerous because you heard it bit somebody else). In understanding the neural systems of fear learning, most researchers have focused on the former type of learning, which is called fear conditioning. However, little is known about the neural system underlying fear learning through communication, in the absence of aversive experience. Using fear conditioning, the neural systems of fear learning and expression have been elegantly mapped in both human and animal research. This research has indicated that a brain structure called the amygdala is critical to the expression of a conditioned fear response. But is the amygdala involved when you encounter a fear-invoking event that you have merely heard about? NYU neuroscientist Elizabeth A. Phelps addressed this question by examining activity in the human amygdala with a task called "instructed fear." Using fMRI, Phelps found that the amygdala is indeed activated in response to verbally communicated "threat" stimuli. Furthermore, in this and follow-up studies, Phelps and her colleagues found that this amygdala activity is related to the physical indications of a fear response. During "instructed fear," subjects do not actually receive an aversive stimulus; rather, they are told an aversive event might occur in conjunction with a neutral stimulus. In this case, subjects were presented with a series of three images on a computer screen: a yellow square, a blue square, and the word "rest."
Subjects were told they might receive a shock, delivered by an electrode on their wrist, when one color was presented (the threat condition) and that they would not receive a shock when the other color was presented (the safe condition). Although all subjects indicated that they believed they would receive a shock, none of the subjects actually received a shock during the study. Taken as a whole, Phelps' findings extend the amygdala's involvement in the expression of fear to situations where the aversive consequences are imagined and anticipated but never experienced. In other words, fears that exist only in our minds activate some of the same neural systems as fears that are learned through experience. This research was conducted in collaboration with Michael Davis, Christian Grillon, John C. Gore, Christopher Gatenby and Kevin J. O'Connor. These studies were conducted at the Yale School of Medicine's Department of Radiology. Elizabeth A. Phelps is an associate professor of psychology and neural science at NYU. She received her Ph.D. from Princeton University.
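The trial structure described above can be sketched in a few lines. Everything in this sketch (cue labels, trial counts, the function name) is an illustrative assumption, not a parameter taken from the published paradigm; the one property it does preserve is that no shock is ever delivered.

```python
import random

# Illustrative sketch of an "instructed fear" session: threat and safe
# cues are shown in random order, but no aversive stimulus is delivered.
THREAT = "yellow square (threat)"
SAFE = "blue square (safe)"
REST = "rest"

def build_session(n_trials=12, seed=42):
    """Return a shuffled, balanced list of cue presentations."""
    rng = random.Random(seed)
    trials = [THREAT, SAFE, REST] * (n_trials // 3)
    rng.shuffle(trials)
    return trials

for cue in build_session(6):
    # In the real study, fMRI and physiological responses are recorded
    # here; the shock electrode is attached but never activated.
    print(cue)
```

The point of keeping the threat and safe cues balanced is that any differential amygdala response can then be attributed to the verbal instruction alone, since the two cues are otherwise matched.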
Rabbit Fact file
- Rabbits belong to the lagomorph family: Lagomorphs are herbivores (they feed exclusively on plants) and include rabbits and hares.
- Rabbits are prey animals: They are most active at dawn and dusk, and they have eyes located on the sides of their heads, which gives them a very broad field of vision. They have large, independently moving ears that enable them to hear really well; their noses are well developed to give them an excellent sense of smell; and they have muscular hind legs. All of these physical adaptations help them avoid becoming another animal's dinner!
- Rabbits are subtle communicators: Their primary mode of communication is via scent. They deposit faeces, squirt urine and chin mark to send messages to other rabbits. They can also use different body postures and vocalisations.
- Rabbits are highly social: Wild rabbits live in large groups within a warren, which are divided up into smaller family units. Rabbits are territorial animals and form complicated social structures.
- Rabbits have an unusual digestive system: They feed on large quantities of low-quality food and extract as much goodness as possible from it. Food passes through the gut and special droppings (called caecotrophs) are produced. Rabbits eat these, allowing the food to be re-ingested.
- Rabbits have continuously growing teeth: A rabbit's top front teeth are called incisors and grow at a rate of 3 mm a week! Grass and hay are abrasive, and eating lots of them helps to wear the teeth down.
- Rabbits are highly productive breeders: A single female rabbit (a doe) can produce approximately 30 young in a single breeding season in the wild, and can become pregnant again within hours of giving birth.
- Rabbits are intelligent: Pet rabbits can be taught to respond to commands using positive, reward-based training and can be house trained.
WHAT RABBITS NEED
- We need: To be able to exercise, graze on growing grass, forage, hide and dig every day, and to play with our friendly, neutered rabbit companions every day. Lots of safe toys to play with and chew, and to be able to play with people who will be quiet and gentle with us and who won't punish or shout at us.
HOME SWEET HOME
- We need: A large shelter where we can rest together, with a large secure exercise area with places for us to hide when we feel afraid.
- We need: To be checked for signs of pain, illness or changes in our behaviour. To be vet health checked and vaccinated. To be neutered, as this stops us having unwanted babies and reduces the risk of fighting.
FOOD AND DRINK
- We need: Fresh, clean drinking water available 24/7. Lots of good quality hay and grass 24/7.
FRIENDS FOR LIFE
- We need: Each other! A rabbit should be kept with at least one other rabbit. A good companion is a neutered male or female that has been brought up with them. We also need people to spend time with us every day to get us used to being handled.
This list is not exhaustive; there is a longer list on the Rabbit Welfare Fund website.
Vegetables: asparagus, baby sweetcorn, beetroot, broccoli, brussels sprouts, cabbage, carrots, cauliflower, celeriac, celery, chicory, courgette, cucumber, curly kale, fennel, green beans, parsnip, peas, peppers, pumpkin, swede, turnip, squash, radish, rocket, lettuce, spinach, spring greens and watercress.
Herbs: basil, coriander, dill, mint, parsley, oregano and rosemary.
Fruit: apple, apricot, banana, blackberries, blueberries, cherries, grapes, kiwi fruit, mango, melon, nectarines, oranges, papaya, peaches, pears, pineapple, plums, raspberries, strawberries and tomatoes.
Flystrike is also called 'myiasis'. It happens when flies lay their eggs on your rabbit, and those eggs hatch out into maggots. Flystrike is a painful, sometimes fatal condition. If an animal becomes infested, seek immediate veterinary advice.
Rabbits at risk of Flystrike are those that are unable to clean themselves properly, are ill, produce abnormally smelly urine or have diarrhoea, are fed inappropriate diets, have an internal parasitic infection or have an open wound, although clean, well-kept pets can also get Flystrike.
Preventing Flystrike
- Check for signs of illness/abnormal behaviour daily
- In warm weather, check your rabbit all over their body, especially around their rear end/tail area, at least twice a day
- If your rabbit's back end is dirty, clean it immediately. Ensure the area is fully dried. It may be necessary to clip the fur
- Clean toilet areas daily
- Clean housing and change bedding at least once a week
- Ensure your rabbit is not overweight and is fed the correct diet
- Consider insect-proofing the housing of pets living outside
- Neuter female rabbits; entire females may be more prone to Flystrike
What to do if you find your rabbit has Flystrike
Flystrike can develop in hours. Toxic shock and death can result very quickly. Flystrike is an emergency; do not delay. You need to get your rabbit to the vets immediately. Rabbits can make a full recovery if the condition is found and treated quickly.
For further information: The RWF Guide to feeding pet rabbits.
Neutrophils are the most abundant type of white blood cell in the body and are responsible for helping your body fight infection. When a germ is initially detected by the body, neutrophils are the defence system which go out and attack the germ before any of your other white blood cells. When neutrophils are low you can be more vulnerable to illness and infection.
What might a low result mean?
A low level of neutrophils in the blood is called neutropenia, and is generally a temporary finding after an infection, when your neutrophils can be depleted. Low neutrophils can also be caused by taking certain medications which may directly or indirectly lower neutrophil levels. Examples of these medications include chemotherapies, immunosuppressants and some antibiotics. Low levels of neutrophils can also be found with conditions which suppress the bone marrow, such as aplastic anaemia, and in conditions such as AIDS, in which the HIV virus attacks the immune system.
What might a high result mean?
An elevated level of neutrophils in the blood is called neutrophilia, and generally indicates that you have an infection. When under attack from bacteria or a virus, your immune system produces more neutrophils to send out into the blood to destroy the invader. Elevated levels of neutrophils can also be found in people who exercise intensively, who have high stress levels or who take steroid medications such as prednisolone or cortisone.
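As a minimal illustration of how a result might be bucketed into the low/normal/high categories above, the sketch below compares an absolute neutrophil count against a reference range. The 2.0–7.5 ×10⁹ cells/L range and the function name are assumptions chosen for illustration; laboratories publish their own reference ranges, and interpretation always belongs with a clinician.

```python
def classify_neutrophils(count, low=2.0, high=7.5):
    """Bucket an absolute neutrophil count (x10^9 cells/L).

    The 2.0-7.5 x10^9/L default range is illustrative only; real
    labs define their own reference intervals.
    """
    if count < low:
        return "neutropenia"   # low: post-infection depletion, drugs, marrow suppression
    if count > high:
        return "neutrophilia"  # high: infection, intense exercise, stress, steroids
    return "normal"

print(classify_neutrophils(1.2))  # neutropenia
print(classify_neutrophils(4.0))  # normal
print(classify_neutrophils(9.8))  # neutrophilia
```

Note that, as the text explains, the same label can have many causes, so a flag like this is a starting point for investigation rather than a diagnosis.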
Insulating material could replicate the effects of Earth’s atmosphere on the red planet, warming it by 50°C Spreading a thin layer of silica aerogel over the surface of Mars could increase the temperature enough for crops to be grown, scientists have shown using models of the Martian climate. They say this intervention could transform the planet’s surface within decades, rather than the centuries thought necessary for large-scale planetary modification. ‘Silica aerogel is remarkably translucent, yet its thermal conductivity is one of the lowest of all known materials,’ says Robin Wordsworth, a planetary climate scientist at Harvard University. These gels consist of nanoscale networks of interconnecting silica clusters and are more than 97% air. A coating just two to three centimetres thick could replicate the effects of Earth’s atmosphere on the surface of the red planet, allowing enough visible light for photosynthesis to pass through, while increasing the temperature of the underlying surface by 50°C. The models, which focussed on an ice-rich, mid-latitude location, suggested this would be enough to keep water liquid to a depth of several metres throughout the Martian year. The silica aerogel would also shield terrestrial lifeforms from ultraviolet wavelengths. ‘Our initial inspiration for this was a natural process on the Martian surface, the solid-state greenhouse effect,’ says Wordsworth. Light travels through carbon dioxide ice deposits, creating warmth beneath the surface which is trapped by the insulating snowpack. Heating leads to the explosive release of CO2, which generates dark spots observable from planetary orbiters. However, over the majority of the surface the Martian atmosphere is so thin that it cannot muster a greenhouse effect sufficient to boost temperatures above the melting point of water. Previous ‘terraforming Mars’ theories proposed releasing carbon dioxide and water into the atmosphere from Martian reservoirs such as polar ice. 
However, research has revealed inadequate amounts of water and carbon dioxide on Mars to increase the temperature above the melting point of water. Wordsworth envisions a mat of silica aerogel covering the surface in order to raise the temperature enough to grow basic forms of life, such as algae. Large pressurised domes could eventually be used for cultivating crops or sustaining habitable environments for humans. ‘The larger the area, the greater the volume, the more resistant it would be to diurnal and seasonal temperature changes,’ Wordsworth says. Silica aerogels are themselves fairly fragile, so would need to be reinforced or combined with other materials. ‘We don’t know how easily silica aerogel manufacturing techniques employed on Earth can be adapted to Martian conditions. Therefore, there’s a lot of work to be done to test this on Mars,’ comments Germán Martínez, planetary scientist and Mars habitability expert at the Lunar and Planetary Institute in Houston, Texas. However, he adds that ‘a big advantage is that this approach can be further tested in extreme environments on Earth today’. Wordsworth says the team now plans to run field tests in regions such as the Atacama Desert in Chile and Antarctic dry valleys, as these are ‘the closest approximations that we have for the Martian surface’. He also suggests that synthetic biology might play a part in improving the habitability of Mars, taking advantage of organisms such as diatoms which utilise silica as a building material on Earth. R Wordsworth, L Kerber and C Cockell, Nature Astronomy, 2019, DOI: 10.1038/s41550-019-0813-0
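The solid-state greenhouse mechanism described above can be checked with a back-of-the-envelope calculation: in steady state, sunlight absorbed beneath the aerogel must conduct back out through it, so Fourier's law gives the surface-to-top temperature difference. The flux and conductivity values below are illustrative assumptions, not figures quoted from the paper, and because the calculation ignores radiative and lateral losses it overestimates relative to the study's full climate modeling (~50 °C).

```python
def estimate_warming(q_abs, thickness, conductivity):
    """Conduction-limited warming from Fourier's law.

    q = k * dT / d  =>  dT = q * d / k
    Treats the aerogel layer as a pure conductor and ignores
    radiative and lateral losses, so this is an upper bound.
    """
    return q_abs * thickness / conductivity

# Illustrative assumptions:
q = 100.0   # W/m^2, mean solar flux absorbed beneath the layer
d = 0.03    # m, layer thickness (the study considers 2-3 cm)
k = 0.02    # W/(m K), a typical silica-aerogel thermal conductivity

print(f"Conduction-limited warming: {estimate_warming(q, d, k):.0f} K")  # 150 K
```

Even this crude bound makes the key point: because silica aerogel's conductivity is so low, a few centimetres suffice to trap tens of degrees of warming beneath a translucent layer.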
The effect of sleep on individual neurons in live zebrafish is reported in a paper in Nature Communications this week. The study finds that sleep increases the movement of chromosomes (chromosome dynamics), which alters their structure to enable reduction of DNA damage. The results suggest that chromosome dynamics could be a potential marker to define individual sleeping neurons. Prolonged sleep deprivation can be lethal, and sleep disturbances are associated with various deficiencies in brain performance. Although the critical importance of sleep is known, it is unclear what effects it has at a cellular level. This is because sleep has previously been defined by behavioural criteria, as it has not been possible to study sleep-dependent cellular processes under the microscope. Lior Appelbaum and colleagues report a new method for time-lapse imaging of chromosome dynamics in individual neurons of live zebrafish. Using this approach, the authors demonstrate that sleep increases chromosome dynamics by two-fold, specifically in neurons, while neuronal activity has the opposite effect. They show that sleep-dependent increases in chromosome dynamics are essential for the repair of DNA double-strand breaks. Although this work provides causal evidence that sleep has a key role in enabling cellular maintenance in neurons, it also illustrates that the cost of wakefulness and cellular activity is the accumulation of DNA damage. Further studies on additional vertebrate and invertebrate animals are required to establish if chromosome dynamics could be an evolutionarily conserved marker of cellular sleep.
25 Aug 2017 - Scientists compiled all known population genetics studies of deep sea ecosystems, finding a paucity of research. The researchers warn that human impacts like pollution, fishing, and mining are encroaching further into deep sea areas faster than scientists are studying them. They say more research will enable stakeholders to protect vulnerable ecosystems. We know very little about the deepest parts of the ocean – and are disturbing them faster than we're learning about them, according to a study published this week in Molecular Ecology. To see just how big this knowledge gap is, researchers at Oxford University conducted a survey of all known population genetics studies of deep sea invertebrates. Population genetics is the study of the differences between and within populations, and helps scientists understand how groups of plants and animals evolved and how they may respond to environmental changes. The researchers discovered that there have been 77 papers published on this topic in the last 33 years. Of these, just nine looked at areas deeper than 3,500 meters – which comprise about half the planet's surface. These studies shine a valuable, if dim, light on an otherwise unknown expanse. They indicate the animals that live in the deep may be about as genetically diverse as shallow-water species, and that some populations are distinct and isolated from each other even in small areas. But that's pretty much it. "Basic ecological information (e.g., species ranges, population subdivision, population genetic diversity, dispersal capability and demographic parameters) is lacking for all but a few species," the researchers write in their study. They warn that despite this lack of knowledge and exploration of the deep, human activities are leading to ever-greater impacts. For instance, microplastics can now be found in the deepest, most remote reaches of the ocean.
Commercial bottom-trawling fishing is tearing through ancient, deep sea ecosystems, turning them into “faunal deserts.” And about 1.8 million square kilometers – an area about the size of Libya – has been allotted for potential exploration and extraction of metals. “Today humans have an unprecedented ability to [affect] the lives of creatures living in one of the most remote environments on earth — the deep sea,” said Christopher Roterman, co-author and postdoctoral researcher in Oxford’s Department of Zoology, in a statement. “At a time where the exploitation of deep sea resources is increasing, scientists are still trying to understand basic aspects of the biology and ecology of deep sea communities.” Roterman calls for more research of the deep sea, saying it will help us figure out how its ecosystems may respond to disturbance and how best to protect them. “Population genetics is an important tool that helps us to understand how deep sea communities function, and in turn how resilient they will be in the future to the increasing threat of human impacts.” Roterman said. “These insights can help governments and other stakeholders to figure out ways to control and sustainably manage human activities, to ensure a healthy deep sea ecosystem.” Roterman said fishing is currently the activity having the biggest impact on deep sea communities. But he warns that metals mining may soon become the bigger threat. “What may start off in relative terms, as a pin-prick on the seafloor, may rapidly expand before the long-term detrimental effects are fully understood,” he said. “What we don’t know at present is how human activities and climate change will affect these populations in the future, but history tells us that we shouldn’t be complacent.” Getting good data from 5,000 meters down can be a tricky undertaking. But the researchers say advances in technology may help population geneticists learn about the denizens of the deep more cheaply, easily, and quickly. 
“Next-generation sequencing allows us to scan larger and larger portions of an animal’s genome at a lower cost,” said Michelle Taylor, co-author and senior postdoctoral researcher in Oxford’s Department of Zoology. “This makes deep sea population genetic studies less costly, and for many animals, the sheer volume of data these new technologies create means they can now be studied for the first time.” The researchers write that in addition to unveiling the secrets of deep sea ecosystems, genetics studies will help stakeholders manage and protect marine diversity and resources. But, Taylor urges, haste is of the essence. “We cannot bury our heads in the sand and think that people are not going to try and exploit resources in the deep sea, so science needs to catch up.”
The USA is a liberal democratic and federal state with a strong and independent judiciary. A well organised and independent judicial system is a necessity for every democratic system, because without it the rights of the people can never be protected from violation by an arbitrary exercise of governmental power. In a true federal spirit, two separate systems of courts are established in the USA: a) the federal judicial system and b) the state judicial systems. The US judiciary is an independent organ of government and enjoys the power to interpret and defend the constitution and the fundamental rights and freedoms of the people of the United States. The concept of judicial review can legitimately be described as the American contribution to the theory and practice of political science. The federal courts stand divided into two parts: the constitutional courts and the legislative courts. The circuit courts of appeals and district courts were created by Congress. District courts are the lowest-level federal courts, with original jurisdiction in their respective districts. The 50 states have been divided into 89 judicial districts, and each has a district court. The Supreme Court is at the apex of the US judicial pyramid. It is the highest court of the land and the only court specifically mentioned in the constitution. The judges can be removed only by impeachment, for established misbehaviour; the power of impeachment is in the hands of Congress. The Supreme Court is the final interpreter of the constitution, and its interpretations of the constitutional provisions are 'considered inherently superior and final.'
What is Sleep Apnea?
Obstructive Sleep Apnea (OSA) is not just snoring or feeling tired during the day – it's a serious sleep disorder that occurs when you stop breathing or your breathing is interrupted during sleep, decreasing your oxygen levels and alerting the brain to wake you up to breathe. Sleep apnea is a common sleep disorder, with 1 in 4 Canadians at risk for OSA. Undiagnosed or untreated sleep apnea results in excessive daytime sleepiness and other major health risks that can have a direct negative impact on an individual's day-to-day life.
The Correlation between Sleep and Depression
There is a correlation between sleep and mood, and between lack of sleep and depression. Research suggests that untreated sleep apnea can cause depression. Sleep apnea is associated with poor-quality sleep, insomnia, poor memory and irritability, and OSA may cause depression through sleep loss, sleep disruption and the cognitive changes it brings. Some studies report that depression is highly prevalent in individuals with OSA, with up to 63% affected by depression.1 Poor sleep, fatigue, and cognitive impairment are also common in depression, and weight gain and sleep disruption due to depression could cause or worsen sleep apnea. Some people experience an onset of symptoms from both conditions (OSA and depression) at the same time, while others experience sleep deprivation before depression.
Diagnosing OSA or Depression
It can be challenging to determine whether certain symptoms are due to sleep apnea, depression or both. If you have some of the symptoms outlined below, you should first determine if you have sleep apnea. Sleep apnea may be causing or contributing to your depression, and treating sleep apnea can improve symptoms of depression.
Symptoms of OSA
Someone with sleep apnea may have the following symptoms:
- Loud irregular snoring
- Gasping or choking during sleep
- Frequent urination at night
- Constant tiredness
- Poor concentration
- Lack of energy
- Leg cramps
- Weight gain
- Sexual dysfunction
Symptoms of Depression
- Irritability, frustration, and anger over small issues
- Feelings of sadness, emptiness, or hopelessness
- Changes in appetite
- Sleep disturbances like insomnia
- Fatigue and tiredness
- Trouble thinking or concentrating
Testing for Sleep Apnea
In most provinces, determining if you have sleep apnea can be done with a simple home sleep apnea test. Your family physician can provide a referral for a sleep test from a sleep clinic, like RHS. RHS sleep clinics provide accredited sleep tests free of charge that also include an interpretation from an independent sleep specialist.
How to Treat OSA & Improve Depressive Symptoms
If your sleep test results indicate that you have sleep apnea, then treatment would be recommended. Continuous positive airway pressure (CPAP) therapy is a common treatment for obstructive sleep apnea. A CPAP machine uses a hose and mask or nosepiece to deliver constant and steady air pressure to keep airways open during sleep. Treatment with CPAP is effective for all degrees of sleep apnea. Depending on the severity of the OSA, other treatments such as Oral Appliance Therapy (OAT) can also be applied. Treating sleep apnea will reduce many health risks associated with untreated sleep apnea and can also result in an improvement of depressive symptoms. If the sleep test results indicate that you don't have sleep apnea, your physician can refer you to a mental health professional to talk about your depression.
1 Harris, Glozier et al. OSA and Depression. Sleep Med Rev 13:437–444.
In English, there are many different ways of making sentences with if. It is important that:
1) You understand the difference between sentences that express real possibilities and those that express unreal situations.
2) You learn which tenses follow each conditional.
a) Zero Conditional
We use the zero conditional to express a situation that is always true.
Present simple + present simple
- If I read too much, I get a headache.
b) First Conditional
We use the first conditional to express real possibilities.
Present simple + future
- If I go to the concert, I'll see Ricky Martin.
c) Second Conditional
We use the second conditional to express an unreal situation. The situation or condition is improbable, impossible, imaginary or contrary to known facts.
Past simple + would (conditional)
- If I won the lottery, I would buy a house.
d) Third Conditional
We use the third conditional to imagine the consequence of events that happened or began to happen in the past.
Past perfect + would have + past participle
- If I had known, I would have gone to visit you.
e) Mixed Conditional (2nd & 3rd Conditional)
The mixed conditional is a mixture of the 2nd and 3rd conditionals.
- If the weather had been better, we would go back next year.
- If I'd been born in 1980, I'd be 23 years old now. (Remember: in "I'd been born", I'd = I had; in "I'd be 23 years old", I'd = I would.)
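The tense patterns above can be summarised as a simple lookup table. This is only a study aid for the four basic conditionals; the table structure and function name are illustrative, and the mixed conditional is left out because it combines rows.

```python
# Lookup table of the four basic conditionals:
# type -> (if-clause tense, result-clause tense, example from the lesson)
CONDITIONALS = {
    "zero":   ("present simple", "present simple",
               "If I read too much, I get a headache."),
    "first":  ("present simple", "will + infinitive",
               "If I go to the concert, I'll see Ricky Martin."),
    "second": ("past simple", "would + infinitive",
               "If I won the lottery, I would buy a house."),
    "third":  ("past perfect", "would have + past participle",
               "If I had known, I would have gone to visit you."),
}

def describe(kind):
    """Return a one-line summary of the given conditional type."""
    if_tense, result_tense, example = CONDITIONALS[kind]
    return (f"{kind}: if-clause uses {if_tense}; "
            f"result uses {result_tense}. e.g. {example}")

print(describe("second"))
```

A learner (or a simple quiz script) can then look up any conditional by name and see both the pattern and a model sentence together.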
This project was compiled using material from lessons by artists Laura Ni Fhlaibhín and Clare Breen.
Project aim: to learn about materials and mark making. Children will be challenged to create their own original drawing tool, using a range of natural and discarded materials. Students will be reminded of the creative possibilities in making their own tools, rather than buying drawing tools and contributing to the consumption of plastics etc. Children will then experiment with drawing with their tool, using inks or diluted paint. The fun and inventive potential in developing a sustainable outlook will be developed in this project.
Duration: 2 lessons, 60 mins each
Suitability: suitable for all class levels
Materials:
- Masking tape and/or duct tape in a range of bright fun colours (one roll for every 4 students)
- Tree branches and twigs of many sizes (at least one for
- A range of natural materials: leaves, feathers, sponges,
- Discarded materials: old tea towels, milk carton lids, old toothbrushes, old cutlery, plastic food packaging, polystyrene etc.
- Ink or classroom poster paints mixed with water
- A selection of paper: old wallpaper, wrapping paper, cereal boxes, non-textured cardboard etc.
- Old, clean yoghurt pots as containers.
- Literacy: children can write the procedural steps involved in the creation of their drawing tool.
- History (Stone Age): students can gain inspiration from Stone Age tools and making processes.
- Green Schools: collection of reusable materials and plastic waste.
Students and the wider school community can be asked to gather a range of natural and discarded materials in the weeks leading up to this project. A large cardboard box is useful to collect the materials.
- Lay out the materials on a spare table or two, or on the floor. Explain the expectations for the lesson; students are welcome to explore the materials but must keep them tidy for others to enjoy.
- Large plastic water bottles can be used to store the tools in between lessons: carefully cut the bottle in two, mindful of any sharp plastic.
- You can make up your own story line for the warm-up exercise in lesson 1. Just make sure to include lots of changes in pace. (Encourage them to create texture with their pencil/pastel to mimic the energy of the story.)
- For lesson 2, old wallpaper is really useful for large-scale drawings and can often be found in charity shops. Roll out a large section (2–3 metres) of the plain reverse side of the wallpaper and generously secure it with lots of masking tape. I ask students to help me in preparing the rolls on the ground. Masking tape can just be torn off by hand rather than using scissors.
- Black drawing ink can be used, but regular classroom poster paint, diluted with a little water, is perfect too.
- This activity is wonderful for outdoors, weather permitting, with the wallpaper rolls taped to the ground.
- An unusual and eye-catching classroom display can be created by hanging the drawing tools with strings.
Foundations: Drawing Tools: Lesson 1: Making an experimental
Foundations: Drawing Tools: Lesson 2: Drawing with their tool
Biography Book Report
Instructions and rubric for students to complete a book report on a biography of their choice. (Grade 5) Students will write a book report after reading a biography of their choice.

Plan: BIOGRAPHY BOOK REPORT
If the biography you read did not contain some of this information, please look it up online or in an encyclopedia.
- Cover page: Include a drawing of your character, the title of the biography, the author of the biography and your name.
- Page one: Tell the date and place where your character was born and raised. Don't include too many family details or details about habits and hobbies that don't have anything to do with their later work.
- Describe the early life of your character. Tell about what kind of person he or she was. What was it about your character that helped him or her to succeed? Did your character know what he or she wanted to become in the future? How did your character prepare for his or her future?
- Page two: Tell about the work your character did. Tell why his or her contribution was important, why the work was important. Did he or she invent something or teach others? How did he or she change the world? What lasting effects did your character have on the lives of others?
- Page three: What should we all know about your character? How do we benefit today from him or her? What do you think was the most impressive thing about your character?
- Page four: Tell how your character has inspired you. In what ways would you like to be like him or her?

You will be graded according to the following criteria:
Followed the instructions (format and content): This means that everything you write will be on the correct page and that you have included all the necessary information.
Clarity and continuity of thought: This means that your ideas are presented in logical order and are easy to follow.

This means that you have separate paragraphs for each new idea, you have used a topic sentence for each new paragraph, and sentences are complete and include correct punctuation. New paragraphs should be indented. You should use one

Check your spelling before handing in the report. Any corrections on the final draft should be done neatly.
Hand, Foot and Mouth Disease
Hand, foot, and mouth disease (HFMD) is a common illness of infants and children caused by a strain of Coxsackie virus. It causes a blister-like rash that, as the name implies, involves the hands, feet and mouth. The illness is typically mild and complications are rare.
Who gets HFMD? This disease usually affects children under 10 years old although it can occasionally occur in adults.
What are the symptoms? Fever, a poor appetite, runny nose and sore throat can appear three to five days after exposure. A blister-like rash on the hands, feet and in the mouth usually develops one to two days after the initial symptoms. Sufferers may also have a headache and abdominal pain.
How is it spread? HFMD is moderately contagious. Infection is spread by direct contact with nose and throat discharges or with the faeces of infected persons. A person is most contagious during the first week of the illness.
What treatment is there? There is no specific treatment available for this infection. Symptomatic treatment, such as paracetamol (e.g. Calpol), can be given in the dose prescribed for the child’s age, to provide relief from fever, aches, or pain from the mouth ulcers. Antibiotics are not effective against this disease. Encourage plenty of fluids and, if the child is old enough to do so, get them to rinse their mouth with warm water after eating.
How can it be prevented? Preventive measures include frequent hand washing, especially after changing nappies. The person with the illness should be encouraged to wash their hands well after using the toilet and before handling or eating food. Make sure the toilet is kept clean. Use diluted bleach or disinfectant to clean the toilet, making sure you clean the handle as well as the seat.
At the center of our solar system is an enormous nuclear generator. The Earth revolves around this massive body at an average distance of 93 million miles (149.6 million kilometers). It's a star we call the sun. The sun provides us with the energy necessary for life. But could scientists create a miniaturized version here on Earth? It's not just possible -- it's already been done. If you think of a star as a nuclear fusion machine, mankind has duplicated the nature of stars on Earth. But this revelation has qualifiers. The examples of fusion here on Earth are on a small scale and last for just a few seconds at most. To understand how scientists can make a star, it's necessary to learn what stars are made of and how fusion works. The sun is about 75 percent hydrogen and 24 percent helium. Heavier elements make up the final percent of the sun's mass. The core of the sun is intensely hot -- temperatures are greater than 15 million degrees Kelvin (nearly 27 million degrees Fahrenheit or just under 15 million degrees Celsius). At these temperatures, the hydrogen atoms absorb so much energy that they fuse together. This isn't a trivial matter. The nucleus of a hydrogen atom is a single proton. To fuse two protons together requires enough energy to overcome electromagnetic force. That's because protons are positively charged. If you're familiar with magnets, you know that similar charges repel each other. But if you have enough energy to overcome this force, you can fuse the two nuclei into one. What you're left with after this initial fusion is deuterium, an isotope of hydrogen. It's an atom with one proton and one neutron. Fusing deuterium with hydrogen creates helium-3. Fusing two helium-3 atoms together creates helium-4 and two hydrogen atoms. If you break all that down, it essentially means that four hydrogen atoms fuse to create a single helium-4 atom. Here's where energy comes into play. A helium-4 atom has less mass than four hydrogen atoms collectively. 
So where does that extra mass go? It's converted into energy. And as Einstein's famous equation tells us, energy is equal to the mass of an object times the speed of light squared. That means the mass of the tiniest particle is equivalent to an enormous amount of energy. So how can scientists create a star?
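The hydrogen-to-helium bookkeeping described above can be made concrete with a quick calculation. The atomic masses and conversion factor below are standard textbook values, not figures taken from the article:

```python
# Energy released when four hydrogen atoms fuse into one helium-4 atom.
# Atomic masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
# These are standard reference values, not numbers from the article itself.

M_HYDROGEN = 1.007825   # u, mass of a hydrogen-1 atom
M_HELIUM4  = 4.002602   # u, mass of a helium-4 atom
U_TO_MEV   = 931.494    # MeV of energy per u of mass converted

mass_defect = 4 * M_HYDROGEN - M_HELIUM4   # the mass that "disappears"
energy_mev  = mass_defect * U_TO_MEV       # E = mc^2, in convenient units

print(f"mass defect: {mass_defect:.6f} u")        # ~0.0287 u, about 0.7% of the input
print(f"energy released: {energy_mev:.1f} MeV")   # ~26.7 MeV per helium-4 atom
```

Less than one percent of the hydrogen's mass becomes energy, yet scaled up to the roughly 600 million tonnes of hydrogen the sun fuses every second, that tiny fraction powers the entire solar system.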
Reprogramming of Stem Cells
For a cell to be considered a stem cell, it must possess two unique qualities: the ability to divide and renew itself over long periods, and the ability to transform into specific cells in the body. The only type of stem cell that can do both fully is the kind extracted from an embryo (specifically from a blastocyst). Because it has this unique property, this type of cell is also known as the pluripotent stem cell.
The beauty of stem cells is that they can be used for research and for treating various medical conditions. In fact, they have already been used in treating type 1 diabetes, helping to restore the pancreas's ability to produce the hormone insulin, the one responsible for the regulation of our blood sugar levels.
Back in the day, embryonic stem cells were used extensively because they were the only stem cells, at least at that time, able to differentiate and morph into other specialized cells in the body. Adult stem cells, even though they have the ability to transform into other cells, can only morph into cell types from the tissue where they originally came from.
The Source of Pluripotent Stem Cells
Scientists get pluripotent stem cells from embryos that were created in "in vitro fertilization" clinics. Even though the sperm and egg cells were freely donated, using these embryos is still controversial, since the resulting embryos are no longer fit for transplantation into surrogate mothers; technically, they are destroyed after extraction of the cells.
However, with the advancements in medical technology, scientists have come up with a way to "create" their own pluripotent cells without having to use an embryo. The idea was to take adult cells and "reprogram" them to become pluripotent stem cells with the use of four different proteins that are considered essential in the formation and early development of an embryo.
The result was the creation of what are now known as induced pluripotent stem cells (iPSCs). These cells have the same abilities present in embryonic stem cells: they can self-renew and divide into multiple stem cells, or they can differentiate into the different specialized cell types in the body. Because of these characteristics, iPSCs have been used extensively in both research and treatment. In fact, there have already been studies documenting striking effects, such as stem cells regenerating the lost teeth of mice and the near-complete restoration of eyesight in certain rats. That being said, although iPSCs hold the same abilities present in the original embryonic stem cells, they were not considered to be exactly the same. There are various methods of creating iPSCs, but scientists have now concluded that, despite the major differences in how they are created, their uses are practically the same. They are therefore now considered similar and will be used for further testing and research to develop suitable treatment options in the future.
Prediabetes occurs when the level of sugar (glucose) in your blood is too high, but not high enough to be called diabetes. Losing extra weight and getting regular exercise can often stop prediabetes from becoming type 2 diabetes. Your body gets energy from the glucose in your blood. A hormone called insulin helps the cells in your body use glucose. If you have prediabetes, this process does not work as well. Glucose builds up in your bloodstream. If the levels get high enough, it means you have developed type 2 diabetes. If you are at risk for diabetes, your health care provider will test your blood sugar using one or more of the following tests. Any of the following test results indicate prediabetes: - Fasting blood glucose of 100 to 125 mg/dL (called impaired fasting glucose) - Blood glucose of 140 to 199 mg/dL 2 hours after taking 75 grams of glucose (called impaired glucose tolerance) - A1C level of 5.7% to 6.4% Having diabetes increases the risk for certain health problems. This is because high glucose levels in the blood can damage the blood vessels and nerves. This can lead to heart disease and stroke. If you have prediabetes, damage may already be occurring in your blood vessels. Having prediabetes is a wake-up call to take action to improve your health. How to Help Prevent Diabetes Your provider will talk with you about your condition and your risks from prediabetes. To help you prevent diabetes, your provider will likely suggest certain lifestyle changes: - Eat healthy foods. This includes whole grains, lean proteins, low-fat dairy, and plenty of fruits and vegetables. Watch portion sizes and avoid sweets and fried foods. - Lose weight. Just a small weight loss can make a big difference in your health. For example, your provider may suggest that you lose about 5% to 7% of your body weight. So, if you weigh 200 pounds (90 kilograms), to lose 7% your goal would be to lose about 14 pounds (6.3 kilograms). 
Your provider may suggest a diet, or you can join a program to help you lose weight. - Get more exercise. Aim to get at least 30 to 60 minutes of moderate exercise at least 5 days a week. This can include brisk walking, riding your bike, or swimming. You can also break up exercise into smaller sessions throughout the day. Take the stairs instead of the elevator. Even small amounts of activity count toward your weekly goal. - Take medicines as directed. Your provider may prescribe metformin to reduce the chance that your prediabetes will progress to diabetes. Depending on your other risk factors for heart disease, your provider may also prescribe medicines to lower your blood cholesterol level or blood pressure. You can't tell that you have prediabetes because it has no symptoms. The only way to know is through a blood test. Your provider will test your blood sugar if you are at risk for diabetes. The risk factors for prediabetes are the same as those for type 2 diabetes. You should get tested for prediabetes if you are age 45 or older. If you are younger than 45, you should get tested if you are overweight or obese and have one or more of these risk factors: - A previous diabetes test showing diabetes risk - A parent, sibling, or child with a history of diabetes - Inactive lifestyle and lack of regular exercise - African American, Hispanic/Latin American, American Indian and Alaska Native, Asian American, or Pacific Islander ethnicity - High blood pressure (140/90 mm Hg or higher) - Low HDL (good) cholesterol or high triglycerides - History of heart disease - History of diabetes during pregnancy (gestational diabetes) - Health conditions associated with insulin resistance (polycystic ovary syndrome, acanthosis nigricans, severe obesity) If your blood test results show that you have prediabetes, your provider may suggest that you be retested once each year. If your results are normal, your provider may suggest getting retested every 3 years. 
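The fasting-glucose ranges listed above can be sketched as a simple lookup. Note that the upper cutoff (126 mg/dL and above suggesting diabetes) is the standard clinical threshold, added here for completeness rather than taken from this page; this is an illustration, not a diagnostic tool:

```python
# Classify a fasting blood glucose reading (mg/dL).
# The prediabetes band (100-125 mg/dL, "impaired fasting glucose") comes from
# the text above; the >= 126 mg/dL diabetes cutoff is the standard clinical
# threshold, assumed here for completeness. For illustration only.

def classify_fasting_glucose(mg_dl: float) -> str:
    if mg_dl < 100:
        return "normal"
    elif mg_dl <= 125:
        return "prediabetes (impaired fasting glucose)"
    else:
        return "diabetes range - confirm with your provider"

for reading in (92, 110, 126):
    print(reading, "->", classify_fasting_glucose(reading))
```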
Impaired fasting glucose - prediabetes; Impaired glucose tolerance - prediabetes
Review Date 1/1/2020 Updated by: David C. Dugdale, III, MD, Professor of Medicine, Division of General Medicine, Department of Medicine, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Medical Director, Brenda Conaway, Editorial Director, and the A.D.A.M. Editorial team.
Humans become infected with the parasite that causes toxoplasmosis through contact with infected animal faeces (poo), usually from cats. Normally, symptoms are mild, but toxoplasmosis in pregnancy can cause birth defects. It can also cause illness in people with a compromised immune system. Pregnant women and people who have compromised immune systems should take precautions. Toxoplasmosis is an infection caused by a parasite known as Toxoplasma gondii (T. gondii). This single-celled organism is commonly found throughout the world and tends to infect birds and mammals. The parasite forms egg-like structures called oocysts. These must be ingested by mouth, which means the infection cannot be transferred from person to person. Humans become infected with the toxoplasmosis parasite through contact with infected animal faeces (poo). Cats are the main hosts. They acquire T. gondii from eating infected rodents or birds and then may pass the infection to their human handlers. Another way of catching this infection is touching or eating raw or undercooked lamb, pork or kangaroo meat. The parasites can be stored in small pockets (cysts) in the muscle tissue of these meats. Drinking contaminated unpasteurised milk can also cause infection with toxoplasmosis parasites. Symptoms of toxoplasmosis In most cases of animal and human infection, toxoplasmosis does not cause any symptoms. The only evidence of infection is detection of antibodies in the blood against the toxoplasmosis parasite. Symptoms, if they do occur, include: - Swollen lymph glands, especially around the neck - Muscle aches and pains - Generally feeling unwell - Inflammation of the lungs - Inflammation of the heart muscle - Inflammation of the eye, for example, the retina (at the back of the eye). Duration of infection with T. gondii The toxoplasmosis parasite can cause a long-term infection. 
Following infection, a small number of parasites can remain locked inside cysts within certain parts of the body, such as the brain, lungs and muscle tissue. Under normal circumstances, the immune system will easily destroy any parasites that escape these cysts, but a person with lowered immunity may not be able to fend off an attack. The parasites can greatly increase in number and cause a variety of serious illnesses, including infection of the brain. Effects of toxoplasmosis on unborn babies If newborn babies are infected, at worst, they will only suffer from mild illness. However, toxoplasmosis in pregnancy can expose babies in the womb to the parasite and this is potentially more serious. If a woman contracts toxoplasmosis for the first time while pregnant, the parasites may affect the baby through the placenta. Most unborn babies aren’t affected at all, but a minority may be harmed by infection. Effects of toxoplasmosis on unborn babies can include: - Skin rashes - Nervous system damage - Mental retardation - Cerebral calcification (hardening of brain tissue) - Liver damage - Eye problems - Fetal death (in rare cases). Precautions against toxoplasmosis Pregnant women and people who have compromised immune systems should take precautions against toxoplasmosis. If a woman is infected before she becomes pregnant, then her immune system will attack the parasite and make it harmless. Problems only occur if a woman becomes infected for the first time while pregnant. A pregnant woman and people with compromised immune systems can take simple precautions to reduce the risk of infection with the parasite. These include: - Wash hands after handling raw meat. - Cook meat (including kangaroo meat) thoroughly until the juices run clear. - Do not eat rare or medium-rare meat dishes. - Wash vegetables to remove any traces of soil. - Wash hands thoroughly before eating. 
- Immediately wash cutting boards, knives and any other implements that have come into contact with raw meat. - Wear gloves while gardening. - Avoid contact with cats. - Get someone else to handle litter trays. - Make sure litter trays are cleaned daily. Toxoplasmosis in cats and sandpits The infectious oocysts are robust and hardy. They can survive in water, soil or sand for around 12 months. Young children who play in sandpits and gardens may be at risk if they come into contact with infected cat faeces. Precautions include: - Make sure your child’s sandpit can be covered when not in use. - Discourage stray cats from your property. - Ask your child to always wash their hands thoroughly before eating. Precautions against toxoplasmosis for your household cat Cats are only infectious for a few weeks after ingesting the parasites and kittens are more likely to pass on the infection than older cats. Suggestions on reducing the risk of infection in your cat include: - Keep your cat indoors whenever possible. - Don’t allow the cat to hunt and eat birds or other wildlife. - Feed your cat canned or dry foods, instead of raw meat (including kangaroo meat). Treatment for toxoplasmosis Treatment of toxoplasmosis is often unnecessary. The infection is diagnosed with a simple blood test that checks for the presence of specific antibodies. A healthy person who is not pregnant and becomes infected does not require treatment. Symptoms, if any, are usually mild and disappear after a few weeks. For pregnant women and those with compromised immune systems, such as those in the later stages of human immunodeficiency virus infection/acquired immunodeficiency syndrome (HIV/AIDS), medications including antibiotics may be prescribed. Where to get help - Your doctor Things to remember - People become infected with Toxoplasma gondii parasites through contact with infected animal faeces (usually cat faeces). 
- A healthy person does not require treatment for toxoplasmosis, as symptoms are mild and usually disappear within a few weeks.
- Pregnant women and people who have compromised immune systems should take precautions against toxoplasmosis.
- A pregnant woman is advised to avoid contact with cats, as her unborn child is at increased risk of birth defects if parasites cross the placenta.
This page has been produced in consultation with and approved by the Department of Health. Fact sheet currently being reviewed. Last reviewed: August 2012
Content on this website is provided for education and information purposes only. Information about a therapy, service, product or treatment does not imply endorsement and is not intended to replace advice from your doctor or other registered health professional. Content has been prepared for Victorian residents and wider Australian audiences, and was accurate at the time of publication. Readers should note that, over time, currency and completeness of the information may change. All users are urged to always seek advice from a registered health care professional for diagnosis and answers to their medical questions.
For the latest updates and more information, visit www.betterhealth.vic.gov.au Copyright © 1999/2015 State of Victoria. Reproduced from the Better Health Channel (www.betterhealth.vic.gov.au) at no cost with permission of the Victorian Minister for Health. Unauthorised reproduction and other uses comprised in the copyright are prohibited without permission.
How changes in stars’ speed gave away the most Earth-like planets ever observed When thinking about Earth-like exoplanet discoveries, the Kepler space telescope immediately comes to mind. Yet, it is not only Kepler, but also ground-based information from the HARPS-N spectrograph, that allowed the ETAEARTH consortium to obtain information on these planets with a degree of precision never reached before. A joint initiative between Europe and the US, ETAEARTH (Measuring Eta_Earth: Characterization of Terrestrial Planetary Systems with Kepler, HARPS-N, and Gaia), was tasked with measuring the dynamical masses of terrestrial planet candidates discovered by the Kepler mission. The project delivered beyond expectations, being responsible for most of the Earth-like planet discoveries made over the past five years. Dr Alessandro Sozzetti, coordinator of the project and researcher at the National Institute for Astrophysics in Italy, discusses the project’s outcomes. There is much ongoing research dedicated to Earth analogues. What makes ETAEARTH stand out? Over the five years of the project, ETAEARTH has combined the fantastic photometric precision of NASA’s Kepler and K2 missions and the unrivalled quality of ground-based radial velocity measurements with the HARPS-N spectrograph on the Italian Telescopio Nazionale Galileo (TNG) in the Canary Islands. The point was to determine the physical properties of terrestrial extrasolar planets in orbit around stars similar in size to or smaller in size than the Sun, with unprecedented accuracy. ETAEARTH scientists had a considerable advantage over other research teams because we had access to a conspicuous Guaranteed Time Observations (GTO) program with HARPS-N@TNG, for a total of 400 observing nights over five years. Such a large telescope time investment was key to the spectacular successes of the project. What’s the added value of combining KEPLER and HARPS-N data? 
Kepler and K2 exploit the technique of planetary transits: They measure the dip in the light from a star as a planet crosses it, revealing the planet’s size. HARPS-N, on the other hand, measures changes in the star’s speed due to the gravitational pull from an orbiting planet, allowing us to determine its mass. From the combination of these two observations, we can calculate the planet’s density and determine its bulk composition (e.g., rocky, water-rich, gas-rich, etc.) with high accuracy. Can you tell us more about your methodology? ETAEARTH carefully selected Kepler and K2 small-radius exoplanet candidates based on their chances of having their masses measured accurately with HARPS-N. We then designed adaptive observing strategies tailored to each system, depending for example on the magnitude of the signal sought with HARPS-N and on the orbital period of the candidate. Once an observing campaign for a given target was completed, we accurately determined the fundamental physical parameters of the central star – that is, its mass and radius – as only precise knowledge of these quantities allows us to derive accurate estimates of the planetary parameters. The next step in our methodology entailed a sophisticated combined analysis of the available Kepler/K2 and HARPS-N data to derive all the system’s orbital and physical parameters (for both single and multiple transiting planets). Finally, our measurements of planetary densities were compared with predictions from theory to underpin the actual composition of the planet(s). What were the main difficulties you faced in this process and how did you overcome them? The biggest challenge we had to face arose from dealing with stellar activity. This phenomenon, produced primarily by spots on the surface of the star that come in and out of view as the star rotates (just like our Sun), introduces complications in the interpretation of the data – particularly those gathered with HARPS-N. 
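The combination the interview describes, a transit radius plus a radial-velocity mass, reduces to simple geometry when computing bulk density. The sketch below uses approximate literature values for Kepler-78b (about 1.7 Earth masses and 1.2 Earth radii); those numbers are assumptions for illustration, not figures quoted in this interview:

```python
import math

# Bulk density from a transit radius and a radial-velocity mass.
# Kepler-78b's ~1.7 Earth masses and ~1.2 Earth radii are approximate
# literature values assumed here for illustration.

EARTH_MASS_KG = 5.972e24
EARTH_RADIUS_M = 6.371e6

def bulk_density(mass_earths: float, radius_earths: float) -> float:
    """Mean density in kg/m^3 for a planet given in Earth units."""
    mass = mass_earths * EARTH_MASS_KG
    radius = radius_earths * EARTH_RADIUS_M
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    return mass / volume

print(f"Earth:      {bulk_density(1.0, 1.0):.0f} kg/m^3")   # ~5500 kg/m^3
print(f"Kepler-78b: {bulk_density(1.7, 1.2):.0f} kg/m^3")   # close to Earth's density
```

A density near Earth's 5,500 kg/m³ points to a rocky iron-silicate composition, while a markedly lower value suggests a substantial water or gas fraction, which is how a candidate like K2-3d gets flagged as a possible 'water world'.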
It can sometimes mask entirely or even mimic a planetary signal. So you think you are seeing a planet, but you are instead accurately measuring the star acting up! Our learning curve was steep, but ultimately we succeeded, using a twofold approach: First, we adapted our observing strategies with HARPS-N to make sure we could sample both stellar and planetary signals well enough. With the best-possible temporal distribution of our observations, we then developed sophisticated analysis tools that allowed us to effectively disentangle planetary signals and those produced by stellar activity. What would you say were your most important findings? We could learn for the first time about the physics of these objects’ interiors. We have notably determined with high precision (20 % or better) the composition of 70 % of currently known planets with masses between one and six times that of the Earth and with a rocky composition similar to that of Earth. Among these, we discovered Kepler-78b, the first planetary object that has a similar mass, radius and density to Earth. We have also found the two closest transiting rocky planets, orbiting the solar-type star HD219134 only 21 light years away. This golden sample of planets with well-constrained parameters allowed us to infer that all dense planets with masses below six Earth masses (including Earth and Venus) are well-described by exactly the same rocky composition (in technical terms, the same fixed ratio of iron to magnesium silicate). Most notably, ETAEARTH provides the first-ever constraints on the density of K2-3d, a planet in a multiple transiting system that is similar to Earth in mass and orbits within the Habitable Zone of the star known to-date to be closest in mass to the Sun. K2-3d appears to belong to the still elusive class of ‘water worlds’, with a density somewhat lower than Earth’s. 
Finally, using information from the full sample of objects found by Kepler, we have determined that one in five solar-like stars host an Earth-like planet, i.e. an object with a size similar to Earth orbiting within the Habitable Zone of its solar-type parent star. What are your follow-up plans, if any? Our post-ETAEARTH plans will primarily focus on tapping the huge potential that is about to be unleashed by the new important player in the exoplanet arena, NASA’s TESS mission which was successfully launched just a few weeks ago. TESS will find transiting planets over most of the observable sky with radii not much bigger than Earth’s, and around stars typically five to ten times brighter than those observed by Kepler. Some of these small planets will orbit at Habitable Zone distances from their central stars (typically of lower mass than the Sun). We plan to invest large amounts of observing resources from both hemispheres whilst continuing to use HARPS-N and the new ultra-high-precision European planet hunter ESPRESSO on the Very Large Telescope in the Chilean Andes in order to measure masses and densities of the best candidates provided by TESS. Doing this could dramatically increase the sample of optimal targets amenable for investigations of their atmospheres. last modification: 2018-05-29 17:15:01
The fact that “adjacent agricultural fields can produce significantly different yields” (lines 16–17) is offered as evidence of the Until recently, many anthropologists assumed that the environment of what is now the southwestern United States shaped the social history and culture of the region's indigenous peoples. Building on this assumption, archeologists asserted that adverse environmental conditions and droughts were responsible for the disappearances and migrations of southwestern populations from many sites they once inhabited. However, such deterministic arguments fail to acknowledge that local environmental variability in the Southwest makes generalizing about the environment difficult. To examine the relationship between environmental variation and sociocultural change in the Western Pueblo region of central Arizona, which indigenous tribes have occupied continuously for at least 800 years, a research team recently reconstructed the climatic, vegetational, and erosional cycles of past centuries. The researchers found it impossible to provide a single, generally applicable characterization of environmental conditions for the region. Rather, they found that local areas experienced different patterns of rainfall, wind, and erosion, and that such conditions had prevailed in the Southwest for the last 1,400 years. Rainfall, for example, varied within and between local valley systems, so that even adjacent agricultural fields can produce significantly different yields. The researchers characterized episodes of variation in southwestern environments by frequency: low-frequency environmental processes occur in cycles longer than one human generation, which generally is considered to last about 25 years, and high-frequency processes have shorter cycles. The researchers pointed out that low-frequency processes, such as fluctuations in stream flow and groundwater levels, would not usually be apparent to human populations.
In contrast, high-frequency fluctuations such as seasonal temperature variations are observable and somewhat predictable, so that groups could have adapted their behaviors accordingly. When the researchers compared sequences of sociocultural change in the Western Pueblo region with episodes of low- and high-frequency environmental processes, they found no simple correlation between environmental process and sociocultural change or persistence. Although early Pueblo peoples did protect themselves against environmental risk and uncertainty, they responded variously on different occasions to similar patterns of high-frequency climatic and environmental change. The researchers identified seven major adaptive responses, including increased mobility, relocation of permanent settlements, changes in subsistence foods, and reliance on trade with other groups. These findings suggest that a group's adaptive choices depended on cultural and social as well as environmental factors and were flexible strategies rather than uncomplicated reactions to environmental change. Environmental conditions mattered, but they were rarely, if ever, sufficient to account for sociocultural persistence and change. Group size and composition, culture, contact with other groups, and individual choices and actions were - barring catastrophes such as floods or earthquakes - more significant for a population's survival than were climate and environment.

Answer choices:
- unpredictability of the climate and environment of the southwestern United States
- difficulty of producing a consistent food supply for a large population in the Western Pueblo region
- lack of water and land suitable for cultivation in central Arizona
- local climatic variation in the environment of the southwestern United States
- high-frequency environmental processes at work in the southwestern United States
Posted by Anonymous on Sunday, March 30, 2008 at 9:27am. For the first question I really need all the help I can get. Thanks!

1. Given a and b are unit vectors, a) if the angle between them is 60 degrees, calculate (6a+b) . (a-2b) b) if |a+b| = sqrt3, determine (2a-5b) . (b+3a)
2. The vectors a = 3i - 4j - k and b = 2i + 3j - 6k are the diagonals of a parallelogram. Show that this parallelogram is a rhombus, and determine the lengths of the sides and angles between the sides.
3. If a and b are perpendicular, show that |a|^2 + |b|^2 = |a + b|^2. What is the usual name of this result? b) If a and b are not perpendicular, and a-b = c, express |c|^2 in terms of a and b. What is the usual name of this result?

Math please help - Reiny, Sunday, March 30, 2008 at 11:17am
You seem to post quite a few vector questions under the name of anonymous. Are you the same person? Please use a first name or some other nick to identify yourself.
Your first question uses the basic laws of vectors.
(2a-5b)∙(b+3a) = 2a∙b + 6│a│^2 - 5│b│^2 - 15a∙b = 6│a│^2 - 5│b│^2 - 13a∙b
We know │a│ and │b│ are 1 each, and a∙b = │a││b│cos 60º, so a∙b = 1*1*1/2.
So 6│a│^2 - 5│b│^2 - 13a∙b = 6*1 - 5*1 - 13*1/2
For 1. b), make a diagram and find cos β using sides 1, 1, √3; that way you can find a∙b and follow my example from a).
2. In a parallelogram the diagonals bisect each other, but in a rhombus (which is a parallelogram) they bisect each other at right angles. So take the dot product and see if you get zero. (You will.) Then 1/2 of vector a + 1/2 of vector b will give you a side.
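As a quick numerical sanity check (not part of the original thread), the algebra in the reply can be verified with a short script; the two unit vectors below are chosen arbitrarily with a 60° angle between them:

```python
import math

# Two arbitrary unit vectors with a 60-degree angle between them:
# a along the x-axis, b rotated 60 degrees from a.
theta = math.radians(60)
a = (1.0, 0.0)
b = (math.cos(theta), math.sin(theta))

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Compute (2a - 5b) . (b + 3a) directly, component by component ...
lhs = dot((2*a[0] - 5*b[0], 2*a[1] - 5*b[1]),
          (b[0] + 3*a[0], b[1] + 3*a[1]))

# ... and compare with the algebraic expansion 6|a|^2 - 5|b|^2 - 13 a.b
rhs = 6*dot(a, a) - 5*dot(b, b) - 13*dot(a, b)

print(lhs, rhs)  # both work out to 6 - 5 - 13*(1/2) = -5.5
```

The same style of check works for question 3: with perpendicular unit vectors, `dot(a, a) + dot(b, b)` equals the squared length of `a + b`, which is the Pythagorean theorem.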
Self-confidence and self-awareness
Children are confident to try new activities, and say why they like some activities more than others. They are confident to speak in a familiar group, will talk about their ideas, and will choose the resources they need for their chosen activities. They say when they do or don’t need help.

Managing feelings and behaviour
Children talk about how they and others show feelings, talk about their own and others’ behaviour and its consequences, and know that some behaviour is unacceptable. They work as part of a group or class, and understand and follow the rules. They adjust their behaviour to different situations, and take changes of routine in their stride.

Children play co-operatively, taking turns with others. They take account of one another’s ideas about how to organise their activity. They show sensitivity to others’ needs and feelings, and form positive relationships with adults and other children.
Thousands of people have died after an earthquake sent huge waves crashing into coastal resorts across south and east Asia. Dr Brian Baptie, a senior seismologist with the British Geological Survey, explained how the wave, or tsunami, was created.

In geological terms, what has happened?

Sumatra, or north-western Indonesia, is right on a plate boundary. The earth's surface is made up of lots of different tectonic plates and they are all moving around. The plate that has the Indian Ocean on it is moving roughly north-east and colliding with Sumatra. And as that collision takes place, the Indian Ocean plate gets subducted underneath Sumatra, and as that plate is subducted it breaks up, and that is what causes the earthquake. This earthquake has been one of the largest ever, one of the great earthquakes. There has been a rupture along a fault about 1,000km long, and that has generated a vertical displacement of about 10m. The displacement in the sea floor has generated this huge tsunami.

How does the wave develop?

There's a huge vertical displacement in the sea floor as a result of the earthquake and that displaces a huge volume of water. You can imagine, if the rupture is 1,000km long with a 10m displacement in the sea floor, you get hundreds of cubic kilometres of water, and that results in a wave that travels through the ocean.

Largest recorded earthquakes:
- 1960 - Chile, 9.5 magnitude
- 1964 - Alaska, 9.2
- 1957 - Alaska, 9.1
- 1952 - Russia, 9.0
- 2004 - Indonesia, 9.0

In the deep ocean the height of the wave can be a few metres, maybe 5-10m, and it travels at a few hundred kilometres per hour. That means it travels relatively slowly compared with the seismic waves from the earthquake, and it has arrived quite a few hours later at surrounding coastal areas all around the Indian Ocean. As the tsunami wave approaches the shore it slows down, because the water gets shallower, and that means the height of the wave increases dramatically.
When it hits the shoreline it can be 10-20m, and that is probably what has happened in this case.

Why was there no warning this was happening?

There is a tsunami warning system in place in the Pacific Ocean because there is a historical precedent of lots of earthquakes causing tsunamis like this throughout the 20th Century. But there is no real precedent for a tsunami like this in the Indian Ocean. So this is the first time this has happened, and there is no warning system, as far as I know, in the Indian Ocean.

Could there be more waves on a similar scale?

It is unlikely there will be further tsunamis of the same size. What normally happens when you get a very large earthquake is that you get aftershocks that continue for many days. They are usually a bit smaller than the main shock, although it is not impossible that there could be another one. But there may be aftershocks and they may generate smaller tsunamis.
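The slowing described in the interview follows the standard shallow-water wave relation, c = √(g·d); the formula is not stated in the interview itself, so the sketch below is only an illustration of why shallower water means a slower (and therefore taller) wave:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wave_speed_kmh(depth_m):
    """Shallow-water wave speed c = sqrt(g * d), converted to km/h."""
    return math.sqrt(G * depth_m) * 3.6

# In deep ocean water (roughly 4000 m) the tsunami travels at a few
# hundred kilometres per hour, consistent with the interview's figure.
print(round(wave_speed_kmh(4000)), "km/h in the deep ocean")

# Near the shore (roughly 10 m deep) it slows dramatically; the wave's
# energy piles up, which is why its height grows as it approaches land.
print(round(wave_speed_kmh(10)), "km/h near the shore")
```

With these assumed depths, the speed drops by a factor of twenty between the open ocean and the coastline, matching the slowdown Dr Baptie describes.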
1. What is a network? A network is a set of devices connected by physical media links. A network is recursively a connection of two or more nodes by a physical link, or two or more networks connected by one or more nodes.
2. What is meant by a link? At the lowest level, a network can consist of two or more computers directly connected by some physical medium such as coaxial cable or optical fiber. Such a physical medium is called a link.
3. What is a node? A network can consist of two or more computers directly connected by some physical medium such as coaxial cable or optical fiber. Such physical media are called links, and the computers they connect are called nodes.
4. What is a gateway or router? A node that is connected to two or more networks is commonly called a router or gateway. It generally forwards messages from one network to another.
5. What is a point-to-point link? If a physical link is limited to a pair of nodes, it is said to be a point-to-point link.
6. What is multiple access? If a physical link is shared by more than two nodes, it is said to be multiple access.
7. What are the advantages of distributed processing? a. Distributed databases b. Faster problem solving c. Security through redundancy d. Collaborative processing
8. What are the criteria necessary for an effective and efficient network? Performance, reliability, and security. Performance can be measured in many ways, including transmit time and response time. Reliability is measured by the frequency of failure, the time it takes a link to recover from a failure, and the network’s robustness. Security issues include protecting data from unauthorized access and viruses.
9. What are the factors that affect the performance of a network? a. Number of users b. Type of transmission medium
10. Name the factors that affect the reliability of a network. a. Frequency of failure b. Recovery time of the network after a failure
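The node/link/gateway vocabulary above can be sketched as a toy data structure (illustrative only; the class and names below are invented for this example and are not part of any networking API):

```python
# Toy model of the definitions above: nodes attach to networks via
# links, and a node attached to two or more networks acts as a
# router/gateway that can forward messages between them.
class Node:
    def __init__(self, name):
        self.name = name
        self.networks = set()  # networks this node is attached to

def is_gateway(node):
    """Per definition 4: a node on two or more networks is a router/gateway."""
    return len(node.networks) >= 2

# A host on a single network is not a gateway...
host = Node("host-a")
host.networks.add("net-1")

# ...but a node attached to both net-1 and net-2 is.
router = Node("router-r")
router.networks.update({"net-1", "net-2"})

print(is_gateway(host), is_gateway(router))  # False True
```

The same structure makes the point-to-point vs. multiple-access distinction concrete: a link shared by exactly two nodes is point-to-point, while a link shared by more is multiple access.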
The occupation of Japan was driven by a sense of urgency: the old order had to be dismantled and a new constitution written before opposition to these efforts could arise. The effort was estimated to take only a couple of years but actually took seven to eradicate the authoritarian system and build up a liberal democratic society. In the Potsdam Declaration of 1945, the US, Britain, and China called for the removal of individuals responsible for the war and the punishment of war criminals. The structure of the economy would be changed to allow for the payment of reparations and prevent rearmament. The values of democracy would be instilled and the ideology of imperialism abolished. All of Japan’s overseas assets would be lost, thus limiting its dominion to the four main islands. The first objective was the timely demobilization of Japan’s armed forces. Over 6.5 million soldiers had to travel back home, dispose of their weaponry, and find work and reintegrate into society. The structure of the military, which had been the basis of government and politics, was dismantled. A greater concern for human rights and civil liberties was instilled in the population in an effort to keep Japan from repeating the imperialistic and authoritarian ways of the past. This was achieved through educational reforms. A less rigid structure allowed a greater number of people to attend educational facilities. An attempt to mimic US education was implemented in an effort to focus more on the development of the individual. More emphasis was placed on interpretation and analysis as opposed to traditional methods. The occupation encouraged a strong labor movement and the American ideology of unionization. This reform of the labor system would abolish laws, ordinances, and other restrictions that prohibited civil liberties.
The unionization of Japan was short-lived, and eventually the promotion of unionization was weakened by the discouragement of industry-wide unions. Unlike the labor reforms, the land reforms proved quite successful, partly because the Japanese themselves wanted to reform the problem, in line with the American agenda. Lastly, the occupation was determined to dismantle the zaibatsu, large business conglomerates that had accumulated assets during the war. These groups had economic and political power and were believed to be major supporters of militarization. Despite the many efforts to disband the zaibatsu, the attempt was eventually given up due to opposition from overseas business and ongoing Cold War agendas.
Staphylococcus bacteria (or staph) are commonly carried on the skin or in the nose of healthy individuals. Staph is spread by close contact, either through direct physical contact with an infected individual or by touching objects (e.g., benches, towels, clothing, sports equipment) contaminated with the bacteria. MRSA is a type of bacterial infection caused by Staphylococcus aureus that is resistant to methicillin, an antibiotic commonly used to treat staph infection. Initially found primarily in hospitals, nursing homes, and other health care settings, MRSA has in recent years caused numerous cases in athletic settings. College and university cases have generally been among players of sports where skin-to-skin contact or sharing of equipment is common, such as football, wrestling, lacrosse, and fencing. Schools in the news with MRSA cases include large universities such as the University of North Carolina and the University of Georgia, and colleges such as Bowdoin and Amherst. We have had cases at Hampshire, though causality was not readily established. MRSA is highly contagious and can cause bloodstream infections, pneumonia, and in extreme cases death; cases at schools include at least 2 deaths. MRSA causes a skin infection that may resemble a pimple, boil, or other lesion. The skin may be red, warm, swollen, tender, or have drainage. The lesion drainage is very infectious. Skin infections that are left untreated can develop into more serious life-threatening infections of the lungs, blood, or bone. Symptoms of these infections include difficulty breathing, malaise, fever, or chills. Some individuals may have staph colonies, including MRSA, on the skin and not have any symptoms. These individuals act as carriers for the bacteria. To prevent MRSA infection and act quickly to treat it if it occurs, the following precautions are necessary. If you have any questions, please contact Nancy Apple in environmental health and safety (ext.
6620), or Karen Kalmakis at health services (ext. 5458).
One of the earliest forms of employing the dialectical method was the Dialogues of the Greek philosopher Plato, in which the author sought to study truth through discussion in the form of questions and answers. The Greek philosopher Aristotle thought of dialectic as the search for the philosophic basis of science, and he frequently used the term as a synonym for the science of logic. “Hegel's aim was to set forth a philosophical system so comprehensive that it would encompass the ideas of his predecessors and create a conceptual framework in terms of which both the past and future could be philosophically understood. Such an aim would require nothing short of a full account of reality itself. Thus, Hegel conceived the subject matter of philosophy to be reality as a whole. This reality, or the total developmental process of everything that is, he referred to as the Absolute, or Absolute Spirit. According to Hegel, the task of philosophy is to chart the development of Absolute Spirit. This involves (1) making clear the internal rational structure of the Absolute; (2) demonstrating the manner in which the Absolute manifests itself in nature and human history; and (3) explicating the teleological nature of the Absolute, that is, showing the end or purpose toward which the Absolute is directed.” Hegel, following the ancient Greek philosopher Parmenides, argued that "what is rational is real and what is real is rational." This must be understood in terms of Hegel's further claim that the Absolute must ultimately be regarded as pure Thought, or Spirit, or Mind, in the process of self-development. Traditionally, this dimension of Hegel's thought has been analyzed in terms of the categories of thesis, antithesis, and synthesis. Although Hegel tended to avoid these terms, they are helpful in understanding his concept of the dialectic. The thesis, then, might be an idea or a historical movement. Such an idea or movement contains within itself incompleteness...
Actually, if you live in Japan, you probably know some of the street names or bridge names in your town. If you live in central Tokyo, you have likely heard “Meiji dori”, “Aoyama dori” or “Nihon bashi”. And when you make your own sentences, you might say, “kono dori o migi ni magaru.” (Turn right on this street.) or “ano bashi o watarou.” (Let’s cross that bridge.) In fact, these are not right.
- street, road: toori (Not “dori”)
- bridge: hashi (Not “bashi”)
The correct sentences are “kono toori o migi ni magaru” and “ano hashi o watarou”. When a proper noun is put in front of “toori” or “hashi”, the pronunciation of their kanji changes, as in “Meiji dori” or “Nihon bashi”. Let’s have a look at other examples like this. 1. When a word is preceded by a proper noun, the first letter of the word takes on voiced consonant marks. (There are exceptions to this rule.)
- saka → zaka [example: Fujimi zaka]
- kawa → gawa [example: Sumida gawa]
- yama → san [example: Fuji san]
- mizuumi → ko [example: Yamanaka ko]
- shiro → jo [example: Osaka jo]
Moreover, many non-Japanese have an interesting understanding of Mt. Fuji (Fuji san). The san in Fuji san is the Chinese pronunciation of 山 (yama), but those who don’t know kanji sometimes confuse that san with the san that we put after someone’s name. Sometimes they say, “Fuji san is an important mountain to the Japanese, so you call it Fuji san to pay it respect, don’t you?” But this is not the reason. Incidentally, why is the big park in Shinjuku, Tokyo called “Shinjuku gyoen”, not “koen”? It is because this place was an imperial property during the Meiji period and “gyoen” means an “imperial park/garden”.
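As a playful illustration (the function and word lists below are invented for this example, and cover only the words mentioned in this post), the sound changes described above can be captured in a small lookup table:

```python
# Examples from the post: when preceded by a proper noun, the first
# consonant of these words takes on voicing marks (e.g. t -> d, h -> b,
# s -> z, k -> g), or the word switches to a different reading entirely.
RENDAKU = {
    "toori": "dori",   # Meiji dori
    "hashi": "bashi",  # Nihon bashi
    "saka": "zaka",    # Fujimi zaka
    "kawa": "gawa",    # Sumida gawa
}

ALTERNATE_READING = {
    "yama": "san",     # Fuji san
    "mizuumi": "ko",   # Yamanaka ko
    "shiro": "jo",     # Osaka jo
}

def compound(proper_noun, word):
    """Join a proper noun with a common noun, applying the sound change."""
    suffix = RENDAKU.get(word) or ALTERNATE_READING.get(word) or word
    return f"{proper_noun} {suffix}"

print(compound("Meiji", "toori"))  # Meiji dori
print(compound("Fuji", "yama"))    # Fuji san
```

Real rendaku has many exceptions, as the post notes, so a table like this is only a mnemonic for the examples given, not a general rule.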
Over time, Jupiter sucked up the fragments of the comet Shoemaker–Levy 9, which crashed into the planet in 1994. Twenty years ago this week, humans for the first time witnessed a collision between two bodies in the solar system. From July 16 to 22, 1994, more than 20 fragments of the comet Shoemaker-Levy 9 pelted Jupiter's atmosphere. The weeklong fireworks show left scars that could be seen for months by people on Earth with the aid of small telescopes. Some of these gashes were even more visible than the Great Red Spot, a swirling hurricane in Jupiter's atmosphere that's nearly three times the diameter of Earth. NASA's fleet of space telescopes and probes at the time was tapped to document the historic collision. The Galileo spacecraft, which was on a mission to study Jupiter, wouldn't arrive in the Jovian system for another year and a half. But the probe still snapped images of fireballs shooting from the gas giant's southern hemisphere. The telescopes of NASA's Deep Space Network looked for disturbances in radio emissions from Jupiter's radiation belt. The crash and its aftermath were also studied with the Hubble Space Telescope, the solar-orbiting Ulysses spacecraft, and Voyager 2 (long before it left the solar system altogether). Shoemaker-Levy 9 was discovered a little more than a year before its demise, in March 1993. Husband-and-wife astronomers Carolyn and Eugene Shoemaker and amateur astronomer David Levy first spotted the comet while comparing two film frames taken with a camera at the California Institute of Technology's Palomar Observatory. Scientists believe the comet was pulled into orbit around Jupiter decades before it finally succumbed to the immense gravity of the solar system's biggest planet. Though the visible marks are gone, the spectacular comet crash left a legacy to both science and the popular consciousness.
Among the studies it inspired was one recent investigation that found water in Jupiter's atmosphere that was dropped on the planet by Shoemaker-Levy 9. The impact also raised public awareness about how vulnerable Earth is to strikes from comets and asteroids. Two blockbuster films, "Armageddon" and "Deep Impact," hit theaters a few years later, in 1998. Congress also authorized NASA to hunt for near-Earth objects (NEOs). Eventually, the space agency formed its NEO Program Office, which coordinates efforts to detect, monitor and study potentially hazardous asteroids and comets that could pose a threat to Earth, NASA officials said in a statement.
The Basics of Sun Safety for Kids Just one blistering sunburn in childhood can double your little one's lifetime risk of melanoma, the deadliest form of skin cancer. Young, sensitive skin is especially vulnerable to damaging rays, so protect your child by being sun-care savvy. Childhood and adolescence are critical periods during which exposure to UV radiation is more likely to contribute to skin cancer in later life. Parents have an important role to ensure their children establish healthy sun protection habits during the early years. Research into the effectiveness of role modelling shows us that adopting sun protective behaviours yourself means your children will be more likely to do the same. Infants under 6 months of age should be kept out of the sun. Their skin is too sensitive for sunscreen. An infant's skin possesses little melanin, the pigment that gives colour to skin, hair and eyes and provides some sun protection. Therefore, babies are especially susceptible to the sun's damaging effects. Seek shade. UV rays are strongest and most harmful during midday, so it’s best to plan indoor activities then. If this is not possible, seek shade under a tree, an umbrella, or a pop-up tent. Use these options to prevent sunburn, not to seek relief after it’s happened. Cover up. When possible, long-sleeved shirts and long pants and skirts can provide protection from UV rays. Clothes made from tightly woven fabric offer the best protection. A wet T-shirt offers much less UV protection than a dry one, and darker colors may offer more protection than lighter colors. Some clothing certified under international standards comes with information on its ultraviolet protection factor. Get a hat. Hats that shade the face, scalp, ears, and neck are easy to use and give great protection. Baseball caps are popular among kids, but they don’t protect their ears and neck. If your child chooses a cap, be sure to protect exposed areas with sunscreen. Wear sunglasses. 
They protect your child’s eyes from UV rays, which can lead to cataracts later in life. Look for sunglasses that wrap around and block as close to 100% of both UVA and UVB rays as possible. Use sunscreen with at least SPF 15 and UVA and UVB protection. For the best protection, apply sunscreen generously 30 minutes before going outdoors. Don’t forget to protect ears, noses, lips, and the tops of feet. Take sunscreen with you to reapply during the day, especially after your child swims or exercises. This applies to waterproof and water-resistant products as well. Keep in mind, sunscreen is not meant to allow kids to spend more time in the sun than they would otherwise. Try combining sunscreen with other options to prevent UV damage. Turning pink? Unprotected skin can be damaged by the sun’s UV rays in as little as 15 minutes. Yet it can take up to 12 hours for skin to show the full effect of sun exposure. So, if your child’s skin looks “a little pink” today, it may be burned tomorrow morning. To prevent further burning, get your child out of the sun. Tan? There’s no other way to say it—tanned skin is damaged skin. Any change in the color of your child’s skin after time outside—whether sunburn or suntan—indicates damage from UV rays. Cool and cloudy? Children still need protection. UV rays, not the temperature, do the damage. Clouds do not block UV rays; they filter them—and sometimes only slightly. Stay safe in the sun everyone!
This view of the Orion nebula highlights fledgling stars hidden in the gas and clouds. It shows infrared observations taken by NASA's Spitzer Space Telescope and the European Space Agency's Herschel mission, in which NASA plays an important role. A star forms as a clump of this gas and dust collapses, creating a warm glob of material fed by an encircling disk. These dusty envelopes glow brightest at longer wavelengths, appearing as red dots in this image. In several hundred thousand years, some of the forming stars will accrete enough material to trigger nuclear fusion at their cores and then blaze into stardom.
Astronauts have been taking part in short spaceflight missions since 1961. They have only recently begun to spend significantly longer times in space, with missions extending for months, since the days of the Russian Mir space station (1986-2001) and extended stays on the International Space Station (ISS; continuously occupied since November 2000). Though earlier studies clearly showed that astronauts on these extended missions suffered serious deficits from lengthy times in a low-gravity environment, including dizziness when standing up, considerable loss of bone mass, and impaired muscle function, little was known about the effects of long-term space flight on the heart and vascular system. In a new study, a research team has tested various cardiovascular measures in six astronauts on long-term missions aboard the International Space Station. These findings show that lengthy spaceflight indeed affects cardiovascular responses, but not as dramatically as the researchers predicted, suggesting that the intensive exercise routines astronauts on these long missions complete every day are doing their job. The article is entitled "Cardiovascular Regulation During Long-Duration Spaceflights to the International Space Station." It appears in the current edition of the Journal of Applied Physiology, published by the American Physiological Society. The researchers collected data from six male astronauts, between 41 and 55 years old, who were headed to the ISS on missions ranging from 52 to 199 days. About a month before they embarked, the research team collected a wealth of data on each subject's cardiovascular health. This data was collected during spontaneous and paced breathing, both sitting up and lying down, to reflect a variety of conditions and cardiovascular stresses. The researchers measured various factors including finger arterial blood pressure, heart rate, left ventricular ejection time, and cardiac output.
The astronauts repeated these measures independently a few weeks after they arrived at the space station, then a few weeks before they returned to Earth. A final assessment took place again soon after landing on Earth. Results showed that heart rate, blood pressure, and arterial baroreflex response (the body's natural way to regulate heart rate and blood pressure based on continuous sensing of both) were unchanged from pre-flight to in-flight. Left ventricular ejection times and cardiac output both increased in-flight, while time between heartbeats, arterial pulse pressure, and the blood pumped from the heart decreased. In the post-flight testing compared to pre-flight measures, heart rate and cardiac output increased slightly, while arterial baroreflex response decreased by about a third, but only in the seated position. Importance of the Findings These findings suggest that long-duration spaceflight has significant effects on cardiovascular function, yet these effects are relatively small. The researchers attribute this cardiovascular stability to the intensive exercise program astronauts commit to while on lengthy spaceflight missions. On these particular missions, the six astronauts were each allotted 2.5 hours per day to set up for exercise, complete a workout, and clean up after the session, with options to exercise on a cycle, treadmill, or doing resistance training. These exercise sessions appear to keep astronauts relatively healthy and prepared for return to Earth, despite the potentially negative effects of a low-gravity environment. "These post-flight changes were somewhat less than expected based on short-duration flights and early reports of long-duration missions and suggest that the current countermeasures on the ISS, which include exercise training, are keeping cardiovascular control mechanisms well prepared for return to Earth," the authors say. 
The ISS astronauts in the current study represent the first six-person crew, signifying the transition to greater possibilities to conduct science on this major international laboratory, they note. More information: The study is available online at bit.ly/FQW3kG
Decoding Worm Lingo PASADENA, Calif.—All animals seem to have ways of exchanging information—monkeys vocalize complex messages, ants create scent trails to food, and fireflies light up their bellies to attract mates. Yet, despite the fact that nematodes, or roundworms, are among the most abundant animals on the planet, little is known about the way they network. Now, research led by California Institute of Technology (Caltech) biologists has shown that a wide range of nematodes communicate using a recently discovered class of chemical cues. A paper outlining their studies—which were a collaborative effort with the laboratory of Frank C. Schroeder, assistant scientist at the Boyce Thompson Institute for Plant Research (BTI) of Cornell University—was published online April 12 in the journal Current Biology. Previous research by several members of this team had recently shown that a much-studied nematode, Caenorhabditis elegans, uses certain chemical signals to trade data. What was unknown was whether other worms of the same phylum "talk" to one another in similar ways. But when the researchers looked at a variety of nematodes, they found the very same types of chemicals being combined and used for communication, says Paul Sternberg, the Thomas Hunt Morgan Professor of Biology at Caltech and senior author on the study. "It really does look like we've stumbled upon the letters or words of a universal nematode language, the syntax of which we don't yet fully understand," he says. Nematodes are wide-ranging creatures; they have been found in hot springs, arctic ice, and deep-sea sediments. Many types of nematodes are harmless, or even beneficial, but others cause damage to plants and harm to humans and animals. Decoding the language of these worms could allow us to develop strategies to prevent the spread of unwanted nematode species, saving time and money for the agricultural and health-care industries. 
"We can now say that many—maybe all—nematodes are communicating by secreting small molecules to build chemical structures called ascarosides," says Sternberg, whose past research in C. elegans found that those worms secrete ascarosides both as a sexual attractant and as a way to control the social behavior of aggregation. "It's really exciting and a big breakthrough that tells us what to look for and how we, too, might be able to communicate with this entire phylum of animals." Building upon Sternberg's previous findings, he and Andrea Choe, then a graduate student and now a postdoctoral scholar in biology at Caltech, decided to look for evidence of ascarosides in other species of nematodes. These included some parasitic organisms as well as some benign roundworm samples. "I turned a section of Paul's lab into a parasite zoo, and people were both intrigued by it and terrified to come back there," says Choe. "One day they would see me cutting carrots to culture plant parasites, and the next I would be infecting mosquitoes or harvesting hookworms from rat intestines. We really tried to get as many different samples as we could." Once they had cultured a sufficient number of different nematode species, the creatures were bathed in a liquid solution dubbed "worm water." This worm water collected the chemicals given off by the nematodes. The worms were then filtered out and sent to Schroeder's lab at BTI to be analyzed using a mass spectrometer—a tool used to deduce the chemical structure of molecules. "When the results came back from BTI, showing that the same ascarosides were present in all the worm-water samples, I thought that they had made a mistake," says Choe. "It was a very surprising finding." Using technology developed by Dima Kogan, a former graduate student at Caltech and coauthor of the paper, the researchers were also able to test the responses of various worms to particular ascarosides. 
Worms were placed on an agar plate, along with an experimental cue—a blend of ascarosides. Any action that might occur on the plates was then recorded; Kogan's software analyzed those recordings frame by frame, counting the number of worms that were either attracted or repelled by the given chemicals. When asked about the development of the software, Choe explains that it all began when Kogan noticed that the current method involved counting worms by eye. "He was stunned that we would spend our time doing this," says Choe, "and he came up with this software in less than a week. It removed user bias, sped up our research 10-fold, and allowed us to study more chemicals and more species." Next, the researchers will work to learn more about how the worms actually sense the ascarosides. "Now that we know these chemicals are broadly present in nematodes, we want to find the genes that are responsible for the ability to respond to these chemicals," says Sternberg, who is also an investigator with the Howard Hughes Medical Institute. "That knowledge could open up a whole other angle, not just for dealing with the chemicals, but for actually interfering with those communication systems a little downstream by hitting the receivers." The team also plans to continue deconstructing the language they have found among nematodes. For example, Sternberg wonders, how many different combinations of chemicals mean "food," or "mate," or "attack"? If the scientists can crack the code in terms of what different blends mean to different species, they can begin to interfere with the actions of the nematodes that wreak havoc across the world—leading to better eradication of plant pests, as well as human and animal parasites. "There is only one known worm pheromone used in agriculture," says Choe. "It is time for us to change that. This research could be a very big breakthrough on that front."
The Current Biology study, "Ascaroside Signaling is Widely Conserved Among Nematodes," was funded by a grant from the National Institutes of Health and was supported by the Howard Hughes Medical Institute. Additional authors on the study are Stephan H. von Reuss, from Schroeder's lab at BTI; Robin B. Gasser, from the University of Melbourne; and Edward G. Platzer, from UC Riverside.
• suffrage • Pronunciation: sêf-rij • Part of Speech: Noun Meaning: 1. The right to vote. 2. A vote cast in deciding an issue. 3. A short, intercessory prayer on behalf of souls departed. Notes: This word entered English in the 14th century meaning "a short prayer of intercession", but by the 16th century it was used to refer to voting in the British parliament. The adjective is suffragial, but the more interesting term is suffragette, the name given to women at the beginning of the 20th century who demonstrated for women's right to vote, for women's suffrage, which women in the US suffered without until 1920. In Play: Although the word was more closely associated with the right of women to vote in the last century, it refers to anyone's right to vote: "We would probably elect a better government if we extended suffrage to elementary school children." But don't forget that it also refers to a prayer on behalf of a soul to be promoted to a higher office: "Let's all offer suffrages for Larry to be promoted to some position far from this office." Word History: Suffrage goes back to Latin suffragari "to vote". The root of this word comes from the same source as English break. The English word break (= German brechen) comes from the Proto-Indo-European root bhreg-. The initial [bh] became [b] in English and the [g] became [k], both by regular historical change. The [bh] in Latin, however, standing at the beginning of a word as it does here, became [f], so the Latin word for "break" is frangere, past participle fractus, the origin of our word fracture. With the prefix sub- "under" (the final [b] assimilating to the following [f]), this stem gave Latin suffragari "to vote". Why the connection between "break" and "vote"? The guess is that the early Romans used broken shards of pottery for casting votes. (We all owe a unanimous vote of gratitude to Ruth Baldwin for suggesting we look into the odd connotations of today's word.)
CryoSat: the ice edge holds the key Until now satellites have not been able to monitor melting of ice at the very point where it is most significant: at the ice edge. CryoSat’s ability to do just that thrills scientists working in the field. "CryoSat will pave the way for a better understanding of what happens to the ice at the exact point where things are the most interesting: at the ice edge where the majority of the melting takes place," says Danish glaciologist Carl Egede Bøggild. Part of the Geological Survey of Denmark and Greenland (GEUS), Bøggild heads a large-scale monitoring programme on the Greenland ice sheet. The programme utilises a combination of on-site measurements and satellite data. "In principle you would prefer satellite data when you want to monitor large-scale developments," explains Bøggild. "However it has been a major problem that satellites have had trouble monitoring the very ice edge zone." To measure the height of a given ice surface, a radar altimeter satellite emits a radar signal and records the echo reflected back from the surface. The time taken for the signal to return can be used to calculate the exact ice height, from which changes in thickness can in turn be derived. However the topography at the edge of an ice sheet can be very steep and uneven, making it difficult for the satellite to catch the reflected signal, or to know precisely from which point within the ten-kilometre signal 'footprint' the signal is returning. Often the uncertainty would be too large for the results to be reliable. The practical implication was that the entire ice edge remained inaccessible to satellite monitoring. However the science team behind CryoSat has managed to tackle this problem. Its double-antenna design means it can measure the angle of the returning signal and so pinpoint where it comes from relative to the spacecraft track.
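The geometry described above—round-trip time giving range and hence surface height, and the dual-antenna angle giving the cross-track location of the echo—can be illustrated with a back-of-the-envelope sketch. The numbers and function names below are illustrative assumptions, not CryoSat's actual processing chain:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def range_from_echo(round_trip_s):
    """One-way range from the round-trip travel time of a radar pulse."""
    return C * round_trip_s / 2.0

def surface_height(sat_altitude_m, round_trip_s):
    """Surface height relative to the reference level:
    satellite altitude minus the measured range to the surface."""
    return sat_altitude_m - range_from_echo(round_trip_s)

def cross_track_offset(range_m, off_nadir_angle_rad):
    """Locate the echo relative to the ground track: with two antennas,
    the phase difference between them yields the off-nadir angle of the
    returning signal, and the cross-track offset is range * sin(angle)."""
    return range_m * math.sin(off_nadir_angle_rad)

# A pulse returning after ~4.78 ms from a ~717 km orbit puts the
# surface roughly 500 m above the reference level:
h = surface_height(717_000.0, 4.78e-3)
print(round(h))  # ≈ 496 m
```

Over flat terrain the angle is essentially zero and the offset vanishes; over the steep ice edge the angle measurement is what turns an ambiguous footprint into a located measurement.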
The satellite will still be able to carry out its measurements, no matter how steep the ice surface may be. "To my mind the ice edge is the most interesting place to do science," the Danish glaciologist states. "In the middle of the inland ice things are very stable. As climate changes, the edge is where you will be able to observe the effect first. "American airborne measurements have shown a thinning of the Greenland glaciers by one metre per year. However our measurements on location at the ice edge show melting on an even larger scale. Now we are anxious to learn what the measurements from CryoSat will show." According to on-site measurements the Sermilik glacier in Southern Greenland is thinning by between two and eight metres a year. Not all of this change is linked to climate change caused by human activities. The glaciologist compares the inland ice to dough for a loaf of bread laid out on a kitchen table: "You see a slow movement from the middle towards the edge. In the case of the inland ice it may take thousands of years from the time a snowflake falls in the centre until it reaches the edge. "You might say that the system has a certain built-in memory. Some of the melting we witness now is actually an aftermath of the last mini Ice Age, which ended in the latter half of the 19th century". Systematic monitoring of air temperature has taken place since 1875. Comparing the temperature record with the actual melting, one can determine that about half of the melting is linked to changes in climate. The other half will then have other causes – primarily the aftermath of the last mini Ice Age. The Danish ice monitoring effort has found thinning of large areas of the inland ice. That goes for practically the entire ice edge zone. One interesting twist to the story is that in some areas thinning is taking place despite a drop in mean temperatures. "This goes to show the complexity of the system," Bøggild adds.
"Normally one would use the number of days with temperatures above zero degrees as an indicator of melting. Generally these two factors would be linked. However factors other than temperature may also influence melting. One of them is the amount of incoming solar radiation. That would make it possible to see these kinds of surprising results locally". Despite his great expectations for CryoSat, Carl Egede Bøggild underlines that satellites will not replace ground measurements: "Satellites will give us a far more accurate view of the amount of melting but they will not tell us why the melting is taking place. In order to improve your understanding of the causes you have to do research on site. Also we will have to keep doing measurements on site in order to verify the findings of the satellites. We are talking about two different kinds of tools supplementing each other very well."
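The temperature-based reasoning Bøggild describes—using days above zero degrees as a melt indicator—is commonly formalised in glaciology as a positive-degree-day model. A minimal sketch follows; the degree-day factor value is an illustrative assumption, not a GEUS figure:

```python
def positive_degree_days(daily_mean_temps_c):
    """Sum of daily mean temperatures above 0 °C (in °C·days)."""
    return sum(t for t in daily_mean_temps_c if t > 0)

def melt_mm_water_equivalent(daily_mean_temps_c, ddf=8.0):
    """Seasonal melt estimate from a positive-degree-day model.

    ddf is the degree-day factor in mm water equivalent per °C per day;
    the value 8.0 is an illustrative assumption (literature values for
    snow and ice typically fall roughly in the range 3-8).
    """
    return ddf * positive_degree_days(daily_mean_temps_c)

# Five days at +4 °C contribute 20 °C·days; frost days contribute nothing
temps = [4.0, 4.0, 4.0, 4.0, 4.0, -2.0, -5.0]
print(melt_mm_water_equivalent(temps))  # 8.0 mm/(°C·day) * 20 °C·days = 160.0 mm
```

A model like this also makes the article's "surprising results" concrete: where solar radiation drives extra melt, observed thinning will exceed what the degree-day sum alone predicts.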
Pirates in popular culture In English-speaking popular culture, the modern pirate stereotype owes its attributes mostly to the imagined tradition of the 18th century Caribbean pirate sailing off the Spanish Main and to such celebrated 20th century depictions as Captain Hook and his crew in the theatrical and film versions of Peter Pan, Robert Newton's portrayal of Long John Silver in the 1950 film of Treasure Island, and various adaptations of the Eastern pirate, Sinbad the Sailor. In these and countless other books, movies, and legends, pirates are portrayed as "swashbucklers" and "plunderers." They are shown on ships, often wearing eyepatches or peg legs, having a parrot perched on their shoulder, and saying phrases like "Arr, matey" and "Avast, me hearty." Pirates have retained their image through pirate-themed tourist attractions, traditional film and toy portrayals of pirates, and the continued performance and reading of books and plays featuring pirates. The archetypal characteristics of pirates in popular culture largely derive from the Golden Age of Piracy in the late 17th and early 18th centuries, with many examples of pirate fiction being set within this era. Vikings, who were also pirates, took on a distinct and separate archetype in popular culture, dating from the Viking revival. The first major literary work to popularise the subject of pirates was A General History of the Robberies and Murders of the most notorious Pyrates (1724) by Captain Charles Johnson. It is the prime source for the biographies of many well-known pirates of the Golden Age, providing an extensive account of the period. In giving an almost mythical status to the more colourful characters, such as the notorious English pirates Blackbeard and Calico Jack, the book provided the standard account of the lives of many pirates in the Golden Age, and influenced the pirate literature of the Scottish novelists Robert Louis Stevenson and J. M. Barrie.
While Johnson's text recounted the lives of many famous pirates from the era, it is likely that he used considerable licence in his accounts of pirate conversations. Stevenson's Treasure Island (1883) is considered the most influential work of pirate fiction, along with its many film and television adaptations, and introduced or popularised many of the characteristics and cliches now common to the genre. Stevenson identified Johnson's General History of the Pyrates as one of his major influences, and even borrowed one character's name (Israel Hands) from a list of Blackbeard's crew which appeared in Johnson's book. Appearance and mannerisms of Caribbean pirates In films, books, cartoons, and toys, pirates often have an unrefined appearance that evokes their criminal lifestyle, rogue personalities and adventurous, seafaring pursuits. They are frequently depicted as greedy, mean-spirited, and focused exclusively on fighting enemy pirates and locating hidden treasure. They are often shown wearing shabby 17th or 18th century clothing, with a bandana or a feathered tricorne. They sometimes have an eye patch and almost always have a cutlass and a flintlock pistol, or some other sword or gun. They sometimes have scars and battle wounds, rotten or missing teeth (suggesting the effects of scurvy), as well as a hook or wooden stump where a hand or leg has been amputated. Some depictions of pirates also include monkeys or parrots as pets, the former usually assisting them in thieving goods due to their supposed mischievous disposition. Stereotypical pirate accents are modeled on those of Cornwall, South Devon or the Bristol Channel area in South West England, though they can also be based on Elizabethan era English or other parts of the world. 
Pirates in film, television and theatre are generally depicted as speaking English in a particular accent and speech pattern that sounds like a stylized West Country accent, exemplified by Robert Newton's performance as Long John Silver in the 1950 film Treasure Island. A native of the West Country in south west England, from where many famous English pirates hailed, Newton also used the same strong West Country accent in Blackbeard the Pirate (1952). Historical pirates were often sailors or soldiers who had fallen into misfortune, forced to serve at sea or to plunder goods and ships in order to survive. Depending on the moral and social context of a piece of pirate literature, the pirate characters in that piece may be represented as having fallen, perhaps resembling a "respectable" person in some way. Pirates generally quest for buried treasure, which is often stored, after being plundered, in treasure chests. Pirates' treasure is usually gold or silver, often in the form of doubloons or pieces of eight. In the 1990s, International Talk Like a Pirate Day was invented as a parody holiday celebrated on September 19. This holiday allows people to "let out their inner pirate" and to dress and speak as pirates are stereotypically portrayed to have dressed and spoken. International Talk Like a Pirate Day has been gaining popularity through the Internet since its founders set up a website, which instructs visitors in "pirate speak." Venganza.org is also a major supporter of this day. In the online community, many games, movies, and other media are built upon the premise, thought to have been generated by Real Ultimate Power, that pirates (in the Caribbean buccaneer sense) and ninjas are sworn enemies. The "Pirates versus Ninjas" meme is expressed offline too, through house parties and merchandise found at popular-culture clothing and gift stores. Pirates also play a central role in the satirical religion of Pastafarianism.
Pastafarians (members of the Church of the Flying Spaghetti Monster, established in 2005) claim to believe that global warming is a result of the severe decrease in pirates since the 1700s, explaining the coldness of the winter months that follow Halloween as a direct effect of the number of pirates who make their presence known in Halloween celebrations. Alternative pirate archetypes In addition to the traditional archetype of seafaring pirates, other pirate archetypes exist in popular culture. - Air pirates are science fiction and fantasy character archetypes who operate in the air, rather than sailing the sea. As traditional seafaring pirates target sailing ships, air pirates capture and plunder aircraft and other targets for cargo and money, and occasionally steal entire aircraft. - Space pirates are science fiction character archetypes who operate in outer space, rather than sailing the sea. As traditional seafaring pirates target sailing ships, space pirates capture and plunder spaceships for cargo and money, and occasionally steal entire spacecraft. The dress and speech of these alternative archetypes may vary: they may correspond to a particular author's vision of a story's setting rather than to the traditions of their seafaring counterparts, or they may be modeled after stereotypical sea pirates. Pirates in the arts Comics and manga - Terry and the Pirates (1934–1973) by Milton Caniff is an adventure comic strip frequently set among 20th-century pirates of China and Southeast Asia, led by the notorious Dragon Lady. - Redbeard (1959 onwards), a Belgian comic. - Batman: Leatherwing (1994), an Elseworlds comic by Chuck Dixon featuring Batman as a pirate. - One Piece (1997 onwards), set in a fictional world where piracy is at its height, the World Government and its Navy attempt to put a stop to it, and one young man desires to become the next Pirate King. The most popular manga to date in Japan.
- Black Lagoon (2002 onwards) is a Japanese manga portraying a group of modern-day pirates in Southeast Asian seas, largely making money through smuggling, extortion, or work as mercenaries. - In the Asterix comics, a group of hapless pirates, themselves parodies of the characters of Redbeard, often run into Asterix and are subsequently beaten up and usually sunk. - The Red Seas (2002 onwards), a mix of pirates and strange phenomena by Ian Edginton and Steve Yeowell. - In Outlaw Star, the primary antagonists of the series are members of the Pirate's Guild, a large network of space pirate clans throughout the universe. - Watchmen features a "comic book within a comic book" called Tales of the Black Freighter. Watchmen is set in an alternate history where superheroes are real and in public disgrace, so comics dealing with pirates, rather than comics dealing with superheroes, are the more popular genre. Film - The Black Pirate, a 1926 film starring Douglas Fairbanks. - Captain Blood, a 1935 film starring Errol Flynn. - The Sea Hawk, a 1940 film starring Errol Flynn. - The Daughter of the Green Pirate, a 1940 film starring Fosco Giachetti. - The Black Swan, a 1942 film starring Tyrone Power, Maureen O'Hara, and Anthony Quinn. - Treasure Island, a 1950 adaptation of Stevenson's book, starring Robert Newton. - Anne of the Indies, a 1951 adventure film loosely based on the life of Anne Bonny (Jean Peters), with Louis Jourdan and Thomas Gomez as Blackbeard. - The Crimson Pirate, a 1952 adventure film, starring and produced by Burt Lancaster. - Long John Silver, a 1954 sequel to Treasure Island, starring Robert Newton. - The animated films of Japanese director Leiji Matsumoto include several pirate characters, including Captain Harlock and Queen Emeraldas, the best known of these pieces being Galaxy Express 999 (1977) and Space Battleship Yamato (1974). - Pirates of the 20th Century, a 1979 Soviet adventure film about modern piracy. - The Island (1980), a film based on Peter Benchley's novel.
- The Pirate Movie (1982), an Australian film loosely based on The Pirates of Penzance, stars Christopher Atkins and Kristy McNichol. - Nate and Hayes, a 1983 film based on the adventures of the notorious Bully Hayes, a pirate in the South Pacific in the late 19th century. Also known as Savage Islands. - Yellowbeard, a 1983 film starring Graham Chapman as Yellowbeard the pirate. - The Goonies, a 1985 film. - Pirates, a 1986 Roman Polanski comic/adventure film starring Walter Matthau. - The Princess Bride, a 1987 film adaptation of the William Goldman novel, with "The Dread Pirate Roberts" as one of its central characters. - Cutthroat Island, a 1995 Renny Harlin film that was a notable flop, starring Geena Davis. - Pirates of the Caribbean: The Curse of the Black Pearl (2003), Pirates of the Caribbean: Dead Man's Chest (2006), Pirates of the Caribbean: At World's End (2007) and Pirates of the Caribbean: On Stranger Tides (2011), movies based on the popular Disneyland attraction, "Pirates of the Caribbean". - Six Days Seven Nights, which features piracy in the South China Sea. - Pirates of Treasure Island, a 2006 film adaptation of the novel Treasure Island produced by The Asylum. - The Pirates! In an Adventure with Scientists!, a 2012 Aardman Animations film loosely adapted from a comedy book by Gideon Defoe. Literature - Robinson Crusoe (1719) and The Life, Adventures and Piracies of the Famous Captain Singleton (1720) by Daniel Defoe were among the first novels to depict piracy, among other maritime adventures. - A General History of the Robberies and Murders of the most notorious Pyrates (1724) by Captain Charles Johnson (possibly a pseudonym for Defoe) introduced many features which later became common in pirate literature, such as pirates with missing legs or eyes, the myth of pirates burying treasure, and the name of the pirates' flag, the Jolly Roger. - The Corsair (1814), a poem by Byron, concerns a pirate captain. It directly inspired Berlioz's overture Le Corsaire (1844).
- The Pirate (1821), a novel by Sir Walter Scott. - "The Gold-Bug" (1843), a short story by Edgar Allan Poe featuring a search for buried treasure hidden by Captain William Kidd and found by following an elaborate code on a scrap of parchment. - Treasure Island (1883), a novel by Robert Louis Stevenson. - The Black Corsair (1898), first in a series of pirate novels by Emilio Salgari. - Sandokan (1883–1913), a series of pirate novels by Emilio Salgari, set in Malaysia in the late 1800s. - Captain Blood (1922), a novel by Rafael Sabatini (followed by two sequels: Captain Blood Returns [aka The Chronicles of Captain Blood] and The Fortunes of Captain Blood, each being a collection of Captain Blood adventures). - The Dealings of Captain Sharkey (1925), a novel by Sir Arthur Conan Doyle, famous for his stories of Sherlock Holmes. - Queen of the Black Coast (1934), a novelette by Robert E. Howard, features Bêlit, a pirate queen who has a romantic relationship with Conan. She is Conan's first serious lover. - Atlas Shrugged (1957) by Ayn Rand contains a fictional pirate, Ragnar Danneskjöld, whose activities are motivated by a capitalist ideology. - The Princess Bride (1973), a novel by William Goldman, has "The Dread Pirate Roberts" as one of its central characters. - The Island (1979) by Peter Benchley and the 1980 movie adaptation, for which he wrote the screenplay, feature a latter-day band of pirates who prey on civilian shipping in the Caribbean. - On Stranger Tides (1987), a historical fantasy novel by Tim Powers, later adapted into the fourth Pirates of the Caribbean film. - Bloody Jack (2002), a historical novel by L.A. Meyer. - The Pirates! in an Adventure with Scientists (2004) by Gideon Defoe, a surreal adventure with stereotypical pirates and Charles Darwin. Defoe has written subsequent books involving the same pirate crew and their anachronistic, absurd adventures. - The Piratica Series (2004, 2006, and 2007), a series of pirate novels by Tanith Lee.
- Sea Witch (2006), a novel for adults by Helen Hollick, published by DA Diamonds. - The Adventures of Hector Lynch (2007–2009), a pirate series by Tim Severin. - The Government Manual for New Pirates (2007), a spoof of survival guides by Matthew David Brozik and Jacob Sager Weinstein. - Isle of Swords (2007), a novel by Wayne Thomas Batson. - Pirate Latitudes (2009), a novel by Michael Crichton. - The Pyrates Way Magazine (2006–present), a quarterly online magazine by Kimball Publications, LLC. Music - Musicians have long been drawn towards pirate culture, due to its disestablishmentism and motley dress. An early 1960s British pop group called itself Johnny Kidd and the Pirates, and wore eye patches while they performed. Keith Moon, drummer of The Who, was a fan of Robert Newton. Flogging Molly, The Briggs, Dropkick Murphys, The LeperKhanz, The Coral, The Mighty Mighty Bosstones, Tokyo Ska Paradise Orchestra, Bullets And Octane, Mad Caddies, The Vandals, Gnarkill, Armored Saint, Jimmy Buffett, and Stephen Malkmus have pirate-themed songs as well. - Alestorm is a pirate-themed power/folk metal band based in Perth, Scotland. Their fans are also encouraged to dress up like pirates and bring props to concerts. - Swashbuckle is an American thrash metal band who dress up as and sing about pirates. - Emerson, Lake & Palmer recorded the song "Pirates", a 13-minute performance piece from their 1977 tour featuring the Orchestre de l'Opéra de Paris. The piece can be found on the album Works Volume 1. - Running Wild, a German metal band, adopted a "pirate metal" image in 1987, with its third album. - The Sex Pistols adapted the saucy song "Good Ship Venus" as their hit "Friggin' in the Rigging". Fellow Malcolm McLaren protégé Adam Ant took the pirate image further; one of the tracks on the album Kings of the Wild Frontier was called "Jolly Roger". - Gorillaz recorded a song called "Pirate Jet", which appears as the 16th track on their third studio album Plastic Beach.
- In 1986, the Beastie Boys paid homage to the pirate lifestyle on their Licensed to Ill album with the song "Rhymin' and Stealin'". The song is filled with piratical and nautical phrasing liberally mixed with 1980s hip-hop references. - Mutiny is an Australian pirate-themed folk-punk band with releases on Fistolo Records. - Goth musician/comedian Voltaire illustrates the sometimes humorous rivalry between the vampiric and pirate camps of goths in the song "Vampire Club" from the album Boo Hoo (2002). - The Jolly Rogers is a pirate-themed Renaissance Faire musical troupe based in Kansas City. - American comedy band The Aquabats recorded a song entitled "Captain Hampton and the Midget Pirates" on their 1997 album The Fury of The Aquabats!, which told the story of Jim, a young boy who joins a pirate-hunting crew headed by Captain Hampton. Pirates are also mentioned in the band's 2000 song "The Wild Sea" on Myths, Legends and Other Amazing Adventures, Vol. 2. - The Pirate, a musical starring Judy Garland and Gene Kelly, has a number of songs about piracy in general, and the dread pirate "Mack the Black" Macoco in particular. - The Dreadnoughts are a pirate-themed band from Vancouver, Canada, whose line-up includes an accordion as well as a fiddle. - Relient K released a single covering the song "The Pirates Who Don't Do Anything" from the children's show VeggieTales. It was originally recorded by the cast of VeggieTales, and Relient K's version of the song was later included in the 2003 compilation album Veggie Rocks! - In the Eurovision Song Contest 2008, the Latvian band Pirates of the Sea entered with the song "Wolves of the Sea". - Nox Arcana recorded a pirate-themed album, Phantoms of the High Seas, in 2008, which contains a series of hidden puzzles and clues leading to a treasure map. - Cosmo Jarvis released the song "Gay Pirates" on 23 January 2011. - The Original Rabbit Foot Spasm Band released the song "Pirates!" on their album Year of the Rabbit on 3 February 2011.
In 1879, the comic opera The Pirates of Penzance was an instant hit in New York, and the original London production in 1880 ran for 363 performances. The piece, depicting an incompetent band of "tenderhearted" British pirates, is still performed widely today, and corresponds only loosely to historical knowledge about the emergence of piracy in the Caribbean. In 1904, J.M. Barrie's play Peter Pan, or The Boy Who Wouldn't Grow Up was first performed. In the story, Peter's enemy in Neverland is the pirate crew led by Captain Hook. Details on Barrie's conception of Captain Hook are lacking, but it seems he was inspired by at least one historical privateer, and possibly by Robert Louis Stevenson's Long John Silver as well. In film adaptations released in 1924, 1953, and 2003, Hook's dress, as well as the attire of his crew, corresponds to stereotypical notions of pirate appearance. - Il pirata (The Pirate) is an 1827 opera by Vincenzo Bellini. - The Pirates of Penzance, a comic operetta by Gilbert and Sullivan, features a Pirate King and a crew of orphan pirates. - Captain Sabertooth is a play by Terje Formoe, first performed at a zoo and amusement park in Norway. - The Buccaneers of America by John Esquemeling recounts the supposedly true stories of some Caribbean pirates. - The Lady Pirates of Captain Bree (also called Captain Bree and Her Lady Pirates), a musical spoof by Martin A. Follose and Bill Francoeur. Television - Captain Pugwash, a series of British children's animated television programmes, comic strips and books, first shown on the BBC in 1957. - The Doctor and his friends encountered space pirates in numerous episodes of the BBC's Doctor Who (such as The Space Pirates), though they also met historical pirates in The Smugglers (1966) and The Curse of the Black Spot (2011). Both of the latter stories involved the bounty of Captain Henry Avery (played in 2011 by Hugh Bonneville), whom the Doctor eventually befriended. - In a 1969 episode of Hanna-Barbera's Scooby-Doo, Where Are You!, Mystery Inc.
faced the ghost of Redbeard (voiced by John Stephenson). - The singing and dancing pirates Nasty Max, Mighty Matt, Massmedia and Sleazeappeal appear in the animated series Spartakus and the Sun Beneath the Sea. - Disney's TaleSpin (1990) featured the air pirate Don Karnage, who always tried to steal goods and sometimes treasures from Baloo. - The Pirates of Dark Water is a Hanna-Barbera animated series of the 1990s. - Mad Jack the Pirate, produced by Bill Kopp, aired on Fox Kids in the 1990s. - Pirates was a 1994 children's sitcom about a family of pirates living in a council house. - The theme song of the animated series SpongeBob SquarePants is sung by Painty the Pirate, voiced by Pat Pinney. Certain episodes are also introduced by Patchy the Pirate, portrayed by Tom Kenny, the voice of SpongeBob SquarePants. Some episodes also feature the Flying Dutchman, a pirate ghost. - One Piece (1999 onwards), the animated adaptation of the Japanese comic of the same name (see above). - Black Lagoon is a 2006 anime about pirates in the South China Sea. It is a somewhat realistic look at the underlying themes of modern-day piracy. - The seventh season of Survivor, Pearl Islands, and the series Pirate Master had piracy themes. - The Comedy Central animated series South Park aired a pirate-themed episode titled "Fatbeard" in 2009 as part of the show's 13th season, referring to piracy in the Indian Ocean. In the episode, Cartman, believing that the classic era of piracy has returned to Somalia, heads to Mogadishu, only to be struck by the reality. - In the show Deadliest Warrior, there was an episode titled "Pirate vs. Knight". - The Disney Junior animated series Jake and the Never Land Pirates debuted in 2011.
- Kaizoku Sentai Gokaiger (2011), the 35th anniversary season of the Super Sentai series, has a pirate theme; its American counterpart, Power Rangers Super Megaforce, part of the 20th anniversary of Power Rangers, uses costumes, props, and footage from Gokaiger. - Marika Kato is the protagonist and space pirate captain of the Bentenmaru in the anime Bodacious Space Pirates (2012). - Black Sails is a television drama series created by Jonathan E. Steinberg and Robert Levine for Starz Inc., which premiered in January 2014. - Crossbones is an American television series on the NBC network which premiered May 30, 2014. Video games - Assassin's Creed IV: Black Flag features a heavy pirate setting. - Claw is a platform game by Monolith Productions that is a cartoon parody of pirate films. - Donkey Kong Country 2: Diddy's Kong Quest features pirate-themed enemies and locations, including the recurring villain King K. Rool, here named Kaptain K. Rool and dressed as a pirate captain. - Doodle Pirate is an Android game developed by Impudia Games, featuring a comedic side of treasure hunting. - Final Fantasy XII has several characters, including Balthier, who are sky pirates; Faris in Final Fantasy V and Leila in Final Fantasy II are also pirates. - Pirates feature as a character class in several Fire Emblem games. - The Legend of Zelda: The Wind Waker features pirates such as Tetra and her crew. - Lego Racers' first boss is Captain Redbeard. When he is beaten, the player can build cars using pirate-themed Lego pieces. - Loot, a card game made by Gamewright. - Maple Story has added a Pirate job class. - Medal of Honor: Warfighter, a first-person shooter made by Danger Close Games. - Mega Man Battle Network 6 features a WWW member named Captain Blackbeard, the operator of DiveMan.EXE, who dresses as a sailor. - Metroid is a video game series in which the main antagonists are space pirates.
- The pirate-themed Monkey Island series of video games is inspired by Tim Powers' book On Stranger Tides and Disneyland's Pirates of the Caribbean ride. It is set in the 18th century Caribbean and stars the hero pirate Guybrush Threepwood and the evil pirate LeChuck. - Pirates of the Burning Sea is a swashbuckling MMORPG set in the early 18th century Caribbean. - Pirates: The Legend of Black Kat by Westwood Studios is a mix of third-person adventure and sea battles. - Pirates, Vikings and Knights II is a multiplayer video game in which players can play as a team of highly stereotypical pirates. - Ratchet & Clank Future: Tools of Destruction and Ratchet & Clank Future: Quest for Booty contain pirates as enemies throughout the levels. - Rogue Galaxy is a role-playing video game in which the main character, Jaster Rogue, joins a crew of space pirates to help defeat an oppressive empire. - Sid Meier's Pirates! is a well-known video game featuring pirates. - Skies of Arcadia is a video game for the Sega Dreamcast (later remade as Skies of Arcadia Legends for the Nintendo GameCube) about a group of air pirates who struggle against an oppressive power threatening to take over and destroy the world. - Sonic Rush Adventure takes place in a pirate-themed world and includes a robot pirate named Captain Whisker. - In the Soul series, Cervantes, a long-standing character in the franchise, is a pirate. In Soul Calibur III specifically, there is a 'Pirate' class option for custom characters. - Star Wars: Empire at War contains a non-playable faction called the Black Sun Pirates, a large gang of mercenaries. - In Suikoden IV there are a great number of pirates to encounter and recruit. - Uncharted Waters is a series of role-playing video games by Koei set in the Age of Exploration, in which the player takes the role of a naval fleet captain. All the games feature pirates as regular threats, and it is possible to play as pirate characters in some of the iterations.
- World of Warcraft features pirates as NPCs and quest givers. In addition, Pirates' Day is celebrated in-game on September 19 each year in honour of International Talk Like a Pirate Day.
- Yohoho! Puzzle Pirates is a massively multiplayer online game in which the player takes the role of a pirate, having adventures on the high seas and pillaging money from roaming enemy ships.
- Zack & Wiki: Quest for Barbaros' Treasure is a puzzle adventure video game for the Nintendo Wii.

Pirates in sports

Because pirate ships connote fearsomeness, loyalty and teamwork, many professional and amateur sports teams are named "Pirates".

- American football
- Association football
- Ice hockey
- Rugby League
- Barry University Buccaneers – Sunshine State Conference
- East Carolina Pirates – American Athletic Conference
- East Tennessee State Buccaneers – Southern Conference
- Mass Maritime Buccaneers – Massachusetts State College Athletic Conference
- Middle Tennessee Blue Raiders – Sun Belt Conference
- Mount Union Purple Raiders – Ohio Athletic Conference
- Seton Hall Pirates – Big East Conference
- Southwestern Pirates – NCAA Division III Southern Collegiate Athletic Conference
- UMass Dartmouth Corsairs – Little East Conference
- New Orleans Privateers – Sun Belt Conference
- "Barrett's Privateers" is a song popular in Nova Scotia detailing the fictional story of Elcid Barrett and his privateers and their voyage on the Antelope to raid American shipping vessels.
- Pro wrestler Paul Burchill of WWE Friday Night SmackDown dressed like a pirate and claimed that Blackbeard was his great-great-great-great-great-grandfather. Previously, Carl Ouellet wrestled as Jean-Pierre Lafitte (supposedly a descendant of the pirate Jean Lafitte).
- Maddox (writer) often portrays himself as a pirate on his website The Best Page in the Universe.
- List of space pirates
- Lego Pirates
The pollen tube of most seed plants acts as a passageway. It transports sperm cells from the pollen grain, from the stigma (in flowering plants, or angiosperms) to the ovules at the base of the pistil. In some gymnosperms (conifers and gnetophytes), sperm move directly through ovule tissue. In other gymnosperms (Ginkgo and cycads), the pollen tube is involved only in nutrient uptake from ovule tissue by the pollen grain, and does not convey sperm cells to the egg. Like ferns, other basal land plants, and many algae, these gymnosperms have flagellate sperm, which swim through a watery fluid to fertilize the egg cells.

In angiosperms the pollen tube germinates from the pollen grain and grows the entire length through the stigma, style, ovary and ovules to reach the eggs. In maize, this single cell can grow longer than 12 inches to traverse the length of the pistil. The sperm cells themselves are not motile and are carried within the tube. When the tip of the tube reaches an egg, it bursts and releases two sperm cells, leading to double fertilization. One sperm unites with the egg cell to produce the embryo of the new plant, while the second sperm unites with the central cell (polar nuclei) to produce the endosperm of the seed. The endosperm is rich in starch, proteins and oils, and is a major source of human food (e.g., wheat, barley, rye, oats, corn).
Tree spacing is affected by factors such as the species of citrus concerned, the cultivar, the type of rootstock, the environment, the size of the orchard, and the management practices the grower will be using. For example, a grower who will be using machinery must leave enough space between the rows for the machines to pass when the trees are mature (Fig. 1). Small or dwarf citrus species should be planted at a higher density. Tall citrus trees, and those with rapid growth, should be more widely spaced.

Planned Close Spacing

For maximum benefit per unit area, spacing can be as close as possible when the trees are planted. Later, as the trees grow in size, they can be thinned. Most orchards on level ground can be planted in a square or rectangular pattern. For close spacing or planned close spacing plantings, a quincunx system can be used. (In a quincunx arrangement, one tree is planted at each corner of a square, with a fifth tree in the middle.) In orchards on sloping land, trees should be planted along the contours.

Time of Planting

The optimum time for planting citrus trees varies according to the species. It also depends on the condition of the roots and shoots of the seedling. Transplanting the seedlings out in the field is best done at the beginning of the wet season. Growers should avoid transplanting seedlings during hot summer weather.

Handling the Seedlings

Seedlings with bare roots should be planted out as soon as possible after they have been removed from the soil or growing medium. Seedlings grown in containers have generally become root-bound by the time they are ready for transplanting. Roots which are severely bent and twisted should be pruned before the tree is transplanted. Overcrowded leaves and shoots should also be pruned. In orchards on sloping land, or in orchards where the trees are widely spaced, seedlings are usually planted by digging a hole for each seedling (Fig. 2).
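The arithmetic behind square and quincunx layouts can be sketched as follows. This is an illustrative helper, not part of the original bulletin; the function name and the choice of a per-hectare figure are assumptions made for the example.

```python
def trees_per_hectare(spacing_m: float, quincunx: bool = False) -> float:
    """Planting density for a grid with the given tree spacing.

    One hectare is 10,000 m^2, so a square layout gives one tree per
    spacing_m x spacing_m cell. A quincunx layout adds a fifth tree at
    the centre of every square, which doubles the planting density
    until the orchard is thinned.
    """
    density = 10_000 / spacing_m ** 2
    return density * 2 if quincunx else density


# Example: trees planted 5 m apart
print(trees_per_hectare(5))                 # square grid -> 400.0 trees/ha
print(trees_per_hectare(5, quincunx=True))  # quincunx    -> 800.0 trees/ha
```

This makes the trade-off concrete: a quincunx planting doubles the early yield per unit area for the same spacing, at the cost of requiring thinning once the canopies begin to crowd.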
If the orchard is to have closely planted trees, it is preferable to dig a furrow along which the row of seedlings will be planted. Covering the soil around the planting hole with mulch will help conserve soil moisture and keep down weeds (Fig. 3, Fig. 4).

An orchard planted at a high density can make a good profit in the early years after planting. However, once the trees are mature, they become too crowded and ventilation is poor. This increases the relative humidity and the incidence of pests and diseases. At this stage, closely planted trees should be thinned or trimmed to open up space between them.

Index of Images

- Figure 1: Newly planted citrus orchard
- Figure 2: Planting holes for new orchard
- Figure 3: Seedling with straw mulch to conserve soil moisture and keep down weeds
- Figure 4: Diagram of planting hole for young seedling