Fort Pulaski Field Trip Guide - Grade Level: - Middle School: Sixth Grade through Eighth Grade - Social Studies Fort Pulaski guarded the mouth of the Savannah River to prevent enemy ships from attacking the city. The fort was considered to be invincible when it was finished in 1847, and it was built with many advanced features. But by the time of the Civil War in 1861, new cannons had been developed. The fort's strong brick walls were no match for the new cannons. The Confederates defending the fort surrendered to the attacking Union forces in less than two days. This is a teacher-guided activity to be done on-site at the fort. Students will learn about the fort's construction and history. Plentiful photographs and a fort map will assist teachers in leading this tour through the fort. Each stop of the tour includes a question for teachers to ask the students. Correct answers and subsequent discussion will allow teachers to assess students' learning. At the end of this activity, students will be able to: 1) Explain the significance of Fort Pulaski in the Civil War. 2) Explain the importance of Fort Pulaski to Savannah in the Civil War. 3) Explain the role of new technology in the quick surrender of Fort Pulaski. Casemate--a strongly built room designed to hold a cannon. Magazine--a room designed to hold gunpowder and ammunition. Parade ground--a large, flat grassy area where soldiers practiced marching and loading their weapons.
An international team of astronomers, including researchers from the Max Planck Institute for Radio Astronomy and the University of Cologne in Germany, discovered two titanium oxides, TiO and TiO2, at radio wavelengths using telescope arrays in the United States and France. The scientists made the discovery in the course of a study of a spectacular star, VY Canis Majoris (VY CMa), which is a variable star located in the constellation Canis Major (the Greater Dog). "VY CMa is not an ordinary star, it is one of the largest stars known, and it is close to the end of its life," said Tomasz Kaminski from the Max Planck Institute for Radio Astronomy (MPIfR). In fact, with a size of about one to two thousand times that of the Sun, it could extend out to the orbit of Saturn if it were placed in the center of our solar system. The star ejects large quantities of material, which forms a dusty nebula. This reflection nebula is visible because the small dust particles that form around the star reflect light from the central star. The complexity of this nebula has been puzzling astronomers for decades. It formed as a result of a stellar wind, but scientists don't understand why it is so far from having a spherical shape. They don't know what physical process blows the wind, i.e., what lifts the material up from the stellar surface and makes it expand. "The fate of VY CMa is to explode as a supernova, but it is not known exactly when it will happen," said Karl Menten from MPIfR. Observations at different wavelengths provide different pieces of information that are characteristic of atomic and molecular gas and from which scientists can derive physical properties of an astronomical object. Each molecule has a characteristic set of lines, something like a "bar code," that allows astronomers to identify what molecules exist in the nebula. "Emission at short radio wavelengths, in so-called submillimeter waves, is particularly useful for such studies of molecules," said Sandra Brünken from the University of Cologne. "The identification of molecules is easier, and usually a larger abundance of molecules can be observed than at other parts of the electromagnetic spectrum." The research team observed TiO and TiO2 for the first time at radio wavelengths. In fact, this is the first time titanium dioxide has been unambiguously detected in space. It is known from everyday life as the main component of the most commercially important white pigment (known by painters as "titanium white") and as an ingredient in sunscreens. It is also used to color food (coded as E171 on labels). However, stars, especially the coolest of them, are expected to eject large quantities of titanium oxides, which, according to theory, form at relatively high temperatures close to the star. "They tend to cluster together to form dust particles visible in the optical or in the infrared," said Nimesh Patel from the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. "And the catalytic properties of TiO2 may influence the chemical processes taking place on these dust particles, which are very important for forming larger molecules in space," said Holger Müller from the University of Cologne. Astronomers have known the absorption features of TiO from spectra in the visible region for more than a hundred years. In fact, these features are used, in part, to classify some types of stars with low surface temperatures (M- and S-type stars).
The pulsation of Mira stars, one specific class of variable stars, is thought to be caused by titanium oxide. Mira stars, supergiant variable stars in a late stage of their evolution, are named after their prototype star Mira (the wonderful) in the constellation Cetus (the Sea Monster). The observations of TiO and TiO2 show that the two molecules are easily formed around VY CMa at a location that is more or less as predicted by theory. It seems, however, that some portion of those molecules avoids forming dust and is observable as gas-phase species. Another possibility is that the dust is destroyed in the nebula, releasing fresh TiO molecules back into the gas. The latter scenario is quite likely, as parts of the wind in VY CMa seem to collide with each other. The new detections at submillimeter wavelengths are particularly important because they allow the process of dust formation to be studied. Also, at optical wavelengths the radiation emitted by the molecules is scattered by dust present in the extended nebula, which blurs the picture; this effect is negligible at radio wavelengths, allowing for more precise measurements. The discoveries of TiO and TiO2 in the spectrum of VY CMa were made with the Submillimeter Array (SMA), a radio interferometer located in Hawaii in the United States. Because the instrument combines eight antennas that work together as a single telescope 226 meters in size, astronomers were able to make observations at unprecedented sensitivity and angular resolution. The new detections were later confirmed with the IRAM Plateau de Bure Interferometer (PdBI), located in the French Alps. The new Atacama Large Millimeter/submillimeter Array (ALMA) in Chile has just been officially opened. "ALMA will allow studies of titanium oxides and other molecules in VY CMa at even better resolution, which makes our discoveries very promising for the future," said Kaminski.
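For a sense of what "angular resolution" means for an interferometer, a rough diffraction-limit estimate is theta ~ lambda / D, where D is the longest baseline. The sketch below is only an illustration: the 226 m baseline comes from the article, while the 0.9 mm observing wavelength is an assumed, typical submillimeter value, not a figure reported by the study.

```python
import math

# Rough diffraction-limited resolution of an interferometer: theta ~ lambda / D.
# D = 226 m is the baseline quoted in the article; the 0.9 mm wavelength is an
# assumed, typical submillimeter value (roughly the 345 GHz band).
wavelength_m = 0.9e-3
max_baseline_m = 226.0

theta_rad = wavelength_m / max_baseline_m
theta_arcsec = math.degrees(theta_rad) * 3600

print(f"Approximate angular resolution: {theta_arcsec:.2f} arcseconds")  # ~0.8"
```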
Normal body temperature is about 98.6 degrees Fahrenheit, but it may vary slightly depending on your own normal temperature, the time of day and how the temperature is measured. It is normal for body temperature to increase during exercise, and it may still be above normal immediately after finishing physical activity. Drink plenty of water and don't let yourself get overheated, to prevent dehydration and heatstroke, which can raise your body temperature to dangerous levels. A persistent fever can be a sign of an illness that requires medical attention.

Body Temperature and Exercise

Dr. Gabe Mirkin writes on his website that many weightlifters have temperatures up to 101 degrees during sessions. Marathon runners can have temperatures up to 103.8 degrees on cool, 50-degree days. Temperature naturally rises as you exercise and continues to increase the harder you exercise, as more than 78 percent of muscle energy is lost as heat. Immediately following your workout, your temperature will usually be elevated, but it goes down quickly as you hydrate and cool off. You may have a fever for many reasons, including illness, being in a hot environment, exercising vigorously or becoming dehydrated and suffering from another heat-related illness. A fever is generally considered to be a temperature of 99.5 degrees or higher when taken orally and 100.4 degrees or higher when taken rectally. It is normal for your temperature to be elevated during and immediately following exercise, but seek medical attention if a fever persists even after you are rested. If you do not drink enough during exercise and get too hot, you can become dehydrated, which can lead to heatstroke. Heatstroke is a serious heat illness and a medical emergency requiring immediate attention. If you exercise vigorously in a hot environment and are not drinking enough to sweat and cool yourself down, your body temperature may rise too high and cause heatstroke. Heatstroke can cause a temperature of 104 degrees or higher, a headache, dizziness, disorientation, rapid breathing and pulse, and muscle weakness or cramps. To prevent heatstroke as a result of physical activity, don't exercise vigorously in a hot gym or outside on hot days. Drink plenty of water to stay consistently hydrated -- don't wait until you become thirsty. Finally, stop exercising if you experience any of the symptoms of an extremely high body temperature and heatstroke. This article reflects the views of the writer and does not necessarily reflect the views of Jillian Michaels or JillianMichaels.com.
Measuring volcanic gases: emission rates of sulfur dioxide and carbon dioxide in volcanic plumes

Like this plume rising from the crater of Mount St. Helens, a typical plume of gas rises to some height above a volcano where it reaches equilibrium with the atmosphere and is bent over and blown away. By measuring both the amount of a specific gas in the plume and the wind speed, scientists can calculate the emission rate or discharge of the gas. Several methods are used to measure the amount of specific gases in a volcanic plume. The amount of sulfur dioxide gas (SO2) in a plume is measured with an optical correlation spectrometer (COSPEC) by moving the instrument beneath the plume in an aircraft or along the ground. The amount of carbon dioxide gas (CO2) in a plume is measured with a small infrared analyzer (LI-COR) by flying the instrument through the plume several times so that it can continuously sample an entire cross section of the plume. A third technique for measuring gases in volcanic plumes involves a Fourier transform infrared spectrometer system (FTIR) that also continuously samples gas in a volcanic plume. Coupled with this instrumentation, Global Positioning System (GPS) technology is now routinely used by USGS scientists to map airborne traverses through volcanic plumes. GPS data is collected simultaneously with the chemical data so that accurate plume cross sections and flight paths can later be constructed.

Correlation spectrometer (COSPEC): measuring SO2 emission rate

The correlation spectrometer (COSPEC) has been in use for more than two decades for measuring sulfur dioxide emission rates from various volcanoes throughout the world. Originally designed for measuring industrial pollutants, the COSPEC measures the amount of ultraviolet light absorbed by sulfur dioxide molecules within a volcanic plume. The instrument is calibrated by comparing all measurements to a known SO2 standard mounted in the instrument. Although the COSPEC can be used from the ground in a vehicle or on a tripod to scan a plume, the highest quality measurements are obtained by mounting a COSPEC in an aircraft and flying traverses underneath the plume at right angles to the direction of plume travel.

COSPEC from the air: in plane or helicopter

Airborne SO2 measurements are made by flying below and at right angles to a volcanic plume with the upward-looking COSPEC. Typically, 3-6 traverses are made beneath the plume in order to determine the average SO2 concentration along a vertical cross section of the plume. Wind speed is determined during flight either by GPS or by comparing true air speed, flying with and against the wind, with true ground speed.

COSPEC from the ground: in a vehicle
COSPEC from the ground: from a stationary tripod

Examples of COSPEC Data - Mount St. Helens, 1980-1988 - Kilauea Volcano, 1979-1997: Open-File Report 98-462 - Cook Inlet volcanoes, 1990-1994: Open-File Report 95-55

LI-COR infrared analyzer: measuring CO2 emission rate

Use of a small infrared carbon dioxide analyzer (LI-COR) has recently become a standard method for measuring carbon dioxide emission rates at restless volcanoes. The LI-COR is mounted in a small aircraft configured for sampling outside air. Traverses are then systematically flown through the plume at different elevations until the entire cross section of the plume is analyzed. From these data, a carbon dioxide emission rate can be calculated. This technique was first employed by USGS scientists at Popocatepetl volcano in Mexico in 1995.
More recently, it has been used at several domestic volcanoes. LI-COR secured inside aircraft. Example of LI-COR Data: Mammoth Mountain, Long Valley caldera, California.

Fourier transform infrared spectrometer (FTIR): measuring many volcanic gases

A third technique for measuring gases in volcanic plumes involves the use of a Fourier transform infrared spectrometer system (FTIR). The FTIR is capable of analyzing several gases simultaneously using an open-path or closed-path system. The open-path method uses an optical telescope to aim the FTIR at a target gas some distance away. The infrared light source is either natural solar light or light from a heated filament behind the target gas. The closed-path method involves delivering gas from a plume or fumarole to a gas cell within the FTIR. Recently, a prototype closed-path FTIR successfully measured SO2 at Kilauea Volcano in Hawai`i. The volcano's plume was sampled directly by an FTIR mounted in an aircraft in the same way that the LI-COR analyzer is used to measure CO2.
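All of these techniques reduce to the same arithmetic: integrate the gas burden measured across a plume cross section and multiply by the wind speed. The sketch below is a simplified illustration of that idea, not USGS processing code; the traverse values, spacing and wind speed are hypothetical, and the ppm·m to kg/m2 conversion assumes SO2 behaves as an ideal gas near standard conditions.

```python
# Simplified COSPEC-style SO2 emission-rate estimate (illustrative only).
# A traverse perpendicular to the plume gives an overhead SO2 burden
# (concentration x path length, in ppm.m) at each point; integrating that
# burden across the traverse and multiplying by wind speed gives a flux.

burden_ppm_m = [0, 40, 160, 320, 410, 350, 180, 60, 0]   # hypothetical traverse samples
spacing_m = 250.0                                         # hypothetical sample spacing
wind_speed_m_s = 8.0                                      # hypothetical plume-height wind

# ~2.86e-6 kg of SO2 per m^2 for each ppm.m of burden
# (64 g/mol divided by 22.4 L/mol, i.e. ideal gas near 0 C and 1 atm).
KG_PER_M2_PER_PPM_M = 2.86e-6

# Trapezoidal integration across the traverse -> kg of SO2 per metre of plume length.
cross_section_kg_per_m = sum(
    0.5 * (a + b) * spacing_m * KG_PER_M2_PER_PPM_M
    for a, b in zip(burden_ppm_m, burden_ppm_m[1:])
)

emission_kg_per_s = cross_section_kg_per_m * wind_speed_m_s
print(f"SO2 emission rate: {emission_kg_per_s * 86400 / 1000:.0f} tonnes/day")
```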
Many people suffer from nerve damage caused by illness or accidents, but some nerve pain and numbness can be a result of not getting the right vitamins and minerals in your diet. Your food is meant to provide fuel for your activities, but it should also supply the nutrients you need to do things like repair muscles, support eyesight and digest your food. Vitamins and minerals are essential to nerve health. Some play a unique role in keeping your nervous system working the way it should. Here are the vitamins and minerals you need to support your nervous system.

Potassium and Sodium

It is impossible to talk about potassium without talking about sodium, or vice versa, when it comes to nerve function. Your nerves send signals from your brain to the rest of your body. These signals are electrical, and your nerve cells carry them through their unique shape and abilities. Every resting nerve cell has a high concentration of potassium ions inside the cell membrane and a higher concentration of sodium ions outside the cell. At resting potential the inside of the cell is negatively charged relative to the outside, and the cell waits to fire when needed. When the nerve receives a stimulus, the membrane allows sodium to enter the cell, and potassium moves to the outside. In order to pass a signal "down the line," sodium and potassium ions switch positions along the cell membrane. This movement starts a chain reaction down the nerve, allowing for a change in charge that travels from cell to cell. If you do not have enough sodium or potassium in your diet, the cells have a more difficult time with this process, which produces the action potential; the sodium-potassium pump then restores the resting balance of ions. Sodium deficiency is not a problem for most people because you can get it from salt. Potassium deficiency is more common. Even a slight dip in potassium levels can cause muscle cramps and increased blood pressure because of poor neural firing. If you are an athlete, it's important to stay on top of eating potassium-rich foods.

B vitamins are vital for nerve health, and vitamin B12 is particularly important. A deficiency in vitamin B12 can lead to neuropathy, where you experience numbness in your fingers and toes. Persistent deficiency can lead to joint and muscle pain, memory loss and even reduced coordination. Vitamin B12 is used to make new cells, including nerve cells, because it plays an essential role in DNA synthesis. Without B12, you are also unable to make as many red blood cells. A vitamin B12 deficiency could cause your nerves to begin to lose their myelin sheath, a protective coating that helps to contain nerve impulses and speed them along the axons of the nerve. Other B vitamins that support the nervous system are B1, B2, B3, B6, B7 and B9. You might know these vitamins by other names, including niacin, thiamin, folate and riboflavin. Most breakfast cereals are fortified with these vitamins, and many occur naturally in foods that you eat, especially grains, leafy greens and fruits. However, B12 cannot be obtained from plants. If you do not eat meat, dairy or eggs, you need to talk with your doctor about a good supplement to make sure you don't suffer any deficiency.

Research is still ongoing about how some vitamins affect the nerves directly. One of the emerging vitamins for nerve health is vitamin E, which has been studied more for its effects on skin and tissue health. Some studies show that early treatment of degenerative nerve disorders with vitamin E can produce some improvement.
Vitamin E deficiency can also cause a person to start exhibiting signs of neurological degeneration. Fortunately, vitamin E is found in many foods, including almonds, spinach, yams and butternut squash. When people think of calcium, they often think of strong bones and teeth. However, this mineral also plays a strong supporting role for your nervous system and other systems in the body. In the brain, calcium helps to improve cell structure and blood flow. It also helps the brain cells themselves to form pathways to communicate with each other. Calcium ions bridge the gaps between nerve cells by traveling from one to the next, helping to pass on a nerve impulse to specialized cell receptors. Calcium also has a role in cell repair should a nerve cell become damaged. Calcium is found in dairy products, but you can also get calcium from dark green vegetables like broccoli, kale, spinach and chard. It is possible to overdose on calcium if you take a supplement, so make sure you speak with your doctor before adding a calcium supplement to your diet because you might already be getting enough from your food. As you can see, many minerals and vitamins work together to support your nervous system. Without a varied diet that provides these essential nutrients, you might experience increased nerve pain and other troubling symptoms. Speak with us at Southwest Florida Neurosurgical & Rehab Associates for more information on nutrition for nerve health.
Mathematics in medieval Islam, sometimes referred to as Islamic mathematics, is a term used in the history of mathematics that refers to the mathematics developed in the Islamic world between 622 and 1600, in the part of the world where Islam was the dominant religion. Islamic science and mathematics flourished under the Islamic caliphate (also known as the Islamic Empire) established across the Middle East, Central Asia, North Africa, Sicily, the Iberian Peninsula, and in parts of France and India in the 8th century. The center of Islamic mathematics was located in Persia (including the eastern part of present-day Iraq), but at its greatest extent it stretched from North Africa and Spain in the west to India in the east.

While most scientists in this period were Muslims and wrote in Arabic, a great portion and many of the best known of the contributors were of Persian origin, but there were also Berbers, Arabs, Moors and Turks, and contributors of different religions (Muslims, Christians, Jews, Sabians, Zoroastrians, and the irreligious). Arabic was the dominant language—much like Latin in Medieval Europe, Arabic was used as the chosen written language of most scholars throughout the Islamic world.

Use of the term "Islam"

"There have been many civilizations in human history, almost all of which were local, in the sense that they were defined by a region and an ethnic group. This applied to all the ancient civilizations of the Middle East—Egypt, Babylon, Persia; to the great civilizations of Asia—India, China; and to the civilizations of Pre-Columbian America. There are two exceptions: Christendom and Islam. These are two civilizations defined by religion, in which religion is the primary defining force, not, as in India or China, a secondary aspect among others of an essentially regional and ethnically defined civilization. Here, again, another word of explanation is necessary."

"In English we use the word "Islam" with two distinct meanings, and the distinction is often blurred and lost and gives rise to considerable confusion. In the one sense, Islam is the counterpart of Christianity; that is to say, a religion in the strict sense of the word: a system of belief and worship. In the other sense, Islam is the counterpart of Christendom; that is to say, a civilization shaped and defined by a religion, but containing many elements apart from and even hostile to that religion, yet arising within that civilization."

In this article, "Islam" and the adjective "Islamic" are used in the second sense described above (that is, of a civilization).

Origins and influences

The first century of the Islamic Arab Empire saw almost no scientific or mathematical achievements, since the Arabs, with their newly conquered empire, had not yet gained any intellectual drive, and research in other parts of the world had faded. In the second half of the eighth century Islam had a cultural awakening, and research in mathematics and the sciences increased. The Muslim Abbasid caliph al-Mamun (809-833) is said to have had a dream in which Aristotle appeared to him, and as a consequence al-Mamun ordered that Arabic translations be made of as many Greek works as possible, including Ptolemy's Almagest and Euclid's Elements. Greek works would be given to the Muslims by the Byzantine Empire in exchange for treaties, as the two empires held an uneasy peace. Many of these Greek works were translated by Thabit ibn Qurra (826-901), who translated works written by Euclid, Archimedes, Apollonius, Ptolemy, and Eutocius.
Historians are in debt to many Islamic translators, for it is through their work that many ancient Greek texts have survived only through Arabic translations. Greek, Indian and Babylonian mathematics all played an important role in the development of early Islamic mathematics. The works of mathematicians such as Euclid, Apollonius, Archimedes, Diophantus, Aryabhata and Brahmagupta were all acquired by the Islamic world and incorporated into their mathematics.

Perhaps the most influential mathematical contribution from India was the decimal place-value Indo-Arabic numeral system, also known as the Hindu numerals. The Persian historian al-Biruni (c. 1050) in his book Tariq al-Hind states that the Abbasid caliph al-Ma'mun had an embassy in India from which was brought a book to Baghdad that was translated into Arabic as Sindhind. It is generally assumed that Sindhind is none other than Brahmagupta's Brahmasphuta-siddhanta. The earliest translations from Sanskrit inspired several astronomical and astrological Arabic works, now mostly lost, some of which were even composed in verse.

The Indian influences were later overwhelmed by Greek mathematical and astronomical texts. It is not clear why this occurred, but it may have been due to the greater availability of Greek texts in the region, the larger number of practitioners of Greek mathematics in the region, or because Islamic mathematicians favored the deductive exposition of the Greeks over the elliptic Sanskrit verse of the Indians. Regardless of the reason, Indian mathematics soon became mostly eclipsed by or merged with the "Graeco-Islamic" science founded on Hellenistic treatises. Another likely reason for the declining Indian influence in later periods was that Sindh achieved independence from the Caliphate, thus limiting access to Indian works. Nevertheless, Indian methods continued to play an important role in algebra, arithmetic and trigonometry.

Besides the Greek and Indian traditions, a third tradition which had a significant influence on mathematics in medieval Islam was the "mathematics of practitioners", which included the applied mathematics of "surveyors, builders, artisans, in geometric design, tax and treasury officials, and some merchants." This applied form of mathematics transcended ethnic divisions and was a common heritage of the lands incorporated into the Islamic world. This tradition also includes the religious observances specific to Islam, which served as a major impetus for the development of mathematics as well as astronomy.

Islam and mathematics

A major impetus for the flowering of mathematics as well as astronomy in medieval Islam came from religious observances, which presented an assortment of problems in astronomy and mathematics, specifically in trigonometry, spherical geometry, algebra and arithmetic. The Islamic law of inheritance served as an impetus behind the development of algebra (derived from the Arabic al-jabr) by Muhammad ibn Mūsā al-Khwārizmī and other medieval Islamic mathematicians. Al-Khwārizmī's Hisab al-jabr w'al-muqabala devoted a chapter to the solution of the Islamic law of inheritance using algebra. He formulated the rules of inheritance as linear equations, hence knowledge of quadratic equations was not required.
Later mathematicians who specialized in the Islamic law of inheritance included Al-Hassār, who developed the modern symbolic mathematical notation for fractions in the 12th century, and Abū al-Hasan ibn Alī al-Qalasādī, who developed an algebraic notation which took "the first steps toward the introduction of algebraic symbolism" in the 15th century.

In order to observe holy days on the Islamic calendar, in which timings were determined by phases of the moon, astronomers initially used Ptolemy's method to calculate the place of the moon and stars. The method Ptolemy used to solve spherical triangles, however, was a clumsy one devised late in the first century by Menelaus of Alexandria. It involved setting up two intersecting right triangles; by applying Menelaus' theorem it was possible to solve one of the six sides, but only if the other five sides were known. To tell the time from the sun's altitude, for instance, repeated applications of Menelaus' theorem were required. For medieval Islamic astronomers, there was an obvious challenge to find a simpler trigonometric method.

Regarding the issue of moon sighting, Islamic months do not begin at the astronomical new moon, defined as the time when the moon has the same celestial longitude as the sun and is therefore invisible; instead they begin when the thin crescent moon is first sighted in the western evening sky. The Qur'an says: "They ask you about the waxing and waning phases of the crescent moons, say they are to mark fixed times for mankind and Hajj." This led Muslims to find the phases of the moon in the sky, and their efforts led to new mathematical calculations. Predicting just when the crescent moon would become visible is a special challenge to Islamic mathematical astronomers. Although Ptolemy's theory of the complex lunar motion was tolerably accurate near the time of the new moon, it specified the moon's path only with respect to the ecliptic. To predict the first visibility of the moon, it was necessary to describe its motion with respect to the horizon, and this problem demands fairly sophisticated spherical geometry.

Finding the direction of Mecca and the time of Salah are the reasons which led to Muslims developing spherical geometry. Solving any of these problems involves finding the unknown sides or angles of a triangle on the celestial sphere from the known sides and angles. A way of finding the time of day, for example, is to construct a triangle whose vertices are the zenith, the north celestial pole, and the sun's position. The observer must know the altitude of the sun and that of the pole; the former can be observed, and the latter is equal to the observer's latitude. The time is then given by the angle at the intersection of the meridian (the arc through the zenith and the pole) and the sun's hour circle (the arc through the sun and the pole). Muslims are also expected to pray towards the Kaaba in Mecca and orient their mosques in that direction. Thus they need to determine the direction of Mecca (Qibla) from a given location. Another problem is the time of Salah. Muslims need to determine from celestial bodies the proper times for the prayers at sunrise, at midday, in the afternoon, at sunset, and in the evening.

J. J. O'Connor and E. F. Robertson wrote in the MacTutor History of Mathematics archive: "Recent research paints a new picture of the debt that we owe to Islamic mathematics. Certainly many of the ideas which were previously thought to have been brilliant new conceptions due to European mathematicians of the 16th, 17th, and 18th centuries are now known to have been developed by Arabic/Islamic mathematicians around four centuries earlier.
In many respects, the mathematics studied today is far closer in style to that of Islamic mathematics than to that of Greek mathematics."

R. Rashed wrote in The development of Arabic mathematics: between arithmetic and algebra: "Al-Khwarizmi's successors undertook a systematic application of arithmetic to algebra, algebra to arithmetic, both to trigonometry, algebra to the Euclidean theory of numbers, algebra to geometry, and geometry to algebra. This was how the creation of polynomial algebra, combinatorial analysis, numerical analysis, the numerical solution of equations, the new elementary theory of numbers, and the geometric construction of equations arose."

- Al-Ḥajjāj ibn Yūsuf ibn Maṭar (786 – 833) - Al-Ḥajjāj translated Euclid's Elements into Arabic.
- Muḥammad ibn Mūsā al-Khwārizmī (c. 780 Khwarezm/Baghdad – c. 850 Baghdad) - Al-Khwārizmī was a Persian mathematician, astronomer, astrologer and geographer. He worked most of his life as a scholar in the House of Wisdom in Baghdad. His Algebra was the first book on the systematic solution of linear and quadratic equations. Latin translations of his Arithmetic, on the Indian numerals, introduced the decimal positional number system to the Western world in the 12th century. He revised and updated Ptolemy's Geography as well as writing several works on astronomy and astrology.
- Al-ʿAbbās ibn Saʿid al-Jawharī (c. 800 Baghdad? – c. 860 Baghdad?) - Al-Jawharī was a mathematician who worked at the House of Wisdom in Baghdad. His most important work was his Commentary on Euclid's Elements, which contained nearly 50 additional propositions and an attempted proof of the parallel postulate.
- ʿAbd al-Hamīd ibn Turk (fl. 830 Baghdad) - Ibn Turk wrote a work on algebra of which only a chapter on the solution of quadratic equations has survived.
- Yaʿqūb ibn Isḥāq al-Kindī (c. 801 Kufah – 873 Baghdad) - Al-Kindī (or Alkindus) was a philosopher and scientist who worked at the House of Wisdom in Baghdad, where he wrote commentaries on many Greek works. His contributions to mathematics include many works on arithmetic and geometry.
- Hunayn ibn Ishaq (808 Al-Hirah – 873 Baghdad) - Hunayn (or Johannitus) was a translator who worked at the House of Wisdom in Baghdad. He translated many Greek works, including those by Plato, Aristotle, Galen, Hippocrates, and the Neoplatonists.
- Banū Mūsā (c. 800 Baghdad – 873+ Baghdad) - The Banū Mūsā were three brothers who worked at the House of Wisdom in Baghdad. Their most famous mathematical treatise is The Book of the Measurement of Plane and Spherical Figures, which considered similar problems as Archimedes did in his On the Measurement of the Circle and On the Sphere and the Cylinder. They also contributed individually. The eldest, Jaʿfar Muḥammad (c. 800), specialised in geometry and astronomy. He wrote a critical revision of Apollonius' Conics called Premises of the Book of Conics. Aḥmad (c. 805) specialised in mechanics and wrote a work on pneumatic devices called On Mechanics. The youngest, al-Ḥasan (c. 810), specialised in geometry and wrote a work on the ellipse called The Elongated Circular Figure.
- Ahmed ibn Yusuf
- Thabit ibn Qurra (Syria-Iraq, 835-901)
- Al-Hashimi (Iraq? ca. 850-900)
- Muḥammad ibn Jābir al-Ḥarrānī al-Battānī (c. 853 Harran – 929 Qasr al-Jiss near Samarra)
- Abu Kamil (Egypt? ca. 900)
- Sinan ibn Tabit (ca. 880 - 943)
- Ibrahim ibn Sinan (Iraq, 909-946)
- Al-Khazin (Iraq-Iran, ca. 920-980)
- Al-Karabisi (Iraq? 10th century?)
- Ikhwan al-Safa' (Iraq, first half of 10th century) - The Ikhwan al-Safa' ("Brethren of Purity") were a (mystical?) group in the city of Basra in Iraq. The group authored a series of more than 50 letters on science, philosophy and theology. The first letter is on arithmetic and number theory, the second letter on geometry.
- Al-Uqlidisi (Iraq-Iran, 10th century)
- Al-Saghani (Iraq-Iran, ca. 940-1000)
- Abū Sahl al-Qūhī (Iraq-Iran, ca. 940-1000)
- Abū al-Wafāʾ al-Būzjānī (Iraq-Iran, ca. 940-998)
- Ibn Sahl (Iraq-Iran, ca. 940-1000)
- Al-Sijzi (Iran, ca. 940-1000)
- Labana of Cordoba (Spain, ca. 10th century) - One of the few Islamic female mathematicians known by name, and the secretary of the Umayyad Caliph al-Hakem II. She was well-versed in the exact sciences, and could solve the most complex geometrical and algebraic problems known in her time.
- Ibn Yunus (Egypt, ca. 950-1010)
- Abu Nasr ibn `Iraq (Iraq-Iran, ca. 950-1030)
- Kushyar ibn Labban (Iran, ca. 960-1010)
- Al-Karaji (Iran, ca. 970-1030)
- Ibn al-Haytham (Iraq-Egypt, ca. 965-1040)
- Abū al-Rayḥān al-Bīrūnī (September 15, 973 in Kath, Khwarezm – December 13, 1048 in Gazna)
- Ibn Sina (Avicenna)
- Al-Jayyani (Spain, ca. 1030-1090)
- Ibn al-Zarqalluh (Azarquiel, al-Zarqali) (Spain, ca. 1030-1090)
- Al-Mu'taman ibn Hud (Spain, ca. 1080)
- al-Khayyam (Iran, ca. 1050-1130)
- Ibn Yaḥyā al-Maghribī al-Samawʾal (ca. 1130, Baghdad – c. 1180, Maragha)
- Al-Hassār (ca. 1100s, Maghreb) - Developed the modern mathematical notation for fractions, and the digits he uses for the ghubar numerals also closely resemble modern Western Arabic numerals.
- Ibn al-Yāsamīn (ca. 1100s, Maghreb) - The son of a Berber father and black African mother, he was the first to develop a mathematical notation for algebra since the time of Brahmagupta.
- Sharaf al-Dīn al-Ṭūsī (Iran, ca. 1150-1215)
- Ibn Mun`im (Maghreb, ca. 1210)
- al-Marrakushi (Morocco, 13th century)
- Naṣīr al-Dīn al-Ṭūsī (18 February 1201 in Tus, Khorasan – 26 June 1274 in Kadhimain near Baghdad)
- Muḥyi al-Dīn al-Maghribī (c. 1220 Spain – c. 1283 Maragha)
- Shams al-Dīn al-Samarqandī (c. 1250 Samarqand – c. 1310)
- Ibn Baso (Spain, ca. 1250-1320)
- Ibn al-Banna' (Maghreb, ca. 1300)
- Kamal al-Din Al-Farisi (Iran, ca. 1300)
- Al-Khalili (Syria, ca. 1350-1400)
- Ibn al-Shatir (1306-1375)
- Qāḍī Zāda al-Rūmī (1364 Bursa – 1436 Samarkand)
- Jamshīd al-Kāshī (Iran, Uzbekistan, ca. 1420)
- Ulugh Beg (Iran, Uzbekistan, 1394-1449)
- Abū al-Hasan ibn Alī al-Qalasādī (Maghreb, 1412-1482) - Last major medieval Arab mathematician. Pioneer of symbolic algebra.

Algebra

The term algebra is derived from the Arabic term al-jabr in the title of Al-Khwarizmi's Al-jabr wa'l muqabalah. He originally used the term al-jabr to describe the method of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. There are three theories about the origins of Islamic algebra. The first emphasizes Hindu influence, the second emphasizes Mesopotamian or Persian-Syriac influence, and the third emphasizes Greek influence. Many scholars believe that it is the result of a combination of all three sources.

Throughout their time in power, before the fall of Islamic civilization, the Arabs used a fully rhetorical algebra, where sometimes even the numbers were spelled out in words. The Arabs would eventually replace spelled-out numbers (e.g. twenty-two) with Arabic numerals (e.g.
22), but the Arabs never adopted or developed a syncopated or symbolic algebra until the work of Ibn al-Banna al-Marrakushi in the 13th century and Abū al-Hasan ibn Alī al-Qalasādī in the 15th century.

There were four conceptual stages in the development of algebra, three of which either began in, or were significantly advanced in, the Islamic world. These four stages were as follows:
- Geometric stage, where the concepts of algebra are largely geometric. This dates back to the Babylonians and continued with the Greeks, and was revived by Omar Khayyam.
- Static equation-solving stage, where the objective is to find numbers satisfying certain relationships. The move away from geometric algebra dates back to Diophantus and Brahmagupta, but algebra did not decisively move to the static equation-solving stage until Al-Khwarizmi's Al-Jabr.
- Dynamic function stage, where motion is an underlying idea. The idea of a function began emerging with Sharaf al-Dīn al-Tūsī, but algebra did not decisively move to the dynamic function stage until Gottfried Leibniz.
- Abstract stage, where mathematical structure plays a central role. Abstract algebra is largely a product of the 19th and 20th centuries.

Static equation-solving algebra

- Al-Khwarizmi and Al-jabr wa'l muqabalah

The Muslim Persian mathematician Muhammad ibn Mūsā al-Khwārizmī (c. 780-850) was a faculty member of the "House of Wisdom" (Bait al-hikma) in Baghdad, which was established by Al-Mamun. Al-Khwarizmi, who died around 850 A.D., wrote more than half a dozen mathematical and astronomical works, some of which were based on the Indian Sindhind. One of al-Khwarizmi's most famous books is entitled Al-jabr wa'l muqabalah or The Compendious Book on Calculation by Completion and Balancing, and it gives an exhaustive account of solving polynomials up to the second degree. The book also introduced the fundamental method of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. This is the operation which Al-Khwarizmi originally described as al-jabr.

Al-Jabr is divided into six chapters, each of which deals with a different type of formula. The first chapter of Al-Jabr deals with equations whose squares equal their roots (ax² = bx), the second chapter deals with squares equal to a number (ax² = c), the third chapter deals with roots equal to a number (bx = c), the fourth chapter deals with squares and roots equal to a number (ax² + bx = c), the fifth chapter deals with squares and a number equal to roots (ax² + c = bx), and the sixth and final chapter deals with roots and a number equal to squares (bx + c = ax²).

J. J. O'Connor and E. F. Robertson wrote in the MacTutor History of Mathematics archive: "Perhaps one of the most significant advances made by Arabic mathematics began at this time with the work of al-Khwarizmi, namely the beginnings of algebra. It is important to understand just how significant this new idea was. It was a revolutionary move away from the Greek concept of mathematics which was essentially geometry. Algebra was a unifying theory which allowed rational numbers, irrational numbers, geometrical magnitudes, etc., to all be treated as "algebraic objects". It gave mathematics a whole new development path so much broader in concept to that which had existed before, and provided a vehicle for future development of the subject.
Another important aspect of the introduction of algebraic ideas was that it allowed mathematics to be applied to itself in a way which had not happened before."

The Hellenistic mathematician Diophantus was traditionally known as "the father of algebra", but debate now exists as to whether or not Al-Khwarizmi deserves this title instead. Those who support Diophantus point to the fact that the algebra found in Al-Jabr is more elementary than the algebra found in Arithmetica, and that Arithmetica is syncopated while Al-Jabr is fully rhetorical. Those who support Al-Khwarizmi point to the fact that he gave an exhaustive explanation of the algebraic solution of quadratic equations with positive roots, and was the first to teach algebra in an elementary form and for its own sake, whereas Diophantus was primarily concerned with the theory of numbers. R. Rashed and Angela Armstrong write:

"Al-Khwarizmi's text can be seen to be distinct not only from the Babylonian tablets, but also from Diophantus' Arithmetica. It no longer concerns a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study. On the other hand, the idea of an equation for its own sake appears from the beginning and, one could say, in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems."

- Logical Necessities in Mixed Equations

'Abd al-Hamīd ibn Turk (fl. 830) authored a manuscript entitled Logical Necessities in Mixed Equations, which is very similar to al-Khwarizmi's Al-Jabr and was published at around the same time as, or even possibly earlier than, Al-Jabr. The manuscript gives exactly the same geometric demonstration as is found in Al-Jabr, and in one case the same example as found in Al-Jabr, and even goes beyond Al-Jabr by giving a geometric proof that if the determinant is negative then the quadratic equation has no solution. The similarity between these two works has led some historians to conclude that Islamic algebra may have been well developed by the time of al-Khwarizmi and 'Abd al-Hamid.

- Abū Kāmil and al-Karkhi

Arabic mathematicians were also the first to treat irrational numbers as algebraic objects. The Egyptian mathematician Abū Kāmil Shujā ibn Aslam (c. 850-930) was the first to accept irrational numbers (often in the form of a square root, cube root or fourth root) as solutions to quadratic equations or as coefficients in an equation. He was also the first to solve three non-linear simultaneous equations with three unknown variables.

Al-Karkhi (953-1029), also known as Al-Karaji, was the successor of Abū al-Wafā' al-Būzjānī (940-998), and he was the first to discover the solution to equations of the form ax^(2n) + bx^n = c. Al-Karkhi only considered positive roots. He is also regarded as the first person to free algebra from geometrical operations and replace them with the arithmetical operations which are at the core of algebra today. His work on algebra and polynomials gave the rules for arithmetic operations to manipulate polynomials. The historian of mathematics F. Woepcke, in Extrait du Fakhri, traité d'Algèbre par Abou Bekr Mohammed Ben Alhacan Alkarkhi (Paris, 1853), praised Al-Karaji for being "the first who introduced the theory of algebraic calculus". Stemming from this, Al-Karaji investigated binomial coefficients and Pascal's triangle.
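In modern notation, the six canonical forms from Al-Jabr described above cover the positive-root cases of linear and quadratic equations. The sketch below is a minimal modern restatement of those six types, solved for positive roots only; the function names and the quadratic formula are present-day conveniences, not a reconstruction of al-Khwarizmi's rhetorical, geometric procedure.

```python
import math

# The six canonical forms of Al-Jabr, restated in modern notation and solved
# for positive roots only (as al-Khwarizmi did).  All coefficients a, b, c
# are assumed positive, which is what the canonical forms require.

def squares_equal_roots(a, b):                 # ax^2 = bx      -> x = b/a
    return [b / a]

def squares_equal_number(a, c):                # ax^2 = c       -> x = sqrt(c/a)
    return [math.sqrt(c / a)]

def roots_equal_number(b, c):                  # bx = c         -> x = c/b
    return [c / b]

def squares_and_roots_equal_number(a, b, c):   # ax^2 + bx = c  -> one positive root
    return [(-b + math.sqrt(b * b + 4 * a * c)) / (2 * a)]

def squares_and_number_equal_roots(a, b, c):   # ax^2 + c = bx  -> two, one or no positive roots
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    return sorted({(b - math.sqrt(disc)) / (2 * a), (b + math.sqrt(disc)) / (2 * a)})

def roots_and_number_equal_squares(a, b, c):   # bx + c = ax^2  -> one positive root
    return [(b + math.sqrt(b * b + 4 * a * c)) / (2 * a)]

# Al-Khwarizmi's classic worked example "a square and ten roots equal
# thirty-nine", i.e. x^2 + 10x = 39, has the positive root 3.
print(squares_and_roots_equal_number(1, 10, 39))   # [3.0]
```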
In linear algebra and recreational mathematics, magic squares were known to Arab mathematicians possibly as early as the 7th century, when the Arabs came into contact with Indian or South Asian culture and learned Indian mathematics and astronomy, including other aspects of combinatorial mathematics. It has also been suggested that the idea came via China. The first magic squares of order 5 and 6 appear in an encyclopedia from Baghdad circa 983 AD, the Rasa'il Ikhwan al-Safa (Encyclopedia of the Brethren of Purity); simpler magic squares were known to several earlier Arab mathematicians. The Arab mathematician Ahmad al-Buni, who worked on magic squares around 1200 AD, attributed mystical properties to them, although no details of these supposed properties are known. There are also references to the use of magic squares in astrological calculations, a practice that seems to have originated with the Arabs.

Omar Khayyám (c. 1050-1123) wrote a book on algebra that went beyond Al-Jabr to include equations of the third degree. Omar Khayyám provided both arithmetic and geometric solutions for quadratic equations, but he only gave geometric solutions for general cubic equations, since he mistakenly believed that arithmetic solutions were impossible. His method of solving cubic equations by using intersecting conics had been used by Menaechmus, Archimedes, and Alhazen, but Omar Khayyám generalized the method to cover all cubic equations with positive roots. He only considered positive roots and he did not go past the third degree. He also saw a strong relationship between geometry and algebra.

Dynamic functional algebra

In the 12th century, Sharaf al-Dīn al-Tūsī found algebraic and numerical solutions to cubic equations and was the first to discover the derivative of cubic polynomials. His Treatise on Equations dealt with equations up to the third degree. The treatise does not follow Al-Karaji's school of algebra, but instead represents "an essential contribution to another algebra which aimed to study curves by means of equations, thus inaugurating the beginning of algebraic geometry." The treatise dealt with 25 types of equations, including twelve types of linear and quadratic equations, eight types of cubic equations with positive solutions, and five types of cubic equations which may not have positive solutions. He understood the importance of the discriminant of the cubic equation and used an early version of Cardano's formula to find algebraic solutions to certain types of cubic equations.

Sharaf al-Din also developed the concept of a function. In his analysis of the equation x³ + d = bx², for example, he begins by changing the equation's form to x²(b − x) = d. He then states that the question of whether the equation has a solution depends on whether or not the "function" on the left side reaches the value d. To determine this, he finds the maximum value of the function. He proves that the maximum occurs at x = 2b/3, which gives the functional value 4b³/27. Sharaf al-Din then states that if this value is less than d, there are no positive solutions; if it is equal to d, then there is one solution, at x = 2b/3; and if it is greater than d, then there are two solutions, one between 0 and 2b/3 and one between 2b/3 and b. This was the earliest form of dynamic functional algebra.
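Assuming the x³ + d = bx² form reconstructed above, the criterion can be checked numerically: compare d with the maximum 4b³/27 of f(x) = x²(b − x) to count positive solutions, then locate them by bisection. The sample values of b and d below are arbitrary, and the code is only an illustration of the criterion, not of al-Tusi's own procedure.

```python
# Sharaf al-Din al-Tusi's maximum-value criterion for x^3 + d = b x^2,
# rewritten as f(x) = x^2 (b - x) = d.  The maximum of f on (0, b) is at
# x = 2b/3 with value 4b^3/27; comparing d with that value gives the number
# of positive solutions.  b and d below are arbitrary sample values.

def f(x, b):
    return x * x * (b - x)

def count_positive_solutions(b, d):
    f_max = 4 * b**3 / 27
    if d > f_max:
        return 0
    if d == f_max:
        return 1          # the single solution x = 2b/3
    return 2              # one root in (0, 2b/3), one in (2b/3, b)

def bisect(lo, hi, b, d, steps=60):
    # f(x) - d changes sign exactly once on each of the two subintervals.
    for _ in range(steps):
        mid = (lo + hi) / 2
        if (f(lo, b) - d) * (f(mid, b) - d) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

b, d = 6.0, 20.0
print(count_positive_solutions(b, d))                       # 2, since 4*6^3/27 = 32 > 20
print(bisect(0, 2 * b / 3, b, d), bisect(2 * b / 3, b, b, d))  # ~2.36 and ~5.27
```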
In numerical analysis, the essence of Viète's method was known to Sharaf al-Dīn al-Tūsī in the 12th century, and it is possible that the algebraic tradition of Sharaf al-Dīn, as well as that of his predecessor Omar Khayyám and his successor Jamshīd al-Kāshī, was known to the 16th-century European algebraists, of whom François Viète was the most important. A method algebraically equivalent to Newton's method was also known to Sharaf al-Dīn. In the 15th century, his successor al-Kashi used a form of Newton's method to solve x^P − N = 0 in order to find roots of N. In western Europe, a similar method was later described by Henry Briggs in his Trigonometria Britannica, published in 1633.

Al-Hassār, a mathematician from the Maghreb (North Africa) specializing in Islamic inheritance jurisprudence during the 12th century, developed the modern symbolic mathematical notation for fractions, where the numerator and denominator are separated by a horizontal bar. This same fractional notation appeared soon after in the work of Fibonacci in the 13th century.

Abū al-Hasan ibn Alī al-Qalasādī (1412-1482) was the last major medieval Arab algebraist, and he improved on the algebraic notation earlier used in the Maghreb by Ibn al-Banna in the 13th century and by Ibn al-Yāsamīn in the 12th century. In contrast to the syncopated notations of their predecessors, Diophantus and Brahmagupta, which lacked symbols for mathematical operations, al-Qalasadi's algebraic notation was the first to have symbols for these operations and thus represented "the first steps toward the introduction of algebraic symbolism." He represented mathematical symbols using characters from the Arabic alphabet.

The symbol x now commonly denotes an unknown variable. Even though any letter can be used, x is the most common choice. This usage can be traced back to the Arabic word šay' شيء = "thing," used in Arabic algebra texts such as the Al-Jabr, and was taken into Old Spanish with the pronunciation "šei," which was written xei, and was soon habitually abbreviated to x. (The Spanish pronunciation of "x" has changed since.) Some sources say that this x is an abbreviation of Latin causa, which was a translation of Arabic شيء. This started the habit of using letters to represent quantities in algebra. In mathematics, an italicized x is often used to avoid potential confusion with the multiplication symbol.

The Indian numeral system came to be known to both the Persian mathematician Al-Khwarizmi, whose book On the Calculation with Hindu Numerals was written circa 825, and the Arab mathematician Al-Kindi, who wrote four volumes, On the Use of the Indian Numerals (Ketab fi Isti'mal al-'Adad al-Hindi), circa 830. These works are principally responsible for the diffusion of the Indian system of numeration in the Middle East and the West. In the 10th century, Middle Eastern mathematicians extended the decimal numeral system to include fractions using decimal point notation, as recorded in a treatise by the Syrian mathematician Abu'l-Hasan al-Uqlidisi in 952-953.

In the Arab world—until early modern times—the Arabic numeral system was often only used by mathematicians. Muslim astronomers mostly used the Babylonian numeral system, and merchants mostly used the Abjad numerals. A distinctive "Western Arabic" variant of the symbols began to emerge around the 10th century in the Maghreb and Al-Andalus, called the ghubar ("sand-table" or "dust-table") numerals, which are the direct ancestor of the modern Western Arabic numerals now used throughout the world.
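Returning briefly to the numerical methods mentioned above: the root-extraction problem attributed to al-Kashi, solving x^P = N, lends itself to exactly this kind of Newton-style iteration. The sketch below is a present-day restatement, not al-Kashi's own sexagesimal digit-by-digit procedure, and the sample values of N and P are arbitrary.

```python
# Modern Newton-style iteration for f(x) = x^P - N = 0, i.e. extracting the
# P-th root of N.  This illustrates the kind of method discussed above; it is
# not a reconstruction of al-Kashi's historical algorithm.

def pth_root(N, P, x0=1.0, tol=1e-12, max_iter=200):
    x = x0
    for _ in range(max_iter):
        step = (x**P - N) / (P * x**(P - 1))   # Newton step
        x -= step
        if abs(step) < tol * max(1.0, abs(x)):
            break
    return x

# Arbitrary example: the fifth root of 759375 is exactly 15.
print(pth_root(759_375, 5))   # ~15.0
```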
The first mentions of the numerals in the West are found in the Codex Vigilanus of 976. From the 980s, Gerbert of Aurillac (later, Pope Silvester II) began to spread knowledge of the numerals in Europe. Gerbert studied in Barcelona in his youth, and he is known to have requested mathematical treatises concerning the astrolabe from Lupitus of Barcelona after he had returned to France.

Al-Khwārizmī, the Persian scientist, wrote in 825 a treatise On the Calculation with Hindu Numerals, which was translated into Latin in the 12th century as Algoritmi de numero Indorum, where "Algoritmi", the translator's rendition of the author's name, gave rise to the word algorithm (Latin algorithmus), meaning "calculation method".

Al-Hassār, a mathematician from the Maghreb (North Africa) specializing in Islamic inheritance jurisprudence during the 12th century, developed the modern symbolic mathematical notation for fractions, where the numerator and denominator are separated by a horizontal bar. The "dust ciphers" he used are also nearly identical to the digits used in the current Western Arabic numerals. These same digits and fractional notation appear soon after in the work of Fibonacci in the 13th century.

"The introduction of decimal fractions as a common computational practice can be dated back to the Flemish pamphlet De Thiende, published at Leyden in 1585, together with a French translation, La Disme, by the Flemish mathematician Simon Stevin (1548-1620), then settled in the Northern Netherlands. It is true that decimal fractions were used by the Chinese many centuries before Stevin and that the Persian astronomer Al-Kāshī used both decimal and sexagesimal fractions with great ease in his Key to Arithmetic (Samarkand, early fifteenth century)." While the Persian mathematician Jamshīd al-Kāshī claimed to have discovered decimal fractions himself in the 15th century, J. Lennart Berggren notes that he was mistaken, as decimal fractions were first used five centuries before him by the Baghdadi mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century.

The Middle Ages saw the acceptance of zero, negative, integral and fractional numbers, first by Indian mathematicians and Chinese mathematicians, and then by Arabic mathematicians, who were also the first to treat irrational numbers as algebraic objects, which was made possible by the development of algebra. Arabic mathematicians merged the concepts of "number" and "magnitude" into a more general idea of real numbers, and they criticized Euclid's idea of ratios, developed the theory of composite ratios, and extended the concept of number to ratios of continuous magnitude.

In his commentary on Book 10 of the Elements, the Persian mathematician Al-Mahani (d. 874/884) examined and classified quadratic irrationals and cubic irrationals. He provided definitions for rational and irrational magnitudes, which he treated as irrational numbers. He dealt with them freely but explains them in geometric terms as follows: "It will be a rational (magnitude) when we, for instance, say 10, 12, 3%, 6%, etc., because its value is pronounced and expressed quantitatively. What is not rational is irrational and it is impossible to pronounce and represent its value quantitatively. For example: the roots of numbers such as 10, 15, 20 which are not squares, the sides of numbers which are not cubes etc."
In contrast to Euclid's concept of magnitudes as lines, Al-Mahani considered integers and fractions as rational magnitudes, and square roots and cube roots as irrational magnitudes. He also introduced an arithmetical approach to the concept of irrationality, as he attributes the following to irrational magnitudes: "their sums or differences, or results of their addition to a rational magnitude, or results of subtracting a magnitude of this kind from an irrational one, or of a rational magnitude from it."

The Egyptian mathematician Abū Kāmil Shujā ibn Aslam (c. 850–930) was the first to accept irrational numbers as solutions to quadratic equations or as coefficients in an equation, often in the form of square roots, cube roots and fourth roots. In the 10th century, the Iraqi mathematician Al-Hashimi provided general proofs (rather than geometric demonstrations) for irrational numbers, as he considered multiplication, division, and other arithmetical functions. Abū Ja'far al-Khāzin (900-971) provided a definition of rational and irrational magnitudes, stating that if a definite quantity is:

"contained in a certain given magnitude once or many times, then this (given) magnitude corresponds to a rational number. . . . Each time when this (latter) magnitude comprises a half, or a third, or a quarter of the given magnitude (of the unit), or, compared with (the unit), comprises three, five, or three fifths, it is a rational magnitude. And, in general, each magnitude that corresponds to this magnitude (i.e. to the unit), as one number to another, is rational. If, however, a magnitude cannot be represented as a multiple, a part (1/n), or parts (m/n) of a given magnitude, it is irrational, i.e. it cannot be expressed other than by means of roots."

Many of these concepts were eventually accepted by European mathematicians some time after the Latin translations of the 12th century.

In number theory, Ibn al-Haytham solved problems involving congruences using what is now called Wilson's theorem. In his Opuscula, Ibn al-Haytham considers the solution of a system of congruences, and gives two general methods of solution. His first method, the canonical method, involved Wilson's theorem, while his second method involved a version of the Chinese remainder theorem. Another contribution to number theory is his work on perfect numbers. In his Analysis and Synthesis, Ibn al-Haytham was the first to discover that every even perfect number is of the form 2^(n−1)(2^n − 1), where 2^n − 1 is prime, but he was not able to prove this result successfully (Euler later proved it in the 18th century).

In the early 14th century, Kamāl al-Dīn al-Fārisī made a number of important contributions to number theory. His most impressive work in number theory is on amicable numbers. In Tadhkira al-ahbab fi bayan al-tahabb ("Memorandum for friends on the proof of amicability") he introduced a major new approach to a whole area of number theory, introducing ideas concerning factorization and combinatorial methods. In fact, al-Farisi's approach is based on the unique factorization of an integer into powers of prime numbers.
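Both number-theoretic ideas above are easy to check computationally: the form 2^(n−1)(2^n − 1) with 2^n − 1 prime does yield perfect numbers, and an amicable pair is one in which each number equals the sum of the other's proper divisors. The brute-force helpers below are modern conveniences for verification, not historical methods.

```python
# Verifying the even-perfect-number form 2^(n-1) * (2^n - 1) with 2^n - 1
# prime, and checking the classical amicable pair (220, 284).  Brute-force
# helpers only, used here for small numbers.

def proper_divisor_sum(m):
    return sum(d for d in range(1, m) if m % d == 0)

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

# n = 2, 3, 5, 7 give the Mersenne primes 3, 7, 31, 127 and the perfect
# numbers 6, 28, 496, 8128.
for n in (2, 3, 5, 7):
    if is_prime(2**n - 1):
        p = 2**(n - 1) * (2**n - 1)
        assert proper_divisor_sum(p) == p   # perfect: equals the sum of its proper divisors

# Amicable pair: each number is the proper-divisor sum of the other.
print(proper_divisor_sum(220), proper_divisor_sum(284))   # 284 220
```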
The successors of Muhammad ibn Mūsā al-Khwārizmī (born 780) undertook a systematic application of arithmetic to algebra, algebra to arithmetic, both to trigonometry, algebra to the Euclidean theory of numbers, algebra to geometry, and geometry to algebra. This was how the creation of polynomial algebra, combinatorial analysis, numerical analysis, the numerical solution of equations, the new elementary theory of numbers, and the geometric construction of equations arose. Al-Mahani (born 820) conceived the idea of reducing geometrical problems such as duplicating the cube to problems in algebra. Al-Karaji (born 953) completely freed algebra from geometrical operations and replaced them with the arithmetical type of operations which are at the core of algebra today.

Early Islamic geometry
See also: Applied mathematics

Thabit ibn Qurra (known as Thebit in Latin) (born 836) contributed to a number of areas in mathematics, where he played an important role in preparing the way for such important mathematical discoveries as the extension of the concept of number to (positive) real numbers, integral calculus, theorems in spherical trigonometry, analytic geometry, and non-Euclidean geometry. An important geometrical aspect of Thabit's work was his book on the composition of ratios. In this book, Thabit deals with arithmetical operations applied to ratios of geometrical quantities. The Greeks had dealt with geometric quantities but had not thought of them in the same way as numbers to which the usual rules of arithmetic could be applied. By introducing arithmetical operations on quantities previously regarded as geometric and non-numerical, Thabit started a trend which led eventually to the generalization of the number concept. Another important contribution Thabit made to geometry was his generalization of the Pythagorean theorem, which he extended from special right triangles to all right triangles in general, along with a general proof. In some respects, Thabit is critical of the ideas of Plato and Aristotle, particularly regarding motion. It would seem that here his ideas were based on an acceptance of using arguments concerning motion in his geometrical arguments.

Ibrahim ibn Sinan ibn Thabit (born 908), who introduced a method of integration more general than that of Archimedes, and al-Quhi (born 940) were leading figures in a revival and continuation of Greek higher geometry in the Islamic world. These mathematicians, and in particular Ibn al-Haytham (Alhazen), studied optics and investigated the optical properties of mirrors made from conic sections (see Mathematical physics).

Astronomy, time-keeping and geography provided other motivations for geometrical and trigonometrical research. For example, Ibrahim ibn Sinan and his grandfather Thabit ibn Qurra both studied curves required in the construction of sundials. Abu'l-Wafa and Abu Nasr Mansur pioneered spherical geometry in order to solve difficult problems in Islamic astronomy. For example, to predict the first visibility of the moon, it was necessary to describe its motion with respect to the horizon, and this problem demands fairly sophisticated spherical geometry. Finding the direction of Mecca (Qibla) and the time for Salah prayers and Ramadan are what led Muslims to develop spherical geometry.

Algebraic and analytic geometry

In the early 11th century, Ibn al-Haytham (Alhazen) was able to solve by purely algebraic means certain cubic equations, and then to interpret the results geometrically.
Subsequently, Omar Khayyám discovered the general method of solving cubic equations by intersecting a parabola with a circle. Omar Khayyám (1048-1131) was a Persian mathematician, as well as a poet. Along with his fame as a poet, he was also famous during his lifetime as a mathematician, well known for inventing the general method of solving cubic equations by intersecting a parabola with a circle. In addition he discovered the binomial expansion, and authored criticisms of Euclid's theories of parallels which made their way to England, where they contributed to the eventual development of non-Euclidean geometry. Omar Khayyam also combined the use of trigonometry and approximation theory to provide methods of solving algebraic equations by geometrical means. His work marked the beginnings of algebraic geometry and analytic geometry.

In a paper written by Khayyam before his famous algebra text Treatise on Demonstration of Problems of Algebra, he considers the problem: Find a point on a quadrant of a circle in such manner that when a normal is dropped from the point to one of the bounding radii, the ratio of the normal's length to that of the radius equals the ratio of the segments determined by the foot of the normal. Khayyam shows that this problem is equivalent to solving a second problem: Find a right triangle having the property that the hypotenuse equals the sum of one leg plus the altitude on the hypotenuse. This problem in turn led Khayyam to solve the cubic equation x³ + 200x = 20x² + 2000, and he found a positive root of this cubic by considering the intersection of a rectangular hyperbola and a circle. An approximate numerical solution was then found by interpolation in trigonometric tables. Perhaps even more remarkable is the fact that Khayyam states that the solution of this cubic requires the use of conic sections and that it cannot be solved by compass and straightedge, a result which would not be proved for another 750 years.

His Treatise on Demonstration of Problems of Algebra contained a complete classification of cubic equations with geometric solutions found by means of intersecting conic sections. In fact Khayyam gives an interesting historical account in which he claims that the Greeks had left nothing on the theory of cubic equations. Indeed, as Khayyam writes, the contributions by earlier writers such as al-Mahani and al-Khazin were to translate geometric problems into algebraic equations (something which was essentially impossible before the work of Muḥammad ibn Mūsā al-Ḵwārizmī). However, Khayyam himself seems to have been the first to conceive a general theory of cubic equations. Omar Khayyám saw a strong relationship between geometry and algebra, and was moving in the right direction when he helped to close the gap between numerical and geometric algebra with his geometric solution of the general cubic equations, but the decisive step in analytic geometry came later with René Descartes.

Persian mathematician Sharafeddin Tusi (born 1135) did not follow the general development that came through al-Karaji's school of algebra but rather followed Khayyam's application of algebra to geometry. He wrote a treatise on cubic equations, entitled Treatise on Equations, which represents an essential contribution to another algebra which aimed to study curves by means of equations, thus inaugurating the study of algebraic geometry.
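Khayyam's particular cubic can also be checked with modern numerical methods. The short Python sketch below is purely illustrative (Khayyam, of course, worked geometrically with a hyperbola and a circle, and numerically with trigonometric tables); it brackets and refines the positive root of x³ + 200x = 20x² + 2000:

    # Find the positive root of x^3 + 200x = 20x^2 + 2000 by bisection.
    def f(x):
        return x**3 - 20 * x**2 + 200 * x - 2000   # the equation rearranged to f(x) = 0

    lo, hi = 0.0, 30.0            # f(0) < 0 and f(30) > 0, so a root lies in between
    for _ in range(60):           # bisection halves the bracketing interval each step
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid

    print((lo + hi) / 2)          # approximately 15.44, the root Khayyam approximated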
In the early 11th century, Ibn al-Haytham (Alhazen) made the first attempt at proving the Euclidean parallel postulate, the fifth postulate in Euclid's Elements, using a proof by contradiction, where he introduced the concept of motion and transformation into geometry. He formulated the Lambert quadrilateral, which Boris Abramovich Rozenfeld names the "Ibn al-Haytham–Lambert quadrilateral", and his attempted proof also shows similarities to Playfair's axiom.

In the late 11th century, Omar Khayyám made the first attempt at formulating a non-Euclidean postulate as an alternative to the Euclidean parallel postulate, and he was the first to consider the cases of elliptical geometry and hyperbolic geometry, though he excluded the latter. In Commentaries on the difficult postulates of Euclid's book Khayyam made a contribution to non-Euclidean geometry, although this was not his intention. In trying to prove the parallel postulate he accidentally proved properties of figures in non-Euclidean geometries. Khayyam also gave important results on ratios in this book, extending Euclid's work to include the multiplication of ratios. The importance of Khayyam's contribution is that he examined both Euclid's definition of equality of ratios (which was that first proposed by Eudoxus) and the definition of equality of ratios as proposed by earlier Islamic mathematicians such as al-Mahani which was based on continued fractions. Khayyam proved that the two definitions are equivalent. He also posed the question of whether a ratio can be regarded as a number but leaves the question unanswered.

The Khayyam-Saccheri quadrilateral was first considered by Omar Khayyam in the late 11th century in Book I of Explanations of the Difficulties in the Postulates of Euclid. Unlike many commentators on Euclid before and after him (including of course Saccheri), Khayyam was not trying to prove the parallel postulate as such but to derive it from an equivalent postulate he formulated from "the principles of the Philosopher" (Aristotle):

- Two convergent straight lines intersect and it is impossible for two convergent straight lines to diverge in the direction in which they converge.

Khayyam then considered the three cases right, obtuse, and acute that the summit angles of a Saccheri quadrilateral can take and after proving a number of theorems about them, he (correctly) refuted the obtuse and acute cases based on his postulate and hence derived the classic postulate of Euclid. It wasn't until 600 years later that Giordano Vitale made an advance on the understanding of this quadrilateral in his book Euclide restituo (1680, 1686), when he used it to prove that if three points are equidistant on the base AB and the summit CD, then AB and CD are everywhere equidistant. Saccheri himself based the whole of his long, heroic and ultimately flawed proof of the parallel postulate around the quadrilateral and its three cases, proving many theorems about its properties along the way.

In 1250, Nasīr al-Dīn al-Tūsī, in his Al-risala al-shafiya'an al-shakk fi'l-khutut al-mutawaziya (Discussion Which Removes Doubt about Parallel Lines), wrote detailed critiques of the Euclidean parallel postulate and of Omar Khayyám's attempted proof a century earlier. Nasir al-Din attempted to derive a proof by contradiction of the parallel postulate. He was one of the first to consider the cases of elliptical geometry and hyperbolic geometry, though he ruled out both of them.
His son, Sadr al-Din (sometimes known as "Pseudo-Tusi"), wrote a book on the subject in 1298, based on al-Tusi's later thoughts, which presented one of the earliest arguments for a non-Euclidean hypothesis equivalent to the parallel postulate. Sadr al-Din's work was published in Rome in 1594 and was studied by European geometers. This work marked the starting point for Giovanni Girolamo Saccheri's work on the subject, and eventually the development of modern non-Euclidean geometry. A proof from Sadr al-Din's work was quoted by John Wallis and Saccheri in the 17th and 18th centuries. They both derived their proofs of the parallel postulate from Sadr al-Din's work, while Saccheri also derived his Saccheri quadrilateral from Sadr al-Din, who himself based it on his father's work.

The theorems of Ibn al-Haytham (Alhazen), Omar Khayyam and Nasir al-Din al-Tusi on quadrilaterals, including the Lambert quadrilateral and Saccheri quadrilateral, were the first theorems on elliptical geometry and hyperbolic geometry, and along with their alternative postulates, such as Playfair's axiom, these works marked the beginning of non-Euclidean geometry and had a considerable influence on its development among later European geometers, including Witelo, Levi ben Gerson, Alfonso, John Wallis, and Giovanni Girolamo Saccheri.

The early Indian works on trigonometry were translated and expanded in the Muslim world by Arab and Persian mathematicians. They enunciated a large number of theorems which freed the subject of trigonometry from dependence upon the complete quadrilateral, as was the case in Hellenistic mathematics due to the application of Menelaus' theorem. According to E. S. Kennedy, it was after this development in Islamic mathematics that "the first real trigonometry emerged, in the sense that only then did the object of study become the spherical or plane triangle, its sides and angles."

In the early 9th century, Muhammad ibn Mūsā al-Khwārizmī (c. 780-850) produced tables for the trigonometric functions of sine and cosine, and the first tables for tangents. He was also an early pioneer in spherical trigonometry. In 830, Habash al-Hasib al-Marwazi produced the first tables of cotangents as well as tangents. Muhammad ibn Jābir al-Harrānī al-Battānī (853-929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants, which he referred to as a "table of shadows" (in reference to the shadow of a gnomon), for each degree from 1° to 90°. He also formulated a number of important trigonometrical relationships. By the 10th century, in the work of Abū al-Wafā' al-Būzjānī (940-998), Muslim mathematicians were using all six trigonometric functions, and had sine tables in 0.25° increments, to 8 decimal places of accuracy, as well as tables of tangent values. Abū al-Wafā' also developed further trigonometric identities. Also in the late 10th and early 11th centuries, the Egyptian astronomer Ibn Yunus performed many careful trigonometric calculations and demonstrated an important product-to-sum identity.

Al-Jayyani (989–1079) of al-Andalus wrote The book of unknown arcs of a sphere, which is considered "the first treatise on spherical trigonometry" in its modern form, although spherical trigonometry in its ancient Hellenistic form was dealt with by earlier mathematicians such as Menelaus of Alexandria, who developed Menelaus' theorem to deal with spherical problems. However, E. S.
Kennedy points out that while it was possible in pre-Islamic mathematics to compute the magnitudes of a spherical figure, in principle, by use of the table of chords and Menelaus' theorem, the application of the theorem to spherical problems was very difficult in practice. Al-Jayyani's work on spherical trigonometry "contains formulae for right-handed triangles, the general law of sines, and the solution of a spherical triangle by means of the polar triangle." This treatise later had a "strong influence on European mathematics", and his "definition of ratios as numbers" and "method of solving a spherical triangle when all sides are unknown" are likely to have influenced Regiomontanus.

The method of triangulation, which was unknown in the Greco-Roman world, was also first developed by Muslim mathematicians, who applied it to practical uses such as surveying and Islamic geography, as described by Abū Rayhān al-Bīrūnī in the early 11th century. In the late 11th century, Omar Khayyám (1048-1131) solved cubic equations using approximate numerical solutions found by interpolation in trigonometric tables. All of these earlier works on trigonometry treated it mainly as an adjunct to astronomy; the first treatment as a subject in its own right was by Nasīr al-Dīn al-Tūsī in the 13th century. He also developed spherical trigonometry into its present form, and listed the six distinct cases of a right-angled triangle in spherical trigonometry. In his On the Sector Figure, he also stated the law of sines for plane and spherical triangles, discovered the law of tangents for spherical triangles, and provided proofs for these laws.

Jamshīd al-Kāshī (1393-1449) provided the first explicit statement of the law of cosines in a form suitable for triangulation; in France, the law of cosines is still referred to as the Théorème d'Al-Kashi (Theorem of Al-Kashi). He also gives trigonometric tables of values of the sine function to four sexagesimal digits (equivalent to 8 decimal places) for each 1° of argument, with differences to be added for each 1/60 of 1°. In one of his numerical approximations of π, he correctly computed 2π to 9 sexagesimal digits. In order to determine sin 1°, al-Kashi also discovered a triple-angle formula often attributed to François Viète in the 16th century. His colleague Ulugh Beg (1394-1449) gave accurate tables of sines and tangents correct to 8 decimal places.

Taqi al-Din (1526-1585) contributed to trigonometry in his Sidrat al-Muntaha, in which he was the first mathematician to compute a highly accurate numeric value for sin 1°. He discusses the values given by his predecessors, explaining how Ptolemy (ca. 150) used an approximate method to obtain his value of sin 1° and how Abū al-Wafā, Ibn Yunus (ca. 1000), al-Kashi, Qāḍī Zāda al-Rūmī (1337-1412), Ulugh Beg and Mirim Chelebi improved on the value. Taqi al-Din then solves the problem to obtain the value of sin 1° to a precision of 8 sexagesimals (the equivalent of 14 decimal places).
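For reference, the trigonometric results mentioned above are usually written today as follows; these are the commonly cited modern forms, not the notation of the original treatises, and the attributions for the last two identities follow the standard accounts:

    Law of sines (plane form; stated by al-Tusi for plane and spherical triangles):
        a / sin A = b / sin B = c / sin C

    Law of cosines ("Théorème d'Al-Kashi"):
        c² = a² + b² − 2ab·cos C

    Triple-angle relation used by al-Kashi to compute sin 1° from sin 3°:
        sin 3θ = 3 sin θ − 4 sin³ θ

    Product-to-sum identity attributed to Ibn Yunus:
        cos a · cos b = ½ [cos(a − b) + cos(a + b)]

    Double-angle identity associated with Abū al-Wafā':
        sin 2x = 2 sin x · cos x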
Around 1000 AD, Al-Karaji, using mathematical induction, found a proof for the sum of integral cubes. The historian of mathematics, F. Woepcke, praised Al-Karaji for being "the first who introduced the theory of algebraic calculus." Shortly afterwards, Ibn al-Haytham (known as Alhazen in the West), an Iraqi mathematician working in Egypt, was the first mathematician to derive the formula for the sum of the fourth powers, and using an early proof by mathematical induction, he developed a method for determining the general formula for the sum of any integral powers. He used his result on sums of integral powers to perform an integration, in order to find the volume of a paraboloid. He was thus able to find the integrals for polynomials up to the fourth degree, and came close to finding a general formula for the integrals of any polynomials. This was fundamental to the development of infinitesimal and integral calculus. His results were repeated by the Moroccan mathematicians Abu-l-Hasan ibn Haydur (d. 1413) and Abu Abdallah ibn Ghazi (1437-1514), by Jamshīd al-Kāshī (c. 1380-1429) in The Calculator's Key, and by the Indian mathematicians of the Kerala school of astronomy and mathematics in the 15th-16th centuries.

In the 12th century, the Persian mathematician Sharaf al-Dīn al-Tūsī was the first to discover the derivative of cubic polynomials, an important result in differential calculus. His Treatise on Equations developed concepts related to differential calculus, such as the derivative function and the maxima and minima of curves, in order to solve cubic equations which may not have positive solutions. For example, in order to solve the equation x³ + a = bx, al-Tusi finds the maximum point of the curve y = bx − x³. He uses the derivative of the function to find that the maximum point occurs at x = √(b/3), and then finds the maximum value for y, namely (2b/3)√(b/3), by substituting x = √(b/3) back into y = bx − x³. He finds that the equation has a solution if a ≤ (2b/3)√(b/3), and al-Tusi thus deduces that the equation has a positive root if D = b³/27 − a²/4 ≥ 0, where D is the discriminant of the equation.

Geometric art and architecture

Geometric artwork in the form of the Arabesque was not widely used in the Middle East or Mediterranean Basin until the golden age of Islam came into full bloom, when Arabesque became a common feature of Islamic art. Euclidean geometry as expounded on by Al-Abbās ibn Said al-Jawharī (ca. 800-860) in his Commentary on Euclid's Elements, the trigonometry of Aryabhata and Brahmagupta as elaborated on by Muhammad ibn Mūsā al-Khwārizmī (ca. 780-850), and the development of spherical geometry by Abū al-Wafā' al-Būzjānī (940–998) and spherical trigonometry by Al-Jayyani (989-1079) for determining the Qibla and times of Salah and Ramadan, all served as an impetus for the art form that was to become the Arabesque.

Recent discoveries have shown that geometrical quasicrystal patterns were first employed in the girih tiles found in medieval Islamic architecture dating back over five centuries. In 2007, Professor Peter Lu of Harvard University and Professor Paul Steinhardt of Princeton University published a paper in the journal Science suggesting that girih tilings possessed properties consistent with self-similar fractal quasicrystalline tilings such as the Penrose tilings, predating them by five centuries.

An impetus behind mathematical astronomy came from Islamic religious observances, which presented a host of problems in mathematical astronomy, particularly in spherical geometry. In solving these religious problems the Islamic scholars went far beyond the Greek mathematical methods. For example, predicting just when the crescent moon would become visible is a special challenge to Islamic mathematical astronomers.
Although Ptolemy's theory of the complex lunar motion was tolerably accurate near the time of the new moon, it specified the moon's path only with respect to the ecliptic. To predict the first visibility of the moon, it was necessary to describe its motion with respect to the horizon, and this problem demands fairly sophisticated spherical geometry. Finding the direction of Mecca and the time of Salah are the reasons which led to Muslims developing spherical geometry. Solving any of these problems involves finding the unknown sides or angles of a triangle on the celestial sphere from the known sides and angles. A way of finding the time of day, for example, is to construct a triangle whose vertices are the zenith, the north celestial pole, and the sun's position. The observer must know the altitude of the sun and that of the pole; the former can be observed, and the latter is equal to the observer's latitude. The time is then given by the angle at the intersection of the meridian (the arc through the zenith and the pole) and the sun's hour circle (the arc through the sun and the pole).

The Zij treatises were astronomical books that tabulated the parameters used for astronomical calculations of the positions of the Sun, Moon, stars, and planets. Their principal contributions to mathematical astronomy reflected improved trigonometrical, computational and observational techniques. The Zij books were extensive, and typically included materials on chronology, geographical latitudes and longitudes, star tables, trigonometrical functions, functions in spherical astronomy, the equation of time, planetary motions, computation of eclipses, tables for first visibility of the lunar crescent, astronomical and/or astrological computations, and instructions for astronomical calculations using epicyclic geocentric models. Some zījes go beyond this traditional content to explain or prove the theory or report the observations from which the tables were computed.

In observational astronomy, Muhammad ibn Mūsā al-Khwārizmī's Zij al-Sindh (830) contains trigonometric tables for the movements of the sun, the moon and the five planets known at the time. Al-Farghani's A compendium of the science of stars (850) corrected Ptolemy's Almagest and gave revised values for the obliquity of the ecliptic, the precessional movement of the apogees of the sun and the moon, and the circumference of the earth. Muhammad ibn Jābir al-Harrānī al-Battānī (853-929) discovered that the direction of the Sun's eccentric was changing, and studied the times of the new moon, lengths for the solar year and sidereal year, prediction of eclipses, and the phenomenon of parallax. Around the same time, Yahya Ibn Abi Mansour wrote the Al-Zij al-Mumtahan, in which he completely revised the Almagest values. In the 10th century, Abd al-Rahman al-Sufi (Azophi) carried out observations on the stars and described their positions, magnitudes, brightness, and colour, and provided drawings for each constellation in his Book of Fixed Stars (964). Ibn Yunus observed more than 10,000 entries for the sun's position for many years using a large astrolabe with a diameter of nearly 1.4 meters. His observations on eclipses were still used centuries later in Simon Newcomb's investigations on the motion of the moon, while his other observations inspired Laplace's Obliquity of the Ecliptic and Inequalities of Jupiter and Saturn.

In the late 10th century, Abu-Mahmud al-Khujandi accurately computed the axial tilt to be 23°32'19" (23.53°), which was a significant improvement over the Greek and Indian estimates of 23°51'20" (23.86°) and 24°, and still very close to the modern measurement of 23°26' (23.44°). In 1006, the Egyptian astronomer Ali ibn Ridwan observed SN 1006, the brightest supernova in recorded history, and left a detailed description of the temporary star. He says that the object was two to three times as large as the disc of Venus and about one-quarter the brightness of the Moon, and that the star was low on the southern horizon. In 1031, al-Biruni's Canon Mas’udicus introduced the mathematical technique of analysing the acceleration of the planets, and first states that the motions of the solar apogee and the precession are not identical. Al-Biruni also discovered that the distance between the Earth and the Sun is larger than Ptolemy's estimate, on the basis that Ptolemy disregarded the annual solar eclipses.

During the "Maragha Revolution" of the 13th and 14th centuries, Muslim astronomers realized that astronomy should aim to describe the behavior of physical bodies in mathematical language, and should not remain a mathematical hypothesis, which would only save the phenomena. The Maragha astronomers also realized that the Aristotelian view of motion in the universe being only circular or linear was not true, as the Tusi-couple showed that linear motion could also be produced by applying circular motions only. Unlike the ancient Greek and Hellenistic astronomers who were not concerned with the coherence between the mathematical and physical principles of a planetary theory, Islamic astronomers insisted on the need to match the mathematics with the real world surrounding them, which gradually evolved from a reality based on Aristotelian physics to one based on an empirical and mathematical physics after the work of Ibn al-Shatir. The Maragha Revolution was thus characterized by a shift away from the philosophical foundations of Aristotelian cosmology and Ptolemaic astronomy and towards a greater emphasis on the empirical observation and mathematization of astronomy and of nature in general, as exemplified in the works of Ibn al-Shatir, al-Qushji, al-Birjandi and al-Khafri. In particular, Ibn al-Shatir's geocentric model was mathematically identical to the later heliocentric Copernican model.

Mathematical geography and geodesy

The Muslim scholars, who held to the spherical Earth theory, used it in an impeccably Islamic manner, to calculate the distance and direction from any given point on the earth to Mecca. This determined the Qibla, or Muslim direction of prayer. Muslim mathematicians developed spherical trigonometry which was used in these calculations. Around 830, Caliph al-Ma'mun commissioned a group of astronomers to measure the distance from Tadmur (Palmyra) to al-Raqqah, in modern Syria. They found the cities to be separated by one degree of latitude and the distance between them to be 66 2/3 miles, and thus calculated the Earth's circumference to be 24,000 miles. Another estimate given by Al-Farghānī was 56 2/3 Arabic miles per degree, which corresponds to 111.8 km per degree and a circumference of 40,248 km, very close to the modern values of 111.3 km per degree and 40,068 km circumference, respectively.
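The circumference figures quoted above follow from simple arithmetic, as the short Python sketch below illustrates. Note that the kilometre length of the Arabic mile used here (about 1.97 km) is simply the value implied by matching 56 2/3 Arabic miles per degree to the 111.8 km per degree stated in the text, not an independently sourced figure:

    # Reproduce the two circumference estimates quoted above.
    degrees = 360

    # Caliph al-Ma'mun's survey: 66 2/3 miles per degree of latitude
    mamun_miles_per_degree = 66 + 2 / 3
    print(round(mamun_miles_per_degree * degrees))   # 24000 miles

    # Al-Farghani: 56 2/3 Arabic miles per degree, quoted as 111.8 km per degree
    km_per_degree = 111.8
    arabic_mile_km = km_per_degree / (56 + 2 / 3)    # implied length of one Arabic mile
    print(round(arabic_mile_km, 3))                  # about 1.973 km (implied conversion)
    print(round(km_per_degree * degrees))            # 40248 km, vs. the modern 40,068 km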
In mathematical geography, Abū Rayhān al-Bīrūnī, around 1025, was the first to describe a polar equi-azimuthal equidistant projection of the celestial sphere. He was also regarded as the most skilled when it came to mapping cities and measuring the distances between them, which he did for many cities in the Middle East and western Indian subcontinent. He often combined astronomical readings and mathematical equations, in order to develop methods of pin-pointing locations by recording degrees of latitude and longitude. He also developed similar techniques when it came to measuring the heights of mountains, depths of valleys, and expanse of the horizon, in The Chronology of the Ancient Nations. He also discussed human geography and the planetary habitability of the Earth. He hypothesized that roughly a quarter of the Earth's surface is habitable by humans, and also argued that the shores of Asia and Europe were "separated by a vast sea, too dark and dense to navigate and too risky to try" in reference to the Atlantic Ocean and Pacific Ocean.

Abū Rayhān al-Bīrūnī is considered the father of geodesy for his important contributions to the field, along with his significant contributions to geography and geology. At the age of 17, al-Biruni calculated the latitude of Kath, Khwarazm, using the maximum altitude of the Sun. Al-Biruni also solved a complex geodesic equation in order to accurately compute the Earth's circumference, and his result was close to the modern value. His estimate of 6,339.9 km for the Earth's radius was only 16.8 km less than the modern value of 6,356.7 km. In contrast to his predecessors, who measured the Earth's circumference by sighting the Sun simultaneously from two different locations, al-Biruni developed a new method of using trigonometric calculations, based on the angle between a plain and a mountain top, which yielded more accurate measurements of the Earth's circumference and made it possible for it to be measured by a single person from a single location.

Ibn al-Haytham's work on geometric optics, particularly catoptrics, in "Book V" of the Book of Optics (1021) contains the important mathematical problem known as "Alhazen's problem" (Alhazen is the Latinized name of Ibn al-Haytham). It comprises drawing lines from two points in the plane of a circle meeting at a point on the circumference and making equal angles with the normal at that point. This leads to an equation of the fourth degree. This eventually led Ibn al-Haytham to derive the earliest formula for the sum of the fourth powers, and using an early proof by mathematical induction, he developed a method for determining the general formula for the sum of any integral powers, which was fundamental to the development of infinitesimal and integral calculus. Ibn al-Haytham eventually solved "Alhazen's problem" using conic sections and a geometric proof, but Alhazen's problem remained influential in Europe, when later mathematicians such as Christiaan Huygens, James Gregory, Guillaume de l'Hôpital, Isaac Barrow, and many others attempted to find an algebraic solution to the problem, using various methods, including analytic methods of geometry and derivation by complex numbers. Mathematicians were not able to find an algebraic solution to the problem until the end of the 20th century.
Ibn al-Haytham also produced tables of corresponding angles of incidence and refraction of light passing from one medium to another, which show how closely he had approached discovering the law of the constancy of the ratio of sines, later attributed to Snell. He also correctly accounted for twilight being due to atmospheric refraction, estimating the Sun's depression to be 19 degrees below the horizon during the commencement of the phenomenon in the mornings or at its termination in the evenings.

Abū Rayhān al-Bīrūnī (973-1048), and later al-Khazini (fl. 1115-1130), were the first to apply experimental scientific methods to the statics and dynamics fields of mechanics, particularly for determining specific weights, such as those based on the theory of balances and weighing. Muslim physicists applied the mathematical theories of ratios and infinitesimal techniques, and introduced algebraic and fine calculation techniques into the field of statics.

Abu 'Abd Allah Muhammad ibn Ma'udh, who lived in Al-Andalus during the second half of the 11th century, wrote a work on optics later translated into Latin as Liber de crepisculis, which was mistakenly attributed to Alhazen. This was a "short work containing an estimation of the angle of depression of the sun at the beginning of the morning twilight and at the end of the evening twilight, and an attempt to calculate on the basis of this and other data the height of the atmospheric moisture responsible for the refraction of the sun's rays." Through his experiments, he obtained the accurate value of 18°, which comes close to the modern value.

In 1574, Taqi al-Din estimated that the stars are millions of kilometres away from the Earth and that the speed of light is constant; he argued that if light had come from the eye, it would take too long for light "to travel to the star and come back to the eye. But this is not the case, since we see the star as soon as we open our eyes. Therefore the light must emerge from the object not from the eyes."

In the 9th century, al-Kindi was a pioneer in cryptanalysis and cryptology. He gave the first known recorded explanation of cryptanalysis in A Manuscript on Deciphering Cryptographic Messages. In particular, he is credited with developing the frequency analysis method whereby variations in the frequency of the occurrence of letters could be analyzed and exploited to break ciphers (i.e. cryptanalysis by frequency analysis). This was detailed in a text recently rediscovered in the Ottoman archives in Istanbul, A Manuscript on Deciphering Cryptographic Messages, which also covers methods of cryptanalysis, encipherments, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic. Al-Kindi also had knowledge of polyalphabetic ciphers centuries before Leon Battista Alberti. Al-Kindi's book also introduced the classification of ciphers, developed Arabic phonetics and syntax, and described the use of several statistical techniques for cryptanalysis. This book apparently antedates other cryptology references by several centuries, and it also predates writings on probability and statistics by Pascal and Fermat by nearly eight centuries.
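Al-Kindi's frequency analysis is easy to sketch in code. The following minimal Python example is a modern illustration only (shown for English letters rather than Arabic, with a made-up ciphertext produced by a simple shift cipher); it counts ciphertext letter frequencies so they can be compared with the expected frequencies of the language:

    # Count letter frequencies in a ciphertext, the core step of frequency analysis.
    from collections import Counter

    # Hypothetical ciphertext: "THIS IS A SAMPLE MESSAGE ENCIPHERED WITH A SHIFT" shifted by 3.
    ciphertext = "WKLV LV D VDPSOH PHVVDJH HQFLSKHUHG ZLWK D VKLIW"
    letters = [c for c in ciphertext if c.isalpha()]
    counts = Counter(letters)

    # In a simple substitution cipher, the most frequent ciphertext symbols are likely
    # to stand for the most frequent letters of the underlying language.
    for letter, count in counts.most_common(5):
        print(letter, count, f"{count / len(letters):.1%}")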
Ahmad al-Qalqashandi (1355-1418) wrote the Subh al-a 'sha, a 14-volume encyclopedia which included a section on cryptology. This information was attributed to Taj ad-Din Ali ibn ad-Duraihim ben Muhammad ath-Tha 'alibi al-Mausili, who lived from 1312 to 1361, but whose writings on cryptology have been lost. The list of ciphers in this work included both substitution and transposition, and for the first time, a cipher with multiple substitutions for each plaintext letter. Also traced to Ibn al-Duraihim is an exposition on and worked example of cryptanalysis, including the use of tables of letter frequencies and sets of letters which cannot occur together in one word.

The first known proof by mathematical induction was introduced in the al-Fakhri, written by Al-Karaji around 1000 AD, who used it to prove arithmetic sequences such as the binomial theorem, Pascal's triangle, and the sum formula for integral cubes. His proof was the first to make use of the two basic components of an inductive proof, "namely the truth of the statement for n = 1 (1 = 1³) and the deriving of the truth for n = k from that of n = k - 1."

Shortly afterwards, Ibn al-Haytham (Alhazen) used the inductive method to prove the sum of fourth powers, and by extension, the sum of any integral powers, which was an important result in integral calculus. He only stated it for particular integers, but his proof for those integers was by induction and generalizable. Ibn Yahyā al-Maghribī al-Samaw'al came closest to a modern proof by mathematical induction in pre-modern times, which he used to extend the proof of the binomial theorem and Pascal's triangle previously given by al-Karaji. Al-Samaw'al's inductive argument was only a short step from the full inductive proof of the general binomial theorem.
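The identities that al-Karaji and Ibn al-Haytham established by induction can be spot-checked numerically. A brief Python sketch, given only as an illustration, compares the standard closed-form expressions with direct summation for small n:

    # Verify the sum-of-cubes and sum-of-fourth-powers formulas for n = 1 .. 20.
    for n in range(1, 21):
        cubes = sum(k**3 for k in range(1, n + 1))
        fourths = sum(k**4 for k in range(1, n + 1))
        assert cubes == (n * (n + 1) // 2) ** 2
        assert fourths == n * (n + 1) * (2 * n + 1) * (3 * n * n + 3 * n - 1) // 30
    print("both closed forms hold for n = 1 .. 20")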
Sleep has a distinct architecture with five key stages. Stages 1 and 2 are fairly light, while stages 3 and 4 (or delta wave) sleep are the deeper stages of sleep, followed by REM sleep. Stages 1 and 2 are the non-restful sleep, the type of sleep where you toss and turn or wake up continuously during the night. With only this type of sleep, you will wake up tired and feel a lack of energy during the day. Stages 3 and 4 (delta wave) sleep are the deeper stages of sleep. These are the recuperative stages, the sleep that produces growth hormone, which results in the repair and healing of your body. Stage 5 is REM sleep. This is usually when you will dream. REM sleep will usually occur three to five times each night, each episode increasing in length as the night wears on.

Sleep is necessary for us to maintain our health in a variety of areas:
- Memory and learning
- Mood enhancement and social behavior
- Nervous system
- Immune system
- Growth and development

Without the deep, recuperative sleep that is necessary for good health, you can suffer from many undesired effects. Studies have shown that sleep deprivation may be associated with:
- Poor decision-making, poor judgment, increased risk-taking
- Poor performance in school, on the job, and in sports
- Impaired driving performance and increased risk of car accidents
- Increased incidence of obesity, diabetes, illness in general, high blood pressure, and heart disease
- Impaired memory, concentration, and ability to learn
- Physical impairment, poor coordination, delayed reaction time
- Anxiety, depression, and other emotional problems
- Magnification of the effects of alcohol on the body
- Exacerbation of the symptoms of ADHD, such as impulse control, irritability, and lack of concentration
If you use your computer to solve math problems or to create documents or presentations that have typed mathematical expressions in them, Math Input Panel makes the process easier and more natural. Math Input Panel uses the math recognizer that's built into Windows 7 to recognize handwritten math expressions. You can then insert the recognized math into a word-processing or computational program. Math Input Panel is designed to be used with a tablet pen on a Tablet PC, but you can use it with any input device, such as a touchscreen, external digitizer, or even a mouse.

Open Math Input Panel by clicking the Start button. In the search box, type Math Input Panel, and then, in the list of results, tap Math Input Panel. Write a well-formed math expression in the writing area. The recognized math is shown in the preview area. Make any necessary corrections to the math recognition. (To learn how, see "To make corrections" later in this topic.) Tap Insert to put the recognized math into your word-processing or computational program. Math Input Panel can only insert math into programs that support Mathematical Markup Language (MathML).

If your handwritten math is misrecognized, you can correct it either by selecting an alternate or by rewriting some of the expression. Here's how to correct an expression:
- Hold down the pen button (or perform another right-click equivalent) and draw a circle around the part of the expression that was misrecognized. To select an individual symbol, perform a right-click equivalent while tapping the symbol. Or, tap the Select and Correct button, and then tap the symbol or draw a circle to select the part of the expression that was misrecognized.
- Tap an alternate from the list. If what you wrote isn't on the list of alternates, try rewriting the part of the expression that you selected.
- Tap Insert to insert the recognized math into the active program. If you tapped Select and Correct but want to continue writing in Math Input Panel, tap Write to continue writing.

It's more likely that your math expression will be recognized correctly if you complete the whole expression before making any corrections. (The more of the expression that you write, the better the chance that it will be recognized correctly.)

With the History menu, you can use an expression that you've already written as the baseline for a new expression. This is useful when you need to write similar expressions several times in a row—for example, when working out a mathematical proof. To do this, tap History, and then tap the expression that you want to use. Your handwritten expression appears in the writing area, where you can make changes. After you make changes, the expression is recognized again, and you can insert it into a document, presentation, or computational program.

If you use a note-taking program such as Windows Journal to take handwritten notes on a Tablet PC, you can convert the math from your notes so that you can use it in a word-processing or computational program. To convert notes from Journal, do the following:
- Open Windows Journal by tapping the Start button, typing Journal in the search box, and then tapping Windows Journal in the list of results.
- Open your math notes and the program that you want to insert the recognized math into.
- In Journal, use the selection tool to select the expression that you want to convert.
- Drag the selected math expression from Journal into Math Input Panel, and then make any necessary corrections.
In the program where you want to put the recognized math expression, tap Insert. For more information, see Math Input Panel: frequently asked questions.
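To give a rough idea of what MathML looks like, the small Python sketch below builds the markup that a MathML-aware program might receive for the expression x². This is a generic illustration, not output captured from Math Input Panel itself:

    # Build a tiny MathML fragment for x^2 using the standard library.
    import xml.etree.ElementTree as ET

    math = ET.Element("math", xmlns="http://www.w3.org/1998/Math/MathML")
    msup = ET.SubElement(math, "msup")           # superscript element: base, then exponent
    ET.SubElement(msup, "mi").text = "x"         # mi = identifier
    ET.SubElement(msup, "mn").text = "2"         # mn = number

    print(ET.tostring(math, encoding="unicode"))
    # <math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>x</mi><mn>2</mn></msup></math>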
A genre of map that uses illustrations to convey information about geographic locations, pictorial maps are an important but underused resource. Many of these maps are found in the division's single map file under the term “pictorial map” for a geographic area, followed by the date. A sampling of pictorial maps of the United States dated from 1900 to 1950 reveals an abundance of material about cultural attitudes toward women. More than half of the maps surveyed show at least one female figure, often portrayed stereotypically in activities and settings that reflect social and cultural norms for females. A variety of women from different racial and cultural backgrounds are shown. White women are depicted in familiar roles that include teachers, nurses, bathing beauties, westward-bound pioneers in covered wagons, and southern belles. Native American women are almost always shown performing chores, such as cleaning animal hides, weaving, and making pottery or baskets, usually in close proximity to their tepee homes and often with infants on their backs. African American women are often shown picking cotton, in contrast to only one white woman shown at work in the cotton fields. There is also a dramatic depiction of an African American slave and her child escaping via the underground railroad.

Americans of Negro Lineage by Louise E. Jefferson (New York: Friendship Press, 1946; G370.A5 1946.J4 no.12) shows in great detail the contributions African Americans have made to American society and is one of the few maps in the Library's collections to include illustrations of black women as nurses, teachers, housewives, performing artists from the theater, musical, and motion picture industries, musicians, explorers, WAVES and WACS, journalists, and bankers. Another map rich in detail about women's ethnicity and regional roles was drawn by Dorothea Dix Lawrence. Her Folklore Music Map of the United States depicts a multicultural society, with a wide variety of women in traditional dress (see illustration). Black, Hispanic, Cajun, Creole, Native American, and white women dressed in ethnic clothing are shown in close proximity to the areas where these groups lived in the 1940s. Two manuscript pictorial maps created by the Federal Theatre Project document the tours of actresses Fanny Davenport and Lotta Crabtree, providing sketched portraits of them in costume for their stage appearances and showing illustrations of the theaters where they performed ([Federal Theatre Project: Tours by Famous Actors and Actresses 1865?-1904]. G3701. E645 1904.U Vault).

Also among the pictorial maps are literary maps, which show places associated with authors and their works. Literary and other pictorial maps published within the past thirty years are more likely to have included women and minority authors as subjects than those of earlier periods. Language of the Land (Washington: Library of Congress, 1999; Z6026.L57 H66 1999; G1046.E65) by Martha Hopkins and Michael Buscher is one such map.

“Diagram of the South Part of Shaker Village, Canterbury, N.H.” Peter Foster. Colored manuscript map (Vault; G3744.S5 1849). Geography and Map Division.

Especially valuable to studies of gendered spaces are the detailed manuscript pictorial maps produced by members of the Shaker congregations or, as they called themselves, “families.” A religious sect that lived communally and believed in the equality of men and women, the Shakers made simple but elegant drawings depicting the neat, tidy villages in which they lived.
Although they considered men and women to be equal, the Shakers also believed in celibacy. Except for the church, which always had double doors so that the men and women could enter on an equal footing, each sex had its own buildings, labeled on the maps, showing where they spent their time and what activities were associated with both space and gender (see illustration).
When Nereus is operating in ROV mode during Leg 2, the manipulator mounted on the front of the vehicle will be used to collect sulfide and host-rock samples that will be analyzed for mineralogy and chemical composition, providing important data for NASA-JPL investigator Dr. Max Coleman. He will use these samples to constrain the total geologic energy available to sustain these unique hydrothermal ecosystems and the extent to which the energy is utilized or lost to the environment at each trophic level. Dr. Coleman describes below how this investigation at Earth’s deepest hydrothermal vents can be connected to detecting life on Europa, the icy moon of Jupiter.

How Most Life on Earth Gets its Energy

Let’s deal with the relatively easy part first, life on Earth. The building blocks of life, organic compounds, are either made by organisms or acquired by consuming organisms that have made them. At the bottom of the food chain are the organisms, like plants, that make organic matter by reacting carbon dioxide with water using energy from sunlight and releasing oxygen as a byproduct; this process is photosynthesis. Most life on Earth depends on photosynthesis as the base of the food chain. Organic matter produced by photosynthesis at the surface sinks through the water column, where it is successively removed by microorganisms digesting it by reacting it with oxygen dissolved in the ocean water, in very much the same way as we “burn” carbohydrates using oxygen breathed in to gain energy. In our previous work on the hydrothermal vent fields of the MCR we have shown that there is very little photosynthetic organic matter present at the shallower site at 2300 m depth, Von Damm, and effectively none at the world’s deepest vent site at 4960 m, Piccard, though both have a little dissolved oxygen. So, since we have observed an abundant ecosystem at Piccard, how does it get its organic matter?

How the Hydrothermal Vent Community Gets its Energy

Around the hydrothermal vents a process similar in effect to photosynthesis occurs, but it is very different. Carbon dioxide, dissolved in ocean water, reacts to form organic matter, but instead of sunlight as the energy source, microorganisms use the vent fluids’ chemical energy; this process is called chemosynthesis. The metabolic processes of chemosynthetic bacteria at the MCR sites consume some of the little oxygen left in deep water, instead of releasing it. Many of the chemosynthetic bacteria are hosted by symbiotic organisms. For example, at Piccard there are shrimp, which have specially adapted gills to accommodate the bacteria. The shrimp position themselves at the optimal interface between the toxic sulfide-rich vent fluid and the somewhat-oxygenated, normal bottom water to allow the bacteria to flourish – and then harvest them. There are predators of the shrimp, and in one remarkable specimen we sampled a fish in whose gut was a shrimp with its symbiotic bacteria. So in one specimen we had three levels of the food chain. To understand the details of the food chain we separate the very many individual biochemicals, measure their relative abundances and, more importantly, their stable isotopic compositions. For example, there are two stable (that is, nonradioactive) isotopes of carbon – 99% of all carbon has an atomic weight of 12 and 1%, 13. There are small variations in the 1% C-13 abundance, which are characteristic of where the material came from, and we use these values to trace the origins and pathways of use of the various biochemicals.
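Those small variations in carbon-13 abundance are conventionally reported in "delta" notation relative to a standard. The definition below is the standard general form, given here as background rather than as a description of this project's specific analyses:

    δ¹³C = [ (¹³C/¹²C)sample / (¹³C/¹²C)standard − 1 ] × 1000 ‰

A sample whose ¹³C/¹²C ratio is, for example, 1% lower than that of the standard therefore has δ¹³C = −10 ‰.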
Even though this ecosystem depends on the chemistry of the vent fluid, it also needs oxygen, which originates ultimately from photosynthesis. So, how is this an analog for Europa?

Possibility for Life on Europa?

Europa is slightly smaller than Earth’s moon but is very different in composition. It is believed to have an iron core in the center of a planetary body with a rocky composition like the Earth’s mantle. Above that lies a deep salty ocean, topped at the surface by a thick layer of ice. This ocean is in direct contact with the rocky mantle, and tidal heating and volcanic activity could potentially drive hydrothermal circulation. Theoretical studies of the effect of Jupiter’s radiation field on the ice suggest that some of it may be broken down, releasing oxygen, which could migrate through the ice and dissolve in Europa’s ocean. In fact, the calculations suggest possibly an even greater oxygen concentration than that found at depth in our oceans. Thus, Europa has the potential to support a chemosynthetically based ecosystem, similar in abundance though unlikely to be similar in kind to that found at the MCR.
This lab report format was inspired in large part by Joseph Porter of Falmouth High School. A lab report is a technical means of telling a story. It should be constructed so that you tell the reader: the theory you are testing, how you are testing it, the results you obtained and a discussion of their significance. The sections below will help you to effectively demonstrate your understanding of the lab and the related theory. Cover page. Contains: experiment title, your name, class name and block, and date of lab. A picture or graphic or drawing may be used but is not required. Introduction. Gives basic scientific theory or background information about the experiment. Explains how a theory is being put to use or tested in the lab. This section is very important as it is here that you are best able to demonstrate that you understand the scientific background of the lab. Be careful not to simply paraphrase the background information given in the lab handout! Use your own words. Also, make a connection between what you did and what you measured in the lab and the scientific background information. Procedure. Summarizes what was done during the lab. Written in first person, past tense, active voice. The level of detail should not be excessive but should be sufficient for the reader to repeat your experiments exactly. Tell what you actually did—do not paraphrase the lab handout. Include your observations from the lab in this section so that someone repeating your procedure will know if they are on the right track. Your observations are absolutely essential. This section may run from 3 - 5 paragraphs. Data and graphs. The data section briefly presents the data that was collected. Raw data sheets written during the lab are never presented as part of the formal lab write-up. Construct a neat and well thought-out table to most effectively show your results. Tables should be numbered and titled below the table. If the lab requires it, you will create informative graphs to enhance your discussion of the results. Graphs should be numbered and titled as figures (Figure 1, Figure 2, etc). You must discuss the meaning or significance of the graph in the analysis section. Do not include a graph and fail to mention it in your text. Refer to graphs by number. Sample Calculations. The sample calculation should include the equation with variables first, then with numbers and units substituted for the variables and finally the final result. Equations should be numbered and referenced in your analysis section. Note: you may have used the equation many times, but you only have to show it once. Analysis. The most important section of a lab report. This is where you specifically answer the objective questions. Report numerical results with plus-or-minus amounts and percent error. Discuss physical reasons for the size of your error: come up with a scenario and work out its consequences. If your data were highly variable (that is, imprecise) then discuss physical reasons for this. Try to account for the size and direction of any inaccuracy if you have a standard for comparison. Explain the relationship between your results (refer to data tables by title and number) and the answers to the questions posed in the lab handout. The Analysis will frequently have several parts as you answer some questions that were provided with the lab handout. All answers must be in paragraph form as in an essay. Do not number answers to questions! 
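As an illustration of the Sample Calculations format described above, a density measurement followed by a percent-error calculation might be laid out as follows (the numbers are hypothetical, not taken from any particular lab):

    Equation 1:  density = mass / volume
                 density = 25.8 g / 10.0 cm³ = 2.58 g/cm³

    Equation 2:  percent error = |measured − accepted| / accepted × 100%
                 percent error = |2.58 − 2.70| / 2.70 × 100% = 4.4%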
Scientific explanations usually consist of a claim (also called a hypothesis), evidence for the claim (collected during the lab activity), and the reasoning that connects the two. Here is an example of writing that fulfills the above description: “I know the candle burns vaporized wax because of four key observations. First, the flame moves in the air the way a gas would, flickering and reacting to air currents. Second, the flame dies down after the candle is first lit but grows again after the wax melts. Third, I can collect liquid wax on a watch glass placed in the clear bottom part of the flame. This shows that the wax must have vaporized because the glass never touched the melted wax at the top of the candle but collected wax on it simply by being in the lower part of the flame. Finally, when the candle is extinguished the smoke is flammable and allows the candle to be lit by a flame placed in the stream of smoke. This shows that vaporized wax is the fuel in a candle flame because the stream of smoke, being flammable, must be the material which was feeding the flame until it went out.”

Conclusions. This section describes the overall success of the experiments. Discusses whether or not the results were expected, whether errors may have occurred or alternative experimental techniques may have been used for better results. Describes how your understanding of the theory has been improved by performing the lab. How does the lab apply to real life? What did you learn? How would you make the lab better? How could the lab be extended?

Make sure your report is neat: typed, stapled or bound, pages are numbered, sections (above) are labeled and in order, no misspellings, no typos, font size 12 pt, margins of 1", 1.5 line spacing. Reports are usually 2 - 4 pages long.

Read the lab handout carefully several times and refer to it when you are doing your labs. Typically the lab handout will spell out what you must consider in your analysis. Make sure you address each question.

Write your report for a stranger. Don’t assume your reader knows too much about science or the lab. If your lab is written properly, this type of reader should be able to make sense of what you did during the lab and what you have learned.

Be concise. This means present all the relevant information as clearly but as briefly as possible. Extra descriptive words are not useful in scientific writing. This is not a class where you are being graded on how many pages you write. Often the worst reports tend to be the longest ones, as a lack of knowledge causes the writer to go on and on in the hope of stumbling on the right answer.

Some students failed to include their observations with the procedure. The procedure is meant to relate to others how to do the experiment. Your observations will be very helpful in assuring others that they really are replicating your work. When writing about your observations, do not write that you carefully noted them without writing down what they are!

Sample calculations must be typed separately from the Analysis section. Do not describe calculations! Just give your results and the interpretation of your results. The Analysis section of the report is meant to interpret your lab results. Discuss your observations and explain how they are connected with the conclusions of the lab.

In writing your report be sure to write about what you learned; do not simply state that you learned about the topic.

Remember that all instruments used for measurement have an inherent uncertainty.
For example, discrepancies in results of 0.5 g or less on the lab balances are probably within the expected variation for the instrument. Recognize the difference between errors due to the limitations of equipment and human error. Human error can be eliminated. Learn how to use the equipment properly. Be careful while you work. Follow instructions. If you realize you did something incorrectly, go back and do it correctly. It’s all part of the learning process. Never use the phrase ‘human error’ in a lab report as it is nothing more than a sign of laziness or unwillingness to think about true sources for experimental variation.

Equipment error is something you need to be conscious of. Measuring volumes with beakers introduces huge errors in your data: usually ±5%. Volumes measured using a 10 mL graduated cylinder can be precise up to ±0.1 mL. Masses measured using a three-beam balance can be precise up to ±0.2 g. Temperatures can be precise to the tenth of a degree. Lengths measured using a millimeter ruler can be precise to ±0.1 mm.

When giving background information do not simply paraphrase (parrot-phrase) what I wrote or what I said in class; figure it out for yourself and put it in your own words.
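One way to use the precision figures above is to convert them into relative (percent) uncertainties, which makes it easier to judge whether a discrepancy is meaningful. For example, with hypothetical readings:

    ±0.1 mL on a measured volume of 8.0 mL:  0.1 / 8.0 × 100% ≈ 1.3%
    ±0.2 g on a measured mass of 25.0 g:     0.2 / 25.0 × 100% = 0.8%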
Today, electricity and power generation have become an indispensable part of our daily lives. With the automation of many manual activities, demand for power has increased many times over and is constantly rising, due to the rapid increase in population and developments in science and technology. Conventional sources of power are basically coal and other fossil fuels. There are only limited known reserves of these fuels and they are rapidly being depleted. Moreover, coal and other fossil fuels burn to produce carbon compounds which pollute the air and the environment. This makes it necessary to develop alternative, non-conventional sources of power for the future, when fossil fuels will be exhausted. Tapping geothermal energy to generate power is an excellent option for meeting the world's future power needs.

"Geo" means earth and "thermal" means heat. So, geothermal energy is basically the energy derived from the heat already present inside the earth. The earth has a lot of heat energy stored internally because of the extremely high temperatures and pressures in its interior. Hot springs in many parts of the world are testament to this fact. Geothermal energy involves tapping into this heat as a means of producing steam to power electricity generators, and it replaces the need to burn coal or other fuels to provide such a heat source. The main advantage of geothermal energy is that it is clean, i.e. it does not cause any pollution. Moreover, it is renewable and inexhaustible, as the temperature inside the earth is not expected to decrease. Also, it doesn't require any conventional fuel and hence, once the site is established with the necessary infrastructure in place, it is much cheaper to run.

The main problem associated with geothermal energy is that it can be harnessed only at a few places where the underlying rocks are soft enough to be drilled through and the heat is expected to be sustainable for a significant time period. These places are called "hot spots". Finding hot spots involves land surveys which can take years to complete. Some hot spots may be found in remote areas where setting up power stations is not financially feasible. Merely finding hot spots is not enough; it has to be seen whether the heat can be extracted to generate power. Only after meeting these requirements can a power plant be set up to generate geothermal power. All these activities involve a huge installation cost, although the maintenance cost after setup is very low. Due to the unavailability of suitable hot spots, only a fraction of the total power needs of the world can be met by geothermal power at present.

Another major problem associated with geothermal power may crop up during the operation of the power plant. In order to extract the heat from the interior of the earth, holes are drilled through which steam issues, and this steam is used to drive the turbines and generate power. Sometimes harmful gases may issue out along with the steam, so the power plant must be designed to handle such situations and eliminate those gases safely. While some challenges remain with geothermal energy production, the benefits of switching to this source of power seem likely to outweigh the expected teething problems in bringing it into general usage.
This artist's conception shows the molecular motor (represented with hands) that packages DNA (rope-like structure) into the head of the T4 virus. The new study reveals the motor is made of two ring-like structures, each of which contains five protein segments. Credit: Dec. 26 issue of the journal Cell; Steven McQuinn, independent science artist, and Venigalla Rao, The Catholic University of America. Like microscopic machine shops, some viruses assemble their parts with the help of tiny motors. Now, researchers have figured out the structure and workings of the natural molecular motors in one virus. The discovery could lead to new pharmaceutical approaches to combat diseases, including herpes, which is caused by a virus that possesses a similar type of motor. Unlike bacteria and other forms of life, viruses are unique in that they cannot reproduce or grow outside of a host cell. So figuring out precisely how they thrive inside us is a key to controlling or eradicating them. The research team, including Purdue biologist Michael Rossmann, used two imaging techniques to look at the T4 virus, a type of virus called a bacteriophage that is capable of infecting bacteria. In the case of T4, the bacterial host is Escherichia coli, which in turn is common in the intestines of warm-blooded animals and usually harmless, but some strains can cause food poisoning. The researchers focused on a small motor that many viruses use to package their DNA into their "heads," or capsids — sort of a protein coat for the virus. The images showed the motor is made up of a pair of conjoined protein rings, an upper ring and a lower ring. Here's how the researchers think the tiny motor works: As a T4 virus assembles itself inside its host, the motor's lower ring attaches to a strand of viral DNA, while the upper ring holds onto the virus' head. The upper and lower rings contract and release, alternately tugging at the DNA like a ring of hands pulling on a rope. DNA is made up of two strands held together by weak bonds between nitrogen-containing chemicals called bases on each strand, forming base pairs. In the case of T4, its motor packs about 171,000 base pairs into a head that's just 120 nanometers by 86 nanometers. For comparison, the width of a human hair is about 80,000 nanometers; and the human genome contains about 3 billion base pairs. Once the DNA gets tugged inside the capsid, the motor falls off and a virus tail attaches to the capsid. Now the virus can escape its host, killing it in the process, and seek out another E. coli cell. "The tail is another machine which is necessary for the virus to infect the next host," Rossmann told LiveScience. "The tail is used to puncture and to digest the cell wall of the next cell to be infected." The finding, detailed in the Dec. 26 issue of the journal Cell, has practical implications for fighting off dangerous microbes. "Bacteriophages like T4 are a completely alternative way of dealing with unwanted bacteria," Rossmann said. "The virus can kill bacteria in its process of reproduction, so use of such viruses as antibiotics has been a long looked-for alternative to overcome the problems which we now have with antibiotics."
[Figure: An illustration of the helium atom, depicting the nucleus (pink) and the electron cloud distribution (black). The nucleus (upper right) is in reality spherically symmetric, although for more complicated nuclei this is not always the case. The black bar is one ångström, equal to 10⁻¹⁰ m or 100,000 fm.]

Atomic physics (or atom physics) is a field of physics that involves investigation of the structures of atoms, their energy states, and their interactions with other particles and electromagnetic radiation. In this field of physics, atoms are studied as isolated systems made up of nuclei and electrons. Its primary concern is the arrangement of electrons around the nucleus and the processes by which these arrangements change. It includes the study of atoms in the form of ions as well as in the neutral state. For purposes of this discussion, it should be assumed that the term atom includes ions, unless otherwise stated. Through studies of the structure and behavior of atoms, scientists have been able to explain and predict the properties of chemical elements and, by extension, chemical compounds.

The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear in standard English. However, physicists distinguish between atomic physics, which deals with the atom as a system consisting of a nucleus and electrons, and nuclear physics, which considers atomic nuclei alone. As with many scientific fields, strict delineation can be highly contrived, and atomic physics is often considered in the wider context of atomic, molecular, and optical physics.

As noted above, atomic physics involves investigation of atoms as isolated entities. In atomic models, the atom is described as consisting of a single nucleus that is surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles. In practical terms, modeling atoms in isolation may not seem realistic. However, if one considers atoms in a gas or plasma, then the time scales for atom-atom interactions are huge compared to the atomic processes being examined here. This means that the individual atoms can be treated as if each were in isolation, because for the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, although both deal with huge numbers of atoms.

Electrons form notional shells around the nucleus. These electrons are naturally in their lowest energy state, called the ground state, but they can be excited to higher energy states by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically other electrons). An excited electron may still be bound to the nucleus, in which case it will, after a certain period of time, decay back to the original ground state, releasing the energy difference as a photon. There are strict selection rules regarding the electronic configurations that can be reached by excitation by light, but there are no such rules for excitation by collision processes. If an electron is sufficiently excited, it may break free of the nucleus and no longer remain part of the atom.
The remaining system is an ion, and the atom is said to have been ionized, having been left in a charged state.

Most fields of physics can be divided between theoretical work and experimental work, and atomic physics is no exception. Usually, progress alternates between experimental observations and theoretical explanations. Clearly, the earliest steps toward atomic physics were taken with the recognition that matter is composed of atoms, in the modern sense of the basic unit of a chemical element. This theory was developed by the British chemist and physicist John Dalton in the early nineteenth century. At that stage, the structures of individual atoms were not known, but atoms could be described by the properties of chemical elements, which were then organized in the form of a periodic table.

The true beginning of atomic physics was marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics. In seeking to explain atomic spectra, an entirely new mathematical model of matter was revealed. As far as atoms and their electron arrangements were concerned, formulation of the atomic orbital model offered a better overall description and also provided a new theoretical basis for chemistry (quantum chemistry) and spectroscopy.

Since the Second World War, both theoretical and experimental areas of atomic physics have advanced at a rapid pace. This progress can be attributed to developments in computing technology, which have allowed bigger and more sophisticated models of atomic structure and associated collision processes. Likewise, technological advances in particle accelerators, detectors, magnetic field generation, and lasers have greatly assisted experimental work in atomic physics.

See also: Atomic mass, Atomic nucleus, Chemical element, Electron configuration, J.J. Thomson, Johannes Rydberg, John Dalton, Joseph von Fraunhofer, Max Born, Niels Bohr, Nuclear physics, Periodic table, Quantum mechanics.

References:
- Bransden, B.H., and C.J. Joachain. 2003. Physics of Atoms and Molecules, 2nd ed. Harlow, UK: Prentice Hall. ISBN 058235692X
- Demtröder, W. 2006. Atoms, Molecules and Photons: An Introduction to Atomic-, Molecular-, and Quantum-Physics. Berlin: Springer. ISBN 978-3540206316
- Foot, Christopher J. 2005. Atomic Physics. Oxford Master Series in Atomic, Optical and Laser Physics. Oxford, UK: Oxford University Press. ISBN 0198506961

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License, which may be used and disseminated with proper attribution, crediting both the New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation.
Nanocubes are not some children's game, but nano-sized cube-shaped particles. Organic chemistry researchers at Rehovot's Weizmann Institute of Science used them to create surprisingly yarn-like strands, showing that, given the right conditions, the cubes are able to align themselves into winding, helical structures. Their results, which reveal how nanomaterials can self-assemble into unexpectedly beautiful and complex structures, were recently published in Science. Dr. Rafal Klajn and postdoctoral fellow Dr. Gurvinder Singh used nanocubes of an iron oxide material called magnetite. As the name implies, this material is naturally magnetic, and it is found all over the place, including inside bacteria that use it to sense the Earth's magnetic field. Magnetism is just one of the forces acting on the nanoparticles. Together with the research group of Prof. Petr Král of the University of Illinois in Chicago, Klajn and Singh developed theoretical models to understand how the various forces could push and pull the tiny bits of magnetite into different formations. "Different types of forces compel the nanoparticles to align in different ways," said Klajn. "These can compete with one another, so the idea is to find the balance of competing forces that can induce the self-assembly of the particles into novel materials." The models suggested that the shape of the nanoparticles is important – only cubes would provide a proper balance of forces required for pulling together into helical formations. The researchers found that the two main competing forces are magnetism and a phenomenon called the van der Waals force. Magnetism causes the magnetic particles to both attract and repel one another, prompting the cubic particles to align at their corners. Van der Waals forces, on the other hand, pull the sides of the cubes closer together, coaxing them to line up in a row. When these forces act together on the tiny cubes, the result is the step-like alignment that produces helical structures. In their experiments, the scientists exposed relatively high concentrations of magnetite nanocubes placed in a solution to a magnetic field. The long, rope-like helical chains they obtained after the solution was evaporated were surprisingly uniform. They repeated the experiment with nanoparticles of other shapes but, as predicted, only cubes had just the right structure to align in a helix. Klajn and Singh also found that they could get chiral strands – all wound in the same direction – with very high particle concentrations in which a number of strands assembled closely together. Apparently the competing forces can "take into consideration" the most efficient way to pack the strands into the space, said Klajn. Although the nanocube strands look nice enough to knit with, it is too soon to begin thinking of commercial applications. The immediate value of the work, he added, is that it has proven a fundamental principle of nanoscale self-assembly. "Although magnetite, including its nanoparticle form, has been well studied for many decades, no one has observed these structures before. Only once we understand how the various physical forces act on nanoparticles can we begin to apply the insights to such goals as the fabrication of previously unknown, self-assembled materials."

PIG CHEMICAL STOPS DOG BARKING

A dog's bark is a way of protecting itself and its owner – but it can really get on your nerves if it's incessant.
Now a professor at Texas Tech University has discovered that a pig pheromone (a chemical that triggers a social response in members of the same species) named androstenone can stop dogs from barking and jumping. Animal-behavior scientist John McGlone was just like any other pet owner a few years ago – he simply wanted to keep his Cairn terrier from barking incessantly. As part of his work, he just happened to have a product called Boar Mate on hand at his house from a previous research study, an odorous concoction that helps farmers with swine breeding. So he gave one little spritz to his dog, Toto, and immediately the dog stopped barking. "One of the most difficult problems is that dogs bark a lot, and it's one of the top reasons they are given back to shelters or pounds," he said. Suddenly, an idea was born. After extensive testing and publishing of the results, and with funding help from Sergeant's pet care products, "Stop That" was developed and hit the stores. It has met with tremendous success among pet owners who were at their wits' end trying to curtail bad behavior in dogs. The pheromone is produced by pigs in their saliva and fat, secreted by males and picked up by females in heat. It is foul-smelling to humans. McGlone conducted double-blind, controlled studies and found that the synthesized pheromone stopped 100 percent of dogs from barking. He also found that the androstenone had no effect on the dogs' heart rates either before or after being sprayed. But, McGlone warns, it's not a be-all and end-all for stopping dogs from barking, as the effects last just about a minute. "If you continue to spray the dog again it will stop," McGlone said. "If you show the can, they will stop. It's best used as a training tool rather than a circus act to stop animals from doing what they're doing."
Disparities in health care: The black population

Racial and ethnic disparities can be multifactorial, encompassing socioeconomic factors (eg, education, income, and employment), lifestyle behaviors (eg, physical activity and alcohol intake), social conditions (eg, neighborhoods and work conditions), and access to preventive health care services (eg, cancer screenings and vaccinations).1 Leading health indicators of progress toward national health objectives for 2020 continue to reflect racial and ethnic disparities.1 Eliminating disparities requires culturally appropriate health initiatives and community support, in addition to equal access to health care.1 Furthermore, disparities are not equal among all racial and ethnic populations, and the prevalence and incidence of various diseases also differ across populations. As we continue to make strides in oncology care (eg, prevention, screenings, and treatment outcomes from diagnosis through end of life), we must make an effort to include all racial and ethnic groups in this progress.

Health care disparities for black persons in the United States can mean loss of economic opportunities, lower quality of life, perceptions of injustice, and earlier death.1 From a societal perspective, health care disparities for the black population translate into less than optimal productivity, higher health care costs, and social inequities.1 The literature suggests the heritage and history of black persons dating as far back as 1619-1860 had an impact on the black experience in America, thus making their life stories markedly different from those of other immigrants.2 Elimination of disparities for this group is intertwined with knowledge and awareness that focuses on integration of health-related cultural values and practices, disease incidence and prevalence, and treatment efficacy.2 This article focuses on the disparities in oncology care that exist for black and non-Hispanic black persons in the United States, and the interventions that may reduce such disparities.

The black population is estimated to reach 61 million people by 2050, at which point it will account for 15% of the total US population.1 The 2000 Census indicated 36.4 million persons, approximately 12.9% of the population, identified themselves as Black or African American, 35.4 million of whom identified themselves as non-Hispanic.1 Cancer is the second leading cause of death in both non-Hispanic blacks and non-Hispanic whites.1 In 2001, the age-adjusted incidence per 100,000 population for various cancers, including colorectal cancer (CRC), was substantially higher in black females than in white females.1,3 Disaggregation studies are examining black-white cancer health disparities in more detail.3 Recent studies disaggregating the US population based on region showed foreign-born people have better general health outcomes than US-born people; but as the number of years living in the United States increases, health status mirrors that of the US-born population.3 Approximately 6% of persons who identified themselves as Black in the 2000 Census were foreign born.1 What ultimately has emerged from these studies is that despite the limited studies among US black people, specific subgroups of the black population remain at risk.
Health promotion efforts need to overcome the barriers facing these specific groups.3

CANCER IN THE BLACK POPULATION

A variety of demographic and sociocultural factors are commonly reported barriers to adherence to suggested cancer screenings. These factors include lack of knowledge or awareness of cancer screenings, lack of access to general preventive health care services, institutional or system barriers, socioeconomic status, language barriers, immigrant status, and cultural beliefs.2-4 Related specifically to the black population, researchers believe social isolation leads to a lack of social support.4 This lack of support has a negative impact on the worries and concerns often encountered by patients with cancer.4 Black persons experience higher overall cancer incidence and mortality rates, excessive burden of disease, and lower 5-year survival rates compared with non-Hispanic white, Native American, Hispanic, Alaskan Native, Asian American, and Pacific Islander populations.2,5 Approximately 168,900 new cases of cancer were diagnosed among black persons in 2011.6 The most commonly diagnosed cancers in the black population are prostate (40%), lung (15%), and colorectal (9%).6 In 2010, 142,570 new cases of colorectal cancer were diagnosed and an estimated 51,370 patients died from their disease; in 2011, colorectal cancer led to 7,050 deaths among black persons.5,6 Lung cancer is the leading cause of cancer death among both black men and black women; cancer overall accounted for an estimated 65,540 deaths among black persons in 2011.6 Colorectal cancer is the third leading type of cancer and cause of cancer-related deaths in the black population.7 A 20% higher incidence and a 40% higher overall mortality are attributed to disparities in access, high-quality screening, and treatment, as well as later stage disease at diagnosis, in this group.3,5,6 Incidence of cervical cancer in black women is 11.1 cases per 100,000 population compared with 8.7 cases per 100,000 population for white women. The mortality rate for cervical cancer in black women is more than twice that of white women.8 The 5-year survival rate is 66% for black women compared with 74% for their white counterparts; in addition, advanced stage disease at diagnosis occurs more frequently in black women.5,6,8 In 2011, 860 deaths in black women were reported as a result of cervical cancer.6 As recently as 5 years ago, a review of studies revealed an increased incidence of oral cancers among black men.
Oral cancers are ranked as the 10th leading cause of death among black males.7 The age-adjusted incidence of oral cancer in black males was more than 20% higher than that of white males from 1998 to 2002.7

Breast cancer is one of the most commonly diagnosed malignancies in black women, with an estimated 26,840 new cases diagnosed in 2011.6 Breast cancer incidence increased rapidly among black women during the 1980s, largely due to higher detection rates as the use of mammography screening increased.6 Incidence stabilized among black women 50 years and older from 1994 to 2007, while rates decreased by 0.6% per year from 1991 to 2007 among women younger than 50 years.4,6 However, among women younger than 45 years, incidence rates are higher for African American women compared with white women.6 Breast cancers in black women are more likely to be associated with a poor prognosis, such as higher grade, distant stage, and negative hormone receptor status.6 Basal-like breast cancer (ie, triple-negative cancer), an aggressive subtype associated with shorter survival, is even more prevalent among premenopausal black women.6

Lung cancer kills more black persons than any other malignancy.6 In 2011, 23,220 new cases of lung cancer were reported, and an estimated 16,790 deaths occurred. The convergence of lung cancer death rates between young black and young white adults is the result of a faster decline in death rates among black persons, likely reflecting a greater reduction in smoking initiation among blacks since the late 1970s.6 As with most of the malignancies diagnosed in blacks, increased mortality is also associated with advanced stage at the time of diagnosis.6
Their Systematics, Biology, and Evolution

Edited by Edward B. Cutler

The Sipuncula, a group of ocean-dwelling worms related to annelids and mollusks, play a significant role in the bioerosion of coral reefs and are useful indicators of environmental conditions. The 155 species live in a wide variety of marine habitats at all depths: in sand and mud, in burrows in soft rock and dead coral, and inside such protective shelters as mollusk shells. Important food items for fish and invertebrate predators, they also recycle organic wastes and function as bioassay tools for human diseases such as cystic fibrosis and acute cholera. Edward B. Cutler brings together in this volume everything that is known about the entire phylum. An introduction, with practical information about collecting and handling the animals, is followed by Part One, which incorporates new systematic analyses made during the past twenty years and offers illustrated keys to all taxa, replacing the work of A.C. Stephen and S.J. Edmonds. Part Two reviews the past thirty years' work in such areas as ecology, muscular systems, blood chemistry, respiration, reproduction, and excretion. Part Three provides a new synthetic perspective on the phylum's zoogeography and evolutionary relationships, both to other phyla and within the phylum. It utilizes information from the fossil record, paleo-oceanographic data, and comparative studies of immunology, physiology, embryology, and anatomy. Edward B. Cutler is Professor of Biology at Utica College of Syracuse University, now on long-term leave at the Museum of Comparative Zoology, Harvard University.
Tuesday, April 16, 2013

Easy Understanding of OOPs concepts

Class is the first OOP concept. A class defines the characteristics of its objects, including their attributes, fields, properties, and behavior. Say we have a class called Car: then color, model number, and top speed can be its attributes and properties, while accelerating, braking, and turning will be its behavior.

Objects can be considered as things that perform a set of related functions. Programming objects are used to model real-world objects. An object is also an instance of a class; for our class Car, Ferrari will be our object. One can have an instance of a class; the instance is the actual object created at runtime. The set of values of the attributes of a particular object is called its state. The object consists of the state and the behaviour that is defined in the object's class.

Also called functions in some programming languages, methods define the behavior of particular objects. For our Car class, turning() and braking() will be our methods.

In the real world there are many objects that can be specialized. In OOP, a parent class can pass its behavior and state on to child classes. This concept was developed to manage generalization and specialization in OOP. Let's say we have a class called Car and a class called Racing Car. Attributes like the engine number and color of the class Car can be inherited by the class Racing Car. The class Car will be the parent class, and the class Racing Car will be the derived class or child class. The following OO terms are commonly used names given to parent and child classes in OOP: Superclass: parent class. Subclass: child class. Base class: parent class. Derived class: child class.

Abstraction is simplifying complex reality by modeling classes appropriate to the problem. In other words, it means representing only the important details without including all the details. For example, the car Ferrari can be treated as a simple car.

The wrapping up of data and functions into a single unit is called encapsulation. For example, the class Car has a method turn(); the code for turn() defines how the turn will occur. So we don't need to define separately how the Mercedes will turn and how the Ferrari will turn: turn() is encapsulated with both.

Polymorphism is an important OOP concept; it means taking more than one form. Polymorphism allows the programmer to treat derived class members just like their parent class's members. More precisely, polymorphism in object-oriented programming is the ability of objects belonging to different data types to respond to calls of methods of the same name. If a Dog is commanded to speak(), this may elicit a bark(); however, if a Pig is commanded to speak(), this may elicit an oink(). Each subclass overrides the speak() method inherited from the parent class Animal. (See the Java sketch below.)

JAVA OOPs explained with real-time examples

The main OOP concepts are four:

Abstraction: Hiding non-essential features and showing only the essential details to the user is called abstraction. Real-time example: a TV remote. We see only the number buttons, the power button, and the other buttons; the circuits and wiring behind them are hidden. So I think it is a good example.
Encapsulation: Writing the operations and methods stored in a single class is called encapsulation. Real-time example: a medical capsule, where one drug is stored in the bottom layer and another drug in the upper layer, and the two layers are combined into a single capsule.

Inheritance: A new class is derived from an old class; that is, a subclass is derived from a superclass. Real-time example: a father and son relationship.

Polymorphism: A single form behaving differently in different situations. For example, the same person acts as a husband or son at home, as an employee in the office, and as a good citizen in public.

Posted by gtulasidhar at 1:38 AM
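To make the inheritance and polymorphism described above concrete, here is a minimal Java sketch of the Animal/Dog/Pig example from the first post. The class and method names (Animal, Dog, Pig, speak()) come from the post itself; the printed strings and the demo class are illustrative additions, not part of the original post.

class Animal {
    // Parent class: provides a default behavior that subclasses may override.
    void speak() {
        System.out.println("some generic animal sound");
    }
}

class Dog extends Animal {
    // Inheritance: Dog gets everything Animal has, and overrides speak().
    @Override
    void speak() {
        System.out.println("bark");
    }
}

class Pig extends Animal {
    @Override
    void speak() {
        System.out.println("oink");
    }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        // Polymorphism: both objects are referenced through the Animal type,
        // but each responds to speak() in its own way at runtime.
        Animal[] animals = { new Dog(), new Pig() };
        for (Animal a : animals) {
            a.speak();   // prints "bark", then "oink"
        }
    }
}

Running the demo prints "bark" and then "oink", because the speak() method that actually runs is chosen by the object's real class, not by the Animal reference type.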
Cell Cycle: Useful Notes on Cell Cycle!

All cells reproduce by dividing into two, each parental cell producing two daughter cells every time it divides. These newly formed daughter cells can themselves grow and divide, giving rise to a new cell population formed by the growth and division of a single parental cell and its progeny. It is through such repeated cycles of growth and division that a single cell can form a structure consisting of millions of cells. Cell division, DNA replication, and cell growth have to take place in a coordinated way to ensure correct division and the formation of progeny cells containing intact genomes.

The sequence of events by which a cell duplicates its genome, synthesizes the other constituents of the cell, and eventually divides into two daughter cells is termed the cell cycle. Cell growth is a continuous process, whereas DNA synthesis occurs only during one specific stage of the cell cycle; the replicated chromosomes (DNA) are then distributed to daughter nuclei through a complex series of events during cell division. A eukaryotic cell cycle can be illustrated by human cells in culture (see colored image 5.1), which divide approximately once every 24 hours. However, the duration of the cell cycle can vary from organism to organism and from cell type to cell type.

The cell cycle is divided into two basic phases: interphase and the M phase (mitosis phase). The M phase is when the actual cell division, or mitosis, occurs; the interphase is the phase between two successive M phases. It is important to note that cell division lasts for only about an hour of the 24-hour average cell cycle of a human cell, while the interphase occupies more than 95% of the cycle's duration. The M phase starts with nuclear division, corresponding to the separation of the daughter chromosomes (karyokinesis), and usually ends with division of the cytoplasm. During the interphase, the resting phase, the cell prepares for division by undergoing both cell growth and DNA replication in a systematic way. The interphase is divided into three further phases: the G1 phase (Gap 1), the S phase (Synthesis), and the G2 phase (Gap 2).

The G1 phase corresponds to the interval between mitosis and the initiation of DNA replication. During the G1 phase the cell is metabolically active and continuously grows but does not replicate its DNA. The period during which DNA synthesis, or replication, takes place is called the S, or synthesis, phase. During this time the amount of DNA per cell doubles; however, there is no increase in the chromosome number. In animal cells, during the S phase, DNA replication begins in the nucleus and the centriole duplicates in the cytoplasm.

In adult animals, many cells divide only occasionally, replacing cells that have been lost because of injury or cell death, and some cells do not appear to divide at all. Cells that do not divide further exit the G1 phase to enter an inactive stage called the quiescent stage (G0) of the cell cycle. In animals, mitotic cell division is seen only in the diploid somatic cells.

Interphase details are:

(i) G1 phase: The period prior to the synthesis of DNA. In this phase, the cell increases in mass in preparation for cell division. Note that the G in G1 represents gap and the 1 represents first, so the G1 phase is the first gap phase.

(ii) S phase: The period during which DNA is synthesized. In most cells, there is a narrow window of time during which DNA is synthesized.
Note that the S represents synthesis. (iii) G2 phase: The period after DNA synthesis has occurred but prior to the start of prophase. The cell synthesizes proteins and continues to increase in size. Note that the G in G2 represents gap and the 2 represents second, so the G2 phase is the second gap phase.
Landscape corridors – thin strips of habitat that connect isolated patches of habitat – are lifelines for native plants that live in the connected patches and therefore are a useful tool for conserving biodiversity. That's the result of the first replicated, large-scale study of plants and how they survive in both connected patches of habitat – those utilizing landscape corridors – and unconnected patches. Scientists at North Carolina State University and collaborators at other U.S. universities conducted the study. Patches of habitat connected by corridors contained 20 percent more plant species than unconnected patches at the end of the five-year study, says Dr. Ellen Damschen, the study's lead author and a postdoctoral researcher at the University of California, Santa Barbara. Damschen completed her Ph.D. in the lab of Dr. Nick Haddad, associate professor of zoology at NC State and a co-author of the paper describing the research. The research appears in the Sept. 1 edition of the journal Science. The loss and fragmentation of habitat is the largest threat to biodiversity globally, Damschen and Haddad say. In an effort to prevent species losses, conservation efforts have intuitively relied on corridors, which have become a dominant feature of conservation plans. However, there has been little scientific evidence showing that corridors do, in fact, preserve biodiversity. To perform the research, the scientists collaborated with the U.S. Forest Service at the Savannah River Site National Environmental Research Park, a federally protected area on the South Carolina-Georgia border. Most of the Savannah River Site is covered with pine plantations. The U.S. Forest Service created eight identical sites, each with five openings, or patches, by clearing the pine forest. A central patch was connected to one other patch by a 150-meter-long, 25-meter-wide corridor, while three other patches were isolated from the central patch – and each other – by the surrounding forest. The patches are home to many species of plants and animals that prefer open habitats, many of which are native to the historical longleaf pine savannas of this region. The researchers surveyed all plant species inside connected and unconnected patches from 2000 to 2005; nearly 300 species of plants were found. When the study began, there was no difference in the number of species between connected and unconnected patches, the scientists say. After five years, however, patches with a corridor retained high numbers of species, while those without a corridor lost species. Corridors provided the largest benefit to native species while having no effect on the number of invasive plant species. Invasive species seem either to be everywhere already, not needing corridors to spread, or to remain where they originated, Damschen says. These results indicate that using corridors in conservation should provide benefits to native species that outweigh the risk of furthering the spread of exotic species. Damschen says that a number of factors likely contributed to the difference in plant diversity. Seeds dispersed by animals are more likely to be deposited in patches with corridors; flowers are more likely to be pollinated because corridors increase the movement of insects; and animals that eat seeds – like ants and mice – can eat the seeds of more common species in connected patches and give rare seeds an advantage.
While the researchers predicted that corridors would be beneficial to increasing plant richness, "It's surprising that we would see such a dramatic change over a short time scale," Damschen says. "Plants are thought to be relatively sedentary organisms that are heavily influenced by their environmental surroundings. This study indicates that plants can change relatively quickly through their interactions with the landscape and the animals that interact with them, such as seed dispersers, pollinators and predators." The next step in their studies of corridors is to make predictions for how corridors affect plants based on plant characteristics, Damschen and Haddad say. The researchers will study the specific effects of pollination and seed dispersal by wind and animals on plants in both connected and unconnected patches of habitat, for example. The study – which included assistance from scientists at Iowa State University, the University of Washington, the University of Florida, and the University of California-Santa Barbara – was funded by the National Science Foundation and by the Department of Energy-Savannah River Operations Office through the U.S. Forest Service Savannah River Institute. The U.S. Forest Service-Savannah River Site provided critical assistance with the creation and maintenance of the experimental landscapes.
Exploration of a Simple Compiler

In this assignment, you'll implement a very simple form of language translation for arithmetic expressions. The source language consists of integers, applications of arithmetic operators to integers, and optional parentheses for grouping. The "keywords" of the language are ordinary arithmetic operators and parentheses for grouping: + * ( )

Legal expressions are given by the grammar:

<exp> ::= <number> | <exp> + <exp> | <exp> * <exp> | (<exp>)

There may be any number of spaces or line breaks between lexical elements. The usual rules of precedence apply, with multiplication binding higher than addition, and parentheses grouping the tightest of all. Note that we do not have any subtraction or division operators here. Those may be added, for optional extra credit.

You are to write a compiler for expressions in our simple language. The target should be a corresponding expression in a postfix form, in which all expressions are rendered by giving the left-hand side, then the right, then the operator, with each element separated by one or more spaces. For example, the source-level expression "(5+2*4)*(3+7) + 1" should be translated to the postfix form "5 2 4 * + 3 7 + * 1 +". Included in this assignment is a "virtual machine" evaluator for postfix expressions, which you may use to check your work.

There is no particular constraint on how you organize your work here, so long as your compiler satisfies the following requirements:
- All legal arithmetic expressions should be supported.
- All ill-formed expressions (e.g. "((3 + )") should be rejected.
- The postfix that your code outputs should be legal input for the "virtual machine".

Despite the open-ended character of these requirements, you'll likely figure out pretty quickly that certain approaches work better than others. For example, you'll almost certainly want an intermediate "tree" representation of source-level expressions. You'll have to think about how to get the precedence of multiplication over addition right, and how to support parenthesized grouping. If you tackle subtraction and division, you'll need to make sure that both are handled as left-associative operators, and that subtraction and addition have the same precedence, as do division and multiplication. For example, the expression "(5+7 * 8-5) / 4 - 2 - 1" should result in the output "5 7 8 * + 5 - 4 / 2 - 1 -", which evaluates to 11, of course. You'll likely find this quite challenging! Remember that it's only an extra credit option, so don't be afraid to try and fail.

You don't really need it to build your compiler, but if you want to "run" the "object code" produced by your compiler, you can grab a copy of this working postfix evaluator, which supports all four arithmetic operators:

This work must be done individually. You are free to discuss any aspect of your investigation with your colleagues in the class, but the work you submit must be your own.
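The working postfix evaluator referenced above is not reproduced in this excerpt. As a rough illustration of what such a "virtual machine" typically looks like (an independent sketch, not the course-provided file), a stack-based evaluator for whitespace-separated postfix tokens can be written in a few lines of Java:

import java.util.ArrayDeque;
import java.util.Deque;

public class PostfixEval {

    // Evaluates a whitespace-separated postfix expression,
    // e.g. "5 2 4 * + 3 7 + * 1 +".
    public static int eval(String expr) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String tok : expr.trim().split("\\s+")) {
            if (tok.equals("+") || tok.equals("-") || tok.equals("*") || tok.equals("/")) {
                int right = stack.pop();   // the right operand is on top of the stack
                int left = stack.pop();
                int result;
                if (tok.equals("+"))      result = left + right;
                else if (tok.equals("-")) result = left - right;
                else if (tok.equals("*")) result = left * right;
                else                      result = left / right;
                stack.push(result);
            } else {
                stack.push(Integer.parseInt(tok)); // a number: just push it
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        System.out.println(eval("5 2 4 * + 3 7 + * 1 +"));    // prints 131
        System.out.println(eval("5 7 8 * + 5 - 4 / 2 - 1 -")); // prints 11
    }
}

The two printed values, 131 and 11, match the worked examples in the assignment text.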
7 RULES OF GRAPHING. Follow these simple rules for GREAT GRAPHS.

RULE # 1. Always draw neat lines with a straight edge or ruler.

RULE # 2. Make your graph one full page in size. Small graphs make it too difficult to read patterns or the results of your experiment.

RULE # 3. Label the x-axis (the line that goes across the bottom of your graph) and the y-axis (the line that goes up and down on the left side of your graph).

RULE # 4. Label three places on your graph. 1. TITLE the graph descriptively: what does your graph show us? 2. Label the x-axis with the independent variable. This is the variable you pre-set before you began collecting data, recorded on the left side of a "T" table; common independent variables are time or distance. Data points should be evenly spaced. 3. Label the y-axis with the dependent variable. This is the variable you measure when you collect data, recorded on the right side of a "T" table; common dependent variables are mass or temperature. Data points should be evenly spaced.

RULE # 5. Number the x and y axes with a regular numerical sequence or pattern starting with 0, to space out your data so it fills the entire graph. Examples: 0, 5, 10, ...; 0, 2, 4, 6, ...; 0, 0.5, 1.0, 1.5, 2.0.

RULE # 6. Number the x and y axes on the lines of the graph, not in the spaces between the lines.

RULE # 7. If your graph shows more than one trial of data, or has more than one line, USE A KEY. A key can use different colored lines, or lines with different textures or patterns.

Choose the best graph for the data. Pie chart: shows percentages and parts of a whole. Bar graph: best for comparing data. Line graph: best for looking at change over time. Stem & leaf plot: compares data and can also show the mean, mode, and median.

Statistical Analysis. Mean (average): add up all the data and divide that total by the number of data points, e.g. 1, 2, 3, 2, 4, 2 gives a sum of 14, and 14/6 = 7/3, or about 2.3. Mode: the number seen most often, e.g. for 1, 2, 3, 2, 4, 2 the mode is 2. Median: the middle value when the data are placed in numerical order; for an odd number of values, e.g. 1, 2, 2, 2, 3, 4, 5, the median is the middle value, 2; for an even number of values, e.g. 1, 2, 2, 2, 3, 4, average the two middle values: (2 + 2)/2 = 2, so the median is 2. Range: the difference between the greatest and smallest numbers in the data set, e.g. for 1, 2, 3, 2, 4, 2 the range is 4 - 1 = 3, so the data vary over 3 units.

How to change numbers into percentages for pie charts (you can refer to your book on page 770). Determine the total for your data by adding up all the values to get one number: 1 + 2 + 2 + 2 + 3 + 4 = 14. Divide each value by the total: 1/14, 2/14, 3/14, 4/14. Multiply that decimal by 360; this gives the number of degrees that your pie piece should contain. Example: 2/14 = 0.143, and 0.143 x 360 = 51.4 degrees. Use a protractor to measure the angle of each slice. To find the percentage, take the number of degrees in the slice, divide it by 360, and multiply by 100%: 51.4 / 360 = 0.143, and 0.143 x 100% = 14.3%.

Good Luck and Happy Data Collecting! The End.
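As an illustration of the statistics and pie-chart arithmetic described above, here is a small Java sketch that works through the same example data set (1, 2, 3, 2, 4, 2). It is an editor-added demonstration, not part of the original slides; the mode (the most frequently occurring value, 2 in this data set) is omitted for brevity.

import java.util.Arrays;

public class GraphStats {
    public static void main(String[] args) {
        int[] data = {1, 2, 3, 2, 4, 2};

        // Mean: add up all the data and divide by the number of data points.
        double sum = Arrays.stream(data).sum();
        double mean = sum / data.length;                      // 14 / 6 = 2.33...

        // Median: middle value of the sorted data
        // (average the two middle values when the count is even).
        int[] sorted = data.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        double median = (n % 2 == 1)
                ? sorted[n / 2]
                : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;  // (2 + 2) / 2 = 2

        // Range: greatest value minus smallest value.
        int range = sorted[n - 1] - sorted[0];                // 4 - 1 = 3

        // Pie-chart slice for the value 2: its share of the total,
        // expressed in degrees and as a percentage.
        double fraction = 2.0 / sum;                          // 2 / 14 = 0.143
        double degrees = fraction * 360;                      // about 51.4 degrees
        double percent = fraction * 100;                      // about 14.3 %

        System.out.printf("mean=%.2f median=%.1f range=%d%n", mean, median, range);
        System.out.printf("slice: %.1f degrees, %.1f%%%n", degrees, percent);
    }
}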
Nearly two-thirds of protected U.S. lands are polluted by noisy humans

Noise pollution is typically considered an urban problem. However, new research shows that the invisible threat found in the din of human activity (also known as anthropogenic sound) is increasingly taking its toll on the natural world. A first-of-its-kind study, released last Thursday from scientists at Colorado State University and the National Park Service, shows that anthropogenic noise pollutes 63 percent of all U.S. protected lands—that includes city and county parks, state and national forests, and national parks, monuments, and refuges. After culling more than a million hours of recorded data from 492 protected sites across the continental United States, researchers concluded that anthropogenic noise doubled background sound levels in most of these areas. They also detected elevated sounds in 14 percent of endangered species' critical habitats. Protected areas with the most stringent regulations, such as national parks and designated wilderness areas, suffered the least noise pollution. The issue does more than disrupt the natural serenity of protected places like parks, forests, and historical battlefields. According to George Wittemyer, a Colorado State University conservation biologist and one of the authors of the study, the impact of excessive noise—which has been steadily mounting since the Industrial Revolution—includes increased heart rate in humans, as well as disrupted sleep, irritability, negative cardiovascular and psychophysiological effects, and even shortened life spans. He's not the only one to say so. The World Health Organization considers noise pollution to be an environmental burden second in scope only to air pollution. In the wild, noise affects the ability of prey to detect predators, masks wildlife communication, and interferes with species' ability to pick up on signals from other species. This, for instance, confuses bird mating behavior, which depends on the efficacy of males' songs. "Many species will avoid areas with too much noise," Wittemyer says. "It can actually re-structure ecosystems." Even plant life suffers from manmade racket. "Noise might keep seed dispersers or pollinators away, which over time changes the composition of the plant community." The most common culprits are cars, aircraft, and forms of extractive land use including mining, logging, and drilling. Prior to this study, sound had been quantified only at the site level. It was only when the NPS Natural Sounds and Night Skies Division teamed up with Colorado State's Sound and Light Ecology Team that researchers devised a uniform algorithm to predict anthropogenic noise levels on a broad scale. The idea was to identify those places that stand to benefit from noise mitigation efforts and to prioritize the protection of those quiet places where one can still escape the clamor of everyday life. While this study marked the first time sound data was quantified on a continental scale, Wittemyer notes that the issue can't be resolved with continental-scale policy. "There are different implications for different protected areas; it's got to be solved at a local management level," he says. "Citizen action and public involvement is key.
People should do exactly what they would if they saw someone dumping garbage in a river—take it up with park rangers, politicians, and regulating bodies." NPS acoustic biologist Megan McKenna, another study author, says this new data will provide managers of protected areas such as national parks with information to enhance recreation and visitor use planning. It will also benefit the NPS Quiet Parks program, which aims to provide national park units with resources for reducing park-generated noise sources—abating noise from HVAC systems, maintenance trucks, PA systems, lawnmowers, vehicle fleets, etc. "It's about helping parks to understand their unique acoustic environments and those sound sources present," says McKenna, who hopes the data will bolster conversations about ways to reduce noise, and about what sounds are appropriate. While McKenna is excited about the early-stage development of promising technology including quiet electric aircraft, she says there are several simple and inexpensive ways to cut down on noise pollution. She points to Northern California's Muir Woods, which designated a "quiet zone," demarcated by signs asking visitors to lower their voices and turn off their music. The signs alone reduced sounds by almost three decibels. Washington, D.C.'s Rock Creek Park recently reduced its sonic footprint, too, albeit inadvertently. "They started closing the main road on weekends," McKenna says. "The intent wasn't noise reduction, but that's been the added benefit—and it helped scientists understand how local bird communication is affected by traffic. A lot of times, it's just the little things." While Wittemyer and McKenna are concerned about noise pollution's prevalence across landscapes, both were surprised by the converse—to learn that a third of protected areas across the country enjoy undisturbed natural soundscapes. "There are still so many opportunities for the public to immerse themselves and really listen, so as to appreciate a soundscape just like they would a beautiful vista," says Wittemyer. "Once we get people to recognize the value and benefits of these natural soundscapes, I think we'll do a better job of protecting them." McKenna echoes that sentiment. "Just like dark skies, pristine soundscapes are pretty incredible," she says. "The more you learn about the sounds of a place that's special to you, the more likely you are to turn your cell off and really tune in. And, the more likely you are to really listen when you travel to new places."
How Gender Stereotyping Leads to Bullying

What is gender stereotyping? How can it affect young people? Gender stereotypes are biased and widely accepted judgments about people's traits based on their gender. For instance, girls are expected to wear pink, love to sing and dance, play with dolls, and enjoy cooking. From a young age, boys are expected to like camping, fishing, cars, video games, and sports. These things are considered to be the norm for every girl or boy, and that is what makes these expectations stereotypes. Gender stereotyping in children is very dangerous, as it can lead to cruel treatment of a child depending on his or her gender. The young person is pressured to act like a boy or a girl, and personal views and preferences are ignored.

What Are the Most Common Types of Gender Stereotypes?

Here are 4 main types of stereotypes regarding gender:
- Character: Girls are perceived as modest, tidy, and organized. Boys are supposed to be hostile, messy, and self-assured.
- Physical Appearance: This type of stereotyping differs from country to country, but in general boys are expected to be handsome and tall, and girls to be slim and attractive.
- Domestic Behavior: Girls should like cooking and do all types of housework. Boys are assumed to be unable to care for kids or to sew.
- Profession: Women are assumed to earn less money than men, to be bad at math, and to lack technical skills. Men are seen as good doctors, politicians, and engineers.

Childhood gender stereotypes discard the idea of gender individuality. Girls are always seen as uncertain and less ambitious than boys. Stereotypes reflect the inequality between the two genders and may have serious consequences.

What Are the Consequences of Gender Stereotyping?

Gender bullying is the most serious aftereffect of stereotyping. Abusive behavior can take place face to face (bullying at school) or online (cyberbullying). Boys and girls who don't meet the norms become the victims of aggressive behavior from peers. Children and young people can experience bullying for a variety of reasons and in various forms, including verbal bullying (teasing, name calling), physical bullying (kicking, hitting), relational bullying (ignoring, spreading rumors), and cyberbullying (sending abusive messages, sharing inappropriate photos and videos). According to gender stereotype statistics, girls are bullied emotionally and verbally more often than boys, while boys are usually the victims of physical bullying. It was also discovered that girls are usually bullied by females. In fact, anyone can be bullied, but there are a few factors that increase the risk of such incidents. For children, they include individual characteristics such as personality, temperament, or physical appearance, like hair color, weight, or wearing glasses.

How to Deal with Gender Stereotypes?

As a parent, you should examine and change your own stereotypes and prejudices, because your personal views affect your children. Your aim is to keep your kid safe from the consequences of bullying. The best way is to monitor your child's interactions so that you know how other teens treat your child. Never leave your kids' emotional stability at the mercy of stereotypes and media influence. Instead, take an active part in their lives to protect them from all possible dangers connected with gender stereotypes.
Our sun may be an only child, but most of the stars in the galaxy are actually twins. The sibling stars circle around each other at varying distances, bound by the hands of gravity. How twin stars form is an ongoing question in astronomy. Do they start out like fraternal twins developing from two separate clouds, or "eggs"? Or do they begin life in one cloud that splits into two, like identical twins born from one egg? Astronomers generally believe that widely spaced twin, or binary, stars grow from two separate clouds, while the closer-knit binary stars start out from one cloud. But how this latter process works has not been clear. New observations from NASA's Spitzer Space Telescope are acting like sonograms to reveal the early birth process of snug twin stars. The infrared telescope can see the structure of the dense, dusty envelopes surrounding newborn stars in remarkable detail. These envelopes are like wombs feeding stars growing inside -- the material falls onto disks spinning around the stars, and then is pulled farther inward by the fattening stars. The Spitzer pictures reveal blob-like, asymmetrical envelopes for nearly all of 20 objects studied. According to astronomers, such irregularities might trigger binary stars to form. "We see asymmetries in the dense material around these proto-stars on scales only a few times larger than the size of the solar system. This means that the disks around them will be fed unevenly, possibly enhancing fragmentation of the disk and triggering binary star formation," said John Tobin of the University of Michigan, Ann Arbor, lead author of a recent paper in The Astrophysical Journal. All stars, whether they are twins or not, form from collapsing envelopes, or clumps, of gas and dust. The clumps continue to shrink under the force of gravity, until enough pressure is exerted to fuse atoms together and create an explosion of energy. Theorists have run computer simulations in the past to show that irregular-shaped envelopes may cause the closer twin stars to form. Material falling inward would be concentrated in clumps, not evenly spread out, seeding the formation of two stars instead of one. But, until now, observational evidence for this scenario was inconclusive. Tobin and his team initially did not set out to test this theory. They were studying the effects of jets and outflows on envelopes around young stars when they happened to notice that almost all the envelopes were asymmetrical. This led them to investigate further -- 17 of 20 envelopes examined were shaped like blobs instead of spheres. The remaining three envelopes were not as irregular as the others, but not perfectly round either. Many of the envelopes were already known to contain embryonic twin stars -- possibly caused by the irregular envelopes. "We were really surprised by the prevalence of asymmetrical envelope structures," said Tobin. "And because we know that most stars are binary, these asymmetries could be indicative of how they form." Spitzer was able to catch such detailed views of these stellar eggs because it has highly sensitive infrared vision, which can detect the faint infrared glow from our Milky Way galaxy itself. The dusty envelopes around the young stars block background light from the Milky Way, creating the appearance of a shadow in images from Spitzer. "Traditionally, these envelopes have been observed by looking at longer infrared wavelengths where the cold dust is glowing.
However, those observations generally have much lower resolution than the Spitzer images," said Tobin. Further study of these envelopes, examining the velocity of the material falling onto the forming stars using radio-wavelength telescopes, is already in progress. While the researchers may not yet be able to look at a picture of a stellar envelope and declare "It's twins," their work is offering important clues to help solve the mystery of how twin stars are born. Other authors of this study include Lee Hartmann of the University of Michigan, Ann Arbor; and Hsin-Fang Chiang and Leslie Looney of the University of Illinois, Urbana-Champaign. The observations were made before Spitzer ran out of its liquid coolant in May 2009, beginning its "warm" mission.
- John J. Tobin, Lee Hartmann, Leslie W. Looney, and Hsin-Fang Chiang. "Complex Structure in Class 0 Protostellar Envelopes." The Astrophysical Journal, 2010, 712 (2): 1010. DOI: 10.1088/0004-637X/712/2/1010
Climate change continues to negatively impact our planet, and a new study suggests that several salmon and trout species could vanish from California in the next 100 years. In the report published on Tuesday, May 16, researchers claimed that 23 of the 31 salmon and trout species found in Californian waters would likely go extinct within the next century.
What Is Causing The Decline In Salmon And Trout Numbers?
Researchers from the University of California, Davis, and the conservation group California Trout noted that climate change, agriculture, and dams were the primary reasons behind the falling numbers and the eventual extinction threat. One of the species in danger of extinction is the highly prized Chinook salmon, which is the only salmon fished and marketed in the state of California.
The new report, titled State of the Salmonids II: Fish in Hot Water, is an update of a similar 2008 assessment. At the time, the results were not nearly as alarming as they are now. In 2008, researchers concluded that around five salmon species might become extinct in the next five decades. The new report almost triples that prediction, noting that 45 percent of the fish species may become extinct in the next 50 years.
The researchers reasoned that even though degradation of river habitats and water diversions for irrigation could be controlled, the effects of climate change were uncontainable. Salmon and trout thrive in cold water, but because of rising temperatures, California waters are getting too hot for the fish to survive. This would eventually be the primary reason for their extinction.
Other causes of the rapid decrease in salmon and trout numbers can be attributed to agriculture. Farming and irrigation both disrupt the salmon's normal life cycle. For instance, farming washes sediment into the water, degrading its quality, while irrigation drains rivers of large amounts of water.
How To Minimize The Damage?
The study's authors claim that to restore some salmon populations and avoid extinction, efforts must be made toward maintaining floodplains and marshes, which provide suitable habitat for the species. Similarly, efforts must be undertaken to maintain mountain spring-fed creeks, which would give salmon an ideal growth temperature despite rising global temperatures.
If the state government decides to employ these measures, the salmon may be saved from total extinction. Without such efforts, however, California's native salmon and trout are doomed.
What is Cognitive Bias? A cognitive bias is a systematic fault in thinking and decision-making that can affect our judgments and perceptions. These biases can arise due to our limited mental capacity, the complexity of the environment, and the influence of our prior experiences and beliefs. A human brain is a powerful tool. But it is also subject to limits of attention, individual motivations, heuristics, social pressures, and emotions. These factors can all contribute to cognitive biases. Many of them are attempts to simplify information processing. Biases can stem from rules of thumb that help you understand the world and reach decisions quickly. But they can also lead to errors and distortions in thinking. For example, limits of attention can lead to incomplete information processing. Individual motivations can circumvent logic, producing biased interpretations. Heuristics, or mental shortcuts, can be helpful in some situations but can also lead to errors if applied inappropriately. Social pressures can also influence decision-making, with individuals often conforming to the opinions of those around them. Finally, emotions can play a significant role in developing biases. Individuals frequently make decisions using feelings rather than logical analysis. Understanding these factors and their potential impact on thinking can help individuals recognize and mitigate cognitive biases’ effects. These biases can cause us to make inaccurate or irrational judgments and decisions, often without our awareness. The study of cognitive biases is a crucial area of research in psychology, neuroscience, and behavioral economics. It provides insights into how the human mind works and how we can improve our decision-making processes. Cognitive Bias in Research Cognitive bias can significantly impact a study’s participants and its researchers. It can cause participants to alter their behavior or responses, potentially affecting the validity of the study’s results. Meanwhile, cognitive bias can lead researchers to perceive and analyze data inaccurately, leading to incorrect or false findings. Our brains tend to seek confirmation of our beliefs and expectations, creating a challenge for objective researchers. These challenges are especially prevalent in high-stakes fields, and pressure to produce positive results can be considerable. Because of this challenge, researchers need to recognize and account for cognitive biases to maintain the integrity of their work. Cognitive bias is just one category of systematic error in research. Others include Selection Bias and Sampling Bias. Each type of bias has its own solutions. Cognitive Bias Examples Numerous types of cognitive biases can affect our thinking and decision-making processes. This list provides some key examples: The propensity to find, interpret, and recall information that supports our existing beliefs and ideas while disregarding information that contradicts them. Consider a person who believes that a particular alternative medicine is effective. They might seek out and accept only positive testimonials and ignore negative scientific evidence suggesting otherwise. Learn more in depth about Confirmation Bias Definition & Examples. The tendency to generalize impressions of a person or entity based on a single positive or negative trait or experience. Thanks to this cognitive bias example, someone might assume a physically attractive teacher has superior teaching skills even when that is not true. For more information, read my post about the Halo Effect. 
The tendency to overestimate the likelihood or importance of events that are readily available in our memory or experience. A person with this type of cognitive bias might be more afraid of flying in an airplane because they have heard news stories about plane crashes. However, driving a car is statistically much more dangerous. Read more about the Availability Heuristic. The inclination to count too heavily on the first piece of information encountered when making subsequent judgments or decisions. A car salesperson might list a very high price for a car initially so that when they later give a lower price, the buyer thinks they are getting a good deal thanks to this cognitive bias example. Learn more about Anchoring Bias. The tendency to be swayed by how information is presented or framed rather than the content itself. Marketers might take advantage of this type of cognitive bias by advertising a product as “95% fat-free” instead of “5% fat.” The former sounds more positive and attractive to buyers. Read about the Framing Effect. The tendency for people with low ability or expertise in a domain to overestimate their competence and knowledge, while those with higher ability or expertise may underestimate their own. Imagine a software developer who has just learned a new programming language and written a few simple programs. They might believe they have a comprehensive understanding of the language and feel confident enough to start working on complex projects because of this infamous cognitive bias example. However, their limited knowledge and experience may result in errors and suboptimal code. They may not even realize they lack a deeper understanding until receiving feedback from more experienced developers. Learn more in depth about the Dunning-Kruger Effect. The belief that random events will “even out” over time, leading to an expectation of a particular outcome based on previous results. A gambler who has not won anything on a slot machine in a while might experience this type of cognitive bias. The gambler starts to believe that they are “due for a win.” They continue to play, thinking that the machine must eventually pay out. However, the odds of winning on the slot machine are still the same for each individual spin. The previous outcomes do not influence current or future results. Learn more about the Gambler’s Fallacy. The tendency to believe, after an event has occurred, that you could have predicted or expected the outcome, even if you had no prior knowledge or information. After the fact, some events might feel inevitable. After a reading an unusual earnings report, an investor might believe they could have predicted it due to this cognitive bias example. In reality, they would not have had enough information to make an accurate prediction before reading the report. Learn more about Hindsight Bias. The tendency to focus on negative or threatening information over positive or neutral information. A person receives feedback on a presentation they gave at work. Although most of the feedback is positive and constructive, the person focuses solely on the one negative comment and feels discouraged and upset. This type of cognitive bias leads them to disregard all the positive feedback they receive. The tendency to attribute our successes to our abilities and efforts while attributing our failures to external factors beyond our control. Thanks to this type of cognitive bias, a student who did well on an exam might attribute their success to their intelligence and hard work. 
Conversely, they’ll attribute any mistakes to external factors such as a poorly designed test or the teacher providing inadequate study materials. Learn more in-depth about the Self-Serving Bias.
The tendency to make judgments based on how well an individual or event fits into a prototype or stereotype. A person might assume that someone wearing a white coat and carrying a stethoscope is a competent doctor, even if they are not, simply because they fit the “prototype” of what a doctor looks like. Learn more about the Representativeness Heuristic.
Cognitive biases can affect our thinking and decision-making processes in various ways, often leading to inaccurate or irrational judgments and decisions. These biases can also affect experimental results, leading to invalid or misleading findings. By being aware of them and taking steps to mitigate their influence, researchers can improve the validity of their research.
Digital citizens belong to the digital society. They use technology to actively engage in and with society. Digital citizenship empowers people to reap the benefits of digital technology in a safe and effective way. Digital citizenship is a right; digital skills enable people to exercise this right. "Our learners need to be equipped with a wide variety of digital skills to be allowed to be in the driving seat of technological-based innovation." European Schoolnet's Perspective on the New Skills Agenda for Europe Safe and responsible use of online technology The Digital Single Market strategy aims to have every European digital. However, children and young people have particular needs and vulnerabilities. Therefore, governments, civil society and industry have a joint responsibility – together with parents, teachers and peers – to ensure that the internet is a place of opportunities for everyone to access knowledge, to communicate, to develop skills and to improve job perspectives and employability. At European Schoolnet, we believe that digital and media literacies enable children and young people to become critical thinkers, to actively analyse, evaluate and create media messages, and to act responsibly in an online environment. In our view, we can all help to make a difference. Below you can find links to a variety of portals and resources to help you keep up-to-date with the latest trends and issues concerning online safety and responsibility and digital literacy. Online safety and responsibility - A safer online environment: Better Internet for Kids - A global day of focus on a safer and better internet: Safer Internet Day - Certified accreditation for schools: eSafety Label - Combat bullying through online and offline interactions: ENABLE - Foster confidence in behaviour changes through serious games: eConfidence - Foster critical thinking for digital citizens: Web We Want - Managing internet and mobile phone use: Family internet management tool Active and creative digital life To be active citizens in today's society we all need to be conversant with technology, as our everyday life is intertwined with digital tools. To function in a digital world, we need digital skills. We need them for learning, for work, for interacting with services, for buying and selling online, for entertainment, and for cultural, political and civic participation. Digital skills cannot be limited to operational, passive use. They involve active creation, critical understanding, and problem-solving through digital means. Education should strive to empower learners to become creators rather than just consumers of technologies. European Schoolnet supports the enhancement of digital skills of young people in a variety of ways. Below you can find more information concerning our projects and studies in the areas of digital skills, coding and computational thinking. Digital skills for jobs and life - Empowering youth for employability: I-LINC - Bridging the digital skills gap: Digital Skills and Jobs Coalition Coding and computational thinking - Promoting teaching and learning coding and programming: European Coding Initiative and DIS-CODE - Understanding the uptake of computational thinking in formal education: Computhink See all our current and past digital citizenship projects.
What Is Reindeer Moss? What Is Reindeer Moss? Reindeer moss is a species of lichen so called because it is the staple winter food of reindeer (and caribou) in Arctic and sub-Arctic regions. Cladonia rangiferina, also known as reindeer lichen, is light-colored, fruticose lichen belonging to the Cladoniaceae family. Other common names include reindeer moss and caribou moss, but these names may be misleading since it is not a moss. A similar-looking species also known by the common name Reindeer lichen is Cladonia portentosa. The animals reach the plant by scraping away the snow with their feet. But plant growth in those cold northern lands is so slow that the lichen can take more than 30 years to recover after the reindeer have grazed. These domesticated herds therefore have to travel long distances in search of food, and the Laplanders of northern Scandinavia, who depend on the animals for their livelihood, must travel with them. Fortunately, reindeer moss is especially abundant in Lapland, although it also grows extensively in much of northern Europe, the tundra (or treeless plains) of Siberia and the barren expanses of Arctic America. During the short summer the reindeer are able to feed on herbage and shoots then accessible in the valleys. These versatile animals provide the Laplanders with meat, milk, cheese and the raw materials for clothing, shoes and tents. They are also a means of transport. Reindeer moss is sometimes eaten by human beings, after being powdered and mixed with other food. But it leaves a slightly burning sensation on the human palate. This bluish-gray plant grows erect in tufts, and is remarkable for its many branches, which, strangely, resemble a deer’s antlers. In Scandinavia it has been used in the manufacture of alcohol, but difficulties in obtaining reindeer moss arise because of its slow growth rate (3 to 5 mm per year). Its periods of most rapid growth are spring and fall when high humidity and cool temperatures prevail.
Are you looking for fun and educational activities to do with your kindergartener? Look no further than Let’s Count (S4), a printable worksheet that helps children with counting and number recognition.
The worksheet features five rows of different objects, such as flowers, stars, and apples. Each row has a different number of objects, ranging from 1 to 5. The child’s task is to count the objects in each row and write the corresponding number in the box provided. This worksheet is not only a great way to practice counting, but it also helps with hand-eye coordination and fine motor skills as children write their numbers. Plus, it’s fun and visually appealing with its colorful illustrations.
Incorporate Let’s Count (S4) into your child’s daily routine, such as during a morning learning time or as a quiet activity during a car ride. It’s also a great resource for teachers and homeschoolers. To download Let’s Count (S4) and other kindergarten worksheets, visit educational websites or create your own using basic software such as Microsoft Word.
Overall, Let’s Count (S4) is a simple yet effective tool for helping young children develop their math skills. Make learning fun and engaging with this printable worksheet.
Infrared vision and high spatial resolution are a powerful combination that allows Webb to show never-before-seen details across the universe. Previous space telescopes could not capture the intricate backgrounds observed in Webb's images.
The Carina Nebula sits approximately 7,600 light-years from Earth. Sometimes referred to as the "Cosmic Cliffs," this star-forming region is massive: the highest peaks are seven light-years tall. This stellar nursery is home to many massive stars, several times larger than the Sun. It is one of the largest and brightest nebulae in the sky, found in the southern constellation Carina.
Discovered in 2014, WASP-96 b completes an orbit around its local star every 3.4 Earth days. The exoplanet is estimated to be half the mass of Jupiter. JWST captured the distinct signature of water, along with evidence of clouds and haze, in the atmosphere of the gas giant WASP-96 b, located in a distant solar system 1,150 light-years from Earth. The observation is the most detailed of its kind, demonstrating Webb's ability to analyze atmospheres at enormous distances and giving scientists a new tool to further characterize potentially habitable planets.
A planetary nebula, the Southern Ring Nebula is an expanding cloud of gases surrounding a dying star. Approximately 2,500 light-years away from Earth, the nebula is nearly half a light-year wide. For thousands of years, the dim star at the center has released rings of gas and dust. Webb captured this nebula with both of its onboard cameras, the Near-Infrared Camera (NIRCam) and the Mid-Infrared Instrument (MIRI). Continued, consistent imagery of these kinds of nebulae allows astronomers to dig into the specifics of what composes them: molecules, gases, dust, and more.
Found in the constellation Pegasus, Stephan's Quintet sits around 290 million light-years away from Earth. Discovered in 1877, it was the first compact galaxy group ever identified. Webb combined 150 million pixels from nearly 1,000 separate image files to create this enormous mosaic, its largest image so far.
President Joe Biden revealed the first image to the public on Monday, July 11. "Webb's First Deep Field" shows the galaxy cluster SMACS 0723, near the constellation Volans. When it was revealed, NASA administrator Bill Nelson noted that this section of the sky is about the size of a grain of sand held at arm's length. Faint structures in extremely distant galaxies are clearly visible, offering detailed views into the early history of the universe. Not counting the foreground stars, characterized by their six-pointed diffraction spikes, each dot present is an entire galaxy, providing unparalleled information about the deepest parts of space that humans have ever witnessed.
A common public misconception is that bacteria live alone and act as solitary organisms. This misconception, however, is far from reality. Bacteria always live in very dense communities. Most bacteria prefer to live in a biofilm, a name for a group of organisms that stick together on a surface in an aqueous environment. The cells that stick together form an extracellular matrix which provides structural and biochemical support to the surrounding cells. In these biofilms, bacteria increase efficiency by dividing labor: the exterior cells in the biofilm defend the group from threats while the interior cells produce food for the rest.
While it has long been known that bacteria can communicate through the group with chemical signals, also known as quorum sensing, new studies show that bacteria can also communicate with one another electrically. Ned Wingreen, a biophysicist at Princeton, describes the significance of the discovery: "I think these are arguably the most important developments in microbiology in the last couple years. We're learning about an entirely new mode of communication."
An entirely new mode of communication it is! Here's how it works: ion channels in a bacterial cell's membrane allow electrically charged molecules to pass in and out, just as they do in a neuron, or nerve cell. Neurons maintain their resting state by pumping out sodium ions and letting in potassium ions; when a threshold is reached, depolarization occurs. This is known as an action potential. Gurol Suel, a biophysicist at UCSD, emphasizes that while the bacterial electrical impulse is similar to a neuron's, it is much slower, a few millimeters per hour compared to a neuron's 100 meters per second.
So what does this research mean? Scientists agree that this revelation could open new doors to discovery. Suel says that electrical signaling has been shown to be stronger than traditional chemical signaling. In his research, Suel found that potassium signals could travel at constant strength for 1,000 times the width of a bacterial cell, much farther and stronger than any chemical signal. Electrical signaling could also mean more communication between different bacteria. Traditional chemical signaling relies on receptors to receive messages, while bacteria, plant cells, and animal neurons all use potassium to send and receive signals. If these findings are correct, there is potential for the development of new antibiotics in the future.
Learning about electrical signaling in bacteria has complicated our understanding of organisms once thought to be simple. El-Naggar, another biophysicist at USC, says, "Now we're thinking of [bacteria] as masters of manipulating electrons and ions in their environment. It's a very, very far cry from the way we thought of them as very simplistic organisms."
Solder is an alloy of nonferrous metal that, when melted, becomes fluid and bonds with the metals to be joined. The alloy used is constituted to have a melting and flow point lower than the metals being soldered. The fluid solder flows through the space between the metals being joined and becomes a permanent filler. A bond results and the soldered items are joined together. It is imperative that the items to be bonded are clean and fit well together. Often a flux is added to protect the metal surface from oxidation during the soldering process and to alter the surface tension of the molten solder. The particular alloy used depends on the metals to be bonded. Platinum solders are 60-70% gold with platinum, palladium (or other platinum family metal), or silver to give it a white color. Platinum solder can have a higher melting point because platinum itself has a very high melting point. Gold solders are alloys of gold, copper, and silver with the low temperature melting solders having an addition of cadmium, bismuth, zinc, or tin. The proportions of gold to other metals are manipulated so the solder matches the color of the gold on which it’s being applied. Silver solder usually contains silver and copper, sometimes zinc is added. A range of melting points for each type of solder is generally available so that the item being assembled or repaired can be soldered multiple times without releasing earlier solder joints. Generally, they run the gamut of very hard (highest temperature melting point) through several lower melting ranges to easy (the lowest melting solder available.) An item is considered to be soldered when it has been bonded through the use of solder. The process of soldering varies according to the metals being bonded, the temperature required to melt the solder, and the oxidation properties of the metals involved. A vast array of tools are used to work with solder, to clamp the items to be soldered, and to do the actual soldering. These tools are also dependant on the metals being soldered and the solder’s melting point. The actual fuel used to generate the soldering flame also varies, sometimes by the user’s choice and sometimes dictated by the item being soldered. There are three basic steps to soldering: - Preheating the entire item. - Localized heat on the solder joint. - Heat withdrawal. Post soldering most items are put into a pickling or acid bath to remove firestain, surface oxide, flux, and other residues. Note: In jewelry, we sometimes see the use of inappropriate, extremely low temperature melting solders such as lead solder. This happens when a repair person wants a quick repair without the hassle of determining what has occurred earlier in the manufacture or repair of an item. These joins are sometimes visible and unsightly and can often be remedied by the use of modern technology such as a jewelry laser. The Inset at the Upper Left Corner Shows the Area of Improperly Used Lead Solder on the Reverse of this Brooch.
Rabindranath Tagore, an eminent poet, philosopher, musician, and polymath, is a towering figure in Indian literature and culture. Every year, on the auspicious occasion of Rabindranath Tagore Jayanti, we commemorate his birth anniversary to honor his extraordinary contributions to art, literature, and social reform. This blog post serves as a tribute to this remarkable luminary, shedding light on his life, achievements, and enduring impact. Early Life and Education: Rabindranath Tagore was born on May 7, 1861, in Calcutta (now Kolkata), India, into a prominent Bengali family. He was the youngest of thirteen children, and his father, Debendranath Tagore, was a revered philosopher and religious leader. Tagore's early education was unconventional, as he was primarily homeschooled and exposed to a diverse range of subjects and disciplines. This broad foundation had a profound influence on his later intellectual pursuits. Literary and Artistic Journey: Tagore's literary journey began at a young age when he started writing poetry. His first collection of poems, "Kabi Kahini" (The Poet's Tale), was published when he was just sixteen years old. His lyrical and poignant verses, infused with themes of love, nature, and spirituality, soon established him as a prodigious poet. Tagore's creative genius extended beyond poetry. He was also an accomplished playwright, novelist, essayist, and songwriter. His most famous work, "Gitanjali" (Song Offerings), earned him the prestigious Nobel Prize in Literature in 1913, making him the first non-European to receive this honor. The collection of poems in "Gitanjali" beautifully captures his philosophical reflections and spiritual yearnings. Educational Reforms and Visionary Philanthropy: Tagore was a visionary reformer in the field of education. He founded a school named "Visva-Bharati" in Santiniketan, West Bengal, which aimed to provide a holistic education that integrated the best of both Western and Indian traditions. The institution became a center for intellectual and artistic pursuits, attracting scholars, artists, and students from all over the world. Today, Visva-Bharati University stands as a testament to Tagore's enduring educational philosophy. Social and Political Activism: Deeply influenced by the socio-political conditions of his time, Tagore was an outspoken critic of British colonial rule in India. He used his literary and intellectual prowess to advocate for social justice, equality, and cultural pride. Tagore's songs, often referred to as "Rabindra Sangeet," served as anthems of resistance and inspired countless individuals in the struggle for independence. Legacy and Global Influence: Rabindranath Tagore's impact extends far beyond the boundaries of India. His works have been translated into numerous languages, spreading his poetic and philosophical wisdom to a global audience. His ideas on universalism, humanism, and the celebration of diversity continue to resonate in today's world, transcending time and cultural barriers. Furthermore, Tagore's contributions to literature and music have left an indelible mark on the world stage. His songs and poems are celebrated in various cultural festivals, and his plays are performed on stages across the globe. His work remains a source of inspiration for artists, writers, and thinkers worldwide, affirming the enduring relevance and power of his ideas. On Rabindranath Tagore Jayanti, we pay homage to a multifaceted genius whose indomitable spirit and creative brilliance continue to inspire generations. 
Scientists have much to learn about predicting future climate conditions, particularly when calculating change for certain regions on the earth's globe, says Elwynn Taylor, Iowa State University Extension climatologist. Yet, he also warns that both long- and short-term warming and cooling cycles signal potential troubles ahead for Corn Belt crop production. “Long term, we've had a natural warming that's been going on for about 20,000 years, since the last glaciers melted from on top of Des Moines,” says Taylor. “More recently, our climate has been going through 90-year, short-term warming and cooling cycles,” he says. If history repeats itself, Taylor says the next 90-year warming cycle would likely peak in 2025. “We haven't had any year as bad as 1936 since 1936 — when the last 90-year warming cycle peaked,” he explains. “However, the effect that people are now having on our climate might speed up the cycle a bit.” More intense summer heat for the region — either man-made or natural — would be detrimental to corn production, warns Jerry Hatfield, a supervisory plant physiologist at the National Soil Tilth Research Laboratory, Ames, IA. He notes that the corn plant is more vulnerable to extreme heat than other row crops, such as soybeans. “In the Corn Belt, if this (global warming) trend continues, we could see significantly reduced corn yields in the next 30-50 years,” says Hatfield. “As global warming increases, the Corn Belt would likely encounter much higher temperatures during the pollination phase of corn plant development than in (more temperate) years. We would likely see the daytime high temperatures above 95° F, which is lethal to corn pollination. We would also have higher nighttime temperatures and respiration rates, which would result in smaller grain size and less grain fill.” RISING TEMPERATURES and CO2 levels in the traditional Corn/Soybean Belt would likely boost — rather than deflate — soybean yields compared to corn if rainfall remains ample, adds Hatfield. “Soybeans respond well to high CO2 levels by increasing photosynthesis and production,” he explains. “Temperatures aren't lethal to pollination in soybeans until they reach 104° F.” Yet, adequate precipitation will still be the “biggest wild card” for crop production if global warming continues, he says. Hatfield cites the drought-suffering Southeastern U.S. as an example of what increased global warming might have in store for southern portions of the current Corn/Soybean Belt. “In the Southeastern U.S., as temperatures have increased in recent years, precipitation has decreased,” he says. “So, in the Corn Belt, it may be another 10-20 years before we see a consistent negative impact of global warming on crop production, whereas in the Southeast it might occur sooner or already be occurring.” To adapt to warmer summertime conditions, enterprising farmers could continue to expand the current Corn Belt to the north and possibly further west, says Hatfield. “However, if such an expansion were to occur, the key to good production would be getting reliable rainfall for that northwestern Corn Belt,” he emphasizes. AN INCREASED SHIFT in soybean production to the north and west may soon transpire, concurs Mark Seeley, University of Minnesota Extension climatologist. “(With more global warming), we're estimating that soybean yields will continue to increase in the Corn Belt and decrease in the Southeast,” he says. 
“So, there may be a shift in crop production, where the Southeast will grow fewer soybeans and northwestern states grow more soybeans.” Still, global warming has yet to hurt Corn Belt crop production, points out Seeley. “Over the last several years, most areas in the Corn Belt have been reporting longer frost-free growing seasons and an upward trend in growing degree days,” he says. “Generally, we've been getting region-wide boosts in corn production as a result. However, this (longer growing season) hasn't been established in the region long enough to know if that trend will hold.” More extreme weather patterns are likely to develop if global warming continues, emphasizes Seeley. “From a precipitation standpoint, we've already seen amplified variability in the Great Lakes region in recent years — more episodes of extreme dry and extreme wet, with very abrupt transitions between the two,” he says. “According to the Intergovernmental Panel on Climate Change (IPCC), this extreme precipitation variability is now becoming characteristic of many different landscapes besides the Great Lakes region.” If temperatures continue to climb, weather volatility will likely escalate, agrees Hatfield. “One thing we do know is that with higher global temperatures, precipitation will increase in variability,” he says. “More heat will usher in an extremely variable precipitation regime, which will wreak havoc on crop production in the traditional corn and soybean growing areas. So, the hardest thing to manage in the future, if global warming continues, will be the extreme variability in precipitation.” CONSECUTIVE COLD, WET SPRINGS UNLIKELY After enduring an unusually chilly, soggy spring this year, many Midwestern farmers might wonder if they should expect similar conditions in 2009. Not likely, say regional climatologists. The last time Iowa experienced two consecutive wet, cold springs was 1992 and 1993, says Elwynn Taylor, Iowa State University Extension climatologist. “It just doesn't happen very often,” he says. “Only about one out of six springs are either too wet or too cool to be ideal.” Few years exhibit spring weather similar to what occurred in 2008, says Mark Seeley, University of Minnesota Extension climatologist. To experience similar back-to-back years is even rarer, he adds. “The aberration that we had this spring with cool, persistent wetness was a fairly close match to 1996 - so it's been 12 years since we've seen this,” says Seeley. “Historically, there are just very few years when you might see back-to-back wet, cool springs.” On the other hand, Taylor points out that if next spring starts out unusually cool, then the odds increase that it will also be wet. “There is some connection between hot-and-dry and cool-and-wet weather - especially in Iowa,” he says. “So, it's more likely to have a cold, wet spring than a cold, dry one.”
For the first time ever, scientists have grown an organ from a group of cells transplanted into mice — a thymus, an organ located near the heart that is essential for immune system function. The findings, published in Nature Cell Biology, could have massive implications for the future of organ transplants, though scientists say they are years away from attempting the procedure in humans.
The organ was grown out of mouse embryo cells, which were genetically "reprogrammed" to become a thymus. In the past, scientists have been able to grow human brain matter equivalent to that found in a 9-week-old fetus. But the thymus is a much less complex organ, and it actually achieved functionality in the experiment.
"This was a complete surprise to us, that we were really being able to generate a fully functional and fully organised organ starting with reprogrammed cells in really a very straightforward way," researcher Clare Blackburn told the BBC. "This is a very exciting advance and it's also very tantalising in terms of the wider field of regenerative medicine."
Regenerative therapies scientist Dr. Paolo de Coppi gave some context for the achievement: "Engineering of relatively simple organs has already been adopted for a small number of patients and it is possible that within the next five years more complex organs will be engineered for patients using specialised cells derived from stem cells in a similar way as outlined in this paper.
"It remains to be seen whether, in the long term, cells generated using direct reprogramming will be able to maintain their specialised form and avoid problems such as tumour formation."
The Importance Of Cultural Literacy On Culture And Education
Cultural literacy is the ability to understand and participate fluently in a given culture. Teachers are expected to address issues in a way that interests learners, building on the basis of their culture. To achieve this, teachers should recognize the different, unique needs of students who fall into different cultural categories. In other words, local schools should be aware of the cultural background of the learners. Also, learners who come from a literacy-based culture perform better in literacy. In a school setting, the concentration on studying specific subjects and sitting exams makes it difficult for students to learn culture. A teacher who weaves culture into the lesson attracts more attention from the learners.
One of the approaches that is prioritized is the promotion of learning across the curriculum. This idea has been accommodated in the syllabus and is aimed at improving the performance of learners. These priorities call on teachers to make links between subjects. In doing so, they draw on a range of themes and topics that can be learnt at the same time. This captures the idea of integrated learning, helping learners retain important information learnt earlier in a different subject. According to the syllabus, teachers should instruct students by covering a variety of subjects in one lesson. To achieve that, the instructor incorporates all subjects into the lesson plan. As a result, students are introduced to various ways of incorporating learning into life, including career planning. Additionally, the syllabus requires that teachers adapt how they teach across the curriculum. Teachers will then be in a position to show learners how different subjects apply to all aspects of life.
Teachers are required to plan how students using the ESL scales are going to learn. These scales provide a common framework for teachers to assess students who are using English as a second language. Identifying the needs of ELD/D students helps provide a platform where they can be rated fairly alongside students who use English as their first language. The reason the syllabus paid more attention to ELD/D students and students with special needs was to address their unique requirements and position them where they can access education fairly. The syllabus provides for the use of texts that can be accessed by students with special needs. There are two different types of texts used in a learning environment: one is used to address issues at a surface level for mentoring purposes, while the other is used for in-depth teaching. Therefore, there is a need for teachers to include a wide range of texts in their teaching in order to improve students' understanding. Such texts should address issues to do with the daily lives of the students.
Stroke survivors and their loved ones understand that a stroke has devastating physical and neurological effects. Every stroke is different, and there is no way to predict stroke severity until examination by specialized healthcare professionals. Physicians measure the initial damage of a stroke by using the NIH Stroke Scale, or NIHSS. The NIHSS measures the level of brain damage from a stroke along with physical and cognitive impairment. Brain functions including consciousness, vision, sensation, movement, speech, and language are measured when evaluating stroke severity. The larger the NIH stroke score, the more devastating the damage to brain functions.
- 0: no stroke
- 1-4: minor stroke
- 5-15: moderate stroke
- 15-20: moderate to severe stroke
- 21-42: severe stroke
During the NIHSS test, a physician will assess the following and assign points appropriately:
- Consciousness: Tested by asking the patient a simple question (month and day) and assessing their ability to follow a simple command (closing eyes and squeezing a hand)
- Gaze: Tests the patient's capability of moving their eyes normally by following an object with their gaze
- Visual Field: Examines how much a patient can see outside of what is directly in front of them
- Facial Palsy: Verifies whether a patient can adequately move their facial muscles
- Motor Arm: Tests whether a patient can hold their arm out for 10 seconds without drift
- Motor Leg: Tests whether a patient can hold their leg up for 5 seconds without drift
- Limb Ataxia: Tests for motor damage in the cerebellum by having the patient touch their fingers to their nose and their heels to their shins on both sides
- Sensory: Assesses the response to sensory stimuli such as a pinprick
- Language: The patient is asked to describe the situation taking place in a picture to test their language capabilities
- Dysarthria: Evaluates the amount of speech slurring in the patient
- Extinction and inattention: Assesses how much attention the patient gives to their five senses and their environment
Scores on the NIHSS can be a tool for predicting patient outcome. Generally speaking, the lower the score, the greater the probability of full recovery, while the higher the score, the greater the probability of patient death.
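For readers who want to turn the score bands above into something executable, here is a minimal sketch in Python that maps a total NIHSS score to the severity categories listed in this article. The function name is made up for this illustration, and the choice to place a score of exactly 15 in the "moderate" band (the list above shows 15 in two ranges) is an assumption of this sketch, not part of the official scale.

def nihss_severity(score: int) -> str:
    """Map a total NIHSS score (0-42) to the severity bands listed above.

    Assumption: a score of 15 is treated as "moderate", since the article's
    list places 15 in two adjacent bands.
    """
    if not 0 <= score <= 42:
        raise ValueError("NIHSS totals range from 0 to 42")
    if score == 0:
        return "no stroke"
    if score <= 4:
        return "minor stroke"
    if score <= 15:
        return "moderate stroke"
    if score <= 20:
        return "moderate to severe stroke"
    return "severe stroke"

# Example: 2 points for motor arm, 1 for facial palsy, 1 for dysarthria
print(nihss_severity(2 + 1 + 1))  # -> "minor stroke"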
Definition: The Net Present Value (NPV) is a means of evaluating the actual long-term profitability of an investment or a project through the initial outflow, future cash flows and the time value of money. Also known as the discounted cash flow method, it backs the capital budgeting decisions of a company. It is an effective means of forecasting the future outcome of a particular investment project. However, it cannot be taken as a stand-alone tool for financial analysis, since it is paired with various other practices.
Net Present Value (NPV) Formula
The NPV method provides the actual profitability of a project by assessing the present value of its future returns. To compute the net present value of a project or any investment opportunity, we apply the following formula:
NPV = [CF1 / (1 + r)^1] + [CF2 / (1 + r)^2] + ... + [CFn / (1 + r)^n] - Initial investment
where CFt is the net cash inflow expected in year t, r is the discount rate and n is the life of the project in years.
Net Present Value Example
An electronics manufacturing company plans to undertake a new investment opportunity, i.e., the manufacture of next-generation home theatres. The estimated life is four years, the discount rate is 10%, and the company has estimated the cash inflows for each of the four years.
Net Present Value Calculation
The net present value is always expressed as a single monetary figure, and the three possible outcomes of the computation are as follows:
Negative NPV: A negative NPV shows that the present value (PV) of the cash inflows is lower than the outflow. This type of investment opportunity is not worth it. It is denoted as NPV < 0.
Zero NPV: This is when the cash inflows and outflows have equivalent present values, i.e., NPV = 0.
Positive NPV: Here, the PV of the cash inflows is higher than that of the outflow. Therefore, it is a favourable investment idea. It is shown as NPV > 0.
The value acquired from the computation indicates a profitable project if it is positive. In the above example, the NPV works out to $33,771; therefore, it is a suitable investment opportunity.
Advantages of Net Present Value
The net present value of a project guides the finance team in making wise decisions. The method has numerous benefits for the company in the long run:
- Simple to Use: The net present value method is easy to apply to a real business project if the cash flows and discount rate are known.
- Provides Time Value of Money: This method takes into consideration the effect of inflation on the future profitability of the project, thus accounting for the time value of money.
- Customization: In NPV, the discount rate can be adjusted according to the risk prevailing in the industry, along with various other factors, to obtain an appropriate output.
- Determines Investment Value: The earnings throughout the project's life span can be estimated using the NPV method, which lets the company know the future value of a specific investment.
- Comparable: It facilitates the comparison of the values generated in future by two or more similar projects, to find the most feasible option.
- Comprehensive Method: It finds the present value of a project by examining the effect of various factors such as risk, cash outflows and inflows.
- Measures Profitability: It is one of the most proficient methods of determining the actual profitability of a project over its lifetime.
- Identifies Risk: Without NPV, managers could fail to spot the risk of loss or meagre profitability in a long-lived project; with it, such projects are flagged by a negative or zero NPV.
- Reinvestment Assumption: The method's reinvestment assumption is also more realistic, since the intermediate cash flows are assumed to be reinvested at the discount rate rather than at the project's own rate of return, as is assumed under the internal rate of return method.
Disadvantages of Net Present Value
Net present value is an effective means of evaluating a project's profitability; however, it has certain drawbacks. These are as follows:
- Forecasting Errors: While assessing the viability of a long-lived project, the estimation of cash flows may not be very accurate for the later years.
- Minimum Contribution to Shareholder's Value: Shareholder value maximization is the result of the overall growth of a company, whereas a high NPV alone contributes little towards it.
- Depends Upon Discount Rates: Since this method is based on discount rates, even a slight change may result in an entirely different NPV.
- Neglects Sunk Cost: Sunk costs such as research and development and trials, incurred before the project starts, are often high. These costs are wholly ignored in the computation of NPV.
- No Effect on EPS and ROE: Often, projects with a high NPV but a short duration may not enhance earnings per share or return on equity.
- Incomparable for Differing Project Sizes: The concept of capital rationing is applied in NPV; therefore, projects that fall outside the capital budget limit cannot be compared under this method.
Imagine you plan to invest in a million-dollar project that has a payback period of five years. Would you judge it straightaway by the percentage return it generates? No, because the value of money reduces with time, and so does the profitability. Therefore, it becomes essential to analyze the net present value of future cash flows.
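To make the calculation concrete, here is a minimal sketch in Python of the discounting formula described above. The cash-flow figures below are hypothetical placeholders, not the numbers behind the article's $33,771 example, since that cash-flow table is not reproduced here.

def npv(rate: float, initial_outflow: float, cash_inflows: list) -> float:
    """Net present value: discount each year's inflow to the present
    and subtract the initial outflow."""
    discounted = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_inflows, start=1))
    return discounted - initial_outflow

# Hypothetical four-year project: 10% discount rate, $100,000 invested up front.
inflows = [40_000, 45_000, 50_000, 55_000]
result = npv(rate=0.10, initial_outflow=100_000, cash_inflows=inflows)
print(f"NPV = ${result:,.0f}")  # positive -> worth considering, negative -> reject

A positive result here means the discounted inflows more than cover the initial outflow at the chosen discount rate, which is the same accept/reject logic described in the outcomes above.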
PostgreSQL 8.3.23 Documentation, Appendix F. Additional Supplied Modules
The fuzzystrmatch module provides several functions to determine similarities and distance between strings.
The Soundex system is a method of matching similar-sounding names by converting them to the same code. It was initially used by the United States Census in 1880, 1900, and 1910. Note that Soundex is not very useful for non-English names.
The fuzzystrmatch module provides two functions for working with Soundex codes:
soundex(text) returns text
difference(text, text) returns int
The soundex function converts a string to its Soundex code. The difference function converts two strings to their Soundex codes and then reports the number of matching code positions. Since Soundex codes have four characters, the result ranges from zero to four, with zero being no match and four being an exact match. (Thus, the function is misnamed — similarity would have been a better name.)
Here are some usage examples:
SELECT soundex('hello world!');
SELECT soundex('Anne'), soundex('Ann'), difference('Anne', 'Ann');
SELECT soundex('Anne'), soundex('Andrew'), difference('Anne', 'Andrew');
SELECT soundex('Anne'), soundex('Margaret'), difference('Anne', 'Margaret');
CREATE TABLE s (nm text);
INSERT INTO s VALUES ('john');
INSERT INTO s VALUES ('joan');
INSERT INTO s VALUES ('wobbly');
INSERT INTO s VALUES ('jack');
SELECT * FROM s WHERE soundex(nm) = soundex('john');
SELECT * FROM s WHERE difference(s.nm, 'john') > 2;
This function calculates the Levenshtein distance between two strings:
levenshtein(text source, text target) returns int
Both source and target can be any non-null string, with a maximum of 255 characters.
test=# SELECT levenshtein('GUMBO', 'GAMBOL');
 levenshtein
-------------
           2
(1 row)
Metaphone, like Soundex, is based on the idea of constructing a representative code for an input string. Two strings are then deemed similar if they have the same codes.
This function calculates the metaphone code of an input string:
metaphone(text source, int max_output_length) returns text
source has to be a non-null string with a maximum of 255 characters. max_output_length sets the maximum length of the output metaphone code; if longer, the output is truncated to this length.
test=# SELECT metaphone('GUMBO', 4);
 metaphone
-----------
 KM
(1 row)
The Double Metaphone system computes two "sounds like" strings for a given input string — a "primary" and an "alternate". In most cases they are the same, but for non-English names especially they can be a bit different, depending on pronunciation.
These functions compute the primary and alternate codes:
dmetaphone(text source) returns text
dmetaphone_alt(text source) returns text
There is no length limit on the input strings.
test=# select dmetaphone('gumbo');
 dmetaphone
------------
 KMP
(1 row)
The SS49E is a linear hall-effect sensor. It can measure both north and south polarity of a magnetic field and the relative strength of the field. The output pin provides an analog output representing if a magnetic field is present, how strong a present field is, and if it is a north or south polar field. If no magnetic field is present the SS49E will output a voltage around half of the source voltage. If the south pole of a magnet is placed near the labeled side of the SS49E (the side with text etched on it), then the output voltage will linearly ramp up towards the source voltage. The amount of the output voltage increase is proportional to the strength of the magnetic field applied. If the north pole of a magnet is placed near the labeled side of the SS49E then the output voltage will linearly ramp down toward the ground voltage relative to the strength of the magnetic field. For example, if you power the SS49E with 5V and there is no magnetic field present then the sensor's output will be around 2.5V. In the same example, if you place the south pole of a strong magnet near the labeled side of the sensor, then the output voltage will go up to around 4.2V and if you placed the north pole of a strong magnet near the labeled side of the sensor, then the output voltage will drop to around 0.86V. You can easily use the SS49E with a microcontroller (such as Arduino) or single board computer (SBC). Just provide power to the GND and VCC pins of the SS49E and connect its output pin to an analog input on your microcontroller or SBC, which you can then measure the analog voltage of to calculate the sensor's measured data. So why use a Hall-effect sensor? Hall-effect sensors are immune to most environmental disturbances that may affect optical or mechanical devices, such as vibration, moisture, dirt or oil films, ambient lighting, etc. Also, they are a simple way to measure the presence of a magnet and even electrical current running through a conductor.
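The voltage behavior described above can be turned into a small helper for a microcontroller or SBC project. The Python sketch below is illustrative only: the dead-band and the percentage scaling are arbitrary choices for this example, not values from the SS49E datasheet, and on a real board you would first convert the raw ADC count to volts (for instance, reading * reference_voltage / 1023 on a 10-bit ADC).

def interpret_ss49e(v_out: float, v_supply: float = 5.0, deadband: float = 0.05) -> str:
    """Interpret an SS49E output voltage (illustrative sketch).

    Assumes the quiescent output sits near half the supply voltage, rising
    toward the supply for a south pole and falling toward ground for a north
    pole facing the labeled side, as described above. The deadband and the
    percentage scaling are assumptions of this sketch, not specifications.
    """
    midpoint = v_supply / 2.0
    offset = v_out - midpoint
    if abs(offset) < deadband:
        return "no significant magnetic field"
    polarity = "south" if offset > 0 else "north"
    # Express relative strength as a rough fraction of the usable swing.
    strength = min(abs(offset) / midpoint, 1.0) * 100
    return f"{polarity} pole facing the labeled side, about {strength:.0f}% of full swing"

# Example readings from the text: 2.5 V (no field), 4.2 V (south), 0.86 V (north)
for v in (2.5, 4.2, 0.86):
    print(v, "->", interpret_ss49e(v))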
Parent Conversation Guides Shapes & Colors The purpose of these conversations is to build your child’s awareness of shapes in action – shapes in their daily life. - Let’s see if we can find 10 circles in the kitchen – in 1 minute? - Hold a glass upright in front of them. Can they find the circle? - Why do they think plates are round? - Can you find a green square? - Think about having mealtimes with a shape or color focus. Have foods cut into the shape, napkins folded into the shape. Let your child pick the shape or color the day before and spend some time thinking about it. - Notice tires on cars and bikes, halos - Why no round buildings?
A beary important lesson A beary important lesson Share a story about a disobedient bear cub. Discussion point: Just as Mama Bear teaches her cubs survival rules, parents also teach rules that keep kids safe. Share the following story about a mother bear and a disobedient cub if you feel it is appropriate for your children. After reading any of these stories about obedience, have your children re-enact the different roles to further enhance their learning. The grizzly bear and her cubs* There were three young grizzly cubs playfully exploring the woods near their den. The scent of food caused them to be drawn away from the protection of their mother. Their curiosity drew them closer and closer to danger. The food they had smelled was a caribou carcass that belonged to a family of wolves who had hidden it in the bushes and dirt near their own den. As the cubs came closer, the wolves circled preparing for a deadly attack. The mother bear heard the cries from her cubs and came running to their defence. The mother bear fought savagely and was finally able to separate herself and her cubs from the attacking wolves. They had just reached the safety of a nearby hill when the weakest of the three cubs ignored the protective wishes of its mother and returned to sniff the caribou. In seconds it was surrounded by the wolf pack. The mother now had to expose herself and the other two cubs to the battle again. The lead wolf had returned from hunting and distracted the mother bear while four other wolves attacked the weak cub. The mother bear broke free from the lead wolf and roared furiously at those who were attacking her cub. She wildly swung her paws in defence. Finally, she drove the three cubs through a thick patch of brush and into a glacial stream. The smallest cub cringed on the shore, frightened . . . The mother pushed it into the water so the wolves would no longer follow. The wounds the young cub suffered were a lasting reminder of the consequences of not following the instructions of the one caring for you. * Reproduced from Character Sketches from the Pages of Scripture, Illustrated in the World of Nature Volume I. Institute in Basic Life Principles, Oak Brook, IL, 1976. www.iblp.org. Reproduced with permission. Questions for discussion - Which rule did the little bear disobey? - What rules do Mom and Dad give you for your own safety? - What could happen to you when you choose to disobey these rules? - What do kinds of food do Mom and Dad ask you to eat, but you would prefer not to? - What do you think would happen to you if you ate cake, ice cream and cookies at every meal? - What kinds of traps do bears get stuck in? - What kinds of “traps” does Satan set for people? Here are some key points to emphasize in discussing this story with your children: A mother grizzly must help her cubs to survive by teaching them what foods are available in which seasons and how to find them. Likewise, as parents, it is our responsibility to teach you how to eat healthy meals. For example, we encourage you to eat balanced meals, including lots of vegetables. A mother bear also teaches her cubs how to avoid natural dangers such as hunters, bear traps, porcupines and wolves. Again, without her guidance, the cubs’ chances of survival would decrease significantly. It is our job, as your parents, to teach you how to be safe. That’s why we give your rules such as “Don’t play in the street” and “Don’t go anywhere with strangers.” These rules keep you safe physically. It’s also our job to keep you safe spiritually. 
Satan is our enemy and he is always tempting us to do wrong. We need to teach you how to overcome the temptation that Satan sends and how to live to please God instead.
Atoms, Molecules, and Ions

The atom is a basic unit of matter consisting of a dense, central nucleus surrounded by a cloud of negatively charged electrons. An atom is the smallest unit of an element that can take part in a chemical reaction. The atomic nucleus contains a mix of positively charged protons and electrically neutral neutrons (except in the case of Hydrogen-1, which is the only stable isotope with zero neutrons). The electrons of an atom are bound to the nucleus by the electromagnetic force. Likewise, a group of atoms can remain bound to each other, forming a molecule. An atom containing an equal number of protons and electrons is electrically neutral; otherwise it has a positive or negative charge and is an ion. An atom is classified according to the number of protons and neutrons in its nucleus: the number of protons determines the chemical element, and the number of neutrons determines the isotope of the element.

The concept of the atom first arose when the Indian thinker Maharishi Kanad proposed that if we go on dividing the materials around us, a stage will come when they cannot be divided further. He named the material found at this stage Parmanu, which is the Hindi word for atom. Later the Greek philosopher Democritus coined the name "atom," which in Greek means indivisible. Later still, the French chemist Antoine L. Lavoisier (the "Father of Chemistry") discovered two laws of chemical combination - the law of conservation of mass and the law of constant proportions - which raised the question, "What comprises the elements?" To answer this question, the British scientist Dalton again proposed that there exists a stage at which matter can no longer be divided, and he named that stage the atom. He also stated that atoms can neither be created nor destroyed.

The principles of quantum mechanics were used to successfully model the atom. Relative to everyday experience, atoms are minuscule objects with proportionately tiny masses. Atoms can only be observed individually using special instruments such as the scanning tunneling microscope. Over 99.9% of an atom's mass is concentrated in the nucleus, with protons and neutrons having roughly equal mass. Each element has at least one isotope with an unstable nucleus that can undergo radioactive decay. This can result in a transmutation that changes the number of protons or neutrons in the nucleus. Electrons that are bound to atoms possess a set of stable energy levels, or orbitals, and can undergo transitions between them by absorbing or emitting photons that match the energy differences between the levels. The electrons determine the chemical properties of an element, and they strongly influence an atom's magnetic properties.

Everything, from the computer you are on to the fingers you are typing with, is made of atoms. These tiny particles combine, transform, and bond to create the world around us. Atoms, in turn, are made of electrons, protons and neutrons, in various combinations. The basic Bohr model shows their relationship.

Electrons are tiny, subatomic particles that are negatively charged. Their mass is insignificant compared to that of protons and neutrons, so, for most purposes, it is ignored. Protons are small, subatomic particles that are positively charged. Their mass is not ignored in calculations of atomic mass (also known as atomic weight), and is given a value of 1 atomic mass unit, or amu (also known as a dalton). The atomic number of an element is the number of protons in one atom of that element.
A neutron is a particle that has no charge. Its mass is included in calculations of atomic mass (also known as atomic weight) and, like that of the proton, is given a value of 1 atomic mass unit, or amu (also known as a dalton). An ion is an atom possessing an electric charge. Atoms which contain more electrons than protons are negatively charged ions, or anions, while atoms possessing more protons than electrons are positively charged ions, or cations. Sometimes a group of atoms possessing an electric charge which reacts as a single unit is called a radical.
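To make this bookkeeping concrete, here is a minimal Python sketch (an added illustration, not from the original text) that derives the quantities described above: the element from the proton count, the mass number from protons plus neutrons, and the net charge from the difference between protons and electrons. The small symbol table is an assumption for the example, not a complete periodic table.

```python
# Minimal sketch: classify a particle from its proton, neutron and electron counts.
# The symbol table below covers only a few elements, purely for illustration.
SYMBOLS = {1: "H", 2: "He", 6: "C", 8: "O", 11: "Na", 17: "Cl"}

def describe_atom(protons, neutrons, electrons):
    mass_number = protons + neutrons     # protons and neutrons each count as ~1 amu
    charge = protons - electrons         # more electrons -> anion, more protons -> cation
    symbol = SYMBOLS.get(protons, f"Z={protons}")
    if charge == 0:
        kind = "neutral atom"
    elif charge > 0:
        kind = f"cation ({charge:+d})"
    else:
        kind = f"anion ({charge:+d})"
    return f"{symbol}-{mass_number}: {kind}"

print(describe_atom(11, 12, 10))   # Na-23: cation (+1)
print(describe_atom(17, 18, 18))   # Cl-35: anion (-1)
print(describe_atom(1, 0, 1))      # H-1: neutral atom
```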
Before the introduction of cloud computing systems there were two ways of delivering web hosting services: using stand-alone physical or virtual servers, or using some form of cluster of physical computers (servers). The most common was to host websites as Virtual Hosting accounts on stand-alone physical servers. Virtual Hosting is a method of hosting multiple websites and domain names on a stand-alone server. The website accounts are divided into different directories and share the resources of the underlying physical server - CPU, memory and everything else.

Most shared hosting accounts are "name-based" accounts. "Name-based" means that the physical host server uses multiple hostnames and/or domain names running on the same IP address. For example, the server is able to receive requests for domain1.com, domain2.com, domain3.com, domain4.com, domain5.com and many more. Each one resolves to the same IP address. However, if the server receives an HTTP request for domain1.com, for example, it serves an HTML file placed in the /var/www/user/domain1/site/ directory, and so on.

Virtual Hosting's downside has always been the server load created when different accounts need to use a lot of resources. There was only one way to overcome the server load issues before cloud computing: to create some form of cluster of physical servers that would divide the computing tasks between them. Although technically possible, this was not a very common method of improving Virtual Hosting systems, due to the way server automation software (control panels) used to work. Such software was usually locked to the CPU and physical hardware of a single host and was thus unable to function properly in a multi-server environment.

So, those who delivered Shared Hosting before cloud computing - which can be thought of as computer virtualization that works in a multi-server environment - had little choice. They had to deal with server loads on a daily basis without being able to distribute the load to another instance or to scale the CPU and RAM resources of the physical or virtual server that hosted the virtual accounts.

How did that change with cloud computing? Cloud computing has significantly improved Web Hosting in every service niche. For instance, the cloud infrastructure in the VPS niche allows isolated virtual servers to be instantly scaled up and down. The cloud systems in the Virtual Hosting (Shared Hosting) niche not only allow any virtual cloud-based host to be scaled up; they also allow Failover and High Availability.

High Availability (HA) is a strategy that limits the impact of any potential malfunction of a computer system by making it possible for that system to come back online very quickly following a failure of its operating system (OS) or an outage of the physical server where the virtual host used to deliver Shared Hosting services resides. HA is usually a function that monitors the computing instance for overload, OS failure, or downtime. If any of these happens, the instance is simply restarted on the same physical server or, if that server is down, on a different physical host that is part of the cloud computing system - and this is where cloud computing comes in. High Availability's main purpose is to reduce the downtime of the service, not to avoid it entirely. It simply brings the virtual server used for delivering the online computing service back very quickly.

Failover refers to a cloud computing system's ability to continue to deliver services without interruption in the event of an OS or hardware failure.
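As an illustration of the name-based routing described above, here is a minimal Python sketch (my own example, not from the original article) that maps the Host header of an incoming HTTP request to a per-domain document root, the way a shared-hosting web server would. The directory layout and domain names are assumptions for the example; a production server such as Apache or nginx does the same mapping with its virtual-host configuration.

```python
# Minimal sketch of name-based virtual hosting: many domains, one IP address.
# Each Host header is mapped to its own document root on the shared server.
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

DOC_ROOTS = {
    "domain1.com": Path("/var/www/user/domain1/site"),
    "domain2.com": Path("/var/www/user/domain2/site"),
}

class VirtualHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]   # strip an optional port
        root = DOC_ROOTS.get(host)
        if root is None:
            self.send_error(404, f"Unknown virtual host: {host}")
            return
        page = root / "index.html"
        if not page.is_file():
            self.send_error(404, "index.html not found")
            return
        body = page.read_bytes()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # All domains resolve to this one address; the Host header decides the site.
    HTTPServer(("0.0.0.0", 8080), VirtualHostHandler).serve_forever()
```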
There are different techniques and scenarios for failover. What is important is that Failover (or Fault Tolerance) is a function which transfers the workloads from a failed computing node (server) to a new one in the event of an outage. Before cloud computing, web hosting service providers had to deal with server loads by suspending the virtual account that overloaded the underlying server, moving it to a new host, or simply placing it in an isolated environment on a VPS or a dedicated server. Cloud computing systems nowadays allow providers to scale up, in real time, the computing resources of the servers they use to deliver web hosting services. It's a Web Hosting service for the cloud! When High Availability is available as a function, it reduces a service outage to less than one minute, while failover prevents service downtime either by duplicating and mirroring cloud server instances or by automating real-time service migration when a failure is detected. This brings us to the conclusion: ask the web hosting company you want to use whether they have a cloud infrastructure in place and whether they provide web hosting accounts with high availability and/or some form of failover.
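To make the high-availability idea concrete, here is a small Python sketch (my own illustration, not from the article) of a monitor that health-checks an active node and starts the workload on a standby node when the active one stops responding. The node addresses, endpoint and intervals are invented for the example; a production HA system would also handle fencing, state replication and split-brain scenarios.

```python
# Minimal sketch of an HA-style failover monitor: if the active node stops
# answering health checks, the workload is started on the standby node.
import time
import urllib.request

NODES = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"]   # primary, standby (assumed)
CHECK_INTERVAL = 5              # seconds between health checks
FAILURES_BEFORE_FAILOVER = 3    # tolerate brief blips before failing over

def healthy(node_url):
    try:
        with urllib.request.urlopen(node_url + "/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def start_workload_on(node_url):
    # Placeholder: a real system would call the cloud platform's orchestration
    # API here to boot the virtual host on the given node.
    print(f"Starting workload on {node_url}")

def monitor():
    active = 0
    failures = 0
    while True:
        if healthy(NODES[active]):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_FAILOVER:
                active = (active + 1) % len(NODES)   # fail over to the other node
                start_workload_on(NODES[active])
                failures = 0
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    monitor()
```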
Generally, the antenna, a key sensory organ of insects, is known to aid insects in perceiving information about their surroundings, such as the availability of food, the danger of predators, obstacles, potential mates and so on. Antennae carry many sensory receptors for audition, olfaction, balance, stability, gustation, graviception, and thermo-, hygro- and mechanoreception, to name a few. They also play an important role during social interactions. In German cockroaches (Blattella germanica), antennal contact during such social interactions alters juvenile hormone production, which leads to an increase in female reproduction rate. In short, the touch of a cockroach is sufficient to speed up reproduction.

Researchers from North Carolina State University conducted trials to check whether duck feathers (artificial antennae) can expedite reproduction in females in the same way as stimulation by a cockroach antenna. Under certain conditions, female roaches that come in contact with other female roaches, or with artificial antennae such as duck feathers, tend to procreate much faster than isolated females or those without any physical stimulation. The same results were seen even when cockroaches of different species were brought into contact. To further understand the process behind physical stimulation and accelerated reproduction, researchers used duck feathers in a motor-driven system to mimic cockroach antennae. They discovered that even duck feathers were capable of activating the hormone in female roaches that is responsible for accelerating reproduction. According to senior researcher Dr. Coby Schal, Blanton J. Whitmire Professor of Entomology at NC State, the shape of the artificial antennae and the duration and momentum of the touch are what make the difference and speed up procreation.

The researchers explain that female roaches become capable of laying eggs when they mature. Therefore, reproduction speed can be specified as the time span between the beginning of maturation and the first spell of egg laying. Activating production of juvenile hormone in mature female roaches hastens the growth of eggs. The female lays the eggs when they attain a specific size. Therefore, females tend to lay eggs quickly if the eggs attain that size quickly, thus accelerating reproduction. There is a difference of many days between egg laying during speedy reproduction and during slow reproduction.

Researcher Adrienn Uzsak conducted a number of tests to examine the significance of physical stimulation in reproduction and found the reproduction cycle slowing down when female roaches were either kept in isolation or paired with a dead cockroach. In experiments conducted in a petri dish, a segregated female cockroach laid eggs faster when the antenna of another cockroach was introduced into the dish. However, when the cockroach antenna was removed, reproduction slowed again, back toward the rate of a female kept in segregation. The researchers also conducted experiments using various types of duck feathers in a motor-driven system. Their findings suggested that long, barbed feathers hasten the reproductive process more than shorter, unbarbed feathers. Such research has clarified the importance of physical stimulation in reproduction, even when it was provided by artificial antennae quite distinct from real cockroach antennae. The touch of the antennae, and the rate and duration of contact, can accelerate reproduction considerably.
Still more study is required to understand the mechanism behind such physical stimulation and the changes that occur in females, such as producing more hormones and accelerating reproduction. And maybe this is the reason that these insects are so adaptable and so hard to control: they are so easily stimulated to lay more eggs.
The first quasars discovered looked like stars but had strong radio emission. Their visible-light spectra at first seemed confusing, but then astronomers realized that they had much larger redshifts than stars. The quasar spectra obtained so far show redshifts ranging from 15% to more than 96% the speed of light. Observations with the Hubble Space Telescope show that quasars lie at the centers of galaxies and that both spirals and ellipticals can harbor quasars. The redshifts of the underlying galaxies match the redshifts of the quasars embedded in their centers, thereby proving that quasars obey the Hubble law and are at the great distances implied by their redshifts. To be noticeable at such great distances, quasars must have 10 to 100 times the luminosity of the brighter normal galaxies. Their variations show that this tremendous energy output is generated in a small volume—in some cases, in a region not much larger than our own solar system. A number of galaxies closer to us also show strong activity at their centers—activity now known to be caused by the same mechanism as the quasars.

27.2 Supermassive Black Holes: What Quasars Really Are

Both active galactic nuclei and quasars derive their energy from material falling toward, and forming a hot accretion disk around, a massive black hole. This model can account for the large amount of energy emitted and for the fact that the energy is produced in a relatively small volume of space. It can also explain why jets coming from these objects are seen in two directions: those directions are perpendicular to the accretion disk.

27.3 Quasars as Probes of Evolution in the Universe

Quasars and galaxies affect each other: the galaxy supplies fuel to the black hole, and the quasar heats and disrupts the gas clouds in the galaxy. The balance between these two processes probably helps explain why the black hole seems always to be about 1/200 the mass of the spherical bulge of stars that surrounds the black hole. Quasars were much more common billions of years ago than they are now, and astronomers speculate that they mark an early stage in the formation of galaxies. Quasars were more likely to be active when the universe was young and fuel for their accretion disk was more available. Quasar activity can be re-triggered by a collision between two galaxies, which provides a new source of fuel to feed the black hole.
Voltage is one of the fundamental parameters associated with any electrical or electronic circuit. Voltage appears widely in the specifications of a host of electrical items, from batteries to radios and light bulbs to shavers, and on top of this it is a key parameter that is measured within circuits as well.

The operating voltage of an item of equipment is very important - it is necessary to connect electrical and electronic items to supplies of the correct voltage. Connect a 240 volt light bulb to a 12 volt battery and it will not light up, but connect a small 5 V USB device to a 240 volt supply and far too much current will flow; it will burn up and be irreparably damaged.

On top of this, the voltage levels within a circuit give a key to its operation - if an incorrect voltage is present, it may give an indication of the reason for a malfunction. For these and many other reasons, electrical voltage is a key parameter, and knowing what the voltage is can be a key requirement in many circumstances.

Voltage can be considered as the pressure that forces charged electrons to flow in an electrical circuit. This flow of electrons is the electrical current. If a positive potential is placed on one end of a conductor, it will attract negative charges to it, because unlike charges attract. The higher the potential attracting the charges, the greater the attraction and the greater the current flow. In essence, the voltage is the electrical pressure, and it is measured in volts, which can be represented by the letter V. Normally the letter V is used for volts in an equation like Ohm's law, but occasionally the letter E may be used - this stands for EMF, or electro-motive force.

To gain a view of what voltage is and how it affects electrical and electronic circuits, it is often useful as a basic analogy to think of water in a pipe, possibly even the plumbing system in a house. A water tank is placed up high to provide pressure (voltage) to force the water flow (current) through the pipes. The greater the pressure, the higher the water flow.

The unit of electrical potential is the volt, which is named after Alessandro Volta, an Italian physicist who lived between 1745 and 1827.

Note on Alessandro Volta: Alessandro Volta was one of the pioneers of dynamic electricity. Investigating the basic properties of electricity, he invented the first battery and advanced the understanding of electricity.

Potential difference

The electrical potential or voltage is a measure of the electrical pressure available to force the current around a circuit. In the comparison with a water system mentioned when describing current, the potential can be likened to the water pressure at a given point. The greater the pressure difference across a section of the system, the greater the amount of water which will flow. Similarly, the greater the potential difference or voltage across a section of an electrical circuit, the greater the current which will flow.

What is a volt

The basic unit of voltage is the volt, named after the Italian scientist Alessandro Volta, who made some early batteries and performed many other experiments with electricity.
The volt definition: the standard unit of potential difference and electromotive force in the International System of Units (SI), formally defined to be the difference of electric potential between two points of a conductor carrying a constant current of one ampere when the power dissipated between these points is equal to one watt.

To give an idea of the voltages which are likely to be encountered, a CB radio will usually operate from a supply of around 12 volts (12 V). The cells used in domestic batteries have a voltage of around 1.5 volts. Rechargeable nickel cadmium cells have a slightly smaller voltage of 1.2 volts, but can normally be used interchangeably with the non-rechargeable types. In other areas, voltages much smaller and much greater than this can be encountered. The signal input to an audio amplifier will be smaller than this, and the voltages will often be measured in millivolts (mV), or thousandths of a volt. The signals at the input to a radio are even smaller than this and will often be measured in microvolts (µV), or millionths of a volt. At the other extreme, much greater voltages may be encountered. The cathode ray tubes in televisions or computer monitors require voltages of several kilovolts (kV), or thousands of volts, and even larger voltages of millions of volts, or megavolts (MV), may be heard of in connection with topics like lightning.

EMF and PD

The voltage for a battery or single cell is stated as a voltage. However, it is found that when the battery is in use its voltage will fall, especially as it becomes older and has been used. The reason for this is that there is some resistance inside the cell. As current flows, a voltage drop forms across this internal resistance, and the voltage seen at the output is less than expected. Even so, the voltage that would be seen at the terminals if the battery were not supplying current would still be the same. This no-load voltage is known as the electro-motive force (EMF), and is the internal voltage which is generated by the cell or other source of power.

How to measure voltage

One of the key parameters that needs to be known in any electrical or electronic circuit is the voltage. There are several ways in which voltage measurements can be made, but one of the most common is to use a multimeter. Either analogue or digital multimeters can be used, but these days digital multimeters are most commonly used as they are more accurate and are available for very reasonable prices.

Note on how to measure voltage with a multimeter: voltage is one of the key parameters that needs to be known in any electrical or electronic circuit. Voltage can easily be measured using an analogue or digital multimeter, where accurate readings can be taken very easily.
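Since the article mentions Ohm's law and the internal resistance that makes a battery's terminal voltage sag under load, here is a small Python sketch (my own example, not from the article) that computes the terminal voltage of a cell from its EMF, internal resistance and load resistance. The component values are invented purely for illustration.

```python
# Minimal sketch: terminal voltage of a cell with internal resistance r
# driving a load R, using Ohm's law (V = I * R).
def terminal_voltage(emf, internal_r, load_r):
    current = emf / (internal_r + load_r)     # Ohm's law applied around the whole loop
    v_drop_inside = current * internal_r      # voltage lost across the cell itself
    return emf - v_drop_inside, current

# Example: a nominally 1.5 V cell with 0.5 ohm internal resistance and a 10 ohm load.
v_out, i = terminal_voltage(emf=1.5, internal_r=0.5, load_r=10.0)
print(f"Current: {i*1000:.1f} mA, terminal voltage: {v_out:.3f} V")
# With no load current, the terminal voltage equals the EMF (1.5 V).
```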
Ahoy there!! FREE again (normally $1.99), Sail through Math from McGraw Hill – a fun way for students to practice their math facts/skills. There are 3 levels of play for addition, subtraction, multiplication, and division facts. Other areas covered are addition equations, subtraction equations, multiplication equations, two-step equations, names for numbers, patterns/factors/multiples, and comparing numbers. This app provides great practice built around a pirate theme – complete with music and firing cannonballs! At the end of each game, the child can see which problems he/she missed as well as a percentage score. What a fun way to learn basic facts – in the classroom, at home, or in the car on the way to an evening activity. I used to tell my students to practice during commercials – 10 to 15 minutes a day works wonders. This app provides a fun way to accomplish that goal. 🙂

Common Core Standards met:
- 1.OA.6 – Add and subtract within 20, demonstrating fluency for addition and subtraction within 10.
- 2.OA.2 – Fluently add and subtract within 20 using mental strategies. By end of Grade 2, know from memory all sums of two one-digit numbers.
- 3.OA.7 – Fluently multiply and divide within 100, using strategies such as the relationship between multiplication and division or properties of operations. By the end of Grade 3, know from memory all products of two one-digit numbers.
A parallel beam of light of wavelength 500 nm falls on a narrow slit and the resulting diffraction pattern is observed on a screen 1 m away. It is observed that the first minimum is at a distance of 2.5 mm from the centre of the screen. Find the width of the slit.

Wavelength of the light beam, λ = 500 nm = 500 × 10−9 m

Distance of the screen from the slit, D = 1 m

For the first minimum, n = 1

Width of the slit = d

Distance of the first minimum from the centre of the screen, x = 2.5 mm = 2.5 × 10−3 m

It is related to the order of the minima as:

`nlambda = x d/D`

`d = (nlambdaD)/x`

`= (1xx500 xx 10^(-9) xx 1)/(2.5 xx10^(-3)) = 2xx10 ^(-4) m = 0.2 mm`

Therefore, the width of the slit is 0.2 mm.
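As a quick numerical check of the worked solution, here is a short Python snippet (added for illustration) that evaluates d = nλD/x with the values given in the problem.

```python
# Single-slit diffraction: the first minimum satisfies d * x / D = n * lambda,
# so the slit width is d = n * lambda * D / x.
wavelength = 500e-9   # m
D = 1.0               # screen distance, m
x = 2.5e-3            # position of the first minimum, m
n = 1                 # order of the minimum

d = n * wavelength * D / x
print(f"Slit width: {d:.1e} m = {d*1e3:.1f} mm")   # 2.0e-04 m = 0.2 mm
```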
Fundamentals of Virology

A virus is a small infectious particle which can only replicate inside the cells of a living organism. Most viruses are harmless to humans; some make us ill for a short period of time, such as the flu virus, but some are highly infectious and can be deadly, such as the Ebola virus. The study of viruses is called virology and is a very important scientific discipline.

In this virology course you will be introduced to the fundamentals of virology. You will learn the basics of what a virus is and how it attaches to and replicates within cells. You will study the infectious cycle of a virus and how it uses the DNA replication processes within a cell to multiply rapidly. This virology course will be of interest to all healthcare professionals who would like to learn more about viruses and how they affect human health, and to any lay person wishing to dispel the myths surrounding viruses.
Topic: What are Force, Moment of Force, Couple and Torque in detail?

Force is a physical quantity that can cause a mass to accelerate. Many definitions of force are available. Force can also be described as a push or a pull on an object.

"Anything that causes an object to undergo 'unnatural motion' is said to be a force."

Isaac Newton described it as Force = Mass x Acceleration [∴ F = ma]

What is meant by Magnitude?

Magnitude is nothing but the size or quantity.

What is force from Newton's Second Law?

Newton's second law states that "The rate of change of momentum is directly proportional to the impressed force and takes place in the same direction in which the force acts."

Momentum = Mass x Velocity

m = Mass of the body
a = Constant acceleration
u = Initial velocity
v = Final velocity
t = Time required to change the velocity from the initial velocity to the final velocity

Change in momentum = Mass x Change in velocity = Mass x (Final velocity – Initial velocity) = m x (v − u)

Rate of change of momentum = m(v − u)/t = m.a [∴ Acceleration (a) = (v − u)/t]

From Newton's second law, force is directly proportional to the rate of change of momentum:

F ∝ ma
F = k m a

where k is a proportionality constant. The unit of force is chosen so that a unit force produces a unit acceleration in a body of unit mass, which makes k = 1, so F = ma.

The S.I. unit of force is the newton. One newton is defined as the force which, acting on a body of mass 1 kg, produces an acceleration of 1 metre/second².

1 N = 1 kg x 1 m/s² = 1 kg-m/s²

There are a few other concepts related to force: torque, moment of force and couple.

Moment of force

The turning effect of a force, which causes a body to rotate about a point, is known as the moment of the force.

Moment of force = Force x perpendicular distance from the given point to the line of action of the force

Moment of force = F x l

It is simply the product of the force acting on the body and the perpendicular distance from the given point to the line of action of the force.

Torque

Torque is equivalent to a couple acting upon a body.

Torque = Force x perpendicular distance

T = F x l

Couple

Two equal and opposite parallel forces acting upon a body along different lines of action are said to form a couple.

Moment of Couple = Force x Distance between the two lines of action of the forces

Moment of Couple = F x x

Tags: what is force? Physics, Machine Design, work, power, torque, moment of force, Couple, Mechanical Engineering Basics
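The relationships above reduce to simple multiplications; the following Python sketch (added as an illustration, with made-up numbers) evaluates F = ma, the moment of a force, and the moment of a couple.

```python
# Worked numbers for the formulas in the article (values chosen for illustration).

def force(mass_kg, accel_m_s2):
    """Newton's second law: F = m * a, in newtons."""
    return mass_kg * accel_m_s2

def moment_of_force(force_n, perpendicular_distance_m):
    """Moment = F * l, in newton-metres."""
    return force_n * perpendicular_distance_m

def moment_of_couple(force_n, separation_m):
    """Couple = F * x, where x is the distance between the two lines of action."""
    return force_n * separation_m

F = force(10.0, 2.0)                                           # 10 kg at 2 m/s^2 -> 20 N
print(f"Force: {F} N")
print(f"Moment of force: {moment_of_force(F, 0.5)} N·m")       # 20 N at 0.5 m -> 10 N·m
print(f"Moment of couple: {moment_of_couple(15.0, 0.2)} N·m")  # two 15 N forces 0.2 m apart
```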
What We can Learn from Near Earth Objects

A couple of weeks ago many people were startled to learn that a space rock—an asteroid called 2005 YU55—was about to pass just inside of our Moon's orbit. This tumbling piece of debris is big enough that a decent-sized ocean liner could fit inside it, and its 1.22-year orbit occasionally brings it close to Earth. This time, we were in no danger of an impact from it during the November 8th flyby. Scientists took the opportunity to study the asteroid in great detail. The radio astronomy community was all over it. The Arecibo radio telescope, the Very Long Baseline Array, the Green Bank Telescope, and the Goldstone telescopes all focused on 2005 YU55. The Herschel Space Telescope also looked at the asteroid in far-infrared light, which helps us understand the temperature of the asteroid and what it's made of.

In particular, astronomers used the Goldstone Deep Space Antenna to bounce radar signals off the asteroid and then examine the data to see what this baby looked like. The movie below shows a series of the highest resolution radar "images" ever taken of a near-Earth object. The movie consists of six frames made from 20 minutes of radar data, and is a work in progress. Word is there will be another, more detailed movie released here after astronomers get through analyzing all the data—perhaps in a week or two. 2005 YU55 rotates on its axis once every 18 hours, so what you see below is five repetitions of the same loop, and the loop shows the rotation faster than in real time.

What About NEOs?

So, I've had people ask me what NEOs mean for us. The close passage of this one raised concerns again about what we would do if such a rock were headed straight toward our planet. Obviously, if it had impacted on solid ground, 2005 YU55 would have dug out a crater about six kilometers (nearly four miles) across. The consequences could have been pretty severe. Of course, the asteroid didn't hit, for which we all breathed a sigh of relief. But, that's not to say that Earth is safe from a collision with one of these orbiting space rocks. It turns out the solar system is peppered with them, and in particular, the region we inhabit (the inner solar system) has a good-sized population of these rocks. They've BEEN around since the earliest history of the solar system. In fact, populations of such objects were spread out across much of the proto-solar nebula. They were the precursor "worldlets" that combined and collided to form the larger bodies such as Earth, the Moon, and so on. What we have now are the ones that didn't participate in that early solar system tango to create worlds. They still zip around in their own orbits, and occasionally get close enough to another world (like Earth) to pose a collision threat.

There are communities of scientists who track these objects (once they're discovered) and do a good job of assessing the chances of impact, near misses, and close encounters. You can read their work at the Web page for NASA's Near Earth Object Program, the Minor Planets Center, and at the European Space Agency's NEO pages here and here. There are a number of search programs called asteroid surveys that constantly watch the sky and catalog just about everything that moves. They are scattered around the world, and you can see a list of the major ones here. These surveys aim to find as many NEOs as possible, down to the limits of what they can see.
Planned future surveys will need to use ever-more sensitive detectors to find smaller and dimmer objects with orbits intersecting Earth's.

So, what can we learn about these NEOs as they whiz by? The radar imaging you saw in the movie here tells scientists something about the surface characteristics of an object. That is, is it cratered, does it have other surface features like hills or outcrops? What is its shape? Sometimes they can figure out what its surface is made of—that is, the minerals that make up a rocky asteroid, for example. And, by sussing out the composition and "look and feel" of these asteroids, we learn more about the raw materials that made up Earth and other worlds. We find out what conditions were like in various parts of the solar system during the early days when these types of objects were forming, colliding, and contributing themselves to build larger worlds. So, in a sense, these asteroids are historical treasure troves that give us a look at the early history of the solar system.

In another sense, the ongoing discovery of NEOs also tells us about their distribution—that is, how many of them there are and WHERE their orbits are in the inner solar system. NEOs have always been there, folks. As I mentioned above, the solar system was born with an inventory of these guys, and over time they collide with planets and the Sun. The inner solar system's collection of NEOs is constantly being replaced by asteroids that migrate from the main Asteroid Belt, or from objects that are bumped from their orbits out near Jupiter and Saturn and sent inward toward the Sun. Currently we've discovered most of the larger ones. In recent decades, we've developed much better detectors to find the smaller near-Earth objects (the size of city blocks, for example). Most are so small and so dim that they're hard to spot (their surfaces can be as dark as charcoal), particularly when they're little guys.

Once a NEO is discovered, scientists have to make many observations of it to pin down its orbit very accurately. This is like watching a plane land: the more observations you have of that plane, the more accurately you can figure out its path to its landing site. In the days after a NEO discovery, scientists are very careful to point out that their calculations of the object's orbit and trajectory are preliminary AND that the orbital parameters will change as more observations come in. This is completely normal and nothing to worry about. Yet, I often see people, particularly in the media or as part of the conspiracy theory crowd, ignoring that fact and getting all upset because they think scientists are hiding information or aren't telling the truth. The truth is that calculating orbits, particularly when you want to figure out whether or not something will impact us, requires observations over a long period of time, and those observations should be very precise. It's not an overnight job—it's like any other quality work—it reflects the amount of time and effort put into it. We pay our scientists well to do their jobs, and so it's only fair to LET them DO their jobs without having people screech about it. I've also seen a lot of nonsense on the Web about how NEOs can change our magnetic fields or shift our polar axes or how they are being hidden by NASA/ESA/whoever.
Such speculations are the work of people who either don't know much about the reality of NEOs (or about the laws of physics for that matter) or don't care to know because they can get more attention by making stuff up and then posting their "fantasies" on the Web. That's the politest way I can term such nonsense. There's good, solid science behind the discovery and characterization of NEOs, and I wish people would pay more attention to THAT. The universe is always much more fascinating and wondrous than our imaginations can dream up.

So, to sum up: NEOs are fascinating rocks from space. Sure, they can pose a threat, and we should be looking for ways to mitigate that threat. But, in the larger sense, NEOs hand us a unique chance to learn more about our neck of the woods, by giving us a look at what was once the undiscovered country of small bodies of the solar system.

(Special thanks to Dr. Paul Chodas at NASA/JPL for his insights on these NEOs. If you want to read more commentary about NEOs, check out David Ropeik's discussion of impact risks here, and Alan Boyle's comments on CosmicLog at MSNBC. Both of their blog entries were written after a workshop about communicating risks of NEO impacts, sponsored by the Secure World Foundation, which I and a number of other scientists and writers attended this past week.)
Table Of Contents: Euglena 1. Euglena Characteristics A euglena is a one-celled alga with both plant and animal characteristics. 2. Euglenas Have Chlorophyll It has chloroplasts that contain chlorophyll and makes its own food when sunlight is available. 3. Euglena Eyespot Its eyespot responds to light which helps the euglena find areas with sunlight needed for photosynthesis. 4. Euglena Nutrition In the absence of light, a euglena acts like an animal and captures nutrients from the environment. 5. Euglena Movement A euglena moves by whipping its flagellum around like a little motor. A star-like structure called the contractile vacuole helps to remove excess water.
By Dr Kenneth Backhouse OBE

The title of this article may sound a little erudite and of little significance to dance. However, anyone suffering from damaged cartilage, common enough in both young and old, can testify to the pain and disability associated with the condition.

The body is made up of many tissues, each with differing character and functions. Of these, bone is the hard material which gives protection to vital structures, as the skull protects the brain. It also forms rigid levers on which muscles act to produce movement. Although bone is often said to be the hardest material in the body, this is not strictly true. That substance is the enamel, the outer layer of the teeth, but this is a non-living crystalline substance, laid down as the tooth is formed and not replaceable if damaged. Bone is the hardest vital substance, dependent for its efficiency on the stresses of use, available structural substances (calcium etc) and a supporting system of living cells (osteocytes). It requires a rich blood supply and, since it has nerves, pain can be induced from bone.

Cartilage is also an important component of the firm skeletal system. It could be compared with plastics relative to harder structural materials such as wood or steel. Like bone it has an extensive supporting matrix which gives it the strength and resilience required for its particular function. It contains living cells (chondrocytes), but these have to be supplied with nutrients through the substance of the cartilage, as it has no direct blood supply and is also without nerves. Sustenance and nervous stimuli must come from the surrounding tissues. As such it has a low metabolic rate (unlike bone) and, having no nerves, can be injured without producing pain unless this is induced indirectly from the surrounding tissues. Due to its low metabolic rate it has a poor capacity for repair, and so any injury may be permanent.

Whereas bone has a hard calcareous matrix, cartilage has one of a series of mucoproteins (glycosaminoglycans or proteoglycans), complex compounds of proteins and glycogen. These have a high propensity to attract and hold water (about 75% of weight) which, where the cartilage is load bearing as on the bone surface of joints, can be squeezed out into the joint space to assist synovial fluid in lubrication (weeping lubrication). The water is then reabsorbed when the load is reduced.

As with plastics there are differences in hardness and resilience (elasticity). The hardest cartilage is that lining the load bearing surfaces of bone ends in joints. This is hyaline cartilage, which in the living state is translucent, bluish white in colour and has a smooth glassy surface. It has very fine fibrils running through its matrix, but these are only visible under high magnification, hence the glassy appearance. Think of it as the equivalent of a plastic such as nylon. It is also found where this degree of hardness and resilience are required, as in the larynx, tracheal and bronchial rings and part of the nose. It also forms the early skeleton in the embryo, later to be replaced by bone but being retained as the epiphyseal growth plates of the bone to near adulthood. In other situations, more resilient flexible cartilage is required with a greater fibrous component: fibro-cartilage, and even more so elastic-cartilage. The costal cartilages of the chest wall are of fibro-cartilage. (Feel how the lower anterior part of the rib cage moves under pressure.)
Of particular interest to dancers, the vulnerable semilunar cartilages (menisci) of the knee joint are also of fibro-cartilage. The intervertebral discs are usually described as being fibro-cartilaginous, but in fact they are rather more complex in structure than that name implies.

The hyaline cartilage covering the load bearing ends of bone in synovial joints is subject to considerable stress, particularly in the legs in dance. Without reasonable care in the control of joint movement while under load, the stresses can result in irreversible damage, often leading to severe disability. The function of the hyaline cartilage is to give the bone a smooth, slippery, hardish, plastic surface in the joint. It has a low coefficient of friction, being three times as slippery as smooth ice. In this it is assisted by synovial fluid, a viscous substance (hyaluronic acid) produced by the synovial membrane, to produce a relatively friction free surface for movement. With reasonably correct training patterns the stresses should not lead to cartilage damage in a healthy person. Admittedly some people seem to have a greater likelihood of age change leading to cartilage wear than others, but this does not have any direct link with correct exercise loading.

Cartilage and the Knee Joint

In the knee the bone ends of both femur and tibia are covered by hyaline cartilage, as is the deep surface of the patella, where it is related to the femur. In addition, the two fibro-cartilaginous menisci (semilunar cartilages) run around the periphery of the tibio-femoral joint spaces. The main load, and hence friction in movement, should not fall on these cartilages but on the more central regions of the two tibial tables, i.e. the hyaline cartilage. As the load is frequently heavy, lubrication could be a problem, and one important function of the semilunar cartilages is to assist in the control of the lubrication of the joint space. They should not be unduly subject to the major stresses in the joint but can become so in poorly controlled and particularly in abnormal movements, such as rotation under load. Rotation should not be possible in the near extended knee other than a small controlled rotation, the so-called locking into extension and a reverse at the beginning of flexion. In these movements the lateral cartilage is controlled by the activating muscles and associated ligaments, adjusting its position as needed, with the medial cartilage largely static. A degree of rotation of the knee is permitted, maximally at about 60° of flexion, but this should not normally be under heavy load. However, if this occurs the cartilages, particularly the medial, can be trapped and split, producing partially loose flaps which are free to intervene between the load surfaces, leading to locking of the joint, usually at the most inconvenient moment.

In classical ballet, where turn-out is used to allow a wide range of movement at the hip joint, it has become fashionable to expect a flat turn-out as displayed by the feet. For some dancers with a highly mobile hip joint and/or a suitably directed socket, this is possible. For others (and I speak with feeling) a flat turn-out is impossible, though this does not affect the prime object of the action or, in most cases, the artistic presentation of the dance. The latter are often forced by mechanically orientated teachers to achieve a flat turn-out by developing a lateral twist at the knee, producing a less controllable joint and so increasing the risk of lower leg injuries and particularly cartilage ruptures.
Many dance careers have been destroyed by this manoeuvre, which has also induced long-term disability. How often one sees the foot and patella facing in different directions and the scar of surgery for a ruptured cartilage. Although surgery is less interfering now when carried out by arthroscope, nevertheless there is still a greater likelihood of early osteoarthrosis after such injuries.

Another important focus of cartilage damage in the knee is that on the joint surface of the patella (chondromalacia patellae), leading to pain in flexion/extension movements of the joint, particularly under load. When the knee is flexed towards 90° or beyond, the patella is carried round the end of the femur. On powerful extension, as on rising from a grand plié, the stresses on the patellar cartilage are great, gradually reducing as the knee straightens. If for any reason this is not under perfect control and health, damage to the cartilage can occur. For this reason there have been medical recommendations that the grand plié should be banned from the dance curriculum. "Bunny hopping", a formerly popular exercise for quadriceps training of footballers, has disappeared for a similar reason. But where would the range of dance be without the practical equivalent of the grand plié? The important point is that in practice it is not a repetitive activity, and this is where much of the danger lies. Hence it should be remembered that, as a class exercise, it should be practiced only when fully warmed up: certainly not as a repetitive exercise at the beginning of a cold class, as often happens. Furthermore, illness can materially affect the health of the cartilage and aspects of lubrication of the joint. Alcohol can lead to dehydration, with similar problems for the joint, so that after the evening alcoholic party, not only should there be effective rehydration (water to get over a hangover!) but also a slow warm-up in order to protect the joint cartilage.
In a new study by NASA and the University of California, Irvine, scientists combined data from the satellites of NASA's Gravity Recovery and Climate Experiment (GRACE) with other satellite and ground-based measurements to model the quantity of groundwater in the Colorado River Basin. They found that more than 75 percent of the water lost since 2004 in the drought-stricken Colorado River Basin has come from underground sources.

Scientists at NASA have used GRACE data to map the groundwater deficit across the entire United States. The satellite data helped to build a clear picture of the amount of water below the surface. Pumping from underground aquifers is regulated by individual states and is often not well documented. "There's only one way to put together a very large-area study like this, and that is with satellites," said senior author Jay Famiglietti, a senior water cycle scientist. "There's just not enough information available from well data to put together a consistent, basin-wide picture."

NASA explains how the gravity satellites can detect underground water: "Within a given region, the change in mass due to rising or falling water reserves influences the strength of the local gravitational attraction. By periodically measuring gravity regionally, GRACE reveals how much a region's water storage changes over time."
Iron and Your Child Ever wonder why so many cereals and infant formulas are fortified with iron? Iron is a nutrient that’s needed to make hemoglobin, the oxygen-carrying component of red blood cells (RBCs). Red blood cells circulate throughout the body to deliver oxygen to all its cells. Without enough iron, the body can’t make enough RBCs, and tissues and organs won’t get the oxygen they need. So it’s important for kids and teens to get enough iron in their daily diets. How Much Iron Do Kids Need? Kids require different amounts of iron at various ages and stages. Here’s how much they should be getting as they grow: - Infants who breastfeed tend to get enough iron from their mothers until 4-6 months of age, when iron-fortified cereal is usually introduced (although breastfeeding moms should continue to take prenatal vitamins). Formula-fed infants should receive iron-fortified formula. - Infants ages 7-12 months need 11 milligrams of iron a day. Babies younger than 1 year should be given iron-fortified cereal in addition to breast milk or an infant formula supplemented with iron. - Toddlers need 7 milligrams of iron each day. Kids ages 4-8 years need 10 milligrams while older kids ages 9-13 years need 8 milligrams of iron each day. - Adolescent boys should be getting 11 milligrams of iron a day and adolescent girls should be getting 15 milligrams. (Adolescence is a time of rapid growth and teen girls need additional iron to replace what they lose monthly when they begin menstruating.) - Young athletes who regularly engage in intense exercise tend to lose more iron and may require extra iron in their diets. What’s Iron Deficiency? Iron deficiency (when the body’s iron stores are becoming depleted) can be a problem for some kids, particularly toddlers and teens (especially girls who have very heavy periods). In fact, many teenage girls are at risk for iron deficiency — even if they have normal periods — if their diets don’t contain enough iron to offset the loss of iron-containing RBCs during menstrual bleeding. Also, teen athletes lose iron through sweating and other routes during intense exercise. After 12 months of age, toddlers are at risk for iron deficiency because they no longer drink iron-fortified formula and may not be eating iron-fortified infant cereal or enough other iron-containing foods to make up the difference. Drinking a lot of cow’s milk (more than 24 fluid ounces [710 milliliters] every day) can also put a toddler at risk of developing iron deficiency. Here’s why: - Cow’s milk is low in iron. - Kids, especially toddlers, who drink a lot of cow’s milk may be less hungry and less likely to eat iron-rich foods. - Milk decreases the absorption of iron and can also irritate the lining of the intestine, causing small amounts of bleeding and the gradual loss of iron in the stool (poop). Iron deficiency can affect growth and may lead to learning and behavioral problems. And it can progress to iron-deficiency anemia (a decrease in the number of RBCs in the body). Many people with iron-deficiency anemia don’t have any signs and symptoms because the body’s iron supply is depleted slowly. 
But as the anemia progresses, some of these symptoms may appear:
- fatigue and weakness
- pale skin and mucous membranes
- rapid heartbeat or a new heart murmur (detected in an exam by a doctor)
- decreased appetite
- dizziness or a feeling of being lightheaded

If your child has any of these symptoms, talk to your doctor, who might do a simple blood test to look for iron-deficiency anemia and may prescribe iron supplements. However, because excessive iron intake can also cause health problems, you should never give your child iron supplements without first consulting your doctor.

Iron in an Everyday Diet

Although iron from meat sources is more easily absorbed by the body than that from plant foods, all of these iron-rich foods can make a diet more nutritious:
- red meat
- dark poultry
- enriched grains
- dried beans and peas
- dried fruits
- leafy green vegetables
- blackstrap molasses
- iron-fortified breakfast cereals

Here are other ways you can make sure kids get enough iron:
- Limit their milk intake to about 16-24 fluid ounces (473-710 milliliters) a day.
- Continue serving iron-fortified cereal until kids are 18-24 months old.
- Serve iron-rich foods alongside foods containing vitamin C — such as tomatoes, broccoli, oranges, and strawberries — which improves the body's absorption of iron.
- Avoid serving coffee or tea at mealtime — both contain tannins that reduce iron absorption.
- If you have a vegetarian in the family, monitor his or her diet to make sure it includes sufficient iron. Because iron from meat sources is more easily absorbed than iron from plant sources, you may need to add iron-fortified foods to a vegetarian diet.

Stock up on iron-rich or fortified foods for meals and snacking, and serve some every day. And be sure to teach kids that iron is an important part of a healthy diet.

Reviewed by: Mary L. Gavin, MD
Date reviewed: February 2012
Today is the Vernal Equinox, the first day of spring in the northern hemisphere. The new season officially began at 5:14 UTC, which is 1:14 A.M. Eastern Daylight Time. Astronomical seasons are the result of the tilt of the Earth's axis, a 23.5-degree angle. Today, as spring begins, the Earth's axis is tilted neither toward nor away from the sun. As a result, we receive approximately equal hours of day and night. The vernal equinox usually marks the end of winter's chill and the gradual return of warmth. Following our fourth warmest winter on record, however, spring conditions are already in full bloom across many parts of the United States.

Image Credit: scijink.nasa.gov
NASA wants to build the next Concorde, bringing in a new age of supersonic jets that hopefully won't rupture your eardrums. But in order to do that safely, there needs to be research, and lots of it. One research technique, pictured above, uses a modernized 19th century method called schlieren imagery "to visualize supersonic flow phenomena with full-scale aircraft in flight," according to NASA. Images like these will help analyze the location and strength of shock waves, so NASA engineers can develop aircraft that minimize those effects. The dream of Concorde air travel might not be so dead after all.
1996 Tyler Laureates
Dr. Willi Dansgaard, Dr. Claude Lorius, and Dr. Hans Oeschger

Preserved within the great polar ice sheets is an exquisite record of the earth's global climate extending back thousands of years. Within them lie concentrations of oxygen isotopes, carbon dioxide and other gases present in ancient atmospheres, the acids from numerous volcanic eruptions, evidence of storms that raged around the world, and other traces of global climate change deposited during the span of human existence.

Searching for clues to the earth's climate record through the analysis of ancient polar ice was a revolutionary idea when first proposed in 1954. Today, it is a basic tenet of global climate research showing a strong relationship between climate and the chemical composition of the atmosphere. In addition to providing the scientific community with a fundamental understanding of climate on earth, the data from polar ice studies is used in virtually all reports about global warming to emphasize the potential for atmospheric pollution to adversely affect global climate. The three scientists most responsible for the scientific imagination, long-term vision, and wisdom that led to this breakthrough in understanding the earth's system are honored for their scientific accomplishments with this year's Tyler Prize for Environmental Achievement.

Taken together, the work of these three scientists has revolutionized scientific knowledge of how the temperature and composition of the atmosphere have changed over the past 150,000 years. By drilling into the ice caps of Greenland and Antarctica, and by analyzing the chemical and isotopic composition of the ice and of air bubbles trapped in the ice, they have shown that the succession of glacial and interglacial ages that dominates the climatic history of the earth over the past 150,000 years involves substantial changes in carbon dioxide and methane. This discovery has launched a major international research effort to understand the mechanisms by which these atmospheric changes are linked to changes in the land surface and particularly to changes in ocean circulation and chemistry.

"I believe that a few hot summers would not have been sufficient to raise the global climate changes as a central scientific issue, had it not been for the ice core evidence provided by these scientists," said Dr. Edwin Boyle, Professor of Earth, Atmospheric and Planetary Sciences at the Massachusetts Institute of Technology, in supporting their nomination for the Tyler Prize.

The importance of Drs. Dansgaard, Oeschger, and Lorius' research extends far beyond the scientific community and has had a profound impact in the environmental policy making arena. Stephen H. Schneider, former Head of Interdisciplinary Climate Systems at the National Center for Atmospheric Research, observed, "While they have not themselves participated in environmental advocacy or policy analysis, their fundamental scientific contributions are frequently used by those interested in policy implications... to build the credibility of scientific understanding needed for environmental action in the area of global warming and global change."
Willi Dansgaard, Professor Emeritus of Geophysics at the University of Copenhagen, was the first paleoclimatologist to demonstrate that measurements of the trace isotopes oxygen-18 and deuterium (heavy hydrogen) in accumulated glacier ice could be used as an indicator of climate and atmospheric environment, as derived from samples of successive layers of polar ice, often collected under extreme weather conditions. The first polar deep ice core drilling expedition took place in 1966, with the collection of the American Camp Century Core from Greenland. In cooperation with other laboratories, Dr. Dansgaard and his group performed the first isotopic analysis of the ice and perfected the methods to date the ice sheets and measure acidity and dust records, thus demonstrating its value as an environmental indicator. Since that time, Dr. Dansgaard has organized or participated in 19 expeditions to the glaciers of Norway, Greenland, and Antarctica.

Dr. Dansgaard is a member of the Royal Danish Academy of Science and Letters, the Royal Swedish Academy of Sciences, the Icelandic Academy of Sciences, and the Danish Geophysical Society. He is the recipient of the Royal Swedish Academy of Sciences' Crafoord Prize, the International Glaciological Society's Seligman Crystal, and the Royal Swedish Society of Geography and Anthropology's Vega Medal.

Claude Lorius, chairman of the French Institute of Polar Research and Technology (Grenoble), has participated in 17 polar field campaigns, with a cumulative total of 5 years spent in some of the coldest spots on the planet. He was the first to appreciate the value of the air bubbles trapped in the ice sheets and developed methods to determine the atmospheric pressure at the time of ice formation, thus providing insight into the original thickness of the ice. He played a significant role in promoting international cooperation in polar ice research. Foremost among these efforts was the successful collaboration between Soviet, American, and French scientists in the recovery and analysis of the longest ice core drilled to date. The information obtained from the Vostok Core, collected in East Antarctica, is exceptional because it provides the first continuous ice record of the drastic swings in global climate over the last 150,000 years, extending from the present interglacial (or warming) period through about 100,000 years of glacial cooling, then on through the previous interglacial episode and into the tail of another glaciation. The drilling has now reached a depth of 3,100 meters, which will allow scientists to extend the time scale to about 400,000 years.

Data from the analysis of the Vostok Core by Dr. Lorius and his team are stunning and include detailed records of air temperature, methane, carbon dioxide, and aerosols, to name but four climate system properties that this record has faithfully preserved. Of particular interest has been the reconstruction of atmospheric carbon dioxide and methane variations during the last climatic cycles, which shows a strong relationship between climate and the chemical composition of the atmosphere (in particular, the concentration of greenhouse gases). This data provides a strong warning signal about the possible impact of human activities on climate.

Dr. Lorius was born on February 27, 1932 in Besancon, France. He received a master's degree and a doctorate in Physical Sciences from the Sorbonne University in Paris. Dr.
Lorius began his scientific career in 1955 as a researcher on the Antarctic Committee for the International Geophysical Year at the National Center for Scientific Research (CNRS).

Hans Oeschger, Professor Emeritus of Physics, University of Bern, Switzerland, is the pioneer of gas composition measurements on polar ice. A physicist by training, he developed numerous methods for extracting data from sequential layers of polar ice, thus demonstrating the wealth of geochemical information present in the ice archive. Dr. Oeschger and his colleagues developed techniques for measuring radiocarbon in very small samples of carbon dioxide, for measuring oxygen isotopes, and for the radiocarbon dating of ice. Their measurement of carbon dioxide concentrations from air bubbles trapped in ice revealed for the first time the important role that the world's oceans play in influencing global climate. Thus, it is now widely held that it is ocean-influenced changes in the levels of atmospheric gases that support the creation of the great glacial ice caps. Dr. Oeschger began his work on isotopes and greenhouse gases around the same time as Dr. Dansgaard initiated his studies. Their combined work documented that abrupt climate swings are associated with changes in atmospheric greenhouse gases. These abrupt swings have come to be known as "Dansgaard-Oeschger events," the study of which has led to profound insights about the response of the present-day climate system to man's activities. Dr. Oeschger was born on April 2, 1927 in Ottenbach, Switzerland. He earned a doctor of science degree from the University of Bern in 1955 and has been associated with that institution since that time as a researcher and professor. He became professor emeritus in 1992. Dr. Oeschger is a member of a number of scientific academies and honor societies including the National Academy of Sciences, the Swiss Academy of Technical Sciences, and the Swiss Academy of Natural Sciences. Past honors include the Harold C. Urey Medal from the European Association of Geochemistry and the Seligman Crystal from the International Glaciological Society.

For More Information on the Tyler Prize, Contact: Amber Brown, Administrator
From Meridian Observation of the Sun

At noon on October 18, Lewis used his sextant and artificial horizon to obtain the meridian altitude of the sun's upper limb. This observation produced a double altitude of 68°57'30", from which Lewis calculated a latitude of 46°15'13.9".1 Most of the 3'18" difference just noted comes from a recurring mistake the captains made in their calculations when using the sextant and artificial horizon. Their procedure was to divide the observed altitude by two, then subtract the sextant's full index error. They should have either (a) subtracted the full index error from the observed altitude before dividing the altitude by two or (b) subtracted half the index error after dividing the altitude by two. This mistake, by itself, results in a latitude that is 4'22½" (5 statute miles) too far north; a worked example of this step appears after the notes below. But, as seen above, the captains' latitude for the observation of October 18 is only 3'18" too far north—not 4'22½". Somewhere in the process of calculating the latitude they also must have made what is called a "compensating error." Unfortunately, as they did not save their calculations, it is not possible to find this "error." Most likely they either 1) made a simple mistake in adding, subtracting or dividing or 2) incorrectly determined refraction, parallax, the sun's semidiameter or the sun's declination. The 2 arc second difference between the recalculated latitude and that derived from map and aerial photo interpretation is equivalent to about 200 feet. Considering that the smallest angle that Lewis's sextant was capable of displaying was 7½", one might conclude that this day's Meridian Altitude observation either was first-rate and the sextant's index error continued to be +8'45" as it had been since the fall of 1803, or there were some unusual compensating errors in this observation.

From Double Altitudes of the Sun

At about 8 a.m. and 10 a.m. on October 18, Lewis took observations of the sun's altitude. Those two observations, together, are generally called Double Altitudes of the sun2 and commonly are used to determine latitude. These observation pairs, however, can be used to determine the chronometer's error on Local Time provided the latitude is known. Because the captains took a Meridian Altitude observation of the sun less than two hours after the second observation of Double Altitudes, it is clear that they took this set of Double Altitude observations to find the error of the chronometer and its rate of loss since noon on October 17; see Lewis: 1805, July 20.3 When the sun's declination is changing rapidly (a month or so on either side of the Equinoxes) and the time between observations is more than about 3 hours, the declination of the sun should be determined for each observation. At other seasons, a simple average generally is adequate to obtain a latitude to within plus or minus a few arc minutes. Nevertheless, a latitude derived from a Double Altitude observation, even when made with great care, tends to be less reliable than a latitude derived from a Meridian observation.

1. The latitude of the mouth of Snake River is shown at about 46°15' N on the Lewis and Clark map of 1806, Clark's map of 1810, and the Lewis and Clark map of 1814. David Thompson, on July 8, 1811, obtained 46°12'35" for a point "close above" this junction. In his narrative for 1811 August 5, he gives the latitude of the junction, itself, as 46°12'15" N (longitude 119°31'33" W). 2.
Not to be confused with the double altitude of the sun which results from the use of the artificial horizon. 3. "Having lost my post-meridian observations for equal altitudes in consequence of a cloud which obscured the sun for several minutes about that time, I had recourse to two altitudes of the sun with sextant." Funded in part by a grant from the National Park Service's Challenge Cost Share Program
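Here is the worked illustration of the index-error point noted above. It is a reconstruction for illustration only, not the captains' own worksheet (which was not preserved); it uses the double altitude of 68°57'30" and the index error of +8'45" quoted in the text, and it isolates only the order in which the index error is applied. A full reduction to latitude would also correct for the sun's semidiameter, refraction, parallax, and declination, which are omitted here.

```python
# Reconstruction for illustration only: isolates the index-error step of an
# artificial-horizon sextant observation, using the values quoted in the text
# (double altitude 68 deg 57' 30", index error +8' 45").

def dms_to_deg(d, m, s):
    """Convert degrees, minutes, seconds to decimal degrees."""
    return d + m / 60 + s / 3600

def deg_to_dms(x):
    """Convert decimal degrees back to (degrees, minutes, seconds) for display."""
    d = int(x)
    m = int((x - d) * 60)
    s = round(((x - d) * 60 - m) * 60, 1)
    return d, m, s

observed = dms_to_deg(68, 57, 30)    # doubled altitude read from the sextant
index_error = dms_to_deg(0, 8, 45)   # sextant index error, +8'45"

# Correct order: remove the full index error from the doubled reading, then halve.
correct_alt = (observed - index_error) / 2

# The captains' order: halve the reading first, then subtract the full index error.
captains_alt = observed / 2 - index_error

print("correct altitude:  ", deg_to_dms(correct_alt))    # (34, 24, 22.5)
print("captains' altitude:", deg_to_dms(captains_alt))   # (34, 20, 0.0)
print("difference:        ", deg_to_dms(correct_alt - captains_alt))  # (0, 4, 22.5)
```

With the captains' order, the altitude comes out 4'22½" too low, which pushes the derived latitude the same amount too far north, the recurring offset described above.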
- Celiac disease is an autoimmune digestive disease that damages the small intestine and interferes with nutrient absorption.
- People with celiac disease cannot tolerate gluten, a protein in wheat, rye, barley, and possibly oats.
- A person with celiac disease may or may not have symptoms, which often include diarrhea, abdominal pain and bloating, fatigue, and anemia.
- Celiac disease is treated by eliminating all gluten from the diet. The gluten-free diet is a life-long requirement.
- Without treatment, people with celiac disease can develop complications such as malnutrition, cancer, infertility, osteoporosis, anemia, and seizures.
- Diagnosis involves blood tests and a biopsy of the small intestine.
- Celiac disease is an inherited condition; therefore, family members of a person with celiac disease may wish to be tested.
- A dietitian, in conjunction with a gastroenterologist, can give detailed guidance and information about food selection, label reading, and other strategies to help manage the disease.

What is Celiac Disease?

Celiac disease, also called celiac sprue, is a digestive disease that damages the small intestine and interferes with absorption of nutrients from food. People who have celiac disease cannot tolerate a protein called gluten, found in wheat, rye, and barley. Gluten is found mainly in foods, but it is also found in products we use every day, such as stamp and envelope adhesive, medicines, and vitamins. When people with celiac disease eat foods containing gluten, their immune systems respond by damaging the small intestine. This injury occurs to the tiny fingerlike protrusions, called villi, that line the small intestine and are critical for absorbing nutrients and preventing malnutrition. Because the body's own immune system causes the damage, celiac disease is considered an autoimmune disorder. However, it is also classified as a disease of malabsorption, because nutrients are not absorbed, as well as a genetic disease, meaning it runs in families. Sometimes the disease is triggered by events such as surgery, pregnancy, childbirth, a viral infection, or severe stress. Celiac disease affects people in different ways. Young children most often show growth failure, weight loss, diarrhea, constipation, or abdominal distension. The most common symptoms in adults include weight loss, chronic diarrhea, abdominal cramping, bloating and gas, muscle wasting, weakness, and fatigue. Less commonly, people with celiac disease have joint pain, osteoporosis or osteopenia (low bone mass before osteoporosis), anemia (from impaired iron absorption), leg numbness (from nerve damage), muscle cramps (from impaired calcium absorption), aphthous ulcers (sores in the mouth from vitamin deficiency), seizures, infertility, or behavioral changes. In a limited number of people with celiac disease, a gluten-related skin disorder called dermatitis herpetiformis appears as small itchy blisters on the skin surface, typically at body pressure points such as the elbows, knees, and feet. Even if an individual with celiac disease has no symptoms, that person is still at risk for complications of celiac disease, including malnutrition.

What causes such varied symptoms?

All of the clinical manifestations of celiac disease are caused by the inability of the damaged small intestine to absorb nutrients normally.
The wide variation in symptoms is attributable to a number of factors, only some of which are known; these include the age at which a person begins eating gluten-containing products and the amount of gluten-containing food the person ingests. The amount of intestinal damage is also a significant factor. The symptoms of celiac disease can be easily confused with those of other diseases such as irritable bowel syndrome, chronic fatigue syndrome, inflammatory bowel disease, or intestinal infections. As a result, celiac disease is often underdiagnosed or misdiagnosed. In recent years, autoantibodies, or proteins that react against the body's own tissues, have been discovered in the blood of persons with celiac disease. These autoantibodies serve as markers for celiac disease. To diagnose celiac disease, blood tests can be done for these markers as well as for certain proteins. The levels of these proteins may be abnormal in individuals with celiac disease who are ingesting foods containing gluten. Therefore, if someone has started a gluten-free diet, these tests will not be accurate. These tests include IgA (immunoglobulin A); TTG (anti-tissue transglutaminase); and EMA (IgA anti-endomysial antibodies). If the blood tests and the person's symptoms suggest celiac disease, the physician will perform a biopsy of the small intestine. This involves placing an endoscope (a long, thin, flexible tube) through the mouth and stomach into the small intestine, from which a tiny sample of the intestinal lining can be taken. An intestinal biopsy showing a damaged, flattened surface is often called the "gold standard" for diagnosing celiac disease. Screening for celiac disease in relatives of affected people is not often done in the United States. However, family members of people with celiac disease who wish to be tested may have blood tests done to check for autoantibodies. Approximately 5 percent to 15 percent of first-degree relatives of an affected person will also have the disease. An adjunctive technology in the diagnosis of celiac disease is wireless capsule endoscopy. This procedure involves the ingestion of a camera encapsulated in a 1-inch pill, which can take more than 50,000 digital images of the small bowel. Although this is not useful for subtle cases, in a patient with severe celiac disease these pictures may show a deeply scalloped or furrowed lining of the small bowel, or crevices. These pictures can also show complications of celiac disease involving the small intestine, such as ulcerations and cancer. The only treatment for celiac disease is life-long adherence to a gluten-free diet. When gluten is removed from the diet, symptoms improve, the small intestine begins to repair the existing damage, and further damage is prevented. Improvements begin within a few days of starting the diet, and an adult's intestine is usually healed within 2 years. Reintroduction of gluten into the diet, even in small quantities, will damage the small intestine again. When first diagnosed with the disease, a person often consults with a dietitian (a health care professional who specializes in food and nutrition) who can help the person learn how to identify foods that contain gluten. Dietitians may help people with celiac disease plan meals and make informed decisions when grocery shopping. Some people have unresponsive celiac disease, which means that they show no improvement on a strict gluten-free diet. This may mean that small amounts of hidden gluten are still present in the diet.
In rare cases, the intestinal injury is so severe that it cannot heal. Persons with this condition may need to receive nutrition intravenously (directly into the bloodstream through a vein). Other medications, such as steroids, may help heal the damaged mucosa in people who do not respond to dietary changes. A gluten-free diet contains no wheat, rye, or barley, or any foods made from these grains, such as most pasta, cereal, and many processed foods. People with celiac disease can use potato, rice, soy, amaranth, quinoa, buckwheat, or bean flour instead of wheat flour. Gluten-free bread, pasta, and other products are becoming increasingly available from specialty food companies as well as from regular stores. Hidden sources of gluten include additives such as modified food starch, preservatives, and stabilizers or thickeners. Checking labels for the "gluten-free" notice is important. Gluten may be used in some medications, and a person with celiac disease should check with the pharmacist to learn which medicines contain gluten. "Plain" fish, meats, rice, fruits, and vegetables contain no gluten, so people with celiac disease can eat as much of these foods as they wish. The gluten-free diet is very challenging and requires a completely new approach to eating. Advice and support from the doctor, dietitian, and celiac support groups are helpful for most persons with this disease.

Complications associated with Celiac Disease

Damage to the small intestine and the resulting impaired nutrient absorption put people with celiac disease at risk for malnutrition and anemia. Other less common associated risks include: cancers such as lymphoma and adenocarcinoma of the small intestine; osteoporosis, a condition in which bones become brittle and are at risk for fracture; miscarriage and congenital malformation of an affected woman's fetus, such as neural tube defects; short stature, which can occur when childhood celiac disease prevents nutrient absorption; and seizures.

Diseases linked to Celiac Disease

People with celiac disease may often have other autoimmune diseases such as thyroid disease, systemic lupus erythematosus (SLE), type 1 diabetes, rheumatoid arthritis, Sjogren's syndrome, or collagen vascular diseases.

Prevalence of Celiac Disease (number of cases in a specific population at a specific time)

The prevalence of celiac disease in Europe, for example in Italy and Ireland, is approximately 1:250–300 (1 person in 250 to 300 people). In other areas of the world, such as Asia, South America, or Africa, the disease is being diagnosed more frequently than previously described. Until recently, celiac disease was considered to be rare in the United States. With increasing use of blood tests to diagnose the disease, it appears that celiac disease is quite common, occurring perhaps as frequently as 1 in 133 people, or affecting approximately 2 million people in this country. Among people with first-degree relatives (parent, sibling, or child) with celiac disease, as many as 1 in 22 people may have the disease. Research is ongoing to determine the true prevalence of this disease. It is important for an individual with celiac disease to be followed by a gastroenterologist familiar with its clinical importance.

Reviewed November 2010. Taken From: http://www.asge.org/press/press.aspx?id=556
Groundwater (or ground water) is the water present beneath Earth's surface in soil pore spaces and in the fractures of rock formations. A unit of rock or an unconsolidated deposit is called an aquifer when it can yield a usable quantity of water. The depth at which soil pore spaces or fractures and voids in rock become completely saturated with water is called the water table. Groundwater is recharged from, and eventually flows to, the surface naturally; natural discharge often occurs at springs and seeps, and can form oases or wetlands. Groundwater is also often withdrawn for agricultural, municipal, and industrial use by constructing and operating extraction wells. The study of the distribution and movement of groundwater is hydrogeology, also called groundwater hydrology. Typically, groundwater is thought of as water flowing through shallow aquifers, but, in the technical sense, it can also contain soil moisture, permafrost (frozen soil), immobile water in very low permeability bedrock, and deep geothermal or oil formation water. Groundwater is hypothesized to provide lubrication that can possibly influence the movement of faults. It is likely that much of Earth's subsurface contains some water, which may be mixed with other fluids in some instances. Groundwater may not be confined only to Earth. The formation of some of the landforms observed on Mars may have been influenced by groundwater. There is also evidence that liquid water may exist in the subsurface of Jupiter's moon Europa. Groundwater is often cheaper, more convenient and less vulnerable to pollution than surface water. Therefore, it is commonly used for public water supplies. For example, groundwater provides the largest source of usable water storage in the United States, and California annually withdraws the largest amount of groundwater of all the states. Underground reservoirs contain far more water than the capacity of all surface reservoirs and lakes in the US, including the Great Lakes. Many municipal water supplies are derived solely from groundwater. Polluted groundwater is less visible, but more difficult to clean up, than pollution in rivers and lakes. Groundwater pollution most often results from improper disposal of wastes on land. Major sources include industrial and household chemicals and garbage landfills, excessive fertilizers and pesticides used in agriculture, industrial waste lagoons, tailings and process wastewater from mines, industrial fracking, oil field brine pits, leaking underground oil storage tanks and pipelines, sewage sludge and septic systems.

An aquifer is a layer of porous substrate that contains and transmits groundwater. When water can flow directly between the surface and the saturated zone of an aquifer, the aquifer is unconfined. The deeper parts of unconfined aquifers are usually more saturated since gravity causes water to flow downward. The upper level of this saturated layer of an unconfined aquifer is called the water table or phreatic surface. Below the water table, where in general all pore spaces are saturated with water, is the phreatic zone. Substrate with low porosity that permits limited transmission of groundwater is known as an aquitard. An aquiclude is a substrate with porosity that is so low it is virtually impermeable to groundwater.
A confined aquifer is an aquifer that is overlain by a relatively impermeable layer of rock or substrate such as an aquiclude or aquitard. If a confined aquifer follows a downward grade from its recharge zone, groundwater can become pressurized as it flows. This can create artesian wells that flow freely without the need for a pump, with water rising to a higher elevation than the static water table of the unconfined aquifer above. The characteristics of aquifers vary with the geology and structure of the substrate and topography in which they occur. In general, the more productive aquifers occur in sedimentary geologic formations. By comparison, weathered and fractured crystalline rocks yield smaller quantities of groundwater in many environments. Unconsolidated to poorly cemented alluvial materials that have accumulated as valley-filling sediments in major river valleys and geologically subsiding structural basins are included among the most productive sources of groundwater. The high specific heat capacity of water and the insulating effect of soil and rock can mitigate the effects of climate and maintain groundwater at a relatively steady temperature. In some places where groundwater temperatures are maintained by this effect at about 10 °C (50 °F), groundwater can be used for controlling the temperature inside structures at the surface. For example, during hot weather relatively cool groundwater can be pumped through radiators in a home and then returned to the ground in another well. During cold seasons, because it is relatively warm, the water can be used in the same way as a source of heat for heat pumps that is much more efficient than using air. The volume of groundwater in an aquifer can be estimated by measuring water levels in local wells and by examining geologic records from well-drilling to determine the extent, depth and thickness of water-bearing sediments and rocks. Before an investment is made in production wells, test wells may be drilled to measure the depths at which water is encountered and collect samples of soils, rock and water for laboratory analyses. Pumping tests can be performed in test wells to determine flow characteristics of the aquifer. Groundwater makes up about twenty percent of the world's fresh water supply, which is about 0.61% of the entire world's water, including oceans and permanent ice. Global groundwater storage is roughly equal to the total amount of freshwater stored in the snow and ice pack, including the north and south poles. This makes it an important resource that can act as a natural storage that can buffer against shortages of surface water, as during times of drought. Groundwater can be a long-term 'reservoir' of the natural water cycle (with residence times from days to millennia), as opposed to short-term water reservoirs like the atmosphere and fresh surface water (which have residence times from minutes to years). Deep groundwater (which is quite distant from the surface recharge) can take a very long time to complete its natural cycle. The Great Artesian Basin in central and eastern Australia is one of the largest confined aquifer systems in the world, extending for almost 2 million km². By analysing the trace elements in water sourced from deep underground, hydrogeologists have been able to determine that water extracted from these aquifers can be more than 1 million years old.
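The flow characteristics derived from such pumping tests are usually summarized by a hydraulic conductivity and a hydraulic gradient, which together set how fast groundwater moves. The snippet below is a rough, illustrative application of Darcy's law; the conductivity, head difference, well spacing, and porosity are assumed placeholder values, not field data.

```python
# A rough, hypothetical illustration of Darcy's law for groundwater flow:
# q = -K * (dh / dl), where K is hydraulic conductivity and dh/dl is the
# hydraulic gradient. The numbers below are placeholders, not measurements.

K = 1e-4           # hydraulic conductivity of a sandy aquifer, m/s (assumed)
dh = -2.0          # head drop between two wells, m (water flows toward lower head)
dl = 500.0         # distance between the wells, m (assumed)

darcy_flux = -K * (dh / dl)               # specific discharge, m/s
porosity = 0.30                           # effective porosity (assumed)
average_velocity = darcy_flux / porosity  # average linear velocity of the water, m/s

seconds_per_year = 365.25 * 24 * 3600
print(f"Darcy flux:      {darcy_flux * seconds_per_year:.1f} m/yr")
print(f"Linear velocity: {average_velocity * seconds_per_year:.1f} m/yr")
```

With these assumed numbers the water itself moves only a few tens of metres per year, which is one reason groundwater systems respond so slowly to pumping and pollution.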
By comparing the age of groundwater obtained from different parts of the Great Artesian Basin, hydrogeologists have found that it increases in age across the basin. Where water recharges the aquifers along the Eastern Divide, ages are young. As groundwater flows westward across the continent, it increases in age, with the oldest groundwater occurring in the western parts. This means that in order to have travelled almost 1000 km from the source of recharge in 1 million years, the groundwater flowing through the Great Artesian Basin travels at an average rate of about 1 metre per year. Recent research has demonstrated that evaporation of groundwater can play a significant role in the local water cycle, especially in arid regions. Scientists in Saudi Arabia have proposed plans to recapture and recycle this evaporative moisture for crop irrigation. In one experiment, a 50-centimeter-square reflective carpet, made of small adjacent plastic cones, was placed in a plant-free dry desert area for five months, without rain or irrigation. It managed to capture and condense enough ground vapor to bring to life naturally buried seeds underneath it, with a green area of about 10% of the carpet area. It is expected that, if seeds were put down before placing this carpet, a much wider area would become green. Certain problems have beset the use of groundwater around the world. Just as river waters have been over-used and polluted in many parts of the world, so too have aquifers. The big difference is that aquifers are out of sight. The other major problem is that water management agencies, when calculating the "sustainable yield" of aquifer and river water, have often counted the same water twice, once in the aquifer, and once in its connected river. This problem, although understood for centuries, has persisted, partly through inertia within government agencies. In Australia, for example, prior to the statutory reforms initiated by the Council of Australian Governments water reform framework in the 1990s, many Australian states managed groundwater and surface water through separate government agencies, an approach beset by rivalry and poor communication. In general, the time lags inherent in the dynamic response of groundwater to development have been ignored by water management agencies, decades after scientific understanding of the issue was consolidated. In brief, the effects of groundwater overdraft (although undeniably real) may take decades or centuries to manifest themselves. In a classic study in 1982, Bredehoeft and colleagues modeled a situation where groundwater extraction in an intermontane basin withdrew the entire annual recharge, leaving 'nothing' for the natural groundwater-dependent vegetation community. Even when the borefield was situated close to the vegetation, 30% of the original vegetation demand could still be met by the lag inherent in the system after 100 years. By year 500, this had reduced to 0%, signalling complete death of the groundwater-dependent vegetation. The science has been available to make these calculations for decades; however, in general water management agencies have ignored effects that will appear outside the rough timeframe of political elections (3 to 5 years). Marios Sophocleous argued strongly that management agencies must define and use appropriate timeframes in groundwater planning. This will mean calculating groundwater withdrawal permits based on predicted effects decades, sometimes centuries in the future.
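As a quick sanity check of the rate quoted above, using the rounded figures from the text:

```python
# Roughly 1000 km of travel over about 1 million years works out to
# about 1 metre per year on average (rounded figures from the text).
distance_m = 1000 * 1000          # ~1000 km expressed in metres
time_years = 1_000_000            # ~1 million years
print(distance_m / time_years, "m/yr")   # -> 1.0 m/yr
```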
As water moves through the landscape, it collects soluble salts, mainly sodium chloride. Where such water enters the atmosphere through evapotranspiration, these salts are left behind. In irrigation districts, poor drainage of soils and surface aquifers can result in water tables coming to the surface in low-lying areas. Major land degradation problems of soil salinity and waterlogging result, combined with increasing levels of salt in surface waters. As a consequence, major damage has occurred to local economies and environments. Four important effects are worthy of brief mention. First, flood mitigation schemes, intended to protect infrastructure built on floodplains, have had the unintended consequence of reducing aquifer recharge associated with natural flooding. Second, prolonged depletion of groundwater in extensive aquifers can result in land subsidence, with associated infrastructure damage. Third, the same depletion can drive saline intrusion. Fourth, draining acid sulphate soils, often found in low-lying coastal plains, can result in acidification and pollution of formerly freshwater and estuarine streams. Another cause for concern is that groundwater drawdown from over-allocated aquifers has the potential to cause severe damage to both terrestrial and aquatic ecosystems – in some cases very conspicuously but in others quite imperceptibly because of the extended period over which the damage occurs. Groundwater is a highly useful and often abundant resource. However, over-use, or overdraft, can cause major problems to human users and to the environment. The most evident problem (as far as human groundwater use is concerned) is a lowering of the water table beyond the reach of existing wells. As a consequence, wells must be drilled deeper to reach the groundwater; in some places (e.g., California, Texas, and India) the water table has dropped hundreds of feet because of extensive well pumping. In the Punjab region of India, for example, groundwater levels have dropped 10 meters since 1979, and the rate of depletion is accelerating. A lowered water table may, in turn, cause other problems such as groundwater-related subsidence and saltwater intrusion. Groundwater is also ecologically important. The importance of groundwater to ecosystems is often overlooked, even by freshwater biologists and ecologists. Groundwaters sustain rivers, wetlands, and lakes, as well as subterranean ecosystems within karst or alluvial aquifers. Not all ecosystems need groundwater, of course. Some terrestrial ecosystems – for example, those of the open deserts and similar arid environments – exist on irregular rainfall and the moisture it delivers to the soil, supplemented by moisture in the air. While there are other terrestrial ecosystems in more hospitable environments where groundwater plays no central role, groundwater is in fact fundamental to many of the world's major ecosystems. Water flows between groundwaters and surface waters. Most rivers, lakes, and wetlands are fed by, and (at other places or times) feed, groundwater to varying degrees. Groundwater feeds soil moisture through percolation, and many terrestrial vegetation communities depend directly on either groundwater or the percolated soil moisture above the aquifer for at least part of each year. Hyporheic zones (the mixing zone of streamwater and groundwater) and riparian zones are examples of ecotones largely or totally dependent on groundwater.
Subsidence occurs when too much water is pumped out from underground, deflating the pore space beneath the surface and thus causing the ground to collapse. The result can look like craters on plots of land. This occurs because, in its natural equilibrium state, the hydraulic pressure of groundwater in the pore spaces of the aquifer and the aquitard supports some of the weight of the overlying sediments. When groundwater is removed from aquifers by excessive pumping, pore pressures in the aquifer drop and compression of the aquifer may occur. This compression may be partially recoverable if pressures rebound, but much of it is not. When the aquifer gets compressed, it may cause land subsidence, a drop in the ground surface. The city of New Orleans, Louisiana is actually below sea level today, and its subsidence is partly caused by removal of groundwater from the various aquifer/aquitard systems beneath it. In the first half of the 20th century, the San Joaquin Valley experienced significant subsidence, in some places up to 8.5 metres (28 feet), due to groundwater removal. Cities on river deltas, including Venice in Italy and Bangkok in Thailand, have experienced surface subsidence; Mexico City, built on a former lake bed, has experienced rates of subsidence of up to 40 cm (1'3") per year. In general, in very humid or undeveloped regions, the shape of the water table mimics the slope of the surface. The recharge zone of an aquifer near the seacoast is likely to be inland, often at considerable distance. In these coastal areas, a lowered water table may induce sea water to reverse the flow and move toward the land. Sea water moving inland is called a saltwater intrusion. Alternatively, salt from mineral beds may leach into the groundwater of its own accord. Polluted groundwater is less visible, but more difficult to clean up, than pollution in rivers and lakes. Groundwater pollution most often results from improper disposal of wastes on land. Major sources include industrial and household chemicals and garbage landfills, industrial waste lagoons, tailings and process wastewater from mines, oil field brine pits, leaking underground oil storage tanks and pipelines, sewage sludge and septic systems. Polluted groundwater is mapped by sampling soils and groundwater near suspected or known sources of pollution, to determine the extent of the pollution, and to aid in the design of groundwater remediation systems. Preventing groundwater pollution near potential sources such as landfills requires lining the bottom of a landfill with watertight materials, collecting any leachate with drains, and keeping rainwater off any potential contaminants, along with regular monitoring of nearby groundwater to verify that contaminants have not leaked into the groundwater. Groundwater pollution, from pollutants released to the ground that can work their way down into groundwater, can create a contaminant plume within an aquifer. Pollution can occur from landfills, naturally occurring arsenic, on-site sanitation systems or other point sources, such as petrol stations or leaking sewers. Movement of water and dispersion within the aquifer spreads the pollutant over a wider area, its advancing boundary often called a plume edge, which can then intersect with groundwater wells or daylight into surface water such as seeps and springs, making the water supplies unsafe for humans and wildlife. Several mechanisms influence the transport of pollutants in groundwater, for example diffusion, adsorption, precipitation, and decay.
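To make those mechanisms concrete, here is a toy one-dimensional sketch of a contaminant plume migrating down a flow path, combining advection, dispersion, sorption (as a retardation factor), and first-order decay. All parameter values are assumed for illustration; this is a generic textbook-style calculation, not a representation of any particular site or of the transport models mentioned below.

```python
import numpy as np

# Toy 1-D sketch of pollutant transport in groundwater: advection, dispersion,
# linear sorption (retardation factor R), and first-order decay. All parameter
# values are assumed placeholders, chosen only for illustration.

v = 0.5       # average groundwater velocity, m/day (assumed)
D = 0.1       # dispersion coefficient, m^2/day (assumed)
R = 2.0       # retardation factor from sorption (assumed)
lam = 0.001   # first-order decay constant, 1/day (assumed)

dx, dt = 1.0, 0.5              # grid spacing (m) and time step (days)
x = np.arange(0.0, 200.0, dx)  # 200 m flow path
C = np.zeros_like(x)           # concentration, arbitrary units

for _ in range(1000):          # simulate 500 days
    C[0] = 1.0                 # constant source at the upgradient boundary
    adv = -v * np.gradient(C, dx)                    # advection term
    disp = D * np.gradient(np.gradient(C, dx), dx)   # dispersion term
    C = C + dt * (adv + disp - lam * C) / R
    C = np.clip(C, 0.0, None)                        # keep concentrations non-negative

# Rough position of the advancing plume edge: first point below 1% of the source
front = x[np.argmax(C < 0.01)]
print(f"Approximate plume front after 500 days: {front:.0f} m downgradient")
```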
The interaction of groundwater contamination with surface waters is analyzed by use of hydrology transport models. The danger of pollution of municipal supplies is minimized by locating wells in areas of deep groundwater and impermeable soils, and by careful testing and monitoring of the aquifer and nearby potential pollution sources.

In the United States, laws regarding ownership and use of groundwater are generally state laws; however, regulation of groundwater to minimize pollution of groundwater is by both the states and the federal-level Environmental Protection Agency. Ownership and use rights to groundwater typically follow one of three main systems:

Rule of Capture
The Rule of Capture provides each landowner the ability to capture as much groundwater as they can put to a beneficial use, but they are not guaranteed any set amount of water. As a result, well-owners are not liable to other landowners for taking water from beneath their land. State laws or regulations will often define "beneficial use", and sometimes place other limits, such as disallowing groundwater extraction which causes subsidence on neighboring property.

Limited private ownership rights
A second system grants limited private ownership rights, similar to riparian rights in a surface stream. The amount of the groundwater right is based on the size of the surface area, with each landowner getting a corresponding amount of the available water. Once adjudicated, the maximum amount of the water right is set, but the right can be decreased if the total amount of available water decreases, as is likely during a drought. Landowners may sue others for encroaching upon their groundwater rights, and water pumped for use on the overlying land takes preference over water pumped for use off the land.

Reasonable Use Rule (American Rule)
This rule does not guarantee the landowner a set amount of water, but allows unlimited extraction as long as the result does not unreasonably damage other wells or the aquifer system. Usually this rule gives great weight to historical uses and prevents new uses that interfere with the prior use.

Environmental protection of groundwater
In November 2006, the Environmental Protection Agency published the Ground Water Rule in the United States Federal Register. The EPA was concerned that groundwater systems would be vulnerable to contamination from fecal matter. The point of the rule was to keep microbial pathogens out of public water sources. The 2006 Ground Water Rule was promulgated under the Safe Drinking Water Act, as amended in 1996.

Groundwater scrutiny upon real estate property transactions in the US
In the US, upon commercial real estate property transactions both groundwater and soil are the subjects of scrutiny, with a Phase I Environmental Site Assessment normally being prepared to investigate and disclose potential pollution issues. In the San Fernando Valley of California, real estate contracts for property transfer below the Santa Susana Field Laboratory (SSFL) and eastward have clauses releasing the seller from liability for groundwater contamination consequences from existing or future pollution of the Valley Aquifer.

- Richard Greenburg (2005). The Ocean Moon: Search for an Alien Biosphere. Springer Praxis Books.
- National Geographic Almanac of Geography, 2005, ISBN 0-7922-3877-X, page 148.
- "What is hydrology and what do hydrologists do?". The USGS Water Science School. United States Geological Survey. 23 May 2013. Retrieved 21 Jan 2014. - "Learn More: Groundwater". Columbia Water Center. Retrieved 15 September 2009. - United States Department of the Interior (1977). Ground Water Manual (First ed.). United States Government Printing Office. p. 4. - File:Groundwater flow.svg - Hassan, SM Tanvir (March 2008). Assessment of groundwater evaporation through groundwater model with spatio-temporally variable fluxes (PDF) (MSc). Enschede, Netherlands: International Institute for Geo-Information Science and Earth Observation. - Al-Kasimi, S. M. (2002). Existence of Ground Vapor-Flux Up-Flow: Proof & Utilization in Planting The Desert Using Reflective Carpet 3. Dahran. pp. 105–119. - Sophocleous, Marios (2002). "Interactions between groundwater and surface water: the state of the science". Hydrogeology Journal 10: 52–67. Bibcode:2002HydJ...10...52S. doi:10.1007/s10040-001-0170-8. - "Free articles and software on drainage of waterlogged land and soil salinity control". Retrieved 2010-07-28. - Ludwig, D.; Hilborn, R.; Walters, C. (1993). "Uncertainty, Resource Exploitation, and Conservation: Lessons from History" (PDF). Science 260 (5104): 17–36. Bibcode:1993Sci...260...17L. doi:10.1126/science.260.5104.17. JSTOR 1942074. PMID 17793516. - Zektser et al. - Sommer, Bea; Horwitz, Pierre; Sommer, Bea; Horwitz, Pierre (2001). "Water quality and macroinvertebrate response to acidification following intensified summer droughts in a Western Australian wetland". Marine and Freshwater Research 52 (7): 1015. doi:10.1071/MF00021. - Zektser, S.; Lo�Iciga, H. A.; Wolf, J. T. (2004). "Environmental impacts of groundwater overdraft: selected case studies in the southwestern United States". Environmental Geology 47 (3): 396–404. doi:10.1007/s00254-004-1164-3. replacement character in |last2=at position 3 (help) - Upmanu Lall. "Punjab: A tale of prosperity and decline". Columbia Water Center. Retrieved 2009-09-11. - Dokka, Roy K. (2011). "The role of deep processes in late 20th century subsidence of New Orleans and coastal areas of southern Louisiana and Mississippi". Journal of Geophysical Research 116 (B6). doi:10.1029/2010JB008008. ISSN 0148-0227. - Sneed, M; Brandt, J; Solt, M (2013). "Land Subsidence along the Delta-Mendota Canal in the Northern Part of the San Joaquin Valley, California, 2003–10" (PDF). USGS Scientific Investigations Report 2013-5142. Retrieved 22 June 2015. - Tosi, Luigi; Teatini, Pietro; Strozzi, Tazio; Da Lio, Cristina (2014). "Relative Land Subsidence of the Venice Coastland, Italy": 171–173. doi:10.1007/978-3-319-08660-6_32. - Aobpaet, Anuphao; Cuenca, Miguel Caro; Hooper, Andrew; Trisirisatayawong, Itthi (2013). "InSAR time-series analysis of land subsidence in Bangkok, Thailand". International Journal of Remote Sensing 34 (8): 2969–2982. doi:10.1080/01431161.2012.756596. ISSN 0143-1161. - Arroyo, Danny; Ordaz, Mario; Ovando-Shelley, Efrain; Guasch, Juan C.; Lermo, Javier; Perez, Citlali; Alcantara, Leonardo; Ramírez-Centeno, Mario S. (2013). "Evaluation of the change in dominant periods in the lake-bed zone of Mexico City produced by ground subsidence through the use of site amplification factors". Soil Dynamics and Earthquake Engineering 44: 54–66. doi:10.1016/j.soildyn.2012.08.009. ISSN 0267-7261. 
- "Appendix H, Groundwater Law and Regulated Riparianism", Final Report: Restoring Great Lakes Basin Water thorough the Use of Conservation Credits and Integrated Water Balance Analysis System, The Great Lakes Protection Fund Project # 763 (pdf), retrieved 16 January 2014 - Ground Water Rule (GWR) | Ground Water Rule | US EPA. Water.epa.gov. Retrieved on 2011-06-09. - EPA; http://water.epa.gov/type/groundwater/index |Wikimedia Commons has media related to Underground water.|
What Printing Workers Do Printing press operators prepare, operate, and maintain printing presses. Printing workers produce print material in three stages: prepress, press, and binding and finishing. They review specifications, calibrate color settings on printers, identify and fix problems with printing equipment, and assemble pages. Printing workers typically do the following: - Review job orders to determine quantities to be printed, paper specifications, colors, and special printing instructions - Arrange pages so that materials can be printed - Operate laser plate-making equipment that converts electronic data to plates - Feed paper through press cylinders and adjust equipment controls - Collect and inspect random samples during print runs to identify any needed adjustments - Cut material to specified dimensions, fitting and gluing material to binder boards by hand or machine - Compress sewed or glued sets of pages, which are called signatures, using hand presses or smashing machines - Bind new books, using hand tools such as bone folders, knives, hammers, or brass binding tools The printing process has three stages: prepress, press, and binding or finishing. In small print shops, the same person may take care of all three stages. However, in most print shops, workers specialize in an occupation that focuses on one step in the printing process: Prepress technicians and workers prepare print jobs. They do a variety of tasks to help turn text and pictures into finished pages and prepare the pages for print. Some prepress technicians, known as preflight technicians, take images from graphic designers or customers and check them for completeness. They review job specifications and designs from submitted sketches or clients’ electronic files to ensure that everything is correct and all files and photos are included. Some prepress workers use a photographic process also known as “cold-type” technology to make offset printing plates (sheets of metal that carry the final image to be printed). This is a complex process, involving ultraviolet light and chemical exposure, through which the text and images of a print job harden on a metal plate and become water repellent. These hard, water-repellent portions of the metal plate are in the form of the text and images that will be printed. More recently, however, the printing industry has moved to technology known as direct-to-plate. Many prepress technicians now send the data directly to a plating system, bypassing the need for the photographic technique. The direct-to-plate technique is an example of how digital imaging technology has largely replaced cold-type print technology. Printing press operators prepare, run, and maintain printing presses. Their duties vary according to the type of press they operate. Traditional printing methods, such as offset lithography, gravure, flexography, and letterpress, use a plate or roller that carries the final image that is to be printed and then copies the image to paper. In addition to the traditional printing processes, plateless or nonimpact processes are becoming more common. Plateless processes—including digital, electrostatic, and ink-jet printing—are used for copying, duplicating, and document and specialty printing, usually in quick-printing shops and smaller printing shops. Commercial printers are increasingly using digital presses with longer-run capabilities for short-run or customized printing jobs. 
Digital presses also allow printers to transfer files, blend colors, and proof images electronically, thus avoiding the costly and time-consuming steps of making printing plates that are common in offset printing. Print binding and finishing workers combine printed sheets into a finished product, such as a book, magazine, or catalog. Their duties depend on what they are binding. Some types of binding and finishing jobs take only one step. Preparing leaflets or newspaper inserts, for example, requires only folding and trimming. Binding books and magazines, however, takes several steps. Bindery workers first assemble the books and magazines from large, flat, printed sheets of paper. They then operate machines that fold printed sheets into signatures, which are groups of pages arranged sequentially. They assemble the signatures in the right order and join them by saddle stitching (stapling them through the middle of the binding) or perfect binding (using glue, not stitches or staples). Some bookbinders repair rare books by sewing, stitching, or gluing the covers or the pages.
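To illustrate what "arranging the signatures in the right order" involves, here is a minimal, generic sketch of saddle-stitch imposition: working out which pages share a sheet so that the folded, nested booklet reads in order. It assumes a page count divisible by four and is an illustration of the idea only, not a description of any particular bindery or imposition software, which also handles creep, bleeds, and other signature sizes.

```python
# Minimal sketch of saddle-stitch imposition: each sheet carries 4 pages
# (two per side); sheets are nested and stapled through the middle.
# Assumes the page count is a multiple of 4. Illustration only.

def saddle_stitch_order(num_pages: int):
    if num_pages % 4 != 0:
        raise ValueError("Saddle-stitch booklets need a page count divisible by 4.")
    sheets = []
    lo, hi = 1, num_pages
    while lo < hi:
        # Outer side of the sheet carries (last, first); inner side (first+1, last-1).
        sheets.append(((hi, lo), (lo + 1, hi - 1)))
        lo += 2
        hi -= 2
    return sheets

for front, back in saddle_stitch_order(8):
    print(f"front: pages {front}, back: pages {back}")
# front: pages (8, 1), back: pages (2, 7)
# front: pages (6, 3), back: pages (4, 5)
```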
Terminology and distinctions

Some basic metaphysical categories

Mental phenomena appear in the full variety of basic categories displayed by phenomena in most other domains, and it is often extremely important to bear in mind just which category is being discussed. Providing definitions of these basic categories is the task of metaphysics in general and will not be undertaken here. What follows are some illustrative examples.

Substances are the basic things—the basic "stuff"—out of which the world is composed. Earth, air, fire, and water were candidate substances in ancient times; energy, the chemical elements, and subatomic particles are more contemporary examples. Historically, many philosophers have thought that the mind involves a special substance that is different in some fundamental way from material substances. This view, however, has largely been replaced by more moderate claims involving other metaphysical categories to be discussed below.

Objects are, in the first instance, just what are ordinarily called "objects"—tables, chairs, rocks, planets, stars, and human and animal bodies, among innumerable other things. Physicists sometimes talk further about "unobservable" objects, such as molecules, atoms, and subatomic particles; and psychologists have posited unobservable objects such as drives, instincts, memory traces, egos, and superegos. All of these are objects in the philosophical sense. Particularly problematic examples, to be discussed below, are "apparent" objects such as pains, tickles, and mental images.

Abstract and concrete

Most objects one thinks of are located somewhere in space and time. Philosophers call anything that is potentially located in space and time "concrete." Some apparent objects, however, seem to be neither in space nor in time. There exists, after all, a positive square root of nine, namely, the number three; by contrast, the positive square root of -1 does not exist. But the square root of nine is not located in any particular part of space. It seems to exist outside of time entirely, neither coming into existence nor passing out of it. Objects of this sort are called "abstract." Some mental phenomena are straightforwardly abstract—for example, the thoughts and beliefs that are shared between the present-day citizens of Beijing and the citizens of ancient Athens. But other mental phenomena are especially puzzling in this regard. For example, Brutus might have had regretful thoughts after stabbing Julius Caesar, and these thoughts might have caused him to blush. But precisely where did these regretful thoughts occur so that they could have had this effect? Does it even make sense to say they occurred at a point one millimeter away from Brutus's hypothalamus? Sensations are even more peculiar, since they often seem to be located in very specific places, as when one feels a pain in one's left forearm. But, as occurs in the case of phantom limb syndrome, one could have such a pain without actually having a forearm.
And mental images seem downright paradoxical: people with vivid visual imaginations may report having images of a cow jumping over the Moon, for example, but no one supposes that there is an actual image of this sort in anyone's brain.

Properties and relations

Objects seem to have properties: a tennis ball is spherical and fuzzy; a billiard ball is spherical and smooth. To a first approximation, a property can be thought of as the thing named by that part of a simple sentence that is left over when the subject of the sentence is omitted; thus, the property expressed by "is spherical" (or the property of sphericality, or being spherical) is obtained by omitting "a tennis ball" from "A tennis ball is spherical." As these examples show, a property such as sphericality can be shared by many different objects (for this reason, properties have traditionally been called universals). Mental properties, such as being conscious and being in pain, can obviously be shared by many people and animals—and, much more controversially, perhaps also by machines. Relations are what is expressed by what is left when not only the subject but also the direct and indirect object (or objects) of a sentence are omitted. Thus, the relation of kissing is obtained by omitting both "Mary" and "John" from "Mary kissed John"; and the relation of giving is obtained by omitting "Eve," "Adam," and "an apple" from "Eve gave Adam an apple." Likewise, the relation of understanding is obtained by omitting both "Mary" and "that John is depressed" from "Mary understands that John is depressed." In this case the object that Mary understands is often called a thought (see below Thoughts and propositions). Properties and relations are often spoken of as being "instantiated" by the things that have them: a ball instantiates sphericality; the trio of Eve, Adam, and the apple instantiates the relation of giving. A difficult question over which philosophers disagree is whether properties and relations can exist even if they are completely uninstantiated. Is there a property of being a unicorn, a property of being a round square, or a relation of "being the reincarnation of"? This question will be left open here, since there is widespread disagreement about it. In general, however, one should not simply assume without argument that an uninstantiated property or relation exists.

States and events

States consist simply of objects having properties or standing in relations to other objects. For example, Caesar's mental state of being conscious presumably ended with the event of his death. An event consists of objects' losing or acquiring various properties and relations; thus, Caesar's death was an event that consisted of his losing the property of being alive, and John's seeing Mary is an event that consists of John's and Mary's coming to stand in the relation of seeing.
An easy lesson about water

In this lesson we learn to order some water in a restaurant in Japanese, and the best way to learn is by telling a little story. In this story you and a friend named Yuki will have a conversation, joined by a waitress. We also give you a guide on ordering food and list a few food items that you can use when practising what you have learned here.

You and a friend are tourists in Japan. You are both getting hungry and decide to go out to sample some food. You both walk around until you see a quaint little eatery. You walk in and are immediately greeted by a friendly face that takes you to a table and leaves a couple of menus. You realize you have not had anything to drink for a while. You look at your friend, Yuki, who understands Japanese. "Hey, Yuki? How can I order us some water?" you ask. Yuki smiles. "I can order it if you want." "No," you say, eager to learn new words. "Teach me." Yuki grins at you. "Okay, I will teach you three useful words and a sentence. Tell me if I go too fast." "Okay," you nod your head. "Mizu means water in Japanese, ni-hai means two cups or glasses, and kudasai means please. Now repeat these words after me: mizu." You repeat the words, and Yuki nods. "Good, you are doing great. Now to ask for water, simply say 'mizu ni-hai kudasai'." You say the sentence word for word: 'mizu ni-hai kudasai'. Your waitress nears your table with a cake platter and gives it to a couple of people sitting at the table next to you. The waitress then walks over to your table. "Nan ni shimashō ka?" she asks Yuki in a friendly voice. Yuki gestures to her to wait a second, pointing to you and saying 'chotto matte kudasai'. While she waits patiently, Yuki explains that she asked what you will have and that he asked her to wait. "Now you can ask her for our water," Yuki says. Eager to try out a new string of words, you tell the waitress: 'Mizu ni-hai kudasai'. She smiles, bows politely, says 'Hai', and walks off to get the water. "Well done!" Yuki says. "Yuki, when she spoke to you I heard something I have heard in a lot of anime I watched. She said 'nan ni', doesn't that mean 'what' or something?" you ask your friend. "Good catch, my friend. Yes, she said 'nan ni shimashō ka?' It means 'what will you have?'. I wanted you to answer her, so I said 'chotto matte kudasai', asking her to wait a moment and giving you a chance to practice your lesson."

We hope this dialogue with Yuki has been helpful. You can use the words you have learned and combine them with other words, and before you know it you will know more Japanese than you thought. Using a language is about combining words to give sentences meaning, and that is a great way to learn.

Ordering food guide: If you know what you want to eat, you can always order it by name. For example, let's say you want some ramen; you can tell the waiter 'Ramen onegaishimasu', and if you want several dishes you can list them easily by saying the word 'to' in between, like this: 'Ramen to yakisoba onegaishimasu' (Ramen and yakisoba, please). There is one thing that is important when asking for food in a restaurant. If you need more than one of an item, you need to use a different counting system. If you want two bowls of ramen, saying 'Ramen futari' will mean 'two ramen people' and will just confuse the poor waiter. Instead you say 'Ramen (w)o futatsu onegaishimasu'. As you might notice, the suffix -tsu is the counter here.
Here is a way to count to five when talking about objects:

1 thing – hitotsu
2 things – futatsu
3 things – mittsu
4 things – yottsu
5 things – itsutsu

Here is a list of types of food and drink to help you on your way:

- karē raisu – curried rice
- sashimi – sliced raw fish
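If you like to tinker, here is a small illustrative helper (not part of the lesson itself) that builds an order phrase from the general -tsu counters above. The romaji spellings follow the list in the text, and the "(item) o (counter) onegaishimasu" pattern is the one used in the ramen example; treat the particle spelling as a simplification.

```python
# Illustrative helper only: combines the -tsu counters from the lesson with the
# polite request pattern from the ramen example. Spellings follow the text.

TSU_COUNTERS = {1: "hitotsu", 2: "futatsu", 3: "mittsu", 4: "yottsu", 5: "itsutsu"}

def order(item: str, quantity: int) -> str:
    """Return a polite order phrase such as 'Ramen o futatsu onegaishimasu'."""
    if quantity not in TSU_COUNTERS:
        raise ValueError("This sketch only covers quantities 1 through 5.")
    return f"{item} o {TSU_COUNTERS[quantity]} onegaishimasu"

print(order("Ramen", 2))      # Ramen o futatsu onegaishimasu
print(order("Yakisoba", 1))   # Yakisoba o hitotsu onegaishimasu
```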
Shutter speed is a photography term that indicates the length of time the shutter is open to allow light exposure to the film or image sensor. Used in conjunction with aperture size (f-stops), this speed determines total exposure and can be changed to create different effects. It is measured in seconds, typically fractions of seconds. When a camera is being used in automatic mode, the shutter speed is adjusted automatically, but the speed can be adjusted manually on most SLR film and digital cameras. Lighting and movement are typically used to determine the proper speed. A slower speed is used in low lighting, while a short, or quick, speed is usually used to capture moving objects. To create dramatic effects, such as intentional blurring or other artistic effects, the speed may be adjusted to atypical levels for the given conditions. The shutter speed of most cameras can be adjusted in increments from 1 second to 1/1000 of a second, but longer and shorter exposure times can be achieved on some cameras. There are some rules of thumb for setting the speed, such as slower settings in low light and quicker settings for fast-moving subjects, but determining the right setting for the desired effect is more a matter of trial and error. To adjust shutter speed, a person must first set his or her camera to a manual setting. Most cameras today have a digital display viewable on the screen in the viewfinder. Most displays omit the 1 and display only the denominator of the fraction, so a shutter speed of 1/125 will be displayed as 125, while 1/500 will be displayed as 500 on screen. A setting of 125 is slower than a setting of 500. While adjusting the speed in various conditions and for various subjects will change the overall effect of the image, experimenting with apertures and sensitivity (ISO) as well is essential to understanding the full impact specific settings can have on the overall photograph.
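A small illustrative sketch of the display convention and of how shutter speed combines with aperture: it interprets the displayed number as the denominator of a fraction of a second and uses the standard exposure-value formula EV = log2(N²/t). This is a generic photography formula applied with arbitrary example settings, not a description of any particular camera's behavior.

```python
import math

# Interpret the shutter-speed number shown on a typical camera display (the
# denominator of a fraction of a second) and combine it with an aperture
# (f-number) into an exposure value: EV = log2(N^2 / t). Illustration only.

def exposure_time(displayed: int) -> float:
    """A display reading of 125 means 1/125 s; 500 means 1/500 s."""
    return 1.0 / displayed

def exposure_value(f_number: float, displayed: int) -> float:
    t = exposure_time(displayed)
    return math.log2(f_number ** 2 / t)

print(f"1/125 s at f/8: EV = {exposure_value(8, 125):.1f}")
print(f"1/500 s at f/8: EV = {exposure_value(8, 500):.1f}")   # 2 stops less light
```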
Climate change – global policy and cooperation

Climate change has wide-ranging impacts. That is why actions to combat it must be incorporated into all aspects of societal policy, including foreign policy, security policy, trade policy and development policy. The consequences of climate change burden poor developing countries most seriously. Finland supports the climate measures of developing countries as part of development cooperation.

The Paris Agreement, adopted in 2015, is one of the most important milestones in the work against climate change. The other two key agreements are the 1992 United Nations Framework Convention on Climate Change (UNFCCC) and its supplement, the Kyoto Protocol. The Paris Agreement is a comprehensive, legally binding instrument. For the first time, nearly all the countries of the world have indicated their willingness to take action to tackle climate change.

The goal of the Paris Agreement is to keep the increase in global average temperature to well below 2 °C and to aim to limit the temperature increase to 1.5 °C. The agreement also sets a long-term target for climate change adaptation, and finance flows are to be made consistent with a pathway towards low-carbon, climate-resilient development. The aim is for global emissions to peak as soon as possible and then to undertake rapid reductions. Another goal is to achieve a balance between man-made emissions and removals by carbon sinks in the second half of this century.

The Paris Agreement does not contain any quantified obligations for the reduction of emissions. Countries prepare their own national emission targets under the Paris Agreement and report to each other on how they are implementing them. The Paris Agreement entered into force in November 2016, and it has also been ratified by Finland. At present, international climate negotiations are focusing on how to implement the Paris Agreement. The Paris Agreement will reduce emissions on a global scale as of 2020. Until then, emission reduction measures are based on the Kyoto Protocol and the short-term measures agreed on in Paris. The Kyoto Protocol entered into force in 2005. It is the first legally binding instrument that has managed to reduce emissions internationally.

The adverse impacts of climate change on local climate, such as the increase in storms or droughts, cause problems especially for the poorest countries and small island states. The impacts of climate change must be taken into account in countries' plans for the future so that the results achieved so far are not nullified. Climate change also poses security threats. It generates migration, undermines food security, creates new health threats, increases competition for natural resources and can thus feed conflicts. The broad security perspective of the EU's and Finland's security policy focuses on comprehensiveness and preventive measures.

Adaptation to climate change means that the adverse effects of climate change are identified and provision is made for them. The aim is to reduce the vulnerability of both human communities and ecosystems to the impacts of climate change and to improve their ability to recover from disasters caused by climate change. Adaptation measures may vary greatly depending on location; the means to tackle climate change are different in coastal areas susceptible to storms than in grazing land affected by drought, for example.
Climate change affects men and women in different ways

Climate change has adverse impacts on the food security of households, which in developing countries is largely the responsibility of women. Women have diverse everyday experience of how best to adapt to climate change and how it can be curbed most effectively. However, women's possibilities to influence decision-making are often weak. Finland has supported the inclusion of the gender perspective in climate measures since 2008. The Paris Agreement takes the gender equality perspective into account, which was Finland's objective in the negotiations: the Contracting Parties are urged to promote gender equality and the empowerment of women in their climate actions.

Developing countries need support in their national climate actions. They need support, for example, to develop legislation and climate officials' know-how and to strengthen climate institutions. They also need to build citizens' resilience and decrease their vulnerability to the adverse effects of climate change in both rural and urban environments.

Industrialised countries support the poorest countries by various means. They provide funding and expert assistance so that developing countries can develop their own capacity to respond to climate change. Technology development and transfer also play important roles. In all this, development cooperation has an important part to play. Finland uses a variety of channels to provide this support, including funds established under the UNFCCC, the Global Environment Facility (GEF), the Green Climate Fund (GCF), bilateral development cooperation projects and NGO projects. In 2017, development policy investments were channeled into the Finland–IFC Climate Change Program, a joint climate fund that Finland set up together with the IFC (International Finance Corporation), a member of the World Bank Group. The climate perspective is also taken into account in Finnfund's financing.

Examples of Finland's support for the climate measures of developing countries:

Developing countries' adaptation is supported through meteorology projects carried out by the Finnish Meteorological Institute (FMI). They focus on the development of the countries' own meteorological services.

Finland supports Reducing Emissions from Deforestation and Forest Degradation (REDD+) programmes in almost all of the target countries for forest cooperation, such as Zambia and Myanmar. These countries are studying the carbon stocks and biomass of their forests and developing forest information systems. Forests bind carbon dioxide. The greenhouse gases from deforestation, or the disappearance of forests, account for almost one-fifth of global greenhouse gas emissions. Reducing deforestation and forest degradation also yields many other benefits, such as the protection of water reserves and biodiversity and the prevention of soil erosion. Finland has vast areas of forest and uses its long-term forest knowledge when supporting sustainable forestry in developing countries. Climate change is taken into account when planning forestry projects to be financed in developing countries.

Climate change mitigation needs cost-effective ways of reducing emissions. By pricing greenhouse gas emissions, investments are directed to lower-carbon alternatives. Finland supports emissions pricing in developing countries through, among other channels, the World Bank's Partnership for Market Readiness fund.
Through the fund, 19 countries receive support for the development of emissions trading schemes, carbon taxes and other emissions pricing schemes. Alongside development cooperation, the Ministry for Foreign Affairs procures emission reductions from investments made through the Clean Development Mechanism (CDM). It is a system under the Kyoto Protocol where industrialised countries finance emission reduction projects in developing countries. Target countries benefit from the projects: they obtain funding and new technologies promoting sustainable development. Industrialised countries get access to emission reduction credits from projects; they can use these credits to supplement their own emission reduction obligations. Finland’s CDM project portfolio has a total of about 150 projects in developing countries. The Ministry of Economic Affairs and Employment coordinates the purchase programme under the Kyoto mechanism. During the ongoing second commitment period of the Kyoto Protocol (2013–2020), emission reductions are acquired through the following carbon funds: the Asian Development Bank’s Future Carbon Fund, the NEFCO Carbon Fund and the World Bank’s Prototype Carbon Fund. In these funds, the focus has shifted to the repatriation of emission reductions. During the second commitment period of the Kyoto Protocol, Finland also has two bilateral clean development mechanism projects, the Ningxia Federal Solar Cooker Project in China and the Reduction of Methane Emissions from Ruseifeh Landfill project in Jordan. - More information on the Kyoto mechanisms on the website of the Ministry of Employment and the Economy - Ningxia Federal Solar Cooker Project in China - Reduction of Methane Emissions from Ruseifeh Landfill project in Jordan - Services and financial support: The Clean Development Mechanism (CDM) projects (in Finnish)
Alliteration Teacher Resources Find Alliteration educational ideas and activities Showing 1 - 20 of 711 resources Students explore alliteration in poetry. In this poetry lesson, students listen to examples of alliteration and identify the alliteration within a poem. Students read a poem with a partner and identify the alliteration contained in the poem. Students write examples of alliteration on index cards as an assessment. Alliteration is an entertaining literary device to utilize in reading and writing instruction. Students explore the concept of alliteration. In this sound devices lesson, students use educational software to create alliterative phrases that are accompanied by appropriate clip art, design tools, and graphics. Effective examples, descriptive definitions, and super slides make up this PowerPoint presentation. While 24 slides is quite long, the slides are simple and filled with real-life images that show alliteration in use. At the end, learners can take a quiz and then create their own alliterative poems using PowerPoint. Students draw pictures of alliteration sentences. For this six-traits lesson plan on word choice, students create illustrations of alliteration sentences using their name that were created with teacher assistance. The book, Potluck by Anne Shelby, is featured in this lesson. Are you looking for a way to bring writing into your history lesson - or history into your writing lesson? This cross-curricular activity is helpful and fun, no matter what class you're teaching! Using "Boogie Woogie Bugle Boy" by the Andrews Sisters, you can begin a discussion about World War II as well as alliteration and word choice. Your class will explore the elements of the song and imitate its style in their own original songs, using the topic of people from World War II. Third graders explore the use of alliteration. They discuss alliteration and examine various examples of alliteration in various stories. Students discuss the examples of alliteration and create their own examples of alliteration using their names. Learners study personification and alliteration in various fiction texts. In this literary devices lesson, students use various texts to identify the literary devices of personification and alliteration. Learners use examples of both devices in an original sentence and create an illustration for personification. Pupils review personification and alliteration. In this literary devices lesson, students use personification and alliteration in a sentence. Pupils draw a picture reflecting personification. Second graders are able to practice identifying alliteration. The teacher reads aloud from one of the picture or poetry books listed, 2nd graders stand up every time they hear alliteration. They identify the alliteration and the repeated consonant sound before continuing on with the poem or story. In this alliteration worksheet, 4th graders write alliterations with their first names, find rhyming patterns in poems, and more. Students complete 4 activities. Students review examples of alliteration in Shel Silverstein's poems. They are assigned a letter of the alphabet and then write an original alliterative poem using that letter. Students complete activities to learn to write with alliteration. In this alliteration lesson, students read the story Thank You for the Thistle and listen to the sounds at the beginning of each word. Students write sentences containing alliteration. Students then create an alphabet book. 
Students write an alliterative sentence with the letter and draw a picture. Learners explain what alliteration means. For this language arts lesson, students read excerpts of the book, Thank You for the Thistle. They write a sentence repeating the same letter sound, using adjectives, adverbs, and vivid verbs to lengthen the alliteration. Pupils explore alliteration and tongue twisters. They read and discuss alliteration examples, select and illustrate ten tongue twisters, and write original tongue twisters. In this poetry lesson plan, students listen to stories that contain alliteration. While listening to stories, student pairs make a list of words that they then use to construct "silly sentences" that contain alliteration. Each student pair gets up in front of the class and reads their silly sentences. Students discuss alliteration and how it is used in the book The Z Was Zapped. Students choose a letter and create alliterative sentences and illustrate the letter. Students read Thank You for the Thistle and understand what alliteration is. In this alliteration lesson, students write sentences using alliteration. Students choose a letter of the alphabet and the class writes an alphabet book of alliteration examples. Learners practice using vocabulary words to write alliterations. In this language arts lesson, students collaborate with classmates to create ideas for fun alliterations as they create their own using a children's word processing program. Learners post their tongue twisters on the Internet after they read work by other children. Review and discuss how to correctly write alliterations and then write seven original Christmas alliterations to share with their classmates. This activity has a suggested extension that has them make a book illustrating their alliterations.
Spelling is the order in which letters are put together to make up words. Many languages have phonetic spelling, in which each letter represents a certain sound; in English, however, this is not the case. An English letter can have many different sounds. For example, the letter c can be pronounced:
/k/ as in cat
/s/ as in nice
The spelling of an English word depends very much on its linguistic origin. English is primarily derived from the Greek, Latin and Germanic languages, and its diverse spelling reflects this. Another reason for the confused state of English spelling is tied up with the history of the language and of printing.
- Changes in pronunciation over time.
- Imported words bringing their original spelling with them.
- Words imported from non-Roman alphabets being transcribed differently.
- Conquests bringing in changing ways of spelling.
- Early publishers writing words to reflect the way in which they pronounced them.
On this last point, when publishing began in earnest in England there were major differences in accent between one region and the next, and printers often wrote phonetically in the way they spoke; thus different printers wrote the same word in different ways. This is perhaps most clearly seen now in the difference between American and British spelling, where the huge distance between the two cultures meant that they developed in slightly different directions. Interestingly, the spelling (and grammar) of American English is often older than that of British English.
Unlike many other languages, English rarely uses diacritics (accents and so on). Mostly these appear in imported words – often from French – and it is increasingly common for the diacritic to be dropped. Thus whilst it is common to see crêpe in French, in English it may well be written as crepe.
A loosely knit band of roving ice boulders in orbit around Saturn could be providing the raw material for one of the planet's rings, scientists say. The finding, detailed in the Aug. 3 issue of the journal Science, could solve the puzzle of what sustains Saturn's "G-ring" and might be evidence that a Saturnian moonlet was destroyed during an ancient collision. The formation of Saturn's rings is a general mystery, but theorists figure they're the result of one or more breakups of icy objects in the past. In particular though, the G-ring has really puzzled scientists since its discovery in the late 1970s by the Voyager mission. The odd ring The G-ring is a faint and narrow circlet of debris located beyond Saturn's main set of rings. There is no obvious way it could have formed. Material for Saturn's E-ring is supplied by debris shed from the moon Enceladus, and the planet's F-ring is created by the shepherding actions of the moons Prometheus and Pandora, which act like snowplows to clear lanes on either side of the ring. But Mimas, the G-ring's closest Saturnian moon, is located a relatively far 9,300 miles (15,000 kilometers) away from the ring. In September 2006, NASA's Cassini spacecraft provided scientists with one of their best glimpses of the G-ring. Images revealed a bright, curved streak of material near the ring's inner edge composed of icy particles ranging in size from less than a centimeter to a meter in diameter. "You don't normally expect in a ring system to see something confined to a range of latitudes around Saturn," said study team member Matthew Hedman of Cornell University in New York. "By definition, things should smear out and become a continuous ring all the way around the planet." Scientists estimate the arc is about 155 miles (250 kilometers) wide and about 100,000 miles (170,000 kilometers) long, or about one-sixth the circumference of the G-ring. If all the material in the arc were gathered into a single body, it would form an icy moonlet more than 300 feet (100 meters) across. And like a moon, the bright arc circles around Saturn, taking about 19 hours to make one complete orbit. It moves in a nearly synchronous orbit with Mimas, going around seven times for every six orbits that Mimas makes. Despite Mimas's distant location from the G-ring, scientists think the moon's gravity helps herd in the larger pieces of debris in the arc, keeping them in a crescent shape as they go around Saturn. Occasionally, these large chunks of ice smash into one another, releasing clouds of dust and fine ice crystals into space. The researchers speculate that the shed material gets bumped around by highly charged particles and electrons, called plasma, in Saturn's magnetosphere and eventually drift out of the confines of the arc to settle into a ring. "The big [arc] particles only feel gravity, so they don't spread very much. They're all trapped in the arc," Hedman told SPACE.com. "But the little dust grains can interact with the plasma in Saturn's magnetosphere. Since they're smaller, they can feel those forces and [the interaction] can cause material to spread radially." The origin of the arc's bigger particles is still a mystery. One idea is that they are remnants of a small satellite destroyed long ago through a collision with another object. "One possibility is that it was a moonlet that was broken up," Hedman said. "The trick is that it has to get into this configuration with Mimas, and we're still trying to understand how that could've happened in the first place." 
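The 7:6 relationship with Mimas quoted above can be checked with a couple of lines of arithmetic. The short sketch below is illustrative only: it combines the article's figures with Mimas's well-known orbital period of roughly 22.6 hours to derive the arc's period from the resonance, and estimates the ring radius from the statement that the arc spans about one-sixth of the circumference.

import math

MIMAS_PERIOD_H = 22.6      # Mimas's orbital period in hours (standard value, not from the article)
ARC_LENGTH_KM = 170_000    # arc length quoted in the article
ARC_FRACTION = 1 / 6       # the arc covers about one-sixth of the G-ring

# 7:6 resonance: the arc completes 7 orbits for every 6 orbits of Mimas,
# so its period is 6/7 of Mimas's period.
arc_period_h = MIMAS_PERIOD_H * 6 / 7
print(f"Arc period: {arc_period_h:.1f} h")           # ~19.4 h, matching "about 19 hours"

# Rough ring radius implied by the quoted arc length and coverage fraction.
circumference_km = ARC_LENGTH_KM / ARC_FRACTION
radius_km = circumference_km / (2 * math.pi)
print(f"Implied ring radius: {radius_km:,.0f} km")   # ~162,000 km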
Imagine if Martians traveled to Earth and they named the planet Xiksa (Martian for Water). It might rub a few Earthlings the wrong way. Now imagine they travel to specific continents, like Turtle Island, what most people call North America; and imagine they name it Zdinsc (after the first Martian to alight on the continent). How would that feel, especially after the Martians launch a full-scale invasion and colonization of the planet?

Recently, Dictionary.com featured a question: “Why is it called America, not Columbusia?”:

But what about America itself? Why aren’t the continents of North and South America called “Columbusia” after Christopher Columbus? The word America comes from a lesser-known navigator and explorer, Amerigo Vespucci.[1]

Maybe Vespucci is the source for the naming of the western hemisphere, but this is disputed by others. The historian and sailor Samuel Morison was sure the hemisphere’s continents are named after Welshman Richard Amerike, the man who financed John Cabot’s westward voyage in 1497.[2] BBC History wrote, “… it is also probable that, as the chief sponsor of the Matthew’s voyage, and with Cabot’s wife and children then living, at his instigation, in a house belonging to a close friend, Amerike sought reward for his patronage by asking that any new-found lands should be named after him.”[3]

A few weeks ago, I read a grade 10 Social Studies test. On it was a question: “Who discovered Vancouver Island?” The multiple-choice question offered the names of five Europeans. Even if the question had been posed as “Which non-Indigenous explorer first reached an island later to be named Vancouver Island?”, all five proposed names would still have been wrong. It was a terribly worded and trivial question. People who are not blinkered by ethnocentrism today realize that it is incorrect to depict a place where human beings already reside as being discovered by human beings from another ethnic group. Can it therefore be morally correct to append a colonial designation upon the land inhabited by another people without their consent?

Three major First Nations reside on Vancouver Island (immodestly named Quadra and Vancouver Island by seafarers Bodega y Quadra and George Vancouver): Nuu-chah-nulth, Kwakwaka’wakw, and Coast Salish. I have never been able to determine an Indigenous designation for the island. These nations each reside in their own section of the largest island on the west coast of Turtle Island.

Turning to the northern continent, how then should one refer to the landmass in deference to the Original Peoples? The eastern nations of the Haudenosaunee and Anishnabek both refer to the continent as Turtle Island – a name derived from folklore.

One Indigenous website, Mexica Uprising!, urges Indigenous peoples to “rise up against the illegal settler population whom continue to enslave us socially, economically, politically and spiritually.”[4] It proffers another name for the landmasses of the western hemisphere. The website complains, “Latin America is named after the White people of Latin descent who stole our land and claimed it as their own. The Europeans brand everything they ‘own’ with their name, it is no different with our land.” The proper name in Nahuatl is given as Ixachilan – “one mass of land united by the Eagle and Condor not two seperate [sic] continents.” Mexica Uprising! implores Indigenous peoples, “It is time to de-colonize our minds and think as individuals.
Don’t let the wasicu control your destiny, learn your true history and culture!”

Is de-colonization just meant for the minds of the colonized? Is it not about time for those who have profited from the actions of colonialist ancestors to reorient their thinking along a different moral path — a path that acknowledges and rejects past crimes against humanity and seeks to atone for past crimes, not committed by themselves, but from which they profit in some sense? Or is aggressive Martian morality acceptable?

[1] The Hot Word, “Why is it called America, not Columbusia?” Dictionary.com, 9 October 2011.
[2] Samuel Eliot Morison, The European Discovery of America: The Northern Voyages, Oxford University Press, New York, 1971.
[3] Peter MacDonald, “The Naming of America,” BBC History. Last updated 29 March 2011.
[4] “Welcome to Mexica Uprising!” Mexica Uprising.
History is alive through the stories of others. We are all part of history. The documentary Revolution ’67 tells the story of the Newark riots. Many individuals are featured in the film: it includes interviews with the activists Tom Hayden and Carol Glassman, the Newark poet Amiri Baraka, and the Rutgers historian Clement Price. Its vivid photographs and footage from the period give a sense of what Newark was like during the riots. The narration comes from first-hand experience and describes the events in detail: the snipers, the looting in the streets, and the disregard for human dignity and rights shown by the National Guard.

The Beginning: July 12, 1967

The documentary shows how race played a role during the Newark riots. It depicts the first incident of the riots, on July 12, 1967, and the racial disparity in Newark, New Jersey during the 1960s. Italian American police officers arrested an African American cab driver named John Smith for tailgating. The police were a symbol of dominant culture over the majority of African Americans, who did not hold positions in the police department or city government. The cab driver became a victim of the police. According to the video, this was only one of many incidents that could have sparked the Newark riots, but it came at the right time. Other cities, like Detroit, were rioting for social justice.

Newark’s history before the riots

Newark was experiencing poverty and inequality. Meanwhile, the United States was battling a war overseas, and most of the country ignored the issues of poverty in the cities. We live in a racially divided society. African Americans could not follow in the footsteps of previous immigrant groups; they carried a history of slavery and discrimination into the twentieth century. Newark is proof of this. Newark was destined to be a poor city. The practice of redlining was used to determine where one could and could not live. Getting a mortgage depended on your skin color. There were sections where blacks and whites could live. Many African Americans were left with fewer options for mobility. Many industrial jobs were sent to South Jersey and beyond, leaving poor housing and health care in Newark after the 1950s.

Reasons for the riots: power struggle, race relations, human rights

The riots can be viewed in many ways, according to the documentary. They could be seen as a response to the dominant culture losing its dominance. The documentary points to the reforms of the early 1960s, among them the anti-poverty programs. These programs sent money directly to individuals instead of to the city government. The city government was being stripped of its power as money was given to the people directly; the federal government gave power to the citizens of Newark through this program.

The riots were also a response to the lack of human rights. There was corruption in many areas of government. The housing authority did not manage the housing projects properly, and the projects were run with no regard for safety. The city council had no African American representation, even though African Americans were the majority of the population. The majority was disregarded in all fields, especially in health care. LeRoi Jones, now known as Amiri Baraka, experienced brutality at the hands of the police. When he was injured, he arrived at the hospital.
The hospital staff did not treat him like a human being when he needed stitches: they sewed up his wound without numbing the area.

The riot could also be seen as a week when martial law was in effect. The National Guard, the New Jersey State Police, and the Newark police were searching citizens of Newark at checkpoints built with barbed wire. Individuals could be stopped simply for being citizens of Newark. According to one individual, “they were stopping us because we [the majority of Newark citizens] were black.” National Guardsmen were instructed to shoot, because snipers could have been hiding anywhere. The image of the sniper became the image of the African American, an image used to scare people about “the other”.

The documentary shows how Newark was heading toward a riot because of its history, its government, and its social inequality. As Price puts it, “I am surprised Newark did not have an earlier riot.” Newark experienced crimes against human rights and the corruption of its government. The riots were a struggle between the power of the people and the power of the government, and they became a symbol of the fight for social justice.

Tibaldo-Bongiorno, Marylou. Revolution ’67. POV, July 10, 2007. DVD.
Conflict Resolution in Congo: Is There an Answer?

Students research and participate in a simulation of a meeting of African nations. They investigate and attempt to resolve the conflict in Congo and the neighboring nations.

See similar resources:

Exercise in Conflict Resolution
How do major religions, including Judaism, Christianity, and Islam, differ in how they view the role of individual freedoms within society, the definition of morality, and the importance of politically satisfying the greater good? Here... 10th - 12th Social Studies & History CCSS: Adaptable

How Can Conflict and Disagreement be Managed and Resolved?
As you explore the meaning of cultural understanding and diffusion with your learners, discuss how dialogue can play a role in resolving conflicts based on misunderstanding. Examine keywords such as compromise, communication, and... 9th - 12th Social Studies & History CCSS: Adaptable

Conflict is Inevitable, Bullying is Not—7th Grade
Differentiate between a simple conflict and true bullying. A chart listing key differences between bullying and normal conflict acts as a template for your class to create similar charts. The lesson provides scenarios, which students... 7th Social Studies & History

What IS the Difference Between Sunni and Shi'ite Muslims?
The distinction between Shiite and Sunni Muslims is an often misunderstood concept, yet very important for its implications in global affairs and for a more comprehensive understanding of the religion of Islam. 8th - 12th Social Studies & History CCSS: Adaptable

Thin Ice Encompassed by Increasing Fire: Conflict and Struggle for Human Rights in the Middle East
To gain a deeper understanding of the scope of the Universal Declaration of Human Rights (UDHR), groups examine a series of videos, articles, and images to identify the human rights represented. They then research that right in the... 11th - 12th Social Studies & History CCSS: Designed
In 1928, British biologist Alexander Fleming discovered the antibiotic properties of penicillin. That discovery has led to millions of human lives saved. But to Fleming, penicillin was more than a biological wonder. It was also an artistic medium: He was a member of the Chelsea Arts Club, where he created amateurish watercolors. Less well known is that he also painted in another medium, living organisms. Fleming painted ballerinas, houses, soldiers, mothers feeding children, stick figures fighting and other scenes using bacteria. He produced these paintings by growing microbes with different natural pigments in the places where he wanted different colors. He would fill a petri dish with agar, a gelatin-like substance, and then use a wire lab tool called a loop to inoculate sections of the plate with different species. The paintings were technically very difficult to make. Fleming had to find microbes with different pigments and then time his inoculations such that the different species all matured at the same time. These works existed only as long as it took one species to grow into the others. When that happened, the lines between, say, a hat and a face were blurred; so too were the lines between art and science.
Read more: “Rooted in experience: The sensory world of plants“ What do plants see? The obvious answer is that, like us, they see light. Just as we have photoreceptors in our eyes, they have their own throughout their stems and leaves. These allow them to differentiate between red and blue, and even see wavelengths that we cannot, in the far red and ultraviolet parts of the spectrum. Plants also see the direction light is coming from, can tell whether it is intense or dim and can judge how long ago the lights were turned off. In one of his last studies, Charles Darwin showed that plants bend to the light as if hungry for the sun’s rays, which is exactly what they are. Photosynthesis uses light energy to turn carbon dioxide and water into sugar, so plants need to detect light sources to get food. We now know they do this using phototropins – light receptors in the membranes of cells in the plant’s tip. Phototropins are sensitive to blue light. When they sense it, they initiate a cascade of signals that ends up modulating the activity of the hormone auxin. This causes cells on the shaded side of the stem to elongate, bending the plant towards the light. Plants see red light using receptors in their leaves called phytochromes. A phytochrome is a sort of light-activated switch: when irradiated with red light, it changes its conformation so that it is primed to detect far-red light, and when irradiated by far red it changes back to the form that is sensitive to red light. This has two key functions. It allows
Bacterial meningitis, as used here, refers to illness caused by the bacterium Neisseria meningitidis, also known as meningococcus. These illnesses are often severe and include infections of the lining of the brain and spinal cord (meningitis) and bloodstream infections (bacteremia or septicemia). The bacteria are spread through the exchange of respiratory and throat secretions such as saliva (for example, by living in close quarters or kissing). Bacterial meningitis can be treated with antibiotics, but quick medical attention is extremely important. Keeping up to date with recommended vaccines is the best defense against meningococcal disease.

The extrapolated incidence rate of meningitis for Belgium is 951 cases. Bacterial meningitis affects over 4,000 people and causes 500 deaths in Belgium each year, including about 3,000 cases of pneumococcal meningitis. Meningococcal meningitis infects about 1,600 people in the U.S. each year.
A fun craft that helps children remember that God keeps His promises. Noah, Promises, Protection, Rainbow Approximately 10 minutes Using paper plates cut out arches that resemble the shape of a rainbow. With a black marker write, "God keeps His promises." Make one for each child in class. WHAT YOU WILL DO: The children will take the paper plate arches and color them all the beautiful colors of the rainbow. Instruct them to take one color at a time and follow the shape of the arch. It will be helpful to complete a craft along with the children so they can have an idea of how it should look. Once completed tell the children that they can take the rainbows home and hang them on the refrigerator or on the wall in their rooms. They should look at the craft each day and remember that God keeps His promises. You can easily turn this craft into a mobile by printing out a set of mobile pictures on card stock for each student. Have the kids attach one picture under each leg of the rainbow with yarn. WHAT YOU WILL SAY: As the kids work on their projects ask the following questions: 1. Have you ever seen a rainbow? 2. Do you remember the first time you saw a rainbow? 3. How do you think Noah felt the first time he saw God's rainbow?
S'Albufera, the largest and most important wetland area in the Balearics, is a former lagoon separated from the sea by a belt of dunes, which for many centuries – but especially in the last two as a result of human influence – has filled up with sediments, converting it into an extensive flood plain. The Natural Park affords protection to some 1,708 hectares of marshes and dunes. S'Albufera traces its origins back some 18 million years, but the present wetland was formed less than 100,000 years ago. The current sea dunes are even more recent, being around 10,000 years old.

The basis of S'Albufera's ecological richness is water. The virtually permanent inundation of much of the Natural Park provides favourable conditions for vegetation, whose growth and variety depend on the depth of water, proximity of the sea and type of terrain. The range of plant species gives cover and food to a multitude of animals, which in turn are food for many more. Thanks to the abundance of water, the diversity of living organisms (known to scientists as biodiversity) is very high; indeed, S'Albufera's suite of ecosystems supports the greatest biodiversity of any site in the Balearics.

S'Albufera derives a large part of its water from rain falling on some 640 square kilometres of north and central Mallorca, by way of seasonal streams ("torrents") and springs from subterranean aquifers, known as "ullals". A relatively small amount of seawater intrusion in summer nevertheless has a particular effect on the vegetation and fauna.

The biological description of the vegetation must begin with the dominant reed (Phragmites australis), saw-sedge (Cladium mariscus) and reedmace (Typha latifolia), large emergent plants growing in the flooded areas. Also important are the species which live submerged in the canals, small lagoons (known as 'llisers') and flooded marshes. Among the most notable we may find fennel pondweed (Potamogeton pectinatus), spineless hornwort (Ceratophyllum submersum) and duckweeds (Lemna sp.). The more brackish areas support rushes (Juncus species) and glassworts (Salicornia and Arthrocnemum species). The main trees are white poplar (Populus alba), elm (Ulmus minor) and tamarisk (Tamarix africana).

We must not overlook the wide variety of fungi recorded: 66 species so far. One of these, the toadstool Psathyrella halofila, was discovered new to science in 1992 and is still only known from S'Albufera. We can also note the wealth of fish: 29 species, the majority marine in origin. The most numerous are the eel (Anguilla anguilla) and a variety of mullet species. Among the amphibians the marsh frog (Rana perezi) population stands out, and reptiles include the water snake (Natrix maura) and European pond terrapin (Emys orbicularis). The most abundant mammals are the rodents (rats and mice) and bats (8 species), including important rarities such as the barbastelle bat (Barbastella barbastellus).

The number and diversity of invertebrates is enormous. The most notable groups are the dragonflies, flies (including endemic species), spiders and, above all, the moths – of which more than 300 species are currently known. However, the most celebrated and appreciated group is the birds, which fly effortlessly between marshlands separated by hundreds or thousands of kilometres and find food and shelter amongst the lagoons and canals.
S'Albufera is the only site in the archipelago where over two-thirds of the total number of species recorded in the Balearics have been seen: 271 different species. The 61 species breeding in the Park comprise both sedentary species (remaining throughout the year) and summer visitors which migrate south once breeding is over. A third group comprises visitors from the north which come for the coldest months of winter: large flocks of ducks (shoveler, wigeon, teal...), a range of heron species, starlings... Every winter the numbers of these main species comfortably exceed 10,000 individuals. Migrants are species which visit the Park in the course of their journeys, remaining in transit for just a few days. They include substantial numbers of garganey, ruff and other waders, hirundines... Lastly there are the vagrants, or occasional visitors, such as cranes, glossy ibis or spoonbills. The attached list gives the most interesting species for visitors and for nature conservation.

The Balearic Government declared S'Albufera a Natural Park on 28th February 1988, making it the first protected natural area in the Balearics. The declaration provides for the conservation and restoration of the Park's natural and cultural values, the promotion of educational and scientific activities and of contact between man and nature, as well as the Park's harmonisation with the local and Mallorca-wide socioeconomic contexts, with the conservation of nature as its principal function.

S'Albufera de Mallorca: Special Protection Area for Birds

In 1979, the European Commission adopted Directive 79/409/EEC on the conservation of wild birds. Based on the premise that birds are a Europe-wide heritage shared by all, the Directive sets out to promote the conservation and suitable management of all wild birds living within the European Community. Within it, protection measures are defined and restrictions applied to quarry species and the sale of wild birds. In addition, the Directive identifies habitat protection as a prerequisite for species protection. At such sites, known as Special Protection Areas for Birds (SPAs), measures are adopted to avoid any habitat deterioration or other disturbance which may affect the birds. S'Albufera has been an SPA from the moment Spain became a member of the European Community.

S'Albufera de Mallorca and the Ramsar Convention

In December 1989 the Council of Ministers registered S'Albufera de Mallorca in the list of the Convention on Wetlands of International Importance (with special reference to water birds), better known as the Ramsar Convention (Iran, 1971). The governments which ratified it committed themselves to promote the protection and balanced use of wetlands.

Visiting hours at the Park are from 09.00h to 18.00h between 1st April and 30th September, and from 09.00h to 17.00h between 1st October and 31st March. A VISITING PERMIT IS REQUIRED, which can be obtained (FREE) at the Reception Centre (open 09.00h to 16.00h). For group visits (more than 15 people) a special permit is required and must be applied for in advance; please enquire at the Reception Centre (open 09.00h to 16.00h - Tel.: +34 971 89 22 50). Entry to the Park is restricted to groups of under 30 people at all times. ACCESS TO THE PARK IS ON FOOT OR BY BICYCLE. Cars can be parked in the side-streets of residential areas adjacent to the park entrance or in the dedicated parking area opposite the Hotel Parc Natural.
People with mobility problems should seek special access arrangements by telephoning or faxing the Park (+34 971 89 22 50 - 9 to 16h only - Fax: +34 971 89 21 58).

Respect nature and the values which have made this protected area possible. The gathering of flowers, plants, animals or their remains is not permitted.
· Always move around using the paths indicated, ride bicycles at slow speed and respect the existing signposting.
· Bicycles with more than two wheels are not permitted.
· Respect the Park's visiting hours.
· Noise disturbs animals and the other visitors. MOVE AROUND IN SILENCE.
· It is not permitted to eat in the hides or to have picnics in the Park. In all cases, occupy the tables at Sa Roca for brief periods only.
· Sporting activities are not permitted in the Park (jogging, horse riding, mountain biking, etc.).
· Domestic animals (especially dogs) frighten the fauna. They are not permitted in the Park.
· In the case of a breach of regulations, Park personnel may revoke the visiting permit.
· Share in the conservation of the Park by making known to us any suggestions you have for the improvement of this protected natural area.

S'ALBUFERA GEOLOGY... long ago

S'Albufera is one of the most striking geomorphological landscapes of Majorca, its formation being a consequence of the geological processes which created the island. The emergence of Majorca as an island is relatively recent in geological terms, dating from the Upper Tertiary Era about 18 million years ago. Since then the coastline has changed repeatedly, owing to several periods of sea-level fluctuation. S'Albufera is one of the areas affected by these processes.

In the Miocene, one of the periods of the Tertiary Era, the whole plain of Sa Pobla was flooded due to a rise in sea level. Coral reefs, similar to those in the Indian and Pacific Oceans, developed in these shallow marine waters. A few million years later the Straits of Gibraltar closed and the Mediterranean sea level fell rapidly due to evaporation. The Mediterranean was then reduced to a series of salt lakes, but by the end of the Tertiary Era, in the Pliocene, Gibraltar opened once again, allowing the waters of the Atlantic Ocean to flood the low-lying Mediterranean area. The formation of small brackish lagoons in the plain of Sa Pobla and Inca dates from these times. These geological processes, with their sedimentary deposits of clay, result in a coastal lagoon having a relatively brief life (in geological, though not in human, terms) due to desiccation. If it had not been for the continual subsidence in this area during the Miocene and Pleistocene, the coastal lagoon of S'Albufera would have disappeared.

Glaciation in the Quaternary Era caused great fluctuations in sea level, alternately flooding and drying S'Albufera and other areas of the plain of Majorca. About 100,000 years ago (in the Riss Glacial Period) the formation of a sandy coast gave the first indication of the emergence of the current S'Albufera. A study of the sedimentation of S'Albufera has allowed geologists to determine that there were epochs in which salt water predominated, and other periods (of maybe centuries) when the water was almost, or even completely, fresh. During these fresh-water periods, layers of peat were deposited. These variations were a consequence of slight changes in sea level, as well as of the increase of fresh water flowing into S'Albufera from streams or springs on the plain of Sa Pobla.
The landscape of S'Albufera has varied considerably at different times. At the peak of higher water levels, in the last 10,000 years, it reached the Roman amphitheatre at Alcudia, the whole side of the Murterar and beyond Son Fe, to where the Alcudia road runs nowadays; to the south it reached the Pont Gros and the Punta de S'Amarador, and to the east up to Ca N'Eixut and Son Bosc. During Roman times the water level was about 2-3 metres higher than today. S'Albufera then became a succession of relatively shallow ponds linked by canals. The pond called L'Estany dels Ponts had an approximate depth of 7-8 metres. Historical documents record the condition of S'Albufera in more recent years, such as Berard's description (1789) or the one made by the engineer A. Lopez (1859), from which we have taken the information for a reconstruction of the 19th century.

Water is the basic element of a landscape and ecosystem such as S'Albufera. This is why it merits a chapter in itself: it determines everything else. There are three sources of water: surface flow from the island's countryside, underground springs and seawater. S'Albufera is the delta area of a large drainage basin. The rain that falls in this basin passes along various routes: sinking into the substrata, evaporating, nourishing the vegetation or swelling the streams (Muro and Sant Miquel) which flow into S'Albufera. These two streams carry 20-40 cubic hectometres per year (the Sant Miquel 16 and the Muro 4-8). The bigger stream, the Sant Miquel, originates in the springs of Ses Ufanes de Gabellí, which flow periodically from a point about 10 km NW of S'Albufera.

In fact, only a limited amount of water from both streams enters S'Albufera. During the last century embankments were raised along these streams, and their flow is thus directed by canalization to the sea so that it does not flood the farmland. You can see this canal system on the map. One area, called Es Forcadet, is allowed to flood: a triangular area before the two streams join the Gran Canal. If the flooding of the streams coincides with a high tide, two lateral canals at the mouth of the Gran Canal (called Sol and Siurana) cope with the overflow, channelling it to S'Albufera. There is another conduit at the same latitude as the Punta des Vent, which passes under the Es Mig road via a floodgate into the Canal Loco and the Colombar. There are other floodgates and conduits at different points along the streams. In some places the embankments are in a bad condition, allowing water to flood arable land, which greatly annoys the farmers.

Some fresh (or slightly brackish) water comes from underground. An unknown number of underground springs flow into the farmland, mainly in the south. It is estimated that water from this source totals between 25 and 30 cubic hectometres per annum. It is mainly this water which flows through the canals towards two outlets: the Pont dels Anglesos (the Bridge of the English), where the Sol and Siurana flow into the Gran Canal, and L'Estany dels Ponts, which flows out mainly through the Canal Ferragut. To allow the water from the SW to cross the Gran Canal there is a series of conduits from the Canal del Sol to the Canal Siurana, passing underneath the two tracks and the Gran Canal. These conduits have been in operation for more than a century. The outlet of one of them can be seen as a powerful jet into the Siurana from the Pont de Sa Roca. Seawater flows in through S'Oberta at high tide and into L'Estany dels Ponts.
The balance between salt and fresh water is critically important for the vegetation and determines the entire ecosystem of the area. There are problems of pollution associated with the agricultural use of fertilizers and pesticides. Fortunately the waters from nearby built-up areas that were once a source of pollution are now almost totally treated.

The plant life of S'Albufera is determined by two decisive elements, water and salt, ecological factors of obvious importance. Human influence has also had a discernible effect on the variety and evolution of the flora of S'Albufera. Environmental factors (climate, soil, etc.) act together and, in the case of S'Albufera, reinforce each other: the winter and spring rains coincide with increased flows from subterranean sources and from the springs around S'Albufera. In summer the lack of rainfall and the high temperatures increase evaporation and therefore the salt concentration in many places. The human influence is important: desiccation, the construction of canals and embankments, the introduction and conservation of species, and the cultivation of farmland and its later abandonment are all factors directly influencing plant life. There is another factor related to man which has influenced the ecosystem immensely, namely fire. Until recently the reeds were usually burned off after harvesting, and sometimes the fires have burnt right across S'Albufera. The long-term effects included the killing of many trees such as tamarisks and elms. Although fire can be a useful tool, its long-term effects from a broader perspective have been very damaging when abused.

To present the plant life of S'Albufera we have grouped together the plants that share the same habitat. We will follow an imaginary journey from the beach into the interior and describe how different communities of plants make a home in a variety of circumstances.

THE BEACH AND DUNES

The coast of S'Albufera is sandy, with a narrow beach and a series of dunes. The sand is dry and loose, allowing water to filter through; it is easily moved by the winds and poor in nutrients for plants. The first plant we find on the beach is a wrongly named alga, actually a flowering plant: the Posidonia. It is a species which forms submarine posidonia prairies, washing up onto the beach when it dies. It is very fibrous and ends up forming earthen-coloured balls which wash up onto the beach, always a fascinating discovery for children playing there. Nothing much grows on the beach itself, as the breakers make it impossible for anything to grow. A few metres from the shore we find the long, yellow leaves of marram grass and other graminaceous species, such as Elymus farctus and Sporobolus arenarius. Growing nearby we can observe two different species of short herbs, Medicago marina and Lotus creticus, with bent stems and compound leaves; at the beginning of spring these bloom with spectacular yellow flowers. The most beautiful flower on the beach blossoms in summer: the large, white, beautifully scented sea daffodil. The best known, if not the best loved, plant, remembered by bare-footed bathers, is the sea holly, an umbellifer, small but with powerful thorns. Many insects are attracted to its blue-petalled flowers. Also abundant are the sea rocket and a kind of stock, Matthiola sinuata, with big purple flowers. These are the salt-resistant plants which inhabit the first crest of the dunes. Further inland we start to find woody plants which bind the sand with their roots, spreading widely to gather the water they need.
These underground networks can be clearly seen where the dunes are affected by erosion or the passage of people. These plants play an important role in binding the sand into more permanent dunes. One unique local plant, not found anywhere else in the Balearic Islands, is the prickly juniper. Beyond the first juniper bushes we arrive at the pine trees, with umbrella pines, mastic trees, rosemary (the typical scent of the Mediterranean), heather (Erica multiflora, with spectacular sprays of pink flowers in autumn), mock privet, Mediterranean mezereon, asparagus, etc. Lianas grow amongst the bushes, and masses of Balearic sarsaparilla form a spectacular and impenetrable tangle unknown elsewhere on the island. Also to be found are honeysuckle and the tiny wild madder with its bitter leaves. These dunes are also notable for a sort of thyme found only in Majorca and Menorca, Thymelaea hirsuta. This is a bush of interwoven, hairy leaves which is extremely rare. In spring a diversity of orchids, with tiny and beautiful flowers, bloom amongst the pines.

THE PLANT LIFE OF THE WETLAND

Here, just behind the dunes, where there is a clayey, often waterlogged subsoil, we find the typical wetland flora. The plants growing nearest to the seashore are the ones best adapted to a salty environment. The most important is the Salicornia, with jointed fleshy leaves, sometimes reddish in colour. Besides this, sea purslane is often to be found, identified by its opposite, silvery leaves. In areas that are often flooded but have low salinity we find the rushes. There are several kinds of rushes, always with tall, spiky stems. Their leaves are scarcely visible, although they perform a particularly vital function: they accumulate the salt absorbed by the plant, eventually dropping off and ridding the plant of excess sodium chloride. This is a densely vegetated area, forming a mosaic of the various species. This patchiness is due to slight topographical variations, which cause changes in humidity, evaporation, salt accumulation, etc.

In the fossil dunes (ancient dunes) we find a particular plant community, formed by groups of Scirpus holoschoenus (from the rush family), Plantago coronopus (from the plantain family) and pine groves of varying sizes. Here too we find the rare blooms of orchids such as the mirror orchid; for a few weeks the dunes are covered by a beautiful carpet of thousands of these tiny flowers. Nearby grows Orchis palustris. Orchids and other rare plants are protected, and to pick or uproot them is illegal, as well as immoral.

Areas which are permanently flooded by fresh water are covered with a thick mass of reeds and Cladium mariscus (a plant with sharp, ribbon-like leaves from the sedge family). These two plants totally dominate the landscape of S'Albufera and form the basis of the ecosystem; by their dominance they actually limit the diversity and number of animal species, so it is necessary to curb their growth. Frequently, especially beside the roads, bellbines, with grouped leaves and white flowers, are entwined around these plants. There are aquatic plants too, Potamogeton pectinatus being probably the most numerous, identifiable by its hair-like leaves. Ceratophyllum demersum is an attractive plant with leaves growing vertically from small, bright red stems. Chara, water-cress and Zannichellia palustris, with tiny leaves, are often to be seen. In fresher, calmer waters the surface is often covered with a thick soup of duckweed.
The greater bulrush or cat's-tail (still gathered for use in handicrafts), the branched bur-reed and the lesser bulrush grow along the canals. Smooth-leaved elms and poplars have been planted along the embankments and roadsides, forming small and rather unusual stands of covering, deciduous woodland. With them also grow the hawthorn and the bramble, which bears the delicious fruit so loved by walkers and birds; the periwinkle, lilac-coloured and windmill-shaped; and the creeping cinquefoil, with yellow flowers and palmate leaves. Here and there stand Tamarix africana (of the tamarisk family) which have survived the fires. Biel Perelló & Jeroen Veraart
Related Topics:
1. Energy is a physical quantity that follows precise natural laws.
2. Physical processes on Earth are the result of energy flow through the Earth system.
4. Various sources of energy are used to power human activities.
5. Energy decisions are influenced by economic, political, environmental, and social factors.
6. The amount of energy used by human society depends on many factors.
7. The quality of life of individuals and societies is affected by energy choices.
Associated Grade Levels: 7-8, 9-10, 11-12, Public
1.5 Energy comes in different forms and can be divided into categories.
2.7 The effects of changes in Earth's energy system are often not immediately apparent.
4.5 Humans generate electricity in multiple ways.
5.1 Decisions concerning the use of energy resources are made at many levels.
5.2 Energy infrastructure has inertia.
5.3 Energy decisions can be made using a systems-based approach.
5.4 Energy decisions are influenced by economic factors.
5.5 Energy decisions are influenced by political factors.
5.6 Energy decisions are influenced by environmental factors.
5.7 Energy decisions are influenced by social factors.
6.4 Earth has limited energy resources.
6.5 Social and technological innovation affects the amount of energy used by human society.
6.6 Behavior and design affect the amount of energy used by human society.
6.7 Products and services carry with them embedded energy.
6.8 Amount of energy used can be calculated and monitored.
7.1 Economic security is impacted by energy choices.
7.2 National security is impacted by energy choices.
7.3 Environmental quality is impacted by energy choices.
7.4 Increasing demand for and limited supplies of fossil fuels affects quality of life.
7.5 Access to energy resources affects quality of life.
7.6 Some populations are more vulnerable to impacts of energy choices than others.
Nuclear energy is one of the alternative energy sources being developed today because of the positive benefits it offers, especially to the environment. This energy is produced in two ways -- when atoms come together in a fusion process and when atoms split apart in a fission process. The discovery Enrico Fermi is considered a major figure in the discovery of nuclear energy. This physicist, born in Rome, Italy, was the first scientist to split the atom, and his research later led to nuclear power generation. Together with Leo Szilard, Fermi designed the first nuclear reactor, which produced a controlled nuclear chain reaction. Fermi obtained his degree from the University of Pisa in 1922, after which he worked as a lecturer at the University of Florence for a period of two years. He later moved to Rome, where he taught theoretical physics. It was in 1934 that Fermi achieved success with his theory of beta ray emission in radioactivity. He then pursued further study on the creation of artificially radioactive isotopes through neutron bombardment. His research on the bombardment of uranium with slow neutrons laid the groundwork for what we now call atomic fission. This led Fermi to continue his research together with Leo Szilard. Together, they worked on building an atomic pile which could produce a controlled release of nuclear energy, initially at Columbia and later at the University of Chicago. They completed this project in 1942. For his research in nuclear physics, Fermi received the Nobel Prize for Physics in 1938. By 1945, Fermi worked as a professor at the Institute of Nuclear Studies in Chicago. That was the same year he obtained his American citizenship. Challenges and rebirth The nuclear power industry suffered some setbacks from the late 1970s to the year 2002. Few new reactors were ordered, even though capacity and output rose by about 60 percent owing to improved load factors. From the mid-1980s, nuclear energy's share in electricity output worldwide remained at the same level of 16 to 17 percent. From the 1970s, many orders for reactors were also cancelled, resulting in a drop in uranium prices and a rise in secondary supplies. What happened next was that oil companies that had ventured into the uranium field backed out. Fortunately, the potential of nuclear power gained new attention in the new century as worldwide demand for electricity, notably in developing countries, is forecast to rise. Other factors that led to the renewed harnessing of this energy source are the importance of energy security and the need to limit global warming by reducing carbon dioxide emissions. With these concerns came the availability of newer nuclear power reactors, which are now used in different parts of the world including Finland, France and the U.S. Nuclear energy has been used since 1953 and it has been instrumental in producing electricity since 1955. Currently, 16 percent of the world's electricity is produced through nuclear power. The U.S. is a major producer of nuclear power, with 103 power plants that generate electricity spread over 31 states. France, meanwhile, is the top user of nuclear power, followed by Lithuania, Belgium, the Slovak Republic and Ukraine.
Northwest Exposures: A Geologic Story of the Northwest The tale of the Northwest's geology began more than two billion years ago when an ancient continent split, creating oceanfront property in what is now western Idaho. Pacific islands mashed into that coastline, making large parts of Washington and Oregon. These events were followed by monstrous volcanic eruptions, catastrophic ice age floods, and mountains rising to an accompaniment of earthquakes. (Includes California, Idaho, Montana, Nevada, Oregon, Washington, Wyoming and southernmost British Columbia.) Under Michigan: The Story of Michigan's Rocks and Fossils Most people recognize Michigan by its mitten-shaped Lower Peninsula and the Great Lakes embracing the state. Underneath the earth's surface, however, is equally distinctive evidence of an exciting history. Michigan rests on sedimentary rocks that reach down into the earth's crust more than fourteen thousand feet—a depth three-and-a-half times deeper than the Grand Canyon. Within these layers of rock rest all sorts of ancient fossils and minerals that date back to the eras when tropical seas spread across Michigan and hot volcanoes flung molten rock into its skies—long before mile-thick glaciers bulldozed over Michigan and plowed through ancient river valleys to form the Great Lakes. Under Michigan is the first book for young readers about the geologic history of the state and the structure scientists call the Michigan Basin. A fun and educational journey, Under Michigan explores Earth's geological past, taking readers far below the familiar sights of Michigan and nearby places to explain the creation of minerals and fossils and show where they can be found in the varying layers of rock. Readers will learn about the hard rock formations surrounding Michigan and also discover the tall mountain ridges hidden at the bottom of the Great Lakes. With beautiful illustrations by author Charles Ferguson Barker, a glossary of scientific terms, and charming pages for keeping field notes, Under Michigan is a wonderful resource for young explorers to use at home, in school, or on a trip across Michigan.
- An element added during foundation construction.
- Designed for bending, which helps maintain structural integrity during an earthquake.
- Used in conditions where the surface soil has less load-bearing capacity than the anticipated design loads.
- Made out of reinforced concrete.
What is a grade beam? Grade beams are an essential part of foundation construction. They are a type of foundation system typically used to distribute weight within a foundation when the soils underneath do not provide the appropriate support. It is important to use grade beams in wall construction in cases such as when the load-bearing capacity of the surface soil is less than the amount anticipated in the design. How is it made? A grade beam is a concrete beam that is reinforced in order to adequately shift the weight from a bearing wall into either caissons or pile caps, which are spaced foundations. A grade beam is typically used in conjunction with floor joists to support bearing walls at or near the ground. Grade beams are usually designed to minimize deflection rather than to transfer loads directly to the ground below. This means that instead of passing the weight of the structure straight down into the soil beneath the foundation, the beams carry that weight and pass it along to the areas better set up to support the loads (such as where soils are more compact). While a grade beam cannot be added to a foundation after the fact, it is useful to keep in mind if your foundation needs replacement. Some foundation issues cause such damage to a structure's foundation that it needs to be rebuilt to restore building strength. This is a process to keep in mind when that happens; it's an easy way to make the best of a bad situation and ensure structural integrity in the future.
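Because grade beams are sized to limit deflection between their pile or caisson supports, the idea can be illustrated with a quick serviceability check in code. The Python sketch below is only an illustration under simplifying assumptions (a simply supported span between two pile caps, a uniform wall load, and the standard 5wL^4/384EI deflection formula); every number is an invented placeholder, not a design value.

# Illustrative midspan deflection check for a grade beam spanning between
# two pile caps, treated as simply supported under a uniform wall load.
# All numbers are made-up placeholders, not design values.

def midspan_deflection(w_line_load, span, e_modulus, inertia):
    """Return midspan deflection (m): delta = 5*w*L^4 / (384*E*I)."""
    return 5 * w_line_load * span**4 / (384 * e_modulus * inertia)

def deflection_ok(delta, span, limit_ratio=360):
    """Common serviceability check: deflection no worse than span/limit_ratio."""
    return delta <= span / limit_ratio

if __name__ == "__main__":
    w = 25_000.0          # uniform wall load, N/m (hypothetical)
    L = 4.0               # clear span between pile caps, m
    E = 30e9              # concrete modulus of elasticity, Pa (typical order)
    b, h = 0.4, 0.6       # beam width and depth, m
    I = b * h**3 / 12     # second moment of area of a rectangular section, m^4

    delta = midspan_deflection(w, L, E, I)
    print(f"midspan deflection: {delta*1000:.2f} mm, ok: {deflection_ok(delta, L)}")

A real design would of course also check bending, shear and the capacity of the supports, and would follow the governing building code rather than this single rule of thumb.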
There are several questions that come up when learning about soldering fluxes, like: what does a flux do? How does a flux work? Why do I need a flux? This post explores these questions. A flux removes the oxides of a base metal, cleaning its surface to better enable the formation of an excellent solder joint. But how is a metal oxide formed? A metal oxide forms because a metal atom likes to be surrounded on all sides. Metals exert a surface energy force, which attracts oxygen atoms to the surface of the metal, causing it to form a metal oxide and allowing it to be surrounded. This is ideal for the metal, but not ideal for soldering. These oxides inhibit metals from forming high-quality solder joints. As a result, a flux needs to do several things to be effective. These three things are: - remove oxides - prevent re-oxidation - displace air Rosin is good at preventing re-oxidation; this is why a lot of fluxes are rosin based. Fluxes don't necessarily have to be rosin based, but the rosin does help in keeping the surface from re-oxidizing. Fluxes also keep the air from getting back down to the surface of the metal. The flux forces the air off of the air-to-metal interface and allows the solder to flow in behind it. This happens when the surface energy is greater than the surface tension of the solder. However, when the surface tension is greater than the surface energy, the solder will ball up (non-wetting). The flux allows the surface energy to be greater than the surface tension. This is why you need a flux to form a good solder joint when reflowing in air. If you have any additional questions please feel free to email me or [email protected].
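To make the wetting condition in the last paragraph concrete, here is a toy Python sketch that simply encodes the stated rule of thumb: solder spreads when the pad's surface energy exceeds the solder's surface tension. The numbers are illustrative orders of magnitude only, not measured values.

# Toy encoding of the wetting rule of thumb described above: the solder spreads
# when the substrate's surface energy exceeds the molten solder's surface
# tension; otherwise it balls up. Numbers are illustrative only.

def wets(surface_energy_mn_per_m, solder_surface_tension_mn_per_m):
    """Return True if the simple wetting condition is met."""
    return surface_energy_mn_per_m > solder_surface_tension_mn_per_m

solder_tension = 470        # mN/m, rough order of magnitude for molten solder
fluxed_pad_energy = 1800    # clean, fluxed copper: high surface energy (illustrative)
oxidised_pad_energy = 300   # oxidised pad: low surface energy (illustrative)

print("fluxed pad wets:", wets(fluxed_pad_energy, solder_tension))      # True
print("oxidised pad wets:", wets(oxidised_pad_energy, solder_tension))  # False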
Ventilating Your Nottingham Property to Prevent Condensation Condensation occurs when moist air comes into contact with air, or a surface, which is at a lower temperature. Air contains water vapour in varying quantities; its capacity to do so is related to its temperature – warm air holds more moisture than cold air. When moist air comes into contact with a colder surface, the air condenses some of its moisture onto that surface. The air in our homes contains water vapour from cooking, washing, drying clothes and other activities. During cold weather this warm, moist air travels to cooler parts of our homes. The excess water vapour in the air then deposits on cold, impermeable surfaces such as windows and, in some cases, walls. This is called condensation. Condensation can also occur in less visible places like behind furniture, blocked-in fireplaces and underneath laminate flooring. CAUSES OF CONDENSATION In order to make homes more energy efficient, they have been fitted with insulation and made air-tight with double glazing. This means that the moist air generated by everyday activities (as detailed above) is unable to escape, which leads to condensation. The conditions that determine condensation are as follows: - The level of moisture in the air - The temperature of the air in your home - The surface temperature of the windows, walls and other surfaces DO I HAVE CONDENSATION? Condensation is generally noticeable when it forms on non-absorbent surfaces (i.e. windows or tiles) but it can form on any surface. You may not notice condensation until you can see mould growth or notice materials rotting. Any room in a house that is colder will act as a magnet for condensation. The following symptoms suggest you have condensation: - Steamed-up windows and puddles of water on the window sills - Walls that are damp to touch - Peeling wallpaper - Black spots of mould on walls and ceilings – particularly common in bathrooms - A musty smell PROBLEMS WITH CONDENSATION Condensation can lead to a whole range of damp and mould problems. Some of these cause cosmetic damage, some will incur financial costs and others can actually be harmful to the health of you and your family. Condensation can lead to mould, which in turn produces mould spores that easily become airborne. When inhaled, these microscopic spores can trigger a range of respiratory conditions such as asthma, dust allergies and hayfever. This is particularly harmful to young children and older people. Condensation is a problem when there is inadequate ventilation, which is also associated with headaches, tiredness and dizziness, and in extreme cases carbon monoxide poisoning. There are increasingly worrying reports about so-called Toxic House Syndrome, which is when a person's health deteriorates due to the air quality in their home. It is related to the large number of potentially harmful pollutants, like carbon monoxide and dander, in our homes that increase the risk of heart disease and cancer. Proper ventilation is the main way that exposure to these airborne pollutants can be dramatically reduced. HOW DO I CONTROL CONDENSATION? Condensation can be controlled through ventilation. There are a variety of different ventilation methods and systems that can focus on single rooms or be installed to ventilate whole homes.
You want to achieve these things with ventilation: - No more condensation - No more damp, stale air - Clean, filtered fresh air As a good starting point, it makes sense to open windows to allow cross ventilation and take measures to create less moisture. You can put lids on pans when cooking, add cold water before hot water when running a bath, dry clothes outdoors or in a room with a humidity-controlled extractor fan, and avoid drying damp clothes on warm radiators. You should also regulate heating so that it is constantly on at a lower heat. By preventing rapid changes in temperature, you will help reduce condensation. Despite taking these initial steps, you are likely to find that condensation is still an issue, in particular in colder months. Correctly installed ventilation systems are the most effective way of stopping condensation. The issue of heat loss is one that is inherent with ventilation and must be addressed by the system. Common ventilation systems include passive ventilation systems, humidity-controlled extractor fans, passive stack ventilation, and heat recovery units.
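The three conditions listed under "Causes of Condensation" (moisture level, air temperature, surface temperature) can be turned into a rough condensation check. The Python sketch below uses the widely quoted Magnus approximation for dew point; it is only an illustration, and the example temperatures and humidities are invented.

# A minimal sketch of how moisture level, air temperature and surface
# temperature combine to determine condensation risk. Uses the Magnus
# approximation with the commonly quoted coefficients a = 17.62, b = 243.12.

import math

def dew_point_c(air_temp_c, relative_humidity_pct):
    """Approximate dew point in degrees C from air temperature and relative humidity."""
    a, b = 17.62, 243.12
    gamma = (a * air_temp_c) / (b + air_temp_c) + math.log(relative_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

def condensation_likely(air_temp_c, relative_humidity_pct, surface_temp_c):
    """Condensation forms when a surface is at or below the air's dew point."""
    return surface_temp_c <= dew_point_c(air_temp_c, relative_humidity_pct)

# Example: warm, moist indoor air against a cold single-glazed window.
print(round(dew_point_c(21, 65), 1))        # about 14.2 degrees C
print(condensation_likely(21, 65, 8))       # True  -> the window will stream
print(condensation_likely(21, 45, 12))      # False -> drier air, warmer pane

In other words, you can reduce condensation either by lowering the moisture in the air (ventilation, less moisture generation) or by keeping surfaces warmer than the dew point (steady heating, insulation), which is exactly what the advice above aims to do.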
Steve has a string of lowercase characters in range ascii['a'..'z']. He wants to reduce the string to its shortest length by doing a series of operations. In each operation he selects a pair of adjacent lowercase letters that match, and he deletes them. For instance, the string aab could be shortened to b in one operation. Steve's task is to delete as many characters as possible using this method and print the resulting string. If the final string is empty, print Empty String.
Complete the superReducedString function in the editor below. It should return the super reduced string or Empty String if the final string is empty. superReducedString has the following parameter(s):
- s: a string to reduce
The input is a single string, s. If the final string is empty, print Empty String; otherwise, print the final non-reducible string.
Sample Input 0
aaabccddd
Sample Output 0
abd
Steve performs the following sequence of operations to get the final string: aaabccddd → abccddd → abddd → abd
Sample Input 1
aa
Sample Output 1
Empty String
aa → Empty String
Sample Input 2
baab
Sample Output 2
Empty String
baab → bb → Empty String
Solution in Python

import re

def superReducedString(s):
    # Repeatedly delete every adjacent pair of identical characters
    # until no such pair remains.
    while re.search(r"(\w)\1", s):
        s = re.sub(r"(\w)\1", "", s)
    return s or "Empty String"

print(superReducedString(input()))

The question says "In each operation, he selects a pair of adjacent lowercase letters that match, and he deletes them," which means we have to remove any two adjacent repeating characters. A while loop is used because each pass of re.sub can expose new adjacent pairs, so the substitution has to be repeated until none remain. re.search only checks whether any repeating pair is left; re.sub then removes all of the (non-overlapping) pairs found in that pass.
Example: let our string be s = "aabbabb". Loop 1 matches "aa", "bb", "bb", giving the new string "a". There are no more repeating pairs, so the output is "a".
Example: let our string be s = "abbabb". Loop 1 matches "bb" and "bb", giving "aa". Loop 2 matches "aa", giving "". There are no more repeating pairs, so the output is "Empty String".
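The regex approach above re-scans the string on every pass. An equivalent single-pass alternative, sketched below, uses a stack: each character is pushed unless it matches the one on top, in which case the pair cancels. This is not the solution given above, just another common way to solve the same problem.

def super_reduced_string(s):
    stack = []
    for ch in s:
        if stack and stack[-1] == ch:
            stack.pop()          # adjacent matching pair -> delete both
        else:
            stack.append(ch)
    return "".join(stack) or "Empty String"

print(super_reduced_string("aaabccddd"))  # abd
print(super_reduced_string("baab"))       # Empty String

Because every character is pushed and popped at most once, this version runs in linear time even on inputs where the regex version would need many passes.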
3. Nowadays, various types of computer are available. These computers differ from each other on the basis of their purpose, capacity, size, working principle, brand, etc. The features of computers vary depending on the nature of the work they perform.
4. On the basis of their purpose, computers are broadly categorized into two types:
- General Purpose Computer
- Special Purpose Computer
5. General-purpose computers are designed to perform more than one task. The user can load programs into the computer as required to perform different tasks. Desktop computers, laptops, notebooks, etc. are examples of general-purpose digital computers.
6. Special-purpose computers are designed to perform a single specific task. The program is loaded at manufacturing time in this type of digital computer and cannot be changed by the user. Digital thermometers, digital watches, self-driven vehicles, washing machines, digital televisions, etc. are examples of special-purpose computers.
7. On the basis of the type of data they operate on, computers are broadly categorized into three types.
8. Analog computers are special-purpose computers which can measure continuously changing data such as pressure, temperature, voltage, etc. They can perform a single task. Examples include the speedometer, which displays the speed of a vehicle, the voltmeter, the analog watch and the seismograph. The features of analog computers are given below:
- Cheaper than other devices
- Work on continuous data
- Low storage capacity
- Work in real time
- Give output in the form of graphs and signals
9. Digital computers are general-purpose computers which solve problems by computing discrete data. They work on digital values, the binary digits (0 or 1), and can perform many tasks according to user requirements. Computers in schools, homes and offices are examples of digital computers. Features of digital computers:
- Work on discontinuous (discrete) data
- Highly accurate and reliable
- Used for general purposes
- Based on discrete data (digits 0 and 1)
10. A computer designed with the combined features of an analog computer and a digital computer is called a hybrid computer. These computers are designed for special purposes. They are used in hospitals for ultrasound, ECG (Electro Cardio Graph) and CT scan (Computed Tomography scan), in aeroplanes for air pressure, temperature, speed and weight, and in scientific labs, ships and large industries. Features of hybrid computers:
- Designed for a special purpose
- Work on both continuous and discrete values
- More complex, with limited storage
11. Analog Computer versus Digital Computer:
1. Analog computers process continuous data; digital computers process discrete data.
2. Analog computers are special-purpose; digital computers are general-purpose.
3. Analog computers are based on analog signals; digital computers are based on discrete values.
4. Analog computers generate analog signals (e.g. an analog watch); digital computers generate digital signals (e.g. a digital watch).
12. On the basis of size and performance, digital computers can be categorized into four types:
13. A microcomputer is also called a PC (Personal Computer) because it is used by a single person at a time. A microprocessor is used as the main processing unit (CPU). The IBM-PC was the first microcomputer, designed by the IBM (International Business Machines) company. Microcomputers are used in homes, schools, colleges, hospitals, offices, etc. for data processing.
14. A minicomputer is more powerful and expensive than a microcomputer but less powerful and costly than a mainframe computer. So the capabilities of a minicomputer lie between those of a microcomputer and a mainframe computer.
Minicomputers are used in scientific research, banking systems, telephone switching, etc. These computers work as multiprocessing systems, and about two hundred PCs can be connected to the network. The PDP-1, designed by DEC (Digital Equipment Corporation) in 1960, was the first minicomputer. Time-sharing, batch processing, online processing, etc. are services provided by minicomputers. The IBM System/3, Honeywell 200, etc. are some examples of minicomputers.
15. Mainframe computers are more powerful, have larger storage capacity and are more expensive than minicomputers, but are less powerful and costly than supercomputers. These computers allow multiple users, have multiple processors and support more than 200 PCs. They are used as servers on the WWW (World Wide Web) and also in large organizations such as banks, telecommunications companies, airlines and universities for large-scale data processing. IBM is the major manufacturer of mainframe computers. An IBM 1401 mainframe computer was brought to Nepal for the first time to process census data. The IBM Z series, System z10 servers, the CDC (Control Data Corporation) 6600, etc. are popular examples of mainframe computers.
16. Supercomputers are the most powerful and most expensive computers and have the highest processing speed of all. They use parallel processing to perform tasks. These computers are mainly used in weather forecasting, nuclear energy research, national security, space-related research, etc. Nowadays, the most powerful supercomputer is Sunway TaihuLight from the National Supercomputing Centre, Wuxi, China. A supercomputer can perform more than one trillion calculations per second. Piz Daint, Tianhe-2, Titan, Sequoia, Cori, ETA-10, etc. are popular examples of supercomputers.
18. A desktop computer is a computer that fits on or under a desk. Desktop computers utilize peripheral devices for interaction, such as a keyboard and mouse for input, and display devices like a monitor, projector, or television. They can have a horizontal or vertical (tower) form factor, or be combined with a monitor to create an all-in-one computer. Unlike a laptop, which is portable, desktop computers are generally made to stay in one location.
19. A laptop, sometimes called a notebook computer by manufacturers, is a battery- or AC-powered personal computer (PC) smaller than a briefcase. A laptop can be easily transported and used in temporary spaces such as on airplanes, in libraries, in temporary offices and at meetings. Laptops combine many of the input/output components and capabilities of a desktop computer into a single unit, including a display screen, small speakers, a keyboard, and a pointing device (such as a touch pad or pointing stick).
20. A computing device that can be easily held in one hand while the other hand is used to operate it is called a handheld computer. The term handheld computer refers to highly portable terminals designed for data collection; in recent years they have commonly been used for part and product management. The size of handheld computers ranges from a credit card to a small notebook computer, and the available features and power generally increase with greater size. Personal digital assistants (PDAs), cellular phones, tablet PCs and portable media players are all considered handheld devices.
Largely identified with Dr. Martin Luther King Jr., the civil rights movement continues to have a lasting impact on American politics and society. Though focused on African Americans, it helped make possible later protests by other groups. As one of the most important movements in modern U.S. history, it has been at the center of a number of historical misconceptions. This book examines some mistaken ideas about the civil rights movement and the truths behind the myths. Each chapter is devoted to a particular historical misconception about the civil rights movement, such as the belief that Southern whites were not civil rights activists or that the movement ended with King. Chapters discuss how the misconception developed and spread, along with what we now believe to be the historical truth and why. Quotations from primary sources provide evidence for the historical facts and fictions, and a selected, general bibliography directs readers to additional sources of information. - Chapters individually discuss misconceptions related to the civil rights movement - Each chapter considers how a historical misconception developed and spread, along with what we now believe to be the truth behind the myth - Quotations from primary source documents provide evidence for the mistaken beliefs and the historical truths - A selected, general bibliography directs users to additional resources
Dr Linda Armbrecht Scientists have discovered 1-million-year-old marine DNA in deep-sea sediments of the Scotia Sea, north of the Antarctic continent, which gives insights into past ocean ecosystem-wide changes, and will help predict how marine life will respond to climate change now and into the future. The Institute for Marine and Antarctic Studies (IMAS) led international study team found the marine ‘sedimentary ancient DNA’ (sedaDNA) in sediment samples collected up to 178 metres below the seafloor, during a 2019 International Ocean Discovery Program (IODP) expedition. “The fragments are the oldest authenticated marine sedaDNA discovered to date – and these have been preserved due to factors like very low temperatures and oxygen concentrations, and an absence of UV radiation,” said Dr Linda Armbrecht, IMAS researcher and lead author of the study published in Nature Communications. “To analyse these fragments, we use a new technique called sedaDNA analysis, which can help us decipher what has lived in the ocean in the past and when, across multiple ice-age cycles. “With this knowledge, we can better predict how marine life around Antarctica will respond to ongoing climate change.” Among the organisms detected in the sediment were diatoms, a type of phytoplankton that are the basis of many marine food webs. Dated back to around 540,000 years ago, the diatom sedaDNA data showed they were consistently abundant during warm climatic periods. Study co-author, Dr Michael Weber from the University of Bonn in Germany, said the last change like this in the Scotia Sea’s food web occurred about 14,500 years ago. “This interesting and important change is associated with a world-wide and rapid increase in sea levels and massive loss of ice in Antarctica due to natural warming – warming that apparently caused an increase in ocean productivity around Antarctica at that time.” The study demonstrates that marine sedaDNA analyses can be expanded to hundreds of thousands of years, opening the pathway to investigating ecosystem-wide ocean shifts and paleo-productivity phases throughout multiple glacial-interglacial cycles. “Antarctica is one of the most vulnerable regions to climate change on Earth, so studying this polar marine ecosystem’s past and present responses to environmental change is a matter of urgency,” Dr Armbrecht said. The international study was funded by the Australia-New Zealand IODP Consortium (ANZIC), Australian Research Council, German Research Foundation, British Natural Environmental Research Council and United States National Science Foundation.
Soil has a key role to play in combatting climate change, feeding 10 billion people in 2050, and preserving and restoring biodiversity. A large percentage of soil is owned by farmers, placing pressure on them to help resolve these major issues. The Soil Navigator assesses the initial capacities of five soil functions within a field: primary productivity, nutrient cycling, water purification and regulation, carbon sequestration and climate regulation, and biodiversity and habitat provision. It is currently only available as a decision-making tool for seven EU countries (Austria, Denmark, France, Germany, Ireland, Italy and Romania), where the research has been conducted. The Soil Navigator decision support system (DSS) was developed in the Horizon 2020 project LANDMARK, in which scientists from 22 partner institutions across 14 European countries developed this assessment tool for policy makers and a policy framework for Brussels.
What are open data? (Sources: Wikipaedia.org; opendatahandbook.org; worldbank.org; image: World Bank Group, cropped.) The Open Knowledge Foundation defines data as open '…if anyone is free to access, use, modify, and share it — subject, at most, to measures that preserve provenance and openness.' The open data movement has roots in open access reforms spanning back to Ancient Greece, and more recently the open science movement which started in the 1950s, but it only manifested in a modern technological sense in this millennium. Open data share deep philosophical roots with other open movements, including the open source, open access, and open science movements. These movements believe that putting more resources and work in the public domain for others to use freely, in a manner consistent with the Open Knowledge Foundation's definition, will accelerate research and development on a global scale. The movement took a quantum leap forward in the early 2000s as technology thought leaders contributed to the open government movement. Why make data open to the public? Advocates cite that in addition to improving government efficiency and transparency, open data reduce corruption and advance public policy analysis and formation by enabling the participation of the citizenry. Open data spur innovation and the development of improved or new products and services in the private sector. A study by McKinsey & Company found that open data have the potential to generate more than $3 trillion a year in economic value across the education, health care, and transportation sectors, among others. In 2009, on the first day of his first term, US President Barack Obama issued his Memorandum on Transparency and Open Government. This marked his commitment to 'an unprecedented level of openness in Government', which would eventually include the launching of data.gov as a public repository for federal government data and the passing of the Data Act focused on transparency in federal expenditure data. Within a similar timeframe, the United Kingdom (UK) launched data.gov.uk, providing another example of a progressive government setting a standard around data transparency and accessibility. Development of open data Principles of open government data In late 2007, thirty open government advocates with global interests met in the United States, including technology and government policy notables Tim O'Reilly and Lawrence Lessig, to formulate the 8 principles of open government data, which provided a major catalyst and framework for the open data movement: - Complete: All public data is made available. Public data is data that is not subject to valid privacy, security or privilege limitations. - Primary: Data is as collected at the source, with the highest possible level of granularity, not in aggregate or modified forms. - Timely: Data is made available as quickly as necessary to preserve the value of the data. - Accessible: Data is available to the widest range of users for the widest range of purposes. - Machine processable: Data is reasonably structured to allow automated processing. - Non-discriminatory: Data is available to anyone, with no requirement of registration. - Non-proprietary: Data is available in a format over which no entity has exclusive control. - License-free: Data is not subject to any copyright, patent, trademark or trade secret regulation. Reasonable privacy, security and privilege restrictions may be allowed.
Since 2007, thousands of national governments, non-governmental organizations, international governing bodies, research organisations, special interest groups, and local governments have embraced the open data movement. Open data standards and collective commitments adopted internationally such as the G8 Open Data Charter are proof that opening data is a shared prerogative worldwide. Despite the potential of open data, a 2017 report by the World Wide Web Foundation found that only seven governments included a statement on open data by default in their policies, just one in four datasets had an open license and half of all datasets were machine-readable. Intellectual property, technology and data hygiene pose significant barriers to adopting and implementing open data initiatives. Intellectual property restrictions increase alongside advances in data-sharing processes. In the health sector, the complexities of protected health information and sensitive personal data add a layer of difficulty that slows its adoption of open data principles. Open data in the health sector In the health sector, the open data movement has grown in parallel with the concept of big data. Open data systems promise opportunities ranging from generating early warning for outbreaks and pandemics, through offering personalised medicine to individuals, to supporting health system management. Degrees of openness There are varying degrees of openness of health data, namely: - Open data files which anyone can freely download and analyse - Restricted files which people must request permission to download and use - Data that users can only interrogate using an analytic tool available on the website. The most restrictive categories apply to data sets that consist of individual health-related records of disease incidence/prevalence, treatment, compliance and outcomes. Openness for health and health-related data Data providers remove individual identifiers before rendering the data available to external users. Health data may be: - Anonymised survey or research records of people, health events, specimens, households, facilities, resources and so on - Linked anonymized patient records and specimens from health facilities and registries - Aggregated data such as mortality rates or numbers of health workers per hospital, district or country - Assorted information gathered and linked through social media or crowd-sourcing platforms. Health-related open data are available from different sectors, for example census data, economic, employment and education survey data, and climate data. Files of open health data are available on data.gov websites, academic journal websites, institutional websites, United Nations agency websites, or general purpose websites, for example: Monitoring and surveillance of infectious diseases Public Health England publishes monthly the number of methicillin resistant staphylococcus aureus (MRSA) infections in UK hospitals on data.gov.uk. Using these data, hospitals can compare figures and share best practices. Linked clinical data The Danish National Patient Registry (DNPR) links patient data and publishes them for research, under strict conditions of individual confidentiality. DNPR collects longitudinal administrative and clinical data for patients discharged from Danish Hospitals, including, for example over 8 million people between 1977 and 2012. Cross-sectional government health surveys Countries that maintain data.gov websites usually publish national health survey data for researchers to analyse. 
For example, the US Behavioral Risk Factor Surveillance System undertakes telephone surveys of US residents about their risk behaviours, chronic health conditions, and use of preventive healthcare services. Cross-sectional data from multiple international sites The USAID-funded Demographic and Health Surveys (DHS) Program has collaborated with over 90 countries to undertake more than 300 cross-sectional surveys over 30 years. Every survey uses the same set of questionnaire modules, with common metadata and statistical analyses. Datasets are freely available on completion of a short registration form, and the DHS website offers a customized tool to analyse aggregated indicators within or across surveys. Longitudinal survey data from multiple international sites The International Network for the Demographic Evaluation of Populations and Their Health (INDEPTH) has created a data repository which includes harmonized longitudinal datasets of health and demographic events in geographically defined populations studied by the network’s research centres in 20 countries across Africa, Asia and the Pacific region. Kostkova et.al. propose that: Ultimately, healthcare policymakers at international level need to develop a shared policy and regulatory framework supporting a balanced agenda that safeguards personal information, limits business exploitations, and gives out a clear message to the public while enabling the use of data for research and commercial use. One such example is the International Code of Conduct for genomic and health-related data sharing.The Code comprise six core elements, including: transparency; accountability; data security and quality; privacy, data protection and confidentiality; minimising harm and maximising benefits; recognition and attribution; sustainability; accessibility and dissemination. The open data progression model We have developed the Open Data Progression Model to provide stages for governments and organisations to follow in making their data open. Although there is consensus about best practices around an effective open data programme, there is less agreement about the sequences to develop open data programmes. There are compelling arguments as to why one stage could precede another, and many of these stages overlap or cycle between each other, but in our experience the Open Data Progression Model minimizes repetition and maximizes utility of the data. Stage 1 – Collect the data Data collection is the foundation on which to build an open data programme. The success of any downstream use of the data depends on their quality and completeness. Other topics on this website describe methods for collecting health data for specific purposes. We emphasize the additional information that investigators need to collect and provide to assist others to use their data, bearing in mind that they may not be subject specialists. For example, investigators must make sure that they capture data fields that potential users need to understand and validate the data, and use common data standards and schemas whenever possible. Open data source solutions Some significant open source solutions provide tools to make data collection and storage easier and more efficient. These software use open source which a community of developers, implementers, and users continually improve and develop. Tools include built-in collection forms and surveys combined with data storage and data collection on mobile devices which can synchronize and aggregate data to a central server. 
For example: Open Data Kit community produces free and open-source software for collecting, managing, and using data in resource-constrained environments. KoBoToolbox is a suite of open source tools for field data collection for use in challenging environments. Epi InfoTM is a public domain suite of interoperable software tools designed for the global community of public health practitioners and researchers. It provides for easy data entry form and database construction, a customized data entry experience, and data analyses with epidemiologic statistics, maps, and graphs for public health professionals who may lack an information technology background. District Health Information Software 2 (DHIS2) is an open source, web-based health management information system platform designed to assist governments and other organizations in their decision-making. Stage 2 – Document the data People who work with open data commonly complain that documentation does not provide sufficient description of context, making it difficult to understand a dataset and to determine if it is useful. Providing metadata – or information about data – is critical to helping people understand and validate data, and to encourage usage. The following represent the most critical context issues to capture and share: What is the origin and source of the data? Who collected and aggregated them? Has anyone changed the data since their original collection? By whom? When? How? What is the lineage of the data How did enumerators collect these data? Did they capture the data using an electronic system or manually? What was the population from which they collected the data? Over what time-period? How have data managers organised the data? If there are multiple files in the dataset, what is the relationship among the files? What does each item of data mean? What do key abbreviations mean? Do identifier codes need to be translated? Stage 3 – Open the data There are two dimensions to making the data open: Publishing the data The two primary criteria to use when choosing where to publish online are: Visibility: Topical or geographical open data portals often have the infrastructure to release data rapidly and with high visibility. General purpose open data portals include: data.world which has a broad catalogue of open data on different topics and a large community of users; ckan, Socrata, and OpenDataSoft specialise in helping organisations custom build and manage their own open data portals. Utility: Functionality of the platform is key to assist consumers understand, access, and work with the data. Consider whether the open data portals has any capabilities for consumers to explore data quickly, or whether the platform offers. Application Programming Interface (API) access enables consumers to programmatically pull the data directly into software tools that they use. APIs are increasingly the means to transfer data are at scale among tools and systems, and are a big part of what makes the data genuinely accessible in a technical sense. Selecting the license The absence of a license or the selection of a restrictive or custom license are among the main reasons why open data programmes fail to have their potential impact. Owners should either clearly relinquish all rights to their datasets and dedicate them to the public domain by noting public domain alongside the datasets or select an open recognized license for all their datasets. 
Licenses developed by the Creative Commons are now the licenses of choice among dataset owners given their breadth of adoption, their applicability to databases, and how they facilitate collaboration. The Creative Commons website provides a tool for choosing the appropriate license depending on the purpose of the dataset. When analysts combine datasets from various sources, the most restrictive license involved in that combination then becomes the license for the enhanced dataset or derivative work. All derivative works that utilize the dataset, even if the dataset is a very small part of the derivative work, are now hampered in their usage by the constraints of that license. Work that involves some datasets from multiple sources often face a complex analysis concerning how different licenses may conflict, restrict, or even prohibit certain types of work output. Stage 4 – Engage the community of data users According to the Africa Data Consensus: ‘A data community refers to a group of people who share a social, economic or professional interest across the entire data value chain – spanning production, management, dissemination, archiving and use.’ A data community is likely composed of a broad range of people and entities with differing skill sets, including, for example, large organisations such as non-governmental organizations and government agencies as well as independent researchers, non-technical subject-matter experts, and citizen data scientists. A vibrant community is a force multiplier of an open data programme, creating value through three dimensions: The community can provide feedback on what data they are interested in and details of the metadata and context that would be most useful for them. The community can indicate not only what data to invest in collecting but also how to collect and publish them. Community members can help to clean, annotate, and enhance the data, whether this is improving the data dictionary or building schemas and ontologies that can help contextualise the data within a specific field or topic. Good data work is inherently social, and the global effort for progress benefits not only from leveraging the work others have done cleaning and prepping the data, but also in the exploratory analysis, visualisation, and other derivative works others have created from those data. It is important that the community creates a mechanisms to work together efficiently, for example, by naming an owner of a dataset who engages with the community to answer their questions, proactively seek their feedback, and capture their user stories. Stage 5 – Ensure interoperability Interoperability is the ability to exchange and use information between systems. Important issues to consider when optimising interoperability are: Prepare the data Prepare the data so that they are structured or machine-readable as opposed to unstructured data meant to be read by people. Think about the difference between a word processor document and a spreadsheet. Both might contain statistical data, but users need to read the document to pull data out, whereas they can query the data in a spreadsheet using software. Use open formats and standards It is best to publish structured data in open formats and standards, as opposed to proprietary, closed formats. A growing number of open and commercial software programs support open formats and standards. Such software allows consumers of the data to more easily interpret and convert the data within their regular tools. 
Proprietary formats, on the other hand, often rely on commercial software that consumers would need to purchase or open software based on unpublished specifications, and may have licensing or usage restrictions that make them unsuitable for many projects. Use tidy data Use tidy data that provide a standard way to organise data connecting their meaning to their structure – such that a data consumer can easily discover what the columns, rows, and cell values represent. Consider a situation in which an enumerator interviews ten individuals and asked each of them their age, gender, and where they live. A tidy dataset will consist of ten rows (one for each individual) and three columns (one for each variable or type of observation); each cell will contain the value of the corresponding variable (column) for the corresponding individual (row). Use standard vocabularies, codes, and taxonomies Controlled vocabularies ensure that multiple observations for the same variable use the same coding system, supporting comparison and aggregation. It is preferable to use a standard code for values that have a commonly understood meaning. Where there are several common taxonomies for a concept, crosswalk data can map values from one taxonomy to another – allowing data using either one to be joined. Stage 6 – Link data The possibilities of open health data become most fully realized at the final stage in the progression model when the data are linked. When users link data, they become more interoperable, which in turn significantly improves discoverability and facilitates collaboration. The health research community was one of the earliest adopters of linked data. The pharmaceutical industry has benefited from creating a body of knowledge around particular drug compounds. DrugBank and RxNorm, for example, link individual drugs to clinical trials, drug-drug interaction data, and manufacturer information. This allows pharmaceutical researchers to see where a new drug may be successfully applied or where dangerous side effects may arise if combined with other medications. The four principles Tim Berners-Lee outlined four principles that would maximise the potential of linked data, following similar principles to the World Wide Web: Principle 1: Use Uniform Resource Identifiers (URIs) as names for things; Principle 2: Use HTTP URIs so that people can look up those names when they look up a URI; Principle 3: Provide useful information about the data in standardized ways (RDF and the query language SPARQL); Principle 4: Include links to other URIs to discover more things. The four principles have a common purpose: 1) to facilitate the organisation of information; 2) enable linkage to related concepts; and 3) to make it easier for machines and humans to follow those linkages. Ontologies provide a powerful way of leveraging these linked concepts and the relationships between them. Ontologies extend the idea of using standard identifiers and taxonomies for concepts by modelling the relationships themselves and the logical connections between them. Data sharing is widely regarded as best practice. But there are many difficulties, particularly in sharing individual health records, for example: Alter and Vardigan point out there are ‘ethical issues that arise when researchers conducting projects in low- and middle-income countries seek to share the data they produce.’ Concerns relate to ethics of informed consent, data management, and intellectual property and ownership of personal data. 
Wyber et al observe ‘sheer size increases both the potential risks and potential benefits of [data sharing]. The approach may have most value in low-resource settings. But it is also most vulnerable to fragmentation and misuse in such settings.’ Kostkova et al acknowledge that whereas the potential of opening healthcare data and sharing big datasets is enormous, the challenges and barriers to achieve this goal are similarly enormous, and are largely ethical, legal and political in nature. A balance needs to be struck between the interests of government, businesses, health care providers and the public. Significant barriers to global progress are lack of data visibility and poor connectedness among people and institutions seeking to solve similar problems. A sustained open data revolution that lowers these barriers would accelerate collaboration and problem-solving on a global scale. This would provide a key to solving some of the world’s biggest challenges in global health. The complete chapter on which we based this page: Laessig M., Jacob B., AbouZahr C. (2019) Opening Data for Global Health. In: Macfarlane S., AbouZahr C. (eds) The Palgrave Handbook of Global Health Data Methods for Policy and Practice. Palgrave Macmillan, London. Chignard S. A Brief History of Open Data. The Open Data Barometer. this site actively tracks and scores the progress and quality of over 100 open data programmes. G8 Open Data Charter and Technical Annex. In this policy paper published in June 2013 G8 members agree lays to follow a set of five principles as the foundation for access to, and the release and re-use of, data made available by their governments. How Linked Data creates data-driven cultures (in business and beyond). This white paper describes the potential of linked data and provides tips for its practical adoption. The IHME Global Health Data Exchange provides a list of country open data sites
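As a concrete illustration of the tidy-data layout described under Stage 5 above (one row per individual, one column per variable, one value per cell), the short pandas sketch below lays out the ten-person interview example from that section. The names and values are invented for illustration, and pandas is assumed to be available.

import pandas as pd

# Ten individuals (rows), three variables (columns): age, gender, location.
tidy = pd.DataFrame(
    {
        "person_id": range(1, 11),
        "age": [34, 27, 61, 45, 19, 52, 38, 70, 23, 41],
        "gender": ["F", "M", "F", "M", "F", "M", "F", "F", "M", "M"],
        "location": ["Accra", "Accra", "Kumasi", "Tamale", "Kumasi",
                     "Accra", "Tamale", "Kumasi", "Accra", "Tamale"],
    }
).set_index("person_id")

print(tidy.shape)                              # (10, 3): ten individuals, three variables
print(tidy.groupby("location")["age"].mean())  # aggregation becomes a one-liner

Once data are held in this shape and exported to an open, machine-readable format such as CSV, the downstream steps the chapter describes (joining on standard codes, aggregating, and linking to other datasets) become routine operations rather than manual re-keying.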
Many times educators and students wonder about the importance of research in education. However, experts have their reasons to support research. Some of them say that knowledge, when applied rightly, becomes wisdom. Carrying out research is the best option to attain wisdom. For a student, research can be an exciting and informative journey. What are the benefits of research in education? Research offers several benefits for both students and educators. Let’s check some of them. - Makes students self-sufficient Keep aside the result of the research, the research process alone is enough for a student to become self-sufficient. Students attain the ability to dig out information and go deep into the subject to learn more. Conducting research will help students to learn the current status of the subject. They also get to develop their basic library skills and can review several writing styles, enhancing both writing and reading skills. It also helps to develop the critical thinking skills of a student. - An opportunity to grow and prosper Research is a tool that combines the known and the unknown. You can grow and prosper in the process. The knowledge you gathered through your previous research will serve as the basic foundation to gain new knowledge. Only research can offer you new knowledge, which will be then passed on to the knowledge community. The knowledge continuum will be affected if research is not conducted. - Initiated scientific study in the next generation Scientific methods are the basis of all research. While students are instructed to root on empirically-based research, it will ignite the fuel of scientific inquiry in the coming generation too. Besides relying on the already established conventional methods, students will look for innovative ways and perspectives to conduct their research. Creative and critical thinking is very important for the young generation to survive in today’s highly competitive globalized world. The research will inspire the young generation to explore new knowledge for the growth of the community. - Carries out different purposes Just imagine if a doctor does not conduct research or read a research article and relies completely on traditional methods and opinions. This is where research serves several roles. It helps students to update themselves with new findings. They also get to understand the limitations and drawbacks of the current situation, which needs a change. Studying the theories and studies of experts in the subject will give them a wider view of the world. Several options are available now for students to carry out their research well. - Can learn different perspectives While there may be evidence for some statements we believe, some will not have scientific evidence. They can be regarded as opinions. Conducting research requires you to be open to more opinions. People will have different perspectives and opinions on the subject you research. Listening to them will help you to think wider. Try to get in touch with people who succeeded in their research on your subject and also those who failed without getting any evidence. Your research will be improved with these perspectives.
Broadsides are single sheets of paper with one or more song lyrics printed on them. In their most elementary form, the sheets are only printed on one side (in broadsheet). If more than one song was printed on a single sheet, the lyrics were placed alongside each other in columns, so the sheet could be cut into strips. But some broadsides were folded in two (resulting in four pages), and there were also instances of loose quires (resulting in eight or sixteen pages). The paper that was used was of poor or mediocre quality. Broadsides were printed by the popular press, which used the cheapest possible printing methods. This type of printing was usually set in gothic type until well into the eighteenth century. There was hardly ever any musical notation, and illustrations (wood carvings) were often re-used. The low cost made these sheets affordable to all. Little is known about the way in which broadsides were produced: much of the source material has been lost. In the broadside printing process, six roles are of pivotal importance: the author and composer write a lyric and melody; the publisher has the song published; the printer prints it; the vendor sells it, and the singer sings it. Not in all cases were all these roles necessary (an existing melody, for instance, didn't require a composer), and in most cases a single person filled several roles.
You will find the Printed Circuit Board (PCB) in many electronics. The idea behind using a PCB to organize the electrical connections between components is to ensure that no loose wires are lying about. The PCB also makes it possible to solder the electrical components directly onto the board. Worth mentioning is that the popularity of the PCB is not only due to the impressive contribution it makes to electrical and electronic manufacturing: the board has since been adopted in many industries, such as aviation, electronics, and defence.
What Is a PCB Board?
What is this PCB Board that has been making the news lately? What does it mean? How does it work? Are there any benefits that come with using the board?
First, PCB is an abbreviation for Printed Circuit Board. The PCB is a board that contains lines, pads, paths, and tracks. These elements are used to connect the various points or components of the board together.
Second, the PCB allows components to be soldered onto the board in order to connect it to mechanical and electronic components.
Third, the PCB provides routes for power and signals between physical devices.
Fourth, the PCB is built on a substrate. Copper is laminated on top of this substrate to create the required connection routes between the different components of the board.
Methods of Designing Printed Circuit Boards (PCBs)
The Printed Circuit Board (PCB) has come a long way. It is the outcome of a series of technological advancements aimed at improving the connection of electronic and electrical components in any appliance. Compared with the approach it replaced (wire wrapping), the PCB offers many impressive upsides, so it is not out of place to say that the Printed Circuit Board stands a better chance in the market.
With that in mind, PCBs are now designed in different ways. Currently, there are two (2) main methods. The first is called Through-Hole Technology. The second is called Surface Mount Technology (SMT). We will explain each of them briefly.
- Through-Hole Technology
Through-Hole Technology is the first method of designing Printed Circuit Boards (PCBs). It involves mounting electronic components by their leads: the leads are inserted through holes drilled in the board and then soldered onto the copper traces on the other side. Although Through-Hole Technology is now considered "obsolete," it served the industry well while it was in vogue. Because of this method, PCBs manufactured at the time often had wires passing through the holes before being soldered to the required components on the board.
- Surface Mount Technology
Surface Mount Technology (SMT) is the second and more advanced process for designing and manufacturing Printed Circuit Boards (PCBs). In this case, the components or required parts are mounted directly on the surface of the board. Electronic devices made using Surface Mount Technology are called Surface Mount Devices (SMDs). Surface Mount Technology is not only faster.
It is also affordable and makes use of the latest PCB innovations.
Types of Printed Circuit Boards (PCBs)
Many people tend to confuse the PCB methods (discussed above) with the PCB types, so we would like to point out that they are different. On the one hand, the methods of PCB design are the two major processes used in designing and manufacturing Printed Circuit Boards (PCBs). On the other hand, the types of Printed Circuit Boards (PCBs) are the different variants in which a PCB can come.
So, we are going to look at the different types of Printed Circuit Boards. You will discover how each of them works and what makes each of them different from the others.
- Single-Sided PCBs
As the name suggests, the Single-Sided PCB has only "one side." Hence, it has only one layer of the base material, which is called the substrate. Single-Sided PCBs are ideal for people who are just starting out in the PCB industry; the non-complex circuitry makes it one of the fastest ways to master how to design and manufacture PCBs.
Metal is usually laminated onto one side of the base material of the Single-Sided PCB. The lamination creates a path for building electrical connections between the electronic components that have been or will be soldered onto the board. The board is protected by placing a solder mask on top of it. Copper is usually used to create the conducting path, because copper has low resistance and is a good conductor.
Benefits of Using Single-Sided PCBs
There are many benefits or upsides that come with using Single-Sided Printed Circuit Boards. Some of them are:
- Easy availability, which makes room for mass production
- Doesn't involve complex circuitry
- Can be used in many applications, such as printers, stereo components, power supplies, and calculators
- Double-Sided PCB Boards
You can easily guess that Double-Sided Printed Circuit Boards (PCBs) are the advanced form of Single-Sided PCBs: they have "two sides." That is not all there is to the Double-Sided PCB, though. The key difference from the Single-Sided PCB is that it has copper on both sides of the substrate.
The components used on this board are connected using both Through-Hole Technology and Surface Mount Technology (SMT). The circuits on one side of the board are connected to those on the other side through holes drilled in the board.
Benefits of Using Double-Sided PCB Boards
The Double-Sided PCB has many appealing features. Here are some of the benefits of using the board:
- Uses the two major technologies in PCB design and manufacturing
- Has many advanced use cases, such as HVAC systems, LED lighting, amplifiers, vending machines, and automotive dashboards
- Has a moderate level of complexity, which makes it suitable for beginners, intermediates, and experts in the PCB industry
- Multilayer PCB Boards
Multilayer PCBs combine the properties of the Single-Sided PCB and the Double-Sided PCB. In this case, Multilayer PCBs make use of multiple layers.
Also, the incorporation of more layers serves as a preventive measure. The additional layers, among many other benefits, help to prevent the electromagnetic interference that is commonplace in this type of PCB design. Additional protection is also in place: a piece of insulation is placed between each of the boards. This insulation protects the electronic and electrical components on the PCB from getting burnt while the heating process is ongoing.
Benefits of Using Multilayer PCB Boards
What are the benefits or advantages of using Multilayer PCBs? Discover them below:
- This type of PCB allows for complex and dense designs
- The extra layers help to reduce electromagnetic interference in the finished design
- Multilayer PCBs are used in far more advanced applications, such as weather analysis, GPS technology, and file servers
How Are PCB Boards Made?
You want to know the processes involved in designing and manufacturing PCBs. You are not alone in this. We will summarize the process of making a Printed Circuit Board.
- Making the Substrate
The substrate is made through several processes, the most notable of which is dipping it in, or spraying it with, epoxy resin. The material is then rolled to the desired thickness. The next step is to give it the needed solidity by curing it in an oven.
- Bonding the Copper Layers
The copper layers are bonded to the surface of the substrate. This can be done in several ways, such as:
- Applying adhesive to fix the copper layers to the surface of the substrate
- Applying pressure
After this, the other necessary components are integrated, usually with a soldering iron.
- Solder Masking
The final step is applying the solder mask. The solder mask performs many functions, such as protecting the metal parts and ensuring the smooth flow of current to the relevant points on the PCB. The manufacturer then removes the unnecessary parts and materials still stuck to the PCB, and you have your finished board.
The use of a PCB saves time, improves the efficiency of electronic products, and saves costs. Always make sure you hire the services of a PCB manufacturer that understands the job and can follow your instructions to the letter.
Off the coast of France (bottom right) and the United Kingdom (top right), microscopic marine plants known as phytoplankton are blooming in the waters of the Atlantic Ocean, coloring the ocean blue and green. The Bristol Channel, which separates England from Wales, appears filled with murky water. The tan color could be a mixture of sediment and organic matter flowing into the Channel from rivers and streams as well as material churned up by waves and tidal action. This image is from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite on June 12, 2003. The high-resolution image provided above is 500 meters per pixel. The MODIS Rapid Response System provides this image at MODIS' maximum spatial resolution of 250 meters.
Iridescent shades of peacock blue and emerald green decorated the South Atlantic Ocean off the coast of Argentina on December 24, 2007. Though hundreds of kilometers in length, these bright bands of color were formed by minuscule objects—tiny surface-dwelling ocean plants known as phytoplankton.
Ocean plants color the water of the Great Australian Bight off the shore of Victoria, Australia, in this photo-like Moderate Resolution Imaging Spectroradiometer (MODIS) image, taken by NASA's Terra satellite on January 11, 2007.
The fire triangle illustrates the three elements a fire needs to ignite: heat, fuel, and an oxidizing agent (usually oxygen). A fire naturally occurs when these elements are present and combined in the right mixture, and a fire can be prevented or extinguished by removing any one of them. For example, covering a fire with a fire blanket removes the "oxygen" part of the triangle and can extinguish the fire. Our training is done on your premises and includes identifying the different types of extinguishers and their basic use on different types of fire. Participants will get the opportunity to discharge the different types of extinguishers.
The math problem presented to your class is straightforward: one hundred ants have 600 legs. How many legs do 10 ants have? We ask a question. A student answers. We quickly note the answer as correct, maybe show an equation, and move on to the next question. Everything goes smoothly, and incorrect answers are quickly resolved with a standard procedure: a similar equation or approach to calculating the correct answer.
What if we stopped along the way and asked "Why?" Or, "How do you know?" How about "I wonder why that answer is correct?" Questions are a beautiful, powerful force for change. Even seemingly simple math tasks can transform student understanding if we delve deeper into their underlying concepts. Watch this short video that shows how questioning transforms a simple math problem into an interesting examination of addition, subtraction, multiplication, and division:
What if you tried this?
To facilitate a growth mindset in your math classroom, try asking one or two follow-up questions during class work:
- How do you know your solution works?
- Can you make a model of your solution?
- Does that make sense to everyone? Can you explain?
- How is your strategy the same as or different from others'?
- Can we predict what would happen if...?
Questioning sends the message that you want students to actively participate in their learning. They are the main stakeholders, and this responsibility encourages deeper thinking and greater effort.
Change Isn't Easy
We are all conscious of the time pressure in math class. Even so, when you ask probing questions, you not only reveal what your students understand, but also what they don't yet understand. These questions and the discussion that follows may form the basis for your next lesson plan.
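For reference, here is one way the reasoning behind the ant problem can be made explicit. This is only a sketch of two routes a student might verbalize; it is not part of the original lesson or video:

$$\frac{600\ \text{legs}}{100\ \text{ants}} = 6\ \text{legs per ant}, \qquad 10\ \text{ants} \times 6\ \tfrac{\text{legs}}{\text{ant}} = 60\ \text{legs}$$

or, since 10 ants are one tenth of 100 ants, $600 \div 10 = 60$ legs. Asking students which route they took, and why both give the same answer, is exactly the kind of follow-up question described above.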
Accretion is a process by which material is added to a tectonic plate. This material may be sediment, volcanic arcs, seamounts, or other igneous features. When two tectonic plates collide, one of the plates may slide under the other. This process is called subduction. The plate that is being subducted (the plate going under) rides on the asthenosphere and is pushed beneath and against the other plate. Sediment on the ocean floor is often scraped off the subducting plate as it goes under. This scraped-off sediment forms a mass of material called the "accretionary wedge", which attaches itself to the overriding plate (the top plate). Volcanic island arcs or seamounts may collide with the continent, and as they are of relatively light material (i.e. low density) they will often not be subducted, but are thrust into the side of the continent, thereby adding to it.
Continental plates are formed of rocks that are very noticeably different from the rocks that form the ocean floor. The ocean floor is usually composed of basaltic rocks, which make it denser than the continental plates. In places where plate accretion has occurred, land masses may contain the dense, basaltic rocks that are usually indicative of oceanic lithosphere. In addition, a mountain range that is distant from a plate boundary suggests that the rock between the mountain range and the plate boundary is part of an accretionary wedge.
This process is present in many places, but especially around the Pacific Rim, including the western coast of North America, the eastern coast of Australia, and New Zealand. New Zealand consists of areas of accreted rocks which were added on to the Gondwana continental margin over a period of many millions of years. The western coast of North America is made up of accreted island arcs. The accreted area stretches from the Rocky Mountains to the Pacific coast.
During the Covid-19 lockdown, daily phonics lessons will be available from 27th April 2020 via this link. https://www.youtube.com/channel/UCP_FbjYUP_UtldV2K_-niWw/channels?view_as=public
What is phonics?
Phonics is a way of teaching children to read. They are taught how to recognise the sounds that individual letters make, identify the sounds that different combinations of letters make, and blend these sounds together from left to right to make a word. Children can then use this knowledge to segment and blend new words that they hear or see. Currently in school we use the Letters and Sounds programme.
Research shows that when phonics is taught in a structured way – starting with the easiest sounds and progressing through to the most complex – it is the most effective way of teaching young children to read. All children are individuals and develop at different rates. A phonics screening check at the end of Year One ensures that teachers understand which children need extra help with phonic decoding.
College PhysicsScience and Technology Static Electricity and Charge: Conservation of Charge What makes plastic wrap cling? Static electricity. Not only are applications of static electricity common these days, its existence has been known since ancient times. The first record of its effects dates to ancient Greeks who noted more than 500 years B.C. that polishing amber temporarily enabled it to attract bits of straw (see [link]). The very word electric derives from the Greek word for amber (electron). Many of the characteristics of static electricity can be explored by rubbing things together. Rubbing creates the spark you get from walking across a wool carpet, for example. Static cling generated in a clothes dryer and the attraction of straw to recently polished amber also result from rubbing. Similarly, lightning results from air movements under certain weather conditions. You can also rub a balloon on your hair, and the static electricity created can then make the balloon cling to a wall. We also have to be cautious of static electricity, especially in dry climates. When we pump gasoline, we are warned to discharge ourselves (after sliding across the seat) on a metal surface before grabbing the gas nozzle. Attendants in hospital operating rooms must wear booties with aluminum foil on the bottoms to avoid creating sparks which may ignite the oxygen being used. Some of the most basic characteristics of static electricity include: - The effects of static electricity are explained by a physical quantity not previously introduced, called electric charge. - There are only two types of charge, one called positive and the other called negative. - Like charges repel, whereas unlike charges attract. - The force between charges decreases with distance. How do we know there are two types of electric charge? When various materials are rubbed together in controlled ways, certain combinations of materials always produce one type of charge on one material and the opposite type on the other. By convention, we call one type of charge “positive”, and the other type “negative.” For example, when glass is rubbed with silk, the glass becomes positively charged and the silk negatively charged. Since the glass and silk have opposite charges, they attract one another like clothes that have rubbed together in a dryer. Two glass rods rubbed with silk in this manner will repel one another, since each rod has positive charge on it. Similarly, two silk cloths so rubbed will repel, since both cloths have negative charge. [link] shows how these simple materials can be used to explore the nature of the force between charges. More sophisticated questions arise. Where do these charges come from? Can you create or destroy charge? Is there a smallest unit of charge? Exactly how does the force depend on the amount of charge and the distance between charges? Such questions obviously occurred to Benjamin Franklin and other early researchers, and they interest us even today. Charge Carried by Electrons and Protons Franklin wrote in his letters and books that he could see the effects of electric charge but did not understand what caused the phenomenon. Today we have the advantage of knowing that normal matter is made of atoms, and that atoms contain positive and negative charges, usually in equal amounts. [link] shows a simple model of an atom with negative electrons orbiting its positive nucleus. The nucleus is positive due to the presence of positively charged protons. 
Nearly all charge in nature is due to electrons and protons, which are two of the three building blocks of most matter. (The third is the neutron, which is neutral, carrying no charge.) Other charge-carrying particles are observed in cosmic rays and nuclear decay, and are created in particle accelerators. All but the electron and proton survive only a short time and are quite rare by comparison.
The charges of electrons and protons are identical in magnitude but opposite in sign. Furthermore, the charges of all charged objects in nature are integral multiples of this basic quantity of charge, meaning that all charges are made of combinations of a basic unit of charge. Usually, charges are formed by combinations of electrons and protons. The magnitude of this basic charge is
$|q_e| = 1.60 \times 10^{-19}\ \text{C}.$
The symbol $q$ is commonly used for charge and the subscript $e$ indicates the charge of a single electron (or proton). The SI unit of charge is the coulomb (C). The number of protons needed to make a charge of 1.00 C is
$\frac{1.00\ \text{C}}{1.60 \times 10^{-19}\ \text{C/proton}} = 6.25 \times 10^{18}\ \text{protons}.$
Similarly, $6.25 \times 10^{18}$ electrons have a combined charge of −1.00 coulomb. Just as there is a smallest bit of an element (an atom), there is a smallest bit of charge. There is no directly observed charge smaller than $|q_e|$ (see Things Great and Small: The Submicroscopic Origin of Charge), and all observed charges are integral multiples of $|q_e|$.
With the exception of exotic, short-lived particles, all charge in nature is carried by electrons and protons. Electrons carry the charge we have named negative. Protons carry an equal-magnitude charge that we call positive. (See [link].) Electron and proton charges are considered fundamental building blocks, since all other charges are integral multiples of those carried by electrons and protons. Electrons and protons are also two of the three fundamental building blocks of ordinary matter. The neutron is the third and has zero total charge.
[link] shows a person touching a Van de Graaff generator and receiving excess positive charge. The expanded view of a hair shows the existence of both types of charges but an excess of positive. The repulsion of these positive like charges causes the strands of hair to repel other strands of hair and to stand up. The further blowup shows an artist's conception of an electron and a proton perhaps found in an atom in a strand of hair.
The electron seems to have no substructure; in contrast, when the substructure of protons is explored by scattering extremely energetic electrons from them, it appears that there are point-like particles inside the proton. These sub-particles, named quarks, have never been directly observed, but they are believed to carry fractional charges as seen in [link]. Charges on electrons and protons and all other directly observable particles are unitary, but these quark substructures carry charges of either $+\frac{2}{3}|q_e|$ or $-\frac{1}{3}|q_e|$. There are continuing attempts to observe fractional charge directly and to learn of the properties of quarks, which are perhaps the ultimate substructure of matter.
Separation of Charge in Atoms
Charges in atoms and molecules can be separated—for example, by rubbing materials together. Some atoms and molecules have a greater affinity for electrons than others and will become negatively charged by close contact in rubbing, leaving the other material positively charged. (See [link].) Positive charge can similarly be induced by rubbing. Methods other than rubbing can also separate charges. Batteries, for example, use combinations of substances that interact in such a way as to separate charges.
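As a quick aside (not part of the original text), the coulomb-to-proton conversion quoted above can be checked with a few lines of Python. Only the basic charge magnitude from this section is assumed; the function name is just illustrative:

```python
# A minimal sketch checking the arithmetic above: how many elementary charges
# make up a given amount of charge? Only the basic charge magnitude quoted in
# the text (1.60 x 10^-19 C) is assumed.
BASIC_CHARGE_C = 1.60e-19  # magnitude of the charge on one proton or electron, in coulombs

def elementary_charges(total_charge_c: float) -> float:
    """Return how many protons (or electrons) carry the given total charge."""
    return abs(total_charge_c) / BASIC_CHARGE_C

print(f"Protons in +1.00 C:   {elementary_charges(+1.00):.3e}")  # about 6.25e+18
print(f"Electrons in -1.00 C: {elementary_charges(-1.00):.3e}")  # the same count
```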
Chemical interactions may transfer negative charge from one substance to the other, making one battery terminal negative and leaving the other positive.
No charge is actually created or destroyed when charges are separated as we have been discussing. Rather, existing charges are moved about. In fact, in all situations the total amount of charge is always constant. This universally obeyed law of nature is called the law of conservation of charge. Total charge is constant in any process.
In more exotic situations, such as in particle accelerators, mass, $m$, can be created from energy $E$ in the amount given by $E = mc^2$. Sometimes, the created mass is charged, such as when an electron is created. Whenever a charged particle is created, another having an opposite charge is always created along with it, so that the total charge created is zero. Usually, the two particles are "matter-antimatter" counterparts. For example, an antielectron would usually be created at the same time as an electron. The antielectron has a positive charge (it is called a positron), and so the total charge created is zero. (See [link].) All particles have antimatter counterparts with opposite signs. When matter and antimatter counterparts are brought together, they completely annihilate one another. By annihilate, we mean that the mass of the two particles is converted to energy $E$, again obeying the relationship $E = mc^2$. Since the two particles have equal and opposite charge, the total charge is zero before and after the annihilation; thus, total charge is conserved.
Only a limited number of physical quantities are universally conserved. Charge is one—energy, momentum, and angular momentum are others. Because they are conserved, these physical quantities are used to explain more phenomena and form more connections than other, less basic quantities. We find that conserved quantities give us great insight into the rules followed by nature and hints to the organization of nature. Discoveries of conservation laws have led to further discoveries, such as the weak nuclear force and the quark substructure of protons and other particles. The law of conservation of charge is absolute—it has never been observed to be violated. Charge, then, is a special physical quantity, joining a very short list of other quantities in nature that are always conserved. Other conserved quantities include energy, momentum, and angular momentum.
Why does a balloon stick to your sweater? Rub a balloon on a sweater, then let go of the balloon and it flies over and sticks to the sweater. View the charges in the sweater, balloons, and the wall.
- There are only two types of charge, which we call positive and negative.
- Like charges repel, unlike charges attract, and the force between charges decreases with the square of the distance.
- The vast majority of positive charge in nature is carried by protons, while the vast majority of negative charge is carried by electrons.
- The electric charge of one electron is equal in magnitude and opposite in sign to the charge of one proton.
- An ion is an atom or molecule that has nonzero total charge due to having unequal numbers of electrons and protons.
- The SI unit for charge is the coulomb (C), with protons and electrons having charges of opposite sign but equal magnitude; the magnitude of this basic charge is $|q_e| = 1.60 \times 10^{-19}\ \text{C}$.
- Whenever charge is created or destroyed, equal amounts of positive and negative are involved.
- Most often, existing charges are separated from neutral objects to obtain some net charge.
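To make the pair-creation bookkeeping above concrete, here is a small illustrative calculation. This is only a sketch: the electron mass and speed of light are standard reference values, not quantities given in this text.

```python
# Sketch: minimum energy needed to create an electron-positron pair via E = m c^2.
# Standard reference values are assumed; they are not quoted in the text above.
M_ELECTRON_KG = 9.11e-31   # electron (and positron) rest mass, in kilograms
C_M_PER_S = 3.00e8         # speed of light, in meters per second
JOULES_PER_MEV = 1.602e-13

# Two particles of opposite charge must be created together, so net charge stays zero.
pair_energy_j = 2 * M_ELECTRON_KG * C_M_PER_S**2
print(f"Electron-positron pair: {pair_energy_j:.2e} J "
      f"(~{pair_energy_j / JOULES_PER_MEV:.2f} MeV)")
```

The result, roughly 1.02 MeV, is the familiar threshold energy for pair production, and the charge total is zero before and after, just as the conservation law requires.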
- Both positive and negative charges exist in neutral objects and can be separated by rubbing one object with another. For macroscopic objects, negatively charged means an excess of electrons and positively charged means a depletion of electrons.
- The law of conservation of charge ensures that whenever a charge is created, an equal charge of the opposite sign is created at the same time.
There are very large numbers of charged particles in most objects. Why, then, don't most objects exhibit static electricity?
Why do most objects tend to contain nearly equal numbers of positive and negative charges?
Problems & Exercises
Common static electricity involves charges ranging from nanocoulombs to microcoulombs. (a) How many electrons are needed to form a charge of (b) How many electrons must be removed from a neutral object to leave a net charge of ?
If electrons move through a pocket calculator during a full day's operation, how many coulombs of charge moved through it?
To start a car engine, the car battery moves electrons through the starter motor. How many coulombs of charge were moved?
A certain lightning bolt moves 40.0 C of charge. How many fundamental units of charge is this?
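As an unofficial worked hint for the last exercise (a sketch, not a provided solution), only the 40.0 C figure and the basic charge magnitude from this section are used:

```python
# Unofficial check for the lightning-bolt exercise: express 40.0 C as a number
# of fundamental units of charge. Only values quoted in this section are used.
BASIC_CHARGE_C = 1.60e-19  # magnitude of the charge on one electron or proton

lightning_charge_c = 40.0
n_units = lightning_charge_c / BASIC_CHARGE_C
print(f"{n_units:.2e} fundamental units of charge")  # about 2.50e+20
```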