A repeater, also called a regenerator, is an electrical or optical signal amplifier or conditioner used in communication technology to increase the range of a signal. The repeater is located some distance from the transmitter, receives its signals, and sends them on in processed form, so that a greater distance can be bridged. With digital transmission methods, the repeater can also decode the signal, which removes signal interference (such as noise or distortion of the pulse shape). The signal is then re-encoded, modulated, and sent on. Simple repeaters do not affect the transmitted information; only the electrical or optical signal is processed. More sophisticated digital repeaters, in contrast, can add information to the signal, e.g. an identifier that makes the signal path traceable when several paths are possible. As a signal traverses a communication channel, it is gradually degraded by transmission losses. For example, when a phone call is made over a telephone line, some of the power is dissipated as heat. The longer the wire, the more power is lost and the smaller the amplitude of the signal at the other end. If the line is long enough, the call cannot be heard at the other end. Likewise, the further a receiver is from a radio station, the weaker the radio signal and the worse the reception. A repeater is an electronic device on a communication channel that increases the power of a signal and retransmits it so it can travel further. Because it amplifies the signal, it requires a source of electrical energy. The term "repeater" originated in 19th-century telegraphy, where it referred to an electromechanical device (a relay) used to regenerate telegraph signals. The term has remained in use in telephony and data communications.
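The attenuation described above can be illustrated with a quick calculation; the loss figure and line length below are invented for illustration only:

```python
def received_power_mw(tx_power_mw, loss_db_per_km, length_km):
    """Power left at the far end of a line with a given per-km loss."""
    total_loss_db = loss_db_per_km * length_km
    return tx_power_mw * 10 ** (-total_loss_db / 10)

# A 10 mW signal over 30 km of line losing 1 dB/km arrives 1000x weaker:
print(received_power_mw(10, 1.0, 30))  # ~0.01 mW
```

A repeater placed mid-span would see a much stronger signal (15 dB of loss instead of 30 dB) and could restore it to full level before the remaining run.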
Repeaters can be divided into two types depending on the kind of data they process. Analog repeaters are used in channels that carry data as an analog signal, in which the voltage or current is proportional to the amplitude of the signal, such as an audio signal. They are also used in trunk lines that carry multiple signals using frequency-division multiplexing (FDM). Analog repeaters consist of a linear amplifier and may contain electronic filters to compensate for frequency and phase distortions in the line. Digital repeaters are used in channels that carry data as binary digital signals, in which the data take the form of pulses with only two possible values representing the binary digits 1 and 0; a digital repeater amplifies the signal and can also retime, resynchronize, and reshape the pulses. A repeater that performs the retiming or resynchronization functions may be called a regenerator.

Repeaters in computer technology

In computer networks, repeaters belong to the physical layer (layer 1 of the OSI model) and are used to extend network segments. Transceivers and star couplers are special variants of repeaters. A repeater with more than two connections is also known as a hub or multi-port repeater. A media converter can also be viewed as a repeater as long as it does not contain a bridge function.

Repeaters in network technology

Repeaters are useful, for example, in LANs with a bus topology to extend the maximum cable length, e.g. the 185 m limit of 10BASE2. The repeater divides the network into two physical segments, but the logical bus topology is retained. As a result, the repeater increases the reliability of the network: if one sub-network fails, the other can continue to operate independently, whereas in a plain bus topology the entire network would fail. Repeaters do not increase the available data rate of a network.
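A minimal sketch of the re-decision step a digital repeater performs: noisy, attenuated pulse levels are compared against a threshold and emitted as clean binary values (the sample values below are made up):

```python
def regenerate(samples, threshold=0.5):
    """Re-decide each noisy sample as a clean 0 or 1, as a digital repeater does."""
    return [1 if s > threshold else 0 for s in samples]

noisy = [0.9, 0.1, 0.75, 0.3, 0.55]   # attenuated, noisy pulse levels
print(regenerate(noisy))               # [1, 0, 1, 0, 1] -- pulse shape restored
```

As long as the noise never pushes a sample across the threshold, the output is an exact copy of the original bit stream, which is why digital links can chain many regenerators without accumulating distortion.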
There are two types of repeaters in LAN technology: local repeaters, which connect two local network segments directly, and remote repeaters, which connect two physically separate network segments via a so-called link segment. A link segment consists of two repeaters interconnected by fiber optic cable, which allows larger distances to be bridged. Repeaters cannot be cascaded arbitrarily in an Ethernet. Since segments connected by repeaters form a collision domain, two stations may only be so far apart that, despite the signal delay times, collision detection still functions reliably. This is governed by the 5-4-3 rule.

Repeaters in fiber optic technology

In optical submarine cables, the light is amplified every 50 to 80 km by optical amplifiers built into the cable on the seabed. In addition, optical repeaters are required at greater intervals (approx. every 500 to 1000 km) to regenerate the edge steepness of the light pulses and to compensate for any time delays. The optical amplifiers and repeaters are supplied with electrical energy through the copper sheath of the cable.

In information technology, WLAN repeaters can be used to increase the range of a wireless network. When a repeater is used, however, the data transmission rate of the wireless network behind the device is halved, as the repeater communicates with both the clients and the wireless access point. Halving the data rate can be avoided if the communication between client and repeater takes place on a different frequency than that between repeater and router; this option is usually not built into commercially available repeaters. For a wireless connection through several walls or over longer distances, a WLAN repeater can be used to achieve better transmission quality and thus higher speed.
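The rate halving described above for single-radio WLAN repeaters can be sketched as a simple idealized model; real-world throughput depends on many more factors (interference, distance, protocol overhead), so this is an upper bound, not a measurement:

```python
def effective_rate_mbps(base_rate_mbps, repeater_hops):
    """Idealized model: each single-radio repeater hop must retransmit every
    frame on the same channel, halving the usable rate per hop."""
    return base_rate_mbps / (2 ** repeater_hops)

print(effective_rate_mbps(300, 1))  # 150.0 -- one repeater between client and AP
print(effective_rate_mbps(300, 2))  # 75.0  -- a chain of two repeaters
```

This is also why star rather than daisy-chain placement is recommended: in a star, every client is at most one hop from the access point.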
Almost all modern, commercially available wireless access points offer a repeater mode to provide larger buildings, properties, and sites with sufficient network coverage. Using roaming, clients can move freely within the entire coverage area of the network without data traffic being impaired by disconnections. In theory, up to 254 WLAN repeaters can be operated in one network. In practice, the repeaters should not be connected in series but rather in a star configuration, as overlapping radio signals become a problem from as few as 20 repeaters.

Repeaters in the telecommunications network

Repeaters in telecommunications networks (for example, for SHDSL/G.SHDSL, HDSL and E1/Primary Rate Interface) are predominantly designated as intermediate regenerators (ZWR). They are used on both copper and fiber optic transmission paths. The basic structure of the ZWR is identical for both areas of application: it usually consists of an NT (Network Termination) and an LT (Line Termination) interconnected back-to-back. The NT terminates the incoming transmission path (e.g. SHDSL) and decodes the digital values. The LT receives the digital values through fixed wiring and encodes them into a new SHDSL signal. For this, however, the ZWR requires an additional power supply (remote feed), which can be applied from the main distribution frame or (especially on routes with a second ZWR) from the customer's NT.

Repeaters in (mobile) radio networks

Radio network repeaters for cell phone networks (GSM, UMTS, Tetrapol) are mainly used as two-way amplifiers (uplink and downlink) to enlarge a radio cell and enable reception in buildings, garages, tunnels, and ships. As well as regenerating the signal quality, more intelligent repeaters can resynchronize the electrical signal, as with the repeaters used in the direct call network of Deutsche Telekom.
The History of Indians in Alaska

Let's start by going back in time. To begin with, when we talk about Indians living in Alaska, it's important to clarify that we're referring to the Native Alaskan tribes, not individuals from the South Asian nation of India. The history of Native Alaskans is rich and fascinating, tracing back thousands of years. These indigenous people have been living in Alaska since long before it became a part of the United States. The Native Alaskan tribes, also known as the Alaskan Natives, are divided into eleven distinct cultures. Each of these cultures has its own unique traditions, languages, and ways of life. These tribes include the Inupiaq, Yupik, Aleut, Eyak, Tlingit, Haida, Tsimshian, and others. The influences of these tribes can be seen in every aspect of Alaskan life, from its culture and art to its food and festivals.

Contemporary Life of Indians in Alaska

Today, the Alaskan Natives continue to play a crucial role in shaping the state's identity. They comprise about 15% of Alaska's population, making them a significant minority group. They live in all parts of the state, from the bustling city of Anchorage to the remote, rural villages in the Alaskan wilderness. Alaskan Natives today are involved in a variety of professions. Some continue to embrace their traditional ways of life, living off the land and sea, while others work in modern industries such as healthcare, education, and government. The Alaskan Native corporations, established by the Alaska Native Claims Settlement Act in 1971, are among the largest employers in the state.

Challenges Faced by Indians in Alaska

Despite their resilience and cultural strength, the Alaskan Natives face several challenges. Many of these issues are a result of historical trauma, systemic racism, and socio-economic disparities. These include higher rates of poverty, unemployment, substance abuse, and health issues compared to the general U.S. population.
Additionally, many Alaskan Native communities are located in remote areas, which can make access to healthcare, education, and other essential services difficult. Climate change also poses a significant threat to these communities, particularly those that rely on subsistence hunting and fishing.

Preserving the Cultural Heritage of Indians in Alaska

Cultural preservation is a significant aspect of life for the Alaskan Natives. They are actively working to keep their languages, arts, and traditions alive. This includes efforts to teach the younger generation their ancestral languages, practicing traditional crafts, and organizing cultural festivals and events. Organizations like the Alaska Native Heritage Center in Anchorage are doing a fantastic job of showcasing the rich history and culture of the Alaskan Natives. They provide educational resources, host cultural events, and offer immersive experiences for visitors.

The Influence of Indians in Alaska

The influence of Alaskan Natives can be seen all across the state. Their contributions have shaped Alaska's culture, economy, and way of life. From the indigenous art showcased in galleries across the state to the traditional foods that have become a staple of Alaskan cuisine, their influence is all-pervasive. Moreover, the Alaskan Natives have been instrumental in conserving the state's natural resources and biodiversity. Their traditional knowledge and sustainable ways of living are an invaluable asset in an era of climate change and environmental degradation.

Legal Rights and Representation of Indians in Alaska

Over the years, Alaskan Natives have fought for their rights and representation, leading to significant policy changes. The Alaska Native Claims Settlement Act of 1971, for instance, was a landmark legislation that settled land claims and established the system of Native corporations. Today, Alaskan Natives are represented in various spheres of government and society.
They have their representatives in the state legislature, city councils, and on corporate boards. They also have a significant voice in matters related to land use, natural resources, and cultural preservation.

Conclusion: The Vibrant Indian Community in Alaska

In conclusion, the Alaskan Natives or Indians in Alaska are an integral part of the state's history, culture, and identity. Their resilience, cultural richness, and contributions have helped shape Alaska into what it is today. Despite the challenges they face, they continue to thrive, preserving their cultural heritage, influencing the state, and actively participating in its governance and economy. So, when one asks, "Do Indians live in Alaska?" the answer is a resounding yes. They do, and they make Alaska a more vibrant, diverse, and interesting place.
Human biology's complex, dynamic inbodied processes are inextricably linked to what we have called the circumbodied. In particular, light is a key circumbodied environmental factor that affects each cell in our body, presenting multiple dimensions for an inbodied interaction designer considering the body as a site of adaptation. For instance, circumbodied light helps us consider qualities of light and darkness (intensity, frequency) and foregrounds temporality as a factor. Body clocks, rhythms, and associated design opportunities are this article's topics of discussion. Each of our inbodied systems has, in essence, a clock. Specifically, cellular clocks interact with each other (orchestrated by the suprachiasmatic nucleus, or SCN—the master clock in our brain) to generate oscillations in gene expression. This successive gene activation forms a cycle, with the initial activation of a gene regulated by the last gene in the sequence, creating an auto-regulatory feedback loop that takes about 24 hours, in accordance with the Earth's daily light-dark cycle. These cycles are therefore referred to as circadian rhythms (circa: about, diem: a day). Preciseness of these rhythms is maintained by a process known as entrainment, whereby the SCN uses external information (e.g., light, which specialized cells in the retina pass to the brain) in order to keep our body clocks synchronized with changes in our environment. Our individual differences, both in terms of genetics and response to environmental changes, are also important to note. Different circadian phenotypes (or chronotypes) exist and show variability in the phase and amplitude of their rhythms. A common distinction is made between early and late chronotypes (early birds and night owls)—though chronotype is not binary but rather lies on a continuous spectrum from extreme early to extreme late. Demographic factors such as age, ethnicity, and gender are known to influence chronotype, which also shifts over the lifespan. 
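The entrainment mechanism described above can be caricatured in a toy model: an endogenous clock running slightly longer than 24 hours accumulates phase drift unless a daily light cue pulls it back toward the environmental cycle. The intrinsic period and the "reset fraction" below are invented for illustration, not physiological values:

```python
def free_run(intrinsic_period_h, days):
    """Accumulated phase drift (hours) with no light entrainment at all."""
    return (intrinsic_period_h - 24.0) * days

def entrained_drift(intrinsic_period_h, days, reset_fraction=0.9):
    """Each day the endogenous clock drifts a little; a morning light cue
    (standing in for SCN entrainment) corrects a fraction of that drift."""
    drift = 0.0
    for _ in range(days):
        drift += intrinsic_period_h - 24.0   # clock runs slightly long
        drift *= (1.0 - reset_fraction)      # light pulls phase back
    return drift

print(free_run(24.5, 14))                    # 7.0 h of drift in two weeks
print(round(entrained_drift(24.5, 14), 3))   # stays near zero (~0.06 h)
```

The contrast between the two outputs is the whole point of entrainment: a slightly imprecise internal clock remains locked to the 24-hour day as long as the daily environmental cue keeps arriving.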
Altogether, we can see how inbodied concepts are exemplified by our circadian system, as it is regulated by a combination of endogenous (inbodied) and environmental (circumbodied) factors and kept synchronized through behavioral (embodied) actions, such as when we expose ourselves to light and make choices related to eating, moving, and sleeping. This brief chronobiological case study also highlights the in5's principle of holism, as each of the in5 fundamental MEECS processes (see article on tuning in this section) exhibits or entrains circadian rhythms through a continuous interplay. Designing from this perspective therefore opens up numerous ways to explore monitoring, stabilizing, and helping individuals to live in better alignment with their innate biological rhythms and, in turn, to experience enhanced well-being and less disease. A specific direction that stands out is sleep-support tools, which could move away from largely generic recommendations. For example, by translating personal data streams into behavioral biomarkers of caffeine-related traits, future systems could provide a caffeine cut-off window tuned to a user's predicted genetic response to intake. Similarly, users could be provided with chronotype-tailored napping opportunities that help stabilize rather than disrupt sleep. We could also move beyond the current focus on only sleep timing and duration and instead aim to improve chronobiologically relevant metrics such as social jet lag and midsleep stability. For instance, based on the current time, the user's past sleep-wake data, and her known future schedule constraints, an adaptive system could provide optimal sleep schedules to maximize stability across days.
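Social jet lag is commonly quantified as the shift between sleep midpoints on free days versus workdays. A minimal sketch of that metric, with made-up sleep times:

```python
def midsleep(onset_h, duration_h):
    """Midpoint of a sleep episode on a 24-hour clock, from onset hour and duration."""
    return (onset_h + duration_h / 2) % 24

def social_jetlag(midsleep_free, midsleep_work):
    """Absolute shift between free-day and workday sleep midpoints, in hours."""
    diff = abs(midsleep_free - midsleep_work)
    return min(diff, 24 - diff)  # handle wrap-around past midnight

# Workdays: asleep 23:00 for 7 h -> midsleep 02:30.
# Free days: asleep 01:00 for 9 h -> midsleep 05:30.
print(social_jetlag(midsleep(1, 9), midsleep(23, 7)))  # 3.0 hours
```

A sleep-support tool of the kind sketched in the text could track this number across weeks and nudge schedules that shrink it, rather than optimizing duration alone.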
Beyond sleep, personalized tools could assist users in scheduling light exposure or meals at the times that would help restabilize their circadian systems, while chronobiology-aware fitness tools could help a person explore and determine the optimal type, intensity, and duration of exercise to perform at a particular time of day. Chrono-intelligent calendars could likewise assist an individual user in identifying when to perform cognitively intensive versus lightweight tasks, in harmony with other in5 elements, while social features could go beyond considering mutual availability to additionally consider if most participants are likely to be closer to peak alertness. Professional or classroom applications similarly focused on engagement could help pair collaborators or form teams whose members are synchronized in terms of chronotype and performance patterns. Numerous other areas are also ripe for attention, such as exploring ways to capture chronobiological data and determining appropriate evaluation strategies. All in all, inbodied interaction helps give us a roadmap for traversing this rich design space for chronobiology-aware and adaptive technology. Elizabeth Murnane is a postdoctoral scholar in computer science at Stanford University. She conducts research in human-computer interaction, informatics, social computing, and ubiquitous computing, aiming to develop technologies that empower people in managing various aspects of their daily lives and well-being, on both personal and collective levels. [email protected] Copyright held by author. Publication rights licensed to ACM. The Digital Library is published by the Association for Computing Machinery. Copyright © 2020 ACM, Inc.
The Western Wall or Wailing Wall is an ancient limestone wall in the Old City of Jerusalem. The wall was originally erected as part of the expansion of the Second Jewish Temple begun by Herod the Great, which resulted in the encasement of the natural, steep hill known to Jews and Christians as the Temple Mount, in a large rectangular structure topped by a huge flat platform, thus creating more space for the Temple itself and its auxiliary buildings. The Western Wall is considered holy due to its connection to the Temple Mount. Because of the Temple Mount entry restrictions, the Wall is the holiest place where Jews are permitted to pray, though it is not the holiest site in the Jewish faith, which lies behind it. The original, natural and irregular-shaped Temple Mount was gradually extended to allow for an ever-larger Temple compound to be built at its top. This process was finalised by Herod, who enclosed the Mount with an almost rectangular set of retaining walls, built to support extensive substructures and earth fills needed to give the natural hill a geometrically regular shape. Of the four retaining walls, the western one is considered to be closest to the former Temple, which makes it the most sacred site recognised by Judaism outside the former Temple Mount esplanade. Just over half the wall's total height, including its 17 courses located below street level, dates from the end of the Second Temple period, and is commonly believed to have been built around 19 BCE by Herod the Great, although recent excavations indicate that the work was not finished by the time Herod died in 4 BCE. The very large stone blocks of the lower courses are Herodian, the courses of medium-sized stones above them were added during the Umayyad era, while the small stones of the uppermost courses are of more recent date, especially from the Ottoman period. 
The term Western Wall and its variations are mostly used in a narrow sense for the section traditionally used by Jews for prayer, and it has also been called the 'Wailing Wall', referring to the practice of Jews weeping at the site over the destruction of the Temples. During the period of Christian Roman rule over Jerusalem (ca. 324–638), Jews were completely barred from Jerusalem except to attend Tisha be-Av, the day of national mourning for the Temples, and on this day the Jews would weep at their holy places. The term 'Wailing Wall' was thus almost exclusively used by Christians, and was revived in the period of non-Jewish control between the establishment of British Rule in 1920 and the Six-Day War in 1967. The earliest source mentioning this specific site as a place of worship is from the 16th century. The previous sites used by Jews for mourning the destruction of the Temple, during periods when access to the city was prohibited to them, lay to the east, on the Mount of Olives and in the Kidron Valley below it. With the rise of the Zionist movement in the early 20th century, the wall became a source of friction between the Jewish and Muslim communities, the latter being worried that the wall could be used to further Jewish claims to the Temple Mount and thus Jerusalem. During this period outbreaks of violence at the foot of the wall became commonplace, with a particularly deadly riot in 1929 in which 133 Jews were killed and 339 injured. After the 1948 Arab-Israeli War the Eastern portion of Jerusalem was occupied by Jordan. Under Jordanian control Jews were completely expelled from the Old City including the Jewish quarter, and Jews were barred from entering the Old City for 19 years, effectively banning Jewish prayer at the site of the Western Wall. This period ended on June 10, 1967, when Israel gained control of the site following the Six-Day War. 
Three days after establishing control over the Western Wall site, the Moroccan Quarter was bulldozed by Israeli authorities to create space for what is now the Western Wall plaza.

German crusaders known as the Livonian Brothers of the Sword began construction of Cēsis castle (Wenden) near the hill fort in 1209. When the castle was enlarged and fortified, it served as the residence of the Order's Master from 1237 until 1561, with periodic interruptions. Its ruins are some of the most majestic castle ruins in the Baltic states. Once the most important castle of the Livonian Order, it was the official residence of the masters of the order. In 1577, during the Livonian War, the garrison destroyed the castle to prevent it from falling into the hands of Ivan the Terrible, who was decisively defeated in the Battle of Wenden (1578). In 1598 it was incorporated into the Polish-Lithuanian Commonwealth, and the Wenden Voivodship was created here. In 1620 Wenden was conquered by Sweden. It was rebuilt afterwards, but was destroyed again in 1703 during the Great Northern War by the Russian army and left in a ruined state. From the end of the 16th century, the premises of the Order's castle were adapted to the requirements of the Cēsis Castle estate. When in 1777 the Cēsis Castle estate was obtained by Count Carl Sievers, he had his new residence built on the site of the eastern block of the castle, joining its end wall to the fortification tower. Since 1949, the Cēsis History Museum has been located in this New Castle of the Cēsis Castle estate. The front yard of the New Castle is enclosed by a granary and a stable-coach house, which now houses the Exhibition Hall of the Museum. Beside the granary stands the oldest brewery in Latvia, Cēsu alus darītava, which was built in 1878 during the later Count Sievers' time, but its origins date back to the period of the Livonian Order.
Further on lies the Cēsis Castle park, which was laid out in 1812. The park has the romantic character of that time, with its winding footpaths, exotic plants, and the waters of the pond reflecting the castle's ruins. Nowadays one of the towers is also open to tourists.
One half of the 2019 Nobel Prize in Physics was awarded for the discovery of a planet going around another sun, i.e. an exoplanet. Since the first exoplanet discovery in the 1990s, scientists have found a plethora of worlds around different stars in our galaxy. According to the NASA Exoplanets website, over 4,000 exoplanets have been discovered to date. Just as the planets in our solar system differ from each other, exoplanets come in various kinds too. Some of these exoplanets border on the extreme, and their discovery defied what scientists considered possible for a planet. I've picked some that intrigued me the most below; feel free to dig into the Exoplanet Catalog for more.

WASP-12b – A planet that's being eaten by its star

Imagine a planet twice the size of Jupiter, the largest planet in our solar system, and place it very close to its star. At such proximity, the gas giant would be tidally locked to its star, like the Moon is to Earth. It thus shows only one face to the star. The discrepancy in the gravitational attraction between the star-facing side and the eternally dark side makes the planet egg-shaped. Welcome to WASP-12b. Scientists have found that WASP-12b is being stripped of its atmosphere by the host star. They estimate that the star will devour the entire planet in a mere 10 million years (astronomically speaking). To think that one day the Sun will swell to become a Red Giant and might eat up Earth in a similar fashion paints quite a picture.

PSR B1257+12 b, c and d – Radiation-riddled worlds

The very first exoplanets that scientists discovered, designated PSR B1257+12 b, c and d, were found around a type of star that wasn't expected to host planets. When a massive star explodes, its core forms a neutron star – a city-sized object of ridiculously high density.
Scientists don’t expect planets around giant stars to sustain stellar explosions and keep orbiting the newly formed neutron stars, but here were three planet doing just that, around the neutron star PSR B1257+12! Neutron stars emit a lot of harmful radiation, including X-rays and gamma rays that are severely damaging to life on Earth. Ergo, the three planets going around the neutron star are constantly bathed in radiation. The planets are considered to be as lifeless as planets can be, much like the ones around the black hole in the Hollywood film Interstellar should’ve been. But planets around neutron stars might be a pretty sight from a distance, say from a spacecraft passing by. The radiation from the neutron star can cause dazzling auroras on the planets, much like on Earth and Jupiter. TeES-2b – A planet darker than coal TrES-2b is another Jupiter-sized planet very close to its star. Hot Jupiters like this are expected to be dark i.e. their clouds reflect very little light. Scientists found TrES-2b to be the record holder, as it only reflects less than 1% of starlight that comes it way. Even coal is brighter than that! The planet only shows a faint red glow due to its proximity to its star. Low reflectivity wasn’t totally unexpected. Scientists use physics and chemistry models to understand what elements the clouds of Hot Jupiters must contain. And sure enough, scientists found TeES-2b to host light-absorbing chemicals like sodium, potassium and titanium oxide. However, scientists aren’t convinced that these elements alone suffice to explain the extreme blackness of TeES-2b. It remains an unsolved problem. Super Saturn – The planet with rings 200 times bigger than Saturn’s Saturn’s rings are among the most beautiful sights in the solar system and an astronomer’s delight. But can exoplanets have rings too? Turns out, they can. Meet J1407b – an exoplanet 20 times more massive than Saturn, around which scientists found huge rings. 
The rings of Super Saturn J1407b span 180 million kilometers. That's larger than the Earth-Sun distance (150 million kilometers) and 200 times bigger than Saturn's rings! Scientists think Saturn's rings could disappear in 100 million years as their particles slowly fall onto the planet over time. One wonders just how this Super Saturn maintains its massive ring system. Scientists aren't even sure why the gravity of its star does not disintegrate the rings.

KELT-9b – A planet hotter than most stars in the Universe

Scientists were astonished by yet another unusual planet, KELT-9b. They found that the Jupiter-like planet orbits its star from pole to pole, unlike most planets ever found, which orbit in the same plane as the rotation of their star. Did something cause this 90-degree tilt in the orbit of KELT-9b? We don't know. If this wasn't bizarre enough, scientists found the planet to be quite hot, 4,800 degrees Celsius hot to be exact. That's hotter than the surface temperatures of most stars in the Universe. Give your brain a moment to digest that fact. Scientists set out to find planets around other stars in a quest for a second Earth. While that search is still on, some of the other discoveries made in the process were worldview-changing by themselves. Our galaxy is full of extreme worlds that break the rules, which have forced scientists to alter theories of how planets form. Which of these extreme worlds is your favorite? Originally published at The Print.
The GIS, or Geographical Information System, acts as a database for geographical data. Unlike GPS, which sends signals to determine a particular location, a GIS collects data, stores it, and analyzes it. It then arranges the data in a way that lets users easily understand the information that was gathered. A GIS is used for capturing, storing, checking, and displaying data related to the Earth's surface. It can show many different kinds of data on a single map, and people can easily see, analyze, and understand the relationships and patterns in it. Using GIS technology, you can compare the locations of various things and discover how they relate to each other. For example, a single GIS map can include both sites that produce pollution and sensitive areas such as wetlands; such a map helps determine which wetlands are most at risk. A GIS can help you identify information about a location, and locations can be expressed in different ways, such as longitude and latitude, ZIP code, or address. A wide range of information can be interpreted, compared, and distinguished with the help of a GIS. A Geographical Information System can include data about people, such as population, education level, and income. It can also include information about the land, such as the different kinds of vegetation, the location of streams, and the different kinds of soils, as well as the sites of factories, schools, farms, electric power lines, and roads. The GIS and data: data in many different forms can be entered into a GIS, including data already shown on a map as well as computerized or digital data.
This means that a GIS accepts different types of data and information, no matter what original source or format it has. Data capture is the process of putting information into the GIS. Data that are already in digital form, such as most tables and images taken by satellites, can simply be uploaded into the GIS. The GIS maps: once the desired data and information are in the GIS, they can be combined to produce a wide range of individual maps, depending on which data layers are included. GIS maps are used to show information about density and number. Integrating the use of GIS among researchers provides the information and data they need and enables them to look at change over time. Working with a GIS requires knowledge and skills, as collecting and analyzing the data is not as easy as it may seem. That is why knowing what a GIS provides will be of great help, especially to research centers.
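The kind of spatial question raised above (which sites fall inside which area?) ultimately reduces to geometric tests on the stored layers. A minimal, self-contained sketch of one such test, ray-casting point-in-polygon, with invented coordinates:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon given as [(x, y), ...]?
    This is the kind of spatial query a GIS answers when relating a location
    (e.g. a factory site) to an area layer (e.g. a wetland boundary)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges crossed by a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

wetland = [(0, 0), (4, 0), (4, 3), (0, 3)]   # simplified boundary polygon
print(point_in_polygon(2, 1, wetland))        # True  -- site lies in the wetland
print(point_in_polygon(5, 1, wetland))        # False -- site lies outside
```

Production GIS software layers indexing, projections, and attribute joins on top of primitives like this one, but the underlying question is the same geometric containment test.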
The number of lattices that can fill two- or three-dimensional space with periodically repeating units without leaving gaps or causing overlaps is limited. Therefore, there is a finite number of different crystal structures, and different crystalline solids may crystallise according to the same pattern. The metrics of the lattice may be different, but the symmetry is the same in such cases. Lattices which fill space without gaps are called Bravais lattices. There are five of them in two dimensions and 14 in three dimensions. The most general and least symmetric Bravais lattice in two dimensions is the oblique lattice. If the angle between the two lattice vectors is 90°, the higher symmetry of the cell gives rise to a distinct Bravais lattice, either rectangular or square depending on whether the unit cell vectors have different length or not. In the case of a rectangular lattice, we can distinguish between a primitive rectangular lattice and a centred rectangular lattice, which has an extra lattice point (atom) at the centre. The centred rectangular lattice could be set up as a primitive lattice with lower symmetry (unit cell shown in green), but convention prefers the more symmetric description. Finally, if the lattice vectors are the same length and the angle is 120°, we have another special case with higher symmetry, the hexagonal lattice. Clearly, the difference between the Bravais lattices boils down to symmetry. If the lattice vectors are at right angles, the unit cell can be folded over along lines intersecting either lattice vector at its centre: there are mirror lines crossing each pair of edges of the unit cell in the middle, and the two mirror lines intersect at the centre of the cell. This explains the special position of the central point in the centred rectangular lattice - it is located at the centre of symmetry of the cell and always maps onto itself under any symmetry operation. 
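The decision rules above (right angles, equal lengths, the 120° case) can be sketched as a small Python function. This is an illustrative simplification that classifies a lattice from its cell metrics alone, and it leaves out the centred rectangular lattice, which differs by an extra lattice point rather than by cell shape.

```python
import math

def classify_2d_lattice(a, b, gamma_deg):
    """Classify a 2D Bravais lattice from the lengths of its two lattice
    vectors (a, b) and the angle between them in degrees.

    The centred rectangular lattice is omitted: it differs from the
    primitive rectangular lattice by an extra lattice point, not by
    different cell metrics.
    """
    equal_lengths = math.isclose(a, b)
    if math.isclose(gamma_deg, 90.0):
        # Right angles: square if the vectors match, rectangular otherwise.
        return "square" if equal_lengths else "rectangular"
    if equal_lengths and math.isclose(gamma_deg, 120.0):
        # Equal lengths at 120 degrees: the hexagonal special case.
        return "hexagonal"
    return "oblique"
```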
In two dimensions, the effect of a mirror line and a rotation by 180° is the same, so the rectangular lattices both have a two-fold rotation axis at their centre. In the case of the square lattice, it is even a four-fold axis - turning the cell by 90° maps it onto itself because both lattice vectors are the same length. This is indicated by a green diamond shape in the diagram. There is also a four-fold axis in each corner of the cell: a 90° rotation around any of the lattice points again maps the lattice (though not the specific unit cell) onto itself. The hexagonal lattice has a three-fold symmetry by rotating in 120° steps around a lattice point, indicated by a triangular marker. The same symmetry principles apply in three dimensions. The concept of a centred lattice expands into three distinct cases, depending on whether the additional point is at the centre of the unit cell (body-centred), of one face and, because of the translational periodicity, its opposite (side-centred) or on all its faces (face-centred). By convention, the lattice vectors are named a, b and c and the angles are given the Greek letter corresponding to the lattice vector that is not spanning the angle, i.e. the angle between a and c is β. The equivalent of the two-dimensional oblique lattice in three dimensions is the triclinic Bravais lattice. All angles are irregular and the three lattice vectors have different lengths. More symmetric lattices arise when some or all angles are 90° or 120° or when two or all three lattice vectors have the same length. Among the lattices with exclusively right angles are the orthorhombic, tetragonal and cubic lattices depending on whether there are three, two or just one distinct lattice vectors in terms of their length. If only two angles are 90°, the cell is monoclinic, resulting in four rectangular and two skewed faces to the unit cell. 
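Returning to two dimensions, the rotation symmetries described above can be checked numerically on finite, origin-centred patches of lattice points. The patch sizes and coordinates here are arbitrary choices for illustration; because each patch is itself symmetric about the origin, a symmetry of the infinite lattice must map the patch onto itself.

```python
def rotate_90(point):
    """Rotate a point by 90 degrees about the origin: (x, y) -> (-y, x)."""
    x, y = point
    return (-y, x)

def rotate_180(point):
    """Rotate a point by 180 degrees about the origin: (x, y) -> (-x, -y)."""
    x, y = point
    return (-x, -y)

# Square lattice patch: lattice vectors of equal length at right angles.
square = {(i, j) for i in range(-3, 4) for j in range(-3, 4)}

# Rectangular lattice patch: right angles, unequal lattice vectors (1 and 2).
rectangular = {(i, 2 * j) for i in range(-3, 4) for j in range(-3, 4)}

# The square lattice survives a 90-degree rotation (four-fold axis) ...
square_has_4fold = {rotate_90(p) for p in square} == square

# ... while the rectangular lattice only survives 180 degrees (two-fold axis).
rect_has_2fold = {rotate_180(p) for p in rectangular} == rectangular
rect_has_4fold = {rotate_90(p) for p in rectangular} == rectangular
```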
If none of the angles is a right angle, the cell is trigonal if all lattice vectors are the same length and all angles are equal, but triclinic otherwise. If only one angle is skewed, the resulting cell is called monoclinic. Finally, the hexagonal lattice has one angle at 120° and two at 90°. The gaps in the table arise because the missing lattices can be expressed in terms of one of the others by choosing a different unit cell. For example, a side-centred cubic lattice is the same as a primitive tetragonal one with a smaller unit cell based on four of the corner atoms and four side-centred atoms of the cubic cell. Next (in the Concepts lecture), we'll discuss some common crystal structure types found in materials and minerals, and how they are structurally related to one another. For a more in-depth view of crystal structures (in the Structure Determination lecture), we'll see how to deal with crystals containing more than one type of atom by introducing the wider concept of symmetry groups.
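To summarise the classification rules described in this section, the conditions that distinguish the crystal systems can be collected into one illustrative Python function. It is a sketch only: exact comparisons stand in for the tolerances a real crystallographic program would need, and the hexagonal branch checks only the angles.

```python
import math

def crystal_system(a, b, c, alpha, beta, gamma):
    """Infer the crystal system from cell lengths (a, b, c) and cell
    angles in degrees (alpha, beta, gamma). Illustrative sketch only."""
    eq = math.isclose
    right = [eq(x, 90.0) for x in (alpha, beta, gamma)]
    if all(right):
        if eq(a, b) and eq(b, c):
            return "cubic"          # one distinct length, all angles 90
        if eq(a, b) or eq(b, c) or eq(a, c):
            return "tetragonal"     # two distinct lengths, all angles 90
        return "orthorhombic"       # three distinct lengths, all angles 90
    if sum(right) == 2:
        if any(eq(x, 120.0) for x in (alpha, beta, gamma)):
            return "hexagonal"      # two angles at 90, one at 120
        return "monoclinic"         # only one angle skewed
    if eq(alpha, beta) and eq(beta, gamma) and eq(a, b) and eq(b, c):
        return "trigonal"           # equal lengths, equal non-right angles
    return "triclinic"
```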
- When was art first invented?
- What was the first art?
- How much is the Mona Lisa worth?
- Can I buy the Mona Lisa?
- Why did cavemen paint?
- When did cavemen start drawing?
- How old is the earliest known human artwork?
- What is the oldest drawing?
- How did Mona Lisa die?
- How was Mona Lisa stolen?
- Who was the first person to make art?

When was art first invented?
The earliest known examples of art created on a flat surface date from 30,000 BP or later, from the Later Stone Age of Namibia, the Late Palaeolithic of Egypt and the Upper Palaeolithic of Europe.

What was the first art?
The oldest securely dated human art found so far dates to the Late Stone Age during the Upper Paleolithic, possibly from around 70,000 BC but with certainty from around 40,000 BC, when the first creative works were made from shell, stone, and paint by Homo sapiens, using symbolic thought.

How much is the Mona Lisa worth?
Guinness World Records lists Leonardo da Vinci's Mona Lisa as having the highest ever insurance value for a painting. On permanent display at the Louvre in Paris, the Mona Lisa was assessed at US$100 million on December 14, 1962. Taking inflation into account, the 1962 value would be around US$850 million in 2019.

Can I buy the Mona Lisa?
Truly priceless, the painting cannot be bought or sold according to French heritage law. As part of the Louvre collection, "Mona Lisa" belongs to the public, and by popular agreement, their hearts belong to her.

Why did cavemen paint?
Perhaps cave dwellers wanted to decorate the cave and chose animals because animals were important to their existence. A second theory is that they considered the paintings magic that would help the hunters. … Prehistoric man could also have used the paintings of animals on the walls of caves to document hunting expeditions.

When did cavemen start drawing?
The earliest known European figurative cave paintings are those of Chauvet Cave in France.
These paintings date to earlier than 30,000 BCE (Upper Paleolithic) according to radiocarbon dating. Some researchers believe the drawings are too advanced for this era and question this age.

How old is the earliest known human artwork?
In September 2018, scientists reported the discovery of the earliest known drawing by Homo sapiens, estimated to be 73,000 years old, much earlier than the 43,000-year-old artifacts previously understood to be the earliest known modern human drawings.

What is the oldest drawing?
The world's oldest known drawing is a Stone Age crayon doodle: a "hashtag" pattern drawn on rock in a South African cave, 73,000 years old.

How did Mona Lisa die?
Francesco and Lisa del Giocondo placed their eldest daughter in this cloister at age 12. She died, perhaps of plague or another infectious illness, at age 19.

How was Mona Lisa stolen?
On Aug. 21, 1911, the then-little-known painting was stolen from the wall of the Louvre in Paris. … And on that morning, with the Louvre still closed, they slipped out of the closet and lifted 200 pounds of painting, frame and protective glass case off the wall.

Who was the first person to make art?
Yet those people did not invent art, either. If art had a single inventor, she or he was an African who lived more than 70,000 years ago. That is the age of the oldest work of art in the world, a piece of soft red stone that someone scratched lines on in a place called Blombos Cave.
The Met Office and space weather

The Met Office Space Weather Operations Centre (MOSWOC) is one of three space weather prediction centres around the globe. Space weather is recognised as a significant potential threat by the UK Government; solar storms were added to the National Risk Register (NRR) of Civil Emergencies in 2011. MOSWOC provides the vital information needed to help build the resilience of UK infrastructure and industries in the face of space weather events, thereby supporting continued economic growth.

Current space weather services

The Met Office provides 24/7 forecasts and warnings of space weather for Government and responder communities, critical national infrastructure providers and the public, and will continue to develop its forecast capability.

Why is space weather such a threat?

Severe space weather events can have potentially significant impacts on the UK's critical national infrastructure. The Sun is in constant flux, and the impact of this solar activity becomes more apparent as people become more reliant on technology and systems such as satellites, Global Navigation Satellite Systems (GNSS), also known as GPS, power and radio communications. Solar flares can cause high-frequency radio and GNSS to perform erratically, and extreme coronal mass ejections (CMEs) can put power grids at risk. Space weather prediction is therefore of crucial importance to power companies, satellite operators and the aviation industry. For more information, please see What is space weather?

Working in partnership

Met Office staff work with partners around the world to develop space weather forecasting capability and share knowledge about space weather and its impacts. Here are some of our recent space weather presentations given at scientific and industry conferences.
Scientists have discovered that the shape of the female reproductive tract is not as simple as previously thought. It is less like a path and more like an obstacle course. This means that only the strongest sperm cells are able to get through to reach a woman’s egg cell. Sperm cells are fiercely competitive The average ejaculate of a man contains 40 to 150 million sperm cells. These cells all enter a race to reach and fertilise a single egg cell in a woman. In order to fertilise the egg, sperm cells have to swim along the female reproductive tract. This tract includes the vagina, cervix, uterus and fallopian tubes. New research focuses on the female reproductive tract Alireza Abbaspourrad and his colleagues at Cornell University in New York examined how sperm cells travel through the female reproductive tract 1. The team used small-scale models and computer simulations involving sperm cells from men and bulls. The team specifically focused on tight spots in the female reproductive tract, called strictures. These narrow regions act like gates as they only allow the strongest sperm cells through. To replicate this scenario, the team used a ‘microfluidic’ device. This consisted of three compartments which were eye-shaped, and they were connected to each other by a narrow channel. This replicated the conditions inside the female body, where the diameter of the reproductive tract varies. One particularly tight area is the opening between the uterus and the fallopian tubes. Only the strongest sperm are able to make it through Abbaspourrad and his team found that only the strongest sperm cells were able to pass through the narrow spots, or strictures. This is because the weaker sperm cells could not propel themselves against the current of fluid, which travels in the opposite direction to them. The current forced the weaker sperm cells backwards. This meant that the sperm tended to accumulate below the narrow spots. 
The team had the same results with both bull and human sperm. Sperm cells form their own hierarchy The team found that the movement of sperm cells was particularly interesting. When weaker sperm cells reached a narrow spot, they were pushed backwards before attempting to pass through it again. The scientists noticed that their movement was in the shape of a butterfly, or sideways figure-of-eight. This meant that the fastest sperm cells within this group were closer to the opening of the narrow spot, and the slower sperm cells were at the back of the crowd, further away. As a result of this movement, the best swimmers in this group were closest to the opening of the stricture. Therefore these faster sperm cells were the cells which were eventually able to pass through the stricture, and continue their journey to the egg cell. Sperm cells do not have an easy ride There are many obstacles facing sperm cells in their journey to reach an egg. These include the acidic environment in the female reproductive tract, which has a key role in killing bacteria. This prevents sexually transmitted infections (STIs) from occurring 2. This acidity is neutralised by semen, which is alkaline. Previous research has investigated the journey of sperm cells The female reproductive tract can be difficult to navigate. But previous research has found that sperm cells tend to swim along walls. Therefore they follow the walls of the female reproductive tract to eventually reach the egg 4. Another difficulty sperm cells face is swimming through the fluid in the female reproductive tract. Muscle contractions and fluid secretions create currents within the fluid, which can interfere with the movement of sperm cells. Scientists have found that at low velocities, sperm cells are able to orientate themselves to swim against the current and this is called rheotaxis 5. However at higher velocities, the sperm cells are pushed away by the currents. 
In particular, weaker sperm cells struggle to propel themselves forwards. What does the new research mean? The study shows that the fastest sperm cells have the best chance of winning the race to reach an egg cell. Therefore these sperm have the highest chances of successfully fertilising an egg. This is biologically important as the fastest sperm cells are assumed to be the best and strongest. Therefore it is advantageous for these cells to produce offspring. As a result of this, women’s bodies have mechanisms to eliminate weaker sperm cells from the race, such as strictures. The path to fertilising an egg cell is challenging for sperm cells. As a result, the fastest and strongest sperm cells have the best chance of successful fertilisation. There are ways you can boost the health of your sperm cells to maximise your chances of having a child. One method is taking fertility supplements which can improve sperm count, motility, morphology and semen volume. Read our comparison of 12 top fertility supplements here. - Zaferani M, Palermo GD, Abbaspourrad A. Strictures of a microchannel impose fierce competition to select for highly motile sperm. Science Advances. Internet. 2019. 5(2). Available from: http://advances.sciencemag.org/content/5/2/eaav2111 ↩ - Tevi-Benissan C, Belec L, Levy M, Schneider-Fauveau V, Mohamed AS, Hallouin MC, Matta M, Gresenguet G. In vivo semen-associated pH neutralization of cervicovaginal secretions. American Society for Microbiology Journals. Internet. 1997. 4(3):367-374. Available from: https://cvi.asm.org/content/4/3/367 ↩ - Suarez SS, Pacey AA. Sperm transport in the female reproductive tract. Human Reproduction Update. Internet. 2006. 12(1):23-37. Available from: https://www.ncbi.nlm.nih.gov/pubmed/16272225 ↩ - Suarez SS. Mammalian sperm interactions with the female reproductive tract. Cell and Tissue Research. Internet. 2016. 363(1):185-194. 
Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4703433/ ↩ - Kantsler V, Dunkel J, Blayney M, Goldstein RE. Rheotaxis facilitates upstream navigation of mammalian sperm cells. eLife. Internet. 2014. 3. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4031982/ ↩
Each one of us inherits 23 chromosomes from our mother and 23 chromosomes from our father. Residing on each of those 46 chromosomes are genes. It is estimated that humans have approximately 23,000 genes, so one chromosome can be home to literally hundreds of genes. Each gene is like a computer command that directs the RNA within a specific cell to manufacture a specific protein needed to help the cell function. Different genes command RNA to make different proteins, so multiple genes are needed to keep just one cell functioning properly. DNA (deoxyribonucleic acid) is the technical name for the material that makes up your 46 chromosomes. Those 46 chromosomes, in actuality, represent an instruction manual that tells RNA in each cell to manufacture the various proteins necessary to keep the cells in your body running smoothly. Interestingly, an individual gene can be turned on or off by something called methylation. Methylation is a chemical process in which methyl groups are attached to DNA, making genes active or inactive. What triggers methylation? Many things, including habits. The simple act of forging a new habit can have profoundly beneficial or harmful effects on your health and wellbeing. A good habit, such as reading to learn, is an example of a habit that turns on certain good genes. These good genes, once activated by your reading habit, instruct RNA to produce proteins that help grow and strengthen the brain cells being called into service as you read. Reading, in effect, stimulates genes to help maintain and grow brain cells. So long as you keep reading and learning, those genes will keep churning out proteins that strengthen brain cells, which, in turn, boosts your IQ. A bad, time-wasting habit, such as sitting on a couch watching Netflix for hours at a time, is an example of a habit that keeps good genes toggled in the off position. This TV-watching habit, in effect, keeps those learning genes inactive due to your lack of mental activity.
Without these good genes working to maintain and strengthen brain cells, the brain cells and their synapses weaken. In other words, your brain cells become impaired by your time-wasting habits. This can result in a lower IQ or cognitive impairment. When you have bad habits, good genes can't do their job. We can see this manifest itself in the form of various disorders such as obesity, Type II diabetes, heart disease, dementia, and all sorts of other preventable diseases. You are who you are because of your genes. And your genes make you who you are, through your habits.
It sounds downright creepy at first. As human beings, each of us has 300–500 different species of bacteria living in a complex ecosystem within our guts. In fact, the bacteria living in our guts nearly outnumber our own cells. But while the idea might sound creepy, these “bugs” in our intestines are actually quite important to our health and well-being, says Sara Campbell, assistant professor of kinesiology and health at Rutgers University. They aid in digestion, breaking down nutrients for us that we wouldn’t otherwise be able to access, and they also work in concert with our immune system to protect us from disease. “These tummy bugs are there to help us, and without them, we wouldn’t be able to survive,” explains Campbell. Where do we get those bugs? At a macro level, every human gut microbiome has a lot in common, says Hannah Holscher, assistant professor of food science and human nutrition at the University of Illinois at Urbana-Champaign. We all tend to have the same five basic phyla (categories) of bacteria in our guts. But at a micro level, there are substantial differences from one person to the next, since we each carry different strains of microbes within those five broad phyla. “It’s somewhat similar to our fingerprints, in that you’ll have your own unique composition of gut microbiome,” says Holscher. The specific composition of a person’s gut bacteria is influenced right from birth. As a baby passes out of its mother’s body, the baby is colonized with bacteria from the mother, which passes into the infant’s gut. (Importantly, the gut microbiomes of babies born via C-section are typically not as rich as those born vaginally.) A baby’s earliest feeding experiences also have an impact. Those who are breastfed have markedly different gut microbiomes than babies who are formula fed. And antibiotic use, especially in the first few years of life, also plays a role. 
Two Finnish studies published in 2016 both showed that when children received antibiotics early in life—especially the broad-spectrum antibiotics commonly used to treat respiratory infections—their gut microbiomes developed at a slower rate and showed less diversity than peers who hadn’t taken antibiotics. The gut-brain connection Although it might seem like the gut and the brain should be separate from each other, they’re actually not. In fact, the two are directly connected by the vagus nerve, a “walkie-talkie” of sorts that allows microbes in the gut to send nerve signals to the brain. The vagus nerve is a two-way line of communication, says Holscher, with the gut sending signals to the brain and the brain, in turn, sending signals back to the gut. That gut-brain connection, as it turns out, is an important key to our cognitive functioning and our mental health. Indeed, scientists are currently uncovering evidence that specific compositions of gut bacteria are related both to mood and to a number of cognitive disorders and diseases. Here’s what we currently know about the impact of the gut microbiome on several aspects of brain health: 1. Depression and anxiety There are multiple routes through which people can develop symptoms of clinical depression and anxiety, says Ruth Ann Luna, director of medical metagenomics at the Microbiome Center at Texas Children’s Hospital. Therefore, people with such mood disorders can have gut microbiomes that look quite different. In other words, there’s no single gut bacterial profile that “marks” all individuals with depression. Still, says Luna, there is convincing evidence of a link between the gut microbiome and mood disorders. Much of the research so far has been in animals rather than humans, but several studies have shown that when researchers deplete the microbiomes of mice—in other words, when they deprive the mice of a normal, healthy mix of bacteria in their guts—there is a measurable effect on their moods. 
"That absence of a healthy microbiome can contribute to a variety of emotional and behavioral symptoms, like anxiety, increased pain perception, and depression," says Luna. Just as a lack of good bacteria in the microbiome can contribute to mood disorders, new research shows that introducing "good" bacteria into the gut can improve symptoms of depression. Several studies have found such an effect in animals, but a small 2017 study from researchers in Canada found that the effect occurs in human beings too. Patients with depression who were given a specific type of probiotic—a dietary supplement containing healthy bacteria—had reduced depression scores at the end of 10 weeks compared to patients who were given a placebo. Although probiotics can have a positive effect on mood, that doesn't mean the same probiotic would work for everyone, says Luna. That's because people with mood disorders have different types of imbalances in their gut microbiomes. "Across-the-board interventions are not always effective because we're not considering what's already there," she says.

2. Autism

First things first. There is little to suggest that gut bacteria cause autism, per se; most of the work connecting gut bacteria to autism shows only a correlation between the two, not a cause-effect relationship. There are some studies suggesting that a specific bacterial community in the gut can induce a condition in animals that looks like autism spectrum disorder, according to Luna, but the more important question is what causes such a bacterial community to exist in the first place. That said, problems with gut function—such as constipation—are a common complaint among individuals on the autism spectrum, says Luna, and the constant pain associated with these gut malfunctions appears to drive at least some of the characteristic behavioral symptoms associated with autism. "We've finally hit the point where there is a general acceptance that the gut microbiome does play a role," says Luna.
As is the case with mood disorders, scientists haven’t been able to tease out any single bacteria (or set of bacteria) that “identifies” individuals with autism, simply because autism is a spectrum disorder and there is variation in the gut profiles of people on different ends of the spectrum. Still, says Luna, studies in mice have shown that there are meaningful differences in the gut microbiomes of individuals with autism spectrum disorder compared to individuals without the disorder. Perhaps more importantly, altering the gut microbiome—for example, through taking probiotics or adhering to a specific diet—has been shown to correct some of the gastrointestinal symptoms associated with autism spectrum disorder. Improving those symptoms, in turn, appears to impact the behavioral symptoms of autism. Therefore, according to Luna, research on the connection between gut bacteria and autism may not necessarily provide a “cure” for autism, but it is providing significant hope for improving the lives of individuals who live with the disorder. 3. Cognitive function One exciting frontier in research on the gut microbiome is its relationship to crippling cognitive diseases, such as Alzheimer’s disease. In early 2017, researchers in Sweden who were studying the development of Alzheimer’s in mice found that mice with Alzheimer’s showed a different composition of bacteria in their guts than mice without the disease. Not only that, they were able to show that there is actually a cause-effect link between the gut microbiome and Alzheimer’s. The researchers took bacteria from the guts of diseased mice and transferred them into the guts of germ-free mice (mice who had no bacteria in their guts at all). Those who received bacteria from diseased mice developed more of the classic signs of Alzheimer’s disease than mice who remained germ-free. If that’s depressing, there’s good news as well. 
Research has also shown that introducing healthy bacteria into the gut microbiome of people suffering from Alzheimer’s can have a positive influence on their cognitive function. In one 2016 study, for example, researchers in Iran gave Alzheimer’s patients a daily dose of probiotics that contained two different kinds of beneficial bacteria. After just 12 weeks, those who had been taking the probiotics showed a moderate improvement in their performance on a standard test designed to measure cognitive impairment. Previous studies had shown similar effects in animals, but this was the first study to show that altering the gut microbiome improves cognition in human beings as well. According to the researchers, the findings offer hope that improving the mix of bacteria in the gut might be a way to slow down or even prevent the development of Alzheimer’s and related diseases. Reposted with permission from Vibrant Life magazine.
Executive Functioning: What Is It and Why Is Everyone Suddenly Talking About It?

Wednesday May 15, 2019

While research on executive functioning has been taking place since the early 1970s, it has recently become a common buzzword in the worlds of education and speech-language pathology. This may be due to new research showing that a child's future success depends less on their ability to memorize math facts and decode words, and more on having strong executive functioning and social-emotional skills (see Unbabbled podcast episode 4 for info on social-emotional skills). Children's executive functioning skills gradually develop throughout childhood, beginning as early as infancy and continuing through the teen years.

So, what is executive functioning? This is a simple question with a complex answer. There are 33 varying definitions of executive functioning used in the research and educational fields. However, there is overall agreement that executive functioning refers to "an all-encompassing construct or an umbrella term for the complex cognitive processes that underlie flexible, goal-directed behavior." In simpler terms, it's often described as the "air traffic controller" of our brains: planning; organizing; regulating behavior; attending to important information; remembering past, present and upcoming tasks; etc.

There are three generally agreed upon cognitive processes that make up executive functioning (with many skills that fall within these areas):
- Inhibitory Control: Also known as self-regulation, this includes your ability to restrain your own thoughts/actions, to initiate tasks, regulate your emotions, and ignore distractions to focus on important information.
- Cognitive Flexibility: Your ability to think about something in different ways.
This includes your ability to see a variety of perspectives, solve problems, plan and organize, shift attention and engage in future thinking (what things will look like in the future that may be different from right now).
- Working Memory: Your ability to hold information in your memory and be able to use it. This is necessary for following directions, sequencing, listening comprehension and holding numbers in your head to complete math problems.

Other skills that fall under the executive functioning umbrella include self-monitoring, goal setting and reasoning.

Why are executive functioning skills so important for learning? Executive functioning skills are necessary for a child to:
- Learn within a group setting
  - Block out distractions and pay attention to important information and tasks
  - Regulate energy levels and emotions
  - Plan, organize and complete assignments
  - Self-monitor their work, problem solve and make changes when needed
  - Initiate work and maintain attention through completion
- Interact with peers
  - Take the perspective of others
  - Think flexibly to engage in conversations
  - Initiate and maintain play and conversations with peers
  - Self-regulate energy level and emotions
- Develop reading comprehension
  - Remember words and sentence meanings when decoding words
  - Remember important information from a passage or story
  - Sequence events in stories
  - Hold sound-letter associations in mind while sounding out new words

A child with impaired executive functioning may have difficulty:
- Controlling impulses to "follow the rules"
- Attending to tasks
- Initiating, planning and completing assignments
- Following directions
- Remembering key information to answer questions (math, reading, science, etc.)
- Telling stories
- Staying organized
- Engaging in group discussions or staying on topic during conversations
- Completing homework and/or turning in completed work

Who may struggle with executive function difficulties?
Children with the following diagnoses may struggle with executive functioning: - Attention-deficit disorder (ADD) - Attention-deficit hyperactivity disorder (ADHD) - Autism spectrum disorder - Sensory processing disorder - Learning differences - Language disorders References and Resources: - Executive Functioning Fact Sheet, National Center for Learning Disabilities (NCLD) 2008. - Executive Functions, Adele Diamond (2013) Annual Review of Psychology., Vol 64. - Handbook of Executive Functioning, Goldstein & Naglieri editors (2012), Springer Publishing. - Executive Function in Education, Second Edition: From Theory to Practice, Lynn Meltzer (2018). The Guilford Press. - Center on the Developing Child, Harvard University (www.developingchild.harvard.edu).
California Fish Species

Fish: Salt Creek Pupfish
Scientific Name: Cyprinodon salinus

Salt Creek pupfish are found exclusively in the Salt Creek drainage of Death Valley. One 1.5 km section of stream is perennial, and the amount of habitat available grows with increased rainfall. Conversely, extreme rainfall events such as flash floods may result in high mortality rates. Salt Creek pupfish live primarily in an entrenched section of stream where pools are lined with plants and may be as deep as 2 m. They survive extreme changes in environment and live in water that ranges in temperature from near freezing to 40°C and may have salinities as high as 35 ppt. However, pupfish may be able to seek out deep parts of pools that rarely exceed 28°C. Salt Creek pupfish feed mostly on algae and cyanobacteria, but may also consume snails and crustaceans. They are capable of rapid re-colonization, and populations may go through drastic changes in a matter of months. Salt Creek pupfish presumably increase population numbers by going through several generations in a year. Breeding habits in these fish are similar to those of the desert pupfish.

Watershed: Death Valley-Lower Amargosa Watershed

Please note, watersheds are at the USGS 8-digit Hydrologic Unit Code (HUC) scale, so they often include many sub-watersheds. If a species occurs in any sub-watershed within the HUC, the species appears within the HUC. Link to an EPA page that shows HUCs.
Futuristic computing designs inside beetle scales

Though it began as a science fair project involving a shiny Brazilian beetle, Lauren Richey’s research may advance the pursuit of ultra-fast computers that manipulate light rather than electricity. While still at Springville High School, Lauren approached Brigham Young University professor John Gardner about using his scanning electron microscope to look at the beetle known as Lamprocyphus augustus. When Lauren and Professor Gardner examined the scales, they noticed something unusual for iridescent surfaces: they reflected the same shade of green at every angle. The reason? Each beetle scale contained a crystal with a honeycomb-like interior that had the same structural arrangement as carbon atoms in a diamond. What that has to do with futuristic computers may seem a stretch, but here is how the two connect: scientists have long dreamed of computer chips based on light rather than electricity. In “optical computing,” chips would need photonic crystals to channel light particles. That’s easier said than done when dealing with high frequencies such as visible light. During her first year at BYU, Lauren co-authored a study describing the photonic properties of these beetle scales. In response, one photonics expert told Wired that “This could motivate another serious round of science.” Potentially, these beetle scales could serve as a mold or template to which material such as titanium dioxide or silica can be added. The original beetle material can then be removed with acid, leaving an inverse structure of the beetle crystal: a now-usable photonic crystal in the visible light region. “By using nature as templates, you can create things that you cannot make synthetically,” Lauren said.
Now two years shy of a degree in physics, Lauren received funding from ORCA to examine the photonic crystal structures of two more species of iridescent beetles. With the help of a new ion beam microscope, she’s so far nailed down the structure of one (it’s a “face-centered cubic array of nanoscopic spheres”) and is still working on the other. From BYU, Lauren hopes to launch into a Ph.D. program at either MIT or Cal-Berkeley and continue research in photonics.
An informative abstract is a concise, jargon-free paragraph that explains the topic of a research paper, the research findings, and the author’s conclusions. The abstract should be understandable enough to stand on its own and at the same time entice readers into wanting to read more. If you find it challenging to boil your work down to 250 words, remember that your abstract should only include the few pieces of knowledge that you want your readers to take away from the report, even if they have forgotten the details of the main paper, according to the University of Mississippi Writing Center.

1. Write a draft of the entire paper or report before starting the abstract. As you complete your work, take note of important elements that you want to stress in the summary.
2. Introduce your subject with a sentence about the reason for your research. What made you start the project? Why should readers be interested?
3. Give an explanation of the problem that your experiment or research will address.
4. Explain the methods you used to answer the problem you just outlined.
5. Provide the results of your research or experiment.
6. Offer your conclusions based on the results and include additional questions that your research has raised.
7. Polish the abstract draft, paying particular attention to avoiding passive verbs and wordy phrases. For example, you can shorten “a ratio of 2 to 1” to “twice as” without losing meaning, says the Colorado State University Writing Center. Make sure each sentence in the abstract flows smoothly into the next.

Your professor, or the journal for which you are writing, may have a required abstract format. Be sure to follow specific guidelines as you complete your work. Use essential keywords in your abstract, so that readers searching for the topic electronically can find your report easily.
How to Start a Compost Pile

What is Compost?

Before we talk about how to start composting, it is important to know what compost is, how to use it in your garden and the benefits of using compost. Compost is a nutrient-rich soil amendment that is the result of aerobic biodegradation of organic materials. Microorganisms process this organic matter and turn it into compost that can be used in your vegetable garden or flowerbeds or around ornamental landscaping features or fruit trees.

Benefits of Starting a Compost Pile

The benefits of starting a compost pile are many. Here are five of the most common reasons people start backyard compost piles for use in their gardens.

1. Reduce waste: Instead of tossing kitchen scraps and unbleached coffee filters in the trash, you can reduce the amount of waste you send to landfills by adding these items to your compost bin.
2. Reduce recycling: Recycling is wonderful and is an important part of reducing our collective impact on the environment. However, recycling things like cardboard and paper still requires transportation, energy and water. We can reduce the amount of paper and cardboard we send to recycling centers by adding it to our compost bins instead.
3. Improve soil: Compost improves your garden soil by adding nutrients, increasing porosity for better soil structure, increasing the number of microbes in your soil, and feeding both the new and already present microbes for better plant health.
4. Conserve water: Compost helps soil retain moisture better. This allows your plants to benefit from the moisture retention and allows you to reduce the amount of water required to keep your plants healthy.
5. Reduce erosion and run-off: Compost absorbs water and holds on to it, which means it can reduce run-off and slow erosion.

Further reading: 14 Reasons to Start Using Compost in Your Garden

How Do I Use Compost?
Compost can be mixed in with soil before planting, can be added around established plants as top dressing or can be sprinkled throughout your garden as a light mulch that will add nutrients to the soil over time. If you have flowerbeds or a vegetable garden that you need to overwinter, you can spread a thick layer of compost over the area, and then till it into the soil in spring. You can also use compost to make compost tea, which you can then use to both water and provide nutrients to your plants.

Where Should I Put My Compost Pile?

One of the most important parts of how to start composting is choosing a location for your compost pile. Your compost bin is going to have bugs, might smell a little bit sometimes and, depending on the type of enclosure you use, may attract rats or other wildlife or may leak compost tea. Compost bins and piles are also not necessarily attractive, so, if this is of concern to you, you will also need to take that into consideration. You want to place your compost bin or pile close enough to be easily accessible from your kitchen, since you are less likely to make the effort to take kitchen scraps to your bin if it is not in a convenient location. However, you do not want it right by your back door or too close to your outdoor living areas. This is particularly true if you plan to use an open compost pile, since this may attract rats, mice and other critters — and will definitely attract insects. If you have a vegetable garden where you plan on using your compost, you may want to locate the compost pile near your garden. This will make it easier to transport the finished compost to the beds in which you want to use it. Be sure to keep your pile away from fences and other wooden structures, since the moisture and decomposing matter can hasten rot in wood or, at the very least, discolor it.
If you are using a compost bin that allows liquid to escape from the bin, you will probably not want to place your bin on hardscapes, such as concrete or paving stone patios, since this may require regular rinsing off to keep your patio clean and free of stains.

What Type of Compost Bin Should I Use?

There are a handful of compost bin styles to choose from, including options that sit on the ground and options that are raised above the ground on a stand or other structure. Some are stationary, some can be turned to mix your compost, and some have more than one compartment. The first choice you will need to make is whether you want to use a manufactured compost bin, build your own compost bin, or simply make a compost pile with no bin at all. Anything that is open or sits on the ground may attract rodents. Therefore, if you purchase or build a bin that sits on the ground, you may want to place gopher mesh on the ground under your bin. If you choose to use an open bin or just a pile, you can use a product like hardware cloth to partially close off the area, but if there are rats or mice in the vicinity, they will probably find their way into your compost. Rotating bins that are off the ground generally do not attract rodents and are easier to mix. Stationary piles or bins require you to use a pitchfork or shovel to turn your pile, but rotating bins can simply be tumbled to mix your compost ingredients. There is nothing wrong with using open bins or simple compost piles; just keep in mind that you will be throwing things like eggshells, vegetable peelings and torn up cardboard on the pile. So, if you are the type of host that does not want your guests to see rotting kitchen scraps at your next social gathering, you may want to choose an enclosed bin for your composting needs.

How to Start Composting

After you have chosen a location and either purchased a compost bin, built a bin or created a spot for your pile, it is time to figure out how to start composting.
The easiest way to start is to simply start tossing kitchen scraps in your bin or pile. Ideally, you want to layer your ingredients by adding a few inches of green ingredients alternated with a thicker layer of brown ingredients. Your green ingredients are materials that are higher in nitrogen, such as fresh lawn clippings or vegetable peelings. Your brown ingredients bring in the carbon and include things like dried leaves and cardboard. Once you have created several layers of green and brown materials, you patiently wait for the ingredients to decompose and turn into nutrient-rich compost that can be used around your yard. While it is ideal to layer your compost pile in this manner, there are plenty of folks who just toss stuff in the bin and stir it every once in a while to mix the green and brown materials. This method of composting is usually more convenient for backyard composters who do not start out with enough materials to make green and brown layers and may have only a handful of ingredients to put in the bin some days. If you choose to go with this more-casual approach to how to make a compost pile, just try to keep an eye on your ratio of brown to green materials. If you add a large amount of lawn clippings after mowing your grass, look around for brown ingredients, such as twigs or dead leaves, to balance things out. If there are no dried leaves or sticks around, tear up some cardboard to add in with the lawn clippings.

What Can I Put in My Compost Bin?

You can put most of your kitchen scraps in your compost bin, such as vegetable peelings, the ends you cut off carrots, fruits and vegetables that have been in your fruit basket a little too long, and the remnants of the salad you had for lunch. You can also include things like coffee grounds, unbleached coffee filters, tea bags, and eggshells. You can put some cooked foods, such as bread, in your compost bin, but keep in mind that this is going to attract critters and is usually considered a bad idea.
Outside of the kitchen, you can add paper, cardboard, fireplace ashes, weeds that have not gone to seed, grass clippings, herbivore manure, and dead plants that did not have fungus or disease.

Further reading: 20 Things You Can Compost in Your Backyard

What Should I Not Put in My Compost Bin?

There are composters out there who put pretty much everything in their compost piles, including their own feces, so it can sometimes seem like anything goes in regard to how to start a compost pile. However, you are better off sticking to the above-mentioned items and avoiding additions that may cause issues or could even be dangerous for your family. For example, omnivore and carnivore manure can contain harmful bacteria and disease. If you use compost made from this manure on plants grown for food, you could be putting your health and the health of your family in danger. This is why only herbivore manure is recommended for compost piles. Some folks compost animal products, such as dairy, meat and bones, but it is also best to avoid these. While they will eventually break down, introducing animal products into your compost pile will attract critters and can cause your pile to smell really bad. You may want to avoid adding cooked foods, particularly foods cooked with oil, to your pile for the same reasons. Do not include weeds that have flowers or that have clearly gone to seed. This will introduce seeds into your compost that may not be destroyed if the temperature in your pile does not get hot enough. This means that you may spread those weed seeds throughout your garden when you use your compost.

Further reading: 15 Things You Should NOT Compost

Going Beyond the Basics

Once you feel like you have a handle on how to start composting and want to go beyond the basics with your backyard compost pile or bin, check out Backyard Composting Tips: 16 Accessories to Take Your Compost Pile to the Next Level.
We inspire a love for learning and develop a passion for knowledge

How we do this

We prepare the environment, provide the resources and stimulus and take a proactive approach to engage with children as they are learning. We use language to describe what is happening and help to scaffold children’s learning and understanding. This involvement is thoughtful, purposeful and focused. Being intentional about literacy and numeracy means taking an active role in promoting it, through the experiences we provide and also through the way that we interact with children and deliberately focus on literacy and numeracy concepts. Literacy and numeracy are incorporated throughout the curriculum and integrated in all types of play. While incidental learning is an essential strategy in play-based learning, it is important to recognise that not all learning will happen in this way. Incidental learning can be a powerful and effective method and is encouraged. However, if we want children to make important connections and to transfer knowledge and understanding between experiences, then we need to think beyond a purely incidental approach. This is particularly true for complex ideas such as those involved in literacy and numeracy. This will involve spontaneous responses to children’s play where we take advantage of opportunities to talk about literacy and numeracy as they arise, as well as more carefully planned experiences that we have deliberately designed to introduce or extend an idea or concept. We have a wide range of books and actively promote reading, even if it is only looking at pictures. Introducing children to books early has huge social and educational advantages. Reading to toddlers sets the foundation for later independent reading.
Reading problems can be challenging to fix later on and, if exposure to reading starts in the pre-school years, not only can most reading problems be prevented, but children also develop a love for reading which lasts a lifetime.
The rise of industrialization in the second half of the nineteenth century led to many changes in the American social fabric. The population expanded, poverty spread, and crime became a bigger problem. The legal system became an important means for policing people in American communities. Children had been prosecuted in American courts since their earliest days. In fact, children in the eighteenth century had been subject to the death penalty. In the early nineteenth century, reformers felt that the law dealt too harshly with children. They created special schools where children could be reformed rather than punished. Judges could sentence children who had committed crimes to go to these schools until they reached adulthood. In the early twentieth century, juvenile delinquency was removed from the penal (or criminal) code and separate courts were established for juvenile and family matters. Children are no longer considered to be legally accountable for their actions in the same way adults are. Judges today are required to consider what would be least restrictive when deciding punishments for juveniles. What do these documents tell us about social problems that existed at the turn of the twentieth century?

Check for Understanding
- Students write a paragraph answering the essential question.
- Students discuss how these documents relate to what they have read in their textbook on urban life in the Progressive Era.
When students need to submit any research paper, chances are they’re also required to write a special MLA bibliography page. Oftentimes, teachers also ask to include a list of references and a works cited page. These are all different names for the same thing: a brief list of sources, like articles, websites, newspapers, and books, used to research and produce a good paper. This page makes it easier for others to see where you found relevant and interesting information for your paper. You can write a bibliography page manually, but it takes some time, work, and effort. Some students use special programs to create and update this page automatically to save time and get help while ensuring all of their references are correct and accurate. This style (Modern Language Association) is used very often to write academic papers and cite sources within the humanities and liberal arts. Check the MLA Handbook for common guidelines and to become familiar with the basic rules for the general format of MLA in-text citations, research papers, footnotes and endnotes, etc. Based on this formatting style, you need to put a works cited page at the end of your draft, and all of its entries should correspond to the works that you cite in the text. When writing your MLA bibliography page, take into account rules such as these: All entries must be listed alphabetically by the last name of each author, or by editor names for entire edited collections. Author names are written with the last name first, followed by first names and middle initials. Don’t list any degrees or titles with them, but you should include suffixes when needed. When citing more than one work by a specific author, order all entries alphabetically by title, and use three hyphens in place of the author name for each entry after the first one. Alphabetize all works with unknown authors by their titles, using shortened versions.
Lincoln’s Emancipation Proclamation - First and Foremost a Military Measure

The Emancipation Proclamation—enacted exactly 150 years ago, on Jan. 1, 1863—marked the beginning of the end for slavery in the United States, but historians say that the document was first and foremost a military measure. “As you read the Emancipation Proclamation, you can see that there’s nothing eloquent about it,” said Margaret Washington, professor of history and American studies at Cornell University. “The term itself sounds eloquent, but if you read the document, it is specifically for purposes of war—and winning the war, either by military power or by forcing the South back into the Union by virtue of emancipating their slaves.” Glancing back over the course of American history, it is impossible to separate the Civil War from the end of slavery. Nonetheless, in President Abraham Lincoln’s time, many considered the two developments mutually exclusive, even Lincoln himself.

A Progressive President

In his 1861 inaugural address, Lincoln promised to leave slavery alone in states where it already existed. In an 1862 New York Tribune letter, the president stated that his primary objective was saving the Union, and “not either to save or destroy slavery.” In the midst of America’s bloodiest conflict—a war that pitted brother against brother—the Emancipation Proclamation offered a time-tested military strategy that would quickly bring the fighting to an end. “That’s what was done with the American Revolution,” Washington said. “The British promised slaves freedom if they defected from the southern plantations in Virginia and came over to the British side and fought with them. Americans in the North used that same tactic with their slaves to fight against the British, and that’s how Northerners became free.” While Lincoln stated for years that he was ethically opposed to slavery, he was no abolitionist.
But the influence of the American anti-slavery movement had become a powerful cultural force, especially among radicals in Lincoln’s own Republican Party. Washington’s freshman class traced the trajectory of this change in mindset by examining Lincoln’s writings and debate transcripts. These historical documents portray a man who had gone from tempering anti-slavery sentiment with a defense of white superiority, to suggesting weeks before his assassination that educated black men should be allowed to vote.

“The celebrations that went around the world when the Emancipation Proclamation came across the telegraph lines were huge. And yet it didn’t really free anybody.”
—Margaret Washington, professor of History and American Studies, Cornell University

“It was merely a suggestion, but you can see the way he’s moving at the time of his death,” Washington said. “He was becoming more and more progressive.”

Burning the Constitution

Even before Lincoln became president, much had already been done to end slavery in America. While abolitionists were driven by a moral imperative, slaveholders had the law on their side, and their interests remained legally protected by a formidable Democratic Party, which was solidly pro-slavery in both North and South. However, the biggest legal hurdle to the anti-slavery effort was also a document fundamental to the nation—the U.S. Constitution. “Abolitionists adored the Declaration of Independence, but they despised the Constitution. As a matter of fact, William Lloyd Garrison burned it in public,” Washington said. The political might of early 19th-century slave owners can be seen in the 1850 Fugitive Slave Act, which obligated all U.S. citizens to send runaways back to their masters. Months after the law was passed, a fugitive who had escaped to Boston compelled the expense of thousands of troops to ensure his return.
At an anti-slavery rally ignited by the absurdly expensive capture of this single fugitive, Garrison demonstrated abolitionist fury by burning copies of both the Fugitive Slave Act and the Constitution—a document he described as a “covenant with death” and an “agreement with Hell.” Although the Constitution did not contain the term “slave” until the practice was finally outlawed under the 13th Amendment, scholars say that the context is clear. According to Washington, former Supreme Court Justice Thurgood Marshall confirmed the abolitionist attitude toward the Constitution. “It was a pro-slavery document, except that the flexibility of the Constitution allowed it to have the 13th, 14th, and 15th Amendments that changed it,” Washington said.

Read the full article at: theepochtimes.com
In animals photoreception refers to mechanisms of light detection that lead to vision and depends on specialized light-sensitive cells called photoreceptors, which are located in the eye. The quality of vision provided by photoreceptors varies enormously among animals. For example, some simple eyes such as those of flatworms have few photoreceptors and are capable of determining only the approximate direction of a light source. In contrast, the human eye has 100 million photoreceptors and can resolve one minute of arc (one-sixtieth of a degree), which is about 4,000 times better than the resolution achieved by the flatworm eye. The following article discusses the diversity and evolution of eyes, the structure and function of photoreceptors, and the central processing of visual information in the brain. For more information about the detection of light, see optics; for general aspects concerning the response of organisms to their environments, see sensory reception. The eyes of animals are diverse not only in size and shape but also in the ways in which they function. For example, the eyes of fish from the deep sea often show variations on the basic spherical design of the eye. In these fish, the eye’s field of view is restricted to the upward direction, presumably because this is the only direction from which there is any light from the surface. This makes the eye tubular in shape. Some fish living in the deep sea have reduced eyelike structures directed downward (e.g., Bathylychnops, which has a second lens and retina attached to the main eye); it is thought that the function of these structures is to detect bioluminescent creatures. On the ocean floor, where no light from the sky penetrates, eyes are often reduced or absent. However, in the case of Ipnops, which appears to be eyeless, the retina is still present as a pair of plates covering the front of the top of the head, although there is no lens or any other optical structure. 
The function of this eye is unknown. The placing of the eyes in the head varies. Predators, such as felines and owls, have forward-pointing eyes and the ability to judge distance by binocular triangulation. Herbivorous species that are likely to be victims of predation, such as mice and rabbits, usually have their eyes almost opposite each other, giving near-complete coverage of their surroundings. In addition to placement in the head, the structure of the eye varies among animals. Nocturnal animals, such as the house mouse and opossum, have almost spherical lenses filling most of the eye cavity. This design allows the eye to capture the maximum amount of light possible. In contrast, diurnal animals, such as humans and most birds, have smaller, thinner lenses placed well forward in the eye. Nocturnal animals usually have retinas with a preponderance of photoreceptors called rods, which do not detect colour but perceive size, shape, and brightness. Strictly diurnal animals, such as squirrels and many birds, have retinas containing photoreceptors called cones, which perceive both colour and fine detail. A slit pupil is common in nocturnal animals, as it can be closed more effectively in bright light than a round pupil. In addition, nocturnal animals, such as cats and bush babies, are usually equipped with a tapetum lucidum, a reflector behind the retina designed to give receptors a second chance to catch photons that were missed on their first passage through the retina. Animals such as seals, otters, and diving birds, which move from air to water and back, have evolved uniquely shaped corneas—the transparent membrane in front of the eye that separates fluids inside the eye from fluids outside the eye. The cornea functions to increase the focusing power of the eye; however, optical power is greatly reduced when there is fluid on both sides of the membrane. 
As a result, seals, which have a nearly flat cornea with little optical power in air or water, rely on a re-evolved spherical lens to produce images. Diving ducks, on the other hand, compensate for the loss of optical power in water by squeezing the lens into the bony ring around the iris, forming a high curvature blip on the lens surface, which shortens its focal length (the distance from the retina to the centre of the lens). One of the most interesting examples of amphibious optics occurs in the “four-eyed fish” of the genus Anableps, which cruises the surface meniscus with the upper part of the eye looking into air and the lower part looking into water. It makes use of an elliptical lens, with the relatively flat sides adding little to the power of the cornea and the higher curvature ends focusing light from below the surface, where the cornea is ineffective. Though the eyes of animals are diverse in structure and use distinct optical mechanisms to achieve resolution, eyes can be differentiated into two primary types: single-chambered and compound. Single-chambered eyes (sometimes called camera eyes) are concave structures in which the photoreceptors are supplied with light that enters the eye through a single lens. In contrast, compound eyes are convex structures in which the photoreceptors are supplied with light that enters the eye through multiple lenses. The possession of multiple lenses is what gives these eyes their characteristic faceted appearance. In most of the invertebrate phyla, eyes consist of a cup of dark pigment that contains anywhere from a few photoreceptors to a few hundred photoreceptors. In most pigment cup eyes there is no optical system other than the opening, or aperture, through which light enters the cup. This aperture acts as a wide pinhole and restricts the width of the cone of light that reaches any one photoreceptor, thereby providing a very limited degree of resolution. 
Pigment cup eyes are very small, typically 100 μm (0.004 inch) or less in diameter. They are capable of supplying information about the general direction of light, which is adequate for finding the right part of the environment in which to seek food. However, they are of little value for hunting prey or evading predators. In 1977 Austrian zoologist Luitfried von Salvini-Plawen and American biologist Ernst Mayr estimated that pigment cup eyes evolved independently between 40 and 65 times across the animal kingdom. These estimates were based on a variety of differences in microstructure among pigment cup eyes of different organisms. Pigment cup eyes were undoubtedly the starting point for the evolution of the much larger and more optically complex eyes of mollusks and vertebrates. Pinhole eyes, in which the size of the pigment aperture is reduced, have better resolution than pigment cup eyes. The most impressive pinhole eyes are found in the mollusk genus Nautilus, a member of a cephalopod group that has changed little since the Cambrian Period (about 542 million to 488 million years ago). These organisms have eyes that are large, about 10 mm (0.39 inch) across, with millions of photoreceptors. They also have muscles that move the eyes and pupils that can vary in diameter, from 0.4–2.8 mm (0.02–0.11 inch), with light intensity. These features all suggest an eye that should be comparable in performance to the eyes of other cephalopods, such as the genus Octopus. However, because there is no lens and each photoreceptor must cover a wide angle of the field of view, the image in the Nautilus eye is of very poor resolution. Even with the pupil at its smallest, each receptor views an angle of more than two degrees, compared with a few fractions of a degree in Octopus. In addition, because the pupil has to be small in order to achieve even a modest degree of resolution, the image produced in the Nautilus eye is extremely dim. 
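The angular figures quoted for Nautilus can be sanity-checked with small-angle arithmetic: the angle each photoreceptor views is roughly the pupil diameter divided by the pupil-to-retina distance. Below is a minimal sketch; the ~9 mm pupil-to-retina distance is an illustrative assumption inferred from the ~10 mm eye, not a value stated in the text.

```python
import math

def pinhole_acceptance_angle_deg(aperture_mm, distance_mm):
    """Angle (degrees) of the visual field seen by one photoreceptor
    behind a pinhole aperture: small-angle approximation, aperture / distance."""
    return math.degrees(aperture_mm / distance_mm)

# Pupil diameters from the text (0.4-2.8 mm); 9 mm distance is an assumption.
best = pinhole_acceptance_angle_deg(0.4, 9.0)
worst = pinhole_acceptance_angle_deg(2.8, 9.0)
print(f"smallest pupil: {best:.1f} degrees per receptor")
print(f"largest pupil:  {worst:.1f} degrees per receptor")
```

With the smallest pupil this comes out at roughly 2.5 degrees per receptor, consistent with the "more than two degrees" figure, and it makes the trade-off explicit: opening the pupil for more light widens each receptor's view and degrades resolution.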
Thus, a limitation of pinhole eyes is that any improvement in resolution is at the expense of sensitivity; this is not true of eyes that contain lenses. There are one or two other eyes in gastropod mollusks that could qualify as pinhole eyes, notably those of the abalone genus Haliotis. However, none of these eyes rival the eyes of Nautilus in size or complexity. Relative to pinhole eyes, lens eyes have greatly improved resolution and image brightness. Lenses were formed by increasing the refractive index of material in the chamber by adding denser material, such as mucus or protein. This converged incoming rays of light, thereby reducing the angle over which each photoreceptor receives light. The continuation of this process ultimately results in a lens capable of forming an image focused on the retina. Most lenses in aquatic animals are spherical, because this shape gives the shortest focal length for a lens of a given diameter, which in turn gives the brightest image. Lens eyes focus an image either by physically moving the lens toward or away from the retina or by using eye muscles to adjust the shape of the lens. For many years the lens properties that allow for the formation of quality images in the eye were poorly understood. Lenses made of homogeneous material (e.g., glass or dry protein) suffer from a defect known as spherical aberration, in which peripheral rays are focused too strongly, resulting in a poor image. In the 19th century, Scottish mathematician and physicist James Clerk Maxwell discovered that the lens of the eye must contain a gradient of refractive index, with the highest degree of refraction occurring in the centre of the lens. In the late 19th century the physiologist Matthiessen showed that this was true for fish, marine mammals, and cephalopod mollusks. It is also true of many gastropod mollusks, some marine worms (family Alciopidae), and at least one group of crustaceans, the copepod genus Labidocera. 
Two measurements, focal length and radius of the lens, can be used to distinguish gradient lenses from homogeneous lenses. For example, gradient lenses have a much shorter focal length than homogeneous lenses with the same central refractive index: the focal length of a gradient lens is about 2.5 lens radii, compared with about 4 radii for a homogeneous lens. The ratio of focal length to lens radius is known as the Matthiessen ratio (named for its discoverer, German physicist and zoologist Ludwig Matthiessen) and is used to determine the optical quality of lenses. The lens eyes of fish and cephalopod mollusks are superficially very similar. Both are spherical and have a Matthiessen ratio lens that can be focused by moving it toward and away from the retina, an iris that can contract, and external muscles that move the eyes in similar ways. However, fish and cephalopod mollusks evolved quite independently of each other. An obvious difference between the eyes of these organisms is in the structure of the retina. The vertebrate retina is inverse, with the neurons emerging from the front of the retina and the nerve fibres burrowing out through the optic disk at the back of the eye to form the optic nerve. The cephalopod retina is everse, meaning the fibres of the neurons leave the eye directly from the rear portions of the photoreceptors. The photoreceptors themselves are different too. Vertebrate photoreceptors, the rods and cones, are made of disks derived from cilia, and they hyperpolarize (become more negative) when light strikes them. In contrast, cephalopod photoreceptors are made from arrays of microvilli (fingerlike projections) and depolarize (become less negative) in response to light. The developmental origins of the eyes are also different. Vertebrate eyes come from neural tissue, whereas cephalopod eyes come from epidermal tissue.
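The roughly 4-radii figure for a homogeneous lens can be recovered from the standard thick-lens formula, since for a symmetric sphere the principal planes coincide at its centre. The sketch below assumes textbook refractive indices (a dense protein-like lens of n ≈ 1.53 immersed in water of n ≈ 1.33); these values are illustrative assumptions, not taken from the text:

```python
def ball_lens_focal_radii(n_lens: float, n_medium: float) -> float:
    """Focal length (from the lens centre, in units of the lens radius)
    of a homogeneous spherical lens immersed in a medium, computed with
    the thick-lens formula: two refracting surfaces of radius r,
    separated by the diameter 2r of lens material."""
    r = 1.0                                       # work in lens radii
    p_surface = (n_lens - n_medium) / r           # power of each surface
    p_total = 2 * p_surface - p_surface**2 * (2 * r) / n_lens
    return n_medium / p_total                     # focal length in the medium

# Homogeneous lens: comes out near 4 lens radii, as stated above.
print(f"{ball_lens_focal_radii(1.53, 1.33):.2f} lens radii")
# A gradient (Matthiessen) lens with the same peak index reaches ~2.5 radii.
```

The homogeneous case lands near 3.8 radii, in line with the "4 radii" figure; only an internal refractive-index gradient gets the focal length down to the Matthiessen ratio of about 2.5.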
This is a classic case of convergent evolution and demonstrates the development of functional similarities derived from common constraints. When vertebrates emerged onto land, they acquired a new refracting surface, the cornea. Because of the difference in refractive index between air and water, a curved cornea is an image-forming lens in its own right. Its focal length is given by f = nr/(n-1), where n is the refractive index of the fluid of the eye, and r is the radius of curvature of the cornea. All land vertebrates have lenses, but the lens is flattened and weakened compared with a fish lens. In the human eye the cornea is responsible for about two-thirds of the eye’s optical power, and the lens provides the remaining one-third. Spherical corneas, similar to spherical lenses, can suffer from spherical aberration. To avoid this, the human cornea developed an ellipsoidal shape, with the highest curvature in the centre. A consequence of this nonspherical design is that the cornea has only one axis of symmetry, and the best image quality occurs close to this axis, which corresponds with central vision (as opposed to peripheral vision). In addition, central vision is aided by a region of high photoreceptor density, known as the fovea or the less clearly defined “area centralis,” that lies close to the central axis of the eye and specializes in acute vision. Corneal eyes are found in spiders, many of which have eyes with excellent image-forming capabilities. Spiders typically have eight eyes, two of which, the principal eyes, point forward and are used in tasks such as the recognition of members of their own species. Hunting spiders use the remaining three pairs, secondary eyes, as movement detectors. However, in web-building spiders, the secondary eyes are underfocused and are used as navigation aids, detecting the position of the Sun and the pattern of polarized light in the sky. 
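The corneal focal-length formula given above is easy to evaluate. Using commonly cited human values (assumed here, not stated in the text) of fluid index n ≈ 1.336 and corneal radius r ≈ 7.8 mm:

```python
def corneal_focal_length_mm(n: float, r_mm: float) -> float:
    """f = n*r/(n - 1): focal length of a curved cornea separating air
    (index 1.0) from the fluid of the eye (index n)."""
    return n * r_mm / (n - 1.0)

# Assumed human values: aqueous index ~1.336, corneal radius ~7.8 mm
f = corneal_focal_length_mm(1.336, 7.8)
print(f"{f:.1f} mm")
```

This yields a focal length of about 31 mm, consistent with the cornea supplying roughly two-thirds of the optical power of an eye about 24 mm long.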
Jumping spiders have the best vision of any spider group, and their principal eyes can resolve a few minutes of arc, which is many times better than the eyes of the insects on which they prey. The eyes of jumping spiders are also unusual in that the retinas scan to and fro across the image while the spider identifies the nature of its target. Insects also have corneal single-chambered eyes. The main eyes of many insect larvae consist of a small number of ocelli, each with a single cornea. The main organs of sight of most insects as adults are the compound eyes, but flying insects also have three simple dorsal ocelli. These are generally underfocused, giving blurred images; their function is to monitor the zenith and the horizon, supplying a rapid reaction system for maintaining level flight. Scallops (Pecten) have about 50–100 single-chambered eyes in which the image is formed not by a lens but by a concave mirror. In 1965 British neurobiologist Michael F. Land (the author of this article) found that although scallop eyes have a lens, it is too weak to produce an image in the eye. In order to form a visible image, the back of the eye contains a mirror that reflects light to the photoreceptors. The mirror in Pecten is a multilayer structure made of alternating layers of guanine and cytoplasm, and each layer is a quarter of a wavelength (about 0.1 μm in the visible spectrum) thick. The structure produces constructive interference for green light, which gives it its high reflectance. Many other mirrors in animals are constructed in a similar manner, including the scales of silvery fish, the wings of certain butterflies (e.g., the Morpho genus), and the iridescent feathers of many birds. The eyes of Pecten also have two retinas, one made up of a layer of conventional microvillus receptors close to the mirror and out of focus, and the second made up of a layer with ciliary receptors in the plane of the image. 
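The quarter-wavelength condition behind the Pecten mirror can be made concrete: for constructive interference, each layer's optical thickness (physical thickness times refractive index) must equal a quarter of the reflected wavelength. The indices below (guanine ≈ 1.83, cytoplasm ≈ 1.33) are assumed textbook values, not figures from the text:

```python
def quarter_wave_thickness_nm(wavelength_nm: float, n: float) -> float:
    """Layer thickness whose optical path is a quarter wavelength,
    the condition for constructive interference in a multilayer mirror."""
    return wavelength_nm / (4.0 * n)

# Peak reflectance for green light (~500 nm), assumed indices:
print(quarter_wave_thickness_nm(500, 1.83))  # guanine layer, ~68 nm
print(quarter_wave_thickness_nm(500, 1.33))  # cytoplasm layer, ~94 nm
```

Both thicknesses come out near 0.1 μm, matching the layer scale quoted above, and the alternation of high- and low-index quarter-wave layers is what gives the stack its high green reflectance.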
The second layer responds when the image of a dark object moves across it; this response causes the scallop to shut its shell in defense against potential predation. Reflecting eyes such as those of Pecten are not common. A number of copepod and ostracod crustaceans possess eyes with mirrors, but the mirrors are so small that it is difficult to tell whether the images are used. An exception is the large ostracod Gigantocypris, a creature with two parabolic reflectors several millimetres across. It lives in the deep ocean and probably uses its eyes to detect bioluminescent organisms on which it preys. The images are poor, but the light-gathering power is enormous. A problem with all concave mirror eyes is that light passes through the retina once, unfocused, before it returns, focused, from the mirror. As a result, photoreceptors see a low-contrast image, and this design flaw probably accounts for the rare occurrence of these eyes. Compound eyes are made up of many optical elements arranged around the outside of a convex supporting structure. They fall into two broad categories with fundamentally different optical mechanisms. In apposition compound eyes each lens with its associated photoreceptors is an independent unit (the ommatidium), which views the light from a small region of the outside world. In superposition eyes the optical elements do not act independently; instead, they act together to produce a single erect image lying deep in the eye. In this respect they have more in common with single-chambered eyes, even though the way the image is produced is quite different. Apposition eyes were almost certainly the original type of compound eye and are the oldest fossil eyes known, identified from the trilobites of the Cambrian Period. Although compound eyes are most often associated with the arthropods, especially insects and crustaceans, compound eyes evolved independently in two other phyla, the mollusks and the annelids. 
In the mollusk phylum, clams of the genera Arca and Barbatia have numerous tiny compound eyes, each with up to a hundred ommatidia, situated around their mantles. In these tiny eyes each ommatidium consists of a photoreceptor cell and screening pigment cells. The eyes have no lenses and rely simply on shadowing from the pigment tube to restrict the field of view. In the annelid phylum the tube worms of the family Sabellidae have eyes similar to those of Arca and Barbatia at various locations on the tentacles. However, these eyes differ in that they have lenses. The function of the eyes of both mollusks and annelids is much the same as the mirror eyes of Pecten; they see movement and initiate protective behaviour, causing the shell to shut or the organism to withdraw into a tube. In arthropods most apposition eyes have a similar structure. Each ommatidium consists of a cornea, which in land insects is curved and acts as a lens. Beneath the cornea is a transparent crystalline cone through which rays converge to an image at the tip of a receptive structure, known as the rhabdom. The rhabdom is rodlike and consists of interdigitating fingerlike processes (microvilli) contributed by a small number of photoreceptor cells. The number of photoreceptor cells varies, with eight being the typical number found in insects. In addition, there are pigment cells of various kinds that separate one ommatidium from the next; these cells may act to restrict the amount of light that each rhabdom receives. Beneath the photoreceptor cells there are usually three ganglionic layers—the lamina, the medulla, and the lobula—that form a set of neuronal relays, and each photoreceptor cell is connected to these layers by a single axon. The neuronal relays map and remap input from the retinal photoreceptors, thereby generating increasingly complex responses to contrast, motion, and form. In aquatic insects and crustaceans the corneal surface cannot act as a lens because it has no refractive power.
Some water bugs (e.g., Notonecta, or back swimmers) use curved surfaces behind and within the lens to achieve the required ray bending, whereas others use a structure known as a lens cylinder. Similar to fish lenses, lens cylinders bend light, using an internal gradient of refractive index, highest on the axis and falling parabolically to the cylinder wall. In the 1890s Austrian physiologist Sigmund Exner was the first to show that lens cylinders can be used to form images in the eye. He discovered this during his studies of the ommatidia of the horseshoe crab Limulus. A problem that remained poorly understood until the 1960s is the relationship between the inverted images formed in individual ommatidia and the image formed across the eye as a whole. The question was first raised in the 1690s when Dutch scientist Antonie van Leeuwenhoek observed multiple inverted images of his candle flame through the cleaned cornea of an insect eye. Later investigations of the ommatidial structure revealed that in apposition eyes each ommatidium is independent and sees a small portion of the field of view. The field of view is defined by the lens, which also serves to increase the amount of light reaching the rhabdom. Each rhabdom scrambles and averages the light it receives, and the individual ommatidial images are sent via neurons from the ommatidia to the brain. In the brain, the separate images are perceived as a single overall image. The array of images formed by the convex sampling surface of the apposition compound eye is functionally equivalent to the concave sampling surface of the retina in a single-chambered eye.

Neural superposition eyes

Conventional apposition eyes, such as those of bees and crabs, have a similar optical design to the eyes of flies (Diptera). However, in fly eyes the photopigment-bearing membrane regions of the photoreceptors are not fused into a single rhabdom.
Instead, they stay separated as eight individual rodlets (effectively seven, since two lie one above the other), known as rhabdomeres, each with its own axon. This means that each ommatidium should be capable of a seven-point resolution of the image, which raises the problem of incorporating multiple inverted images into a single erect image that the ordinary apposition eye avoids. In 1967 German biologist Kuno Kirschfeld showed that the angles between the individual rhabdomeres in one ommatidium are the same as those between adjacent ommatidia. As a result, each of the seven rhabdomeres in one ommatidium shares a field of view with a rhabdomere in a neighbouring ommatidium. In addition, all seven rhabdomeres that share a common field of view send their axons to the same place in the first ganglionic layer—the lamina. Thus, at the level of the lamina the image is no different from that in an ordinary apposition eye. However, because the axons of all seven photoreceptors that view the same point converge on the same set of second-order neurons, the image at the level of the lamina is effectively seven times brighter than in the photoreceptors themselves. This allows flies to fly earlier in the morning and later in the evening than other insects with eyes of similar resolution. This variant of the apposition eye has been called neural superposition. Although there is no further spatial resolution within a rhabdom, the various photoreceptors in each ommatidium do have the capacity to resolve two other features of the image, wavelength and plane of polarization. The different photoreceptors do not all have the same spectral sensitivities (sensitivities to different wavelengths). For example, in the honeybee there are three photopigments in each ommatidium, with maximum sensitivities in the ultraviolet, the blue, and the green regions of the spectrum. This forms the basis of a trichromatic colour vision system that allows bees to distinguish accurately between different flower colours.
Some butterflies have four visual pigments, one of which is maximally sensitive to red wavelengths. The most impressive array of pigments is found in mantis shrimps (order Stomatopoda), where there are 12 visual pigments in a special band across the eye. Eight pigments cover the visible spectrum, and four cover the ultraviolet region. Unlike humans, many arthropods have the ability to resolve the plane of polarized light. Single photons of light are wave packets in which the electrical and magnetic components of the wave are at right angles. The plane that contains the electrical component is known as the plane of polarization. Sunlight contains photons polarized in all possible planes and therefore is unpolarized. However, the atmosphere scatters light selectively, in a way that results in a pattern of polarization in the sky that is directly related to the position of the Sun. Austrian zoologist Karl von Frisch showed that bees could navigate by using the pattern of polarization instead of the Sun when the sky was overcast. The organization of the photopigment molecules on the microvilli in the rhabdoms of bees makes this type of navigation possible. A photon will be detected only if the light-sensitive double bond of the photopigment molecule lies in the plane of polarization of the photon. The rhabdoms in the dorsal regions of bee eyes have their photopigment molecules aligned with the axes of the microvilli, which lie parallel to one another in the photoreceptor. As a result, each photoreceptor is able to act as a detector for a particular plane of polarization. The whole array of detectors in the bee’s eyes is arranged in a way that matches the polarization pattern in the sky, thus enabling the bee to easily detect the symmetry plane of the pattern, which is the plane containing the Sun. The other physical process that results in polarization is reflection. 
For example, a water surface polarizes reflected light so that the plane of polarization is parallel to the plane of the surface. Many insects, including back swimmers of Notonecta, make use of this property to find water when flying between pools. The mechanism is essentially the same as in the bee eye. There are pairs of photoreceptors with opposing microvillar orientations in the downward-pointing region of the eye, and when the photoreceptors are differentially stimulated by the polarized light from a reflecting surface, the insect makes a dive. The reason that humans cannot detect polarized light is that the photopigment molecules can take up all possible orientations within the disks of the rods and cones, unlike the microvilli of arthropods, in which the molecules are constrained to lie parallel to the microvillar axis. The number of ommatidia in apposition eyes varies from a handful, as in primitive wingless insects and some ants, to as many as 30,000 in each eye of some dragonflies (order Odonata). The housefly has 3,000 ommatidia per eye, and the vinegar fly (or fruit fly) has 700 per eye. In general, the resolution of the eye increases with increasing ommatidial number. However, the physical principle of diffraction means that the smaller the lens, the worse the resolution of the image. This is why astronomical telescopes have huge lenses (or mirrors), and it is also why the tiny lenses of compound eyes have poor resolution. A bee’s eye, with 25-μm- (0.001-inch-) wide lenses, can resolve about one degree. The human eye, with normal visual acuity (20/20 vision), can resolve lines spaced less than one arc minute (one-sixtieth of one degree) apart, which is about 60 times better than a bee. In addition, the single lens of the human eye has an aperture diameter (in daylight) of 2.5 mm (0.1 inch), 100 times wider than that of a single lens of a bee. 
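The diffraction figures in the paragraph above follow from the Rayleigh criterion, θ = 1.22λ/D, where D is the aperture diameter. Assuming green light of 0.5 μm wavelength:

```python
import math

def rayleigh_limit_deg(wavelength_um: float, aperture_um: float) -> float:
    """Rayleigh diffraction limit theta = 1.22 * lambda / D, in degrees."""
    return math.degrees(1.22 * wavelength_um / aperture_um)

WAVELENGTH_UM = 0.5                                # green light (assumed)
bee = rayleigh_limit_deg(WAVELENGTH_UM, 25)        # 25-um facet lens
human = rayleigh_limit_deg(WAVELENGTH_UM, 2500)    # 2.5-mm pupil
print(f"bee ~{bee:.1f} deg, human ~{human * 60:.2f} arcmin")
```

This gives about 1.4 degrees for a 25-μm bee facet (the same order as the one-degree figure quoted above) and about 0.8 arc minute for a 2.5-mm human pupil, matching the roughly 60-fold difference in acuity.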
If a bee were to attempt to improve its resolution by a factor of two, it would have to double the diameter of each lens, and it would need to double the number of ommatidia to exploit the improved resolution. As a result, the size of an apposition eye would increase as the square of the required resolution, leading to absurdly large eyes. In 1894 British physicist Henry Mallock calculated that a compound eye with the same resolution as human central vision would have a radius of 6 metres (19 feet). Given this problem, a resolution of one-quarter of a degree, found in the large eyes of dragonflies, is probably the best that any insect can manage. Because increased resolution comes at a very high cost in terms of overall eye size, many insects have eyes with local regions of increased resolution (acute zones), in which the lenses are larger. The need for higher resolution is usually connected with sex or predation. In many male dipteran flies and male (drone) bees, there is an area in the upper frontal region of the eyes where the facets are enlarged, giving resolution that is up to three times more acute than elsewhere in the eye. The acute resolution is used in the detection and pursuit of females. In one hover fly genus (Syritta) the males make use of their superior resolution to stay just outside the distance at which females can detect them. In this way a male can stalk a female on the wing until she lands on a flower, at which point he pounces. In a few flies, such as male bibionids (March flies) and simuliids (black flies), the high- and low-resolution parts of the eye form separate structures, making the eye appear doubled. Insects that catch other insects on the wing also have special “acute zones.” Both sexes of robber fly (family Asilidae) have enlarged facets in the frontal region of the eye, and dragonflies have a variety of more or less upward-pointing high-resolution regions that they use to spot flying insects against the sky. 
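Mallock's startling figure can be reproduced from two scaling relations: the inter-ommatidial angle is roughly the facet diameter divided by the eye radius (Δφ ≈ D/R), and diffraction forces D ≈ λ/Δφ, so the eye radius grows as R ≈ λ/Δφ². The sketch below assumes a wavelength of 0.5 μm and drops constant factors near 1:

```python
import math

def apposition_eye_radius_m(resolution_deg: float,
                            wavelength_um: float = 0.5) -> float:
    """Radius of an apposition eye whose facets are just large enough to
    beat diffraction (D ~ lambda/dphi) and packed so the inter-ommatidial
    angle dphi ~ D/R matches the target resolution: R ~ lambda/dphi**2."""
    dphi = math.radians(resolution_deg)
    return (wavelength_um * 1e-6) / dphi**2

print(apposition_eye_radius_m(1.0))       # bee-like, ~1 deg resolution
print(apposition_eye_radius_m(1.0 / 60))  # human acuity, 1 arcmin
```

One degree of resolution needs an eye only a couple of millimetres in radius, but one arc minute, human central acuity, pushes the radius to about 6 metres: Mallock's result.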
The hyperiid amphipods, medium-sized crustaceans from the shallow and deep waters of the ocean, have visual problems similar to those of dragonflies, although in this case they are trying to spot the silhouettes of potential prey against the residual light from the surface. This has led to the development of highly specialized divided eyes in some species, most notably in Phronima, in which the whole of the top of the head is used to provide high resolution and sensitivity over a narrow (about 10 degrees) field of view. Not all acute zones are upward-pointing. Some empid flies (or dance flies), which cruise around just above ponds looking for insects trapped in the water surface, have enlarged facets arranged in a belt around the eye’s equator—the region that views the water surface. Crepuscular (active at twilight) and nocturnal insects (e.g., moths), as well as many crustaceans from the dim midwater regions of the ocean, have compound eyes known as superposition eyes, which are fundamentally different from the apposition type. Superposition eyes look superficially similar to apposition eyes in that they have an array of facets around a convex structure. However, outside of this superficial resemblance, the two types differ greatly. The key anatomical features of superposition eyes include the existence of a wide transparent clear zone beneath the optical elements and a deep-lying retinal layer, usually situated about halfway between the eye surface and the centre of curvature of the eye. Unlike apposition eyes, where the lenses each form a small inverted image, the optical elements in superposition eyes form a single erect image, located deep in the eye on the surface of the retina. The image is formed by the superimposed (hence the name superposition) ray-contributions from a large number of facets. 
Thus, in some ways this type of eye resembles the single-chambered eye in that there is only one image, which is projected through a transparent region onto the retina.

Refracting, reflecting, and parabolic optical mechanisms

In superposition eyes the number of facets that contribute to the production of a single image depends on the type of optical mechanism involved. There are three general mechanisms, based on lenses (refracting superposition), mirrors (reflecting superposition), and lens-mirror combinations (parabolic superposition). The refracting superposition mechanism was discovered by Austrian physiologist Sigmund Exner in the 1880s. He reasoned that the geometrical requirement for superposition was that each lens element should bend light in such a way that rays entering the element at a given angle to its axis would emerge at a similar angle on the same side of the axis. Exner realized that this was not the behaviour of a normal lens, which forms an image on the opposite side of the axis from the entering ray. He worked out that the only optical structures capable of producing the required ray paths were two-lens devices, specifically two-lens inverting telescopes. However, the lenslike elements of superposition eyes lack the necessary power in their outer and inner refracting surfaces to operate as telescopes. Exner solved this by postulating that the elements have a lens cylinder structure with a gradient of refractive index capable of bending light rays continuously within the structure. This is similar to the apposition lens cylinder elements in the Limulus eye (see above Apposition eyes); the difference is that the telescope lenses would be twice as long. The lens cylinder arrangement produces the equivalent of a pair of lenses, with the first lens producing a small image halfway down the structure and the second lens turning the image back into a parallel beam. In the process the ray direction is reversed.
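This reversal of ray direction can be checked with elementary ray-transfer (ABCD) matrices. The sketch below models Exner's two-lens inverting telescope as two thin lenses of equal focal length f separated by 2f; the value f = 1.0 is an arbitrary illustrative choice:

```python
# Ray state is (height y, angle u); each optical element is a 2x2 matrix.

def matmul(a, b):
    """Multiply two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def thin_lens(f):
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def gap(d):
    return [[1.0, d], [0.0, 1.0]]

f = 1.0  # arbitrary focal length for illustration
# Light meets lens 1, travels 2f, then lens 2 (rightmost matrix acts first):
system = matmul(thin_lens(f), matmul(gap(2 * f), thin_lens(f)))
print(system)  # angle row is [0, -1]: a ray entering at angle u leaves at -u
```

The system matrix's angle row is (0, −1), so every entering angle u emerges as −u regardless of ray height: exactly the reversed, same-side geometry that superposition imaging requires.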
Thus, the emerging beam is on the same side of the axis as the entering beam—the condition for obtaining a superposition image from the whole array. In the 1970s, studies using an interference microscope, a device capable of exploring the refractive index distribution in sections of minute objects, showed that Exner’s brilliant idea was accurate in all important details. There is one group of animals with eyes that fit the anatomical criteria for superposition but that have optical elements that are not lenses or lens cylinders. These are the long-bodied decapod crustaceans, such as shrimps, prawns, crayfish, and lobsters. The optical structures are peculiar in that they have a square rather than a circular cross section, and they are made of homogeneous low-refractive index jelly. For a period of 20 years—between 1955, when interference microscopy showed that the jelly structures lacked appropriate refracting properties, and 1975, when the true nature of these structures was discovered—there was much confusion about how these eyes might function. Working with crayfish eyes, German neurobiologist Klaus Vogt found that these unpromising jelly boxes were silvered with a multilayer reflector coating. A set of plane mirrors, aligned at right angles to the eye surface, changes the direction of rays (in much the same way as lens cylinders), thereby producing a single erect image by superposition. The square arrangement of the mirrors has particular significance. Rays entering the eye at an oblique angle encounter two surfaces of each mirror box rather than one surface. In this case, the pair of mirrors at right angles acts as a corner reflector. Corner reflectors reflect an incoming ray through 180 degrees, irrespective of the ray’s original direction. As a result, the reflectors behave as though they were a single plane mirror at right angles to the ray.
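The corner-reflector property described above is just vector reflection applied twice. A minimal two-dimensional sketch:

```python
def reflect(ray, normal):
    """Reflect a 2-D direction vector off a plane mirror with the given
    unit normal: r' = r - 2 (r . n) n."""
    dot = ray[0] * normal[0] + ray[1] * normal[1]
    return (ray[0] - 2 * dot * normal[0], ray[1] - 2 * dot * normal[1])

# Two mirrors at right angles (normals along x and y), as in the square
# mirror boxes of the crayfish eye:
ray = (0.6, 0.8)  # arbitrary incoming direction
out = reflect(reflect(ray, (1, 0)), (0, 1))
print(out)  # the ray emerges exactly reversed, whatever its incoming angle
```

Because each component of the direction vector is negated by one of the two perpendicular mirrors, the outgoing ray is always antiparallel to the incoming one, which is why the mirror boxes can superimpose rays from many facets onto a single retinal point.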
This ensures that all parallel rays reach the same focal point and means that the eye as a whole has no single axis, which allows the eye to operate over a wide angle. The third type of superposition eye, discovered in 1988 in the crab genus Macropipus by Swedish zoologist Dan-Eric Nilsson, has optical elements that use a combination of a single lens and a parabolic mirror. The lens focuses an image near the top of the clear zone (similar to an apposition eye), but oblique rays are intercepted by a parabolic mirror surface that lines the crystalline cone beneath the lens. The parabolic mirror unfocuses the light and redirects it back across the axis of the structure, producing an emerging ray path similar to that of a refracting or reflecting superposition eye. All three types of superposition eyes have adaptation mechanisms that restrict the amount of light reaching the retina in bright conditions. In most cases, light is restricted by the migration of dark pigment (held between the crystalline cones in the dark) into the clear zone; this cuts off the most oblique rays. However, as the pigment progresses inward, it cuts off more and more of the image-forming beam until only the central optical element supplies light to the rhabdom located immediately below it. This effectively converts the superposition eye into an apposition eye. Because up to a thousand facets may contribute to the image at any one point on the retina in the dark-adapted condition, this conversion can reduce the retinal illumination as much as a thousandfold. Superposition optics requires that parallel rays from a large portion of the eye surface meet at a single point in the image. As a result, superposition eyes should have a simple spherical geometry, and, in fact, most superposition eyes in both insects and crustaceans are spherical.
Some moth eyes do depart slightly from a spherical form, but it is in the euphausiid crustaceans (krill) from the mid-waters of the ocean that striking asymmetries are found. In many krill species the eyes are double. One part, with a small field of view, points upward, and a second part, with a wide field of view, points downward (similar to the apposition eyes of hyperiid amphipods). It is likely that the upper part is used to spot potential prey against the residual light from the sky, and the lower part scans the abyss for bioluminescent organisms. The most extraordinary double superposition eyes occur in the tropical mysid shrimp genus Dioptromysis, which has a normal-looking eye that contains a single enormous facet embedded in the back, with an equally large lens cylinder behind the facet. This single optical element supplies a fine-grain retina, which seems to act as the “fovea” of the eye as a whole. At certain times the eyes rotate so that the single facets are directed forward to view the scene ahead with higher resolution, much as one would use a pair of binoculars.
Streptomycin is an antibiotic drug, the first of the class called aminoglycosides to be discovered, and it was the first antibiotic remedy for tuberculosis. It is derived from the actinobacterium Streptomyces griseus. Streptomycin stops bacterial growth chiefly by inhibiting protein synthesis (damage to cell membranes may contribute as a secondary effect). Specifically, it binds to the 16S rRNA of the small (30S) subunit of the bacterial ribosome, interfering with the initiation of translation and causing misreading of the messenger RNA, so that faulty proteins are produced. Humans have structurally different ribosomes from bacteria, which accounts for the selectivity of this antibiotic for bacteria. Streptomycin cannot be given orally but must be administered by regular intramuscular injection. An adverse effect of this medicine is ototoxicity, which can result in permanent hearing loss. Streptomycin was first isolated on October 19, 1943, in the laboratory of Selman Abraham Waksman at Rutgers University by Albert Schatz, a graduate student working under Waksman. Waksman and his laboratory discovered several antibiotics, including actinomycin, clavacin, streptothricin, streptomycin, grisein, neomycin, fradicin, candicidin, candidin, and others. Two of these, streptomycin and neomycin, found extensive application in the treatment of numerous infectious diseases. Streptomycin was the first antibiotic that could be used to cure tuberculosis. Waksman is credited with having coined the term antibiotics. The details of, and credit for, the discovery of streptomycin were strongly contested by Albert Schatz and resulted in litigation. The contention arose because Schatz was the graduate student in charge of performing the laboratory work on streptomycin; however, it was argued that he was using Waksman's techniques, equipment, and laboratory space while under Waksman's direction. There is contention as to whether Schatz should have been included in the Nobel Prize awarded in 1952.
However, the committee stated that the Nobel Prize was awarded not only for the discovery of streptomycin but also for the development of the methods and techniques that led up to its discovery and to the discovery of many other antibiotics. The litigation ended with a settlement for Schatz and the official decision that Waksman and Schatz would be considered co-discoverers of streptomycin. Schatz was awarded the Rutgers Medal in 1994, at the age of 74. The controversy ultimately had a negative impact on the careers of both Waksman and Schatz, and it continues to be debated today. Uses: tuberculosis, in combination with other anti-TB drugs.
There are nearly two dozen species of small moths that feed on grasses and are collectively referred to as "sod webworms". They span at least 10 genera, but the genus Crambus comprises the majority of these moths and will serve as the example here. Female moths drop their eggs haphazardly while in flight, particularly when fluttering over an area of turf, though some species land and rest in the turf to deposit their eggs; several hundred eggs per female are common. Larvae immediately construct burrows from bits of leaves and soil held together with webbing, and they tend to remain within this tube during the daytime and often while feeding. Most species feed on the crown and the blades above it, but some will feed on the roots of turf as well. Development to the adult moth may require only a few weeks. Removal of unnecessary thatch helps to remove larvae as well as the plant material they need for their tubes. Contact insecticides applied to the turf kill the larvae as they feed.
Pollution is the introduction of contaminants into an environment that cause instability, disorder, harm, or discomfort to the physical systems or living organisms there. Pollution can take the form of chemical substances or of energy, such as noise, heat, or light. Pollutants, the elements of pollution, can be foreign substances or energies, or naturally occurring; when naturally occurring, they are considered contaminants when they exceed natural levels. Pollution is often classed as point source or nonpoint source pollution. Humankind has had some effect upon the environment since the Paleolithic era, during which the ability to generate fire was acquired. In the Iron Age, the use of tooling led to the practice of metal grinding on a small scale and resulted in minor accumulations of discarded material that were probably easily dispersed without much impact. Human wastes would have polluted rivers or water sources to some degree. However, these effects would for the most part have been dwarfed by the natural world. The first advanced civilizations of Mesopotamia, Egypt, India, China, Persia, Greece, and Rome increased the use of water in the manufacture of goods, increasingly forged metal, and created fires of wood and peat for more elaborate purposes (for example, bathing and heating). The forging of metals appears to be a key turning point in the creation of significant levels of air pollution. Core samples of glaciers in Greenland indicate increases in air pollution associated with Greek, Roman, and Chinese metal production. Still, at this time the scale of such activity probably did not disrupt ecosystems. The European Dark Ages of the early Middle Ages probably saw a reprieve from widespread pollution, in that industrial activity fell and population levels did not grow rapidly. Toward the end of the Middle Ages, populations grew and concentrated more within cities, creating pockets of readily evident contamination.
In certain places air pollution levels were recognizable as health issues, and water pollution in population centers was a serious medium for disease transmission from untreated human waste. Since travel and widespread information were less common, there did not exist a more general context than that of local consequences in which to consider pollution. Foul air would have been considered a nuisance, and the burning of wood, or eventually coal, produced smoke, which in sufficient concentrations could be a health hazard in proximity to living quarters. Septic contamination or poisoning of a clean drinking water source was very easily fatal to those who depended on it, especially if such a resource was rare. Superstitions predominated, and the extent of such concerns would probably have been little more than a sense of moderation and an avoidance of obvious extremes. But gradually increasing populations and the proliferation of basic industrial processes saw the emergence of a civilization that began to have a much greater collective impact on its surroundings. It was to be expected that the beginnings of environmental awareness would occur in the more developed cultures, particularly in the densest urban centers. The first medium warranting official policy measures in the emerging western world would be the most basic: the air we breathe. The earliest known writings concerned with pollution were Arabic medical treatises written between the 9th and 13th centuries by physicians such as al-Kindi (Alkindus), Qusta ibn Luqa (Costa ben Luca), Muhammad ibn Zakarīya Rāzi (Rhazes), Ibn Al-Jazzar, al-Tamimi, al-Masihi, Ibn Sina (Avicenna), Ali ibn Ridwan, Ibn Jumay, Isaac Israeli ben Solomon, Abd-el-latif, Ibn al-Quff, and Ibn al-Nafis. Their works covered a number of subjects related to pollution, such as air contamination, water contamination, soil contamination, solid waste mishandling, and environmental assessments of certain localities.
King Edward I of England banned the burning of sea-coal by proclamation in London in 1272, after its smoke had become a problem. But the fuel was so common in England that this earliest of names for it was acquired because it could be carted away by wheelbarrow from some shores where it washed up. Air pollution would continue to be a problem there, especially later during the industrial revolution, and extending into the recent past with the Great Smog of 1952. The same city also recorded one of the earlier extreme cases of water quality problems with the Great Stink on the Thames of 1858, which led to construction of the London sewerage system soon afterward. It was the industrial revolution that gave birth to environmental pollution as we know it today. The emergence of great factories and the consumption of immense quantities of coal and other fossil fuels gave rise to unprecedented air pollution, and the large volume of industrial chemical discharges added to the growing load of untreated human waste. Chicago and Cincinnati were the first two American cities to enact laws ensuring cleaner air, in 1881. Other cities followed around the country until early in the 20th century, when the short-lived Office of Air Pollution was created under the Department of the Interior. Extreme smog events were experienced by the cities of Los Angeles and Donora, Pennsylvania in the late 1940s, serving as another public reminder. An early Soviet poster, predating the modern awareness, declared: "The smoke of chimneys is the breath of Soviet Russia." Pollution became a popular issue after World War II, when the aftermath of atomic warfare and testing made evident the perils of radioactive fallout. Then a conventional catastrophic event, the Great Smog of 1952 in London, killed at least 8,000 people. This massive event prompted some of the first major modern environmental legislation, the Clean Air Act of 1956.
Pollution began to draw major public attention in the United States between the mid-1950s and early 1970s, when Congress passed the Noise Control Act, the Clean Air Act, the Clean Water Act and the National Environmental Policy Act. Bad bouts of local pollution helped increase consciousness. PCB dumping in the Hudson River resulted in a ban by the EPA on consumption of its fish in 1974. Long-term dioxin contamination at Love Canal, starting in 1947, became a national news story in 1978 and led to the Superfund legislation of 1980. Legal proceedings in the 1990s helped bring to light chromium-6 releases in California, and the champion of their victims became famous. The pollution of industrial land gave rise to the name brownfield, a term now common in city planning. DDT was banned in most of the developed world after the publication of Rachel Carson's Silent Spring. The development of nuclear science introduced radioactive contamination, which can remain lethally radioactive for hundreds of thousands of years. Lake Karachay, named by the Worldwatch Institute as the "most polluted spot" on earth, served as a disposal site for the Soviet Union throughout the 1950s and 1960s. Second place may go to the area of Chelyabinsk, U.S.S.R. (see reference below) as the "most polluted place on the planet". Nuclear weapons continued to be tested in the Cold War, sometimes near inhabited areas, especially in the earlier stages of their development. The toll on the worst-affected populations, and the growth since then in understanding of the critical threat radioactivity poses to human health, have also been a prohibitive complication associated with nuclear power. Though extreme care is practiced in that industry, the potential for disaster suggested by incidents such as those at Three Mile Island and Chernobyl poses a lingering specter of public mistrust. One legacy of nuclear testing, before most forms were banned, has been significantly raised levels of background radiation.
International catastrophes such as the wreck of the Amoco Cadiz oil tanker off the coast of Brittany in 1978 and the Bhopal disaster in 1984 have demonstrated the universality of such events and the scale on which efforts to address them needed to engage. The borderless nature of the atmosphere and oceans inevitably resulted in pollution being implicated on a planetary level with the issue of global warming. Most recently, the term persistent organic pollutant (POP) has come to describe a group of chemicals such as PBDEs and PFCs, among others. Though their effects remain somewhat less well understood owing to a lack of experimental data, they have been detected in various ecological habitats far removed from industrial activity, such as the Arctic, demonstrating diffusion and bioaccumulation after only a relatively brief period of widespread use. Growing evidence of local and global pollution and an increasingly informed public over time have given rise to environmentalism and the environmental movement, which generally seek to limit human impact on the environment.
Wild Turkey, Meleagris gallopavo. In zoology, a turkey is any of the large birds comprising the subfamily Meleagridinae of Phasianidae, a family of birds that consists of the pheasants and their allies. There are two extant (living) species of turkeys, the wild turkey (Meleagris gallopavo) and the ocellated turkey (Meleagris ocellata or Agriocharis ocellata). Formerly, turkeys were considered a distinct family, Meleagrididae, but more recently they were reclassified as the subfamily Meleagridinae (AOU 2007). Members of the two extant species have a distinctive, fleshy caruncle that hangs from the beak, called a snood. As with many galliform species (order Galliformes), the female is smaller than the male and much less colorful. With wingspans of 1.5–1.8 meters (almost 6 feet), turkeys are by far the largest birds in the open forests in which they live and are rarely mistaken for any other species. The usual lifespan for a turkey is 10 years. The wild turkey is native to North America and Central America and had been domesticated by the Aztecs since before Columbus arrived (Herbst 2001). The ocellated turkey, which is native to Central America and Mexico, is not domesticated. It has eye-like spots on the tail and is the more brilliantly colored of the two species. Turkeys provide a number of values to the ecosystem and to humans. Ecologically, they are integral to food chains, foraging on a wide variety of plant and animal foods, including acorns and nuts, seeds, berries, roots, insects, and even small vertebrates such as frogs and salamanders. In turn, they provide food for animals such as foxes, bobcats, and coyotes. For humans, turkeys provide a popular and nutritious food, rich in protein, niacin, and B vitamins (Bender and Bender 2005). They are a common staple of holiday feasts in North America, including Mexico, where turkey meat with mole sauce (mole de guajolote) is a popular national dish (Gerlach 2007).
Before the arrival of European settlers, wild turkeys, Meleagris gallopavo, inhabited North America, including the area that is now the United States and Mexico, and Central America (Herbst 2001). The Spanish conquistadors found them as a favorite domesticated animal among the Aztecs, and some were taken back to Spain. Since the modern domesticated turkey is a descendant of the wild turkey, it is concluded that the Aztecs had chosen to domesticate this species rather than the ocellated turkey, which is found in far southern Mexico. (The ocellated turkey, M. ocellata, also may have been domesticated, but by the Mayans.) The Aztecs relied on the turkey (Mexican Spanish guajolote, from Nahuatl huexolotl) as a major source of protein (meat and eggs), and also utilized its feathers extensively for decorative purposes. The turkey was associated with their trickster god, Tezcatlipoca (Ramsdale 2006). The Aztecs in Mexico dedicated two religious festivals a year to the "huexolotlin," and all year round it was not unusual for over 1,000 turkeys to be sold each day in the Aztec market (Ramsdale 2006). The popularity of the turkey had spread beyond the Aztecs to other tribes beyond Mexico by the time of the European arrival (Ramsdale 2006). After the birds were taken to Europe in 1523 (Bender and Bender 2005), Europeans bred them into even plumper birds, and some of these domesticated turkeys went back to the New World in the 1600s, where they eventually were crossed with stocks of wild turkeys (Herbst 2001). When Europeans first encountered turkeys in the Americas, they incorrectly identified the birds as a type of guinea fowl (Numida meleagris), also known as a turkey-cock from its importation to Central Europe through Turkey, and the name of that country stuck as the name of the bird. The confusion is also reflected in the scientific name: Meleagris is Greek for guinea fowl. The names for M.
gallopavo in other languages also frequently reflect its exotic origins, seen from an Old World viewpoint, and add to the confusion about where turkeys actually came from. The many references to India seen in common names go back to a combination of two factors: first, the genuine belief that the newly discovered Americas were in fact a part of Asia, and second, the tendency during that time to attribute exotic animals and foods to a place that symbolized far-off, exotic lands. The latter is reflected in terms like "Muscovy Duck" (which is from South America, not Muscovy). This was a major reason why the name "turkey-cock" stuck to Meleagris rather than to the guinea fowl (Numida meleagris): the Ottoman Empire represented the exotic East, much the same as India. Several other birds that are sometimes called "turkeys" are not particularly closely related: the Australian brush-turkey is a megapode, and the bird sometimes known as the "Australian turkey" is in fact the Australian bustard, a gruiform. The bird sometimes called a water turkey is actually an anhinga (Anhinga anhinga). In a similar confusion, Spanish explorers thought the turkey to be a kind of peacock and called it by the same word, pavo. Today, the turkey is still called pavo in Spanish (except in Mexico, where the Nahuatl-derived name guajolote is commonly used), and the peacock is commonly referred to as pavo real ("royal turkey"). The two species are the wild turkey (M. gallopavo), largely of North America (United States and Mexico), and the ocellated turkey (M. ocellata) of Central America and Mexico. Both species in the wild are strong fliers (up to 55 mph for short distances) and fast runners (15–30 mph) (Ramsdale 2006). The wild turkey (Meleagris gallopavo) is native to North America and is the heaviest member of the Galliformes. Adult wild turkeys have a small, featherless, bluish head; a red throat in males; long reddish-orange to grayish-blue legs; and a dark-brown to black body.
The head has fleshy growths called caruncles; in excited turkeys, a fleshy flap on the bill expands, becoming engorged with blood. Males have red wattles on the throat and neck. Each foot has four toes, and males have rear spurs on their lower legs. Turkeys have a long, dark, fan-shaped tail and glossy bronze wings. They exhibit strong sexual dimorphism: the male is substantially larger than the female, and his feathers have areas of red, green, copper, bronze, and gold iridescence. Female feathers are duller overall, in shades of brown and gray. Parasites can dull the coloration of both sexes; in males, coloration may serve as a signal of health (Hill et al. 2005). The primary wing feathers have white bars. Turkeys have between 5,000 and 6,000 feathers. Tail feathers are the same length in adults but of different lengths in juveniles. Males typically have a "beard" consisting of modified feathers that stick out from the breast. Beards average 9 inches in length. In some populations, 10 to 20 percent of females have a beard, usually shorter and thinner than that of the male. The average weight of the adult male is 8.2 kg (18 lb) and of the adult female 3.2 kg (8 lb). The average length is 1.09 m (3.5 ft) and the average wingspan is 1.44 m (4.8 ft). The record-sized adult male wild turkey, according to the National Wild Turkey Federation, weighed 38 lbs. The ocellated turkey (Meleagris ocellata) has sometimes been treated in a genus of its own, as Agriocharis ocellata, but the differences between this species and Meleagris gallopavo are too small to justify generic segregation. The ocellated turkey is a large bird, around 70–100 cm (28–40 in) long, with an average weight of 3 kg (6.6 lbs) in females and 5 kg (11 lbs) in males. Adult hens typically weigh about 8 pounds before laying eggs and 6–7 pounds the rest of the year, while adult males typically weigh about 11–12 pounds during the breeding season.
However, ocellated turkeys are much smaller than any of the subspecies of North American wild turkey. The ocellated turkey exists in a 50,000 square mile range comprising the Yucatán Peninsula (which includes the states of Quintana Roo, Campeche, and Yucatán), parts of southern Tabasco, and northeastern Chiapas (NWTF 2006). They also can be found in Belize and the northern part of Guatemala. The body feathers of both sexes are a mixture of bronze and green iridescent color. Although females can be duller, with more green, the breast feathers do not generally differ and cannot be used to determine sex. Neither sex has a beard. Tail feathers of both sexes are bluish-grey with an eye-shaped, blue-bronze spot near the end and a bright gold tip. The spots, for which the ocellated turkey is named, lead some scientists to believe that the bird is more closely related to peafowl than to wild turkeys. The upper, major secondary wing coverts are rich iridescent copper. The primary and secondary wing feathers have barring similar to that of North American turkeys, but the secondaries have more white, especially around the edges. Both sexes have blue heads with some orange or red nodules, which are more pronounced on males. The males also have a fleshy blue crown covered with nodules, similar to those on the neck, behind the snood. During breeding season, this crown swells up and becomes brighter and more pronounced in its yellow-orange color. The eye is surrounded by a ring of bright red skin, which is most visible on males during breeding season. The legs are deep red and are shorter and thinner than on North American turkeys. Males over one year old have spurs on the legs that average 1.5 inches, with lengths of over 2 inches having been recorded. These spurs are much longer and thinner than on North American turkeys. Many turkeys have been described from fossils.
The Meleagridinae are known from the Early Miocene (around 23 million years ago) onwards, with the extinct genera Rhegminornis (Early Miocene of Bell, U.S.) and Proagriocharis (Kimball Late Miocene/Early Pliocene of Lime Creek, U.S.). The former is probably a basal turkey, the latter a more contemporary bird not very similar to known turkeys; both were much smaller birds. A turkey fossil not assignable to genus, but similar to Meleagris, is known from the Late Miocene of Westmoreland County, Virginia (Olson 1985). In the modern genus Meleagris, a considerable number of species have been described, as turkey fossils are robust, fairly often found, and turkeys show much variation among individuals. Many of these supposed fossil species are now considered junior synonyms. One, the well-documented California turkey, Meleagris californica (formerly Parapavo californica), became extinct recently enough to have been hunted by early human settlers (UU 2006; Broughton 2004), though its actual demise is more probably attributable to climate change at the end of the last ice age. The modern species and the California turkey seem to have diverged approximately one million years ago. New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution crediting both the New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation.
Leukoplakia is the medical term for a condition in which thick white patches develop on the floor of the mouth, the inner cheeks, or the tongue. It is generally regarded as a precancerous condition, as roughly 3% of these patches eventually prove cancerous. Most often, oral cancer springs up near these patches. The most common type of oral cancer is squamous cell carcinoma, which, if neglected, can have disastrous effects on the tissues surrounding the area and can spread aggressively to other parts of the body. Several factors are thought to contribute to this condition: rough teeth, improper dentures, poor oral hygiene, and continuous use of tobacco. Leukoplakia is most prevalent in people over the age of 40, who develop nearly 95% of oral cancers. Patients who are HIV positive or who develop the Epstein-Barr virus, among others, can develop this disease. This is also true for transplant patients who take immunosuppressants, as they end up with compromised immune systems. Leukoplakia is often a direct result of improper oral hygiene. It is necessary to look after the mouth, as failure to do so can lead to periodontitis or gingivitis. Statistics and research show that people suffering from periodontitis are linked to an increased risk of developing oral cancer. Any infection that occurs in the mouth makes it more susceptible to developing the disease. Dentists are of the opinion that periodontitis can easily be prevented: regular brushing and flossing will keep the condition from occurring, and this, combined with regular checkups, will help avoid the risk of severe dental issues surfacing. Visit the dentist regularly and check for occurrences of leukoplakia. As with other cancers, a patient can survive the condition if it is detected in the early stages.
If a dentist detects any white patches, he or she will perform a biopsy and send a sample of the suspicious white tissue for chemical analysis and microscopic study. The results of the biopsy will indicate the seriousness of the case, and the dentist can decide on immediate treatment to prevent the condition from progressing. The appearance of patches itself can be arrested by regular checkups. The dentist is able to treat the problem, as the patches generally occur because of improper oral hygiene. The dentist can take corrective measures by prescribing new dentures, removing or repairing damaged rough teeth, and removing plaque or tartar to reduce the chances of developing this condition. If the patient notices strange patches in the mouth between health checkups, they should notify their dentist immediately so that the case can be evaluated. It is entirely possible that the patient has contracted thrush, a yeast infection that also manifests itself as white patches. The dentist can arrest the problem if it is detected in the early stages, so even conditions such as thrush should be brought to the dentist's notice immediately. It is advantageous to prevent problems such as leukoplakia from developing, even though the condition is benign for the most part. Regular dental checkups and excellent oral care will reduce the chances of a patient contracting this condition and of it turning malignant.
Gazing deep into the universe, NASA's Hubble Space Telescope has spied a menagerie of galaxies. Located within the same tiny region of space, these numerous galaxies display an assortment of unique characteristics. Some are big; some are small. A few are relatively nearby, but most are far away. Hundreds of these faint galaxies had never been seen until their light was captured by Hubble. This image represents a typical view of our distant universe. In taking this picture, Hubble is looking down a long corridor of galaxies stretching billions of light-years into space, which corresponds to looking billions of years back in time. The field shown in this picture covers a relatively small patch of sky, a fraction of the area of the full moon, yet it is richly populated with a variety of galaxy types. A handful of large, fully formed galaxies are scattered throughout the image. These galaxies are easy to see because they are relatively close to us. Several of the galaxies are spirals with flat disks that are oriented edge-on or face-on to our line of sight, or somewhere in between. Elliptical galaxies and more exotic galaxies with bars or tidal tails are also visible. Many galaxies that appear small in this image are simply farther away. These visibly smaller galaxies are so distant that their light has taken billions of years to reach us. We are seeing these galaxies, therefore, when they were much younger than the larger, nearby galaxies in the image. One red galaxy to the lower left of the bright central star is acting as a lens to a large galaxy directly behind it. Light from the farther galaxy is bent around the nearby galaxy's nucleus to form a distorted arc. Sprinkled among the thousands of galaxies in this image are at least a dozen foreground stars that reside in our Milky Way Galaxy. The brightest of these foreground stars is the red object in the center of the image.
The stars are easily discernible from the galaxies because of their diffraction spikes, long cross-hair-like features that appear to emanate from the centers of the stars. Diffraction spikes are an image artifact caused by starlight traveling through the telescope's optical system. This image is a composite of multiple exposures of a single field taken by the Advanced Camera for Surveys. The image, taken in September 2003, was a bonus picture, taken while one of the other Hubble cameras was snapping photos for a science program. This image took nearly 40 hours to complete and is one of the longest exposures ever taken by Hubble. For additional information, please contact: Keith Noll, Hubble Heritage Team, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, (phone) 410-338-1828, (fax) 410-338-4579, (e-mail) [email protected]. John Blakeslee, Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, (cell phone) 410-967-1204. Object Name: Galaxy Field in Fornax. Image Type: Astronomical. Acknowledgment: J. Blakeslee (JHU) and R. Thompson (University of Arizona)
Blood studies are tests that examine a patient's blood and are the most common tests done for cancer patients. The doctor will choose from a list of chemical studies to be performed in a laboratory on your blood sample. Blood studies provide important clues about what's going on inside your body and help doctors follow the course of a patient's disease and select the right treatment dosage. Blood can be drawn in a variety of ways, depending on your child's situation. The most common method of drawing blood is by inserting a needle into a vein. If only a small amount of blood is needed, the doctor may obtain the blood sample by simply pricking your child's finger. Children undergoing chemotherapy may have a central venous line in place from which blood can be drawn. The blood sample is placed on a glass laboratory slide to be examined under a microscope, or in a test tube for analysis.
Supporting Your Child
Children respond differently to getting blood drawn; some like to know in advance that blood is going to be drawn, while others become overly anxious at the prospect. Some children prefer finger pricks, while others prefer their blood to be drawn from a vein. It is important to try to find ways to give your child some choices, such as which finger or arm should be used, so that they feel they have some control over what happens to them; this will help keep their anxiety levels down. If your child is anxious about needles, talk to a member of the treatment team.
There are various ways in which your child can be helped, through medical play or relaxation, to relieve their anxiety:
- Ask that the person drawing the blood use EMLA to reduce discomfort
- Distract younger children from the needle
- Give older children choices to help them feel in control
- Hold your child's other hand or arm, which can be very comforting
- Plan something fun after the blood is drawn; give a young child a reward right after the test to create a positive association with blood tests
It is important that you understand your child's preferences and how they react to blood being drawn in order to prepare them appropriately, in a way that minimises their anxiety.
Video: Peter Kuhn, PhD, biophysicist at Scripps Research, and Kelly Bethel, MD, pathologist at Scripps Health, speak about their work, published in the journal Physical Biology, on a new test to detect tumor cells in the bloodstream. In the first of three video clips, the scientists discuss the need for the new technology.
Types of Blood Studies
Complete Blood Count (CBC)
A complete blood count (CBC) is a test that thoroughly examines the blood and gives a general picture of an individual's health. A CBC measures the number of red cells, white cells (neutrophils, eosinophils, basophils, monocytes and lymphocytes) and platelets, and the levels of haemoglobin and haematocrit in your blood. It is the most common test done for children with cancer because it tells doctors how current treatment is affecting the bone marrow, where blood cells are made. A CBC can identify when your child is ready for their next round of chemotherapy, if a transfusion is needed, or whether there is an increased risk for infection. Blood can be drawn for a CBC in a variety of ways, depending on your child's situation.
The most common method of drawing blood is to insert a needle into a vein, but blood can also be taken from a central venous line (a tube inserted into a large vein during a period of treatment).
Common Information Reviewed in a CBC
White Blood Count (WBC) measures the number of white blood cells present in the peripheral blood (blood that circulates in the body). White blood cells help fight infection. Abnormal results could be a sign of infection, inflammation, cancer, bone marrow problems or other issues within the body.
Diff (Differential Count) refers to the distribution of the various types of white cells in the peripheral blood; the values are expressed as percentages. These values change frequently in response to what is happening in the body. Increases in particular types of white cells can be signs of temporary or chronic conditions.
Platelet Count refers to the number of platelets (the smallest type of blood cell) present in the blood. Platelets prevent bleeding by helping the blood clot. The platelet count can be used to monitor or diagnose diseases. A significant decrease in the platelet count could mean that someone is at risk for bleeding in any part of the body.
Haemoglobin refers to the substance found in red blood cells that carries oxygen to other tissues of the body; it is usually reported in grams per decilitre of blood. High numbers could be the result of dehydration or problems with the kidneys. Low numbers indicate anaemia, which could be the result of blood loss, problems with bone marrow, malnutrition or other issues.
Haematocrit measures the percentage of red blood cells in a given volume of whole blood. High numbers could be the result of dehydration or problems with the kidneys. Low numbers indicate anaemia, which could be the result of blood loss, problems with bone marrow, malnutrition or other issues.
Retic (Reticulocyte Count) refers to the percentage of young, non-nucleated erythrocytes (red blood cells) present in peripheral blood. It helps doctors determine the rate at which red blood cells are being created within the bone marrow.
Blood Chemistry Studies (CMP or BMP)
Blood chemistry studies consist of a group of tests called "chemistry panels" and provide information about how your child's organs (such as the liver and kidneys) are functioning. It is especially important to monitor organ function during cancer treatment. Depending on the type of panel, these tests can measure:
- Electrolyte balance (such as sodium or potassium)
- Blood glucose (sugar)
- Chemical substances that indicate liver and kidney function
- Antibodies, including those developed from vaccinations (such as poliovirus antibodies)
- Hormones (such as thyroid hormone)
- Minerals (such as iron, calcium or potassium)
- Vitamins (such as B12 or folate)
Blood can be drawn for blood chemistry studies in a variety of ways, depending on your child's situation. The most common way to draw blood is to insert a needle into a vein. Blood can also be taken from a central venous line (a tube inserted into a large vein during a period of treatment).
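The basic idea behind interpreting a CBC or chemistry panel, comparing each measured value against a reference range, can be sketched in a few lines of code. This is an illustrative sketch only: the function name is invented, and the ranges below are approximate adult values; pediatric ranges differ, and real interpretation always uses the reporting laboratory's own published ranges.

```python
# Approximate adult reference ranges, for illustration only.
REFERENCE_RANGES = {
    "wbc": (4.5, 11.0),           # white cells, x10^9 per litre
    "haemoglobin": (12.0, 16.0),  # grams per decilitre
    "platelets": (150, 400),      # x10^9 per litre
}

def flag_results(results):
    """Label each measured value 'low', 'normal', or 'high'
    relative to its reference range."""
    flags = {}
    for name, value in results.items():
        low, high = REFERENCE_RANGES[name]
        if value < low:
            flags[name] = "low"
        elif value > high:
            flags[name] = "high"
        else:
            flags[name] = "normal"
    return flags

sample = {"wbc": 2.1, "haemoglobin": 13.5, "platelets": 95}
print(flag_results(sample))
# {'wbc': 'low', 'haemoglobin': 'normal', 'platelets': 'low'}
```

A low white count and low platelets in this hypothetical sample are exactly the kind of pattern that, per the text above, tells doctors a child may be at increased risk of infection or bleeding between chemotherapy rounds.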
- Press Release - August 9, 2022 Phobos As Seen By Mars Express This picture of Phobos near the limb of Mars was captured in 2010 by Mars Express currently orbiting Mars. Phobos is a heavily cratered and barren moon, with its largest crater located on the far side. From images like this, Phobos has been determined to be covered by perhaps a meter of loose dust. Phobos orbits so close to Mars that from some places it would appear to rise and set twice a day, but from other places it would not be visible at all. Phobos’ orbit around Mars is continually decaying — it will likely break up with pieces crashing to the Martian surface in about 50 million years. Credit: G. Neukum (FU Berlin) et al., Mars Express, DLR, ESA; Acknowledgement: Peter Masek. Source: NASA Astronomy Picture of the Day
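The twice-a-day rising and setting mentioned above follows from Phobos's orbital period being much shorter than the Martian day, which Kepler's third law lets us check. The semi-major axis (~9,376 km) and the Mars gravitational parameter used below are standard published values, not figures from the piece itself:

```python
import math

# Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
MU_MARS = 4.2828e13       # Mars gravitational parameter, m^3/s^2
A_PHOBOS = 9.376e6        # Phobos semi-major axis, m (~9,376 km)
MARS_SOL = 24.66 * 3600   # Martian solar day, s (~24.66 h)

period = 2 * math.pi * math.sqrt(A_PHOBOS**3 / MU_MARS)
print(f"Phobos orbital period: {period / 3600:.2f} h")    # about 7.7 h
print(f"Orbits per Martian day: {MARS_SOL / period:.2f}") # about 3.2
```

Because Phobos circles Mars roughly three times per sol, faster than the planet rotates, it outruns the sky: from the surface it rises in the west, sets in the east, and crosses the sky about twice each Martian day.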
Ryder crater (43.877°S, 143.246°E, ~15 km diameter) is a Copernican-aged crater located within the South-Pole Aitken basin. A pond of impact melt is present on the crater floor, and boulders and melt streamers pepper the crater rim. Taking a look at the crater wall just interior to the rim (opening image), we see that the wall is littered with boulders of varying sizes and shapes as well as areas smoothed by impact melt flows and veneers. In addition to the substantial ejecta and impact melt deposited exterior to the crater, the rim and immediate surroundings were littered with vast quantities of ejected material and impact melt. Today's Featured Image displays the complicated relationship between impact melt and ejecta emplacement, specifically around the crater rim. In some places, the crater wall is very smooth, indicating that the impact melt deposited was thick enough to bury the fractured wall material. However, as observed in the opening and above images, jumbles of boulders and fragmented ejecta are interspersed among impact melt-smoothed surfaces. Many of these boulders are veneered with impact melt where a thin layer of impact melt splashed onto the surface of the rock and solidified, and some boulders are partially buried within the smoother regions of impact melt. Some boulders do not appear to have impact melt veneers at all - why might that be? Furthermore, channels formed in some places that allowed impact melt to flow from the crater rim back toward the crater floor. Unlike other channels, those observed in the above image are not as well-formed, suggesting that less melt utilized these pathways and perhaps the impact melt had cooled substantially as it flowed back into the crater, so that it could neither flow quickly nor stay hot enough to maintain thermal erosion in the channels. However, as seen in the above image, the channel halts abruptly in the downslope direction (right side of the image). What could be the cause? 
The answer to both the halted channel and presence of boulders without melt veneer is that erosion has taken place since Ryder crater formed. Simply put, things (rocks) like to move downhill. Over time, boulders from the crater rim and higher up on the crater walls dislodged and traveled toward the crater center. While some of these blocks do not have impact melt veneers now, they may have in the past, but their downhill travels may have fractured those blocks even further so that any melt veneer present cracked off or was left behind on another fragment. Take a look at the central boulder in the opening image; although the majority of the boulder face visible has an impact melt veneer, there are fractured areas of the block that do not. Additionally, crater wall erosion may be invoked as an explanation for the apparent halt in the impact melt channel. Observations of the cracks perpendicular to the channel flow direction suggest that the jagged edge of the channel (middle-right) probably cracked off and fragmented to fall toward the crater floor. Or perhaps the ejecta blocks entrained within the melt that formed the channel dislodged and carried the lower portion of the channel downhill. How many different impact melt features and morphologies do you observe when you traverse the entire LROC NAC frame? In case you missed it, be sure to check out the LROC NAC oblique view of Ryder crater, too!
Before Einstein, it was known that a beam of light pushes against matter; this is known as radiation pressure. This means the light has momentum. A beam of light of energy E has momentum E/c. Einstein used this fact to show that radiation (light) energy has an equivalent mass. Consider a cylinder of mass M (see accompanying figure, "energy"). A pulse of light with energy E is emitted from the left side. The cylinder recoils to the left with velocity v=E/(Mc). If the mass of the cylinder is large, it doesn't move far before the light reaches the other side. So, the light must travel a distance L, requiring time t=L/c. In this time, the cylinder travels a distance x=vt=[E/(Mc)](L/c). Einstein reasoned that the center of mass of an isolated system doesn't just move on its own. So, the motion of the cylinder must be compensated by the motion of some other mass. Let's assume the light has mass m. Then, Mx=mL, since the cylinder moves x to the left and the light moves L to the right. Substituting the expression for x given above, the equation can be simplified to E=mc2. From the fact that light has momentum, Einstein showed that light energy has the characteristics of mass also. In other words, energy has inertia. It turns out that all energy has this feature. That's because one form of energy can be transformed into another. So, if one kind of energy has this characteristic, all forms of energy do. Einstein himself explains the meaning of E=mc2 in this sound clip. The fine print: The word proof is in quotes above because this is not truly a rigorous proof. Simplifications and approximations were made to facilitate understanding. Some of these are easy to eliminate at the expense of a little more algebra. Some of them are of a more fundamental nature and require significant modification of the gedankenexperiment. However, the basic concepts are correct and this "proof" conveys the essence of the connection between mass and energy.
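The chain of steps above can be written out compactly; each line uses only momentum conservation and the fixed center of mass:

```latex
\text{Recoil (momentum conservation):}\quad Mv = \frac{E}{c}
  \;\Rightarrow\; v = \frac{E}{Mc} \\[4pt]
\text{Light transit time and cylinder displacement:}\quad
  t = \frac{L}{c}, \qquad x = vt = \frac{EL}{Mc^{2}} \\[4pt]
\text{Center of mass fixed:}\quad Mx = mL
  \;\Rightarrow\; m = \frac{Mx}{L} = \frac{E}{c^{2}}
  \;\Rightarrow\; E = mc^{2}
```

Note that M drops out at the last step, so the result does not depend on the cylinder's mass.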
This page is copyright ©1997-2005 by G. G. Lombardi. All rights reserved.
Python is a language of shortcuts. Things that take multiple lines of code in other languages take fewer lines in Python. String manipulation is a particularly strong suit for Python. When defining a string, both single and double quotes work. To include special characters such as a carriage return, you use the backslash (\) escape character followed by a letter code, such as \r for carriage return or \n for newline. The concept of slices makes it easy to grab a part of a string, as in: >>> Str = 'This is a string' >>> Str[0:4] 'This' When you need to build a really long string that spans several lines of text you can use the triple quote construct to surround the entire block, as in: VeryLongString = ''' Now is the time for all good men to come To the aid of their countrymen. ''' Strings can span multiple lines of text as long as they are enclosed in triple quotes. While Python is a dynamic language, it is still strongly typed in a sense, meaning every object has a type. Python includes a number of built-in types such as lists, tuples, and dictionaries for simplifying many traditional programming tasks. On the other hand, you don't declare variables with a type as you do in other languages such as C# or Java. The Python interpreter takes care of that for you, sometimes to a fault. If you use the same name for a variable in two different parts of a program, the first value gets overwritten by the second, an obvious potential for hard-to-find bugs. As a scripting language, Python does the job of letting you build quick little programs in a short amount of time and test them interactively to make sure they work correctly. The Python library includes a multitude of pre-defined functions that make the job of coding much easier. You can also find numerous code samples on the Web, including code for such tasks as automating Word and Excel, interacting with Active Directory, and accessing Windows Management Instrumentation (WMI). |Figure 1. 
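A short sketch of the slicing and triple-quote ideas above (the variable names are illustrative):

```python
# Slices grab part of a string: Str[start:end], where end is exclusive
Str = 'This is a string'
print(Str[0:4])    # This
print(Str[10:])    # string
print(Str[-6:])    # string (negative indices count from the end)

# Escape sequences use a backslash: \n newline, \r carriage return, \t tab
line = 'col1\tcol2\n'

# Triple quotes let a string span several lines
VeryLongString = '''Now is the time for all good men to come
To the aid of their countrymen.'''
print(len(VeryLongString.splitlines()))  # 2
```

The same slice syntax works on lists and tuples, which is part of why slicing feels like a language-wide shortcut rather than a string-only feature.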
IronPython Console: The IronPython Console app provides an interactive interpreter where you can try out parts of the language.| The first public mention of IronPython was at the PyCon conference held in March of 2004 in Washington, DC. In this paper, Jim Hugunin describes the work he did to implement the full semantics of the Python language on top of the Common Language Runtime, either Microsoft's .NET version or the Mono platform. The paper goes on to describe his research and the test results. Early implementations of IronPython worked on version 1.1 of the .NET runtime. Later releases target the upcoming version (2.0). As of this writing the latest IronPython version is 0.7.5 and requires the .NET Framework Version 2.0 Redistributable Package Beta 2. The interactive console includes a built-in command (dir) that lists all the functions within a module. Figure 1 shows a list of all the functions included in the sys module. Python's help function has not been implemented in this version of IronPython.
There were several groups who fought for independence, the most notable being the Provisional Government of the Republic of Korea; Republic of Korea was adopted as the legal English name for the new country. Since the government only controlled the southern part of the Korean Peninsula, the informal term South Korea was coined, becoming increasingly common in the Western world. South Korea is a member of the ASEAN Plus mechanism, the United Nations, Uniting for Consensus, the G20, the WTO and the OECD, and is a founding member of APEC and the East Asia Summit. The name Korea is derived from Goryeo, a dynasty which ruled from the 10th to 14th centuries. The name Goryeo itself was first used by the ancient kingdom of Goguryeo in the 5th century as a shortened form of its name; the 10th-century kingdom of Goryeo succeeded Goguryeo. Despite the coexistence of the spellings Corea and Korea in 19th-century publications, some Koreans believe that Imperial Japan, around the time of the Japanese occupation, intentionally standardised the spelling on Korea, making Japan appear first alphabetically. In 1897, the Joseon dynasty changed the official name of the country from Joseon to Daehan Jeguk (Korean Empire). However, the name Joseon was still widely used by Koreans to refer to their country, though it was no longer the official name; under Japanese rule, the two names Han and Joseon coexisted. Archaeology indicates that the Korean Peninsula was inhabited by early humans starting from the Lower Paleolithic period (2.6 Ma–300 Ka). A separate government was set up in the U.S. zone in 1948, which led to the creation of the Republic of Korea (ROK), while the Democratic People's Republic of Korea (DPRK) was established in the Soviet zone. The Korean War began in 1950 when forces from the North invaded the South, drawing in the U.S., China, the Soviet Union and several other nations. South Korea lies in the north temperate zone and has a predominantly mountainous terrain.
Keeping the Tempo of Music Tempo means, quite basically, “time,” and when you hear people talk about the tempo of a musical piece, they are referring to the speed at which the music progresses. The point of tempo isn’t necessarily how fast or slowly you can play a musical piece, however. What tempo really does is set the basic mood of a piece of music. The importance of tempo can truly be appreciated when you consider that the original purpose of much popular music was to accompany people who were dancing. Often the movement of the dancers’ feet and body positions worked to set the tempo of the music, and the musicians followed the dancers. Prior to the 17th century, though, composers had no real control over how their transcribed music would be performed by others, especially by those who had never heard the pieces performed by their creators. It was only in the 1600s that the concept of using tempo and dynamic markings in sheet music began to be employed. The metronome: Not just for hypnotists anymore It wasn’t until more than 100 years later that two German tinkerers, Dietrich Nikolaus Winkel and Johann Nepomuk Maelzel, worked independently to produce the spring-loaded design that is the basis for analog (non-electric) metronomes today. Maelzel was the first to slap a patent on the finished product, and as a result, his initial is attached to the standard 4:4 beat tempo sign, MM=120. MM is short for Maelzel’s metronome, and the 120 means the piece is played at 120 beats per minute (bpm). Musicians and composers alike embraced the metronome. From then on, when composers wrote a piece of music, they could give musicians an exact numeric speed at which to play the piece. Although the metronome was the perfect invention for control freaks, such as Beethoven and Mozart, most composers were happy instead to use the growing vocabulary of tempo notation to generally describe the pace of a song. 
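As a quick arithmetic aside, a marking like MM=120 fixes the duration of one beat: 60 seconds divided by the beats per minute. A minimal sketch (the function name is illustrative):

```python
def seconds_per_beat(bpm):
    """Duration of one beat, in seconds, for a metronome marking of bpm beats per minute."""
    return 60.0 / bpm

print(seconds_per_beat(120))  # 0.5 -> MM=120 ticks every half second
print(seconds_per_beat(60))   # 1.0 -> one beat per second
```

Doubling the marking halves the beat length, which is why a Presto piece feels so much busier than a Largo one at the same note values.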
Even today, the same words are used to describe tempo and pace in music. They are Italian words, simply because when these phrases came into use (1600–1750), the bulk of European music came from Italian composers. Following are some of the most standard tempo notations in Western music, usually found written above the time signature at the beginning of a piece of music: - Grave: The slowest pace. Very formal, and very, very slow. - Largo: Funeral march slow. Very serious and somber. - Larghetto: Slow, but not as slow as Largo. - Lento: Slow. - Adagio: Leisurely. Think graduation and wedding marches. - Andante: Walking pace. - Moderato: Right smack in the middle. Not fast or slow, just moderate. - Allegretto: Moderately fast. - Vivace: Lively, fast. - Presto: Very fast. - Prestissimo: Think Presto after a few too many espressos.
A neutron star is the collapsed core of a formerly large star that has run out of fuel and exploded as a supernova. As gravity forces the star to collapse to the size of a small city, the star becomes so dense that a single teaspoon of the collapsed star would have as much mass as a mountain. The star’s core, now a neutron star, can be rotating as fast as 10 times a second or more. Over time the core’s rotation can speed up as it pulls matter from its surroundings, eventually rotating over 700 times a second! Some neutron stars, called radio pulsars, have strong magnetic fields and emit radio waves in predictable, reliable pulses. Other neutron stars have even stronger magnetic fields, displaying violent, high-energy outbursts of X-ray and gamma ray light. These are called magnetars, and their magnetic fields are the strongest known in the universe, a trillion times stronger than that of our sun. Since the 1970s, scientists have treated pulsars and magnetars as two distinct populations of objects. But, in the last decade, evidence has emerged that shows they might sometimes be stages in the evolution of a single object. Radio pulsars and magnetars might just be two sides of the same coin – first it’s a radio pulsar and later becomes a magnetar. Or maybe it’s the other way around. Some scientists argue that objects like magnetars gradually stop emitting X-rays and gamma rays over time. Others propose the opposite theory: that the radio pulsar comes first and then, over time, a magnetic field emerges from the neutron star causing those magnetar-like outbursts to start. No one knows for sure which scenario is correct, but this is an active area of study among astronomers. The NASA video above – released on May 30, 2018 – has more. Bottom line: Radio pulsars and magnetars might be two sides of the same coin, that is, two stages in the life of a single object. 
HYPOGLYCEMIA is a potentially dangerous event that can occur in patients with diabetes mellitus (DM), especially those prescribed insulin, a sulfonylurea, or a meglitinide.1 The American Diabetes Association defines hypoglycemia as a blood glucose level of 70 mg/dL or lower.2 Detecting hypoglycemia in your patients with diabetes as soon as possible will enable immediate treatment and prevent life-threatening complications. Signs and symptoms Hypoglycemia can be classified as mild, moderate, or severe (see Classifying hypoglycemia).3–5 All classifications are in relation to the patient's clinical status, not simply the measured blood glucose level. In some cases, a patient's blood glucose level may not correlate with signs and symptoms. For example, patients with blood glucose levels that are chronically over 200 mg/dL could have signs and symptoms of hypoglycemia when their blood glucose levels drop to 100 mg/dL. Similarly, some patients remain conscious with blood glucose levels as low as 50 mg/dL. Patients with mild hypoglycemia may experience autonomic signs and symptoms such as palpitations or pallor, but are typically alert enough to treat themselves.3,4 In moderate hypoglycemia, patients have both autonomic and neuroglycopenic signs and symptoms, such as headache, blurred vision, irritability, and fatigue.5 Neuroglycopenia occurs due to decreased glucose in the brain, which requires glucose for energy. Patients with moderate hypoglycemia may or may not need assistance treating hypoglycemia. Patients with severe hypoglycemia may become confused or unresponsive and may experience seizures.3 Patients with severe hypoglycemia are incapable of treating their low blood glucose and need assistance. Classic warning signs: Not always present Typically, when a person's blood glucose level drops to hypoglycemic levels, the body tries to elevate it by decreasing insulin release and increasing glucagon and epinephrine release. 
Glucagon stimulates the liver to increase glucose production (gluconeogenesis) and to break down stored glucose (glycogenolysis). Epinephrine has similar effects on the liver as glucagon, as well as inhibiting insulin secretion and glucose utilization. Autonomic nervous system activation causes the classic early warning signs of hypoglycemia such as diaphoresis, hunger, tremor, anxiety, paresthesias, palpitations, and tachycardia. These signs and symptoms also trigger a behavioral response to hypoglycemia: eating food.6 However, not all patients with DM experience the characteristic signs and symptoms of hypoglycemia, a condition known as hypoglycemia unawareness. Patients who have a long history of type 1 diabetes and those who have frequent hypoglycemic episodes are more likely to experience hypoglycemia unawareness.7 By the time these patients become symptomatic, they already have severe hypoglycemia, along with cognitive dysfunction such as confusion, disorientation, and loss of consciousness.5 If left untreated, their blood glucose levels will continue to fall, resulting in seizures and possibly death. These patients are “unaware” because they've lost the normal physiologic responses to hypoglycemia, referred to as hypoglycemia-associated autonomic failure.5,6 Occasionally, patients can be resensitized to hypoglycemia by increasing their glycemic target for several weeks. This may help to partially reverse hypoglycemia unawareness and reduce further episodes.2 Knowing the signs and symptoms of hypoglycemia is just the first step in early detection. Being proactive can help prevent hypoglycemia, so attentively monitor your patient's blood glucose levels using a blood glucose meter. Knowing your patient's health history, medication regimen, and lifestyle will also provide clues to potential problems with blood glucose levels. 
Paying careful attention to your patient's scheduled diagnostic studies and procedures may help prevent a hypoglycemic episode and help you distinguish signs and symptoms of hypoglycemia from those due to another disorder. Know your facility's protocols for treating hypoglycemia, especially those actions that can be taken independently by nurses. Treatments for hypoglycemia can be categorized by the patient's level of consciousness. To raise their blood glucose levels, conscious patients with mild or moderate hypoglycemia need to ingest 15 to 20 g of fast-acting carbohydrates, such as glucose tablets, gels, sprays, juice (adding sugar isn't necessary), or regular soda.4 (See Foods with 15 g of carbohydrates for more ideas.) Reevaluate their blood glucose level 15 minutes after they've ingested the carbohydrates. If the blood glucose level doesn't improve, give another 15 to 20 g of carbohydrates.2 Once the blood glucose level is stable at 70 to 100 mg/dL, patients can be given complex (long-acting) carbohydrates to prevent recurrence of signs and symptoms.5 Upon discharge, provide instructions on the “Rule of 15” to patients with frequent episodes of hypoglycemia (see Follow the rules). In general, oral carbohydrates should be avoided in patients with impaired consciousness from severe hypoglycemia because they're at high risk for aspiration and airway obstruction. If I.V. access can be established, patients with severe hypoglycemia should receive I.V. dextrose, typically 25 g of 50% dextrose (D50). D50 can be irritating to the veins, so administer it slowly. The patient should respond immediately because the glucose goes directly into the bloodstream. If I.V. access can't be obtained, administer I.M. glucagon.3 Glucagon is packaged as a powder that must be reconstituted with the supplied diluting solution before administration. 
Glucagon works by stimulating the liver to produce glucose, so the patient may not respond to it for 10 to 20 minutes. Glucagon can cause vomiting, so following injection, position patients on their side to prevent aspiration.8 If the patient remains unresponsive, another dose of glucagon can be administered. Glucagon may not work for patients who have depleted glycogen stores (for example, those in starvation states) and in those who have refractory hypoglycemia secondary to agents that stimulate the pancreas to release insulin (such as sulfonylureas or meglitinides); these patients may need octreotide (off-label use) as a reversal agent.9 If I.V. dextrose or glucagon aren't available, a small quantity of sugar granules, liquid glucose, or even cake frosting can be carefully placed under the tongue with the patient in a side-lying recovery position. Glucose can be absorbed through the buccal mucosa. Continuously monitor the airway to prevent aspiration.10 Once the patient is alert, provide long-acting carbohydrates to prevent a recurrence of hypoglycemia.8 Evaluate the patient's blood glucose level within 15 minutes to assess whether more interventions are required. Once the patient is stable, provide a light snack or meal to prevent hypoglycemia recurrence.2 Once the patient is stable, notify the healthcare provider of your assessment findings, blood glucose levels, your intervention, and how the patient responded to treatment. Monitor blood glucose levels according to your facility's policy. Depending on the patient's condition, additional monitoring may be necessary to avoid recurring hypoglycemia. Reducing the risk Before patients are discharged, educate them and their families about common causes of hypoglycemia, such as changes in medication regimen, an increase in physical activity, and delayed or missed meals. 
Advise patients to check their blood glucose levels before driving and to make sure they have easy-to-reach snacks and/or fast-acting sugars with them at all times. Encourage them to always wear a medical ID tag or bracelet and to contact their healthcare provider if they experience low blood glucose levels more than twice a week.2 When teaching, remember that patients often become frustrated when trying to manage their blood glucose levels. Adjusting for fluctuations in health and lifestyle, such as stress, exercise, and illness, can be difficult. Listen to their concerns and answer their questions. Reassure them that, with increased knowledge and awareness, they can learn to prevent and manage hypoglycemic situations.11 By closely monitoring patients for hypoglycemia and intervening immediately, you can help patients avoid dangerous complications and maintain an active lifestyle. Foods with 15 g of carbohydrates - 3 to 4 chewable glucose tablets - 1 tablespoon jam - 1 tube glucose gel - 4 to 6 oz fruit juice - 4 to 6 oz regular soft drink - 3 packets or 1 tablespoon sugar (not sugar substitute) dissolved in a small amount of water, or use 1 tablespoon honey - 5 to 7 hard candies Follow the rules3 Teach your patients the “Rule of 15”: - Test to determine blood glucose is below 70 mg/dL. - Eat or drink 15 g of simple, concentrated carbohydrates. - Wait 15 minutes. - Check blood glucose again. - If blood glucose is still below 70 mg/dL, consume an additional 15 g of carbohydrates. - Once the glucose is stable, follow up with a light snack or meal.1 Mild-to-moderate hypoglycemia can usually be reversed rapidly, within 5 to 10 minutes. Try to avoid foods that are high in fat such as pizza, candy bars, or doughnuts, because fatty foods slow the absorption of carbohydrates, delaying the increase in blood glucose. When the only sugary food nearby is a candy bar or a doughnut, however, it's better than nothing at all. 
If patients experience a “low” just before mealtime, encourage them to eat the meal without applying the Rule of 15 as long as the meal has adequate carbohydrates to raise the blood glucose level back to normal.
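As a teaching illustration only (not clinical software), the “Rule of 15” loop described above can be sketched in code; the 70 mg/dL threshold and 15-minute recheck cadence come from the article, while the function name and structure are my own:

```python
def rule_of_15(glucose_readings):
    """Walk through the Rule of 15 against a sequence of meter readings.

    glucose_readings: successive blood glucose values (mg/dL), one per
    15-minute check. Returns the number of 15 g carbohydrate doses given.
    Illustration only -- real treatment follows facility protocol.
    """
    doses = 0
    for reading in glucose_readings:
        if reading >= 70:   # stable: follow up with a light snack or meal
            return doses
        doses += 1          # below 70 mg/dL: give 15 g carbohydrates, wait 15 min
    return doses

# Example: 62 -> treat, recheck 68 -> treat again, recheck 85 -> stable
print(rule_of_15([62, 68, 85]))  # 2
```

The point of the sketch is the loop structure: treat, wait, recheck, and repeat until the reading clears 70 mg/dL.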
About Question Three Here’s how TOEFL speaking question 3 works: - First, you will read a short (100 words) article on an academic topic. You will have 45 seconds to read it. - Next, you will hear a short lecture about the same topic. The lecture will illustrate it using either one or two examples. - Finally, you will be asked to summarize the reading and lecture. - You will be given 30 seconds to prepare, and 60 seconds to speak. Note that this is the same as question four on the old version of the TOEFL. The reading is usually about a specific term or concept. It usually has a clear title and about five sentences that define the term and give some basic details. When I surveyed 500 students in October 2019, they said the most common topics were: - Biology/Animals – 60% - Business/Marketing – 20% - Psychology/Learning – 10% - Art/History/Literature – 10% The lecture is usually 1.5 minutes or 2 minutes long. It is about the same term or idea from the reading. Most of it will consist of one or two examples that demonstrate the term or idea. It could be an example from the personal life of the speaker. If there is just one example, listen for two parts (like cause/effect or before/after). The Question Prompt The question will look something like one of these: - Describe what _____ is, and how the professor’s example illustrates this idea. - Describe how the example of the ____ illustrates the concept of ____. - Explain the concept of _____ using the examples of ____ and ____ given in the lecture. - Using the examples from the lecture, explain the concept of ______. The good news is that you can always use the same template to organize your answer to TOEFL speaking question three. 
Try using this one: State the Term or Idea - “The reading is about (TERM/CONCEPT)” Give a Small Amount of Detail from the Reading - “It states that…” - “The professor elaborates on this by providing an example.” - “The professor elaborates on this by providing two examples.” First Example/First Part - “To begin with, he/she mentions that…” Second Example/Second Part - “Next, he/she says that…” Tips and Tricks - Try to use transitional phrases like “as a result,” “consequently,” “moreover,” and “therefore.” - Spend about 10-13 seconds summarizing the reading… at most. Remember that most of your score is based on the listening summary. - If you are a slow speaker, omit the “small amount of detail” part of the template. - Use a mix of simple and compound sentences if possible. - Paraphrase. Don’t just copy the sources word for word. (this is based on a question from the official ETS practice set) State the Term or Idea - The reading is about revealing coloration. Give a Small Amount of Detail from the Reading - It states that this is a strategy used by certain animals to protect themselves. By suddenly revealing colorful parts of their body they can confuse predators and escape. - The professor elaborates on this by providing two examples. State the First Example or First Part - To begin with, he mentions that while the front wings of the peanut bug blend in with its environment, its back wings have very colorful spots. These back wings are usually closed, but when it is attacked by a predator it can quickly open them and reveal the colors. As a result, it is able to escape to safety. State the Second Example or Second Part - Next, he says that hidden parts of the morpho butterfly’s wings are very shiny and can reflect sunlight. When a bird approaches the butterfly it suddenly flaps its wings to reflect light and confuse it. The bird can only see the light reflected from the wings, and therefore the butterfly is able to evade capture. 
The motion unit provides some background information necessary for further physics studies. More importantly, it will provide students with a general feel for senior physics. I cannot stress to you enough just how important it is for you to do all of the practice problems and homework. In order for you to learn it you need to practise it. I have included a list of major concepts, some detail you should be familiar with and some links to numerous sites which will allow you to view, download or print material which is directly related to these concepts. In addition there may be some material which is of interest to you (hey you never know!). The links will assist you if you run into any difficulties and have specific questions. Scalars and Vectors - Distinguish between and provide examples of scalars and vectors - Add collinear and non-collinear vectors algebraically and graphically. - Draw and calculate average velocity and instantaneous velocity from graphs - Draw position-time and velocity-time graphs. - Calculate constant acceleration and displacement - Derive equations from graphs - Distinguish between constant, instantaneous, and average speed and velocity - Derive and/or use equations to solve problems involving different forms of: velocity, speed and acceleration I WILL POST AS MANY APPROPRIATE REFERENCES AS POSSIBLE WHEN I FIND THEM
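The constant-acceleration relationships in the list above can be checked numerically; a minimal sketch (function names are illustrative):

```python
def final_velocity(v0, a, t):
    """v = v0 + a*t (constant acceleration)."""
    return v0 + a * t

def displacement(v0, a, t):
    """d = v0*t + 0.5*a*t**2 (constant acceleration)."""
    return v0 * t + 0.5 * a * t**2

# A car accelerates from rest at 2 m/s^2 for 5 s:
print(final_velocity(0, 2, 5))   # 10 (m/s)
print(displacement(0, 2, 5))     # 25.0 (m)
```

Plugging numbers into the equations this way is a good habit when checking homework answers: if the units and the graph-derived values disagree with the formula's output, one of them is wrong.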
The oceans cover 70 percent of the planet’s surface and constitute 99 percent of its living space, and every drop of ocean water holds living things. Without its oceans, Earth would be a rock in space, and life may never have appeared on our planet. The sea is the great experimental laboratory of evolution. In three billion years of Earth history, its waters have nurtured nearly every form of life that has ever existed, including probably the first entities that were truly alive. The ocean is home to the greatest part of Earth’s biodiversity, containing 90 percent of the major groups of living things. They range from immense to minute and live everywhere, from geysers on the seafloor to the lips of lobsters. In the 20th century, new technology enhanced traditional collecting methods to locate organisms and characterize their habitats. Satellite pictures of light reflected from chlorophyll in the ocean revealed broad patterns of phytoplankton abundance, and satellite maps of ocean temperatures helped us understand the distribution of pelagic animals. Submersibles, manned and robotic, explored parts of the deep ocean never before visited, retrieving images and specimens of creatures new to human knowledge. Even surface waters yielded new discoveries of ubiquitous microbes, including photosynthetic bacteria responsible for half the primary production in the ocean. Revolutionary biotechnology concepts and methods, applied to life in the sea, helped us discover new organisms, untangle evolutionary relationships, explain adaptations, and reveal fundamental mechanisms of life. Entering the 21st century, ocean biology faces tremendous challenges—not only to understand the complex ecosystems of the sea, but to learn how to maintain the integrity, productivity, and resources of the ocean for the future. The sea and its biology is crucial for us and our planet—for balancing oxygen and carbon dioxide, for maintaining genetic diversity, and for producing food. 
Human civilization is putting increasing pressure on ocean life, from overfishing, nutrient pollution, waste dumping, and climate change due to greenhouse effects. These are large and complex problems; understanding and alleviating them is essential. But the promise is also great. We know the major problems and largely how they came about. We now understand better how fish populations respond to fishing pressure, how toxins affect marine animals, how nutrients stimulate phytoplankton blooms, how whales react to noise, or how species diversity maintains stable ecosystems. Much of the information and technology for defining problems and identifying solutions is within our grasp, or will be soon. That knowledge and capability give us the basis for action to understand, sustain, and restore the ocean’s ecosystems. Public awareness, funding, regulatory action, and economic adjustments are also needed, but with continued research, we can ensure that the necessary scientific knowledge will be at the ready. The Ocean Life Institute (OLI) fosters research in ocean biology under three broad charges: Discover Life, Sustain Ecosystems, and Develop Tools. The goal for the OLI is to support pioneering basic science, both for its own value and to help solve important ecological and societal problems of the ocean. This theme broadly includes exploration, discovery, and characterization of ocean organisms. The OLI has funded studies on new deep-sea microbes, fossil corals, and magnetic bacteria. Discovery may mean new information about where organisms live, how they evolved, and how their particular traits fit into the tapestry of marine communities. Often discovery happens when we look in familiar places with new tools and techniques. Organisms together create communities that then provide stable habitats. 
These ecosystems, whether as small as a single coral head or as large as the Sargasso Sea, are maintained by the interaction of the particular organisms in the ecosystems with environmental forces, such as temperature, currents, nutrients, and sunlight. Changes in the abundance and diversity of key species (perhaps due to fishing or toxicity) or in the physical or chemical environment (from climate change or excess nutrients) can upset an ecosystem’s equilibrium and lead to dramatic shifts that could decimate resources or imperil species survival. The OLI sponsors studies on toxicity of copper mine waste to seaweeds and industrial chemicals to fish, on responses of whales to stress, and on mathematical models to help manage fisheries and save threatened albatross populations. Even as it makes ocean life possible, water impedes research, and we need special equipment and techniques to extract specimens and information from the depths. New electronics, optics, computers, and molecular biology add a huge range of possibilities for tools to explore ocean life. Such tools, including biological and chemical sensors, can be deployed in many ways. Whether lowered from ships, borne on submarine vehicles, or mounted on moored or mobile observatories, these new sensors yield information on organisms both at small scales and over large distances and long time periods. New tools developed with OLI support include imaging systems for phytoplankton cells, heartbeat monitors for whales, and molecular probes to sample and identify microbes. Stewardship of the future rests on today’s knowledge. Important decisions must be made soon about how to conserve, restore, or manage ocean environments and resources. Such efforts have often failed, lacking accurate information about biology and ecology. The vision for the WHOI Ocean Institutes includes furnishing knowledge and awareness to those who need to use solid scientific information to benefit society and the environment. 
With this goal, the OLI has launched two research initiatives: first, to provide focused scientific information to help conserve the highly endangered North Atlantic right whale (see "Scientists Muster to Help Right Whales"); and second, to provide life-history data needed for effective policies to regulate fishing on coral reefs and enable the rejuvenation of important reef species (see "Tracking Fish to Save Them"). People breathe the ocean’s oxygen, eat its fish, and marvel at the beauty of its inhabitants. But we also overreach in our harvest, pour our wastes into ocean waters, and damage the framework of many habitats. Achieving a new balance with the ocean will prove a challenge for the burgeoning human population, but one that can be met if we inform our actions with scientific knowledge.
Monitoring your child's Internet use can help to lower the odds that he or she will find him or herself in harmful or dangerous situations. Unsupervised children on the Internet, like anywhere else, have greater opportunity to experiment with risky behaviors. This is becoming more and more difficult as our connections to the web become more portable. It was once relatively simple to ensure that internet-connected computers were in an open, common area of the house. But now that most cell phones are also wired for the Internet, and many game consoles like the Wii, PlayStation, Nintendo, and X-Box as well as most online games are also social networking platforms, we need to be more aware of the times that young people are online and in contact with others, and help them to keep in mind that the Internet is an inherently public medium. Learn about the digital devices that your teen uses: computer, cell phone, game consoles, etc. Learn about what each device does, how it works, and how your teen uses it. Consider using the same technology that your kids use so that you can better understand what they are doing and how you might effectively teach and/or monitor their behavior. For example, you might set up your own account on a social networking site that your teen uses or communicate with them over text messaging to check in. Remember that kids can access the Internet in locations outside your home, such as school, the library, or the homes of friends. Find out the safeguards used in these locations. But mostly, empower your child with the tools s/he needs to make good decisions no matter where s/he is using the web. Establish and communicate clear and consistent rules about computer and Internet use - no matter whether it is at home or away. Set consequences for breaking the rules and make sure your child understands them.
Explain which sites are inappropriate and off-limits, how much time and which times of day computer use is allowed, and which information cannot be shared online. Other possible rules to consider:
- Never agree to meet an online friend.
- Do not download music, programs or other files without permission (this can be illegal and may place your computer at risk of viruses or other dangers).
- Do not give out email addresses online, do not respond to junk mail, and use email filters to protect users from spam.
- Do not make any financial transactions online (buying, selling, ordering, auction bidding) without permission.
- Do not gamble online. Online gambling is illegal and risky.
- When communicating online, act responsibly and ethically. The Internet should not be used for gossip, bullying, or threats.
Always maintain access to your child's online account and randomly check his/her email. Be up front with your child about your access and your reasons why. Encourage your children to tell you if something or someone online makes them feel uncomfortable or threatened. Stay calm and remind your kids they are not in trouble for bringing something to your attention. Praise their behavior and encourage them to come to you again if the same thing happens. Talk to your teenagers about online adult content and pornography, and direct them to positive sites about health and sexuality. There is a wide range of software available to parents and guardians that holds itself out as helping to monitor Internet use. Remember that no filtering or blocking system is fool-proof. And since children can access the Internet outside of your home, you can never monitor what they are doing 100 percent of the time. Parents will always need to remain involved in their child's online life. Children and teens will always need to know what their families' rules are and how to stay safe.
Different software can: - Filter inappropriate content (content that is sexually explicit, hateful or intolerant, graphic and violent, illegal, or any other content you define as inappropriate); - Monitor computer activity (with or without your child's knowledge) without necessarily limiting access, such as recording the addresses of websites visited or providing warning messages for visiting inappropriate sites; - Set limits on the times of day and lengths of time that children can go online or use the computer; and - Block content being sent from your computer, such as personal information, to help supervise kids' behavior in online communication. Many operating systems will include filtering or similar software within their packages.
Many diets are missing seven essential nutrients: calcium, fiber, magnesium, vitamin A, vitamin C, vitamin E and potassium. These essential nutrients are necessary for a healthy diet. Calcium can be found in soy beverages, milk and yogurt. It is essential for bones to grow and for them to remain strong, no matter what age. Fiber is an important part of a healthy diet because it helps to regulate the digestive system and keep the stomach full after a meal. It can be found in fruits, vegetables and whole grain products. Magnesium can be found in quinoa, pumpkin seeds and legumes. It is an important nutrient because it helps promote immunity and helps the heart function better. Vitamin A helps the body promote tissue growth and can increase immunity; it can be found in spinach. Vitamin C promotes the immune system and helps increase the rate of collagen growth in the body. It can be found in sweet bell peppers and citrus fruits. Vitamin E is a strong antioxidant and helps neutralize free radicals; it can be found in sunflower seeds and nut butters. Potassium works to help produce energy and support the skeleton. It can be found in legumes, bananas and broccoli.
Hydroelectric energy is, as we all understand, one of the five forms of Renewable Energy Sources (RES), and as the current facts and figures indicate, hydroelectric energy is the number one renewable energy source. It is one of the 3 forms of water energy, or hydro energy as it is otherwise known. The other two forms are tidal energy and wave energy. Hydroelectric energy is the power from either falling or moving water. Hydro is the Greek word for water. Falling water has gravitational energy which, when captured, is converted into useful energy to produce electricity or to turn mills, as was the case in ancient Greek and Roman times. Hydroelectricity is the electricity produced from hydroelectric power. As is the case with all forms of energy, hydroelectric energy has both advantages and disadvantages. In this case we will present the pros and cons of hydroelectric energy (power), from data we have gathered from various sources, in an objective way.

Hydroelectric Energy (Power) Pros
- Constant Production Cost: The cost of hydroelectric energy is relatively constant and does not fluctuate due to political and other conditions. This means that electricity produced from hydroelectric power has a more or less fixed cost, thus enabling sustainable economic growth in regions dependent on hydroelectricity.
- Free Energy Source: Hydroelectric energy uses water, which has zero cost, as the means to produce energy. Water is free and comes from rivers or from the outflow of dams.
- Clean Energy: It is a clean, renewable, green energy source which does not pollute the environment with CO2 emissions as fossil fuel energy does.
- Renewable Energy: Hydroelectric energy as a renewable energy does not depend on finite resources like fossil-fuel-based energy sources. After all, this is the reason it is called renewable. It depends on water, which is renewed constantly through evaporation and rain.
- Minimal Operational Cost: Hydroelectric energy plants require minimal staff to operate and have a long life, about 50-100 years. These are two of the reasons for the low cost of the hydroelectricity produced.
- Low Failure Rate: Hydroelectric plants have low failure rates, making them a reliable and dependable source of energy.
- Variable Size: Hydroelectric plants can be built at almost any size so as to suit the environment prevailing in the area; they do not have a fixed size as other types of energy plants do. For this reason there are 3 main technologies/methods used in hydroelectric plants.
- Controllable Production: If the hydroelectric plant is associated with a dam, the electricity comes at the lowest cost, and if electricity is not needed the water flow can be stopped, thus saving energy.
- Manageable Production: Hydroelectric energy is a completely controllable energy source, which means that hydroelectricity can be produced as and when needed, enabling electricity authorities to manage the peaks and valleys of their electricity demand.
- Quick Start and Shutdown: The time required for a hydroelectric plant to start and shut down is much lower than for other conventional power stations. While conventional power stations take about 8 hours to start, a hydroelectric plant takes only a few minutes. This makes hydroelectric plants suitable to cover unplanned needs or to act as standby plants to handle emergencies.
- Alternative Uses: Dams which are built for the production of electricity can also be used for other activities, such as water recreation, or to breed fish. This means that alternative economies are built around hydroelectric plants, giving economic diversification in the area and an alternative way of living for people around dams.
Hydroelectric Energy (Power) Cons
- Need for Large Areas: The first con of hydroelectric energy is the need for a large area on which to build a dam in order to produce hydroelectric energy from falling water. Large areas are not always available without the creation of problems and side effects.
- Destruction of the Environment: The creation of large dams for the needs of hydroelectricity disrupts the surrounding environment, with all the relative consequences.
- Greenhouse Gases: When hydroelectric plants are built in forest areas, greenhouse gases are released from the decaying trees. This is unavoidable, but it may be controllable in magnitude and finite in duration.
- Risk and Threat for People: Hydroelectric energy plants cause the displacement and disruption of the lives of people living in the area, since they are relocated either due to the construction of a dam or, if they live in areas of possible flooding, for security reasons.
- Destruction of Marine Life: The creation of dams prevents the flow of silt, microorganisms and plankton downstream to the beaches and estuaries, leading to the destruction of sea life in that area.
- Destruction of Fish Passages: The building of dams along rivers destroys fish passages which have been there for thousands of years.
- Drought: In some cases drought may affect the flow of water either to the dam or through the hydroelectric plant and thus affect the production of electricity.
- High Initial Cost: Although hydroelectric plants last for 50-100 years, they are expensive to build, even though the return on investment (ROI) may justify their construction.
Hydroelectric energy technology used in hydroelectric plants has evolved and adapted according to the size of the plant and the magnitude of the water resources available in the area of the plant. There are 3 main technologies/methods used to build hydroelectric plants.
Hydroelectric energy is a renewable green energy source on which we can depend and invest. In the USA, 96% of the renewable energy produced is hydroelectric energy, and about 5,000 more sites in the USA have been identified as potential sites to host hydroelectric plants, giving hydroelectric energy an even greater potential.
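The "power from falling water" described above can be estimated with the standard hydropower formula P = η·ρ·g·Q·h (turbine efficiency × water density × gravitational acceleration × flow rate × head). A minimal sketch (the head, flow, and efficiency figures below are made-up example values, not data from this article):

```python
# Estimate the electrical power of a hydroelectric plant:
# P = efficiency * water density * gravity * flow rate * head
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydro_power_watts(head_m, flow_m3_per_s, efficiency=0.9):
    """Power in watts from water falling head_m metres at flow_m3_per_s."""
    return efficiency * RHO_WATER * G * flow_m3_per_s * head_m

# Example: a 50 m head with a flow of 20 cubic metres per second
power = hydro_power_watts(head_m=50, flow_m3_per_s=20)
print(f"{power / 1e6:.1f} MW")  # roughly 8.8 MW
```

The formula makes the "variable size" point above concrete: doubling either the head or the flow doubles the output, which is why plants can be scaled from micro-hydro installations to giant dams.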
[Thanan] Evolution of Dromae Hunting Adaptations There is significant debate amongst dromae paleontologists about whether their primitive dinosaurian ancestors were capable of flight. It is clear that their distant cousins the birds are, and that some species closely related to raptorian ancestors may have been more than gliders. Dromae paleontologists are firm that they are not descended from birds. The presence of teeth, prominent primary feathers on the legs, a partially keeled sternum and shoulder girdle adaptations indicate the split was from the protoavians. Unfortunately the small size, fragile bones and forested habitat combine to form a sharp gap in the fossil record. Primitive raptorians next become obvious in desert salt-lake deposits, and there are multiple potential paths that could have brought them there. It is thought that small primitive raptorians may have roosted in trees and outcroppings and retained the legwings for ease of return to the ground for hunting. Hunting techniques are thought to be similar to those used by young dromae or by adults against small prey: the prey is pounced upon from a distance, with the spread arms lengthening the jump and acting as large control surfaces while the tail and its feather vanes stabilise and make fine alterations. Upon landing on the prey, the sickle claws sink into its flesh to provide an anchor point while the arms and their protowings are rapidly flapped to stabilise the hunter and prevent falling. The jaws are used to tear into the prey and inflict injuries. Prey death is by shock and blood loss from the combined teeth and claw wounds, while more skilled hunters will dispatch the prey with a targeted bite to the neck or spine. Larger prey is hunted communally. While the sickle claw of dromae is still curved, unlike ancestral dinosaurs the base is straighter, the claw more pronounced at the tip, and the inner surface sharper.
After cutting out and harrying a chosen large prey animal, pack members will leap onto its back and flanks, avoiding the threatening front quarters where possible. Front and rear claws are used to grip the prey’s hide, sometimes supplemented by jaws, while one or both legs kick out with sickle claws to inflict deep injuries. Once the prey has been sufficiently weakened and dragged down, the neck and throat can be safely targeted and the prey dispatched. The scaled belly and inner thighs of the dromae are an adaptation for this method of predation. A struggling prey animal is challenging to cling to, and the smooth surface of feathers lacks grip. Contour feathers are likely to suffer significant damage by being scraped over rough hide while offering little protection in return as the hunter is shaken. The naked face reduces fouling by blood and offal as the dromae feeds, since it will be reaching deeper into the body cavity given the size of the prey. If crests were not such an important visual communication tool, and if ancestral Arenicalia had scavenged more frequently, it is possible they would have lost their head and neck plumage altogether, as some distantly related species have done.
The toxic chemicals that are used in the manufacturing and production of textiles are often less recognized than the toxic chemicals that are present in our food and water. But these toxins are just as important. Every person wears some form of clothing every day, which exposes them to a multitude of various toxic chemicals that are harmful to both humans and the environment. Awareness of the negative effects of these toxins has been increasing over the years, which has led to the search for less harmful and more sustainable alternatives to the processes and products that are used to manufacture and produce textiles. One particular group of chemicals that has gained attention over the years is perfluorinated compounds (PFCs). PFCs are used in the manufacturing and production of many products, such as food packaging and clothing. Specifically, PFCs have the ability to make a product resistant to oil, stain, and water. This makes the usage of PFCs as a water-proofing agent extremely effective for outdoor apparel. One of the most popular brands of water-proof fabrics to utilize the properties of PFCs is Gore-tex, which first appeared during the 1970s when it became commercially available on many products being sold by companies such as Columbia Sportswear. So what is the problem with PFCs? Some PFCs bio-accumulate in humans and the environment and do not degrade by natural processes. This means that they remain in the environment as persistent organic pollutants and act as greenhouse gases. Studies show that PFCs are present in the wastewater from PFC manufacturing plants and drinking water near PFC manufacturing plants in multiple states. Data from these studies have indicated that PFCs can cause several types of tumors and neonatal death, as well as toxic effects on the immune, liver, and endocrine systems of mammals, fish, and bird wildlife. 
While PFCs have been produced, used, and disposed of without regulation for the last sixty years, new regulations are being implemented to reduce their impact on the environment. Numerous investigations by the EU and EPA have addressed the negative impact that PFCs have on the environment, though the relationship between PFCs and human health effects is still not fully understood. |Figure 1: Global PFC emissions by world region (1970-2005).| |Figure 2: PFCs in women ages 16-49 years (1999-2008).|
Kindergarten students were formally introduced to the difference between high and low this week. After reviewing the chant Two Little Puppets, the puppets were hidden behind Mrs. Aaronson's back and students were asked to guess which puppet was talking. This activity gave students the opportunity to differentiate between high and low voices. Students were then led in a discussion about how they knew which puppet was speaking and led to discover that one puppet has a high voice while the other has a low voice. Students then reviewed the song Higher Than a House and used the words high and low to identify the last note of the song. Students also moved their bodies high and low as the puppets Bella and Bo tried to play the piano! Additionally, students explored the upper and lower ranges of their voice by following a ribbon up and down with their voice. First grade students learned a new song entitled Acaka Backa. This folk song contains a fun chase game similar to duck, duck, goose. Students enjoyed playing the game and after mastering the song, decoded which notes were sol mi and la. Students also enjoyed playing a 'wind game' that is similar to the game hot/cold. One student hid their eyes while another student hid a blue puff ball somewhere in the music room. The class helped the student who hid his eyes find the puff ball by making low and high 'wind' sounds with their voices. The lower the students' voices, the farther away from the puff ball the student was. The higher the students' voices, the closer to the puff ball the student was. Lastly, students enjoyed learning about musical form by dancing to Cumberland Square. This two-part dance to two-part music allowed students to get their hearts pumping while learning about patterns in music! Second grade students reviewed new low note do this week. Students reviewed the note's placement on the staff in relation to the other notes that they know. 
Students practiced singing melodic patterns containing do and practiced writing the pattern do mi sol la on the interactive whiteboard. Students also learned a new song entitled Mouse, Mousie. This song includes a fun cat and mouse chase game in which the student playing the mouse does not know who the cat is until the end of the song! This song will be used to help students practice melodic note do. Students began learning a jump rope song entitled Mother, Mother, which will also be used to help students practice melodic note do. Lastly, students reviewed half notes, quarter notes, and eighth notes through a song entitled Farmer John. Third grade students were formally introduced to dotted half notes this week. Students were shown how a three-beat note can be written as a half note tied to a quarter note or as a dotted half note. Students then played a game called poison pattern to practice this new note. In the game poison pattern, students echo all patterns clapped by the teacher except for the one designated as the poison pattern. Students also began learning a new song entitled Music Alone Shall Live, which will be used to help students practice dotted half notes. Lastly, students sang, read, and identified melodic patterns containing melodic notes low sol, low la, do, re, and mi. Students identified melodies containing steps, skips, and repeats. Students completed their melodic post-assessment this week and have made great progress on their melodic note reading since the beginning of the year! |Some of the third grade students thought the dotted half note was like Batman and Robin. Batman is the main character, the half note, yet to be truly powerful he needs his trusty sidekick Robin, the dot.| Fourth grade students began the week with a recorder day. Students reviewed the song Hop, Skip, Jump and tried playing it at a faster tempo! Students were also given the opportunity to test for their next recorder karate belt.
During the rest of the week, students reviewed the differences between bands and orchestras and began learning about jazz. Students were introduced to the instruments of the jazz band and jazz techniques. Students learned about improvisation and scat singing. Students listened to and discussed recordings of the First Lady of Song, Ella Fitzgerald, and jazz legend Louis Armstrong. Fifth grade students continued their music evaluation unit this week. Students were introduced to listening maps and followed instrument/instrument family listening maps for Mussorgsky's Promenade and Saint-Saëns' Fossils. Students identified qualities of a good listening map and suggested ways to improve the listening map for Fossils. Students were led to discover how the theme of a piece of music can be used when creating a listening map. For example, one listening map for Fossils included dinosaurs. Students also reviewed dynamics terms and examined a listening map for Dvořák's Slavonic Dance which contained many dynamic markings. Slavonic Dance was also used to review form. Students were led to discover how the capital letters marked throughout the map indicate repetition. Students also worked with partners to pair the English and Italian words for dynamics. |A pair of fifth grade students correctly matched English and Italian dynamic terms.|
A close look at the Apollo 11 EVA footage shows ghostly astronauts, which of course has launched speculation that the footage is faked. If NASA could get to the Moon, why couldn’t it capture good video?! The footage wasn’t faked. The poor quality and ghostly look is an artifact of the odd way NASA had to convert the lunar footage to a format that could be broadcast. To understand this, we have to unpack how exactly TVs worked in the mid 20th century. In the United States at least, the cathode ray tube technology that yielded television remained pretty much unchanged from the 1950s until the 2000s. The black and white TVs gave way to colour without too much change, but then flat screen technology took over and our living rooms became less cluttered. But for the moment we’re interested in black and white cathode ray tube televisions, and it all starts with the camera, so let’s start there. Inside an analogue video camera, the image or scene being filmed is focussed through a lens onto a photosensitive plate. That plate is scanned by an electron beam. Two coiled wires around the camera tube deflect the beam so it scans in lines, left to right, top to bottom, covering the whole plate. It does this 30 times every second, which, incidentally, is where we get the standard frame rate of 30 frames a second. As it scans, information is encoded on that electron beam: the brighter the point on a frame, the higher the point on the wave. The signal — that beam — is then sent to a monitor. The monitor has its own electron beam, which is changed by the voltage according to information from the camera. The voltage is increased when the wave is higher, corresponding to a brighter point. That electron beam is pushed to the front of the monitor to strike a screen coated with phosphor. Every electron strike yields a point of visible light, and the higher the voltage the brighter the point.
The beam scans the monitor’s screen the same way it did the image on the plate in the camera, line by line 30 frames each second, leaving behind points of light mirroring the original filmed scene. Our brains put those dots together to form an image. There are other signals encoded in the beam that help create the video. Synchronizing signals tell the beam when to transmit no light at the end of a line, when to go back up to the top of the screen, and to make sure that the lines are aligned properly to avoid a wavy image. But there’s another element to the video image, a slight complication born of the technological limits of the era. The image we see on a screen is the glow of an electron hitting the phosphor on the faceplate. Each point glows then fades. If the electron beam scanned the screen top to bottom, the image at the top would be faded by the time the image was shown on the bottom. The solution was to break up each frame. A standard broadcast frame has 525 lines, so each frame is broken into two fields with 262.5 lines each, which also means the 30 frames a second becomes 60 fields a second when you’re interlacing the image. The beam fills in all the odd numbered lines first and then returns to the top to fill in the even numbered lines. It’s a process called interlacing — two fields of video are put together to create one frame. Another electrical pulse in the beam ensures the two fields are properly interlaced with one field coming in a half line after the other, and it all happens so fast our brains just see a clean image. As we know from the name — cathode ray tube — this technology relied on tubes. It was the tube inside the monitor that generated the electron beam inside a vacuum vessel, hot wires providing the electron beam, and coiled wires deflecting the beam so it could scan the plate or screen. It was a hot, heavy system that drew a lot of power. And none of those are things you want to have when you’re working on the Moon. 
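The interlacing scheme described above is easy to sketch: with 525 lines per frame, one field carries the odd-numbered lines and the other the even-numbered lines. A minimal illustration (the line numbering here is simplified and ignores blanking intervals):

```python
# Split a 525-line broadcast frame into two interlaced fields:
# the beam draws the odd-numbered lines first, then returns to the
# top of the screen and fills in the even-numbered lines.
TOTAL_LINES = 525  # lines in one standard broadcast frame

def interlaced_fields(total_lines=TOTAL_LINES):
    field1 = list(range(1, total_lines + 1, 2))  # odd lines: 1, 3, 5, ...
    field2 = list(range(2, total_lines + 1, 2))  # even lines: 2, 4, 6, ...
    return field1, field2

odd_field, even_field = interlaced_fields()
print(len(odd_field), len(even_field))  # 263 and 262 lines per field
```

At 30 frames a second this gives the 60 fields a second mentioned above; averaging the two unequal fields is where the 262.5-lines-per-field figure comes from.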
So NASA used a simpler camera for Apollo 11’s moonwalk. For simplicity’s sake, Apollo 11 used black and white cameras, which had the added benefit of using less bandwidth when the signal was sent from the Moon to the Earth. The camera had one imaging tube that scanned at just 10 frames a second with 320 lines per frame. There was no interlacing. The bandwidth was also low: 0.4 MHz vs. 5 MHz for what was then standard broadcast. Adding to the low quality video, the vidicon type of imaging tube caused a lag, adding a bit of a smeariness to the image. Bandwidth and smeared image aside, the 10 frames per second, 320 lines, and no interlacing made for a wholly incompatible type of image that couldn’t be seen on pretty much any TV system in the world when it came back from the Moon. Before it could be broadcast it had to be converted by systems installed at certain ground stations, generating the right kind of broadcast signal. This was a two-stage process. First, another vidicon camera was set up facing a TV screen showing the lunar footage. This camera recorded the video at a rate of 60 fields per second, but only when there was a full image on the screen. This meant that the converted video had a full image every tenth of a second. Only one out of every six fields contained an image. So the next step was replacing the missing five fields. To do this, the good field was recorded onto a magnetic disk then replayed five times. This yielded the necessary 60 fields per second of 262.5 lines each, the same as 30 frames per second of full 525-line frames. The signal was ready for broadcast around the world via radio dishes, the same way TV was always broadcast at the time. The repetition of frames, however, gives the footage that super low quality, ghostly look. Even if it’s not great, we’re lucky to have had a live broadcast of Apollo 11’s landing at all. The mission was about politics and technology, not about television.
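The field-repetition step just described can be sketched as a toy model: each incoming 10 fps lunar frame supplies one good field, which is then replayed five more times to fill out the 60 fields per second the broadcast standard required (frames are treated here as opaque objects, ignoring the 320-line-to-262.5-line rescan):

```python
# Toy model of the Apollo scan conversion: lunar frames arrive at
# 10 frames per second; the ground station records one field from
# each and replays that field five more times from a magnetic disk,
# producing a standard 60-fields-per-second broadcast stream.
FIELDS_PER_LUNAR_FRAME = 6  # 60 fields/s divided by 10 lunar frames/s

def convert_to_broadcast(lunar_frames):
    """Expand 10 fps lunar frames into a 60 fields/s broadcast stream."""
    fields = []
    for frame in lunar_frames:
        fields.extend([frame] * FIELDS_PER_LUNAR_FRAME)
    return fields

one_second = convert_to_broadcast(list(range(10)))  # ten lunar frames
print(len(one_second))  # 60 fields for one second of footage
```

The six-fold repetition of each image is exactly the "one out of every six fields contained an image" arrangement, and it is a large part of what gives the surviving footage its ghostly, smeared motion.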
Wally Schirra somewhat infamously resisted live TV broadcasts from Apollo 7, arguing that they would interfere with the mission's primary goal of supporting the eventual landing on the Moon. But when Apollo 8 broadcast a live image of Earth from the Moon around Christmas in 1968, audiences were glued to their televisions, and NASA realized that sharing the landing live with the world would not only have an impact on everyone, it would shape how the world remembered the program. The original plan to send Apollo 11 with only a 16mm movie camera, without enough film to record the whole moonwalk, was scrapped in favour of this somewhat awkward system for bringing live images from the Moon to the world. Source: How Apollo Flew to the Moon by David Woods; Basic TV Technology: Digital and Analog by Robert L. Hartwig.
Although paleontologists agree that the history of life on earth has been punctuated by five great extinction events, the causes of these catastrophes, including the one that killed off the dinosaurs, are subjects of endless controversy. A new hypothesis based on cosmic "neutrino bombs" promises to complicate the debate still more. If the hypothesis is correct, the human race itself has one more hazard to think about, a danger against which no protection seems possible. It may be that once every 100 million years or so, a flood of all but undetectable subnuclear particles surges through the earth, causing an epidemic of fatal cancers and genetic mutations with dire results for many species. The scientist who conceived the idea notes that the human race has existed for less than one-twentieth of this span, and the danger, if it exists, is probably remote. In a paper accepted for publication by the journal Physical Review Letters, Dr. Juan I. Collar, a Spanish astrophysicist at CERN, the European Laboratory for Particle Physics near Geneva, suggests that high-energy cosmic neutrinos spawned by collapsing stars could pose a much greater risk to life on earth than has been supposed. Neutrinos are tiny particles that have no electric charge and little or no mass. They interact with atoms of matter so rarely that the average neutrino can pass through the Earth (or even the entire universe) without hitting anything. Scientists assume that great numbers of neutrinos constantly stream through human beings and everything else on Earth, causing neither sensation nor discernible injury. Part of the neutrino flow reaching the Earth comes from the thermonuclear reactions that fuel the Sun, but there are too few solar neutrinos with high enough energy to cause appreciable biological damage, Dr. Collar said. But the torrents of neutrinos produced by the quick collapse of massive stars are a more serious matter, he speculates. 
In such a star's final stage of collapse, when it has used up its nuclear fuel, the star's ordinary atoms are crushed by gravity into a kind of super-dense neutron soup, and most of the "binding" energy that had held together the original atomic nuclei is released in the form of neutrino particles. The collision of any one of these neutrinos with an atom anywhere in the universe is highly unlikely. But because stellar collapses produce such astronomical numbers of high-energy neutrinos, the chances that some would hit other atoms are greatly increased. Neutrino detectors built at laboratories in various parts of the world usually consist of enormous tanks of water, in which the rare impact of a neutrino produces a tiny flash of light. When a high-energy neutrino hits an atom, it transfers most of its "recoil" energy to the atom, which then becomes a microscopic but potentially deadly projectile. A recoiling atom can rip deeply into biological tissue, releasing its damaging energy very rapidly along its track, destroying cells essential to life, causing mutations of DNA genetic material, and initiating cancers. Taking into account calculations by Dr. John Bahcall of the Institute for Advanced Study in Princeton, N. J., Dr. Collar suggests that about once every 100 million years, a "silent" stellar collapse -- one that does not produce a visible supernova explosion -- may occur close enough to Earth to have catastrophic effects. By making some assumptions based on known biological effects of different types of radiation, Dr. Collar estimates that besides killing many animals outright, the neutrinos from a close "silent" stellar collapse would produce 12 cancer sites per kilogram, or 2.2 pounds, of body weight. This, he said, would be "an insult that would be severe enough to kill a vast percentage of large animals with a frequency comparable to that of most major extinctions." 
A spectacular nearby supernova would have even more devastating effects, of which neutrino damage would be only one. Fortunately, astrophysicists estimate that the explosion of a relatively nearby supernova is so rare that the odds are that there would have been at most one since life on Earth began some three billion years ago. Dr. Collar acknowledges that his hypothesis is "purely speculative," especially since biological effects of neutrino impacts have never been identified. "This idea is really an outgrowth of a mass extinction hypothesis developed by John Ellis, a theorist at CERN, and David Schramm of the University of Chicago," he said. Dr. Ellis and Dr. Schramm made calculations suggesting that a supernova occurring anywhere within about 33 light-years of the Earth would produce a blast of cosmic rays that would destroy the Earth's protective ozone layer and expose its creatures to deadly solar ultraviolet radiation. "I can't pass judgment on Dr. Collar's hypothesis," Dr. Ellis said in an interview, "but as far as I can tell, he's estimated the figures correctly. We know that neutrinos are produced by stellar collapses, because neutrino bursts were detected in the United States and Japan in 1987 at the same time a relatively nearby supernova flared up. But, of course, we can't be sure of possible biological effects." Paleontologists who have studied mass extinctions revealed by the fossil record seemed unimpressed with Dr. Collar's "neutrino bomb" hypothesis.
The PBL Methodology CERTL teachers in all grade levels and subject areas are trained to produce PBL activities following a standard methodology. Each CERTL approved "case" is facilitated by a CERTL qualified teacher in the following manner: - A student reads the problem aloud in group. - Students identify the facts, “What they know.” - Students identify Learning Issues, “What they don’t know.” - Students identify what could be going on, their ideas to move them forward in exploration. - Students make decisions about how to proceed. - Students acquire new information through research or additional resources. - Students test their ideas against new knowledge, re-rank ideas as needed. - Students continue to acquire new information and integrate it with what they know. - Students arrive at the most viable and defensible hypothesis/solution. At the End of the Day... - Students know what they know with confidence. - Students can identify what they do not know/need to know. - Students can efficiently and effectively acquire new information, integrate it with existing knowledge, and use it to move towards problem resolution. View and download sample PBL Cases.
Are you wondering when to use a comma? Read the following article for an elaborate explanation of comma usage. When To Use A Comma People often ignore punctuation, unaware of the fact that the absence of proper punctuation might just kill the essence or the depth of a sentence. The comma was initially introduced with the intention of providing a pause for breath while reciting a particular fragment or a complete text, and it continues to do so even today. In fact, the absence of a comma can change the entire meaning of a text, even to the degree of absurdity. Yet, people often ignore its significance and hardly take the trouble to use it. For instance, take this sentence, "He invariably throws darts to the left and then dashes for a touchdown," compared to saying, "He invariably throws, darts to the left, and then dashes for a touchdown." While in the first statement the person seems to be throwing darts to his left, in the second he seems to be darting to the left after throwing something. See how a misplaced comma can sabotage the complete meaning? So, in order to avoid such silly and embarrassing errors, here are some guidelines on when to use a comma. Read the article below to learn when and where to use a comma. - Commas are used when the objective is to avoid confusion. The use of a comma is essential when separating independent clauses. E.g., The kid was hurt, but he didn't cry. - Commas should be used when separating two adjectives qualifying the same noun. E.g., Thomas is a strong, well-built man. - A comma must be used when a name appears in a sentence and a person is being addressed directly. E.g., Will you, Kunal, help me lift this box? - A comma must be used to separate dates and addresses. E.g., Caroline met her boyfriend on October 3, 2010, in the splendid city of Sydney. - While mentioning a state and a city consecutively in a sentence, a comma should be used in between. 
If, however, the abbreviation of the state is used, then a comma need not be used. E.g., 'I lived in Tamil Nadu, India, for ten years.' or 'I lived in San Francisco, CA for ten years.' - Commas can be used to highlight or set off degrees or titles used with names in a sentence. It is, however, no longer considered necessary to use commas around Jr. or Sr., and commas are never used to set off II, III, and so on. E.g., Eric Rogers, C.E.O., was the first one to arrive at the party. - When introductory words like well, yes, no, why, etc., begin a sentence, use a comma after them. E.g., 'Why, that's great!' or 'No, I will not go today.' - Commas are very commonly used when the need is to add a break or an interval to expressions that would otherwise interfere with the flow of a sentence. E.g., I can't believe this, although I should not believe this, I will. - You will have to use a comma when you start a sentence with a weak clause. However, you will not need a comma when the sentence starts off with a strong clause and is then followed by a weak one. E.g., If I don't know what to do, I will come to you for help. - A comma is necessary when shifting from the main discourse to a quotation. Put the comma where the explanatory words about the speaker interrupt the quotation. E.g., "Do not jump to conclusions," said he, "before hearing the complete story." - A comma must be used when an -ly adjective is used with another adjective. E.g., The Queen was a lovely, young lady. - A comma has to be used when a phrase comprising more than three words starts off a sentence. However, if the phrase that starts off a sentence consists of three words or fewer, the use of a comma is optional. E.g., To score a century on this ground, you have to play well. - If any particular thing or person is already identified properly, the description that follows is considered superfluous, which is where commas come into play. 
E.g., Velutha, of strong body and weak mind, fell in love with them, the two-egged twins. - A comma can be used when you are looking to separate two strong clauses that are joined by a coordinating conjunction. However, if the clauses are very short, the comma can be omitted. E.g., They are tough people who know what they are doing. - If there is no subject in front of the second verb, do not use a comma. This is as good as a rule of thumb. E.g., There are weak and hungry people all over the world. - Commas can be used to introduce or break the flow in direct quotations, specifically the ones that amount to fewer than three lines. - A comma can be used to separate a statement from a question. E.g., What is this, you say. The usage of the comma is not restricted only to the above-mentioned cases; it spreads way beyond the domain that has been covered so far. The idea behind presenting this write-up is to explain that a comma, as trivial as it may look, isn't a punctuation mark that you'd want to mess with. It is the second most common punctuation mark after the full stop and perhaps the most commonly misunderstood one as well. The trick here would be to not give up, and keep trying to understand the comma till it finally reveals all its secrets to you. It isn't as difficult as you think, and neither is it as insignificant.
|Name: _________________________||Period: ___________________| This test consists of 5 short answer questions, 10 short essay questions, and 1 (of 3) essay topics. Short Answer Questions 1. Misha, a penpal, first writes to Alma when she is how old? 2. Pained that he does not know his son in the same way, what does Leo do? 3. For whom does Alma search the Internet? 4. What does Leo wonder about the book? 5. Leo looks under his pillow. What does he find? Short Essay Questions 1. Why is Emanuel Chaim known as Bird? What else is unique about Emanuel? 2. Who does Leo find in his apartment when he returns from the library? What does this person say to him? 3. What is learned about Zvi at the beginning of this chapter? 4. How does the reader learn most information about Zvi Litvinoff? 5. Why does Leo go into an empty bedroom in Bernard's house? 6. How do Charlotte and Alma deal with David's death? 7. About what does Leo write? 8. What takes place that affects Misha and Alma's friendship? 9. What does Leo do when he sees that his son has died? 10. What has Alma learned about Alma Mereminski? What does she believe is the connection between Mereminski and Litvinoff? Write an essay for ONE of the following topics: Essay Topic 1 David Singer dies when Alma is six. Part 1) How does he die? How does his death affect the family? Part 2) Do the family members grieve in healthy ways? Why or why not? Part 3) How does the grief of each family member affect the others in the family? How does it affect their relationships with others? Essay Topic 2 Zvi adds a last chapter to the book called "The Death of Leopold Gursky." Part 1) From where does this chapter come? Why is this chapter important to Zvi? Part 2) Why does he add this chapter? What does this say about Zvi? Part 3) How does this act reinforce one of the story's themes? Essay Topic 3 Leopold Gursky is one of the narrators. Part 1) Describe him. How does the author give you information about this character throughout the story? 
How does this connect with what is being learned about other characters? Part 2) What type of narrator is Leo? Do you feel that having the story narrated by him is effective? Why or why not? Part 3) How can the reader relate to this character? Why? How can you relate to him? Why? This section contains 962 words (approx. 4 pages at 300 words per page)
Let's start with a definition from Wikipedia: "Global warming is the increase in the average temperature of Earth's near-surface air and oceans since the mid-20th century and its projected continuation." Here's one from the EPA: "Global warming is an average increase in the temperature of the atmosphere near the Earth's surface and in the troposphere, which can contribute to changes in global climate patterns. Global warming can occur from a variety of causes, both natural and human induced. In common usage, "global warming" often refers to the warming that can occur as a result of increased emissions of greenhouse gases from human activities." So ... we're not talking about a cold winter or even a hot summer. We're talking about an overall trend. And that trend shows us that the average temperature, near the earth's surface, is increasing: As you'll notice ... the trend started out relatively flat and then it started to increase ... with the sharpest incline occurring during the last 20-30 years. It has been argued that global warming is the natural progression of a planet's life. And that's true. But one wonders if natural elements would have caused the dramatic spike in temperatures. Most experts say that the accelerated warming is caused by human activity. So ... why do we care? After all, a warming trend ... and a change in climate ... is not necessarily bad for everyone. Some areas will find that warmer temperatures allow them to grow larger and more varied crops. And after this year's seriously cold temperatures, who wouldn't want the comfort of warm weather? But consider this ... a slight warming, over a long period of time, wouldn't be a big deal because all life would adapt. For example, as heat patterns slowly changed, altering the amount of available food, animal life would learn to consume other things or perhaps move to an area where there is adequate food. But that's not what's happening ... 
we're warming up fast, leaving no time to adjust to a new climate. Consequently, species are being lost. As you've heard me say, countless times, I believe that species extinction will have a direct, negative impact on our life. What can we do? I'm happy you asked. Here are a few ideas: - Reduce energy consumption (use energy efficient appliances, change light bulbs to CFLs or LEDs, etc.) - Clean the air filters on your heating/cooling unit - Reduce water consumption - Eat meatless meals and avoid processed foods - Buy local and organic - Use "elbow grease" instead of electric power (use a push mower instead of a power mower ... use a broom or rake instead of a leaf blower) - Keep the car in good working order (check the tires for proper inflation, fix any leaks, etc.) - Reduce emissions while driving by going easy on the brake pedal and gas pedal - Turn off the engine rather than idle (except when idling at a traffic light) - Remove stuff from the trunk of a car to lighten its weight (saves gas) - Carpool or take public transportation - Reduce, Reuse and Recycle As you bundle up to head out into freezing temperatures ... just remember ... global warming is real and we, humans, are the cause of its acceleration. Let's slow it back down. As always ... I would love to hear from you!
All gardens benefit from the yearly addition of organic matter, such as compost. On sandy soils, organic matter improves the soil's ability to hold water and nutrients. On clayish soils, organic matter helps "break up the clay," which reduces soil stickiness and improves water movement. As a rule of thumb, you should spread and cultivate one to two inches of compost into vegetable and flower gardens yearly. Yearly, not one-time, applications of compost make soil highly productive and easy to manage. Seven bushels of compost will cover 100 square feet, one inch deep. Compost can also be used as a mulch. A layer of mulch spread over the soil surface helps control weed-seed germination, conserve water, moderate soil temperature extremes, and reduce compaction caused by heavy rains and sprinkler irrigation. You also can use aged compost as a peat-moss substitute in potting soils and seed-starting mixes. Aged compost can comprise up to one-third of the mix. The primary benefit of compost is soil improvement, not nutrients. Compost adds small amounts of nutrients and substantial organic matter to the soil. If you don't add nitrogen fertilizer to low-fertility soils, the soil may suffer from nitrogen deficiency. Compost is a valuable resource, because it helps maintain soil quality for optimum garden productivity. And home composting is an efficient way to recycle yard wastes. For more information, see the following Colorado State University Extension fact sheet(s). - Choosing a Soil Amendment - Composting Yard Waste - Organic Fertilizers - Perennial gardening - Vegetable garden: Soil Management and Fertilization - Preventing E. coli From Garden to Plate Do you have a question? Try Ask an Expert! Updated Friday, October 31, 2014
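As a rough check of the "seven bushels per 100 square feet" rule above, assuming the standard US bushel of roughly 1.244 cubic feet (the function name is made up for illustration):

```python
# Back-of-the-envelope check of the compost rule of thumb: bushels needed
# to cover a bed to a given depth. Assumes 1 US bushel ~ 1.244 cubic feet.
BUSHEL_CUBIC_FEET = 1.244  # approximate conversion factor

def bushels_needed(area_sq_ft, depth_inches):
    # Convert the spread depth to feet, then volume to bushels.
    volume_cu_ft = area_sq_ft * (depth_inches / 12.0)
    return volume_cu_ft / BUSHEL_CUBIC_FEET

# 100 sq ft at 1 inch deep works out to about 6.7 bushels,
# consistent with the "seven bushels" figure in the fact sheet.
print(round(bushels_needed(100, 1), 1))  # 6.7
```

The same function scales linearly, so a two-inch application over the same bed simply doubles the estimate.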
Evaluating Cognitive Web Accessibility A few important points: - There are many types of cognitive and learning disabilities and an even wider variety of interests and capabilities of users who have these disabilities. - This population is larger than those with all other physical and sensory disabilities combined. - Because needs vary across these disabilities, it's difficult to make definitive recommendations that will universally help all users with cognitive and learning disabilities. Despite this, there is much we can say that is useful. - Beyond research that WebAIM and others have conducted, there is much insight that can be drawn from learning sciences, usability, and other areas of web accessibility. - Improving web accessibility for this audience will improve access for everyone. Principles of Cognitive Web Accessibility Cognitive accessibility can be defined by the following principles: - Improving web accessibility for this audience will improve access for everyone. In many ways, it's hard to define when a page is accessible to users with cognitive disabilities. How simple is simple enough? For the most part, cognitive web accessibility is one of those "you know it when you see it" things. Common sense, holistic evaluation, and user testing should predominantly guide cognitive web accessibility evaluation. The following items attempt to address these principles (though they do not map directly to them) and can serve as a guide for maximizing cognitive web accessibility. These recommendations are based on a combination of internal and existing research, commonly-assumed best practices, and thoughtful speculation. Cognitive Web Accessibility Checklist Assistive Technology Compatibility Users with cognitive or learning disabilities often use screen readers or other assistive technologies to access content through various senses or to modify content to be best perceivable to them. 
Users with other physical or sensory disabilities also have a higher prevalence of cognitive or learning disabilities. Many assistive technology issues are addressed in WCAG 2.0 or Section 508. Assistive technology accessibility includes (but is not limited to): - Ensure that navigation is consistent throughout a site Navigation placement, display, and functionality should not change from page to page. - Similar interface elements and similar interactions should produce predictably similar results - Support increased text sizes The page should remain readable and functional when text is increased 200-300%. - Ensure images are readable and comprehensible when enlarged Content within images, particularly text, should be understandable when the image is scaled 200-300%. Use true text instead of text within images when feasible. - Ensure color alone is not used to convey content If page colors are removed or changed, content should not be lost. - Support the disabling of images and/or styles Ensure that the page remains readable and functional when images are disabled (alternative text will be displayed instead) or when styles are disabled. - Provide content in multiple mediums Video or audio alternatives provide an additional method of perceiving content. A text alternative (captions and/or a transcript) should be provided for video and audio content. Closed captioning, which gives users the option to turn off the captions, is optimal. - Use images to enhance content Images can be used to convey or enhance content. Illustration, diagrams, icons, and animations can convey complex information. - Pair icons or graphics with text to provide contextual cues and help with content comprehension Focus and Structure - Use white space and visual design elements to focus user attention The design of a page (white space, color, images, etc.) should focus the user on what is most important (typically the body content of that page). 
- Avoid distractions Animation, varying or unusual font faces, contrasting color or images, or other distracters that pull attention away from content should be avoided. Complex or "busy" background images can draw attention away from the content. Avoid pop-up windows and blinking or moving elements. - Use stylistic differences to highlight important content, but do so conservatively Use various stylistic elements (italics, bold, color, brief animation, or differently-styled content) to highlight important content. Overuse can result in the loss of differentiation. Do not use italics or bold on long sections of text. Avoid ALL CAPS. - Organize content into well-defined groups or chunks, using headings, lists, and other visual mechanisms Break long pages into shorter sections with appropriate headings (use true and visually significant headings rather than simply big bold text). Very long pages may be divided into multiple, sequenced pages. Unordered, ordered, and definition lists provide a visual structuring and convey semantic meaning (e.g., an unordered list conveys a group of parallel items). Use shorter, multi-step forms for complex interactions, rather than lengthy, all-in-one forms. - Use white space for separation White space is a design term that refers to empty space between elements in a page. It is not necessarily the color white. White space should be used to separate navigation from main body, body text from side elements and footer, main content from supplementary items (floating boxes, for example) and to separate headings, paragraphs, and other body text. - Avoid background sounds Give the user control over playing audio content within the page, or at a minimum, give the user control to stop the background sounds. Readability and Language - Use language that is as simple as is appropriate for the content - Avoid tangential, extraneous, or non-relevant information Stick to the content at hand. - Use correct grammar and spelling Use a spell-checker. 
Write clearly and simply. - Maintain a reading level that is adequate for the audience Readability tests can be performed on the body text (for accuracy, do not include web site navigation, side bar, footer, or other extraneous text elements in the evaluation). Generally, web content should be understandable by those with a lower secondary education, though an elementary reading level may be necessary for some users with certain cognitive or learning disabilities. More complex content may necessitate diligence in implementing other recommendations in this list. - Be careful with colloquialisms, non-literal text, and jargon - Expand abbreviations and acronyms Provide the full meaning in the first instance and use the <abbr> or <acronym> elements. Complex content may necessitate a glossary. - Provide summaries, introductions, or a table of contents for complex or lengthy content - Be succinct Provide the minimum amount of text necessary to convey the content. - Ensure text readability - Line height The amount of space between lines should generally be no less than half the character height. - Line length Very long lines of text (more than around 80 characters per line) are more difficult to read. - Letter spacing, word spacing, and justification Provide appropriate (but not too much) letter and word spacing. Avoid full justified text as it results in variable spacing between words and can result in distracting "rivers of white" - patterns of white spaces that flow downward through body text. - Sans-serif fonts These fonts are generally regarded to be more appealing for body text. - Adequate text size Text should generally be at least 10 pixels in size. - Content appropriate fonts Visually appealing and content-appropriate fonts affect satisfaction, readability, and comprehension. - Paragraph length Keep paragraph length short. 
- Adequate color contrast Ensure text is easily discerned against the background and that links are easily differentiable from surrounding text. - No horizontal scrolling Avoid horizontal scrolling when the text size is increased 200-300%. Orientation and Error Prevention/Recovery - Give users control over time-sensitive content changes Avoid automatic refreshes or redirects. Allow users to control content updates or changes. Avoid unnecessary time-outs or expirations. Allow users to request more time. - Provide adequate instructions and cues for forms Ensure required elements and formatting requirements are identified. Provide associated and descriptive form labels and fieldsets/legends. - Give users clear and accessible form error messages and provide mechanisms for resolving form errors and resubmitting the form - Give feedback on a user's actions Confirm correct choices and alert users to errors or possible errors. - Provide instructions for unfamiliar or complex interfaces - Use breadcrumbs, indicators, or cues to indicate location or progress Allow users to quickly determine where they are in the structure of a web site (e.g., a currently active "tab" or Home > Products > Widget) or within a sequence (Step 2 of 4). Next/Previous options should be provided for sequential tasks. - Allow critical functions to be confirmed and/or canceled/reversed - Provide adequately-sized clickable targets and ensure functional elements appear clickable Use labels for form elements, particularly small checkboxes and radio buttons, and ensure all clickable elements appear clickable and do not require exactness. - Use underline for links only - Provide multiple methods for finding content A logical navigation, search functionality, index, site map, table of contents, links within body text, supplementary or related links section, etc. all provide multiple ways for users to find content. 
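The reading-level recommendation in the checklist above can be automated. Below is a rough sketch of a Flesch-Kincaid grade estimate with a deliberately crude syllable counter; real readability tools are more careful, so treat the number as an estimate only:

```python
# Rough readability check for body text (exclude navigation, sidebars,
# and footers, as the checklist suggests). Uses a naive syllable counter.
import re

def count_syllables(word):
    # Count groups of vowels; drop a trailing silent "e". Crude but workable.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    # Standard Flesch-Kincaid grade-level formula.
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

sample = "Write clearly and simply. Short sentences help every reader."
print(round(flesch_kincaid_grade(sample), 1))
```

A grade around 8 or lower roughly corresponds to the "lower secondary education" target mentioned above.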
Funding for this material provided by the Office of Special Education and Rehabilitative Services Steppingstones of Technology Innovation Grant #H327A070057.
Learning Through Play Playing games is a good way for your child to learn to recognise numbers. For one such game you can use some number cards. Give your child a pile of counters or buttons. Hold up one of the cards and ask him or her to give you that number of counters. At first you can say the number as you show the card, but later just hold it up for the child to look at. He or she can check by putting the counters over the spots on the reverse of the card. Developing the ability to estimate is also a useful skill. Asking a child to guess how many items are on a tray will help to develop this. Always count them out together afterwards, so that the child can see how close he or she was. Recognising the Symbols A fun way to help recognition of numbers is to select a few number cards. Take one from the pile without letting your child see it. Ask him or her to guess which one you have as you gradually expose the number from behind a screen (e.g., a book). If your child guesses wrongly, explain what the number is. Introduce a few numbers at first and build up slowly. Some children will want to start writing numbers themselves as soon as they can recognise them. Adults should make sure that they encourage children to form their numbers correctly, as incorrect number formation can be very difficult to correct later. Encourage your child to look for a number to copy. Print out our number formation guide intended for parent or carer information.
At a close approach of only 12,000 miles away, positioned directly between Ceres and the sun, Dawn was able to snap excellent photos of the Occator Crater. Occator is the brightest spot on Ceres, and in recent years scientists have speculated that the shine from the crater is the result of the presence of sodium carbonate, taken as evidence of volcanic activity. This makes Ceres the closest object to the sun to undergo cryovolcanism, in which a volcano erupts cold volatiles such as water and methane (and sodium carbonate) instead of molten rock like our terrestrial volcanoes. So the theory goes, something smashed into Ceres and created Occator Crater, which also triggered the creation of a cryovolcano. "The bright spots of Occator stand out particularly well on an otherwise relatively bland surface," NASA said in a release that accompanied the video. Dawn launched in 2007 and arrived at the asteroid belt in 2011, with a pair of missions. First, it spent a year studying Vesta, a protoplanet that is the second-largest object in the asteroid belt, with a surface area comparable to Pakistan. It then exited Vesta's orbit and made its way to its second and final object of observation: Ceres. Ceres is the third largest object in the sun's habitable zone, after Mars and Earth. This means that it's possible for the dwarf planet to support liquid water: many astronomers believe that Ceres has a thin water vapor atmosphere, and others have argued that there may be an underground ocean of liquid water beneath its icy surface. In February 2017, Dawn detected organics in one of Ceres's craters, which suggests underground volcanic activity. While it is highly unlikely for even simple life to exist on Ceres, astronomers continue to study the world to get a better idea of how organics can form on water worlds. Moons like Ganymede, Europa, and Enceladus are believed to have similar compositions to Ceres. 
Dawn is the first spacecraft ever to orbit Ceres, and in its two years of observation it has taught us much we didn't know about the smallest of our solar system's recognized dwarf planets. Initially, Dawn was to visit a third target, but NASA decided against it. Instead, the agency will likely decommission the spacecraft late in the year, at which point it will become a permanent satellite of Ceres.
(1512–1594) Dutch cartographer and geographer Mercator, originally named Kremer, was born at Rupelmonde, now in Belgium. At the University of Louvain (1530–32) he was a pupil of Gemma Frisius. After learning the basic skills of an instrument maker and engraver, he founded his own studio in Louvain in 1534. Despite accusations of heresy and imprisonment in 1544, he remained in Louvain until 1552, when he moved to Duisburg and opened a cartographic workshop. Mercator first made his international reputation as a cartographer in 1554 with his map of Europe in which he reduced the size of the Mediterranean from the 62° of Ptolemy to a more realistic, but still excessive, 52°. He produced his world map in 1569 and his edition of Ptolemy in 1578, while his Atlas, begun in 1569, was only published by his son after his death. It was intended to be a whole series of publications describing both the creation of the world and its subsequent history. Mercator was the first to use the term ‘atlas’ for such works, the book having as its frontispiece an illustration of Atlas supporting the world. The value of Mercator's work lies not just in his skills as an engraver, but also in the introduction of his famous projection in his 1569 map of the world. Navigators wished to be able to sail on what was called a rhumb-line course, or a loxodrome, i.e., to sail between two points on a constant bearing, charting their course with a straight line. On the surface of a globe such lines are curves; to project them onto a plane chart Mercator made the meridians (the lines of longitude) parallel instead of converging at the Poles. This made it straightforward for a navigator to plot his course but it also produced the familiar distortion of the Mercator projection – exaggeration of east–west distances and areas in the high latitudes. The big difference, apart from projection, between Mercator's and classical maps was in the representation of the Americas. 
He was not the first to use the name America on a map, that distinction belonging to Martin Waldseemüller in 1507, but he was the first to divide the continent into two named parts – Americae pars septentrionalis (northern part of America) and Americae pars meridionalis (southern part of America).
The water cycle describes the existence and movement of water on Earth. Evaporation and sublimation both occur within the water cycle. Evaporation is when water changes into vapor (liquid -> gas). Sublimation is when ice changes into water vapor, completely skipping the liquid state (solid -> gas). Evapotranspiration is the sum of evaporation from the land surface and transpiration from plants. Five factors affect the transpiration rate: temperature, relative humidity, wind and air movement, soil-moisture availability, and the type of plant. Condensation is when water vapor turns back into liquid water (gas -> liquid). Precipitation is when water falls from the clouds; it takes different forms (rain, snow, sleet, or hail) depending on temperature. Transpiration is when water drawn up through plants evaporates from their leaves. Runoff is when precipitation flows over the land into a body of water.

The Carbon Cycle

The carbon cycle is the circulation of carbon in the biosphere. Carbon moves from the atmosphere into plants through photosynthesis, when plants take in carbon dioxide; animals then acquire that carbon by eating the plants. Carbon moves to the ground when plants and animals die. Every time we exhale, carbon is released into the atmosphere; this is called respiration. Carbon dioxide is a greenhouse gas and keeps heat in the atmosphere. Coal, oil, and limestone store a great deal of carbon.

The Nitrogen Cycle

The nitrogen cycle is the continuous occurrence of events in which atmospheric nitrogen and nitrogenous compounds in the soil are converted, by nitrification and nitrogen fixation, into substances that can be used by plants. Nitrogen is a component of DNA, RNA, and proteins. Atmospheric nitrogen (N2) is unusable by most living organisms because of the strong triple bond between the two nitrogen atoms in the N2 molecule. Nitrate (NO3-), nitrite (NO2-), and ammonium (NH4+) are all involved in the nitrogen cycle. Nitrogen fixation is the process in which N2 is converted to ammonium.
Denitrification is when nitrate and nitrite are converted into dinitrogen (N2) and nitrous oxide (N2O). Nitrification is when ammonium is converted to nitrate. Nitrogen mineralization is when organic nitrogen is converted back into inorganic forms. Nitrogen uptake is when ammonium is converted to organic nitrogen. Humans also have an impact on the nitrogen cycle by burning fossil fuels, using synthetic nitrogen fertilizers, and cultivating legumes.

The Phosphorus Cycle

The phosphorus cycle describes the movement of phosphorus through the lithosphere, hydrosphere, and biosphere. Phosphates are a critical part of life because they make up the backbone that holds DNA and RNA together. The phosphorus cycle differs from the other cycles because it doesn't include a gas phase. The largest store of phosphorus is found in sedimentary rock. Because so much phosphorus is held within rock, rain distributes phosphates into the soil and water; plants take them in, and animals eat those plants, so phosphorus is found within animals, plants, and rocks. Though phosphorus is important, too much of it can act as a pollutant because it stimulates the growth of plankton and plants. Humans contribute to excessive levels of phosphorus by cutting down trees and using fertilizers.
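For readers who want to see the chemistry, the main nitrogen-cycle conversions described above can be sketched as simplified reaction equations. These are textbook overall reactions, not the exact stoichiometry of any particular organism (biological fixation, for instance, consumes ATP, which is omitted here):

```latex
\begin{align*}
\text{Fixation:} \quad & \mathrm{N_2 + 8\,H^+ + 8\,e^- \longrightarrow 2\,NH_3 + H_2} \\
\text{Nitrification (step 1):} \quad & \mathrm{2\,NH_4^+ + 3\,O_2 \longrightarrow 2\,NO_2^- + 4\,H^+ + 2\,H_2O} \\
\text{Nitrification (step 2):} \quad & \mathrm{2\,NO_2^- + O_2 \longrightarrow 2\,NO_3^-} \\
\text{Denitrification:} \quad & \mathrm{NO_3^- \longrightarrow NO_2^- \longrightarrow NO \longrightarrow N_2O \longrightarrow N_2}
\end{align*}
```

Each equation balances atoms and charge, which is a quick way to check such summaries: in nitrification step 1, for example, both sides carry a net charge of +2 and contain 2 N, 8 H, and 6 O.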
Fish Can Recognize Faces Based On UV Pattern

Two species of damselfish may look identical, not to mention drab, to the human eye. But that’s because, in comparison to the fish, all of us are essentially colorblind. A new study published online on February 25th in Current Biology, a Cell Press publication, reveals that the fish can easily tell one species from another based entirely on the shape of the ultraviolet (UV) patterns on their faces. Although scientists have long known that some animals have UV vision, the new findings suggest that this sense can be keener and perhaps more useful as a “communication channel” than had been anticipated, according to the researchers. “Researchers have been assuming for a long time that UV vision is not very good, and that it is only useful for detecting the presence and absence of UV light, or objects in front of UV-bright backgrounds,” said Ulrike Siebeck of the University of Queensland in Australia. “The exciting thing is that we can show that these fish can tell the difference between intricate UV patterns, something that was not expected based on previous assumptions.” In fact, researchers had some good reasons to doubt the precision of UV vision. The short wavelengths of light that characterize UV are prone to scattering in air and water. And even animals that can see in the UV range usually don’t have all that many UV cones, or photoreceptors, in their eyes. But apparently nobody told that to the damselfish. In the new experiments, Siebeck’s team presented the very aggressive fish with two intruders, representing different species that vary in appearance only in their UV patterns. Those initial choice tests showed that the fish always attacked one species over the other. But when the researchers took away the fishes’ ability to see in UV, that preference between species disappeared. The researchers next transferred the two species-specific UV patterns onto otherwise blank pieces of paper.
They trained the fish to swim up to and nudge one of the patterns by offering food rewards. When the fish were later presented with both patterns, they still selected the pattern they had been trained on. Put together, the two lines of evidence support the notion that the UV patterns are both necessary and sufficient for the fish to tell the two species apart. The ability to see in this visual range is likely quite convenient for the fish, Siebeck said. “If you think about it in simple terms, fish have to be inconspicuous if they want to go undetected by their predators and prey, but at the same time, they have to be conspicuous if they want to attract the attention of potential mates, for example. Using UV patterns to do this is a clever way to maximize both at the same time: they are still inconspicuous to predators but very conspicuous to other fish with UV vision.” The researchers say the new findings now call for more detailed investigation of UV vision in damselfish and other UV-sighted animals, to find out just how well animals can see in this range, and over what distances. The researchers are also testing whether fish can tell different individuals, as opposed to whole species, apart based on fine-scale variation in their UV facial patterns. The researchers include Ulrike E. Siebeck, University of Queensland, Brisbane, Australia; Amira N. Parker, University of Queensland, Brisbane, Australia; Dennis Sprenger, University of Tübingen, Tübingen, Germany; Lydia M. Mäthger, Woods Hole Oceanographic Institution, Woods Hole, MA; and Guy Wallis, University of Queensland, Brisbane, Australia.
In 1756, the fighting erupted into a world-wide conflict known as the Seven Years’ War. It was referred to in the colonies as the French and Indian War and thus came to be regarded as the North American theater of that war. It marked the beginning of open hostilities between the colonies and Great Britain. In Canada, it is referred to as the Seven Years’ War, and French Canadians call it La guerre de la Conquête (“The War of Conquest”). The name refers to the two main enemies of the British colonists: the royal French forces and the various Native American forces allied with them, although Great Britain also had Native allies. England and France had been building toward a conflict in America since 1689. During this period the colonies grew remarkably, from a population of 250,000 in 1700 to 1.25 million in 1750. Britain required raw materials, including copper, hemp, tar, and turpentine. It also required a great deal of money, and so it provided that all of these American products be shipped exclusively to England (the Navigation Acts). In an effort to raise revenue and simultaneously interfere with the French in the Caribbean, a six-pence tax on each gallon of molasses was imposed in 1733 (the Molasses Act, later revised by the Sugar Act). Enforcement of these regulations proved difficult, so the English government established extensive customs services and vice-admiralty courts empowered to identify, try, and convict suspected smugglers. These devices were exclusive of, and superior to, the colonial mechanisms of justice. The colonies were wholly interested in overcoming the French in North America and appealed to the King for permission to raise armies and money to defend themselves. Despite determined petitions from the royal governors, George II was suspicious of the intentions of the colonial governments and declined their offer. English officers in America were also widely contemptuous of colonials who volunteered for service.
A few of the men who signed the Declaration had been members of volunteer militia who, as young men, had been dressed down and sent home when they applied for duty. Such an experience was not uncommon. It led communities throughout the colonies to question British authorities who would demand horses, feed, wagons, and quarters – but deny colonials the right to fight in defense of the Empire, a right which they considered central to their self-image as Englishmen.
Outdoor play is an exciting part of the routine in most early learning programs. You can sense a change in the air as playground time approaches. Have you ever stopped to wonder why that is? What is it about the outdoors that inspires such interest and engagement? Is it the sliding board or the tricycles? Perhaps it is the freedom to run and jump and roll around. Or maybe it is the opportunity to discover, observe, and interact with nature. Whatever the reason, children get really excited when it is time to go outside, and where there is excitement, there is the opportunity for learning. While being mindful not to strip away the child-led, free-play nature of outdoor time, there are ways that programs can adopt a nature-based learning approach. According to an article by Childhood by Nature, nature-based learning includes learning about the natural world but extends to engagement in any subject, skill or interest while in natural surroundings. It’s more than bringing items from nature into the classroom, which should be a normal practice; it is about creating engaging learning experiences outdoors. In this month’s newsletter, we will explore how this can be accomplished. For the article Benefits of Nature-Based Learning, CLICK HERE For the article Materials for Nature-Based Learning, CLICK HERE For the article Nature-Based Learning Tips, CLICK HERE For the article Director’s Corner – Supporting Staff as They Enhance Nature-Based Learning Opportunities, CLICK HERE
Microbiome and Cancer

The human body hosts 10-100 trillion microbes (small microorganisms such as bacteria, viruses and fungi), and their grouping or community is referred to as the microbiota. The complete collection of genes of all the microbes in a community is termed the microbiome.

Microbes and the Human Body

Microbes start inhabiting the human body soon after birth. They colonize the areas exposed directly to the environment, such as the respiratory tract (most commonly the nose), the skin (especially areas lined with mucous membrane, such as the groin), the urinary tract, and the digestive tract (mainly the colon and the mouth). Generally, the microbes live in a mutualistic relationship with the human body (host), where both the host and the microbes benefit, or in a commensal relationship, where the microbes benefit but the host is neither benefited nor harmed. However, this relationship can also become negative, with the microbes benefiting at the expense of the host. Research suggests that when these microbes are disturbed in their natural environment (for example, by excessive use of antibiotics, by changes in the diet, or by the invasion of new pathogens), this positive mutualistic relationship becomes negative.

Microbiome and Colorectal Cancer (Lower Gastrointestinal Cancer)

Colorectal cancer (CRC) is the third most common form of cancer. New technologies such as metagenome sequencing have made it possible to understand the relationship between the microbiome and colorectal cancer. Microbes inhabit different sites in the gut, including the ascending colon, distal colon, proximal ileum, and jejunum, and they play a crucial role in maintaining its healthy functioning. This natural flora helps in food digestion, vitamin biosynthesis, and protection from pathogens. Dysbiosis is the condition in which these gut microbes become imbalanced. Studies have indicated that an imbalance in the community of gut microbes is associated with CRC development.
Also, there is scope for managing gut bacteria in ways that may help in treating CRC. Some bacteria and their roles are described below:
- CRC growth is stimulated by a bacterium known as Fusobacterium nucleatum. This bacterium is responsible for either activating the Wnt signaling pathway or lowering CD3+ T cell-mediated immunity, which leads to the growth and development of tumor cells.
- Escherichia coli (E. coli) is another microbe found in the intestinal microbiota. Research suggests that pathologic strains of E. coli play a critical role in triggering CRC. E. coli can induce inflammation and has been found to release certain chemicals, such as cytolethal distending toxin (CDT) and cytotoxic necrotizing factor (CNF), which can induce carcinogenesis.
- Bacteroides fragilis (B. fragilis) has two major forms – nontoxigenic B. fragilis (NTBF) and enterotoxigenic B. fragilis (ETBF). ETBF is responsible for causing CRC. ETBF infection increases the levels of T-helper 17 (Th17) and T regulatory (Treg) cells, which promote tumor growth and development.
- Bifidobacterium has been found to be protective against CRC. It reduces beta-glucuronidase activity, and in CRC its levels are found to be significantly reduced.
- Lactobacillus has also been found to be beneficial in reducing CRC. It produces lactic acid, activates toll-like receptors, and reduces inflammation.

Breast Cancer and Endometrial Cancer

Researchers have found that the bacteria in the breasts of women with breast cancer differ from those in women without the disease. In the case of endometrial cancer, similar results were observed: the microbes found in the vaginal environment of women with endometrial cancer differed from those of healthy women.

Microbiome and Upper Gastrointestinal Cancer

Studies performed by the International Agency for Research on Cancer have indicated that Helicobacter pylori is carcinogenic.
Certain risk factors, such as tobacco use, high body mass index, and altered pepsinogen levels, are associated with disturbing the microbial balance and thereby increasing the risk of gastric cancer.

Role of the Human Microbiome in Immunotherapy

Immunotherapy is becoming a modern tool in treating cancer: the body's natural immunity is strengthened to combat the disease, and the microbiome is emerging as a promising aid to this approach. The type and variety of bacteria in the gut can have a major effect on the results of immunotherapy. Studies indicate that individuals with a high proportion of beneficial bacteria were more likely to respond to these drugs than those with fewer beneficial and more harmful bacteria. Research has also indicated that manipulating the microbes of patients who are less responsive to immunotherapy has shown benefits: once their microflora is regulated and replaced with healthy microbiota, the immune system becomes more responsive in recognizing tumors. A diverse microflora can also modify the overall response to immunotherapy drugs.

Last Updated: Feb 27, 2019
Akshima is a registered dentist and medical writer from Dharamshala, India.
Rocks and Minerals Activities for Upper Elementary

Rocks and minerals activities in science can get messy, but they provide valuable learning experiences for students. When you pair your students’ experiment knowledge with ELA, you create a powerful duo of science and comprehension. Today, I’m sharing 3 ways you can integrate rocks and minerals activities into your upper elementary classroom.

1. Rocks and Minerals Reading Passages

Integrate ELA with your rocks and minerals activities using passages that will teach students to read with a purpose and answer questions that help with comprehension. How do you know they are comprehending the passage, especially with all the scientific lingo? You can assess them using Google Forms or a paper assessment. As students read these rocks and minerals passages, you can go back during ELA time and teach them about main idea and details. Students can use their passages and colored pencils to highlight the main idea in one color and the details in a second color. Do the first passage as a class and, as time goes on, allow students to complete the passages by themselves. Another great way to use these passages is to have students underline where they found the answer to each question. So many times, students are inclined to rush through reading the passages and answering the questions. Having students slow down, underline where they found the answer in the text, and number it based on the question number (put a #1 next to the underlined answer for number 1) allows them to show evidence of their thinking. Also, this is great test prep!

2. Demonstrate Understanding Through Writing

Bringing in writing helps students express their knowledge creatively. For example, I use a graphic organizer and prompt to have students write to a geologist and share what they’ve learned about how different rock types are formed. This activity is a great way to finish a rocks and minerals unit and assess students’ understanding of the content.
You can also teach the writing process and have students write a polished piece that they can use for a portfolio or a parent-teacher conference.

3. Digital Activities

Follow up your rocks and minerals reading passages with an interactive Google Classroom activity. Students have a chance to show what they know, and you don’t have to make any copies! Score! Using Google Classroom activities also allows students to see the activities in full color. For example, determining the kind of rock from a black-and-white copy may be difficult for some students. These full-color interactive slides are a great way to engage students. Now that you know the 3 ways to integrate rocks and minerals activities into your ELA time, let me help you get started! Download this rocks and minerals activities bundle and you will have everything you need to get going! Before you go, I have to let you in on a little secret: when you buy a bundle on TpT you get 20% off the resource. Crazy, right?! Who doesn’t love a good deal? And if you already have one of the products in the bundle, TpT will refund you the cost of the first product. WOW! So go download your rocks and minerals bundle now!
Just the other day I had a super important conversation with a concerned parent. Their child had recently been screened for dyslexia at school, and the teacher was recommending moving forward with more testing and possible intervention. The parent was wondering why a wait-and-see approach couldn’t be taken, given that the student was only just entering the 2nd grade. It is true that with some skill areas watching and observing is an appropriate approach, but this is not the case with delays in literacy development or if dyslexia is suspected. Research has consistently shown that early intervention can make a world of difference in a child's ability to overcome dyslexia and succeed academically and emotionally. In this article, we will explore why early intervention is so important and how it can positively impact your child's future.

1. The Window of Opportunity: Imagine a window of opportunity that opens during your child's early years, allowing them to build the fundamental skills necessary for reading and writing. As reported by the American Academy of Pediatrics, this window is real, and it is during these early stages that the brain is most adaptable and responsive to interventions. The earlier we provide support and evidence-based instruction, the better equipped your child will be to develop strong reading skills.

2. Academic Success Starts Early: Researchers, as cited in the Journal of Pediatrics, found that the achievement gap between dyslexic readers and their peers is evident as early as first grade. This means that early intervention is crucial to preventing your child from falling behind in school. The National Institutes of Health state that "95% of poor readers can be brought up to grade level if they receive effective help early." Early intervention is key to narrowing or even closing the achievement gap between children with dyslexia and their typically developing peers.
Academic success often begins with reading proficiency, and early intervention gives your child the tools they need to keep up with their classmates.

3. Lasting Impact: One of the most significant advantages of early intervention is its long-lasting impact. Programs targeting dyslexia in first or second grade have been shown to result in greater gains in basic reading skills. Moreover, these improvements continue well after the intervention program ends. Early intervention sets your child on a path to ongoing success in their reading and language development.

4. Emotional Well-Being: Difficulty with reading and writing can be frustrating and stressful for children. The struggles they face can lead to secondary problems such as behavioral issues, anxiety, and even depression. Early intervention tackles the root cause of these challenges, reducing the likelihood of emotional difficulties and helping to keep your child's self-esteem intact.

5. Building a Strong Support System: Early intervention is a collaborative effort involving educators, parents, and specialists. Identifying and addressing dyslexia early ensures your child receives the necessary assistance and accommodations throughout their educational journey. This strong support system can make all the difference in your child's progress.

In summary, early intervention for dyslexia is not just important; it is critical for your child's academic and emotional well-being. The research is clear: the earlier we provide support, the better the outcome. By addressing dyslexia early, you give your child the best possible foundation for a bright and successful future. Don't wait; take action today to ensure your child gets the help they need to thrive in school and beyond.
Lesson Plan - Get It! Haha... There are actually much longer words in the dictionary than rubber band, like supercalifragilisticexpialidocious. It's a good thing we have dictionaries to help us spell words like that! It is important to know how to spell words correctly so others can read and understand your writing. Learn how to use a dictionary and other reference materials to help you look up how to spell words.

A dictionary:
- → is a resource (can be a book or online)
- → has a lot of words in a language
- → tells you what those words mean
- → helps you spell those words
- → tells you how to pronounce (or say!) those words
- → gives you different versions of those words
- → tells you more information about those words

This lesson will cover how a dictionary helps you spell words when writing. You can also use a dictionary to fix the spelling if you spelled a word wrong in your writing.
- How do you use a dictionary that is a book? First, it is important to know that the words in a dictionary are in alphabetical order. That means words that start with the letter A are first. Then, words that start with B are next, and so on, all the way to Z. All the words in a dictionary are in alphabetical order based on their first letters, but many words start with each letter. So, you have to look at the second letters and put those in alphabetical order too. Look at aardvark and ape as an example. Both start with an A, so they will be at the beginning of the dictionary. To see which comes first, look at the second letter. The a in aardvark comes before the p in ape, so aardvark will come before ape. Now, you must guess how to spell the first few letters of the word you want to look up. Turn to the pages with the words that start with those letters in the dictionary. You will be able to find the word spelled correctly as well as the definition and other information about the word.
- How do you use a dictionary online?
There are several online dictionaries you can use, like this Student Dictionary for Kids. Guess how to spell the word and type it into the search bar. The dictionary will give you options. Select the word you want in order to see its proper spelling as well as its definition and other information.
- What other reference materials besides a dictionary could help with spelling? If the word you are trying to spell is written in a book nearby, you can look inside it to see how it is spelled. Authors always spell the words correctly! You could also look inside a thesaurus, which is set up exactly like a dictionary. However, instead of definitions, a thesaurus lists other words that mean the same thing as the word you are looking up. For example, if you look up the word new in a thesaurus, you will see similar words like novel, original, or fresh.
- Is this making sense so far? Great! Move on to the Got It? section to practice it together!
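For grown-ups who are curious, the letter-by-letter ordering rule taught above is exactly how a computer sorts and looks up words too. Here is a small Python sketch; the word list and the helper name `look_up` are invented for illustration:

```python
import bisect

# Entries must be sorted, just like a printed dictionary.
words = sorted(["ape", "aardvark", "zebra", "banana", "apple"])

def look_up(word):
    """Return True if `word` is in the sorted list, found by the same
    compare-letters-from-the-left rule a person uses."""
    i = bisect.bisect_left(words, word)  # binary search, like flipping pages
    return i < len(words) and words[i] == word

# Python compares strings letter by letter, so "aardvark" sorts before
# "ape": the first letters tie, and the second 'a' comes before 'p'.
print("aardvark" < "ape")   # True
print(words)                # ['aardvark', 'ape', 'apple', 'banana', 'zebra']
print(look_up("ape"))       # True
print(look_up("aple"))      # False: a misspelling simply isn't found
```

Note that the failed lookup mirrors the lesson's advice: if your guessed spelling isn't on the page where it should be, the spelling is probably wrong.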
What is anxiety and when does it become a disorder?

Anxiety is a normal response that occurs in anticipation of harm or threat. Anxiety becomes a disorder when it is associated with excessive fear and distress that leave the person unable to function well. There are different anxiety disorders, and this booklet will briefly cover the most common ones. These are:
- Panic Disorder
- Phobias
- Social Anxiety Disorder
- Generalised Anxiety Disorder
- Post-Traumatic Stress Disorder (PTSD)

At the end of the booklet we will also cover the basic management of these disorders.

Panic disorder

Panic disorder is diagnosed when a person has repeated panic attacks. These panic attacks occur suddenly and are unpredictable. The attacks are then followed by a month or more of persistent worry about additional attacks, resulting in withdrawal from and avoidance of certain situations.

Symptoms of a panic attack
- Shortness of breath
- Chest pain or discomfort
- Heart palpitations
- Choking and difficulty swallowing
- Abdominal discomfort
- Trembling and shaking
- Flushes or chills
- Intense fear

Phobias

A phobia is an intense and irrational fear of something. Phobias are common mental disorders. Agoraphobia is the fear that being in certain public spaces or situations could cause panic, embarrassment or humiliation. Someone with agoraphobia often feels that these places or situations are difficult or impossible to escape from. As a result, they will often become withdrawn and isolated and refrain from normal activities. Such public places include:
- Public transport
- Open spaces (eg parking lots)
- Enclosed spaces (eg shops or cinemas)
- Standing in line or in a crowd
- Being outside of the house alone

This then limits the person's functioning, as they cannot be anywhere by themselves without panic.

Social anxiety disorder or social phobia

Social anxiety is an intense fear of embarrassment, humiliation or judgement in social situations.
This can include social interactions like having a conversation, performing in front of others, or even situations where you feel you are being observed, such as when eating or drinking. This fear can trigger a panic attack in which the person starts to blush or tremble. In children, the anxiety can be expressed by crying, tantrums and freezing. People with social anxiety disorder then avoid social settings out of this intense fear of humiliation.

Generalised anxiety disorder

Generalised anxiety disorder is excessive anxiety and worry on most days of the week for at least six months. A person with generalised anxiety disorder worries excessively about a wide range of everyday activities and events. The anxiety and worry can cause:
- Restlessness or being on edge
- Being easily tired
- Problems concentrating
- Being irritable
- Tense muscles
- Sleep problems

Post-traumatic stress disorder (PTSD)

PTSD can occur after a person is exposed to death, injury or violence by:
- Directly experiencing the event
- Witnessing the event as it occurs to others
- Learning of a traumatic event happening to a close friend
- Repeated exposure to details of the traumatic event

Memories and dreams of these events can result in depressive symptoms, substance abuse and panic attacks.

Management of Anxiety Disorders

The different anxiety disorders have differing management. The two most effective forms are a type of therapy called cognitive behavioural therapy and medication.

Cognitive behavioural therapy (CBT)

CBT helps a person with an anxiety disorder face their fears and anxieties by teaching practical methods, so that the person can cope better with anxiety-inducing situations and develop the confidence to face them.

Medication
- Antidepressants, e.g. fluoxetine, citalopram: work to alleviate anxiety and improve mood.
- Benzodiazepines, e.g. diazepam (Valium), lorazepam: reduce anxiety and stress by calming you.
- Drugs called beta blockers can also be used to reduce nervous tension, sweating, panic and shakes.
Here, you will find summaries, questions, answers, textbook solutions, pdf, extras etc. of (Nagaland Board) NBSE Class 11 Political Science Chapter 16: Federalism. These solutions, however, should only be treated as references and can be modified or changed. India’s federal constitution is a complex and dynamic system that balances power between the central government and the states. The constitution, originally consisting of 395 articles and eight schedules, has evolved over time to include 22 parts and 12 schedules. It is neither entirely flexible nor rigid, with different parts requiring varying levels of majority for amendments. The Supreme Court and High Courts serve as the final interpreters of the constitution, ensuring that actions of the Union or State governments align with constitutional provisions. The constitution divides legislative powers among the Union List, State List, and Concurrent List. The Union List covers subjects like defense and foreign affairs, while the State List includes items of local interest such as public health and agriculture. The Concurrent List contains items like criminal law and education, where both Parliament and State Legislatures have the power to make laws. Despite being a federation, India’s constitution exhibits a pronounced unitary bias, leading some observers to label it a “quasi-federation”. The Parliament has extensive legislative powers, even over State subjects, if deemed necessary in the national interest. However, the relationship between the Centre and the States is not without its tensions. Issues such as financial relations, the role of Governors, the imposition of President’s Rule, and demands for new states have often led to conflicts. For instance, the dismissal of state governments under Article 356 has been a contentious issue. The formation of new states based on linguistic and cultural lines has also been a source of tension.
The creation of Andhra Pradesh led to similar demands from other regions, resulting in the formation of states like Maharashtra, Gujarat, Nagaland, and more recently, Telangana.

Textual questions and answers

A. Long answer questions

1. Explain the features of a Federal Government with reference to the following:

(a) Division of Powers between the National Government and State Governments
Answer: The first important feature is the division of state powers between a federal (national) government and the governments of federating units (state governments). The subjects of national importance, such as defence, foreign affairs, currency and coinage, are placed under the control of the national government, while those that are of local or regional importance, such as police, land or public order, are placed under the control of the governments of federating units.

(b) Two Sets of Identities
Answer: A Federation combines the advantages of ‘national unity’ with those of ‘local identity’, i.e. the right of self-government. In a Federation, sovereignty is located neither in the Central government nor in the governments of the constituent States. All of these have limited powers. Division of powers in a federation is the division of governmental authority and not of sovereignty. In fact, sovereignty resides in the Federation or the Union itself.

(c) A Written Constitution
Answer: Another feature of a federal government is that it has a written, rigid and supreme Constitution. According to Dicey, the Constitution of a Federation has to be a written one. The Constitution is in the nature of an agreement, which not only enacts the distribution of powers, but also lays down the conditions and processes through which these powers have to be exercised. It is also a rigid Constitution because no amendment can be effected unless both the National and State governments consent to the change through a well-defined procedure. It is the Constitution which is supreme.
The central and state legislatures exercise their powers within the limits set by the Constitution.

(d) An Independent Judiciary to decide Inter-State Disputes
Answer: Since a federal government is based on the principle of division of powers, conflicts of jurisdiction are bound to arise between the National government and State governments. Therefore, in every federation there is some institution that has the power to decide disputes between the Central Government and any State or between two States. In most federations such powers are vested in an independent Supreme Court, which acts as the Guardian of the Constitution.

2. Discuss the Federal features of the Constitution of India.

Answer: The federal features of the Constitution of India are as follows:

Two Sets of Government: A Federal Constitution is marked by the coexistence of two governments, with limited and co-ordinate authority. Article 1 of the Constitution declares that “India, that is Bharat, shall be a Union of States.” The word ‘Union’ implied that the component units had no right to secede from the Indian Union. Today India consists of 29 States and 7 Union Territories.

Division of Powers: All subjects of legislation and administration have been classified into three lists: the Union List, the State List and the Concurrent List. Whereas subjects of national importance, such as defence, atomic energy, railways and currency, have been placed under the Union List, subjects of local importance like police, agriculture and local government form a part of the State List. The Concurrent List contains matters (marriage, education, etc.) on which both the Parliament and the State Legislatures have power to make laws. If a dispute arises between the Union and the States, or for that matter between the States themselves, it has to be settled by the Supreme Court.

Written and Rigid Constitution: Another characteristic of a federal constitution is that it is written as well as rigid.
Our Constitution originally consisted of 395 Articles and eight Schedules. It now has 395 Articles arranged in twenty-two Parts and 12 Schedules. The Indian Constitution is neither very flexible nor very rigid. Certain parts of the Constitution can be amended by a simple majority in Parliament, others by a two-thirds majority. Amendments to a third category of provisions have to be approved by at least one-half of the States, after having been passed by the Parliament.

The Supreme Court acts as Final Interpreter of the Constitution: In every federation there is some institution with powers to decide disputes arising out of conflicts of jurisdiction. The Supreme Court and the High Courts in India have the power to interpret the Constitution. They can nullify any action of the Union or State governments in case it violates the provisions of the Constitution.

3. Examine those provisions of the Indian Constitution that make the Central Government very powerful. (Or) “The Indian Constitution is Federal in form, but Unitary in spirit.” Comment.

Answer: The provisions of the Indian Constitution that make the Central Government very powerful are as follows:

Parliament has a very Wide Scope of Legislation: India’s Parliament has the power of legislation over State subjects also. This is possible if the Rajya Sabha declares by a resolution (supported by two-thirds of the Members present and voting) that it is necessary in the national interest that Parliament should make laws with respect to any matter enumerated in the State List.

Parliament can alter the Boundaries of States in order to form a New State: The Parliament can increase or diminish the area of any State. It can also alter the boundaries or change the names of the States.

Overwhelming Financial Powers of the Union: The Union Government also has the freedom and authority to decide how much Grant or Loan has to be given to a State in a particular situation.
Role of the Governor: The executive head of the State is the Governor, who is appointed by the President and holds office during the pleasure of the President. The Governor is required to reserve certain specified bills for the Assent of the President.

In an Emergency the Federal Structure may be converted into a Unitary One: Once the President issues a Proclamation of Emergency, the Union Parliament can legislate on any subject enumerated in the State List.

President’s Rule: In case of a break-down of the Constitutional machinery of a State, the President may assume to himself all or any of the functions of the Government of the State.

Centre’s Control over All-India Services: Even during normal circumstances the Union Government may assert much control over State Administration by means of the All-India Services, i.e. IAS and IPS officers.

4. Discuss the causes of tension and conflict in Centre-State relations with reference to the following:

(a) Demands for Autonomy
Answer: Demand for Autonomy means that the States should be free to make their own decisions, rather than being politically and financially controlled by the Union Government. The Tamil Nadu Government appointed a Committee (the Rajamannar Committee) to look into the question of Centre-State relations. It recommended that major alterations should be made in the Constitution’s Seventh Schedule, which comprises the Union List, the State List and the Concurrent List. Another major recommendation was that Article 356 (which provides for President’s Rule in States) should be examined in order to decide what changes are needed in it.

(b) Financial Relations
Answer: The Union Government “is financially stabler and stronger than the State governments.” So long as the States do not get their due share of the ‘fiscal cake’, there will be tensions in Centre-State relations. The Chief Ministers are of the view that while the responsibilities of the States have been increasing, their resources have remained stagnant.
States were required to follow the policies laid down earlier by the Planning Commission and nowadays by the NITI Aayog. They cannot do so without obtaining sufficient funds from the Union.

(c) Office of the Governor
Answer: The Constitution-makers wanted the Governor to become a coordinating factor between the States and the Centre. Gradually, however, clashes between the Chief Minister and the Governor became quite common. On February 21, 1998, the Governor of Uttar Pradesh dismissed the Kalyan Singh Government on the eve of the Lok Sabha elections. The Allahabad High Court nullified the Governor’s decision. The Goa Governor had dismissed the BJP government early in 2005, despite a Confidence Vote in the Chief Minister’s favour. It is but natural that a partisan Governor would be unacceptable to most Chief Ministers. The Governor should be an eminent personality, not connected with the local politics of the State.

5. Examine the main causes of conflict and tension in Centre-State relations on account of the following factors:

(a) Provision of President’s Rule in States
Answer: Kerala was the first victim of an improper use of Article 356, which empowers the Centre to dismiss a State government. Since then this power has been used about 100 times, though the Founding Fathers of the Constitution hoped that it would remain a “dead letter”. On 7 October 2005 the Supreme Court held the May 23 Presidential Proclamation dissolving the Bihar State Assembly to be “unconstitutional”. Thus, Article 356 was “conditional and not absolute.” It was open to the Supreme Court or the High Courts to examine the facts on which the Presidential Proclamation was based.

(b) Demands for New States
Answer: The demand for new States has also been a cause of tension in Centre-State relations. Soon after the First General Election in 1952, the Andhra Pradesh Congress Committee passed a resolution asking for a separate State for the Telugu-speaking people of Madras (Tamil Nadu).
The Union Government did not concede the demand in the first instance, but after the death of Sriramulu, who had gone on a fast for the fulfilment of this objective, the State of Andhra Pradesh came into existence on 1 October 1953. The creation of a separate State of Andhra Pradesh led to similar demands from other quarters. In 1960 the State of Bombay was bifurcated into Maharashtra and Gujarat. In 1963 Nagaland formally became a State of the Indian Union, and in 1972 several separate States, such as Manipur and Meghalaya, were carved out of the large State of Assam.

B. Short answer questions

6. Mention any five recommendations that were made by the Sarkaria Commission to strengthen federal ties for better cooperation between the Union and States.

Answer: The Sarkaria Commission made the following five recommendations to strengthen federal ties for better cooperation between the Union and States:

- The Governor should be an eminent person and should not be too intimately connected with the local politics of the State.
- The Chief Minister should be consulted before appointing a particular person as Governor of that State.
- The power to impose President’s Rule in a State should be used very sparingly.
- Before deploying paramilitary forces into disturbed regions in a State, the concerned Chief Minister should be consulted.
- According to Article 258, the President may entrust to State Governments such powers and functions as belong to the Union Government. The President should generously give States more powers and responsibilities.

7. What are the Special Provisions for the State of Jammu and Kashmir in the Indian Constitution?

Answer: [Note: the answer is outdated now because of recent changes in Jammu and Kashmir] The Constitution of India contains some Special Provisions with respect to the State of Jammu and Kashmir:

- Jammu and Kashmir is the only State in the country having a Constitution of its own and a separate Flag.
- When the President of India proclaims an Emergency under Article 352 due to external aggression or armed rebellion, it automatically becomes applicable to the whole of India, except Jammu and Kashmir. It shall not be applicable to Jammu and Kashmir without the concurrence of the State.
- The President cannot proclaim a Financial Emergency in the State of Jammu and Kashmir.
- The power of Parliament to make laws for Jammu and Kashmir shall be limited to those matters which have been specified in the Instrument of Accession. The Parliament may make laws on other matters also, but that can be done only with the concurrence of the State government.
- The Union Parliament cannot repeal Article 370 without the concurrence of the State Legislative Assembly.

8. What are the Special Provisions for the Bodos in Assam and the hill tribes in Darjeeling and the neighbouring regions?

Answer: The Special Provisions for the Bodos in Assam and the hill tribes in Darjeeling and the neighbouring regions are:

Bodoland Territorial Council: The Bodos in Assam had been agitating for the establishment of a separate Bodo State. Both the Union Government and the Assam Government rejected the demand for a separate Bodo State. On 10th February 2003, the Union Government, the Assam Government and the Bodo Liberation Tigers signed a Memorandum of Settlement. They reached a joint decision on the creation of a Bodoland Territorial Council, which is expected to fulfil the economic, educational and linguistic aspirations of the Bodos. In 2003 the Bodo language was included in the Eighth Schedule of the Constitution.

District Councils for Dimasas and Karbis: The autonomy demands of the Dimasas and Karbis were satisfied by the formation of District Councils to look after the social and economic well-being of these communities.
Gorkhaland Territorial Administration: The Gorkha National Liberation Front had waged a fierce agitation under Subhas Ghising’s leadership with the aim of having a separate State carved out of Darjeeling and the neighbouring regions of West Bengal. In 1988 the GNLF accepted the government’s proposal to form a Hill Development Council, with the condition that the proposed Council should include the word ‘Gorkha’ in its nomenclature. In 2007 the demand for Gorkhaland as a separate State was voiced again, this time under the direction of the Gorkha Janmukti Morcha (GJM). In 2011 the Darjeeling Gorkha Hill Council was replaced by the Gorkhaland Territorial Administration (GTA), which was vested with significant powers for developing the hill areas.

9. What are the Special Provisions with respect to the State of Nagaland?

Answer: The Special Provisions with respect to the State of Nagaland are:

Section (a) of this Article provides that no Act of Parliament in respect of:
(i) religious or social practices of the Nagas
(ii) Naga customary law and procedure
(iii) administration of civil and criminal justice involving decisions according to Naga customary law
(iv) ownership and transfer of land and its resources,
shall apply to the State of Nagaland unless the Legislative Assembly of Nagaland by a resolution so decides.

Section (b) of the Article provides that the Governor shall have special responsibility with respect to law and order so long as internal disturbances in the Naga Hills Tuensang Area continue. The Governor shall, after consulting the Council of Ministers, exercise his individual judgement as to the action to be taken.

C. Very short answer questions

10. Mention any two subjects included in each of the following Lists:

(a) Union List
Answer: The Union List includes subjects such as the defence of India and atomic energy.

(b) State List
Answer: The State List consists of items which are mainly of local interest, such as public order and the police.
(c) Concurrent List
Answer: The Concurrent List consists of items such as criminal law, marriage and divorce.

11. Should common language, namely Hindi, be the sole basis for formation of a State? Give reasons for your answer.

Answer: A common language, such as Hindi, should not be the sole basis for the formation of a State because of the following reasons:

Diversity: India is a country known for its diversity. There are numerous languages spoken across different regions. Making a single language the sole basis for state formation could undermine this rich linguistic diversity.

Representation: Each state in India has its unique cultural, historical, and socio-economic context, which is often closely tied to its regional language. Forming a state solely based on a common language might not adequately represent these unique contexts.

Potential for Conflict: Making a single language the basis for state formation could potentially lead to conflicts and tensions among different linguistic communities. It could also marginalize those who do not speak the dominant language.

Administrative and Economic Factors: The formation of a state should also consider administrative convenience and economic viability. These factors are crucial for the effective functioning of a state and the well-being of its residents.

D. Multiple Choice Questions: Tick (✔) the correct answer.

12. Which of the following is not a Federation?
Answer: (c) France

13. Which of the following factors should be taken into account as regards the formation of a new State under the Indian Union?
Answer: (a) A common language of that region

14. Which among the following is the only State in the country having a Constitution of its own and a separate Flag?
Answer: (b) The Jammu and Kashmir State

15. Which Article of the Constitution makes special provision with respect to the State of Nagaland?
Answer: (c) Article 371-A

Additional/extra questions and answers

1.
What is the principal characteristic of a Unitary Government?
Answer: The principal characteristic of a unitary government is the concentration of powers in the national government.

2. How can a Federal government be defined according to Carl J. Friedrich?
Answer: According to Carl J. Friedrich, a Federation can be defined “as a political organisation in which the activities of government are divided between regional governments and a central government in such a way that each kind of government has some activities on which it makes final decision.”

3. List the key features of a Federal Government.
Answer:
- Division of Powers, i.e. Two Sets of Polities
- Two Sets of Identities
- A Written and a Rigid Constitution
- An Independent Judiciary

4. What is the significance of a ‘written and rigid’ Constitution in a Federal Government?
Answer: A written and rigid Constitution in a federal government is significant as it serves as an agreement that enacts the distribution of powers and also lays down the conditions and processes through which these powers have to be exercised. It is rigid because no amendment can be effected unless both the National and State governments consent to the change through a well-defined procedure. It underscores the supremacy of the Constitution, within whose limits the central and state legislatures exercise their powers.

5. What does the division of powers mean in a federal government?
Answer: In a federal government, division of powers means that the state powers are divided between a national (or federal) government and the governments of federating units. Subjects of national importance, such as defence, foreign affairs, currency and coinage, are controlled by the national government, whereas subjects of local or regional importance, such as police, land, or public order, are controlled by the governments of federating units.

44. Discuss the issues and challenges related to the special provisions for the State of Jammu and Kashmir under Article 370.
Answer: Article 370, providing special provisions to the state of Jammu and Kashmir, has been subject to severe criticism. It has been seen as creating an impression of Jammu and Kashmir being separate from the rest of India. At present there is considerable unrest in the state, with many leaders advocating for autonomy or even complete independence (‘Azadi’) for the state. External influences, such as the Pakistani military’s alleged support for terrorists in the valley and recent interest shown by groups like the Islamic State and al-Qaeda, have further complicated the issue. Despite these challenges, the Union Government has shown its capacity for taking bold initiatives in recent months. The need of the hour is for peace-loving individuals to resist the manipulative tactics of those exploiting mass sentiments around autonomy, and for the state’s leaders to prioritise peace and order.

1. What type of government vests all governmental powers in the national government?
A. Federal Government
B. Unitary Government
C. Confederation
D. Parliamentary Democracy
Answer: B. Unitary Government

2. Which country is known for its tradition of unitary rule?
A. India
B. Switzerland
C. USA
D. France
Answer: D. France

3. What is a Federation according to Carl J. Friedrich?
A. A political organization with concentrated powers
B. A political organization with a single identity
C. A political organization dividing government activities
D. A political organization with a flexible constitution
Answer: C. A political organization dividing government activities

4. Which country is not an example of a federal government as of now?
A. Switzerland
B. USA
C. India
D. United Kingdom
Answer: D. United Kingdom

5. What is the first key feature of a Federal Government?
A. Two sets of identities
B. An independent judiciary
C. A written and a rigid constitution
D. Division of powers
Answer: D. Division of powers

70.
Which body of the State of Nagaland can decide if an Act of Parliament should apply to the State?
A. The High Court of Nagaland
B. The Naga Hoho
C. The Legislative Assembly of Nagaland
D. The Governor of Nagaland
Answer: C. The Legislative Assembly of Nagaland
Introduction: The skin, our body's largest organ, is a remarkable tapestry of cells, tissues, and structures working harmoniously to protect, regulate, and communicate. In this journey into the intricate world of skin physiology, we will delve deep into the building blocks, functions, and needs of the skin that contribute to its resilience and vitality. Understanding the Architecture: The skin comprises three main layers: the epidermis, dermis, and subcutaneous tissue. The outermost layer, the epidermis, acts as a protective shield against environmental elements. It constantly renews itself, with cells moving from the basal layer to the surface, shedding dead skin cells in the process. Beneath the epidermis lies the dermis, home to collagen, elastin, and various structures that provide strength and elasticity. The subcutaneous tissue, the deepest layer, offers insulation and houses fat cells. Cellular Harmony: At the cellular level, the skin is a bustling community of different cell types. Keratinocytes, the predominant cells in the epidermis, produce keratin—a protein that forms the skin's protective barrier. Melanocytes, responsible for skin pigmentation, dictate our skin tone. Langerhans cells play a crucial role in the immune response, defending against external threats. Fibroblasts in the dermis synthesize collagen and elastin, ensuring skin strength and flexibility. The Dynamic Role of Blood Vessels and Nerves: A complex network of blood vessels traverses the skin, delivering oxygen and nutrients while removing waste products. Nerves facilitate sensation, transmitting signals of touch, temperature, and pain. This intricate communication system helps us respond to our environment and protect ourselves from potential harm. Skin's Protective Barrier: The epidermis acts as a formidable barrier, preventing pathogens, UV radiation, and environmental pollutants from entering the body. 
Lipids, produced by keratinocytes, form the lipid barrier, crucial for retaining moisture and preventing dehydration. Maintaining Homeostasis: The skin plays a pivotal role in maintaining internal balance, or homeostasis. Sweat glands release perspiration, regulating body temperature. Sebaceous glands produce sebum, an oily substance that moisturizes and protects the skin. These processes collectively contribute to the skin's ability to adapt to various environmental conditions. What Your Skin Needs: Hydration: Adequate water intake is fundamental for skin health. Hydrated skin is supple, resilient, and better equipped to perform its protective functions. Nutrition: A balanced diet rich in vitamins, minerals, and antioxidants supports skin health. Nutrients like vitamins C and E and omega-3 fatty acids contribute to collagen synthesis and protect against oxidative stress. Sun Protection: UV radiation is a significant threat to skin health. Regular use of sunscreen helps prevent sun damage and premature aging, and reduces the risk of skin cancer. Cleansing and Moisturizing: A gentle cleansing routine removes impurities without disrupting the skin's natural barrier. Moisturizing helps maintain hydration, especially in dry or harsh environments. Rest and Sleep: Quality sleep is essential for skin repair and regeneration. During deep sleep, the body produces growth hormone, aiding in the restoration of skin cells. Conclusion: Our skin is a masterpiece of biological engineering, a living testament to the marvels of nature. Understanding its physiology allows us to appreciate its resilience and complexity. By providing the care and nourishment it needs, we not only maintain healthy and vibrant skin but also contribute to our overall well-being. So, let's celebrate the intricate dance of cells, the protective embrace of layers, and the dynamic symphony that is our skin. After all, it's not just an organ—it's a work of art.
According to Lenz's law of electromagnetism, when a conductor falls within a certain range of an oscillating (alternating) magnetic field, it generates an oscillating field of its own, which opposes the primary field. A magnetometer can pick up the resulting changes in the overall field, signaling the nearby presence of a conductive object, typically a piece of metal. The range of metal detectors varies from a few feet for the smallest coils to 10 feet (3 m) for 12 to 15-inch (30.5 to 38.1 cm) coils. The key to a functioning metal detector is the presence of eddy currents generated by conductive objects in the environment. Just as pushing a paddle through a lake can cause little vortices to appear on the surface, producing an oscillating field in the environment induces electromagnetic vortices in nearby metal, as its electrons generate their own oscillating field. Frequencies of 3 to 20 kHz are known to produce the best results, and some modern metal detectors even allow the operator to change the frequency of the alternating field. A different, newer type of metal detector uses a technology called pulse induction. This metal detector blasts the ground with a large electromagnetic pulse and observes the length of time it takes for the voltage to decrease to ambient levels. If there is a conductive object under the ground, it will take longer for the voltage to decrease. It is a small effect, but modern sensors can pick it up well. This technique has certain advantages over conventional metal detectors, such as the ability to detect objects under highly mineralized "black sand." The applications of metal detectors are numerous and generally well known. Perhaps the most important application for any metal detector is to locate mines or improvised explosive devices buried just under the surface.
In some countries where mines still remain from old wars, such as Vietnam, people are advised to use metal detectors when walking through unfamiliar areas known to be at risk for the presence of land mines. This can save many lives. Another common use for the metal detector is searching for "buried treasure" - coins and relics from years or even millennia in the past. Searching a beach that has many visitors can bring up lost items from only a few days past. This is not a viable way to make a living, but some people enjoy it as a hobby.
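The pulse-induction timing comparison described above can be sketched numerically. This is only an illustrative model, not the firmware of any real detector: the received voltage is assumed to decay exponentially after the pulse, a conductive target is modeled as a longer decay time constant, and all the numbers (voltages, time constants, threshold) are invented for the sketch.

```python
import math

def decay_time(v0, tau, threshold):
    # Time for an exponentially decaying voltage v(t) = v0 * exp(-t / tau)
    # to fall to the ambient threshold: solve for t, giving tau * ln(v0 / threshold).
    return tau * math.log(v0 / threshold)

# Illustrative numbers only; time constants in microseconds.
t_empty = decay_time(v0=100.0, tau=5.0, threshold=1.0)    # no target nearby
t_target = decay_time(v0=100.0, tau=15.0, threshold=1.0)  # conductive target slows the decay

print(f"no target:   {t_empty:.1f} us")
print(f"with target: {t_target:.1f} us")
print("target detected" if t_target - t_empty > 2.0 else "nothing found")
```

The detector's whole decision reduces to that last comparison: a decay that outlasts the empty-ground baseline by more than the noise margin is flagged as a find.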
Living things are symmetrical organisms

In nature, the vast majority of animals are built on a strikingly symmetrical plan: two ears, two eyes, and often two pairs of limbs. This symmetry is not limited to the outside of living things; it also applies inside the body, because many organs come in pairs. We have two lungs, two kidneys, and two cerebral hemispheres. This organization is remarkable because it is also present in simple organisms such as jellyfish, corals, starfish and single-celled animals. Nor is the phenomenon limited to animals: plants, and flowers in particular, exhibit symmetry and simple shapes, as do the majority of molecules involved in metabolism. Symmetry necessarily implies an axis of symmetry. From an evolutionary point of view, symmetry allows the body to be organized around one or more axes, either bilaterally (right/left) like a human or radially like a starfish. Researchers have known for some time that some of our genes encode the essential information each cell needs to find its location along the axis of symmetry, especially during embryonic development.

Evolution favors simple “algorithms”

The researchers behind this study wondered why evolution prefers simple, symmetric forms when there are an infinite number of other possible forms. When they build structures such as bridges, engineers design modular and symmetric systems in order to increase the rigidity of the whole, but also to facilitate possible modifications. It would be tempting, by a simple technical analogy, to apply the same principle to biological structures. But that would be too simple, because unlike engineers working on a construction site, nature cannot plan ahead! This would mean that symmetry in biology must provide an immediate selective advantage. In their study, however, the researchers put forward another hypothesis.
They combined biological data with mathematics and computer science. Their hypothesis is based on a computational picture of evolution. Symmetrical structures appear through natural selection, of course, but also because such structures are easier to encode by the genes responsible for them. They require less information and are therefore more likely to appear as phenotypic changes arising from random genetic mutations. As Professor Ian J. Johnston of the University of Bergen in Norway, a co-author of the article, explains: “It’s a bit like explaining to a friend how to tile the floor using as few words as possible. You’d probably ask him to lay square tiles across the entire surface rather than squares in the center of the room, then wide rectangular tiles, and finally a long rectangular tile around the edge to finish the job.”

Symmetry in Nature: Better Understood Through Computer Modeling

The team of scientists used computer modeling to try to understand how the selection between simplicity and complexity works in biology. They show that, at the genetic level, random mutations that occur during the evolution of a species preferentially produce phenotypes with simple, symmetrical structures, because these require less information to encode. This may also explain why such symmetry exists not only in simple forms at the macroscopic level but also in microstructures such as protein complexes built from identical subunits, in the secondary structure of RNA, and even in gene regulatory networks. To better understand this principle, the researchers used a famous thought experiment from evolutionary biology, in which a room full of monkeys tries to type a book by hitting keys at random on a computer keyboard. If, instead of a 500-page book, the monkeys tried to type a simple cooking recipe, each of them would have a much better chance of producing the letters needed for this short and simple text.
If that simple recipe (the genetic information) is followed to the letter, it will reliably produce a simple dish (the biological structure). In this way, the researchers demonstrated that many biological structures and systems, as well as macromolecules such as RNA, DNA, proteins, carbohydrates, and many other biological molecules, adopt simple, symmetrical forms.
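The information-cost argument above can be made concrete with a toy calculation. This is only an illustrative sketch, not the authors' actual model: it shows that, for "monkeys" typing uniformly at random, the probability of hitting a given description shrinks exponentially with its length, so short (simple, symmetric) descriptions are reached far more often than long ones.

```python
import random
import string

def hit_probability(length, alphabet_size=26):
    """Chance that `length` uniformly random letters match one
    specific target string of that length."""
    return (1.0 / alphabet_size) ** length

# A short "recipe" versus a longer "book": each extra character
# multiplies the difficulty by the alphabet size.
recipe_p = hit_probability(3)   # 3-letter description
book_p = hit_probability(10)    # 10-letter description
print(recipe_p / book_p)        # 26**7, roughly 8 billion times likelier

# Monte Carlo "monkeys": count random 2-letter strings that hit "ab".
rng = random.Random(0)
trials = 200_000
hits = sum(
    "".join(rng.choice(string.ascii_lowercase) for _ in range(2)) == "ab"
    for _ in range(trials)
)
print(hits / trials)  # close to (1/26)**2, about 0.0015
```

The exponential gap is the whole point: a mutation process that samples descriptions at random will find the compactly encodable (symmetric) phenotypes vastly more often.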
One in ten Americans takes an anti-depressant drug like Zoloft or Prozac. These drugs have been shown to work in some patients, but their design is based on a so-called “chemical imbalance” theory of depression that is incomplete, at best. Image Credit: Tom Varco | CC BY-SA 3.0 The number of people taking antidepressants has increased by over 400% since the early ‘90s. In a certain light, this could be perceived as a success for public health; it is clear, for example, that tens of millions of people have found antidepressants to be effective. What’s less clear is why these medications work, but decades of research on the subject suggest that an explanation parroted in ad campaigns and physicians’ offices alike – that depression can be chalked up to low levels of serotonin in the brain – is insufficient. “Chemical imbalance is sort of last-century thinking. It’s much more complicated than that,” Dr. Joseph Coyle, a professor of neuroscience at Harvard Medical School, told NPR in 2012. “It’s really an outmoded way of thinking.” This is the story of how pharmaceutical companies and psychiatrists convinced the public that depression was the result of a simple chemical imbalance – and how scientists, patients, and psychiatrists are working to piece together the more complicated truth. Better Thinking (And Feeling) Through Chemistry Flickr user spike55151. CC BY-NC-SA 2.0 Psychiatry in the 1950s was a field in transition. Mental disorders were often believed to be the direct result of social circumstance, and many psychiatrists relied on talk therapy to treat their patients. The few drug therapies that did exist were rarely well-suited for treating particular maladies. Morphine and opium were sometimes used to treat depression, while insulin shock therapy was used to render uncooperative schizophrenic patients comatose. By the end of the 1950s, Thorazine, a new psychiatric drug, had become the treatment of choice for schizophrenia. 
Thorazine simplified the problem of safely keeping aggressive patients calm and docile, and was seen as far less cruel than putting those patients in a coma. During the 1960s, researchers confirmed that neurotransmitters, like dopamine or serotonin, served as the chemical signals that allowed neurons to communicate, underpinning much of the brain’s function. Thorazine was soon found to inhibit dopamine receptors in mice, blocking the normal effects of dopamine, and potentially explaining its sedating effects in humans. [Image Credit: Dan Century | CC BY 2.0] Drugs similar to Thorazine were then developed on the premise that excessive dopamine in the brain could be responsible for certain aspects of schizophrenia. These drugs quickly demonstrated that the chemical manipulation of neurotransmitters could be effective in treating mental disorders. Psychiatry had lagged behind other medical fields for decades, in large part because it lacked treatments tailored to treat specific conditions. Thorazine helped accelerate the acceptance of biological psychiatry, which focused on the biological basis of mental disorders. Biological psychiatry also provided a welcome opportunity for psychiatrists to work directly with pharmaceutical companies to develop targeted, drug-based treatments for mental disorders. Change was in the air. Pinpointing Low Serotonin As The Culprit Behind Depression Psychiatrists in the mid-20th century were also keen to develop drug therapies for more common mental disorders, like depression. Case reports had documented mood changes in patients being treated with various drugs for non-psychiatric illnesses. Iproniazid, used to treat tuberculosis, seemed to improve patients’ moods, while reserpine, originally used to manage high blood pressure, appeared to mimic depression. Just why either of these drugs influenced mood remained anyone’s bet. 
Thorazine’s documented effects on dopamine receptors raised the possibility that iproniazid and reserpine might be influencing mood via their effects on some neurotransmitter. Remarkably, this appeared to be the case. Iproniazid increased serotonin levels in the brain, while reserpine decreased serotonin levels. Other drugs which had similarly shown promise as anti-depressants in the 1950s, like imipramine, were also shown to raise serotonin levels. [Image Credit: Kevin Dooley | CC BY 2.0] These examples (with the exception of reserpine’s serotonin-sapping effects – more on this later) suggested that low serotonin might be responsible for depression’s symptoms, and that boosting serotonin’s levels might alleviate these symptoms. In other words, they indicated that depression could be due to a chemical imbalance in the brain, and that this imbalance could be corrected through the targeted use of proper drugs. Based on rodent studies, researchers could reasonably surmise that the drugs would increase serotonin levels. What they couldn’t assume was that a boost in serotonin levels would be of benefit to people suffering from depression. And yet, at least for some patients, the therapeutic effects of these drugs were undeniable. But these early anti-depressants caused severe side effects, and psychiatrists were skeptical that patients would agree to take them. Pharmaceutical companies saw a major (and, potentially, majorly lucrative) opportunity: A drug that could increase serotonin levels without causing severe side effects could revolutionize the treatment of depression. These companies began hunting for new chemicals that met these criteria. A New Class Of Antidepressants In the early 1970s, pharmaceutical chemists struck gold with the invention of drugs like fluoxetine (Prozac) and sertraline (Zoloft). 
These compounds were part of a new class of anti-depressants, called selective serotonin reuptake inhibitors (SSRIs), that raised serotonin levels by preventing neurons from recycling serotonin that had already been released. Promisingly, SSRIs were about as good at treating depression as their predecessors, but they caused milder side-effects. Left: Prozac prevents serotonin from being reabsorbed, increasing its concentration in the synapse. Wikimedia Commons/vtvu. CC BY-SA 3.0 It took about twenty years for the first SSRIs to pass through clinical trials and receive FDA approval. Psychiatrists and drug companies alike were happy to trumpet a biological explanation for depression (low serotonin), and an appropriate, relatively safe remedy (SSRIs). “Why am I depressed, and what can I do about it?” a patient might ask. “Well, there’s research indicating that depression is related to low levels of serotonin,” a psychiatrist might reply. “And here’s a pill that will increase your serotonin levels, and alleviate your depression.” Television commercials, too, leaned heavily on the chemical imbalance theory: The ad, like many pharmaceutical commercials, was careful not to make absolute claims about Zoloft’s effectiveness. Instead, it contextualized a definitive statement (that Zoloft works to correct an imbalance) with an inconclusive one (i.e. that while its cause is unknown, depression may be related to an imbalance of natural chemicals). Couched in this careful language was an implication, that psychiatrists not only had a solid grasp of depression’s biological underpinnings, but had deduced from this understanding how to treat its symptoms in a targeted way. It’s difficult to say what effect direct-to-consumer marketing campaigns like this one had on antidepressant sales, but it seems reasonable to assume that it was significant; by 2006, anti-depressants in the U.S. represented the most popular category of prescription drug. 
But those familiar with depression know that it can often resist treatment. Not every person faced with depression can be helped by anti-depressants designed to “correct” a supposed serotonin deficit – a fact that underscores the insufficiency of the chemical imbalance theory, and the complexity of depression, in general. The Myth Of The Chemical Imbalance Theory There is no question that the chemical imbalance theory has spurred chemists to invent new anti-depressants, or that these anti-depressants have been shown to work; but proof that low serotonin is to blame for depression – and that boosting serotonin levels is the key to its treatment – has eluded researchers. For starters, it is impossible to directly measure brain serotonin levels in humans. You can’t sample human brain tissue without also destroying it. A crude work-around involves measuring levels of a serotonin metabolite, 5-HIAA, in cerebrospinal fluid (CSF), which can only be obtained with a spinal tap. A handful of studies from the 1980s (like this one) found slightly decreased 5-HIAA in the CSF of depressed and suicidal patients, while later studies have produced conflicting results on whether SSRIs lower or raise CSF levels of 5-HIAA. These studies are all circumstantial with regards to actual serotonin levels, though, and the fact remains there is no direct evidence of a chemical imbalance underlying depression. [Left: The only way to measure brain serotonin levels in living people is to take a sample of cerebrospinal fluid, via spinal tap. Credit: Blausen.com staff, Wikiversity Journal of Medicine | CC BY 3.0.] The corollary to the chemical imbalance theory, which implies that raising brain serotonin levels alleviates depression, has also been hard to prove. As mentioned previously, the serotonin-depleting drug reserpine was itself shown to be an effective anti-depressant in the 1950s, the same decade in which other studies claimed that reserpine caused depression-like symptoms. 
At the time, few psychiatrists acknowledged these conflicting reports, as the studies muddled a beautiful, though incorrect, theory. Tianeptine is another drug that decreases serotonin levels while also serving as a bona-fide anti-depressant. Tianeptine does just the opposite of SSRIs – it enhances serotonin reuptake. Wellbutrin is a third anti-depressant that doesn’t increase serotonin levels. You get the picture. If you prefer your data to be derived more accurately, but less relevantly, from rodents, you might consider a recent meta-analysis carried out by researchers led by McMaster University psychologist Paul Andrews. Their investigation revealed that, in rodents, depression was usually associated with elevated serotonin levels. Andrews argues that depression is therefore a disorder of too much serotonin, but the ambiguous truth is that different experiments have shown “activation or blockage of certain serotonin receptors [to improve] or worsen depression symptoms in an unpredictable manner.” Other problems with the chemical imbalance model of depression have been well documented elsewhere. For instance, if low serotonin levels were responsible for symptoms of depression, it stands to reason that boosting levels of serotonin should alleviate symptoms more or less immediately. In fact, antidepressants can take more than a month to take effect. Clearly, something here just doesn’t add up. Bringing The Public Up To Speed With 50 Years Of Brain Science To spur psychiatry forward, we need an improved public understanding of depression, and new forms of treatment. To learn more about the former , I contacted Jeffrey Lacasse – an assistant professor in the College of Social Work at Florida State University who specializes in mental health and psychiatric medications – and neuroanatomist Jonathan Leo of Lincoln Memorial University in Tennessee. In 2007, Lacasse and Leo published research on the media’s propagation of the chemical imbalance theory. 
In their investigation, the researchers followed up on every mention of the chemical imbalance theory they could find over a one-year span. I wanted to know the extent to which the public dialogue about depression has shifted since their investigation was published. In a joint e-mail, Lacasse and Leo told me that the public portrayal of the chemical imbalance theory has dropped off noticeably in the past few years. Though TV commercials promoted SSRIs using the chemical imbalance theory in the early 2000s, “we noticed these advertisements came to a screeching halt around 2006-07,” they said. It’s not entirely clear why these advertisements disappeared, but the researchers speculate it’s because the underlying science had failed to corroborate the theory, and had finally come to the attention of advertising execs who had knowingly skipped their homework. But Lacasse and Leo say depressed patients are still routinely told by their GPs and psychiatrists that they have a chemical imbalance, in spite of criticisms from prominent academic psychiatrists like Ronald Pies, who “states that no knowledgeable, well-trained clinician would say such a thing.” “If patients search the internet on these issues,” Lacasse and Leo say, “we would expect them to be very confused.” The two researchers are concerned “that the story most patients have been hearing from their clinicians for the past 25 years simply has never lined up with the actual scientific data,” raising the question of whether patients have had the opportunity to give fully-informed consent. There is no question that antidepressants can be very beneficial for some people. But the effectiveness of these medications has been shown to vary widely.
As noted in a meta-analysis of antidepressant drug effects published in January 2010 in The Journal of the American Medical Association: The magnitude of benefit of antidepressant medication compared with placebo increases with severity of depression symptoms and may be minimal or nonexistent, on average, in patients with mild or moderate symptoms. For patients with very severe depression, the benefit of medications over placebo is substantial. Some psychiatrists vehemently disagree with the way journalists and other psychiatrists have pushed back against the chemical imbalance theory, and anti-depressants in general, noting that these therapies are effective, even if we don’t fully understand why they work. For what it’s worth, the sudden cessation of televised versions of the chemical imbalance theory still perplexes Lacasse and Leo, who are continuing to study how the public portrayal of depression influences patients. Today, the chemical imbalance theory appears to exist predominantly in the lay audience’s mind. It seems there exists opportunity for change. The Science Of Depression Advances – With Luck, Psychiatry Will Follow To get a sense of where an expert in depression felt the study and treatment of depression was heading, I contacted Poul Videbech, a professor of psychiatry at Aarhus University Hospital in Denmark. He was frank with his assessment of the field: “The truth is, the chemical imbalance theory has been immensely fruitful, as it has inspired us to develop new drugs,” he said. “At the same time,” he adds, “it has probably been wrong, or at the very least partially wrong. Depression – which is several disease entities – is much more complicated than this simplistic theory assumes.”
Videbech says depression’s wide range of symptoms can be linked to myriad overlapping factors, from genetic vulnerability, to deficiency of certain neurotransmitters (call it “chemical-imbalance-theory-lite”), to disruptions in circadian rhythms, to factors that can alter the survival and growth of neurons. The birth of new neurons, for example, is a hallmark of a healthy brain; a prominent new theory about how SSRIs work has connected elevated serotonin levels to the elevated birth of neurons. But the science still has a ways to go. “It is also obvious that psychological stress and so-called early lifetime stress can cause depression,” he says. That’s not to say that depression’s social underpinnings are distinct from its biological ones, Videbech adds. “The dichotomy of depressions being either ‘biological’ or ‘psychological’ disorders,” he says, “is thus false, and not justified by scientific literature.” This dichotomy, he says, is upheld in large part by lay people, who may think that treatment with anti-depressants implies an exclusively biological origin for the disease. “It is a major pedagogical task for doctors (and journalists) to eradicate these old-fashioned beliefs. They are so beautifully simple to explain,” says Videbech, “but nevertheless wrong.” Videbech also mentioned several new therapies that could gain traction in coming years. Ketamine, for example, shows promise, but must be given at regular intervals; transcranial magnetic stimulation, in which magnets are used to non-invasively manipulate brain activity, and wake therapy, in which patients are kept awake for prolonged periods, are two other options backed by reams of scientific evidence.
In the future, we may even see psychedelics return to the psychiatric clinic; a number of psychedelic compounds – including psilocybin, the hallucinogen found in magic mushrooms – have shown promise as antidepressants in recent years, a fact that has led many to call for an end to bans on psychoactive drug research. SSRIs remain an effective form of therapy for millions of patients, but scientists and psychiatrists are eager to improve our understanding of depression and its treatment. That understanding may eventually incorporate some aspect of the chemical imbalance theory, but the whole picture is almost certainly more complex. Update: This piece originally stated that “studies have shown that much of [antidepressants’] effect is likely due to placebo.” In fact, meta analyses have concluded that the magnitude of benefit of antidepressant medication compared with placebo tends to increase with severity of depression symptoms. The piece has been revised to clarify this point. Update #2: As many readers have pointed out, this piece originally overstated several of its points regarding the validity of the chemical imbalance theory. While it is certainly incomplete, aspects of the theory may one day fit into a more complex and comprehensive understanding of depression. The headline and various sections throughout the piece have been revised to correct this. We apologize for the error, and thank those of you who called us out on the mistake.
Scientists regenerate parts of the skull affected by craniosynostosis, a common birth defect. Using stem cells to regenerate parts of the skull, scientists corrected skull shape and reversed learning and memory deficits in young mice with craniosynostosis, a condition estimated to affect 1 in every 2,500 infants born in the United States, according to the Centers for Disease Control and Prevention. The only current therapy is complex surgery within the first year of life, but skull defects often return afterward. The study, supported by the National Institute of Dental and Craniofacial Research (NIDCR), could pave the way for more effective and less invasive therapies for children with craniosynostosis. The findings were published in Cell. NIDCR is part of the National Institutes of Health. “This is a pivotal study demonstrating both structural regeneration and functional restoration in an animal model of craniosynostosis,” said Lillian Shum, PhD, director of NIDCR’s Division of Extramural Research. “It holds great potential for translation to treatment of the human condition.” Healthy infants are born with sutures — flexible tissue that fills the space between the skull bones — that allow the skull to expand as the brain grows rapidly in the first few years of life. In craniosynostosis, one or more sutures turn into bone too early, closing the gap between skull plates and leading to abnormal growth. The resulting increase in pressure inside the skull may cause physical changes in the brain that lead to thinking and learning problems. “The connection between changes in the skull and the development of cognitive deficits had not been fully explored,” said Yang Chai, D.D.S., Ph.D., director of the Center for Craniofacial Molecular Biology and associate dean of research at the Herman Ostrow School of Dentistry at the University of Southern California, Los Angeles, who led the study.
“We wanted to know if restoring sutures could improve neurocognitive function in mice with mutations in a gene that causes craniosynostosis in both mice and humans.” That gene, called TWIST1, is thought to be important for suture formation during development. In humans, mutations in this gene can lead to Saethre-Chotzen syndrome, a genetic condition characterized by craniosynostosis and other skeletal abnormalities. To see if flexible sutures could be restored in mice with craniosynostosis due to Twist1 mutations, the scientists focused on a group of stem cells normally found in healthy sutures. Previous studies by the group indicated that these stem cells—called Gli1+ cells—are key to keeping skull sutures of young mice intact. The team had also found that Gli1+ cells are depleted from the sutures of mice that develop craniosynostosis due to Twist1 mutations. Chai and his colleagues reasoned that replenishing the cells might help regenerate the flexible sutures in affected animals. To test this idea, the researchers added Gli1+ cells from healthy mice to a biodegradable gel. They deposited the mixture into grooves meant to re-create the space where skull sutures had been in mice with craniosynostosis. Skull imaging and tissue analysis revealed that after six months, new fibrous sutures had formed in treated areas and that the new tissue remained intact even after a year. In contrast, the same grooves closed in mice that received a gel that lacked Gli1+ cells. Closer analysis showed that Gli1+ cells in the regrown sutures had different origins: some were descended from the cells that had been implanted, while others were the animals’ own, having migrated from nearby areas. The findings suggest that Gli1+ cell implantation leads to suture regeneration in part by recruiting native Gli1+ stem cells to help in the process. 
Further experiments showed that untreated mice with craniosynostosis had increased pressure inside their skulls and poor performance on tests of social and spatial memory and motor learning. After treatment, these measures all returned to levels typical of healthy mice. The skull shapes of treated mice were also partially corrected. The treatment also reversed the loss of brain volume and nerve cells in areas involved in learning and memory. According to the scientists, this finding sheds light on the mechanisms underlying impaired brain function and its improvement after suture regeneration. “We have discovered that Gli1+ stem-cell-based suture regeneration restores not only skull shape but also neurocognitive functions in a mouse model of craniosynostosis,” said Chai. The scientists note that more work remains before such an intervention can be tested in humans, including studies to determine the optimal timing of surgery and the ideal source and amount of stem cells. “This study provides a foundation for efforts to develop a less-invasive, stem cell-based therapeutic strategy that can benefit patients who suffer from this devastating disorder,” Chai said.
What is Reading for Gold? Reading for Gold is our personal reading scheme designed to encourage pupils to read independently, use sources such as the library to develop their reading, and enhance their analysis and evaluation skills across a range of genres. Pupils should be encouraged to select and read texts for enjoyment and interest, and express their personal response. As pupils progress through the six awards in Reading for Gold they should be using their developing language skills to access more challenging texts. What are the benefits of Reading for Gold? - Creates enthusiasm for reading - Pupils take responsibility for their reading - Encourages independent and critical thinking - Encourages pupils to analyse texts in different ways - Pupils learn how to evaluate texts Find out more by reading our Guide for Parents. Use the following website to discover the AR level of the book your child is reading:
Spina Bifida and Anencephaly
Spina bifida and anencephaly are serious birth defects that occur when a baby’s spinal cord or brain does not develop properly during early pregnancy. These are called neural tube defects. Babies with anencephaly do not survive, while babies born with spina bifida face a high probability of death in early childhood, require ongoing medical and surgical care, and are often confined to life in a wheelchair. These birth defects place a significant emotional and financial burden on families, affecting children from all economic and ethnic groups. The brain and spinal cord are formed during the first 28 days of pregnancy, often before a woman knows she is pregnant. Thus, to prevent these birth defects, an intervention must occur before conception and continue through early pregnancy. Globally, it is estimated that 300,000 births are affected by these neural tube defects each year. Of those births, 240,000 are due to folic acid deficiency. Current prevention efforts avert about 35,000 cases, leaving close to 200,000 affected births that could be prevented each year. This number represents 20 times the total number of babies harmed by thalidomide 60 years ago, yet thalidomide (a drug initially prescribed to prevent nausea in early pregnancy) was quickly removed from use. The solution to folic acid-preventable spina bifida and anencephaly has been known for 25 years – provide sufficient folic acid to women of child-bearing age. Yet to date there has not been a well-planned, funded, and coordinated global program to eliminate these birth defects, as there has been for other conditions, such as polio. The high count and severity of these defects, in the face of an achievable solution, are why the Center for Spina Bifida Prevention at Emory University is focusing on the problem.
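The figures quoted above fit together as simple arithmetic. A quick check, using only the numbers stated in the text:

```python
# Global annual estimates quoted in the text above.
total_ntd_births = 300_000      # births affected by neural tube defects
folate_preventable = 240_000    # the subset due to folic acid deficiency
currently_prevented = 35_000    # cases averted by existing programs

# Folate-preventable cases not yet being prevented:
remaining = folate_preventable - currently_prevented
print(remaining)  # 205000 -- the "close to 200,000" preventable births
```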
The discovery by an international team of researchers, including University of Alberta professor Michael Caldwell, rolls back the clock on snake evolution by nearly 70 million years. "The study explores the idea that evolution within the group called 'snakes' is much more complex than previously thought," says Caldwell, professor in the Faculty of Science and lead author of the study published today in Nature Communications. "Importantly, there is now a significant knowledge gap to be bridged by future research, as no fossil snakes are known from between 140 and 100 million years ago."
New knowledge from ancient serpents
The oldest known snake, Eophis underwoodi, from an area near Kirtlington in southern England, is known only from very fragmentary remains and was a small individual, though it is hard to say how old it was when it died. The largest snake, Portugalophis lignites, from coal deposits near Guimarota in Portugal, was a much bigger individual, about a metre long. Several of these ancient snakes (Eophis, Portugalophis and Parviraptor) were living in swampy coastal areas on large island chains in western parts of ancient Europe. The North American species, Diablophis gilmorei, was found in river deposits some distance inland in western Colorado. This new study makes it clear that the sudden appearance of snakes some 100 million years ago reflects a gap in the fossil record, not an explosive radiation of early snakes. From 167 to 100 million years ago, snakes were radiating and evolving toward the elongated, limb-reduced body shape characterizing the now well-known marine snakes, roughly 100-90 million years old, from the West Bank, Lebanon and Argentina, which still possess small but well-developed rear limbs. Caldwell notes that the identification of definitive snake skull features reveals that the fossils -- previously associated with other non-snake lizard remains -- represent a much earlier time frame for the first appearance of snakes.
"Based on the new evidence and through comparison to living legless lizards that are not snakes, the paper explores the novel idea that the evolution of the characteristic snake skull and its parts appeared long before snakes lost their legs," he explains. He adds that the distribution of these newly identified oldest snakes, and the anatomy of the skull and skeletal elements, makes it clear that even older snake fossils are waiting to be found.
By Blog Editor Susan Wells
Recently at a Girl Scout overnight, the girls worked on earning a badge while playing a game about the water cycle. It’s hands-on, interactive, and a great way to teach about water molecules and their journeys. The water cycle is usually portrayed in a circular diagram – water from the clouds precipitates, or rains down, on the land; the water runs into rivers and the ocean and evaporates back into cloud form. That is a simplified explanation of how water travels. Water actually moves through several places, or compartments, on its journey, and a water molecule may spend a long time or a very short time in any one compartment. For example, water can stay frozen in glaciers for hundreds of years, or travel underground for a long time. The Antarctic Bottom Water, the deep ocean water formed in the Antarctic, takes over 250 years to travel along the bottom of the Pacific Ocean before it resurfaces near the Aleutian Islands. Animals and plants also move water: it is consumed and extracted, and it leaves through respiration, perspiration, excretion, or evaporation. Think of all of the places water is found and essential. What You Will Need / Time it Will Take to Play: Before playing the game, start out with a mini lesson on the water cycle and conservation. All of the water molecules on our planet are the original molecules. There is no way to get more water – the water on Earth just moves through different forms and locations, but it does not grow or increase. Water is always in motion – sometimes it moves quickly and other times slowly. Playing the Game: Kids become water molecules traveling through the water cycle and gain a strong understanding of the movement of water. - Divide the students up into nine even groups. - Give each student a pipe cleaner with a loop at the bottom to hold the beads.
- Each group starts at one of the nine compartments of the water cycle – clouds, lakes, rivers, glaciers, groundwater, soil, ocean, plants, and animals. Each station has a sign, one color of beads, and the corresponding block. - Students line up at each station and take a colored bead that represents that station. One at a time, each child rolls the dice. - Depending on where the dice lands, the student will move to the next station. Not every roll will move the water molecule. For example, water molecules in glaciers can get stuck for awhile. - Each time the dice is rolled, a bead is added to the pipe cleaner. If a water molecule gets stuck at the glacier for three rolls, three beads are added. - For a twist, you can use Color Changing UV Beads at six of the stations. These beads will be white inside the classroom and turn color outside in sunlight. Students won’t see the colors representing some of the stations until they step outside with their bead stories. - Have students also write the name of each station they land at and how they got there in their science notebooks. - Continue the game until each student or water molecule has cycled about 10 times. - Pipe cleaners can bend into bracelets for students to take home. After the game has ended, have some of the students share their unique journey. Did they make it through all nine stations? Did they move through one station more than once? Get stuck anywhere? You can also discuss how water molecules may move in different situations. Have students practice acting out some of the motions water molecules make as they move. Snow and rain molecules stick together, while vapor molecules move alone. Cold molecules move slow, while warm ones move faster. This game was originated by NOAA on their education page. They offer downloads of game cube layouts, station signs and full instructions.
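The station-to-station movement in the game behaves like a simple random walk, and the whole activity can be sketched in code. The transition tables below are illustrative assumptions (the actual NOAA game cubes define their own moves), but they capture the mechanic: each roll either sends a molecule to a new compartment or leaves it "stuck", as often happens at the glacier station.

```python
import random

# The nine compartments from the game. Each list plays the role of one
# six-sided game cube: a roll picks one of the six outcomes. These
# tables are illustrative assumptions -- the real NOAA game cubes define
# their own moves -- but note how "glaciers" mostly points back to
# itself, modeling water stored in ice for a long time.
MOVES = {
    "clouds":      ["rivers", "lakes", "ocean", "glaciers", "soil", "clouds"],
    "lakes":       ["rivers", "groundwater", "animals", "clouds", "lakes", "lakes"],
    "rivers":      ["ocean", "lakes", "groundwater", "animals", "clouds", "rivers"],
    "glaciers":    ["glaciers", "glaciers", "glaciers", "rivers", "clouds", "groundwater"],
    "groundwater": ["rivers", "lakes", "plants", "groundwater", "groundwater", "groundwater"],
    "soil":        ["plants", "rivers", "groundwater", "clouds", "soil", "soil"],
    "ocean":       ["clouds", "clouds", "ocean", "ocean", "ocean", "ocean"],
    "plants":      ["clouds", "clouds", "clouds", "plants", "plants", "plants"],
    "animals":     ["soil", "clouds", "clouds", "animals", "soil", "clouds"],
}

def journey(start, rolls=10, seed=None):
    """Simulate one student's bead string: the starting station plus
    one station per roll (repeats model getting 'stuck')."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(rolls):
        path.append(rng.choice(MOVES[path[-1]]))
    return path

# One molecule's 10-roll journey, reproducible via the seed:
print(journey("ocean", rolls=10, seed=42))
```

Running many such journeys and tallying the visits is a nice follow-up exercise: it shows the class which compartments act as hubs (clouds, ocean) and which act as long-term reservoirs (glaciers, groundwater), just as the bead bracelets do.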
The Zebra Swallowtail is a beautiful species of swallowtail butterfly mainly found in different regions of the United States. The black and white striped pattern of these butterflies resembles the coloration of zebras. Zebra Swallowtail Butterflies are rarely found far from pawpaw shrubs. The butterflies of this species have a unique wing shape along with long tails, and their distinctive appearance makes them easy to identify.

Color: Their white or greenish-white wings are striped with black longitudinal bands. There are two blue spots in the corner of the inner margin of their hind wings, which also have a red spot near the body of the butterfly. A long red stripe runs along the middle of their ventral hind wings. These butterflies have two different forms for the seasons of summer and spring: the spring form is smaller and whiter, while the summer form is much larger with broad black stripes on its wings. The caterpillars of this species are green or black.

Picture 1 – Zebra Swallowtail Butterfly

Wingspan: The wingspan of these butterflies ranges from 2.5 inches to 4.1 inches (6.4 to 10.4 cm).

Shape: They have triangular wings.

Tail: These butterflies have a pair of sword-like tails extending from their hind wings. The black tails are short and tipped with white during spring; during summer, the tails appear much longer and more graceful with wide white borders.

These butterflies are native to the eastern regions of the US and southeastern Canada. Their distribution range extends from the southern parts of Ontario and Michigan along the Atlantic coast to the Gulf States and Florida. Various species of pawpaw shrubs are the host plants for the Zebra Swallowtail Butterflies, and they do not usually wander far from their host plants. They inhabit southern pine woodlands, deciduous woodlands, savannas and prairies where these trees grow. These swallowtail butterflies prefer intact woodland habitats much more than developed areas.
Adult butterflies of this species sip flower nectar using their proboscis (a straw-like organ). Like other butterfly species, these adult butterflies do not have jaws. Sometimes they use their proboscis to collect pollen from various flowers; digesting this pollen lets them absorb protein, which gives them extra nutrition and energy for reproduction. They feed on different flowers including lilac, blackberry, verbena and redbud. Male Zebra Swallowtail Butterflies obtain various nutrients through the practice of puddling, which helps them in reproduction. The caterpillars feed on their eggshells after hatching. They also eat the long leaves of their host pawpaw plants, and some caterpillars even feed on other caterpillars living on the same plants.

This species flies between 2 feet and 6 feet (0.6 and 1.8 meters) above the ground. These butterflies visit various flowers from different families including Brassicaceae, Lythraceae, Apocynaceae, Fabaceae, Polemoniaceae and Rosaceae. The female Zebra Swallowtail Butterflies fly slowly when looking for appropriate host plants. During mating seasons, the males of this species fly swiftly near host plants searching for females. These males are known to participate in "puddling," in which individual butterflies gather together on moist soil, gravel and sand to obtain amino acids and salts.

Invertebrates like spiders, ants and different types of wasps feed on this butterfly species. Their adaptive features help them to survive in their natural habitat: the bold black stripes of these butterflies, along with their low, erratic flight, make it hard for predators to follow and capture them. The larvae of this species have an orange Y-shaped gland on their neck called the osmeterium; this gland gives off an unpleasant odor that helps them avoid predators. Unlike many other swallowtail species, this species has a shorter proboscis, which is why it prefers small, flat flowers.
These swallowtail butterflies are found in the northern regions of their distribution range between late March and August. They can be seen in their southern distribution areas between February and December. The male Zebra Swallowtail Butterflies look for mates during warm days near pawpaw plants. After mating, the females find suitable larval host plants on which to lay eggs. They lay the eggs singly on the leaves or trunks of pawpaw because of the cannibalistic nature of their larvae. This species produces two broods (young produced during one hatching) in the northern regions of their habitat and three to four broods in the south. The first brood of each reproductive season is the largest in number.

Picture 3 – Zebra Swallowtail Butterfly

The eggs are pale green in the initial stage, turning orange-brown as they mature. Young caterpillars are black with light transverse stripes on their body. Older larvae are commonly green banded with white and yellow; black larvae transversely striped with white and orange are rarer. The larvae take around a month to develop into an adult butterfly. The adults have an average lifespan of 5 to 6 months.

These butterflies are fairly easy to take care of, and it is quite entertaining as well as educational to watch them pass through the various stages of their life cycle.

Housing: The caterpillars of this species should be housed on their host pawpaw plant. The whole plant should be covered with a soft net to prevent the caterpillars from escaping. Adult butterflies should be housed around flowering plants from which they can easily collect nectar, ideally flowers like lilac, verbena and blackberry. It is important to cover the flowering plants along with the host plants.

Feeding: The larvae feed on the leaves of their host plants, while adult Zebra Swallowtails collect nectar and pollen from the flowers of the other plants they are housed with.
Caring: They do not need much tending and handling. However, one should provide them with proper host plants and flowering plants from which they will get their food.

Find out some interesting facts about this species:
- They are the official state butterfly of Tennessee, United States.
- Their zebra-like coloration earned them the name Zebra Swallowtail.
- The lifespan of these swallowtail butterflies is longer than that of most other butterfly species.

Here are some images of these graceful, attractive butterflies.
Marie Curie — Marie Sklodowska was born in Warsaw on 7 November 1867, the daughter of a teacher. In 1891, she went to Paris to study physics and mathematics at the Sorbonne, where she met Pierre Curie, professor of the School of Physics. They were married in 1895. The Curies worked together investigating radioactivity, building on the work of the German physicist Roentgen and the French physicist Becquerel. In July 1898, the Curies announced the discovery of a new chemical element, polonium. At the end of the year, they announced the discovery of another, radium. The Curies, along with Becquerel, were awarded the Nobel Prize for Physics in 1903. Pierre's life was cut short in 1906 when he was knocked down and killed by a carriage. Marie took over his teaching post, becoming the first woman to teach at the Sorbonne, and devoted herself to continuing the work that they had begun together. She received a second Nobel Prize, for Chemistry, in 1911. The Curies' research was crucial in the development of x-rays in surgery. During World War One, Curie helped to equip ambulances with x-ray equipment, which she herself drove to the front lines. The International Red Cross made her head of its radiological service, and she held training courses for medical orderlies and doctors in the new techniques. Despite her success, Marie continued to face great opposition from male scientists in France, and she never received significant financial benefits from her work. By the late 1920s her health was beginning to deteriorate. She died on 4 July 1934 from aplastic anaemia, caused by exposure to high-energy radiation during her research. The Curies' eldest daughter Irene was herself a scientist and a winner of the Nobel Prize for Chemistry.
Be Prepared for a Flood

Floods are one of the most common hazards in the United States. Flood effects can be local, impacting a neighborhood or community, or very large, affecting entire river basins and multiple states. However, all floods are not alike. Some floods develop slowly, sometimes over a period of days. But flash floods can develop quickly, sometimes in just a few minutes and without any visible signs of rain. Flash floods often have a dangerous wall of roaring water that carries rocks, mud, and other debris and can sweep away most things in its path. Overland flooding occurs outside a defined river or stream, such as when a levee is breached, but still can be destructive. Flooding can also occur when a dam breaks, producing effects similar to flash floods.

Be aware of flood hazards no matter where you live, but especially if you live in a low-lying area, near water or downstream from a dam. Even very small streams, gullies, creeks, culverts, dry streambeds, or low-lying ground that appear harmless in dry weather can flood. Every state is at risk from this hazard.

HOW DO I PROTECT MYSELF FROM A FLOOD?

Read the free information below from FEMA, the U.S. Government's Federal Emergency Management Agency.

Before a Flood

To prepare for a flood, you should:
- Avoid building in a floodplain unless you elevate and reinforce your home.
- Elevate the furnace, water heater, and electric panel if susceptible to flooding.
- Install "check valves" in sewer traps to prevent flood water from backing up into the drains of your home.
- Construct barriers (levees, beams, floodwalls) to stop floodwater from entering the building.
- Seal walls in basements with waterproofing compounds to avoid seepage.

The smartest thing you can do to prepare for floods is PURCHASE FLOOD INSURANCE.

During a Flood

If a flood is likely in your area, you should:
- Listen to the radio or television for information.
- Be aware that flash flooding can occur.
- If there is any possibility of a flash flood, move immediately to higher ground. Do not wait for instructions to move.
- Be aware of streams, drainage channels, canyons, and other areas known to flood suddenly. Flash floods can occur in these areas with or without such typical warnings as rain clouds or heavy rain.

If you must prepare to evacuate, you should do the following:
- Secure your home. If you have time, bring in outdoor furniture. Move essential items to an upper floor.
- Turn off utilities at the main switches or valves if instructed to do so. Disconnect electrical appliances. Do not touch electrical equipment if you are wet or standing in water.

If you have to leave your home, remember these evacuation tips:
- Do not walk through moving water. Six inches of moving water can make you fall. If you have to walk in water, walk where the water is not moving. Use a stick to check the firmness of the ground in front of you.
- Do not drive into flooded areas. If floodwaters rise around your car, abandon the car and move to higher ground if you can do so safely. You and the vehicle can be quickly swept away.

Driving Flood Facts

The following are important points to remember when driving in flood conditions:
- Six inches of water will reach the bottom of most passenger cars, causing loss of control and possible stalling.
- A foot of water will float many vehicles.
- Two feet of rushing water can carry away most vehicles, including sport utility vehicles (SUVs) and pickups.

After a Flood

The following are guidelines for the period following a flood:
- Listen for news reports to learn whether the community's water supply is safe to drink.
- Avoid floodwaters; water may be contaminated by oil, gasoline, or raw sewage. Water may also be electrically charged from underground or downed power lines.
- Avoid moving water.
- Be aware of areas where floodwaters have receded. Roads may have weakened and could collapse under the weight of a car.
- Stay away from downed power lines, and report them to the power company.
- Return home only when authorities indicate it is safe.
- Stay out of any building if it is surrounded by floodwaters.
- Use extreme caution when entering buildings; there may be hidden damage, particularly in foundations.
- Service damaged septic tanks, cesspools, pits, and leaching systems as soon as possible. Damaged sewage systems are serious health hazards.
- Clean and disinfect everything that got wet. Mud left from floodwater can contain sewage and chemicals.

Flood: Know Your Terms

Familiarize yourself with these terms to help identify a flood hazard:

Flood Watch: Flooding is possible. Tune in to NOAA Weather Radio, commercial radio, or television for information.

Flash Flood Watch: Flash flooding is possible. Be prepared to move to higher ground; listen to NOAA Weather Radio, commercial radio, or television for information.

Flood Warning: Flooding is occurring or will occur soon; if advised to evacuate, do so immediately.

Flash Flood Warning: A flash flood is occurring; seek higher ground on foot immediately.
Decoding is the ability to apply your knowledge of letter-sound relationships, including knowledge of letter patterns, to correctly pronounce written words. Understanding these relationships gives children the ability to recognize familiar words quickly and to figure out words they haven't seen before. Although children may sometimes figure out some of these relationships on their own, most children benefit from explicit instruction in this area. Phonics is one approach to reading instruction that teaches students the principles of letter-sound relationships, how to sound out words, and exceptions to the principles. What the problem looks like A kid's perspective: What this feels like to me - I just seem to get stuck when I try to read a lot of the words in this chapter. - Figuring out the words takes so much of my energy, I can't even think about what it means. - I don't know how to sound out these words. - I know my letters and sounds, but I just can't read words on a page. A parent's perspective: What I see at home - She often gets stuck on words when reading. I end up telling her many of the words. - His reading is very slow because he spends so much time figuring out words. - She's not able to understand much about what she's read because she's so busy trying to sound out the words. - It's as if he doesn't know how to put the information together to read words. - Saying "sound it out" to her just seems to make her more frustrated. - He guesses at words based on the first letter or two; it's as if he doesn't pay close attention to the print. A teacher's perspective: What I see in the classroom - She has difficulty matching sounds and letters, which can affect reading and spelling. - She decodes in a very labored manner. - He has trouble reading and spelling phonetically. - She has a high degree of difficulty with phonics patterns and activities. - He guesses at words based on the first letter or two. 
- Even though I taught several short vowel sounds (or other letter sounds or patterns), the corresponding letters are not showing up in his writing samples. - Even though I taught certain letter patterns, she isn't able to recognize them when reading words. How to help With the help of parents and teachers, kids can learn strategies to overcome word decoding and phonics problems that affect their reading. Below are some tips and specific things to do. What kids can do to help themselves - Play with magnetic letters. See how quickly you can put them in alphabetical order while singing the alphabet song. - Look at written materials around your house and at road signs to see if you can spot familiar words and letter patterns. - Write notes, e-mails, and letters to your friends and family. Represent each sound you hear as you write. - When you're trying to sound out a word, pay close attention to the print. Try to look at all the letters in the word, not just the first one or two. What parents can do to help at home - For a younger reader, help your child learn the letters and sounds of the alphabet. Occasionally point to letters and ask your child to name them. - Help your child make connections between what he or she might see on a sign or in the newspaper and the letter and sound work he or she is doing in school. - Encourage your child to write and spell notes, e-mails, and letters using what he knows about sounds and letters. - Talk with your child about the "irregular" words that she'll often see in what she's reading. These are the words that don't follow the usual letter-sound rules. These words include said, are, and was. Students must learn to recognize them "at sight." - Consider using computer software that focuses on developing phonics and emergent literacy skills. Some software programs are designed to support children in their writing efforts. 
For example, some programs encourage kids to construct sentences, and then cartoon characters act out the completed sentence. Other software programs provide practice with long and short vowel sounds and with creating compound words.

What teachers can do to help at school
- Have students sort pictures and objects by the sound you're teaching. At each stage, have children say the letter sound over and over again.
- Teach phonics in a systematic and explicit way. If your curriculum materials are not systematic and explicit, talk with your principal or reading specialist.
- Be sure to begin the systematic and explicit phonics instruction early; first grade would be best.
- Help students understand the purpose of phonics by engaging them in reading and writing activities that require them to apply the phonics information you've taught them.
- Use manipulatives to help teach letter-sound relationships. These can include counters, sound boxes, and magnetic letters.
- Provide more of your instruction to students whom you've divided into need-based groups.
A motor-car engine that runs on petrol has a number of cylinders fitted with pistons. When the engine is working, the pistons move up and down inside the cylinders, and each upward or downward movement of a piston is known as a stroke. The motor-car engine works on a four-stroke cycle (intake, compression, power and exhaust), in which the fuel-air mixture is ignited once per cycle to provide the energy the engine needs to work properly. The efficient working of the motor-car engine depends on a fuel-air mixture which (i) must ignite at the correct stage in each cycle and (ii) must be completely burnt. The fuel should possess the correct volatility and ignition temperature for it to be burnt at the right time in the cycle. If the fuel is too volatile and has too low an ignition point, it will be burnt prematurely. When this happens, a condition called knocking, or pinking, results. Knocking is a characteristic metallic sound caused by vibrating pistons. This knocking reduces the efficiency of the motor-car engine and shortens its life. On the other hand, a fuel which is not volatile enough and has too high an ignition temperature may not be burnt completely. Excessive black smoke and unburnt hydrocarbons are then expelled from the car via the exhaust, causing air pollution. Petrol is the only fuel that meets all the requirements necessary to power a motor-car engine; this is why motor-car engines are known as petrol engines. Nowadays, more efficient petrol engines have been designed; however, even these newly designed petrol engines have a tendency to knock if low-quality fuel is used.
Why Are Cheetahs Endangered? The cheetah is endangered due to a combination of genetic frailties and the adverse effects of a dwindling habitat. The species has also been decimated by farmers seeking to protect their herds. While speed is the cheetah’s greatest asset, it can also put the animal at risk. During a high-speed chase of its prey, the cheetah can reach speeds of up to 70 miles per hour. This leaves the animal exhausted and vulnerable to attack. The cheetah is a timid defender of its family and prey. Because they run when threatened rather than fight, cheetahs lose much of the food they kill to more aggressive species. Young cheetahs are at great risk when their solitary cheetah mother leaves them alone to hunt, and relatively few cubs survive to adulthood. Since they have no pack to rely on, even a fairly minor injury can have devastating consequences. Conservation efforts have not been able to completely eliminate the adverse effects of poaching on the animal. Even in captivity successful breeding has proven to be a challenge as cheetahs suffer from high infant mortality rates. The habitat of the cheetah continues to shrink as farmers use more of its territory for crop production.
The late John Martin demonstrated the paramount importance of iron for microscopic plant growth in large areas of the world's oceans. Iron, he hypothesized, was the nutrient that limited green life in seawater. Over twenty years later, Martin's iron hypothesis is widely considered to be the major contribution to oceanography in the second half of the 20th century. Originating as an ecosystem experiment to test Martin's iron hypothesis, iron fertilization experiments are now used as powerful tools to study the world's oceans. Some oceanographers are concerned that these experiments are catapulting ocean science into a new era. The vast stretches of ocean play a key role in the global carbon cycle, and thus in regulating Earth's climate. Some scientists, engineers and international policy makers claim that dissolving iron in the ocean will help stop global warming. Adding large amounts of iron to the oceans may drastically increase the amount of carbon dioxide that phytoplankton can capture from the atmosphere, thereby reducing the most common greenhouse gas. But intentional iron fertilization over great expanses of the ocean may have unintended consequences for the world's largest ecosystem. The open ocean is one of the planet's last frontiers and a part of the global commons. As such, using the open ocean as a means to solve the complex problem of global warming raises deep questions about how humans think of and use the Earth. The question remains: Should humans use the ocean as a means to regulate a changing climate?

Ocean Fertilization: Ecological Cure or Calamity?
Understanding Everyday Mathematics

This section provides explanations for many of the common questions parents have about the Everyday Mathematics curriculum. Here you can learn more about the rationale behind Everyday Mathematics' position on topics like basic math facts and calculator use, in addition to tips on how to assist your child. Helping children learn the basic facts is an important goal in the Everyday Mathematics curriculum. In this section, you can find out more about how the curriculum employs a variety of techniques to help children develop their "fact power", or basic number-fact reflexes. Everyday Mathematics recognizes that, even in the computer age, it is important to teach children how to compute "by hand". Here you can read more about how the curriculum provides all students with a variety of dependable and understandable methods of computation. Research has shown that teaching the standard U.S. algorithms for each of the four basic operations of arithmetic fails with large numbers of children, and that alternative algorithms are often easier for children to understand and learn. In this section, you can read more about how Everyday Mathematics introduces children to a variety of alternative procedures in addition to the customary algorithms. In the Everyday Mathematics program, emphasis is placed on using the calculator as a tool for learning mathematics. You can read more here about how the use of calculators is incorporated to provide practice with place value and problem-solving skills in the curriculum. Simply stated, the primary goal of Everyday Mathematics is to help more children learn more mathematics. This section explains how the curriculum expects higher levels of accomplishment at every grade level while also incorporating features that help make mathematics accessible to all students. We believe it is very important to help parents become actively involved in their child's mathematical education.
Here you can see some suggestions for how you can learn about the mathematics your child is studying in school, and how you can help reinforce their math learning at home. With a login provided by your child's teacher, access resources to help your child with homework or brush up on your math skills. McGraw-Hill Education offers many resources for parents, including tips, activities, and helpful links. EverydayMath.com features activity ideas, literature lists, and family resources for the EM curriculum. Learn more about the EM curriculum and how to assist your child.
課程介紹 (About the course)

This course will introduce computer programming in C. We will cover basic computer operations, then move on to how to write computer programs in a language called C. Various C concepts will be introduced.

授課形式 (Course format)

We will have video lectures to introduce the concepts of programming. The video will switch between the presentation slides and the actual coding process. After that, we will have weekly programming homework to ensure that the students are able to practice what they learned from the video presentation. The students will practice on ideone.com, a web platform for compiling and running computer programs.

修課背景要求 (Recommended background)

No special prior computer knowledge is required. However, the students are expected to be able to use a web browser, have a basic English vocabulary, and have the arithmetic skills of a junior high school graduate.
Overview of Blood Sugar (Glucose) Levels

This page states "normal" blood sugar ranges, blood sugar ranges for adults and children with type 1 and type 2 diabetes, and the ranges used to identify people with diabetes. If a person with diabetes has a meter and test strips and is testing, it's important to know what the blood glucose level means.

When you have diabetes, your body isn't able to move sugar from the blood into cells, or to make enough (or any) insulin. This causes high levels of blood sugar, or high glucose levels. The carbohydrates in food cause blood sugar levels to go up after meals. When you eat foods that contain carbohydrates, the digestion process turns them into sugars. These sugars are released into the blood and transported to the cells. The pancreas, a small organ in the abdomen, releases a hormone called insulin to meet the sugar at the cell. Insulin acts as a "bridge," allowing the sugar to go from the blood into the cell. When the cell uses the sugar for energy, blood sugar levels go down. If you have diabetes, there's either a problem with the pancreas producing insulin, or with the cells using insulin, or both.

The different types of diabetes and diabetes-related conditions include:
- Type 1 diabetes is when the body stops making insulin.
- Type 2 diabetes is usually a combination of the pancreas not making enough insulin and the cells not using insulin well, which is called insulin resistance.
- Pre-diabetes is usually when the cells do not use insulin well.
- Gestational diabetes is when you develop diabetes in your second or third trimester of pregnancy.

When to check blood glucose levels

Talk to your doctor or healthcare providers about the best times to check your blood glucose. Optimal times vary for each person.
Some options include:
- after fasting (after waking, or after not eating for eight to 12 hours), or before meals
- before and after meals, to see the impact that the meal had on your blood sugar
- before all meals, to decide how much insulin to inject
- at bedtime

How to check your blood glucose levels

You will need to take a blood sample to check your blood glucose levels. You can do this at home using a blood glucose monitor. The most common type of blood glucose monitor uses a lancet to prick the side tip of your finger to draw a small drop of blood. Then you place this drop of blood on a disposable testing strip. You insert the testing strip into an electronic blood glucose meter before or after the blood is applied. The meter measures the level of glucose in the sample and returns a number on a digital readout.

Another option is a continuous glucose monitor. A small wire is inserted beneath the skin of your abdomen. Every five minutes, the wire will measure blood glucose levels and deliver the results to a monitor device worn on your clothing or in a pocket. This allows you and your doctor to keep a real-time reading of your blood glucose levels.

Recommended blood sugar targets

Blood glucose numbers are measured in milligrams per deciliter (mg/dL). The American Diabetes Association (ADA) and the American Association of Clinical Endocrinologists (AACE) have different recommendations for blood glucose targets for most people with type 2 diabetes:

| Timing | ADA Recommendation | AACE Recommendation |
| --- | --- | --- |
| Fasting and before meals | 80-130 mg/dL for nonpregnant adults | less than 110 mg/dL |
| 2 hours after eating a meal | less than 180 mg/dL for nonpregnant adults | less than 140 mg/dL |

Talk to your doctor to learn more about your blood glucose targets. Your doctor can help you determine which guidelines to target, or they can work with you to set your own glucose targets.

What should I do if my glucose levels are too high?

You should establish a treatment plan with your doctor.
You may be able to manage your glucose levels through diet and other lifestyle changes, like weight loss. Exercise can also help lower your glucose levels. Medications may be added to your treatment if needed. Most people with type 2 diabetes will start on metformin as their first medication. There are many different types of diabetes medications that act in different ways.

Injecting insulin is one way to quickly reduce your glucose levels. Your doctor may prescribe insulin if you need help managing your glucose levels. Your doctor will determine your dosage and go over with you how to inject it, and when.

Let your doctor know if your glucose levels are consistently high. This could mean you need to take regular medication or make other changes to your diabetes treatment plan. Working with your doctor to get your glucose levels under control is important. Consistently high levels can lead to serious complications, like diabetic neuropathy or kidney failure.

Diabetes eating plan

The foods you eat can have a big impact on your glucose levels. Don't skip meals. Irregular eating patterns can cause spikes and dips in your blood glucose and make it difficult to stabilize. Include healthy carbohydrates, fiber-rich foods, and lean proteins in your diet. Healthy carbohydrates include:
- whole grains
- beans and other legumes

Manage the amount of healthy carbohydrates you eat at meals and snacks. Add protein and fat to slow digestion and avoid blood sugar spikes. Limit foods high in saturated and trans fats, cholesterol, and sodium. Instead, eat healthy fats, which are important to a balanced diet. They include:
- olive oil

Limit your consumption of processed foods. They often digest quickly and spike blood sugar levels. These foods can be high in:
- trans fats

Cook healthy foods in bulk and then store them in single-serving-size containers in the refrigerator or freezer.
Having easy-to-grab, healthy choices can help you avoid choosing less healthy options when you’re in a hurry or really hungry. In addition to eating healthy foods, remember to include regular exercise in your daily routine. If you’re new to exercise, check with your doctor before starting. Then start slowly and work your way up to more vigorous routines. You can also add more exercise through small changes, including: - taking stairs instead of an elevator - walking around the block or your office during breaks - parking further from store entrances when shopping Over time, these small changes can add up to big wins for your health. Monitoring your blood glucose levels is an important step in managing your diabetes. Knowing your numbers will also help inform your doctor about changes you may need to make to your treatment plan. Following a healthy and balanced diet, exercising, and taking medicines as prescribed should help you to maintain normal glucose levels. Talk to your doctor if you need help coming up with a diet or exercise plan, or if you are unclear about how to take medications.
If we humans want to slow down global warming due to carbon emissions, clean energy is the way. But, as with all things, there are cons to go along with those pros. New research reports that installing large-scale wind farms across the country could raise the temperature of the continental United States. The study, published in the journal Joule, is based on mathematical modeling done by experts at Harvard University. First, the team created a climate baseline; they used a standard weather forecasting model for 2012-2014. Then, they tweaked the model to see what would happen if wind power became a key player in helping us cut carbon emissions. In the model, that meant about one third of the continental U.S. was covered with turbines. This degree of coverage, according to the researchers, would lead to a 0.24 degree Celsius increase in temperature. That’s because these turbines alter the flow of the atmosphere, redistributing heat and moisture in the air, which can alter climate. Plus, it could take at least a century for the benefits we’d reap from wind energy to offset this uptick. Even though it’s a drop in the bucket compared to the levels of warming fossil fuels will cause, it’s still something to consider, especially compared to other options for clean energy. “This work should not be seen as a fundamental critique of wind power,” says David Keith, one of the paper’s authors, in a press release. “Rather, the work should be seen as a first step in getting more serious about assessing these impacts.”
"The world we have created is a product of our thinking; it cannot be changed without changing our thinking." --Albert Einstein
Thinking is the highest mental activity demonstrated by human beings. All human accomplishments and advancement come from the results of thought. Civilization, knowledge, science and technology arise from the thinking process. Thought and activity are inseparable. A person normally perceives an action in the mind before undertaking the activity.
The Brain Building Blocks
The brain's primary building blocks are the cells known as neurons. Chemical processes in the brain send messages through the neurons, and these determine mental processes, including thinking. Cells called glia exist between the neurons in the brain. Mark Treadwell, an educator from New Zealand who runs the I-learnt website, indicates that the glia interact chemically with the neurons and hormones in the production of thought. The motor neurons produce the action in our muscles, and the sensory neurons connect to our five senses.
The Five Senses
The five senses of the body are sight, taste, smell, touch and hearing. The senses bring information back to the central processes in the brain. Emotions exert an effect on human thinking by producing responses such as crying, laughing and sadness that modify the sensory information. Thinking brings information together to link the various parts into something comprehensible. Cognition refers to the thought process. The American College of Radiology and the Radiological Society of North America describe functional MRI as a diagnostic procedure that can determine precisely the location of thought processes in the brain. A positron emission tomography scan can also document images of the brain during a range of thought processes. The future holds promise for new insights into the thinking process using these technologies. Reasoning means taking facts and evidence perceived by the senses and combining them with thinking to draw conclusions.
Changing Minds Organization lists 20 types of reasoning. The most common types include inductive and deductive reasoning. Inductive reasoning refers to the process of starting from specifics and expanding the concepts to cover a range of observations. Deductive reasoning means starting from a general rule and moving to a specific item.
The Learning Process
Learning exists to help the individual think. Human beings learn through a trial-and-error process, along with incorporating experiences, abstract thought and deduction. According to Science Daily, intelligence arises from the number of connections learned. The brain integrates incoming data with the information already stored in the brain.
Researchers from Oxford University have developed a computer program called LipNet that can read lips. Its recognition accuracy is 93.4%, a figure beyond even professional human lip-readers. According to the developers, the program will help people with hearing problems. In addition, LipNet would make it possible to communicate even in very noisy places. The program could also be used for more nefarious purposes: for example, it could help reveal what people caught on CCTV cameras are saying. To teach LipNet to identify words, the experts first fed it more than 30 thousand videos of people pronouncing different words. A distinctive feature of the program is that it processes whole phrases rather than individual words, which makes more accurate recognition possible. For now, the program cannot recognize arbitrary speech in the real world; it currently works only with phrases that are built in a specific way. A sentence that LipNet is able to recognize must have the following structure: command, preposition, letter, digit, and adverb, for example, “put blue in a scale of 1 quickly.” In addition, the program so far understands only the 34 people who participated in the experiment. For LipNet to be able to understand people with different accents, it would need to be trained on a much larger number of videos.
Protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss Humans and other animals rely on other forms of life on land for food, clean air, clean water, and as a means of combatting climate change. Plant life makes up 80% of the human diet. Forests, which cover 30% of the Earth’s surface, help keep the air and water clean and the Earth’s climate in balance. That’s not to mention they’re home to millions of animal species. But the land and life on it are in trouble. Arable land is disappearing 30 to 35 times faster than it has historically. Deserts are spreading. Animal breeds are going extinct. We can turn these trends around. Fortunately, the Sustainable Development Goals aim to conserve and restore the use of terrestrial ecosystems such as forests, wetlands, drylands and mountains by 2030.
By studying an African population underrepresented in most datasets, researchers find genetic complexity of pigmentation varies by latitude Skin pigmentation is far more genetically complex than previously thought Many studies have suggested that the genetics of skin pigmentation are simple. A small number of known genes, it is thought, account for nearly 50 percent of pigment variation. However, these studies rely on datasets that heavily favor northern Eurasian populations—those that reside mostly in higher latitude regions. Reporting in the November 30 issue of Cell, researchers from the Broad Institute of MIT and Harvard, Stanford University, and Stony Brook University report that while skin pigmentation is nearly 100 percent heritable, it is hardly a straightforward, Mendelian trait. By working closely with the KhoeSan, a group of populations indigenous to southern Africa, the researchers have found that the genetics of skin pigmentation become progressively complex as populations reside closer to the equator, with an increasing number of genes—known and unknown—involved, each making a smaller overall contribution. “Africa has the greatest amount of phenotypic variability in skin color, and yet it’s been underrepresented in large scale endeavors,” said Alicia Martin, a postdoctoral scientist in the lab of Broad Institute member Mark Daly. “There are some genes that are known to contribute to skin pigmentation, but by and large there are many more new genes that have not been discovered.” “We need to spend more time focusing on these understudied populations in order to gain deeper genetic insights,” said Brenna Henn, assistant professor in the Department of Ecology and Evolution at Stony Brook University who, along with Martin, is a co-corresponding author. 
The paper is a culmination of seven years of research that spanned several institutions, starting with a collaboration between Stellenbosch University in South Africa and Stanford University in Carlos Bustamante’s lab, where Martin and Henn trained. Martin, Henn, and their colleagues spent a great deal of time with the KhoeSan, interviewing individuals, and taking anthropometric measurements (height, age, gender), and using a reflectometer to quantitatively measure skin color. In total, they accumulated data for approximately 400 individuals. The researchers genotyped each sample — looking at hundreds of thousands of sites across the genome to identify genetic markers linked with pigmentation measures — and sequenced particular areas of interest. They took this information and compared it to a dataset that comprised nearly 5,000 individuals representing globally diverse populations throughout Africa, Asia, and Europe. What they found offers a counter-narrative to the common view on pigmentation. The prevailing theory is that "directional selection” pushes pigmentation in a single direction, from dark to light in high latitudes and from light to dark in lower latitudes. But Martin and Henn’s data showed that the trajectory is more complex. Directional selection, as a guiding principle, seems to hold in far northern latitudes. But as populations move closer to the equator, a dynamic called “stabilizing selection” takes effect. Here, an increasing number of genes begins to influence variability. Only about 10 percent of this variation can be attributed to genes known to affect pigmentation. In addition, the researchers found some unexpected insights into particular genes associated with pigmentation. A derived mutation in one gene, SLC24A5, is thought to have arisen in Europe roughly 10,000 to 20,000 years ago. 
However, in the KhoeSan populations it appears at a much higher frequency than recent European admixture alone would suggest, indicating that it has either been positively selected in this population, actually arose in this population, or entered the population through gene flow thousands of years ago. “We’re still teasing this apart,” said Martin. They also found that a gene called SMARCA2/VLDLR, which has not previously been associated with pigmentation in humans, seems to play a role among the KhoeSan. Several different variants near these genes are each uniquely associated with pigmentation, and variants in these genes have been associated with pigmentation in animals. “Southern African KhoeSan ancestry appears to neither lighten nor darken skin,” said Martin. “Rather, it just increases variation. In fact, the KhoeSan are approximately fifty percent lighter than equatorial Africans. Ultimately, in northern latitudes pigmentation is more homogenous, while in lower latitudes, it’s more diverse—both genetically and phenotypically.” “The full picture of the genetic architecture of skin pigmentation will not be complete unless we can represent diverse populations worldwide,” said Henn. Martin is a member of both the Program in Medical and Population Genetics and the Stanley Center for Psychiatric Research at the Broad Institute. This work is part of the Stanley Center’s global initiative to ensure that datasets increasingly represent individuals from developing countries. This research was funded by the Stanford Center for Computational, Evolutionary, and Human Genomics. Martin A, et al. An Unexpectedly Complex Architecture for Skin Pigmentation in Africans. Cell. Online November 30, 2017. DOI: 10.1016
What do the most selective colleges believe incoming students should know and be able to do so as to make the most of their post- secondary learning opportunities? A Harvard University booklet, “Choosing Courses to Prepare for College,” draws largely from a study of how Harvard students’ high-school preparation affected performance in the college’s Core program of study. For Essential Schools, several recommendations stand out particularly. - Though the booklet lists areas of study primarily by discipline (English literature, foreign language, history, mathematics, and science), it emphasizes “important knowledge, skills, or habits of thought, rather than naming specific courses.” Research and writing stands as a separate category equal in importance to any subject area. The principal message: Depth matters more than coverage, since high school programs vary. Use your school’s strongest teachers and resources to take the most demanding courses you can find. - In English literature, Harvard emphasizes critical and analytical reading. Make reading as deep as possible in a particular area, the college says, rather than superficially covering unrelated readings. “Besides reading novels for what they can tell you about life in [other] times and places,” the booklet reads, “you will notice how authors treat different problems or how they treat the same problems in different ways.” (Essential questions, anyone?) - In foreign languages, Harvard advises studying one foreign language and its literature in depth rather than a smattering of several languages. - In history Harvard urges students to take much more than their required American history course, arguing that the rigorous study of history provides a more basic preparation for college work than other social sciences courses. 
By taking additional courses that focus on time periods or other areas, the college says, students learn to understand “the assumptions underlying our political, social, and economic institutions.” Studying ideas and institutions in a historical context, it writes, teaches students “to think about these matters analytically; to understand not only what happened but how and why.” - Four years of high school mathematics alone won’t do the trick, Harvard says; students need to “acquire the habit of puzzling over mathematical relationships.” Don’t just memorize formulas and definitions but question and understand them, Harvard urges, asking students to solve hard problems containing applications. “The ability to wrestle with difficult problems is far more important than the knowledge of many formulae or relationships,” the booklet says. “It is not what courses you have taken, but how much you have thought about mathematics, that counts. More important than the knowledge of a specific mathematical topic is the willingness to tackle new problems.” - “The study of science begins with the habit of asking questions,” Harvard advises, asking students to study the basic sciences of chemistry, physics, and biology for four years if possible. The booklet urges practice in the scientific processes of performing experiments, making measurements, and developing theories to explain and predict phenomena. - Research and writing about texts is key to preparing for college, Harvard says: “If you read with curiosity and purpose, you will be able to take notes more easily, to weigh one author’s view against another, to categorize your research under leading questions, and to form your own observations and opinions.” Write regularly in coursework and journals “to find out what you think,” it says, asking students also to reflect critically on their own writing. Following its advice will not ensure admission, the booklet admonishes. 
Harvard selects students not only by their academic preparation, it says, but by many other criteria. “Most of all we look for students who make the most of their opportunities and the resources available to them, and who are likely to continue to do so throughout their lives,” the authors conclude.
A Crefeld School Transcript
This is the second page of a student’s transcript from the Crefeld School, a small Coalition member school in Philadelphia serving grades 7 through 12. The first page contains basic student data, then lists course topics (under such headings as “Social Studies Topics,” “Science Topics,” and “Electives”), year of completion, and grade.
INQUIRY AND EXPRESSION
- Analytical Reading Skills
- Mechanics of Writing
- Essay Writing Skills
- Classroom Discussion Skills
- Research Skills
- Experimental Procedures
- Analysis of Arts
- Inquiry and Questioning
- Critical Thinking
- Prompt Homework Completion
- Mathematics Concepts
- Applying Math Concepts (Tests and Homework)
- Science Content (Tests and Homework)
- Application of Science Concepts
- Social Studies Content (Tests and Homework)
- Application of Social Studies Concepts
- English Content (Tests and Homework)
- Literary Analysis
- Community Service Project
- Instructional Activity
- Study/Proposal Topic
- Cultural Appreciation Topic
- Creative Exhibition
- Post-Graduation Plan
EXPLANATION OF SYMBOLS ON TRANSCRIPT
- AP = Advanced Placement test taken or planned
- NA = Not Applicable
- INC = Incomplete
- A = Excellent
- B = Good
- C = Average
- D = Below Average
- F = Failure
The Crefeld School does NOT use G.P.A.’s, class ranks, percentiles, weighted courses, honors course designations, or credits accumulated. Students are graded on the basis of their own personal growth, effort, and ability and not in comparison to other students. Inquiry & Expression grades are determined by the faculty in conference.
CURRICULUM AND GRADUATION REQUIREMENTS
All students are required to follow a core curriculum.
The Core Curriculum includes Humanities (social studies, literature, language arts, and fine arts) and Math/Science (mathematics, chemistry, physics, and biology). In addition, all students are required to participate in physical education, community service, and two electives each semester. Only courses designated “elective” permit student choice. Humanities and Math/Science each meet 7 hours/week. Electives and Physical Ed. each meet 2.5 hours/week. All students are required to perform 2 hours of community service/week. Students graduate when they have completed a 3 semester residency, satisfied all curriculum requirements, passed all 5 senior exams with a grade of 90% and completed a portfolio of 6 exhibitions of mastery.
Increased carbon dioxide in the atmosphere drives global warming. That is what scientists have been saying about the ongoing warming in the 21st century (with the decade of 2001 to 2010 the warmest since 1850, for example, and with more than 15,000 temperature records for warmth broken in March, 2012, for example). It is also what scientists now say happened to bring the last Ice Age to an end. These scientists used a supercomputer and a global dataset of paleoclimate records to analyze 15,000 years of climate history. Their results are published in the April 5, 2012 issue of the peer-reviewed journal Nature. Why is this study important? It deepens scientists’ understanding of climate history, for one thing. More critically, the findings contrast with earlier studies, which skeptics of human-triggered global warming said showed that carbon dioxide wasn’t important in bringing the last Ice Age to an end. According to an April 5 article in the Christian Science Monitor: The result stands in contrast to previous studies that showed temperatures rising ahead of increases in atmospheric CO2 levels. This has led some skeptics of human-triggered global warming to argue that if warming temperatures came first, CO2 [carbon dioxide] wasn’t an important factor then and so can’t be as significant a factor today as most climate scientists calculate it to be. This multi-institutional study was led by climate researchers at Harvard, Oregon State University and the University of Wisconsin. They used the Jaguar supercomputer at Oak Ridge National Laboratory to answer the question: Which came first, greenhouse gases or global warming? The answer provided by their study is greenhouse gases. Jeremy Shakun is a National Oceanic and Atmospheric Administration (NOAA) Climate and Global Change postdoctoral fellow at Harvard and Columbia Universities and first author of the paper. 
He said in a press release: We constructed the first-ever record of global temperature spanning the end of the last ice age based on 80 proxy temperature records from around the world. It’s no small task to get at global mean temperature. Even for studies of the present day you need lots of locations, quality-controlled data, careful statistics. The proxy temperature records he speaks of are from ice cores and ocean and lake sediments, collected by scientists in locations around the world. Carbon-14 dating helps show what temperatures were occurring at what times in the past. Shakun said: We found that global temperature mirrored and generally lagged behind rising carbon dioxide during the last deglaciation, which points to carbon dioxide as the major driver of global warming. After examining the evidence in the climate record, the researchers turned to a supercomputer, running simulations that used 4.7 million processor hours in 2009, 6.6 million in 2010, and 2.5 million in 2011 – coupled with a climate model called the Community Climate System Model version 3. In other words, they used the climate model to look at multiple possible interactions between Earth’s atmosphere, oceans, lands, and sea ice, seeking the right combination of inputs that would match the temperature record as observed in lake sediments and ice cores. As a result of these simulations, these scientists are now convinced that increased carbon dioxide in the air drove the global warming that ended the last ice age. This result is in contrast to results from an earlier study, based on Antarctic ice cores, which had indicated that local temperatures in Antarctica started warming before carbon dioxide began rising. That earlier result implied that carbon dioxide was merely a feedback, or result of warming, not a main driver of warming. This study found the opposite – that carbon dioxide was the primary driver of worldwide warming.
Where did the excess carbon dioxide come from, and why is this result the opposite of the earlier result from Antarctica? According to the press release: Geologic data show that about 19,000 years ago, Northern Hemisphere glaciers began to melt, and sea levels rose. Melting glaciers dumped so much freshwater into the ocean that it slowed a system of currents that transports heat throughout the world. Called the Atlantic meridional overturning circulation (AMOC), this ocean conveyor belt is particularly important in the Atlantic where it flows northward across the equator, stealing Southern Hemisphere heat and exporting it to the Northern Hemisphere. The AMOC then sinks in the North Atlantic and returns southward in the deep ocean. A large pulse of glacial meltwater, however, can place a freshwater lid over the North Atlantic and halt this sinking, backing up the entire conveyor belt. The simulation showed weakening of the AMOC due to the increase in glacial melt beginning about 19,000 years ago, which decreased ocean heat transport, keeping heat in the Southern Hemisphere and cooling the Northern Hemisphere. Other studies suggest this southern warming caused sea ice to retreat and shifted winds around the Southern Ocean, uncorking carbon dioxide that had previously been stored in the deep ocean and venting it to the atmosphere around 17,500 years ago. This rise in carbon dioxide then initiated worldwide warming. Bottom line: Increased carbon dioxide in the atmosphere drove global warming to bring the last Ice Age to an end, say scientists who used a supercomputer and a global dataset of paleoclimate records to analyze 15,000 years of climate history. Their results are published in the April 5, 2012 issue of the peer-reviewed journal Nature. Deborah Byrd created the EarthSky radio series in 1991 and founded EarthSky.org in 1994. Today, she serves as Editor-in-Chief of this website.
She has won a galaxy of awards from the broadcasting and science communities, including having an asteroid named 3505 Byrd in her honor. A science communicator and educator since 1976, Byrd believes in science as a force for good in the world and a vital tool for the 21st century. "Being an EarthSky editor is like hosting a big global party for cool nature-lovers," she says.
In ancient India, Hinduism was the predominant religion. Hinduism is considered the oldest major religion, and it originated in northern India. The early Vedic culture formed the basis of early Hinduism; its interaction with non-Aryan cultures led to the development of what is referred to as classical Hinduism. You should know that much of classical, ancient and modern Indian culture was greatly shaped by Hindu ideas. In the modern world, there is still a nation that uses Hinduism as the religion of the state.
Buddhism in India
Buddhism was known as Buddha dharma in ancient India. Like Hinduism, it originated in the north of India, in present-day Bihar. In the 9th century, the followers of this religion numbered in the millions. What led to the decline of this religion in India is disputed; it is believed that the interaction between Hindu and Buddhist societies led to the formation of movements that competed with Buddhism. In recent years, however, there have been attempts to revive the religion, and the progress has borne fruit. People, including prominent leaders, have converted to this religion.
Christianity in India
Research has shown that Christianity was first brought to India by the apostle Thomas. He did much to convert people in southern India, and they have continued to practice the religion to this day. The religion was later consolidated after the arrival of a Jewish Christian named Knanaya. The people who received this religion hold strong beliefs, and their Christianity is among the most ancient, also referred to as the Eastern Orthodox Church. They look to the Bible for the purpose of life. During the colonization period, Roman Catholicism also reached India, around 1498.
Islam in India
The religion arrived in India around the 8th century.
Despite the fact that Indian culture was already rich, the religion contributed to enhancing it to an even greater level. It shaped northern Indian classical music and encouraged the development of Urdu literature, a melding of the Arabic, Persian and Hindi languages. In 2001 there were around 130 million Muslims in India. Most of these people were converted during the Mughal period. They currently live in the west and the north of the country. They are among the religious minorities living among the predominantly Hindu population. Judaism was, however, the first of these religions to reach India, and local people were assimilated into it by cultural diffusion. It is not difficult to estimate the population of this religion, since its followers have distinct origins. Some came from the kingdom of Judah, and some are from the ten lost tribes of Israel. About half the total population of Jews in India live in Mizoram, and some live in Mumbai. They have, however, suffered terrorist attacks, a result of enmity between Islam on one side and the Jews and Hindus on the other.
Why do people in poverty tend to have poorer health? This study looks at hundreds of theories to consider how income influences health. There is a graded association between money and health – increased income equates to better health. But the reasons are debated. Researchers have reviewed theories from 272 wide-ranging papers, most of which examined the complex interactions between people’s income and their health throughout their lives. This research identifies four main ways money affects people’s wellbeing: - Material: Money buys goods and services that improve health. The more money families have, the better the goods they can buy. - Psychosocial: Managing on a low income is stressful. Comparing oneself to others and feeling at the bottom of the social ladder can be distressing, which can lead to biochemical changes in the body, eventually causing ill health. - Behavioural: For various reasons, people on low incomes are more likely to adopt unhealthy behaviours – smoking and drinking, for example – while those on higher incomes are more able to afford healthier lifestyles. - Reverse causation (poor health leads to low income): Health may affect income by preventing people from taking paid employment. Childhood health may also affect educational outcomes, limiting job opportunities and potential earnings. The research is part of our programme of work on poverty in the UK.
When the graph is planar, this comes down to the problem of coloring a map. The problem of coloring a graph arises in many practical areas such as pattern matching, sports scheduling, designing seating plans, exam timetabling, the scheduling of taxis, and solving Sudoku puzzles.
Example: A given set of jobs needs to be assigned to time slots, with each job requiring one such slot. Jobs can be scheduled in any order, but pairs of jobs may be in conflict in the sense that they may not be assigned to the same time slot, for example because they both rely on a shared resource. The corresponding graph contains a vertex for every job and an edge for every conflicting pair of jobs. The chromatic number of the graph is exactly the minimum makespan, the optimal time to finish all jobs without conflicts.
Clique – Complete graph
(The original page shows a graph containing a clique of order 5.)
Relationship between the three problems and heuristics
By this definition, we deduce that an independent set of maximum order is also maximal by inclusion; the converse is not true. From this we can deduce a simple heuristic (given in French in the original). The Brélaz algorithm is a greedy algorithm that gives an upper bound on the chromatic number of a graph. Its principle is simple, and it proceeds iteratively. The notion of independent set can also provide guidance on the coloring of a graph. Indeed, we can assume that within an independent set, all the vertices have the same color. A coloring with k colors thus amounts to finding a partition of the set of vertices into k independent sets. The Welsh & Powell algorithm is based on this proposition.
To see the connection between Sudoku and graph coloring, we will first describe the Sudoku graph, which for convenience we will refer to as S. The graph S has 81 vertices, with each vertex representing a cell.
When two cells cannot have the same number (either because they are in the same row, in the same column, or in the same box) we put an edge connecting the corresponding vertices of the Sudoku graph S. For example, since cells a3 and a7 are in the same row, there is an edge joining their corresponding vertices; there is also an edge connecting a1 and b3 (they are in the same box), and so on. When everything is said and done, each vertex of the Sudoku graph has degree 20, and the graph has a total of 810 edges. S is too large to draw, but we can get a sense of the structure of S by looking at a partial drawing. The drawing shows all 81 vertices of S, but only two (a1 and e5) have their full set of incident edges showing. The second step in converting a Sudoku puzzle into a graph coloring problem is to assign colors to the numbers 1 through 9. This assignment is arbitrary, and is not a priority ordering of the colors as in the greedy algorithm, it’s just a simple correspondence between numbers and colors. Once we have the Sudoku graph and an assignment of colors to the numbers 1 through 9, any Sudoku puzzle can be described by a Sudoku graph where some of the vertices are already colored (the ones corresponding to the givens). To solve the Sudoku puzzle all we have to do is color the rest of the vertices using the nine colors.
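As a quick illustration of the construction just described, the Sudoku graph S and the "solve by 9-coloring" idea fit in a few lines of Python. This sketch is mine, not the article's: the demo puzzle is an invented one, built by blanking a few cells of a grid generated from a standard valid pattern, and the solver is a plain backtracking colorer rather than any particular published algorithm.

```python
from itertools import combinations

def sudoku_graph():
    """Build the Sudoku graph S: 81 vertices (one per cell), with an edge
    between any two cells that share a row, a column, or a 3x3 box."""
    cells = [(r, c) for r in range(9) for c in range(9)]
    adj = {v: set() for v in cells}
    for u, v in combinations(cells, 2):
        same_box = (u[0] // 3, u[1] // 3) == (v[0] // 3, v[1] // 3)
        if u[0] == v[0] or u[1] == v[1] or same_box:
            adj[u].add(v)
            adj[v].add(u)
    return adj

adj = sudoku_graph()
# The counts stated above check out: degree 20 everywhere, 810 edges total.
assert all(len(neighbors) == 20 for neighbors in adj.values())
assert sum(len(neighbors) for neighbors in adj.values()) // 2 == 810

def solve(adj, givens):
    """Backtracking 9-coloring: extend the partial coloring `givens` to
    every vertex so that no edge joins two vertices of the same color."""
    assigned = dict(givens)
    blanks = [v for v in adj if v not in assigned]

    def backtrack(i):
        if i == len(blanks):
            return True
        v = blanks[i]
        used = {assigned[u] for u in adj[v] if u in assigned}
        for color in range(1, 10):  # the nine "colors" are the digits 1-9
            if color not in used:
                assigned[v] = color
                if backtrack(i + 1):
                    return True
                del assigned[v]
        return False

    return assigned if backtrack(0) else None

# A completed grid built from a standard valid pattern, with five cells
# blanked out to play the role of a puzzle's givens.
full = {(r, c): (3 * (r % 3) + r // 3 + c) % 9 + 1
        for r in range(9) for c in range(9)}
puzzle = {v: n for v, n in full.items()
          if v not in {(0, 0), (4, 4), (8, 8), (2, 7), (6, 1)}}

solution = solve(adj, puzzle)
assert solution is not None
# A valid solution: no two adjacent vertices share a color.
assert all(solution[u] != solution[v] for u in adj for v in adj[u])
```

Backtracking is the simplest exact approach here; greedy heuristics such as the Welsh & Powell ordering mentioned above are faster but only guarantee an upper bound on the number of colors, which is not enough for Sudoku, where exactly nine colors must suffice.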
Quick summary: Students compare the ancient findings of Aboriginal life at Lake Mungo and findings from the Roman period with our own lives. The focus is on the volume of waste each period generated. This lesson has been developed as part of the Schools Recycle Right Challenge for Planet Ark’s National Recycling Week. Register your lesson or other activities so they can be counted towards the national achievement and to receive other free support materials. Learning goals for this activity include: The remains from past periods provide clues about how people lived. Archaeological sites are extremely valuable and must always be managed and protected. Australian Curriculum content descriptions: - Year 7 History: The importance of conserving the remains of the ancient past, including the heritage of Aboriginal and Torres Strait Islander Peoples. (ACDSEH148) Additional Cross-curriculum priorities: Aboriginal and Torres Strait Islander Histories and Cultures OI.5. Year level: 7 Time needed: 60 min Level of teacher scaffolding: Medium – Discuss any issues that students bring up. Challenge preconceived opinions about Indigenous peoples. Resources needed: Computers or tablets, Internet, paper for taking notes Digital technology opportunities: Lake Mungo: Detailed information and curriculum is available at http://www.mungoexplorer.com.au/ Assumed prior learning: Aboriginal peoples have occupied Australia for at least 50,000 years and have a very rich culture. References to ‘the Romans’ refer to a period in Europe’s history around 2000 years ago. Key words: Archaeologist, archaeology, waste, rubbish, restoration, conservation. Cool Australia’s curriculum team continually reviews and refines our resources to be in line with changes to the Australian Curriculum. These Planet Ark resources were developed by Cool Australia with funding from the Alcoa Foundation.
“Because the governments of the world have yet to undertake any meaningful efforts to mitigate climate change, it is of the utmost importance that locally caused stressors to reefs such as overfishing and deforestation are minimized” – Cramer, study lead author. “Because researchers did not really begin to study Caribbean reefs in detail until the late 1970s, we don’t have a clear understanding of why these reefs have changed so dramatically since this time,” said Cramer. “So, we set out to reconstruct an older timeline of change on reefs by looking at the remains of past reefs – coral skeletons and mollusk shells.” To reconstruct this timeline, the team dug below modern reefs in incremental layers and, using radiocarbon dating of the coral skeletons they found, linked fluctuations in the types and numbers of coral and mollusks over time to historical records of land clearing. Changes in the relative numbers of these various species represent clear indicators of the overall health of the coral reef. The team also improved upon the standard technique of taking long, narrow core samples of coral fossils, which cannot track fluctuations in the numbers of larger species of coral. “We wanted to look at the whole complement of the coral community,” said Cramer. To catalog the relative numbers of dozens of coral and molluscan species, the researchers dug two-foot-wide by three-foot-deep pits into reefs at several coastal lagoon and offshore sites near Bocas del Toro, Panama, that were heavily affected and less affected by land runoff, respectively. At each of these sites they also conducted surveys and recorded the composition of living corals.
“We dug up over a ton of coral rubble and tens of thousands of shells,” said Cramer, who led the fieldwork at STRI and likened the laborious experience to doing underwater construction. Systematically sifting through the coral and shell fossils, the scientists noted several indicators of environmental stress, including a decrease in the overall size of bivalves such as oysters, clams, and scallops, a transition from branching to non-branching species of coral, and large declines in the staghorn coral and the tree oyster, which were once the dominant coral and bivalve on these reefs. These indicators were observed in layers of the excavated pits at coastal lagoon sites that were dated before 1960 and as far back as the 1800s, corresponding to a period of extensive deforestation in the Bocas del Toro region. Similar evidence of environmental stress at offshore sites was dated after 1960, indicating that the negative impacts of land clearing have more recently begun to affect reefs further offshore. With the decline of the branching coral species, the reefs now have fewer nooks and crannies that are used as habitat for reef fish and other organisms. Also, the non-branching species that have taken their place grow at a much slower rate. “Consequently, there is less of a chance that the reefs will be able to keep up with sea level rise from climate change,” said Cramer. “Because the governments of the world have yet to undertake any meaningful efforts to mitigate climate change, it is of the utmost importance that locally caused stressors to reefs such as overfishing and deforestation are minimized,” said Cramer.
“Advocating for more intelligent use of land as well as implementing sustainable fisheries management, those are things that can be done right now.” The research team, which also includes Jill Leonard-Pingel of Scripps, Thomas Guilderson of the Lawrence Livermore National Laboratory and the Institute of Marine Sciences at the University of California at Santa Cruz, and Christopher Angioletti, will publish its findings in the April issue of Ecology Letters. An early online version has been released today. This research was funded by the National Science Foundation, the Smithsonian Institution, the Center for Marine Biodiversity and Conservation at Scripps, the UC San Diego Academic Senate, and the Project AWARE Foundation. Photo courtesy of Lauretta Burke, World Resources Institute, 2007
By Connecting for Health | 2013 What is Linked Data? In a nutshell, Linked Data is a set of concepts, principles and standards aimed at making it easy for people, and more importantly applications, to: - Discover relevant data on the Web - Access and use the data - Integrate data from new, previously unknown sources The concepts of Linked Data are based on those of the existing World Wide Web, but applied to data rather than web pages. The relevance of Linked Data was explained by Tim Berners-Lee, who set out four simple Linked Data rules or principles: - Use URIs as names for things. - Use HTTP URIs so that people can look up those names. - When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL). - Include links to other URIs, so that they can discover more things. A URI is a Uniform Resource Identifier. It is used to uniquely name a resource, where resources can be real-world objects like people, organisations, places and things, as well as data like HTML pages or JPEG files, and also abstract concepts. The more familiar URL (Uniform Resource Locator) or web address we use in our web browsers is a type of URI. What does a URI look like? Pretty much the same as a familiar URL.
Below is a URI for Leeds Teaching Hospitals NHS Trust: https://data.developer-test.nhs.uk/ods/org/RR8 This URI has been created (often termed minted) by the Health Developer Network (often termed a publisher) using the following namespace rules: - https://data.developer-test.nhs.uk is the registered DNS name for the Health Developer Network data services - /ods is used to indicate this thing has been derived from the NHS Organisation Data Service (ODS) - /org is used to indicate this thing is an organisation - /RR8 is the three-character code that ODS uses to identify Leeds Teaching Hospitals NHS Trust The last three local namespace rules are internal to the publisher and, as long as these local namespace rules produce a unique URI when new URIs are minted, they can be anything that makes sense to the publisher. To the rest of the world the URI is just seen as an opaque identifier. Note a URI is unique in that the same URI should not be used to name something different; for example, the publisher should not also use https://data.developer-test.nhs.uk/ods/org/RR8 to identify Harrogate and District NHS Foundation Trust. However, there may be other URIs minted by other publishers that also identify Leeds Teaching Hospitals NHS Trust. Given a URI, you may want to look up the name to find out something about the thing this name represents. If you are a person, then pointing your web browser at the URI and treating it as a URL usually gives you some human-readable descriptive information. Try https://data.developer-test.nhs.uk/ods/org/RR8 to see what the Health Developer Network data services tells you about the URI. If you are an application such as a Linked Data client, then issuing an HTTP GET to the URI usually returns some descriptive information in the form of an RDF document. RDF is the Resource Description Framework data model, which represents data in the form of a directed graph.
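As a sketch of how those namespace rules compose, the helper below mints an organisation URI from an ODS code. The function name is hypothetical, invented for illustration; it is not part of any NHS API:

```python
# Base DNS name for the Health Developer Network data services.
BASE = "https://data.developer-test.nhs.uk"

def mint_ods_org_uri(ods_code: str) -> str:
    """Compose an organisation URI following the publisher's local
    namespace rules: base / dataset (/ods) / type (/org) / ODS code."""
    return f"{BASE}/ods/org/{ods_code}"

# RR8 is the ODS code for Leeds Teaching Hospitals NHS Trust.
print(mint_ods_org_uri("RR8"))
# https://data.developer-test.nhs.uk/ods/org/RR8
```

The point of the sketch is that minting is purely mechanical once the local rules are fixed; consumers should still treat the result as an opaque identifier rather than parsing meaning back out of it.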
Figure 1 shows a fragment of RDF data for Leeds Teaching Hospitals NHS Trust as published by the Health Developer Network in RDF/XML format. Without understanding RDF you can see that this fragment of RDF is about the URI https://data.developer-test.nhs.uk/ods/org/RR8, and that it is providing some useful information about it such as its formal name (LEEDS TEACHING HOSPITALS NHS TRUST), full address and the date on which it was opened (note 1/4/1998 is the date on which the Leeds Teaching Hospitals NHS Trust organisation was formed, not when the hospital was first opened in Leeds, which was back in the 18th century). It also contains links to other URIs; for example, the Government Office Region (gor) the trust is in is identified by the URI https://data.developer-test.nhs.uk/ods/org/D. This URI has also been minted by the Health Developer Network, and figure 2 shows a fragment of RDF data for this URI as published by the Health Developer Network in RDF/XML format. These links become more interesting when they are outgoing links to other publishers’ URIs and data. Serving Linked Data There are several technical approaches to serving Linked Data in RDF format: - Static RDF file - Relational database - Wrapping API - Triplestore The first is to simply serve static RDF files. You can generate these RDF files in a variety of ways: manually create them in a text editor, or use a tool to convert existing structured data files such as CSV, XML and Excel into RDF. Note there are several RDF serialisation formats available: - RDF/XML is an XML format for RDF - RDFa is a format that embeds RDF in HTML documents - Turtle is a plain text format for RDF - N-Triples is a subset of the Turtle format for RDF RDF/XML is currently the only format standardised by the W3C and is also the most widely used, so it is the recommended format to use. Once you have created the static RDF files you can publish them on a web server. URLs should end in .rdf and have a MIME type of application/rdf+xml.
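The directed-graph nature of RDF can be sketched with plain subject–predicate–object tuples. The predicate names below are simplified placeholders for illustration, not the actual vocabulary published by the Health Developer Network:

```python
# Two URIs from the article: the trust and its Government Office Region.
RR8 = "https://data.developer-test.nhs.uk/ods/org/RR8"
GOR = "https://data.developer-test.nhs.uk/ods/org/D"

# Each triple is (subject, predicate, object). Predicates here are
# invented placeholders, not the publisher's real vocabulary.
triples = [
    (RR8, "name", "LEEDS TEACHING HOSPITALS NHS TRUST"),
    (RR8, "opened", "1998-04-01"),
    (RR8, "governmentOfficeRegion", GOR),  # object is itself a URI
]

# A triple whose object is a URI is an edge of the directed graph:
# following it leads to another described resource.
links = [(s, o) for s, p, o in triples if o.startswith("https://")]
print(links)
```

Literal-valued triples (the name, the opening date) are properties of a node, while URI-valued triples are the links that make the data "linked".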
Serving static RDF files is a good choice if the files are small and their content does not change often. If the data is already stored in a relational database, then it can be served as RDF data by using a tool that dynamically maps the database contents to RDF and serves it up. A widely used tool to do this is the D2R server. For large volumes of data, or data that changes frequently, this is potentially a good approach. However, careful consideration should be given to the required RDF serving performance and the contention impact on the underlying relational database if this approach is to be used for large-scale serving, as each RDF request will involve some sort of SQL query on the relational database followed by a transformation of the query result into an RDF data model. If the data is managed within an existing system that provides proprietary APIs to access the data, you can develop custom wrappers around these APIs that expose them as HTTP URIs and return RDF. As with a relational database, if large-scale serving is required there may be performance limitations imposed by the wrapper and a significant contention and load impact on the underlying system. The final approach is to use a triplestore. This is a repository specially designed to store RDF data in its native structure, which consists of triples of subject, predicate and object. Triplestores probably offer the best technical approach in terms of scalability and performance. Triplestores are normally used with a SPARQL processor to serve RDF data from the store. SPARQL (pronounced “sparkle”) is the recursive acronym for SPARQL Protocol and RDF Query Language. Similar to SQL for relational databases, it provides a standard way to query and get result sets from RDF data.
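To illustrate the kind of matching that SPARQL standardises, here is a toy basic-graph-pattern matcher over in-memory triples. The prefixed names and the second organisation (org:RX1) are invented for the example; a real triplestore would use full URIs and a SPARQL engine:

```python
# A toy triple-pattern matcher in the spirit of SPARQL: None acts as a
# variable, anything else must match exactly.
def match(triples, pattern):
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A tiny in-memory "store" with invented prefixed names.
store = [
    ("org:RR8", "rdfs:label", "LEEDS TEACHING HOSPITALS NHS TRUST"),
    ("org:RR8", "ods:region", "org:D"),
    ("org:RX1", "ods:region", "org:D"),
]

# "Which organisations are in region org:D?" -- roughly the SPARQL query
#   SELECT ?org WHERE { ?org ods:region org:D }
print([s for s, p, o in match(store, (None, "ods:region", "org:D"))])
# ['org:RR8', 'org:RX1']
```

Real SPARQL adds joins across multiple patterns, filters and result formats, but the core idea is this same pattern matching against the triple graph.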
‘Hypogammaglobulinaemia’ means that there are low levels of immunoglobulins (also known as antibodies) in the body, which are important in fighting infections. There are three main groups of immunoglobulin – IgG, IgA and IgM. How does the immune system normally develop? When babies are born, their immune systems are very immature. They will have received some IgG from their mothers by transfer across the placenta during the last few months of pregnancy, and they will be producing only small amounts of their own IgA and IgM. During the first six months of life, the IgG which came from the mother is gradually lost. At the same time, the baby starts to make his or her own IgG, and more IgA and IgM. However, as the baby does not make IgG as fast as it loses that which came from its mother, the total amount of IgG in the blood falls steadily. It usually reaches its lowest level at about six months of age. This is normal, and is called “physiological hypogammaglobulinaemia”. After this, immunoglobulin levels rise gradually throughout childhood, until adult levels are reached when children are about 14 years old. If a baby is born very early, there will not have been time for the normal transfer of IgG from the mother to take place. Premature babies may therefore have earlier and more marked physiological hypogammaglobulinaemia than normal. What happens to the immune system in THI? Babies are sometimes slow to start producing immunoglobulins. All types of immunoglobulin may be low, or one or two may be normal. This problem does not usually last for very long, and levels in most children will have ‘caught up’ by the time they are three to four years of age. In a few children, there may not be complete catch-up until they are about ten years old. What problems does THI cause? Children with THI may have more frequent and prolonged infections than other children of similar ages. These are often throat and ear infections, or non-specific viruses.
A typical story is that the child is having to go to the doctor often and is being given many courses of antibiotics, particularly in the winter. Some parents report that their child is unwell again as soon as antibiotics are stopped. However, it’s important to remember that frequent infections in normal children are particularly common at times when they start to mix with other children, such as starting nursery or school. Lots of young children suffer from frequent minor infections and most have completely normal immune systems. Children with THI are occasionally at risk of serious infections such as pneumonia or meningitis, but this is relatively unusual. What can happen in the long term? The problem gets better by school age in the vast majority of children. However, a small minority of those thought to have THI do not improve with time, and their immunoglobulin levels remain low or even fall further. In this very small number of children, a long-term immune deficiency develops, known as common variable immunodeficiency (CVID). It is important to emphasise that most children with recurrent infections and low immunoglobulin levels in infancy will not develop lifelong problems, but will have normal immune systems and lead normal healthy lives. What causes THI? This is not known. The rate of development of the immune system varies greatly in different individuals, and THI probably simply represents one end of the spectrum. How common is THI? The true frequency of THI is unknown. It is possible that many children who suffer from frequent infections in the first few years of life may in fact have THI, but are simply never investigated. Is there a risk that other children in the family could have THI? There is a slightly increased risk for other children in the same family, compared with the general population, but the overall risk is still very low. How will my child be investigated? 
If there is concern that the child suffers from more frequent, more prolonged, or more severe infections than normal, he or she may be referred to a paediatrician and possibly to an immunologist. A blood test will be needed to measure his or her immunoglobulin levels and probably to check for specific antibodies which should have been produced following vaccination against certain infections such as tetanus and Hib. It is unlikely that more complicated tests will be necessary. If the child is found to have low levels of vaccination antibodies it may be necessary to give some ‘booster’ immunisations, followed by a repeat blood test, to check that he or she is properly protected against certain infections, and as a further test of his or her immune system. What is the treatment? There is no standard treatment for THI. Management is aimed at maintaining good day to day health and a normal life, including regular school or nursery attendance. Some children can be managed simply by treating infections quickly as they arise. However, if a child is getting very frequent infections – perhaps every four weeks – he or she may need to be given a regular low dose of antibiotics. This can be very successful, and can sometimes transform a child from being constantly unwell, feeling miserable and growing slowly, to a normal, lively, happy one. Regular antibiotics can be continued for several years if necessary, although in practice this is unusual. In some children, regular antibiotics are only necessary during the winter months. Occasionally children with THI may have had, or continue to have, more serious infections. In this very small group, replacement immunoglobulin may be considered. This might be continued for several years but would not usually be needed after about ten years of age. If infections and the degree of hypogammaglobulinaemia are severe enough to require immunoglobulin replacement there is a higher chance that the problem will persist and evolve into CVID. 
Are there any long-term effects of THI? There are no long-term problems for almost all children with THI. They will grow and develop well and lead normal healthy lives. If, however, they have had serious infections before the problem was recognised, it is possible that there could be some damage, particularly to ears and lungs. Hearing may be affected and require follow-up by ear, nose and throat specialists, and audiologists. Lung damage is much more unusual and only occurs if there have been repeated episodes of pneumonia. How will my child be monitored? The child will have regular reviews by an immunologist or general paediatrician, usually every four to six months. His or her immunoglobulin levels will be checked by a blood test every six to twelve months. If you are worried at any time between regular reviews, additional appointments can be made. Do I need to take any special precautions to protect my child? Children with THI should ideally lead completely normal lives. They can take part in all activities. The only difference will be that parents should ask their GP early if their child is unwell, since antibiotics may be needed. What about immunisations? Most children will have had their first immunisations, including live polio vaccine, before their THI is diagnosed. Many will not yet have received MMR. Part of the initial investigation of THI includes assessment of antibody responses to vaccines, as mentioned above. If good responses to the first set of vaccines can be demonstrated, then there is no reason not to proceed with MMR. If, however, the responses are poor or absent, MMR should be delayed until the immune system can be shown to be maturing – with evidence of good responses to previous vaccines. In infants with THI there is no evidence that live vaccines should be avoided.