Music: The instrument for language acquisition

By Deborah Espitia

Get your students jazzed about learning languages and motivate them with some rocking strategies by incorporating music into your instruction. The benefits of using music in language instruction have long been known. From his work beginning in 1982, Principles and practice in second language acquisition, Stephen Krashen addressed the use of background music as a way to lower anxiety associated with learning a second language. Others in the field, such as Annette De Groot, in her 2006 article for Language Learning, “Effects of stimulus characteristics and background music on foreign language vocabulary learning and forgetting,” have addressed the increase in retention of target language vocabulary.
How a picture is made

When you look at a photograph of a scene, visual cues - such as converging straight lines, shading effects, receding regular patterns and shadows - are processed by your brain to retrieve consistent information about the real scene. Lines parallel to each other in the real scene (such as the tiles on a floor) are imaged as converging lines in the photograph which intersect at a point called the vanishing point. This holds for any set of lines as long as they are parallel to each other in the scene. Two or more aligned vanishing points define a vanishing line, such as the horizon, which defines the eye level of the viewer in the picture. These visual clues are used by artists in their paintings, in a technique called linear perspective that was invented in the second decade of the fifteenth century in Florence by Filippo Brunelleschi. During the following decade it began to be used by innovative painters as the best way to convey the illusion of a three-dimensional scene on a flat surface. In the seventeenth and eighteenth centuries a number of mathematicians such as Desargues, Pascal, Taylor and Monge became increasingly interested in linear perspective, thus laying the foundations of modern projective geometry. Projective geometry can be regarded as a powerful tool for modelling the rules of linear perspective in a metrical or algebraic framework.

Geometric consistency and measuring heights

We should constantly bear in mind that a painting is a creation that relies upon the artist's and spectator's imaginations to construct a new, artificial world. This world originates from the hands of an artist skilled in achieving effects in which manipulating the perspective may be an advantage and in which accuracy may not be paramount. In particular, before any geometric reconstruction can be applied it is necessary to ascertain the level of geometric accuracy within the painting and, by implication, the desire of its maker for perspectival precision. There are some simple techniques for assessing the consistency of the painted geometry. Vanishing points and vanishing lines are among the most useful projective entities of an image, and a natural way to assess the correctness of a painting's geometry is to check whether images of parallel lines do intersect in a single point on the painting. Even in perspectivally constructed images the heights of figures might be varied by the artist according to the status of those represented. For example, the person paying for the painting may be made to appear larger than other figures in the image. Therefore, comparing the heights of people in a painting can prove interesting - not only to ascertain their consistency with perspective rules, but also in order to establish whether any disproportion is an intentional response to hierarchies of status. The image above shows how people's heights can be computed directly from perspective images. To compute the height of the man with respect to the height of the column (or equally to any other reference object chosen from the picture), the height of the man is projected onto the height of the column in the image using the vanishing lines from the top and bottom of the two objects. This gives

height of man / height of column = d2 / d1,

where d1 and d2 are the measurements from the image of the height of the column and the projected height of the man respectively.
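A minimal numerical sketch of both checks described above (this is my own illustration, not code from the article, and every pixel coordinate below is invented): vanishing points are found by intersecting image lines in homogeneous coordinates, and, when the vertical vanishing point is at infinity, the relative height follows the d2/d1 ratio.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines, returned as (x, y); assumes it is finite."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# Hypothetical endpoints of two floor-tile edges that are parallel in the real scene.
edge_a = line_through((120, 410), (300, 355))
edge_b = line_through((140, 500), (330, 420))
vp_ab = intersection(edge_a, edge_b)

# A third parallel edge should pass (nearly) through the same vanishing point;
# the pixel distance of the vanishing point from that line is a simple consistency score.
edge_c = line_through((160, 590), (360, 487))
residual = abs(np.dot(edge_c / np.linalg.norm(edge_c[:2]), [*vp_ab, 1.0]))
print(vp_ab, residual)

# With a vertical vanishing point at infinity (vertical lines stay parallel in the image),
# the relative height follows the ratio quoted in the text:
d1, d2 = 240.0, 195.0   # invented image height of the column and projected height of the man
print("man/column height ratio ≈", d2 / d1)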
Photographs can behave in a more complicated way, in that the vertical vanishing point may be finite (vertical lines eventually intersect in the image) rather than infinite (as in the example above, where the vertical lines are parallel). In such cases the simple formula above for calculating heights does not work, but instead a slightly more complex formula is used, which includes the finite vertical vanishing point in the calculation.

Comparing heights in the Flagellation

Step inside the painting by viewing the movie (5M).

Flagellation, by the highly skilled artist and mathematician Piero della Francesca, is one of the most studied paintings from the Italian Renaissance period as it is a masterpiece of perspective technique. The "obsessive" correctness of its geometry makes it a most rewarding painting for detailed mathematical analysis. The method for computing heights described above can be applied to this painting using the figure of Christ as the reference object. At first glance it is not easy to say whether the heights of the figures in the background are consistent with the ones in the foreground, but this technique shows that the measurements are all quite close to each other, confirming the extreme accuracy and care for detail for which Piero della Francesca has become noted. An important application of this theoretical framework is its use in forensic science to measure dimensions of objects and people in images taken by surveillance cameras. The quality of the images is usually very bad (as they are taken by cheap security cameras), and quite often it is not possible to recognize the face of the suspect or distinct features on their clothes. Therefore the height of the person may become an extremely useful identification feature. In the case of photographs of real objects the reference height (the height of the phone box in the figure on the left) may be known or can be measured in situ, and the height of the people in the photo can be computed in absolute terms. If a painting conforms to the rules of linear perspective then it behaves, geometrically, as a perspective image and it can be treated as analogous to a straightforward photograph of an actual subject.

Deeper into geometry ...

An illustration of Leonardo's perspectograph: a point X on the globe is projected to a point x on the image plane via a straight ray from X to Leonardo's eye.

In a central projection camera model, a three-dimensional point in space is projected onto the image plane by means of straight visual rays from the point in space to the optical centre (such as your eye, see image of Leonardo's "Perspectograph"). This process can be described mathematically by a projection matrix P, which takes a point in three-dimensional space and transforms it into a point on the two-dimensional image plane. The projection matrix P can be computed from the external and internal camera parameters, such as its position, orientation and focal length. In the case where planar surfaces are imaged, the transformation is called a plane-to-plane homography (a simpler matrix H). If the homography between a plane in the scene and the plane of the image (the retina or the canvas) is known, then the image of the planar surface can be rectified into a front-on view. The homography can be computed simply by knowing the relative position of four points on the scene plane and their corresponding positions in the image.
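As a concrete sketch of that four-point recipe (my own illustration, not code from the article), OpenCV can estimate and apply the homography directly; the file name, pixel coordinates and the 1:2 aspect ratio below are all assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("facade.jpg")                      # hypothetical input photograph

# Four image corners of a window, clicked by hand (hypothetical pixel coordinates),
# ordered top-left, top-right, bottom-right, bottom-left.
src = np.float32([[412, 135], [623, 168], [610, 540], [398, 505]])

# Target rectangle with the window's real aspect ratio (assumed 1:2, width:height).
w, h = 200, 400
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

H = cv2.getPerspectiveTransform(src, dst)           # plane-to-plane homography
frontal = cv2.warpPerspective(img, H, (w, h))       # rectified, front-on view
cv2.imwrite("window_rectified.png", frontal)
```

With more than four correspondences, a least-squares estimate such as cv2.findHomography would typically be used instead of the exact four-point solution.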
For example, the left-hand image above is a photograph of a flat wall of a building taken from an angle. Four corners of a window have been selected, and the homography between the plane of the wall and that of the photograph has been computed by mapping the selected four image points to a rectangle with the same aspect ratio as the window. Thanks to the homography, a new view of the wall (on the right) has been generated as if it were viewed from a front-on position.

A black and white pattern can be seen on the floor in the Flagellation. Martin Kemp's manual reconstruction of the floor pattern. The computer reconstruction of the floor.

Piero della Francesca's Flagellation shows, on the left hand side, an interesting black and white floor pattern viewed at an angle. Alongside this image is a manually-rectified image of the floor pattern produced by Martin Kemp (in his book "The Science of Art"), and the rectification achieved by applying a homography transformation as described above (where the four vertices of the black and white pattern have been selected as the base points for the computation of the homography, and assumed to be arranged as a perfect square). There is a striking similarity between the computer- and manually-rectified patterns. However, the computer rectification has many advantages, including speed, accuracy and the fact that the rectified image retains the visual characteristics of the original painting. Furthermore, the computer rectification discovers two patterns, one before and one behind the central dark circle on which Christ is standing. The farther instance of the pattern is very difficult to discern by eye in the original painting, while it becomes evident in the rectified view. Another example of Piero della Francesca's incredible skill and precision.

Now we get to the exciting bit! If an image has enough geometric consistency, the methods described above (rectifying slanted views, estimating distances from planar surfaces such as heights of people) can quickly produce a complete three-dimensional reconstruction of the image. The three-dimensional reconstruction process can be used to explore the possible structural ambiguities that may arise, and can magnify possible imperfections in the geometry of the painting. The church of Santa Maria Novella in Florence boasts one of Masaccio's best known frescoes, The Trinity (1426), painted just before his early death in 1428 at the age of 27. The fresco is the first fully-developed perspectival painting from the Renaissance that uses geometry to set up an illusion in relation to the spectator's viewpoint. The Trinity has been analysed repeatedly using traditional techniques, but no consensus has been achieved. It has become apparent that analyses starting with the assumption that the vault coffers are square result in a different format from those that start with the assumption that the plan of the chapel is square (these two assumptions seem likely, since having a square ground plan seems to be the natural choice from a design point of view, and that of square coffers seems to be more likely from a perceptual point of view), although looking at the painting one may think that the two assumptions are consistent with each other.

There is an infinite number of reconstructions consistent with the original painting.

Single-view reconstruction algorithms have been applied to an electronic image of the fresco to help art historians resolve this debate.
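A toy numerical check of why a single image admits infinitely many reconstructions (again my own sketch, not the article's code): stretch the scene in depth with a transformation T and compensate the camera by multiplying the projection matrix by the inverse of T, and the picture does not change at all.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 4))                  # stand-in for any central-projection matrix
X = np.vstack([rng.normal(size=(3, 6)),      # six arbitrary scene points ...
               np.ones((1, 6))])             # ... in homogeneous coordinates

# Stretch the scene along one axis (think: the depth of the vault coffers).
T = np.diag([1.0, 1.0, 1.7, 1.0])

x_original = P @ X
x_stretched = (P @ np.linalg.inv(T)) @ (T @ X)   # stretched scene, compensated camera

# The two scenes are genuinely different in 3D, yet they produce the same picture,
# so the flat image alone cannot tell us which one the artist "meant".
print(np.allclose(x_original, x_stretched))      # True
```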
Since one image alone is used and no scene metric information is known (the "chapel" is not real), the number of reconstructions consistent with the original painting is infinite. In fact, different choices of the coffers or ground plan aspect ratios yield different consistent three-dimensional models. At this point new questions arise. Which architectural structure did the artist want to convey? If he had started by laying down a square base, why would he choose rectangular-shaped coffers? Was he aware of the depth ambiguity? Was it done on purpose? Without exploring the answers in detail here, we suspect that Masaccio began, as most designers would, with the overall shape, and then fitted in the details to look good, and that when he found that his earlier decisions had resulted in coffers that were not quite square (if he noticed!) he decided that they would look effectively square anyway. In the final analysis, visual effect takes over from absolute accuracy. Whatever the reason for Masaccio's ambiguity, the computer analysis performed here has allowed us to investigate both assumptions rigorously, by building both models efficiently, visualising them interactively and analysing the shape of vault and base in three dimensions (view the movie for the model with square coffers (5.3M), and the movie for the model with a square base (2M)).

St Jerome in His Study

Step inside the painting by viewing the movie (3.3M).

St Jerome in His Study is an oil painting by the Dutch artist H. Steenwick (1580-1649), who was one of the pioneers of perspectival interiors in Dutch painting. Linear perspective was generally adopted later in northern Europe than in Italy, but it was in Holland, where elaborate depictions of buildings and townscapes in their own right became a major genre for painters in the seventeenth century, that the potential of Brunelleschi's invention for the depiction of actual (or apparently real) views was fully realised. The accuracy of the perspective in Steenwick's St Jerome, and the amazing management of light and shade as it traverses the spaces, make this painting a very significant early example of Dutch painting of domestic and ecclesiastical interiors, combining in this case both a room and a distant vista into a church. The beautifully characterised passage of the light from the windows on the left, casting shadows across the tiled floor, gives Steenwick's imagined interior an extraordinary sense of veracity. Given its strong geometrical component (numerous parallel lines and planar surfaces can be observed), the painting proves an ideal input for our reconstruction techniques. Reconstructing this painting in three dimensions also offers the possibility of detecting and investigating inconsistencies which are hard to notice through an analysis of the flat original image alone.

The window as it looks. A rectified, front-on view.

The images above show the original and a reconstructed front-on view of the large window on the left hand side of the painting. Notice that, while parallelism and angles have been recovered correctly, an unexpected asymmetric curvature of the top arch can be detected - the right side of the arch appears to be thicker than the left side. This inconsistency is made evident by our reconstruction process and is less noticeable in views taken from locations closer to the original view point.
This geometrical imperfection is probably due to the fact that the artist has painted a complicated curve at an angle by eye and without undertaking a precise projection, which probably wasn't visually worth the effort. The inaccuracy in the painting can be interpreted statistically, by assuming that during the painting process there is the same likelihood of the artist making a mistake in any direction and at any point of the canvas. In the figure on the left below, the distribution of the uncertainty on the plane of the painting is visualised by superimposing a regular grid of circles on the original painting. The figure on the right shows a front-on view of the window, computed by the usual method of rectifying the image by applying a homography transformation to the original painting. The circles in the figure on the left are mapped by the reconstruction process into ellipses of increasing size going from left to right, accounting for the reduced accuracy of the right side of the window arch. The idea of investigating geometric imperfections by generating new views of portions of a painting was already present in what is considered to be the very first treatise on perspective, De Pictura by Leon Battista Alberti (1435), where he suggested looking at paintings in a mirror to expose any weaknesses. The three-dimensional reconstruction of this image offers another way to expose these weaknesses.

Virtual space, the final frontier...

These three-dimensional models can be brought together to create an interactive virtual museum, where viewers can visualise the paintings in three dimensions, and interact with them by "diving" into the virtual scenes. And perhaps the time when we can literally step inside the painting is drawing near. In the first steps towards this goal, researchers from Microsoft have developed the Holosim - a hand-held device, such as a palmtop, fitted with tilt sensors so that as you tip or move the device the three-dimensional simulation on the display responds to your movements - allowing you to observe the object from different view points. This opens up exciting new possibilities such as inspecting a virtual three-dimensional reconstruction of a famous object as we hold it in our hands (own a virtual Ashes trophy), or using it as a window on a virtual space. And for the Trekkies among us, surely the holodeck is only a matter of time. The author would like to thank A. Zisserman, I. Reid, M. Kemp and L. Williams for their collaboration on this work.

About the author

Antonio Criminisi is a researcher at Microsoft Research in Cambridge. His current research interests are in the area of image-based modelling, texture analysis and synthesis, video analysis and editing, and 3D reconstruction from single and multiple images with application to Virtual Reality, Forensic Science, Image-Based Rendering and Art History. Antonio developed the work in this article while he was part of the Visual Geometry Group at the University of Oxford. For more examples of this work and for details of his book, Accurate Visual Metrology from Single and Multiple Uncalibrated Images, you can visit his web page.
Adapt or perish: How the region’s corals survive against all odds

Published online 15 October 2015

Corals in the Middle East are the perfect model for adaptation to extreme environments. Although research into how they survive is in its infancy, the findings range from promising to fascinating. The world’s corals are currently undergoing severe mass bleaching – an event that scientists say has taken place only two other times in recorded history. The cause is El Niño, which warms the world’s oceans and is a catastrophe for sea life. Its disastrous effects on coral reefs force marine animals and the algae that give them their vibrant colours to part ways, breaking up a symbiotic relationship essential for the corals’ survival and making them wither and eventually die. The last El Niño event, during 1997-1998, killed about a sixth of all coral colonies worldwide. By wiping out reef populations, the phase disturbs the food cycles of these tightly-woven ecosystems. Another side-effect is the starvation of different fish species in the aquatic areas most stressed by El Niño, which is expected to last for a few more months. Although the phenomenon hits reefs in the Pacific, Indian and Atlantic oceans directly, it has a long reach, affecting reefs worldwide. Amid all the devastation, the unique species of the Middle East are marginally inured to the lethal warmth due to marvellous adaptations. The reefs of the Persian/Arabian Gulf and the Red Sea have already learned how to survive outside the nutrient-rich cold marine ecosystems that their counterparts elsewhere are generally used to. In fact, these are corals that live in constant warmth all year round. Because of their robustness in the face of high temperatures and salinity, they have become an object of fascination for the region’s scientists across top research facilities including KAUST and NYUAD. The resilience against harsh stressors is a model of tolerance that some scientists believe can shed light on the solutions to restore and conserve species in other harsh environments – but only if intensively studied and demystified. In this special, Nature Middle East speaks to the researchers leading the vanguard of coral reef research efforts in the region. In her feature, Corals get by with a little help from a friend, writer Nadia Al-Awady talks to John Burt and Emily Howells about their thermal stress experiments. Burt speaks of how the biology of reef fauna in the Gulf can provide some incredible insights into how corals and other reef-associated organisms might cope with future climate change. Howells offers clarity on how the symbiotic relationship in this region’s corals is able to survive high summer sea temperatures. Deeper into the waters of the Middle East lie many hundreds of corals that are perhaps the only rivals to their shallow counterparts in terms of strength and resilience. And Sedeer El-Showk, in his piece Deep Sea corals in the Red Sea: reservoirs of hope, outlines the horizons of research that have opened due to the very existence of these deep-sea corals. “Corals are very successful evolutionarily. They’ve been here for the last 250 million years,” Chris Voolstra, KAUST researcher, says, debunking the myth that corals are static organisms with poor prospects in the face of climate change.
Moheb Costandi, meanwhile, goes fishing for evidence on how culling top predators in this region, like sharks, is affecting coral reefs. His piece, Overfishing threatens Middle East coral reefs, considers the struggle over fish as sustenance between corals and man, with the latter exhausting resources that are essential for the stability and biodiversity of the Red Sea ecosystem. Scientist Michael Berumen says that overfishing has eliminated entire groups of fish from the delicate marine food chain. Researchers Yi Jin Liew and Manuel Aranda walk us through the epigenetic adaptation of corals, sharing their awe at how Red Sea corals, for instance, thrive under conditions that would not be tolerated by their relatives in other oceans. In an exclusive commentary for Nature Middle East, they explore the adaptive capacities of corals here and consider the question of whether other species may have similar untapped potential. “Corals have survived several mass extinction events, including the ones that led to the demise of dinosaurs,” they write, going on to reflect on what corals are doing right in terms of genetic adaptation. Finally, Louise Sarant, in Modern taxonomy illuminates the Red Sea, delves into the taxonomy methods that scientists rely on to classify coral species – and how eschewing some of the traditional methods is sometimes the only way to effectively study species in this region. Berumen and Gustav Pauly curate for Sarant a variety of regional studies that analyze genetic information about new species and identify them in novel ways. Sarant’s essay is accompanied by a blog post on our House of Wisdom that highlights the importance of conservation efforts in the region – the corals’ endurance notwithstanding. The Red Sea, for instance, will eventually bear the brunt of climate change, and while many of the species are significantly immune to changes in temperature now, they’re not invincible. The ecosystem can be damaged as temperatures increase, scientists warn. Although much is yet to be learned about the corals of the region, it is clear that their complexity and capability are worth the investment of time and research, as their secrets may hold the key to the survival of many other marine species.
Jon Gosier, from Appfrica.com, created this infographic, Population of the Dead, to help visualize the question “How many people have ever lived?” Across the top is also a timeline of births that helps demonstrate how quickly the population has accelerated in the last few hundred years. Text from the image: The numbers are highly speculative but are as accurate as modern science allows. It’s widely accepted that prior to 2002 there had been somewhere between 106 and 140 billion Homo sapiens born to the world. The graphic below uses the conservative number (106 bn) as the basis for a concentric circle graph. The red dot in the center is scaled to represent how many people are currently living (red) versus the dead (white). The vertical line represents time. The spectral graph shows the population ‘benchmarks’ that were used to estimate the population over time. Adding up the population numbers gets you to 106 billion. The two spheres are then used to compare against other numbers.
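For readers who want to reproduce the scaling, here is a quick back-of-the-envelope sketch. The 106 billion total comes from the text; the count of people alive today is my own assumption, since the infographic's exact figure isn't quoted here.

```python
# Rough check of the living-versus-dead scaling behind the graphic.
total_ever_born = 106e9     # conservative total from the infographic text
living = 7e9                # assumed number of people alive today (not stated in the text)

fraction_living = living / total_ever_born
print(f"living share of everyone ever born: {fraction_living:.1%}")   # about 6.6%

# If the red dot's *area* is meant to be proportional to the living population,
# its radius should scale with the square root of that fraction.
radius_ratio = fraction_living ** 0.5
print(f"red-dot radius as a share of the full circle's radius: {radius_ratio:.2f}")
```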
So let's discuss ionizing radiation a little bit. Whenever we discuss the field of radiology it's important to have at least a basic understanding of what ionizing radiation is and what the effects of radiation are. So radiation is the emission of energy in the form of waves. Electromagnetic radiation is the type that's used in radiography and CT scans. And it can actually be harmful when it's used in excess, so it's important to remember that when you're ordering CT scans on a patient or even when you are the one performing them. Radiation can go in multiple different directions. There is transmitted radiation, which is the amount of radiation that actually passes through the patient and reaches the detector to create an image. There is absorbed radiation, which is the amount of radiation that interacts with the patient's tissues, and that is measured in units called Gray. And there is scatter radiation, which is the amount of radiation that's deflected in a different direction and is neither absorbed nor transmitted. So, if you're a bystander in the room with a patient who is having a CT scan, you are susceptible to the scatter radiation which is bouncing off of the patient. There are multiple different sources of radiation; imaging is just one of them. Imaging constitutes about 50% of radiation exposure these days, at least in countries that use CT scans more often. Other types of radiation include cosmic radiation, which comes from outer space. Radiation can also come from radioactive material found within the soil, and radon is also an important source of radiation. So what are some biological effects of radiation? It can cause molecular damage and it can create free radicals within the body, and this is one of the reasons why it can be harmful. It also results in disruption of normal cellular metabolic function and mitosis. And because of these it can have multiple different effects on the human body. So there are deterministic effects and there are what are called stochastic effects. Deterministic effects are effects that occur at very high doses of radiation. They result in cell killing, including skin erythema, cataracts, and sterility. And this really only occurs above a certain threshold, so with deterministic effects, you don't have any effect at all until you reach a certain threshold of radiation and then all of a sudden you have the effects of cell killing. Stochastic effects, on the other hand, have no dose threshold. They include carcinogenesis and genetic damage, and if the dose increases the probability of a stochastic effect increases, so there is no threshold the way a deterministic effect has. So as you increase dose and as you do more and more CT scans, let's say, the probability of a stochastic effect will increase. Most susceptible to the effects of radiation are the bone marrow, colon, lung, and stomach, with moderate effects on the bladder, breast, liver, esophagus, and thyroid, and the main effect in these organs is really induction of cancer. Children are obviously the most susceptible; they have the most stem cells, and stem cells are very susceptible to radiation. So when imaging a child, it's always very, very important to be careful as to the amount of radiation that you provide. So let's take a look at fetal risk of radiation. If a child is very susceptible to radiation, a fetus is actually even more so. So fetal risk of radiation actually depends on the days after conception that the fetus encounters the radiation.
So within the first 1 to 10 days, the fetus has the highest risk of radiation and it really could result in fetal demise. And this is all when a fetus receives a dose of about 10 mGy or more. About 20 to 40 days after conception, the fetus can have congenital anomalies which can present after birth. At about 50 to 70 days, the radiation can result in microcephaly. Further out, at about 70 to 150 days, it can lead to growth and mental retardation, and again, these are all things that may or may not occur, and you may not know whether they're going to occur until years after the child is born. Greater than 150 days after conception, it can result in childhood malignancies. So it's very important to remember this chart and to know that in a pregnant female you really don't wanna be doing any kind of study that results in risk of radiation unless you absolutely have to. You really have to weigh the pros and cons and to see whether or not this patient really needs the imaging study. So again, risk of radiation really depends on the level of gestation. So how can we protect against radiation? There's something known as ALARA which stands for As Low As Reasonably Achievable. You wanna minimize the amount of imaging whenever possible and you wanna minimize imaging doses whenever possible. So CT scans can be performed in a variety of different ways. When you're performing a CT scan on a child, it's always very reasonable to lower the dose so that the child receives less radiation. You have to keep in mind however that when you lower the dose of an imaging exam, you're also lowering the sensitivity of that exam. You want exposed personnel to be monitored by a film badge, so all radiologists and all technologists that work in the field always wear a film badge, and that shows how much radiation that person is receiving. Lead shielding is always used and you wanna increase your distance from the source. So as we know, scatter radiation is always present and you wanna be as far away from that source of scatter as possible, especially when you're someone that works in the field and you're exposed to this day to day. Rooms are now designed with the shielding in place to help prevent radiation exposure. So let's take a look at the differences in radiography and CT. In terms of the mechanism of acquisition, both use ionizing radiation. CTs are actually a lot more expensive than a radiograph and they take a few seconds longer to perform. CTs are not portable - the patient has to go into the CT gantry - while radiographs are portable. So in a patient that's not able to move around, radiographs are a very good imaging option. Radiographs take just a few seconds, CTs take a little bit longer than that but really no longer than about a minute or so. And radiographs are performed without the administration of intravenous contrast. CT scans may or may not need intravenous contrast, so in a patient that has a contraindication to contrast this is also something to keep in mind. It's important to remember, in terms of radiation, that CTs actually deliver a lot more radiation than a radiograph does.
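The gestational-age timeline in the lecture can be summarised as a simple lookup. This sketch only encodes the windows quoted above; it is illustrative, not a clinical tool, and the edge cases between windows are my own assumption.

```python
def fetal_radiation_risk(days_post_conception: int) -> str:
    """Rough lookup of the dominant fetal risk by gestational age, mirroring the
    timeline quoted in the lecture. Illustrative only, not a clinical tool."""
    if 1 <= days_post_conception <= 10:
        return "highest risk: fetal demise (at doses of roughly 10 mGy or more)"
    if 20 <= days_post_conception <= 40:
        return "congenital anomalies that may present after birth"
    if 50 <= days_post_conception <= 70:
        return "microcephaly"
    if 70 < days_post_conception <= 150:
        return "growth and mental retardation"
    if days_post_conception > 150:
        return "childhood malignancies"
    return "no dominant risk listed for this window in the lecture"

for day in (5, 30, 60, 100, 200):
    print(day, "->", fetal_radiation_risk(day))
```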
It is not always obvious, but "mold is everywhere." Literally, there are spores in the air you are breathing right now. When air samples are taken, there is never a zero count of mold spores for "clean" areas. Mold grows in almost any dark, moist location. Not only can it grow on food (bread, cheese, etc.), but it also grows on trees and in other areas such as basements, walls, ceilings and air conditioners.

Why is this important to know? It is important because mold can cause structural weakness in the materials that attract it. Residential and commercial structures that use wood can ultimately fail after repeated exposure to water. Water exposure from defective roofs, leaking pipes, groundwater through walls and foundations, or elevated moisture levels due to lack of ventilation and/or vapor barriers causes damage to wood and drywall. Therefore, sagging roofs, floors and ceilings may be indicators of mold growth.

What causes mold growth in homes and commercial buildings? It can be created by water from a pipe breaking, a flood or a constant leak. Within 24 to 48 hours, mold can begin to grow after building materials such as wallboard, carpeting and ceiling tiles become exposed to elevated moisture levels. As well, mold can grow when buildings are inadequately designed or constructed and water seeps in. Also, poor maintenance of ventilation systems creates the conditions for mold to grow. Wherever condensation collects there is the potential for water to accumulate, causing moldy conditions.

How do you identify the cause and extent of the problem? CED uses the scientific method to determine the origin and cause of the moisture or water problems which cause mold. We identify the underlying problem from which the water source is coming (e.g. poor ventilation, building envelope breach, leaky roof, lack of vapor barrier, etc.). CED completes the following steps during an investigation:

1. When a mold odor or visible mold is observed by an individual, CED would review documents reporting the problem, items detailing the history of the problem and maintenance documentation.

2. We would conduct a physical site inspection. CED looks at the observed areas of concern and may also look behind walls, above ceilings, inside air conditioners or wherever we believe mold may exist.

a. During this inspection, the engineer looks for the cause of the moisture. They try to determine whether it was due to flooding, leaks from rain and poor construction, a leaking fire sprinkler system, or abnormal amounts of condensation in the air conditioner or ventilation system causing a wet and moldy environment.

b. Should sample collections become necessary, they can be completed by companies with the appropriate equipment.

3. After completing the investigation and determining the cause of the problem, we generate a report describing the nature of the substance, and the origin and cause of the growth.

In conclusion, after observing mold, the building or home owner should develop a strategy to (1) discover the moisture or water source (CED can assist with this process), (2) fix whatever is causing the moisture problem which led to the mold growth and (3) develop a remediation plan. Finally, to remediate the problem in a safe manner, use a remediation company. These trained experts remediate mold in such a way that fungal debris is not spread into the air where it can be breathed.
Should you have any questions or want CED to assist with an inspection, call us at 800.780.4221 or visit us online at www.cedtechnologies.com.
If you are going to have surgery or are planning to participate in a research study, you may have lots of forms to sign. One you should pay special attention to is the informed consent form. The term “informed consent” refers to the permission you give to someone else to take action that will affect you, with a clear understanding of the potential risks and benefits of that action for you. In the context of medical practice and research, informed consent requires doctors and researchers both to seek your permission before treating you or enrolling you in a study and to give you the information you need to make rational decisions about the treatment being offered or the study you are being asked to take part in. You must be competent to understand and voluntarily agree to the proposed course of action. In so doing, you are granting your informed consent. The principle of informed consent is a powerful tool in your hands. Many people think that the primary purpose of informed consent is to protect the hospital from lawsuits or to give the doctor or researcher a green light to go ahead with the procedure, treatment, or study. Informed consent does do these things, but its primary purpose is to give you the opportunity to make decisions about your own medical care. It grants you specific rights as a patient or a research subject. Before you undergo any medical treatment or participate in a research study, there are some important things you should know about informed consent. Informed consent in clinical practice The principle of informed consent has long been an ethical foundation of practicing medicine. Only in the last 50 years, however, has it been codified into law. In all 50 states, doctors are legally responsible for informing their patients of the nature, purpose, and risks of a proposed treatment or procedure. But far from being an abstract concept, informed consent is a very practical way for you to take part in your own health care. Informed consent does not always involve a piece of paper that you have to sign. Oral consent (sometimes known as basic consent) is perhaps the most common kind of informed consent. For this type of consent, a doctor explains to you what he or she is going to do, the reasons for doing it, and any risks that may be involved. A doctor who conducts a blood test or prescribes a medicine, for example, will seek your oral consent. Procedures that require oral consent generally have low risk and are accepted practice. Just because your doctor seeks your oral consent, however, does not mean that you do not have the right to question or refuse treatment. This can be especially important when a doctor is prescribing a drug. Make your consent informed. Many drugs, including those for arthritis, have the potential to cause serious side effects, and you should be aware of these before you begin taking the drug. If you do not understand how the drug works, how to take the drug, or what its benefits, risks, and possible side effects are, ask your doctor about it. Written consent, on the other hand, is necessary for most surgeries and invasive diagnostic tests. In these cases, a doctor will present you with a form to sign before performing the procedure. Handing you the form, however, does not fulfill your doctor’s responsibilities. The doctor should also explain to you, in layperson’s terms, the meaning of the form. 
The American Medical Association (AMA) recommends that doctors, in informing someone of a possible treatment or procedure, discuss the following: the person’s diagnosis; the nature and purpose of the treatment or procedure to be performed and its risks and benefits; any alternative treatments and their risks and benefits; and the risks and benefits of not undergoing any treatment. The form you sign will state these risks and confirm that your doctor has covered them in conversation with you. If your doctor has not covered any of these items to your satisfaction, or if you don’t understand something, ask for more information before you sign. And remember that you always retain the right to refuse the procedure or treatment. You may even change your mind and refuse the treatment or procedure after you have signed the informed consent form. Doctors are also responsible for making sure, before someone grants either oral or written consent, that the person is competent to give consent. For example, a person with Alzheimer disease, or someone taking a medicine that is necessary but impairs judgment, may be determined to be unable to give consent to a treatment or procedure. In this case, a surrogate decision maker must be consulted. Though each state has slightly different rules as to who can be this surrogate, a person previously appointed by you to be your health-care proxy will always be the first person consulted. In the absence of a designated proxy, there is a hierarchy of surrogate decision makers, usually starting with your spouse and moving on through adult children, parents, and brothers or sisters. If no suitable surrogate can be found, a court-appointed surrogate may be called upon to make decisions for you. Surrogates will try to determine what kind of care you would have wanted. Some people make an “advance directive” that clarifies what kind of care they would want should they ever be unable to make the decision on their own. The exception to these cases occurs when a person is unconscious or otherwise unable to respond, has no available surrogate, and requires emergency treatment. In this case, the doctor must use good judgment to determine what, if any, treatment the person needs. Consent is then considered “implied” or “presumed.” Informed consent laws were originally formulated after World War II to ensure that people participating in clinical trials were doing so freely and with full knowledge of the treatments being tested. These laws outlined the basic principles — defined later as respect for persons, beneficence, and justice — that govern the treatment of human subjects in clinical trials. A clinical trial is a research study designed to test the relative effectiveness of a drug or other treatment. Often the trial will compare a new treatment against a standard treatment and/or against a “placebo” (inactive) treatment. Sometimes neither you nor the person administering the treatment will know whether you are receiving the treatment being studied or a placebo. (This is done to eliminate the effects of any bias you, the people giving you the treatment, or the people collecting and evaluating the study data may have concerning the treatment.) Because every study is different, it is important, before agreeing to participate, that you are aware of the study’s procedures and its potential benefits and risks. You will find this information in the study’s informed consent document. 
This information should be sufficiently thorough and clear to allow you to make an informed decision about whether to participate in the trial. According to federal law, informed consent documents for research studies must contain the following:

- an affirmation that participation in the study is completely voluntary;
- a statement of the purpose of the study;
- an explanation of the study’s procedures, including how long the study will last;
- a statement of the possible benefits of the treatments offered in the study, either for you or for people down the road who may have the same medical condition;
- a list of the possible risks of the treatments used in the study, as well as acknowledgement of the possibility of additional, unexpected risks;
- an explanation of any alternative treatments and procedures that may be beneficial to you;
- an explanation of what treatments are available to you in case of injury;
- contact information for people who can answer questions about the study and explain to you your rights as a study participant;
- an explanation of conditions under which you may be required to leave the study, and an assurance that you can voluntarily stop participating in the study at any time and without penalty;
- a statement describing the confidentiality with which your information will be held.

In detailing the study’s procedure, the form should be very clear about what the study will require of you — that is, when, where, and how you will receive treatment and be monitored, and, if it applies, who will provide your day-to-day care. The form should also let you know who will pay for any costs associated with the study (for example, the price of transportation to and from the study center). The form may also include other information, such as an explanation of circumstances — for example, new scientific information — that could change the course of the study. The form is also likely to contain detailed information about who will see and what will be done with your information during and after the study. Furthermore, the form is not allowed to deprive you of any right to seek additional compensation for injury that may take place because of the trial. If you find any of these points not clearly presented, you should ask the researchers basic questions: What will be done to me during the study (and what will I have to do)? How long will the study last? How might I benefit? How could I be hurt? What will happen to my information? Who pays for any expenses that may result from the study? What happens if I am injured in the study, and who pays for my treatment? What if I want to leave the study early? Who can I call if I have a question? The researchers should be able to provide answers to all of these questions. Remember also that not just any researcher can plan and conduct a study. By federal law, any proposed research study, whether it’s a clinical trial or another type of study, must first gain approval from an Institutional Review Board (IRB). (There are a few specific exceptions, mostly for surveys and observational studies.) IRBs may be set up by hospitals, universities, or for-profit organizations and must consist of a diverse group of at least five people familiar with research practice. To receive IRB approval, a researcher has to present a detailed proposal — called a “protocol” — explaining how the study will be run. Informed consent documents used in the study must also be approved by the IRB.
After the study has been approved, the IRB will set up ways to monitor the study as it goes along to make sure that the protocol is being followed. The researchers are answerable to this board for deviations from the original plan or for any other misconduct. If the study does not follow its protocol, or if the risks of the study become too great, the IRB has the power to suspend or shut down the study. For example, in 2004 new information about the pain-killer celecoxib (Celebrex) and the risk of heart attacks caused one IRB to suspend a study involving the drug (the study was later reactivated). IRBs are an important safeguard against abuse. Make absolutely certain that, if you enter a study designed to test a medical treatment or procedure or a study that takes identifiable information from you, it has received approval from an IRB. The bottom line Informed consent carries with it clear ethical and legal responsibilities for doctors and researchers. If a doctor or researcher does not properly inform you of the risks or effects of a procedure, treatment, or study, and you are injured or otherwise adversely affected, it can be considered negligence. But informed consent confers a less obvious responsibility on you — the responsibility to take an active role in your own health care. You are entitled to knowledge, not only of your condition, but also of the ways in which it is being diagnosed and treated. If you do not understand something, ask. The more informed you are about the care you are receiving, the better that care will be for you.
The Gene Machine

This article first appeared in Personal Computer World magazine, December 1996

AS RESEARCHERS CONTINUE to look beyond silicon for the computers of the future, a US scientist has created a massively parallel computer in a single test-tube containing a few drops of liquid. His computer is DNA, the molecule of life. Leonard Adleman, a computer scientist at the University of Southern California, has devised a novel solution to a classic mathematical problem. In doing so, he has single-handedly laid the foundation for a new technology. Imagine that a travelling salesman has to visit a number of towns, starting at one specified town and finishing in another. Given that some roads between towns may allow only one-way travel, and that not all towns will have direct roads between them, is it possible for the salesman to find a route such that he visits each town once only, in a continuous path? The diagram shows the test case used by Adleman. Here, there are seven towns, 1 to 7, and the arrows between towns show the interconnecting roads and the allowed direction of travel. The salesman must start in town 1 and finish in town 7. Such a simple case can be solved with a few minutes' trial and error (the answer is 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7) but as the number of towns and their interconnections increase, the problem becomes very hard to solve indeed. In fact, the problem belongs to one of the hardest classes of mathematical problems known, which require enormous computing power to attack. Adleman's breakthrough was to use DNA to solve the problem, and his approach was ingenious. This is how he did it. DNA comprises two intertwined molecular strands, each of which is a long chain of alternating phosphates and sugars. Attached to each sugar is a molecular group called a 'base', and there are four different kinds, known as A, C, G and T. It is the particular sequence of bases along a strand that forms the genetic code for life. An A base on one strand attracts a T base on the other strand, and a C base attracts a G base. These attractions pull the two strands together into the familiar 'double helix' shape discovered by Watson and Crick in 1953. Adleman represented each city, and each road between two cities, with a specially engineered strand of DNA exactly 20 bases long. The sequence of bases in each strand was carefully designed such that strands could link with each other to spell out possible routes. Take, for example, cities 6 and 2. The strand of DNA representing the road from 6 to 2 would stick to the end of the strand representing city 6, and the beginning of the strand for city 2, but not to any part of any strands for other cities. To solve the problem of finding a route between cities 1 and 7, Adleman mixed together in his test-tube a million million copies of all the possible strands for the cities and their interconnections, and allowed them to link up with each other. Next, he used standard biochemical techniques to isolate particular strands. First, he isolated only those linked-up strands which started with the code for 1, and ended with 7. Then, he isolated only those strands which coded a route through seven cities, knowing that these strands must be exactly 140 (7 x 20) bases long. Longer or shorter strands were rejected. Finally, he kept only those strands containing city 1, and of these he kept only those containing city 2, and so on.
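For comparison, here is roughly what the same search looks like on silicon (my own sketch, not Adleman's procedure). The article does not list the full road map, so the edge set below is illustrative; it simply contains the known answer plus a few invented extra roads.

```python
from itertools import permutations

# Directed roads between the seven towns. Only the answer route 1 -> 2 -> ... -> 7
# comes from the article; the remaining edges are made up for illustration.
roads = {(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7),
         (1, 4), (6, 2), (5, 2), (2, 5)}
towns = range(1, 8)

def hamiltonian_paths(start=1, end=7):
    """Try every ordering of the intermediate towns: the silicon analogue of
    generating all candidate DNA strands and then filtering out the invalid ones."""
    middle = [t for t in towns if t not in (start, end)]
    for order in permutations(middle):
        route = (start, *order, end)
        if all(pair in roads for pair in zip(route, route[1:])):
            yield route

print(list(hamiltonian_paths()))   # [(1, 2, 3, 4, 5, 6, 7)] for this edge set
```

Adleman's test-tube did the same job in parallel: every possible strand was generated at once, and then the wrong lengths, wrong endpoints and missing towns were discarded chemically.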
After seven days of intensive laboratory work, Adleman's test-tube contained the answer to the problem, subsequently visible as a series of dark bands on a DNA sequencing gel. On the face of it, it might hardly seem worth the bother, especially as Adleman already knew the answer before he started the experiment. But this was much more than a curious laboratory stunt. During the initial 'linking-up' stage of the process, Adleman's test-tube computer effectively performed an astonishing 10^14 calculations. And it did so with the consumption of only a tiny amount of energy, and in a tiny physical space. This was the first time that the combinatorial power of DNA had ever been exploited for computation, and Adleman's work has sparked a flurry of activity. The first researcher to take the idea further was Richard Lipton of Princeton University, who showed how to use DNA to solve another important puzzle in computer science: the 'satisfiability' problem, routinely faced by designers of logic circuits. Here, the goal is to find the solutions to problems in Boolean logic. For example, given an expression such as: ( (a = 1) OR (b = 1) OR (c = 0) ) AND ( (b = 0) OR (c = 1) ) the problem is to find which (if any) binary values of a, b, and c satisfy the expression. Like the travelling salesman problem, simple instances are easy to solve by trial and error, but as the number of variables and constraints increase, the computation time mushrooms exponentially and the problem soon becomes intractable. With DNA strands, however, huge numbers of potential solutions can be evaluated and discarded in parallel, until the correct solution, if there is one, remains. Perhaps the most exciting proposal is Lipton's scheme for using DNA to code arbitrary binary numbers, which opens up the possibility of DNA-based solutions to a wider range of problems, such as matrix manipulation, factoring, dynamic linear programming and algebraic symbol processing. Since the methods of DNA computing are quite different from traditional step-by-step algorithms, perhaps we shall see the development of hybrid machines, part silicon and part DNA. Another promising application is to provide pure data storage: to encode one bit of data using DNA would occupy approximately 1 cubic nanometre, which means a test-tube ought to comfortably accommodate several hundred million gigabytes. Think what you could do with a bathful. Research into DNA computing is taking off in a big way. This year the 2nd Annual Workshop on DNA-based computing was held at Princeton University, and there is already a new scientific journal devoted to the subject. However, like many ideas for computers based on technologies other than silicon, although the DNA computer looks great on paper, the practical biochemical and engineering challenges are immense. DNA manipulations involve fearsomely complicated lab protocols, and are highly prone to contamination and error. Some scientists also warn of the potential ecological horrors of flushing discarded DNA computers down the drain. But apart from the technological excitement, all this talk about using DNA for computing has got the philosophers hopping too. Are the processes inside our own cells essentially performing computations to which human life is the answer? If this is so, the philosophers ask, then what is the question? For more on DNA computing, visit www.ecl.udel.edu/~magee/. Toby Howard teaches at the University of Manchester.
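A quick silicon sketch of the satisfiability example quoted above (mine, not Lipton's DNA protocol): enumerate every assignment in parallel, figuratively speaking, and keep the ones that satisfy both clauses.

```python
from itertools import product

def satisfies(a, b, c):
    """The Boolean expression from the text:
    ((a = 1) OR (b = 1) OR (c = 0)) AND ((b = 0) OR (c = 1))."""
    return (a == 1 or b == 1 or c == 0) and (b == 0 or c == 1)

# Generate every candidate assignment, then discard the ones that fail a clause.
solutions = [(a, b, c) for a, b, c in product((0, 1), repeat=3) if satisfies(a, b, c)]
print(solutions)   # five of the eight assignments satisfy the expression
```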
By Monika Muller at April 26 2019 00:39:08

To know if you are on the right track with worksheets in your classroom, answer (honestly) these simple questions:

_ Do my students groan when I hand out a worksheet? (The answer should be no.)
_ Are my lesson plans based on worksheets? (The answer should be no.)
_ Do I feel anxiety if I don't have worksheets copied? (The answer should be no.)
_ Are students excited about learning in my classroom? (The answer should be yes!)

There are many types of worksheets you can use as a teaching aid. First is coloring pages. This is good for teaching kids the different colors and their names, and the proper way to color. With First Crafts, kids learn how to make simple crafts and enjoy the fruits of their hard work. There are also worksheets that teach how to read. These include the basic sounds each letter produces. Kids try to read the words displayed before them. In the First Alphabet worksheet, kids learn how to write the alphabet. And in the First Animals worksheet, kids try to recognize the animals in the picture and learn the names of these animals.
He refers to them as "nature's fighter jets" and has devoted his life's work and an entire lab to monitoring their every move. Such is the relationship between Dr. Michael Dickinson and the objects of his attention—fruit flies. Career pursuits aside, Dr. Dickinson's connection to the insects is one he predicts will eventually lead to the development of flying robots capable of performing various covert tasks, such as spying and surveillance. An AFRL-sponsored bioengineer, Dr. Dickinson has been working with colleagues to unravel the mystery of how a fly's brain controls its muscles during precision flight. "They make lightning-fast 90° turns, take off or land upside down, and even carry twice their body weight," observes Dr. Dickinson from his uniquely dedicated lab facility at the California Institute of Technology (Pasadena). "I've spent a lot of time with folks in the lab trying to figure out the basic aerodynamics of insect flight," he explains. Until recently, scientists did not understand how insects could get airborne, let alone fly as well as they do. The conventional laws of aerodynamics dictate that a fly's tiny wings are too small to create enough lift to support its body weight. Based on these same conventions, scientists had long assumed that the viscosity of the air, and not inertia, was the fly's greater force to overcome in executing in-flight turns. Dr. Dickinson's research team demonstrated that the long-standing rules of steady-state aerodynamics, which irrefutably govern the flight of airplanes and birds, are simply not applicable to insects that flap their wings approximately 200 times per second. Specifically, the team's research revealed that to generate the forces that allow them to turn, fruit flies make subtle changes in both the tilt of their wings (relative to the ground) and the motion range of each wing flap. They then use their wings to create an opposing, twisting force that prevents them from spinning out of control. To gain insight regarding these anomalies, Dr. Dickinson and his team created a unique test arena known as the Fly-O-Vision. The Fly-O-Vision is essentially a fruit fly flight simulator that allows scientists to track a fly's wing motions as it responds to a changing visual landscape (see figure). The Fly-O-Vision's high-speed, infrared video cameras captured the wing and body motions of fruit flies as they performed rapid 90° turns, called saccades. The researchers then analyzed the three-dimensional wing and body positions of the fly as it executed turns at a speed faster than a human can blink. Dr. Dickinson's team concluded that fruit flies perform banked turns resembling those executed by larger flies, whose turns must primarily overcome inertia rather than friction.
Brown Patch, also called Rhizoctonia blight, is a common infectious disease of turfgrass. All turfgrasses grown in Kentucky lawns can be affected by Brown Patch. However, this disease is usually destructive only in tall fescue and perennial ryegrass during warm, humid weather. While Brown Patch can temporarily harm a lawn's appearance, it usually does not cause permanent loss of turf except in plantings less than one year old. Brown Patch disease is sometimes responsible for poor turf quality, but it is not the only cause of brown spots or bare patches in lawns, so you may also need to consider other possible causes of thinning or dead grass. Areas affected by Brown Patch are initially roughly circular, varying in size from one to five feet or more. During early morning hours, fine strands of grayish, cobwebby fungal growth (mycelium) may be evident at the margin of actively developing patches. This "smoke ring" disappears quickly as the dew dries. As an outbreak progresses and diseased patches coalesce, affected areas may lose the circular appearance and become irregular or diffuse. On blades of tall fescue, lesions resulting from very recent infections are olive-green; as they dry, lesions become tan and are surrounded by a thin, brown border. Brown Patch in perennial ryegrass causes blades to wither and collapse. Lesions initially are dark green or grayish green but quickly become tan as decayed leaves dry. In Kentucky bluegrass, infected leaves exhibit elongated, irregular, tan lesions which are surrounded by a yellow or brown border. Brown Patch is caused by infection of grass foliage and crowns by Rhizoctonia fungi. Rhizoctonia solani is a very common soilborne fungus and is the cause of Brown Patch symptoms in most instances. Rhizoctonia zeae can also cause Brown Patch in tall fescue under very hot, humid conditions. Rhizoctonia fungi survive the winter as tiny, brown, resting bodies (sclerotia) in the soil and thatch layer of the lawn. When environmental conditions are favorable for growth, the sclerotia germinate and produce cobwebby fungal mycelium, which is the active phase of the fungi. Rhizoctonia fungi often harmlessly colonize organic matter in the thatch. However, when stressful conditions weaken the grass, Rhizoctonia can infect the plants and cause disease. Leaf infections are the most common phase of Brown Patch, but infections of crowns and roots sometimes occur, particularly in seedlings. Rhizoctonia colonizes infected tissues and then forms new sclerotia, thus completing its life cycle. Tall fescue and perennial ryegrass are the lawn grasses most susceptible to Brown Patch under Kentucky conditions. Fine fescues (hard fescue, creeping red fescue, chewings fescue, and sheep fescue) and zoysia are all moderately susceptible to the disease. Occasionally, Kentucky bluegrass lawns can be affected by Brown Patch, although this grass is less susceptible than others. Seedlings of all grasses are more susceptible to infection than established plantings. Brown Patch is most destructive when the weather is humid and temperatures are stressful to the grass. Thus, in cool-season grasses such as tall fescue and perennial ryegrass, the disease is most severe under high temperatures (highs above 85 F, lows above 60 F). Conversely, in warm-season grasses such as zoysia, Brown Patch is most severe in humid weather with moderate temperatures (45 - 70 F).
Application of high levels of nitrogen fertilizer, particularly during spring and summer, favors development of Brown Patch by producing lush, succulent growth that is very susceptible to Rhizoctonia infection. Other factors increase disease severity by creating a humid environment favorable for growth of Rhizoctonia fungi. These factors include: overwatering, watering in late afternoon, poor soil drainage, lack of air movement, shade, a high mowing height, and overcrowding of seedlings. Excessive thatch, mowing when wet, and leaf fraying by dull mower blades also can enhance disease severity. Apply the bulk of nitrogen fertilizer to cool-season turfgrasses in fall and early winter rather than spring or summer. Fall fertilization increases overall root growth of cool-season grasses and reduces their susceptibility to several diseases. A single fall application may be applied in November; if making two applications, October and December are good times to fertilize. Avoid overfertilizing, particularly with fertilizers high in nitrogen. Maintain adequate levels of phosphorous and potassium in the soil. Do not attempt to cure summertime outbreaks of Brown Patch with nitrogen fertilization, as this will simply aggravate the disease. Set a mower height of no greater than 2 1/2 inches. A mower height greater than this aggravates Brown Patch by reducing air circulation and allowing more leaf-to-leaf contact, conditions which permit greater fungal growth during humid weather. Mow regularly to promote air circulation and rapid drying of the turf, making the lawn environment less favorable for fungal growth. To avoid stressing the grass, mow often enough so that no more than one-third to one-half of the leaf length is removed at any one mowing. In tall fescue lawns, reducing the mower height to 2 inches or less can further reduce outbreaks of Brown Patch. However, keep in mind that lawns mowed this closely must be mowed frequently. In an actively growing tall fescue lawn mowed at 2 inches, it may be necessary to mow several times a week to prevent removal of more than one-half of the leaf length at one mowing. Never scalp the lawn from 4 inches down to 2 inches or less. During an active outbreak of Brown Patch in hot, humid weather, clipping removal can help eliminate a food base for the fungus. However, in the absence of an active disease outbreak, returning clippings to the lawn is a beneficial practice that returns nutrients to the soil. Keep the mower blade sharp. A dull blade shreds the leaves, creating an ideal site for infection. When irrigation is necessary, wet the soil to a depth of at least four inches to promote deep rooting. Check the watering depth by pushing a metal rod or screwdriver into the soil. It will sink easily until it reaches dry soil. Avoid frequent, light waterings. These encourage the grass to develop a shallow root system and frequently provide the surface moisture that Rhizoctonia fungi need to infect the leaves. If a disease outbreak is evident, water early in the day so that the leaves dry quickly. If the lawn is watered late in the day, the leaves may remain wet until morning, thus providing long periods of leaf wetness favorable for infectious fungi. Removing dew, by dragging a hose across the lawn or by very light irrigation during early morning hours, will reduce prolonged leaf wetness and remove leaf exudates that encourage disease development. 
Avoid using excessive seeding rates when seeding or renovating a lawn, as overcrowding can aggravate an outbreak of Brown Patch. See the UK Extension Publication AGR-52, "Selecting the Right Grass for Your Kentucky Lawn," for information on seeding rates. Selectively prune nearby trees and shrubs to increase air movement and light penetration, thereby allowing leaf surfaces to dry more quickly. Avoid applying herbicides during an active outbreak, as these may aggravate the disease. In an established lawn, fungicide sprays are not recommended to control Brown Patch. Cultural practices will usually do a great deal to reduce the disease. Even if an outbreak of Brown Patch occurs, crowns and roots of established plants often survive, and blighted turf begins to recover when cooler weather arrives. So an established, well-managed lawn often will recover from Brown Patch without fungicide applications. Probably the principal situation in Kentucky where judicious use of a fungicide in a home lawn is necessary is to control Brown Patch in a newly seeded lawn of tall fescue or perennial ryegrass. During the summer following a spring seeding, the immature plants can be easily killed by outbreaks of Brown Patch during hot, humid weather. Fungicide sprays may be helpful to protect tall fescue or perennial ryegrass lawns seeded the previous spring, to prevent loss of turf during the first season of growth. Under very high disease pressure, a fungicide spray may even be needed during the first summer following a seeding made the previous autumn, especially if the lawn was sown in late autumn. During the first summer of growth in a new lawn, inspect the lawn regularly during hot, humid weather and be prepared to have a certified pesticide applicator treat the yard if necessary. Fungicide recommendations are described in the UK Extension Publication PPA-1, "Chemical Control of Turfgrass Diseases." Once the lawn is established, there should be little need for future fungicide applications.
Brainstorm activities for literature! Older students are accustomed to brainstorming for their own writing projects. But a regular ol’ brainstorm, not connected to a writing assignment? If your students like to brainstorm, activities for literature are easy to implement and can keep students (and teachers!) from getting in a rut. You can add these to any lesson plan to review or spice up a reading assignment. Plus, these ideas help students connect literary terms to literature. Since a brainstorm needn’t be neat, students can flip through the book, finding quotes from the character of study that indirectly characterize him or her. Students should also look for other characters’ comments regarding the character as well as interactions between characters – eye rolling? dismissive looks? true engagement? All of those small gestures have meaning. At the end of brainstorming, students should draw conclusions about characters. Brainstorm a symbol’s meaning… Color, history, mythological connections – those components all tell about the symbol. Where is the symbol mentioned, and whom is it connected with? A clear example of this is the scarlet ibis in “The Scarlet Ibis.” The ibis is in a nonnative location, is red (like blood), and suffers – all like Doodle. All three of those descriptions about the ibis can be further explored in connection to Doodle. Ask students to explain what message a symbol gives us. Encourage students to brainstorm circumstances surrounding conflicts. Is the root of a conflict a personality clash? an event that the audience doesn’t know about, but might know eventually? the result of a larger force? immaturity or misunderstanding? a hidden belief (that the character may not consciously admit)? Stories must have conflicts, and analyzing them allows students to understand the story better, and eventually the theme. Allow students to freely brainstorm ideas and feelings about the conflicts. This will lead to a greater understanding of the story as a whole. Ask students to write allusions and to research the reference. Why is the author including a particular allusion? Does it contrast a character or theme? Emphasize one? Does it enhance the reader’s understanding? Often, allusions are particular to the setting. Brainstorm any literary term… Allow students to choose what literary term stood out to them. The activity will have greater meaning that way. Even if students’ brainstorm activities for literature don’t result in any profound discovery – who cares? They reviewed the story, took a risk, and realized what doesn’t work. But – more often than not – a brainstorming note later connects to another part of the story, allowing students to experience true learning. I have no brainstorming worksheets or graphic organizers for these ideas. Grab a stack of sticky notes or note cards. Ask students to write one idea per note. Assemble them when students finish. Have students take notes as they walk around the room. Ask them to compile the information and make copies of their notes – and then use that information to make a quiz. (A quiz the students designed? They’re pretty agreeable about this because they know what’s on the quiz!) When we study literature with a big brainstorm – activities for literature like these are easy to implement – and easy to change if one heads south! Before a large test, students can brainstorm in groups and then present their findings to the class. What works for you? Add ideas that work for your classes when brainstorming about literature.
“An apple a day keeps the doctor away.” > This proverb comes from the ancient Romans, who believed the apple had magical powers to cure illness. In fact, apples are filled with vitamin C, protein, pectin, natural sugars, copper, and iron. They do promote health. “The pot calling the kettle black.” > In the seventeenth century, both pots and kettles turned black because they were used over open fires. Today, this idiom means criticizing someone else for a fault of one’s own. So when you say, “The pot calling the kettle black” instead of “Someone hypocritically criticises a person for something that they themselves do,” you’re using an idiom. The meaning of an idiom is different from the actual meaning of the words used. “An apple a day keeps the doctor away” is a proverb. Proverbs are old but familiar sayings that usually give advice. Both idioms and proverbs are part of our daily speech. Many are very old and have interesting histories, and many more examples of both are easy to find. Now, we shall look at what an idiom is and its characteristics.
In 3rd grade, students learn about the physical and living world as they make observations, experiment, research and record and present what they have learned. Third graders conduct hands-on experimentation to develop questions, hypotheses and make observations and conclusions. Children may work in small groups or as a class observing and experimenting. As in other grades, the specific topics studied in science vary according to state however common topics studied in 3rd grade include: earth and space; plants; the cycle of life; animals; electricity and magnetism, and motion and sound. Consult your child’s teacher or research your state’s science standards for more details. In order to build science skills, your 3rd grader: - Observes living and non-living things and makes inferences about the observations. - Researches information on a variety of topics using texts and computers. - Collects and uses data to support experiments and what he learns. - Records her observations both through writing and talking and uses her observations to explain and make conclusions. - Understands what living things need (air, water and food) and what they do (grow, move and reproduce). - Studies and observes life cycles. - Experiments with different types of materials and different matter such as solid, liquids, and gas. - Works in groups and as a class to conduct experiments and create projects. Science Activities: 3rd Grade - Research Your World: Choose something your child likes for example, animals, plants, cooking, weather, and the body. Your child can come up with a list of questions she has about a topic and then work together to find the answers, experiment and observe that topic. - Plant Something: Plant something outside or inside and ask your child to observe what she sees, recording the growth and process. Once the plant has grown, help your child identify the different parts of the plants and talk and learn about what those parts do. - Move It!: Go outside or stay inside to experiment with motion. Take a variety of objects, for example, a ball, a balloon, a paper airplane or a toy car and have them move in different ways. Slide them down a ramp, hill or stairs, push or throw them with different amounts of force or blow air on them. As your child does this, talk about the different speeds of the objects, what makes them go faster and slower and why this might be. - Picture Science: You and your child can take close-up pictures of objects in science such as animal parts, fur, plants, trees, or different materials (wood, rubber, metal). Then you and your child can use your observation skills to try to guess what the picture is. Make this a game, taking turns guessing what each other’s picture is. - Quiz Show: Find either actual objects or pictures of objects which are both “alive” and “not alive.” Show your child one object at a time and ask him to answer “alive” or “not alive.” Make this feel fast paced and like a quiz show, showing objects quickly and asking your child to answer as quickly as possible. You can even time how long it takes. After a round of play, look at the different objects and talk about the similarities and differences between the alive and non-live objects.
CAPA 12 - Knowledge of Culture & Language This course will explore issues related to First Nations, Métis, Inuit knowledge, culture and language. Students will explore these issues within a community, provincial and national context and will have the opportunity to apply the knowledge by producing assignments that will be reflective of what they have learned. At the end of the course, students will be expected to share their own personal and professional moral codes of ethics and how that will shape and or transform their current practice. Upon successful completion of this course, participants will be able to: - Provide advice based on an understanding of Aboriginal values, traditions, and cultural practices - Demonstrate respect for local cultural values and customs - Demonstrate respect for local languages and dialects - Demonstrate respect for traditional knowledge and application in day-to-day work and decision making - Demonstrate support for diverse perspectives and points of view - Encourage and foster an environment that supports social and cultural learning and initiatives
cystic fibrosis transmembrane conductance regulator The CFTR gene provides instructions for making a protein called the cystic fibrosis transmembrane conductance regulator. This protein functions as a channel across the membrane of cells that produce mucus, sweat, saliva, tears, and digestive enzymes. The channel transports negatively charged particles called chloride ions into and out of cells. The transport of chloride ions helps control the movement of water in tissues, which is necessary for the production of thin, freely flowing mucus. Mucus is a slippery substance that lubricates and protects the lining of the airways, digestive system, reproductive system, and other organs and tissues. The CFTR protein also regulates the function of other channels, such as those that transport positively charged particles called sodium ions across cell membranes. These channels are necessary for the normal function of organs such as the lungs and pancreas. About 80 CFTR mutations have been identified in males with congenital bilateral absence of the vas deferens. Most affected males have a mild mutation in at least one copy of the gene in each cell. These mutations allow the CFTR protein to retain some of its function. Some affected males have a mild mutation in one copy of the CFTR gene in each cell and a more severe, cystic fibrosis-causing mutation in the other copy of the gene. Mutations in the CFTR gene disrupt the function of the chloride channel, preventing the usual flow of chloride ions and water into and out of cells. As a result, cells in the male genital tract produce mucus that is abnormally thick and sticky. This mucus clogs the tubes that carry sperm from the testes (the vas deferens) as they are forming, causing them to deteriorate before birth. Without the vas deferens, sperm cannot be transported from the testes to become part of semen. Men with congenital bilateral absence of the vas deferens are unable to father children (infertile) unless they use assisted reproductive technologies. More than 1,000 mutations in the CFTR gene have been identified in people with cystic fibrosis. Most of these mutations change single protein building blocks (amino acids) in the CFTR protein or delete a small amount of DNA from the CFTR gene. The most common mutation, called delta F508, is a deletion of one amino acid at position 508 in the CFTR protein. The resulting abnormal channel breaks down shortly after it is made, so it never reaches the cell membrane to transport chloride ions. Disease-causing mutations in the CFTR gene alter the production, structure, or stability of the chloride channel. All of these changes prevent the channel from functioning properly, which impairs the transport of chloride ions and the movement of water into and out of cells. As a result, cells that line the passageways of the lungs, pancreas, and other organs produce mucus that is abnormally thick and sticky. The abnormal mucus obstructs the airways and glands, leading to the characteristic signs and symptoms of cystic fibrosis. Genetics Home Reference provides information about hereditary pancreatitis. A few mutations in the CFTR gene have been identified in people with isolated problems affecting the digestive or respiratory system. For example, CFTR mutations have been found in some cases of idiopathic pancreatitis, an inflammation of the pancreas that causes abdominal pain, nausea, vomiting, and fever. Although CFTR mutations may be a risk factor, the cause of idiopathic pancreatitis is unknown. 
Changes in the CFTR gene also have been associated with rhinosinusitis, which is a chronic inflammation of the tissues that line the sinuses. This condition causes sinus pain and pressure, headache, fever, and nasal congestion or drainage. Other respiratory problems, including several conditions that partially block the airways and interfere with breathing, are also associated with CFTR mutations. These conditions include bronchiectasis, which damages the passages leading from the windpipe to the lungs (the bronchi), and allergic bronchopulmonary aspergillosis, which results from hypersensitivity to a certain type of fungal infection. Additional genetic and environmental factors likely play a part in determining the risk of these complex conditions. - cAMP-dependent chloride channel - cystic fibrosis transmembrane conductance regulator (ATP-binding cassette sub-family C, member 7) - cystic fibrosis transmembrane conductance regulator, ATP-binding cassette (sub-family C, member 7)
With bird habitat vanishing before our eyes, fewer birds are gracing our skies. Once so numerous that their cries and beating wings created a deafening sound over California’s great Central Valley, migrating flocks are disappearing. They have fallen to habitat destruction, water and food shortages and climate change. As the silence grows, so do the ominous implications. Find out how The Nature Conservancy is spearheading BirdReturns, a pilot project combining crowd-sourced data, hard science and economic incentives to provide pop-up habitats for birds on rice fields in the Sacramento Valley. THE DECLINE OF THE PACIFIC FLYWAY California is a linchpin of the Pacific Flyway, a grand route of avian migration that spans from Alaska to South America. Birds traveling this pathway come to California to feed, rest and winter in the state’s wetlands and forests. They carry nutrients that enrich our soils—including agricultural lands—and play a vital role in the ecosystem as both predators and prey. Shorebirds, waterfowl, songbirds and raptors also generate billions of dollars in revenue from birdwatchers and hunters. California’s wetlands once supported 40 to 80 million waterfowl each winter. Today, more than 95 percent of wetlands have been converted to farmland, cities and other uses. Despite the habitat losses, California still supports some of the largest concentrations of wintering waterfowl and shorebirds found anywhere in the world. The majority of these birds rely upon a quilt of managed wetlands and bird-friendly agricultural lands. But with California continuing to grow, these lands, and the water that supports them, are under constant threat. The Nature Conservancy has worked for more than 50 years on projects that help protect migratory birds in California. Despite our success in watersheds such as the Cosumnes and Sacramento Rivers, migratory species continue to decline—not just in California, but globally. Under these urgent circumstances, we have devised solutions for safeguarding migratory birds that can be implemented on a broad scale. Our plan is threefold: · Protect and restore critical habitat over large areas · Protect and enhance bird-friendly agricultural lands · Secure adequate water delivery to wetlands and compatible agricultural lands With partners including the California Rice Commission, Cornell Lab of Ornithology and Point Blue Conservation Science, The Nature Conservancy will build on techniques we have developed over decades of work with California watersheds. With a goal of creating one million acres of Central Valley wetland habitat, our creative approach ensures that we can protect nature and help farmers thrive while meeting the needs of our growing planet. And it means one of nature’s great shows—the Pacific Flyway migration—will be witnessed by generations to come.
This lesson helps young people understand the reasons that we bathe. The youth will take part in a demonstration that helps them visualize how germs are spread from person to person. Finally, they will practice proper hand-washing with soap. Before facilitating this lesson, you may want to review the following notes about cleanliness. These facts can be shared with young people during your discussions. How often a person should take a bath or shower depends somewhat on individual preference and family and cultural norms. But there are several reasons that it’s important to make sure kids are getting cleaned up on a regular basis, including: - Physical Health—Regular baths or showers with a mild soap, followed by drying with a clean towel, help wash away germs and prevent illness, infection, and other problems. - Mental Health—Taking a bath or shower in the morning can be invigorating and help you wake up; in the evening it can be soothing and help you calm down. - Social Health—Bodies have smells…lots of them. The less often we clean ourselves the more likely we are to develop noticeable odors. Sometimes these can turn people off. The appearance of not being clean can also cause us to feel self-conscious and insecure. Most people don’t need a lot of deodorant, special creams, or perfumes to look, feel, and smell clean as long as they are following a regular cleaning routine. Ask the youth, why is it important for us to keep our bodies clean by taking baths or showers? Most young people will be able to answer this but many children do try to avoid the bath at some point in their lives, so reinforcing the concept is a good idea. Use the information from the Instructor Notes above as appropriate. Activity: Looking for Germs Explain to the youth that one very important reason to take a bath or shower is to wash away germs that can make us sick. Tell them they are going to demonstrate how easy it is to pass germs around. - Explain that germs are a lot like glitter in that they get on everything we touch or that touches us. That’s why it’s so important to wash ourselves at the end of a day or a time we’ve been very active or gotten dirty. - Give each young person a small amount of petroleum jelly to rub on their hands. - Then sprinkle their hands with a bit of glitter. Have them shake hands with one another, and touch pieces of paper or other objects that can get a little bit glittery. (Caution…this can get MESSY!) - Once the youth have experienced how easy it is to spread germs (by touching other objects) instruct them to wash their hands thoroughly to remove all glitter. - To assure proper hand-washing, we need to rub all surfaces of our hands using soap and clean running water to make a lather. Rub hands for at least 20 seconds. - Once everyone has had a chance to wash their hands, ask the youth about their experience and note that a quick rinse doesn’t remove glitter or germs. Activity: Hand Washing - Teach young people a song to the tune of “Here We Go ‘Round the Mulberry Bush.” The words are, “This is the way we wash our hands, wash our hands, wash our hands; this is the way we wash our hands, to make sure they get clean”. - Explain that this song can help you make sure you wash your hands for at least 20 seconds. Using a clock or timer, see how long it takes you to sing the song. For example, if it takes 10 seconds to sing the verse, young people can sing it twice through so that they know that they have washed their hands for at least 20 seconds. 
- Have the young people each practice washing their hands while singing the song. - If time permits, ask for suggestions of other verses and mime them as a class. They might suggest, for example, “This is the way we wash our hair”. At the end of the session you can reiterate that while bathing and washing are personal things and everyone gets to make their own choices about them, there are good reasons to have a regular routine, and that it especially impacts others around us if we don’t keep our hands clean Continuing the Conversation Hand out the Healthy Families Newsletter in English or Spanish so that families can continue the conversation about healthy washing habits.
What is “space”? Space can be divided into different types, each with a slightly different definition, as follows: 2D space is a measurable distance on a surface which shows length and width but lacks thickness or depth. 3D space is a sensation of space that seems to have length, width, and height to create visual or real depth. This term can also describe the perception of 3D space within a two-dimensional composition. 4D space is a highly imaginative treatment of forms that gives a sense of intervals of time or motion. How do we manipulate spatial illusion in two-dimensional art? Within each of these types of space, there are some major principles that guide our perception of how the space is rendered: - positive and negative shapes - direction / linear perspective - scale / proportion - overlapping shapes Positive and Negative Shapes Positive shapes are the enclosed areas that represent the initial selection of shapes planned by the artist. They may suggest recognizable objects or merely be planned non-representational shapes. Negative shapes are unoccupied or empty space left after the positive shapes have been laid down by the artist; however, because these areas have boundaries, they also function as shapes in the total pictorial structure. Sometimes, especially in abstractions, it is difficult to ascertain positive from negative shapes, making space appear more two-dimensional in quality. Direction and Linear Perspective Direction can set the mood of a design as it suggests movement. Four basic types of direction are used in two-dimensional design; notice that these types move on the flat, planar x and y axes. Artists can also depict the illusion of three-dimensional movement through linear perspective, which can induce a feeling of moving forward or backward on the “z-axis”, in and out of the picture plane, respectively. Proportion / Scale Proportion deals with size ratios between shapes/forms in a composition. Scale is a specific device that enables artists to manipulate these proportions. Typically, larger objects advance into the foreground, while smaller items recede toward the background. Overlapping shapes in a composition generally give a perception of depth, pushing objects on top closer to the viewer and objects in the back further away from the viewer.
IBM today announced a materials science breakthrough at the atomic level that could pave the way for a new class of non-volatile memory and logic chips that would use less power than today's silicon-based devices. Rather than using conventional electrical means that operate today's semiconducting devices, IBM's scientists discovered a new way to operate chips using tiny ionic currents, which are streams of charged atoms that could mimic the event-driven way in which the human brain operates. Today's computers typically use semiconductors made with CMOS process technologies and it was long thought that these chips would double in performance and decrease in size and cost every two years. But the materials and techniques to develop and build CMOS chips are rapidly approaching physical and performance limitations, and new solutions may soon be needed to develop high-performance, low-power devices. IBM research scientists showed that it is possible to reversibly transform metal oxides between insulating and conductive states by the insertion and removal of oxygen ions driven by electric fields at oxide-liquid interfaces. Once the oxide materials, which are innately insulating, are transformed into a conducting state, the IBM experiments showed that the materials maintain a stable metallic state even when power to the device is removed. This non-volatile property means that chips using devices that operate using this novel phenomenon could be used to store and transport data in a more efficient, event-driven manner instead of requiring the state of the devices to be maintained by constant electrical currents. "Our ability to understand and control matter at atomic scale dimensions allows us to engineer new materials and devices that operate on entirely different principles than the silicon-based information technologies of today," said Dr. Stuart Parkin, an IBM Fellow at IBM Research. "Going beyond today's charge-based devices to those that use minuscule ionic currents to reversibly control the state of matter has the potential for new types of mobile devices. Using these devices and concepts in novel three-dimensional architectures could prevent the information technology industry from hitting a technology brick wall." To achieve this breakthrough, IBM researchers applied a positively charged ionic liquid electrolyte to an insulating oxide material - vanadium dioxide - and successfully converted the material to a metallic state. The material held its metallic state until a negatively charged ionic liquid electrolyte was applied, to convert it back to its original, insulating state. Such metal-to-insulator transition materials have been extensively researched for a number of years. However, IBM discovered that it is the removal and injection of oxygen into the metal oxides that is responsible for the changes in state of the oxide material when subjected to intense electric fields. The transition from a conducting state to an insulating state has also previously been obtained by changing the temperature or applying an external stress, neither of which lends itself to device applications. This research was published yesterday in the peer-reviewed journal Science.
In order to perform a prediction card trick, you will need the following: a deck of cards, the box the cards came in, a pen or a pencil, and paper. With this trick, you hand the deck of cards to a bystander. In advance, remove four black cards from the deck, and place them in the box. Tell the bystander to shuffle the cards as much as they want. You write down your prediction on a sheet of paper and set it aside. Your prediction should be, "The red pairs will contain four more cards than the black pairs" (in other words, there will be two more red pairs than black pairs). When the person is done shuffling, tell them to deal the deck into three piles. As they peel two cards off the deck at a time, tell them to make a separate pile for pairs of reds, pairs of blacks, and a third pile for cards that are one of each. Pick up the stack of cards that have one of each and place them in the card box. Afterwards, have the bystanders count the cards in the red-pair and black-pair piles. The trick works because every mixed pair uses up one red and one black card, so the difference between the two remaining piles always equals the four black cards you removed in advance. If they want you to do it again, your prediction should be that there will be an equal number of red and black, provided you return all the cards to the deck first.
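For readers who want to check the arithmetic behind the prediction, here is a minimal simulation sketch (Python is used purely for illustration and is not part of the trick itself) that removes four black cards, shuffles, deals the deck in pairs, and confirms the red-pair pile always finishes exactly two pairs, that is four cards, ahead of the black-pair pile:

```python
import random

def run_trick(num_black_removed=4, trials=10000):
    """Deal a shuffled deck (with some black cards removed) into red-pair,
    black-pair, and mixed piles; return the set of observed differences."""
    diffs = set()
    for _ in range(trials):
        deck = ["red"] * 26 + ["black"] * (26 - num_black_removed)
        random.shuffle(deck)
        red_pairs = black_pairs = mixed = 0
        for i in range(0, len(deck), 2):
            first, second = deck[i], deck[i + 1]
            if first == second == "red":
                red_pairs += 1
            elif first == second == "black":
                black_pairs += 1
            else:
                mixed += 1
        diffs.add(red_pairs - black_pairs)
    return diffs

# With four black cards removed, the difference never varies:
print(run_trick())  # prints {2}: two more red pairs, i.e. four more red cards
```

No matter how the deck is shuffled, the simulation reports the same difference every time, which is exactly what makes the "prediction" safe to write down in advance.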
Before Egypt became a part of the British empire, it had been invaded by many populations including the Arabs and the Turks, respectively. In 1882, Egypt officially came under the rule of the Brits when they successfully ended a revolt against the Turkish rulers. Though Britain declared its independence in 1922, British troops remained there until the mid-1950s. The U.S. State Department attributes this to Britain’s desire to maintain control of the Suez Canal. On July 22-23, 1952 Lt. Col. Gamal Abdel Nasser ousted then King Farouk. One year later Egypt was declared a republic on June 19, 1953. Three years later, in 1956, Nasser was elected president. While in office, Nasser promoted the concept of “Arabism” (aka: Arab Nationalism) which sought to unify every Arab by creating a single nation. Though an Arab “superstate” was never created during Nasser’s reign, the idea of Egyptian leadership of the Arab world was immensely popular at the time. In the same year he was elected, Nasser made bold moves during what came to be known as the Suez Crisis. His actions during the crisis turned Nasser into the darling of Egypt. The Suez Crisis
The best way to survive the ill-effects of climate change and pollution may be to simply sleep through it. According to a new study published in The American Naturalist, mammals that hibernate or that hide in burrows are less likely to turn up on an endangered species list. The study's authors believe that the ability of such "sleep-or-hide" animals to buffer themselves from changing environments may help them avoid extinction. The idea that sleepers and hiders may have a survival advantage first arose from a study of the fossil record conducted by Dr. Lee Hsiang Liow of the University of Oslo. That study found that sleep-or-hide mammals seem to last longer as species than other mammals. In this latest study, Liow and colleagues from the Universities of Oslo and Helsinki wanted to see if this trend holds for mammals living today. Using a database of over 4,500 living mammal species, Liow and his team identified 443 mammals that exhibit at least one sleep-or-hide behavior. Their list includes tunneling and burrowing animals like moles and chipmunks, as well as animals that can periodically lower metabolic rates like squirrels, bats and bears. The sleep-or-hide list was then compared with the "Red List" of threatened species compiled by the International Union for Conservation of Nature. As the researchers suspected, sleep-or-hide species are less likely to appear in any of the IUCN's high-risk categories. The pattern holds even under controls for other traits that may influence extinction rates, such as body size (smaller animals generally have lower extinction rates) and geographic distribution. Despite these results, sleepers and hiders shouldn't be viewed as evolutionary "winners," the authors say. "Sleep-or-hide species survive longer, but in a changing world they run the risk of eventually becoming seriously obsolete," says Mikael Fortelius of the University of Helsinki, one of the study's authors. "Species that don't sleep or hide are short-lived, but they may be more likely to leave successful descendants. In a way it's the classic choice between security and progress." Reference: Lee Hsiang Liow, Mikael Fortelius, Kari Lintulaakso, Heikki Mannila, and Nils Chr. Stenseth, "Lower Extinction Risk in Sleep-or-Hide Mammals," The American Naturalist, Feb. 2009 Source: University of Chicago
As many network engineering students have found, a number of different protocols and concepts must be learned in a specific sequence in order to understand how they work with each other. This fact is very apparent when learning about simple traffic forwarding. Initially, students learn about the basics of LANs and switched networks, as well as how devices communicate with each other without using routers. Once students understand this background information, the lessons move toward learning what routers do and how packets are routed. This article takes a small step past this point to talk about how Cisco devices, in both older and more modern hardware, speed up packet forwarding by using packet-switching methods on such devices. A Little History Lesson A number of different methods have been developed to improve the performance of networking devices, both by increasing packet-forwarding speed and by decreasing packet delay through a device. Some higher-level methods focus on decreasing the amount of time needed for the routing process to converge; for example, by optimizing the timers used with the Open Shortest Path First (OSPF) protocol or the Enhanced Interior Gateway Routing Protocol (EIGRP). Optimizations are also possible at lower levels, such as by optimizing how a device switches packets, or how processes are handled. This article focuses at this lower level, specifically by examining how vendors can decrease forwarding time through the development and implementation of optimized packet-switching methods. The three main switching methods that Cisco has used over the last 20 years are process switching, fast switching, and Cisco Express Forwarding (CEF). Let’s take a brief look at these three methods. Of the three methods, process switching is the easiest to explain. When using only process switching, all packets are forwarded from their respective line cards or interfaces to the device’s processor, where a forwarding/routing and switching decision is made. Based on this decision, the packet is sent to the outbound line card/interface. This is the slowest method of packet switching because it requires the processor to be directly involved with every packet that comes in and goes out of the device. This processing adds delay to the packet. For the most part, process switching is used only in special circumstances on modern equipment; it should not be considered the primary switching method. After process switching, fast switching was Cisco’s next evolution in packet switching. Fast switching works by implementing a high-speed cache, which is used by the device to increase the speed of packet processing. This fast cache is populated by a device’s processor. When using fast switching, the first packet for a specific destination is forwarded to the processor for a switching decision (process switching). When the processor completes its processing, it adds a forwarding entry for the destination to the fast cache. When the next packet for that specific destination comes into the device, the packet is forwarded using the information stored in the fast cache—without directly involving the processor. This approach lowers the packet switching delay as well as processor utilization of the device. For most devices, fast caching is enabled by default on all interfaces. Cisco Express Forwarding (CEF) Cisco’s next evolution of packet switching was the development of Cisco Express Forwarding. 
This switching method is used by default on most modern devices, with fast switching being enabled as a secondary method. CEF operates through the creation and reference of two new components: the CEF Forwarding Information Base (FIB) and the CEF Adjacency table. The FIB is built based on the current contents of a device’s IP routing table. When the routing table changes, so does the CEF FIB. The FIB’s functionality is very basic: It contains a list of all the known destination prefixes and how to handle switching them. The Adjacency table contains a list of the directly connected devices and how to reach them; adjacencies are found using protocols such as the Address Resolution Protocol (ARP). These tables are stored in the main memory of smaller devices, or in the memory of a device’s route processor on larger devices; this mode of operation is called Central CEF. An additional advantage when using CEF on supported larger Cisco devices is that the CEF tables on those devices can be copied and maintained on specific line cards; this mode of operation is called Distributed CEF (dCEF). When using dCEF, the packet switching decision doesn’t have to wait for the Central CEF lookup information; these decisions can be made directly on the line card, thus increasing the switching speed of the traffic going from interface to interface on any of the supporting line cards. This design results in decreased utilization of the backplane between the line card and the route processor, providing additional room for other traffic. One question I always had when I was learning this stuff for the first time was, “Why should I care?” As a network engineer, most of these things would be transparent in my day-to-day activities. Most people only cared whether the installed device processed the packets at the device’s top-rated speed. However, any good network engineer will tell you that it’s always best to have at least a cursory idea of how devices handle traffic, from the lowest layer on the wire or cable to the highest level shown to a user. Most experienced engineers don’t need these concepts and this knowledge day to day, but only when implementing a new feature or troubleshooting a hard-to-find problem. For new students, however, this information is important, as many tests will cover this material. I hope the information in this article will help new students who are just learning about these methods, and that it will also serve as a reference for experienced engineers who need a little tune-up on packet switching methods.
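To make these ideas concrete, here is a small illustrative sketch in Python (a toy model only; the prefixes, next hops, and interface names are invented, and real Cisco devices implement the FIB and adjacency table as optimized structures in the route processor's memory or on line cards). It models a longest-prefix-match lookup against a tiny FIB, resolves the next hop through an adjacency table, and keeps a simple per-destination cache in front of the lookup in the spirit of fast switching:

```python
import ipaddress

# Hypothetical FIB (prefix -> next hop), built from the routing table, and
# adjacency table (next hop -> outbound interface and MAC), built from
# ARP-style discovery. All values are invented for illustration.
fib = {
    ipaddress.ip_network("0.0.0.0/0"): "203.0.113.1",
    ipaddress.ip_network("10.0.0.0/8"): "10.255.0.1",
    ipaddress.ip_network("10.1.1.0/24"): "10.1.1.254",
}
adjacency = {
    "203.0.113.1": ("Gi0/0", "00:11:22:33:44:55"),
    "10.255.0.1": ("Gi0/1", "00:11:22:33:44:66"),
    "10.1.1.254": ("Gi0/2", "00:11:22:33:44:77"),
}
fast_cache = {}  # destination IP -> forwarding result, as in fast switching

def lookup(dst_ip):
    """Longest-prefix match against the FIB, then resolve the adjacency."""
    addr = ipaddress.ip_address(dst_ip)
    best = max((net for net in fib if addr in net), key=lambda net: net.prefixlen)
    next_hop = fib[best]
    interface, mac = adjacency[next_hop]
    return next_hop, interface, mac

def forward(dst_ip):
    """First packet to a destination does the full lookup; later ones hit the cache."""
    if dst_ip not in fast_cache:
        fast_cache[dst_ip] = lookup(dst_ip)
    return fast_cache[dst_ip]

print(forward("10.1.1.42"))  # matches 10.1.1.0/24 (most specific prefix)
print(forward("10.9.9.9"))   # falls back to 10.0.0.0/8
print(forward("8.8.8.8"))    # falls back to the default route 0.0.0.0/0
```

The most specific matching prefix wins, which is also why the FIB in this sketch would have to be rebuilt whenever the underlying routing table changes, mirroring how CEF keeps its FIB in sync with the IP routing table.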
John Stuart Mill was the most influential British philosopher of the 19th century. His works spanned a startling variety of topics including logic, metaphysics, epistemology, ethics, political philosophy, and social theory. All of Mill's writings were aimed at the support and expansion of philosophical radicalism, and he had a significant influence on social theory, political theory, and political economy. His work, Utilitarianism, published in 1863, has been described as "the most influential philosophical articulation of a liberal humanistic morality that was produced in the 19th century." Utilitarianism addresses the subject of ethics, exploring the subject that has been perhaps the most puzzling for philosophers and thinkers of all ages. What is right? What is good? How can we use an understanding of the right and good to provide a moral framework that leads humanity to happiness, balance, and progress in all ways? Mill's work on this topic, though highly criticized in his own era, is still taught in university ethics courses around the world. The narration of the work is preceded by a summary that explores Mill's life and the background of his philosophy. Also included are an overview of the work, a synopsis and analysis, and an examination of the historical context of the piece. The AudioLearn edition of Utilitarianism is suitable for philosophers and students, and for all who wish to delve into history and ethical philosophy. ©2017 AudioLearn (P)2017 AudioLearn
On October 31, 1517, the priest and scholar Martin Luther approached the door of the Castle Church in Wittenberg and posted his Ninety-Five Theses. Reformation Day commemorates the publication of the Ninety-Five Theses; it has been historically important across much of Europe and is a civic holiday in several German states, including Brandenburg and Saxony. What are the 95 Theses of Martin Luther? The 95 Theses were written in 1517 by a German priest and professor of theology named Martin Luther. His revolutionary ideas served as the catalyst for the eventual breaking away from the Catholic Church and were later instrumental in the Protestant Reformation. The document opens: "Out of love for the truth and from desire to elucidate it, the Reverend Father Martin Luther, Master of Arts and Sacred Theology, and ordinary lecturer therein at Wittenberg, intends to defend the following statements..." The full text of Luther's Ninety-Five Theses, along with summaries, is widely available online.
Sunnah is an Arabic word meaning "practice." When Muslims use the term, they generally refer to the Sunnah of Muhammad (SAW), that is, his statements, actions, tacit approvals, and descriptions of his conduct and appearance. By following his Sunnah, Muslims follow the Qur'anic injunction to obey Allah and his Messenger. The Qur'an tells us that Muhammad is a fine example, so following that example is a necessity. The actions of a Muslim should first come from the instructions in the Qur'an, which are often explained by the actions of the Messenger of Allah (SAW). If guidance cannot be found there (or the instructions are not clear), a Muslim should refer to the Sunnah of his Prophet. The development of legal opinions and understanding from these two primary sources is known as fiqh.
In statistics, the Kolmogorov-Smirnov test (often referred to as the K-S test) is used to determine whether two underlying probability distributions differ from each other or whether an underlying probability distribution differs from a hypothesized distribution, in either case based on finite samples. In the one-sample case the KS test compares the empirical distribution function with the cumulative distribution function specified by the null hypothesis. The main applications are for testing goodness of fit with the normal and uniform distributions. For normality testing, minor improvements made by Lilliefors lead to the Lilliefors test. In general the Shapiro-Wilk test or Anderson-Darling test are more powerful alternatives to the Lilliefors test for testing normality. The two-sample KS test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples. Mathematical statistics: The empirical distribution function Fn for n observations yi is defined as Fn(x) = (1/n) × (the number of observations yi that are ≤ x). The two one-sided Kolmogorov-Smirnov test statistics are given by Dn+ = sup_x [Fn(x) − F(x)] and Dn− = sup_x [F(x) − Fn(x)], where F(x) is the hypothesized distribution or another empirical distribution. The probability distributions of these two statistics, given that the null hypothesis of equality of distributions is true, do not depend on what the hypothesized distribution is, as long as it is continuous. Knuth gives a detailed description of how to analyze the significance of this pair of statistics. Many people use max(Dn+, Dn−) instead, but the distribution of this statistic is more difficult to deal with. Note that when the underlying independent variable is cyclic, as with day of the year or day of the week, then Kuiper's test is more appropriate. Numerical Recipes is a good source of information on this. Note, furthermore, that the Kolmogorov-Smirnov test is more sensitive at points near the median of the distribution than at its tails. The Anderson-Darling test is a test that provides equal sensitivity at the tails. - One-sided KS test explanation - Numerical Recipes (ISBN 0521431085) is a prime resource for this sort of thing (see http://www.nr.com/nronline_switcher.html for a discussion). - The Legacy of Andrei Nikolaevich Kolmogorov - Short introduction This page uses Creative Commons licensed content from Wikipedia.
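As a practical illustration (not part of the original article), the sketch below uses SciPy's implementations of the one-sample and two-sample tests on randomly generated data; the sample sizes and distributions are arbitrary choices for the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# One-sample K-S test: compare a sample's empirical distribution function
# against the CDF of a hypothesized standard normal distribution.
sample = rng.normal(loc=0.0, scale=1.0, size=200)
stat, p_value = stats.kstest(sample, "norm")
print(f"one-sample KS: D={stat:.3f}, p={p_value:.3f}")

# Two-sample K-S test: compare the empirical distributions of two samples
# that differ slightly in location.
a = rng.normal(loc=0.0, scale=1.0, size=200)
b = rng.normal(loc=0.5, scale=1.0, size=200)
stat2, p_value2 = stats.ks_2samp(a, b)
print(f"two-sample KS: D={stat2:.3f}, p={p_value2:.3f}")
```

A small p-value suggests the sample is unlikely to have come from the hypothesized distribution (one-sample case), or that the two samples are unlikely to share a common distribution (two-sample case).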
To put it succinctly, imperialism is the process whereby powerful groups try to extend their power and increase their wealth by bringing ever more of the world under their domination. Although the word comes to us from Roman times, imperialism has been around for a lot longer. In fact, pretty much every well-known ancient civilisation was an imperialist power, sending armies abroad to conquer new lands, from the Egyptian pharaohs, to the Aztecs and Incas. Even Athens, birthplace of democracy, had its empire and there are several recorded instances of Athenian armies massacring their subjects in order to ensure they didn't get any notions of independence. Imperialism is a direct consequence of hierarchical organisation. Power corrupts and leads to a thirst for more power. Thus, in any hierarchical society, once a group has attained power in their own realm, they will start to look outwards and continue to extend their influence through imperialism until they are toppled from within, or encounter a more powerful imperialist rival. The modern history of imperialism dates from the 15th century, when technical advances in navigation and sailing suddenly opened up vast areas of the world to the European powers. This came at a time when their expansion to the East had been blocked by the Ottoman empire, and they had fallen into a prolonged period of inconclusive warfare amongst themselves. Their technical advantages over the people of the newly accessible lands, especially in warfare, made expansion in that direction a very attractive prospect. European armies and gunboats travelled the world. The British, Dutch, French, Spanish and Portuguese slugged it out over 4 centuries in a race to conquer these new lands and to appropriate their resources. Where they could, as in the Americas and Australia, they simply took over the land and slaughtered the population or put them to work as slaves. Elsewhere, in Asia and Africa, the native societies were too powerful to be militarily subjugated, so they relied on their monopoly of naval technology to impose ever more uneven terms of trade. West Africa provides a good case study. In the 15th century the trade was a relatively even exchange of goods. Cloth, tools, wine and horses were exchanged for gold, pepper and ivory. By the mid 16th century this trade had become entirely one-sided as the European powers traded decreasing quantities of weapons and iron in exchange for vast numbers of slaves to work their American plantations. This underlines the fact that trade relations, as well as conquering armies, have always been a powerful weapon in the hands of the imperialists. Today we can see this pattern repeated. The world's major capitalists define the global rules of trade through such international bodies as the WTO, IMF, World Bank and UN. Although these are presented as being neutral bodies, with voluntary membership, they are in fact imperialist tools. They oversee the transfer of vast quantities of resources and wealth every year from the poorer parts of the world into the bank accounts of the super rich. After centuries of exploitation and theft, they tell us that Africa apparently owes the West $227 billion . The force of arms, although normally hidden, is never too far away. If a local ruler is weak enough, and not sufficiently compliant with the rules of global capitalism, they will be conquered through force of arms and replaced with a more willing servant. 
Although imperialism is often seen as one country oppressing another, this view clouds the picture. In this age of global capitalism, the group with real power is the big capitalists. They will use whatever political vehicle is most suitable to impose their power. In this era, the US is the undisputed centre of political power in the world and so it is through the US that the capitalists flex their muscles. The people of the US, sent out to kill and die for Chevron and GM, are victims of this imperialism as well as the Iraqis, Afghanis and Somalis whom they kill. So, rather than the US oppressing the rest of the world, we can best understand imperialism, and indeed fight against it, if we see it as the global class of capitalists oppressing the rest of humanity.
Construction of the palace complex began in 1407, the 5th year of the Yongle reign of the third emperor (Emperor Chengzu, Zhu Di) of the Ming dynasty. It was completed in 1420, and the capital was moved from Nanjing to Beijing the following year. It was said that a million workers, including one hundred thousand artisans, were conscripted into years of hard labor. The stone needed was quarried from Fangshan District; it was said that a well was dug every fifty meters along the road so that, in winter, water could be poured onto the road and the huge stones slid into the city on the ice. Huge amounts of timber and other materials were freighted from faraway provinces. The ancient Chinese builders displayed very considerable skill in constructing it. Take the grand red city wall, for example. Its base is 8.6 meters wide, narrowing to 6.66 meters at the top. The angular shape of the wall totally frustrates attempts to climb it. The bricks were made from white lime and glutinous rice, while the cement was made from glutinous rice and egg whites. These incredible materials make the wall extraordinarily strong.
Ana-phyl-what? Understanding anaphylaxis Monday 15 May 2017 Anaphylaxis: it’s a scary-looking word for a serious condition. But public awareness of this serious allergic reaction can make life for people with allergies a lot easier. What is anaphylaxis? When some people come into contact with a certain type of food, insect bites or stings, or medicines, their immune system reacts to it, resulting in an allergic reaction. Anaphylaxis is the name given to severe allergic reactions. Not everyone who has an allergy is at risk of anaphylaxis, but for those who are, the condition is serious and can be life-threatening. While a person with severe allergies will usually be supported by a team of health professionals who can treat them during an attack, public awareness of anaphylaxis is important. If an adult or child is not with someone trained to treat them during an attack and is unable to treat themselves, first aid from a member of the public could be life-saving. What does anaphylaxis look like? While we often associate anaphylaxis with difficulty breathing, it can affect multiple organ systems, including cardiovascular (heart and blood), gastrointestinal (stomach and bowel) and dermatological (skin). Not all allergic reactions will look the same or be as serious, depending on the person, their allergy and how they’ve come into contact with the allergen. Someone with a mild or moderate allergic reaction might have: - swelling of lips, face or eyes - hives or welts on their skin - a tingling mouth - or pain in their stomach and vomiting. A severe or anaphylactic reaction may cause: - difficult or noisy breathing - swelling of the tongue - swelling or tightness of the throat - difficulty talking or a hoarse voice - wheezing or persistent cough - dizziness or collapse - or becoming pale and floppy (particularly in young children). Symptoms of anaphylaxis usually begin within five to 30 minutes after exposure to the allergen, but could also occur hours later. A severe anaphylactic reaction might follow symptoms of a mild or moderate reaction like hives or welts, but anaphylaxis might occur without these symptoms appearing first. How should anaphylaxis be treated? Anaphylaxis is an emergency. If you suspect someone is having an anaphylactic reaction, or they have told you they are, the first thing to do is administer an EpiPen® if available, and call Triple Zero (000) for an ambulance. If the person knows they have an allergy, they might be carrying an ASCIA Action Plan. This will tell you what symptoms the patient may complain of and what signs to look for, and it will recommend how to treat the person while you wait for an ambulance. If they don’t have an action plan, follow the instructions from the emergency operator or visit the ASCIA website for emergency information. If they are not carrying a plan, or you can’t find it, begin first aid by ensuring that you and the person are in a safe place. Lie them down, and raise their legs if possible. If they are having difficulty breathing while lying flat, allow them to sit. Make sure that they do not stand or walk while you wait for an ambulance. EpiPens® can be used to treat anaphylaxis. These are special, single-use syringes pre-loaded with adrenaline, which can reverse the anaphylaxis. You do not need to have a first aid certificate to use an EpiPen®, and each EpiPen® is printed with instructions on how to use it. Watch the video below to learn more about anaphylaxis first aid and how to use an EpiPen®.
What else can I do to help people with severe allergies? During Food Allergy Week each May, Australians are encouraged to be aware of food allergies and how they can help reduce the risk of a reaction for those living with food allergies. You can help people with allergies by: - understanding that allergies can be serious and life-threatening - providing ingredient information for food that you have prepared for others, if requested - encouraging children not to share food and drinks - educating children that deliberately exposing other children to foods to which they are allergic is dangerous - taking steps to create a workplace that is safe for those with allergies - and taking the ASCIA ‘Anaphylaxis first aid’ e-training.
A study recently published in The New England Journal of Medicine clearly demonstrates that our nation’s struggle with obesity starts from the youngest years—long before high school graduation, the first signs of puberty or even the start of kindergarten. According to the study, a child who is overweight at the age of five is very likely to grow up to be overweight as a teenager and adult. If a child is overweight on his 11th birthday, it is almost certain that child will be overweight at every subsequent birthday for the rest of his life. The publication of this study and the media attention that has followed in its wake are a screaming wake-up call to parents, pediatricians, preschool teachers, daycares, policymakers and child advocates. For many kids, by the time they start school it is already too late. We need early intervention, and we need it now. So what should we do? First, pediatricians need to step up. The typical child will see a pediatrician at least 13 times before she reaches her 5th birthday. At every well-child appointment, starting from the newborn visit, the pediatrician and parent should have a conversation about the child’s growth, including where the child falls on the growth chart in weight-for-length for children under two years old and body mass index for those older than two. If the child’s weight is elevated, pediatricians need to have the courage and the skills to have that conversation. It is never too early. If a pediatrician doesn’t bring it up, the parents should. Second, and most importantly, parents and other caretakers need to develop knowledge and skills in how to go about feeding kids. They need to: - Make sure that healthy foods are readily available and that unhealthy foods are not. A hungry child will eat. - Resist the temptation to force kids to eat their vegetables. This just makes kids like the nutrient powerhouses less. - Disband the “clean plate club” once and for all. Forcing a child to clean his plate just teaches him not to listen to his body’s signals of hunger and fullness. - Stop preparing separate meals for picky kids! The breaded chicken, macaroni and cheese, and cheese pizza are not healthy for the child, and catering to picky eaters just makes it even less likely a child will ever come around to enjoying healthy, balanced meals. - Avoid food rewards for good behavior, such as for eating vegetables (see the second point above) or being brave when getting shots at the doctor’s office. - Continue to introduce previously rejected foods. It can take 15 to 20 exposures for a child to accept a previously disliked food. With repeated exposures, the child will come around. - Allow an unhealthy snack every now and then. Restriction just makes the junk food more appealing. - Create opportunities for kids to get outside and be physically active at least 60 minutes every day. - Make healthy foods taste good. After all, taste is the number one predictor of whether a child will eat a food. - Model healthy eating and physical activity behaviors. Ultimately, the eating patterns of young children are heavily influenced by the food environments at home, in daycare and in preschools. Unlike a teenager who can easily consume foods outside the purview of a parent, a young child is mostly at the mercy of what foods a parent and other caretakers make available and how these adults influence what a child eats.
All the ideas in our mind that are not simple are complex. These complex ideas come in four basic varieties: modes, substances, relations, and abstract generals. Modes are ideas that do not include any notion of self-subsistence, in particular, qualities, numbers, and abstract concepts; qualities depend for their existence on substances, whereas numbers and abstract concepts do not have any archetypes out in the world, but exist only as ideas. There are two types of ideas of mode: Simple modes are created by taking a single simple idea and either repeating it or varying it (examples include "dozen," "infinity," "oval," and "space"). Mixed modes are combinations of simple ideas of different kinds (examples include "murder," "obligation," and "beauty"). In contrast to modes, substances are either self-subsisting things (e.g. a man or a sheep) or collections of self-subsisting things (e.g. an army of men or a flock of sheep). Relations are simply relational concepts, such as "father," "bigger," and "morally good." Abstract generals are not treated until Book IV. Complex ideas are created through three methods. First, simple ideas can be glued together through combination, either by taking stock of simple ideas that come into the mind together naturally through sensation (for example, gluing together yellow, long, wheels, loud, etc. into "school bus") or else by mixing and matching simple ideas in the imagination (for example, to create the idea of a mythical creature). Complex ideas can also arise through a comparison of simple ideas, in which we take two or more simple ideas and observe the similarities and differences. This method results in the complex ideas of relations. Finally, there is abstraction, in which the mind separates out ideas that it had previously joined. Chapters xiii-xx analyze our ideas of simple modes, focusing in turn on the ideas of space, duration, number, infinity, pleasure and pain, and powers. Examples of ideas of space include "space," "place," and "inch," and these are produced by considering two ideas of color or texture and noticing the distance between them. We form ideas of duration, such as "time," "year," "minute," and "eternity," by noticing that we have a train of ideas and that this succession has distances between its parts. The idea of number is produced by repeating the simple idea of unity. Realizing that there is no end to the process that gave us the idea of numbers produces the idea of infinity. Ideas of pleasure and pain, such as "good," "love," and "sorrow," are produced in reference to our simple ideas of pleasure and pain. Finally, we get ideas of powers, such as the ability to cause things to melt or the ability to be melted, by perceiving changes in our ideas and noticing that these changes happen in regular patterns. Chapter xxii examines mixed modes. Mixed modes, Locke tells us, are created simply for purposes of communication. We glue together certain ideas by giving them a collective name if and only if collectively they will prove useful in discourse. So, for instance, we decided to glue together the ideas of murder and father into "patricide," but it never proved as useful to glue together the ideas of murder and son, or murder and neighbor. To strengthen his claim that mixed modes are invented for reasons of convention, Locke points out that often one language will have a word for a concept that does not exist in another culture.
He also points out that languages constantly change, discarding and creating new mixed modes as our communicative needs alter. Locke's application of the categories "substance" and "mode" is unusual in the history of philosophy. Both Aristotle and Descartes agreed with Locke that the distinguishing characteristic of substance is self-subsistence. However, for them, only actual objects were self-subsistent; they would not have included collections of objects as substances. It is not entirely clear why Locke feels the need to classify collections as substances, since collections do not really have any self-subsistence out in the world in the way that single objects do. He could probably just as easily have called collections mixed modes rather than substances and accounted for their origin in the same way that he accounts for the origin of concepts like "dozen." Aristotle and Descartes also limited the term "mode" to those things that depend on substances for their existence in a very literal way. Qualities were modes for them; abstract concepts were not. It is clear, though, why Locke felt justified in enlarging the scope of "mode." A mode, he felt, need not be physically dependent on substances; it is enough that it be ontologically dependent on them. We individuate modes in terms of the substances they depend on. While concepts like "murder," "gratitude," and "theft" do not physically exist in substances, they do depend on substances for their existence as ideas. We get these ideas by considering the relations and connections between our ideas of substances.
What causes a stroke? The blockage of an artery in the brain by a clot (thrombosis) is the most common cause of a stroke. The part of the brain that is supplied by the clotted blood vessel is then deprived of blood and oxygen. As a result, the cells of that part of the brain die and the part of the body that it controls stops working. Typically, a cholesterol plaque in one of the brain's small blood vessels ruptures and starts the clotting process. Another type of stroke may occur when a blood clot or a piece of atherosclerotic plaque (cholesterol and calcium deposits on the wall of the inside of the heart or artery) breaks loose, travels through the bloodstream, and lodges in an artery in the brain. When blood flow stops, brain cells do not receive the oxygen and glucose they require to function, and a stroke occurs. This type of stroke is referred to as an embolic stroke. For example, a blood clot might originally form in a heart chamber as a result of an irregular heart rhythm, like atrial fibrillation. Usually, these clots remain attached to the inner lining of the heart, but occasionally they can break off, travel through the bloodstream (embolize), block a brain artery, and cause a stroke. An embolism, either plaque or clot, may also originate in a large artery (for example, the carotid artery, a major artery in the neck that supplies blood to the brain) and then travel downstream to clog a small artery within the brain. A cerebral hemorrhage occurs when a blood vessel in the brain ruptures and bleeds into the surrounding brain tissue. A cerebral hemorrhage (bleeding in the brain) causes stroke symptoms by depriving parts of the brain of blood and oxygen in a variety of ways. Blood flow is lost to some cells. Additionally, blood is very irritating and can cause swelling of brain tissue (cerebral edema). Edema and the accumulation of blood from a cerebral hemorrhage increase pressure within the skull and cause further damage by squeezing the brain against the bony skull. This further decreases blood flow to brain tissue and its cells. In a subarachnoid hemorrhage, blood accumulates in the space beneath the arachnoid membrane that lines the brain. The blood originates from an abnormal blood vessel that leaks or ruptures. Often this is from an aneurysm (an abnormal ballooning out of the blood vessel). Subarachnoid hemorrhages usually cause a sudden severe headache, nausea, vomiting, light intolerance, and a stiff neck. If not recognized and treated, major neurological consequences, such as coma and brain death, may occur.
Another rare cause of stroke is vasculitis, a condition in which the blood vessels become inflamed, causing decreased blood flow to parts of the brain. There also appears to be a very slightly increased occurrence of stroke in people with migraine headaches. The mechanism of migraine or vascular headaches includes narrowing of the brain's blood vessels. Some migraine episodes can even mimic stroke, with loss of function on one side of the body or with vision or speech problems. Usually, the symptoms resolve as the headache resolves.
Did you know that Claude Monet may not have been “Claude Monet” without the artist Eugene Boudin? Boudin nurtured Monet early in his painting career and showed him the light. Literally. The image associated with the French Impressionists is of an artist outdoors at their easel. All is still and calm; a gentle breeze flowing over grass or a river is the only movement. The artist studies the quality of sunlight and how it affects the warmth of color and the reflection of water. The Impressionists were the champions of en plein air painting, a style that encouraged experiencing the landscape while capturing it on canvas. Claude Monet, one of the vanguard of the movement, especially promoted the technique. Monet was introduced to en plein air painting by an artist sadly overlooked at the time, Eugene Boudin. Without Boudin’s influence and support, Monet would not have been the painter we know today. Boudin grew up in Normandy near the sea. While he never became a seafarer like his father, the ocean remained a muse for much of his career. Boudin befriended many artists and taught himself to paint. His seascapes displayed a cunning eye and a fascinating perspective on a cultural shift: during his lifetime, beaches were transforming from workplaces for fishermen into recreational spaces enjoyed by everyone. Painting outdoors was the only way for Eugene Boudin to capture these subtleties. He was so enthusiastic about the technique that he often chronicled the weather and time on the backs of his canvases. Boudin and Monet had been friends since the start of Monet’s career. When Monet was a young man, he made a name for himself by sketching caricatures in Paris. Eugene Boudin felt that Monet was not pushing himself to his full potential. He encouraged Monet to paint with him on the seacoast. The hope was to show his friend the importance of painting landscapes and seascapes outside. He wanted to prove how lighting changed mood and provided character. Above all else, he wanted Monet’s talent to blossom. Monet followed Boudin’s advice, and this launched the rest of his career. Eugene Boudin was never fully integrated with the other Impressionists. He exhibited with them in 1874, but remained apart from the rest of the movement. In 1892 he was recognized for both his profound impact on the art community and the quality of his own paintings.
What is Shale? Shale is a fine-grained sedimentary rock that forms from the compaction of silt and clay-size mineral particles that we commonly call "mud". This composition places shale in a category of sedimentary rocks known as "mudstones". Shale is distinguished from other mudstones because it is fissile and laminated. "Laminated" means that the rock is made up of many thin layers. "Fissile" means that the rock readily splits into thin pieces along the laminations. Uses of Shale Some shales have special properties that make them important resources. Black shales contain organic material that sometimes breaks down to form natural gas or oil. Other shales can be crushed and mixed with water to produce clays that can be made into a variety of useful objects. Conventional Oil and Natural Gas Black organic shales are the source rock for many of the world's most important oil and natural gas deposits. These black shales obtain their black color from tiny particles of organic matter that were deposited with the mud from which the shale formed. As the mud was buried and warmed within the earth, some of the organic material was transformed into oil and natural gas. The oil and natural gas migrated out of the shale and upwards through the sediment mass because of their low density. The oil and gas were often trapped within the pore spaces of an overlying rock unit such as a sandstone (see illustration below). These types of oil and gas deposits are known as "conventional reservoirs" because the fluids can easily flow through the pores of the rock and into the extraction well. Conventional Oil and Natural Gas Reservoir: This drawing illustrates an "anticlinal trap" that contains oil and natural gas. The gray rock units are impermeable shale. Oil and natural gas form within these shale units and then migrate upwards. Some of the oil and gas becomes trapped in the yellow sandstone to form an oil and gas reservoir. This is a "conventional" reservoir - meaning that the oil and gas can flow through the pore space of the sandstone and be produced from the well. Although drilling can extract large amounts of oil and natural gas from the reservoir rock, much of it remains trapped within the shale. This oil and gas is very difficult to remove because it is trapped within tiny pore spaces or adsorbed onto the clay mineral particles that make up the shale. Unconventional Oil and Natural Gas In the late 1990s natural gas drilling companies developed new methods for liberating oil and natural gas that are trapped within the tiny pore spaces of shale. This discovery was significant because it unlocked some of the largest natural gas deposits in the world. The Barnett Shale of Texas was the first major natural gas field developed in a shale reservoir rock. Producing gas from the Barnett Shale was a challenge. The pore spaces in shale are so tiny that the gas has difficulty moving through the shale and into the well. Drillers discovered that they could increase the permeability of the shale by pumping water down the well under pressure that was high enough to fracture the shale. These fractures liberated some of the gas from the pore spaces and allowed that gas to flow to the well. This technique is known as "hydraulic fracturing" or "hydrofracing". Drillers also learned how to drill down to the level of the shale and turn the well 90 degrees to drill horizontally through the shale rock unit. This produced a well with a very long "pay zone" through the reservoir rock (see illustration at right).
This method is known as "horizontal drilling". Horizontal drilling and hydraulic fracturing revolutionized drilling technology and paved the way for developing several giant natural gas fields. These include the Marcellus Shale in the Appalachians, the Haynesville Shale in Louisiana and the Fayetteville Shale in Arkansas. These enormous shale reservoirs hold enough natural gas to serve all of the United States' needs for twenty years or more. Shale Used to Produce Clay Everyone has contact with products made from shale. If you live in a brick house, drive on a brick road, live in a house with a tile roof or keep plants in "terra cotta" pots, you have daily contact with items that were probably made from shale. Many years ago these same items were made from natural clay. However, heavy use depleted most of the small clay deposits. Needing a new source of raw materials, manufacturers soon discovered that mixing finely ground shale with water would produce a clay that often had similar or superior properties. Today, most items that were once produced from natural clay have been replaced by almost identical items made from clay manufactured by mixing finely ground shale with water. Shale Used to Produce Cement Cement is another common material that is often made with shale. To make cement, crushed limestone and shale are heated to a temperature that is high enough to drive off all of the water and break down the limestone into calcium oxide and carbon dioxide. The carbon dioxide is lost as an emission, but the calcium oxide combined with the heated shale makes a powder that will harden if mixed with water and allowed to dry. Cement is used to make concrete and many other products for the construction industry. Oil Shale Oil shale is a rock that contains significant amounts of organic material in the form of kerogen. Up to 1/3 of the rock can be solid kerogen. Liquid and gaseous hydrocarbons can be extracted from oil shale, but the rock must be heated and/or treated with solvents. This is usually much less efficient than drilling rocks that will yield oil or gas directly into a well. Extracting the hydrocarbons from oil shale produces emissions and waste products that cause significant environmental concerns. This is one reason why the world's extensive oil shale deposits have not been aggressively utilized. Oil shale specimen: this sample, approximately four inches (ten centimeters) across, contains a significant amount of organic material in the form of solid kerogen. Oil shale usually meets the definition of "shale" in that it is "a laminated rock consisting of at least 67% clay minerals"; however, it sometimes contains enough organic material and carbonate minerals that clay minerals account for less than 67% of the rock. Composition of Shale Shale is a rock composed mainly of clay-size mineral grains. These tiny grains are usually clay minerals such as illite, kaolinite and smectite. Shale usually contains other clay-size mineral particles such as quartz, chert and feldspar. Other constituents might include organic particles, carbonate minerals, iron oxide minerals, sulfide minerals and heavy mineral grains. These "other constituents" in the rock are often determined by the shale's environment of deposition, and they often determine the color of the rock. Colors of Shale Like most rocks, the color of shale is often determined by the presence of specific materials in minor amounts.
Just a few percent of organic materials or iron can significantly alter the color of a rock. Black and Gray Shale A black color in sedimentary rocks almost always indicates the presence of organic materials. Just one or two percent organic material can impart a dark gray or black color to the rock. In addition, this black color almost always implies that the shale formed from sediment deposited in an oxygen-deficient environment. Any oxygen that entered the environment quickly reacted with the decaying organic debris. If a large amount of oxygen had been present, the organic debris would all have decayed. An oxygen-poor environment also provides the proper conditions for the formation of sulfide minerals such as pyrite, another important mineral found in most black shales. The presence of organic debris in black shales makes them candidates for oil and gas generation. If the organic material is preserved and properly heated after burial, oil and natural gas might be produced. The Barnett Shale, Marcellus Shale, Haynesville Shale, Fayetteville Shale and other gas-producing rocks are all dark gray or black shales that yield natural gas. The Bakken Shale of North Dakota and the Eagle Ford Shale of Texas are examples of shales that yield oil. Gray shales sometimes contain a small amount of organic matter. However, gray shales can also be rocks that contain calcareous materials or simply clay minerals that result in a gray color. Red, Brown and Yellow Shale Shales that are deposited in oxygen-rich environments often contain tiny particles of iron oxide or iron hydroxide minerals such as hematite, goethite or limonite. Just a few percent of these minerals distributed through the rock can produce the red, brown or yellow colors exhibited by many types of shale. The presence of hematite can produce a red shale. The presence of limonite or goethite can produce a yellow or brown shale. Green shales are occasionally found. This should not be surprising, because some of the clay minerals and micas that make up much of the volume of these rocks are typically greenish in color. Hydraulic Properties of Shale Hydraulic properties are characteristics of a rock, such as permeability and porosity, that reflect its ability to hold and transmit fluids such as water, oil or natural gas. Shale has a very small particle size, so the interstitial spaces are very small. In fact, they are so small that oil, natural gas and water have difficulty moving through the rock. Shale can therefore serve as a cap rock for oil and natural gas traps, and it is also an aquiclude that blocks or limits the flow of groundwater. Although the interstitial spaces in a shale are very small, they can take up a significant volume of the rock. This allows the shale to hold significant amounts of water, gas or oil, but, because of its low permeability, it cannot transmit them effectively. The oil and gas industry overcomes these limitations of shale by using horizontal drilling and hydraulic fracturing to create artificial porosity and permeability within the rock. Some of the clay minerals that occur in shale have the ability to absorb or adsorb large amounts of water, natural gas, ions or other substances. This property of shale can enable it to selectively and tenaciously hold or freely release fluids or ions. Engineering Properties of Shale Soils Shales and the soils derived from them are some of the most troublesome materials to build upon.
They are subject to changes in volume and competence that generally make them unreliable construction substrates. The clay minerals in some shale-derived soils have the ability to absorb and release large amounts of water. This change in moisture content is usually accompanied by a change in volume, which can be as much as several percent. These materials are called "expansive soils". When these soils become wet they swell, and when they dry out they shrink. Buildings, roads, utility lines or other structures placed upon or within these materials can be weakened or damaged by the forces and motion of volume change. Expansive soils are one of the most common causes of foundation damage to buildings in the United States. Shale is the rock most often associated with landslides. Weathering transforms the shale into a clay-rich soil which normally has a very low shear strength - especially when wet. When these low-strength materials are wet and on a steep hillside, they can slowly or rapidly move downslope. Overloading or excavation by humans will often trigger failure. Environments of Shale Deposition An accumulation of mud begins with the chemical weathering of rocks. This weathering breaks the rocks down into clay minerals and other small particles which often become part of the local soil. A rainstorm might wash tiny particles of soil from the land and into streams, giving the streams a "muddy" appearance. When the stream slows down or enters a standing body of water such as a lake, swamp or ocean, the mud particles settle to the bottom. If undisturbed and buried, this accumulation of mud might be transformed into a sedimentary rock known as "mudstone". This is how most shales are formed. The shale-forming process is not confined to Earth. The Mars rovers have found lots of outcrops on Mars with sedimentary rock units that look just like the shales found on Earth (see photo at right). Contributor: Hobart King Shale: Shale breaks into thin pieces with sharp edges. It occurs in a wide range of colors that include: red, brown, green, gray, and black. It is the most common sedimentary rock and is found in sedimentary basins worldwide. In less than ten years, shale has skyrocketed to prominence in the energy sector. New drilling and well development methods such as hydraulic fracturing and horizontal drilling can tap the oil and natural gas trapped within the tight matrix of organic shales. © iStockphoto / Edward Todd. Organic-rich black shale. Natural gas and oil are sometimes trapped in the tiny pore spaces of this type of shale. Unconventional Oil and Gas Reservoir: This drawing illustrates the new technologies that enable the development of unconventional oil and natural gas fields. In these gas fields the oil and gas are held in shales or another rock unit that is impermeable. To produce that oil or gas, special technologies are needed. One is horizontal drilling, in which a vertical well is deviated to horizontal so that it will penetrate a long distance of reservoir rock. The second is hydraulic fracturing.
With this technique, a portion of the well is sealed off and water is pumped in to produce a pressure that is high enough to fracture the surrounding rock. The result is a highly fractured reservoir penetrated by a long length of well bore. Shale is used as a raw material for making many types of brick, tile, pipe, pottery and other manufactured products. Brick and tile are some of the most extensively used and highly desired materials for building homes, walls, streets and commercial structures. © iStockphoto / Guy Elliott. Two black organic shales in the Appalachian Basin are thought to contain enough natural gas to supply the United States for several years. These are the Marcellus Shale and Utica Shale. Since the late 1990s, dozens of previously unproductive black organic shales have been successfully developed into valuable gas fields. See the article: "What is Shale Gas?" When shale is drilled for oil, natural gas or mineral resource evaluation, a core is often recovered from the well. The rock in the core can then be tested to learn about its potential and how the resource might be best developed. The United States Geological Survey has prepared a generalized expansive soils map for the lower 48 states. A delta is a sediment deposit that forms when a stream enters a standing body of water. The water velocity of the stream suddenly decreases and the sediments being carried settle to the bottom. Deltas are where the largest volume of Earth's mud is deposited. The image above is a satellite view of the Mississippi delta, showing its distributary channels and interdistributary deposits. The bright blue water surrounding the delta is laden with sediment. Shale is also a very common rock on Mars. This photo was taken by the mast camera of the Mars Curiosity Rover. It shows thinly bedded fissile shales outcropping in Gale Crater. Curiosity drilled holes into the rocks of Gale Crater and identified clay minerals in the cuttings. NASA image.
Augustus Caesar’s World An overview of world history during the life of Augustus Caesar (Octavius). Wonderfully conversational tone! Recommended for grades 7–12. In her unique “horizontal history” approach, Foster weaves a story of the world around her central character. Rather than focusing exclusively on geo-political events, as most textbooks do, she includes stories of scientific discovery and invention, music, literature, art, and religion. In Augustus Caesar’s World, Foster traces the seven major civilizations—Rome, Greece, Israel, Egypt, China, India, and Persia—from 4500 B.C. to the time of Augustus Caesar in 44 B.C., culminating in 14 A.D. Readers learn not only the stories of Julius Caesar, Cleopatra, and Marc Antony, but also about the historian Livy and how Virgil came to write the Aeneid. The author then takes her readers all over the world to learn what was happening at this same time in China, Persia, India, and more. Foster’s detailed pen and ink drawings are fresh and appealing, and her illustrated timelines give a clear sense of chronology, enriching the engaging text. Note: The author tends to treat all religions as equal in this book. Be cautious of giving it to younger children to read. And even with older children, be sure to discuss the worldview it presents. About the Author “Nothing is more critical, I believe, than that children growing up in these critical explosive days should be given an understanding of American history as a part of the history of the world. Every year this grows more urgent, as increasingly rapid communication integrates world events more closely and the impact of foreign affairs on our own lives becomes more serious and immediate.” Genevieve Foster (1893–1979) wrote this nearly fifty years ago. It resonates with perhaps even more truth today. Her writing style is clear, concise and fluid, and her greatest strength as a storyteller is her ability to bring her readers right into the minds and times of her characters.
Wood, which is built up from atmospheric CO2 and water, consists of 50% cellulose nanofiber (CNF), a nano-scale fiber with a diameter of one ten-thousandth the thickness of a human hair. Surprisingly, CNF weighs only one fifth as much as steel, yet it is 7–8 times stronger. Common paper consists almost entirely of CNF. However, because paper tears easily, it is not generally appreciated that wood and paper consist of such a strong material. We are developing light, high-strength or transparent materials by producing CNF from papermaking pulp and mixing it into plastics. Furthermore, we have made a sports car using CNF material and demonstrated decreases in weight and CO2 emissions of 16% and 8%, respectively, compared with a conventional car. The CNF material is readily recyclable and can permanently fix atmospheric CO2 in a different form as long as it is not burnt. In this presentation, I will explain how to extract CNF from wood, process it, and construct cars, using video animation.
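The weight and strength figures above imply a large strength-to-weight (specific strength) advantage: if CNF is 7–8 times stronger than steel at one fifth of the weight, its specific strength works out to roughly 35–40 times that of steel. The short Python sketch below simply reproduces that arithmetic from the ratios quoted in the abstract; the function name and the exact factors are illustrative assumptions, not measured material properties.

# Rough specific-strength comparison of cellulose nanofiber (CNF) vs. steel,
# using only the ratios quoted in the abstract (illustrative, not measured data).

def specific_strength_ratio(strength_ratio: float, weight_ratio: float) -> float:
    """Return how many times higher CNF's strength-to-weight ratio is than steel's.

    strength_ratio -- CNF strength relative to steel (e.g. 7 to 8)
    weight_ratio   -- CNF weight relative to steel for the same volume (e.g. 1/5)
    """
    return strength_ratio / weight_ratio

low = specific_strength_ratio(strength_ratio=7, weight_ratio=1/5)
high = specific_strength_ratio(strength_ratio=8, weight_ratio=1/5)
print(f"CNF specific strength is roughly {low:.0f}-{high:.0f} x that of steel")
# Expected output: CNF specific strength is roughly 35-40 x that of steel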
There is a widely held perception that uppercase letters should be taught first because they are easier to write. This is commonly accepted as a fact. However, in order to provide the best instructional outcomes for our students, it is imperative to critically examine this assumption. Are Uppercase Letters Easier to Write? 1. Starting Points Having fewer starting points simplifies the decision on where to start. All the capital letters start at the top line, whereas lowercase letters can start at the top line or at the midline. This factor favors uppercase as being easier to learn. 2. Pencil Lifts When you write, picking up the pencil and putting it down again requires careful visual monitoring and precise motor skills to neatly place the pencil at the start of the next stroke. Seventeen uppercase letters need two or more lifts, compared to only seven lowercase letters. Take the uppercase “A,” for example: its three distinct strokes necessitate a lift and precise placement of the pencil at each transition point, relying heavily on visual guidance. In contrast, lowercase “a” is formed with one smooth, continuous stroke that is not constrained by the same accuracy demands. This suggests that lowercase letters are easier and more efficient to form. 3. Letter Strokes Letters are created by combining two or more basic strokes. These strokes are typically learned in a specific developmental order: vertical lines, horizontal lines, circles, and finally diagonal lines. Accordingly, letters with vertical and horizontal lines are easier to write. Given that there are six uppercase letters composed of these straight lines versus three lowercase letters, it would be easy to conclude that uppercase has an advantage. That is, until one considers the diagonal line, which is known to be the most complex stroke used in letter formation. Eleven uppercase letters contain diagonal lines, in contrast to seven lowercase ones. Overall, while some aspects, such as starting points, may favor uppercase letters, others, like the less frequent need for pencil lifts, tip the scale in favor of lowercase letters. Considering these factors, it becomes apparent that the ease of writing uppercase versus lowercase letters is not a clear-cut matter. Therefore, instead of presuming inherent advantages of uppercase letters, we should adopt the perspective that both upper and lowercase letters are equally manageable to form. Our attention should then pivot towards identifying what proves most effective in practical use. Aligning Handwriting with Reading and Writing The majority of words students read and write are in lowercase, constituting approximately 95% of all characters in typical texts. Mastering lowercase first not only facilitates accurate writing sooner but also supports reading development. Some lowercase letters, like b, d, p, g, and q, share visual similarities, potentially causing confusion. By aligning handwriting instruction with reading, educators can significantly improve students’ capacity to differentiate these visually similar letters. This approach is particularly advantageous for kinesthetic learners, as it integrates the physical act of writing with the visual process of identifying the letter. Additionally, lowercase words are generally easier to read, thanks to the distinct shapes formed by ascenders and descenders.
In contrast, uppercase letters are likened to “big rectangular blocks” that take longer to process, as Jason Santa Maria explains in his article “How We Read.” Transitioning from Upper to Lower Case Students who initially focus on learning uppercase letters may encounter challenges when shifting to lowercase. Repetitive practice of uppercase letters strengthens neural pathways in the brain, automating the process of writing in uppercase. Consequently, when students need to use their working memory for the new task of writing in lowercase letters, they instead rely on the well-established neural pathways associated with uppercase forms. Learning uppercase letters first is akin to using the “hunt and peck” method in typing. Individuals can become adept at finding and pressing keys individually, achieving reasonably good typing speeds. It therefore becomes easier for them to default to this method, which is already ingrained in their motor memory, rather than investing the time and effort required to learn correct touch typing. Although touch typing is slower at first, it is far more efficient in the long run. Simply put, it’s easier to stick with what one knows than to take on the challenge of learning something new. What do the experts say? Expert voices affirm the advantages of lowercase letters. Dave Thompson, CEO of Educational Fontware and designer of over 900 fonts, said, “Lowercase is definitely easier. There are fewer pen lifts, and much more similarity between smalls (a, c, d, e, g, o, q for example all start with or have a counterclockwise hook) than caps. Lowercase uses retrace without pen lift for b, d, h, m, n, p, and r. Caps are usually taught as pen lifts instead of retrace: B, D, M, N, P, and R. Finally, you can make words out of the smalls, but not the caps.” Virginia Berninger, a UW educational psychology professor who has studied the effect of handwriting on the human brain, says that it is important to teach lowercase letters first because they are used more frequently in writing and are encountered more frequently in written text. Uppercase should only be taught once lowercase letters are legible and automatic. Teachers should also keep written work to a minimum until lowercase writing is functional. Finally, Dr. Steve Graham, the Warner Professor in the Division of Leadership and Innovation at the Teachers College, who has studied how writing develops and how to teach it effectively for over 40 years, confirms that it is best to teach lowercase first because “it is more economical” to do so. In conclusion, emphasizing the instruction of lowercase letters in early education offers numerous advantages. This approach aligns with efficiency and functionality, establishing a robust foundation for effective writing skills. The evidence from both practical considerations and expert insights supports the prioritization of lowercase instruction for enhanced educational outcomes. References: Berninger, V. W., & Wolf, B. J. (2009). Teaching students with dyslexia and dysgraphia: Lessons from teaching and science. Baltimore, MD: Brookes Publishing Company. Santa Maria, J. (2014). How we read. A List Apart. Retrieved December 15, 2016, from http://alistapart.com/article/how-we-read
Sleep apnea is a disorder that affects sleeping patterns, marked by repeated interruptions in breathing during sleep. It’s classified into obstructive sleep apnea and central sleep apnea. Understanding more about the causal factors of the two types may help you avoid this debilitating sleep disorder and become more energized, fit, and active. How does sleep apnea develop, and what factors contribute to its development? The following are some of the most common risk factors for sleep apnea. What Are The Causes of Sleep Apnea? While both are under the umbrella term “sleep apnea,” they are caused by different factors, so sleep apnea clinics categorize them differently. Let’s explore the differences between these two: Causes of Obstructive Sleep Apnea This type, which is more frequent than central sleep apnea, develops when the throat muscles relax during sleep and the surrounding tissue restricts the air passages. This mechanism is responsible for snoring and breathing problems, among other symptoms associated with this type. According to experts, it’s estimated to affect 4 to 9 percent of middle-aged adults, and the rate is even higher among retirees. The leading cause of this type of sleep apnea is obesity. People who are overweight have fat surrounding the larynx and throat, which restricts breathing. Other likely reasons for obstructive sleep apnea, aside from obesity, are: - Growing older - Being a man (higher risk) - Regular consumption of sleeping pills and alcohol - Blockage in the nose - A family history of sleep apnea (genetic factors) - Certain physical characteristics (a thicker neck, a constricted throat or larynx, swollen tonsils) While some of these, such as alcohol and sleeping pill use, are under your control, others are not. Causes of Central Sleep Apnea With this type, your brain cannot successfully send a message to the muscles responsible for normal breathing. This causes breathlessness or trouble getting to or staying asleep. Although less frequent, central sleep apnea, like obstructive sleep apnea, has its own set of causative factors. The following are some of the people most at risk of getting central sleep apnea: - Senior citizens - Men (higher risk) - People who use methadone and other opioids often - Patients with heart problems, such as congestive heart failure - People who have suffered a stroke Treatment For Sleep Apnea Fortunately, there is a way to effectively treat the condition by adopting lifestyle changes and reducing the associated risks of cardiovascular disease and stroke. A definitive diagnosis from a primary care physician can be the first step to getting a good night’s sleep. Consider getting an at-home sleep test to assist your doctor in diagnosing sleep apnea and beginning the path to better overall health. Here are some things you could do to help treat sleep apnea: Sleep Study If you think that you or someone you know might have sleep apnea, you should have a sleep study performed. These exams typically record your pulse rate, blood oxygen level, airflow, and breathing patterns. If the results are abnormal, your specialist may be able to recommend treatment without requiring additional testing. However, because compact tracking devices do not always detect every instance of sleep apnea, your physician may still advise polysomnography even if your original results were clear. Lifestyle Changes For minor cases, your health provider might suggest simple lifestyle changes like losing weight or cutting out alcohol and cigarettes. If you have nasal allergies or related complications, you might be advised on how to treat them.
Let your doctor know if there’s no noticeable improvement in your symptoms, and they might suggest other treatments. Healthier Sleeping Practices If your sleep hygiene isn’t disciplined, this might be affecting how well you sleep. Ensure the room is dark when you sleep, and turn off any gadgets at least thirty minutes before bed. You might also be encouraged to: - Sleep on your side rather than on your back - Use a pillow to lift your head higher and prevent your tongue from slipping back and blocking your airway - Invest in a humidifier to prevent dried-out airways - Gargle with salt water to help shrink your tonsils - Avoid caffeine for at least 2 hours before bedtime - Avoid sleeping pills, as they relax your throat muscles Other changes apply to your whole day in general, not just to bedtime: - Healthy weight loss through exercise and healthier eating habits - Nasal sprays or strips to clear your sinuses - Quitting smoking, which worsens breathing difficulties Support Your Airways If the tissue surrounding your throat is weak and soft, your airway may close up while you’re sleeping. You can do a few exercises to strengthen these muscles and help prevent them from collapsing: - Chewing gum or holding a pen in your mouth - Pushing your tongue against your palate for a few minutes (3-5 minutes) - Pulling your cheek to the side with a pinky while resisting, to build tension - Inflating a couple of balloons - Playing an instrument like a trumpet, or taking up singing CPAP Continuous positive airway pressure (CPAP) is a form of therapy used to treat moderate to severe cases of sleep apnea. It uses a device that pushes air through a mask at a pressure higher than room pressure to keep your air passages open. You’ll need to consult with your physician before purchasing one of these. Alternatively, you could resort to: - Other positive airway pressure machines (auto-CPAP or BiPAP) - Oral devices that hold your airway open - Getting any underlying medical issues treated (cardiovascular or neurological conditions) - Supplemental oxygen - Adaptive servo-ventilation (ASV) Surgery Surgery is often the last resort if other, less aggressive options aren’t effective. Other treatment options are usually given a minimum of three months to work before surgery is considered. Surgery is often an effective treatment for people with jaw structure complications. Some of the surgical procedures are: - Removing tissue from the back of the mouth, throat, and tonsils - Shrinking the problematic tissue - Jaw reconstruction to enlarge the airway and prevent blockage - Placing implants into the roof of the mouth - Stimulating the nerve that controls tongue movement to keep the air passage open - Creating a separate air passageway with a plastic tube through the throat (reserved for the most extreme, life-threatening cases of sleep apnea) Knowing what you know now about the available treatment options, consult with your primary care physician about the best treatment for you. As previously stated, determining the underlying reasons for your sleep apnea could be the step that finally helps you permanently eliminate it.
Colors can have a significant impact on the way we feel. Imagine bright colors and pastel colors: each of them evokes a different emotion in us. Colors affect children in the same way. They play a large role in a child's development and can affect a child's mood, behavior, and learning. Bright colors can be stimulating and energizing, while softer colors can have a calming effect. In this blog article you will find helpful information about the impact colors have on infants and toddlers, how colors can help your toddler better understand emotions, and how not to overstimulate children with colors. How infants start to understand colors Infants start to understand colors by observing and distinguishing the different colors in their environment. They go through several stages as their color perception develops: Birth to 3 months: At birth, infants can only see high-contrast shades such as black, white, and gray. As their vision develops, they start to distinguish between primary colors, such as red, blue, and yellow. Infants are attracted to bright and bold colors, and may respond more to warm colors such as red and orange. 3 to 6 months: By three months of age, infants can distinguish between more subtle shades of colors and can differentiate between colors that are similar, such as green and yellow. Infants at this stage may also be more interested in patterns and textures. 6 to 12 months: By six months of age, infants can distinguish between a wider range of colors and may be able to match objects of the same color. They may also show a preference for certain colors, such as red or blue. At this stage, infants are also beginning to explore their environment more and may be interested in toys that are brightly colored. 12 to 18 months: By 12 months of age, some infants can start to name their first colors, such as "red" or "blue," although they may not yet understand the concept of color as a category. They may also begin to understand that objects can have multiple colors. The impact of colors on toddlers Colors can have a significant impact on toddlers, as they are still developing their cognitive and emotional skills. Bright colors can stimulate their senses and encourage exploration, while pastel colors can have a calming effect. It's important to note that every child is unique and may respond differently to different colors. Emotions: As with children of any age, colors can affect a toddler's emotions and mood. Bright and warm colors such as yellow, orange, and red can promote feelings of happiness and energy, while cool colors such as blue and green can create a calming effect. Attention and concentration: Certain colors can also affect a toddler's attention and concentration. Bright colors can be stimulating and grab a toddler's attention, while softer colors can be soothing and help them to focus. Exploration and learning: Toddlers are in a stage of exploration and learning, and colors can play a role in this. Bright colors can encourage toddlers to explore their environment and interact with objects, but they can also create an overstimulating effect. Different colors can be used to teach them about basic concepts such as colors, shapes, and sizes. Personal expression: Colors can also be used as a way for toddlers to express themselves and their individuality. Encouraging toddlers to choose colors for their clothing or art projects can help them to develop a sense of personal style and creativity. Sleep: Colors can also affect a toddler's sleep patterns.
Soft, soothing colors such as blue and lavender can promote relaxation and help toddlers to fall asleep more easily. Colors can help toddlers better understand emotions Colors can be a valuable tool for children to understand and manage their emotions. By using different colors to represent emotions, encouraging creative expression, and using color-coded resources, children can develop a better understanding of their emotions and learn healthy ways to express and manage them. This makes color a valuable tool for parents and caregivers who want to help toddlers better understand their emotions. Color associations: Just like older children and adults, toddlers can learn to associate colors with different emotions. Parents and caregivers can use this to their advantage by using different colors to represent different emotions, such as yellow for happy or red for angry. This can help toddlers to better understand and communicate their own emotions, as well as recognize the emotions of others. Visual cues: Toddlers are highly visual learners and respond well to visual cues. Using different colors to represent different emotions can provide a visual cue that helps toddlers to understand and recognize emotions. For example, a chart or poster with different colors and corresponding emotions can help toddlers to identify and express their own feelings. Art and creative expression: Like children of any age, toddlers can benefit from using art and creative expression to explore and understand their emotions. Providing toddlers with different colored art materials, such as crayons or paints, can help them to express their emotions in a safe and creative way. Colorful books and media: Colorful books and media can also be useful tools to help toddlers understand emotions. Books with bright and engaging illustrations can help toddlers to recognize and understand different emotions, while animated shows or videos with colorful characters can help them to learn about emotions in a fun and engaging way. The wheel of emotions Here you can download our wheel of emotions PDF file. It is a great way to help your toddler learn and understand different emotions and start to understand how certain things make them feel. It can help in the learning process of emotional regulation. Use the wheel of emotions and WIN OUR GIFT CARD! Print the material, spin it with your child, color it and share the process in an Instagram story mentioning us @ettetete. Why it is important for toddlers to recognize emotions It is important for toddlers to recognize emotions because it helps them to develop important social and emotional skills that are crucial for their overall well-being and success in life. Understanding their own emotions: By recognizing and understanding their own emotions, toddlers can develop important self-awareness skills that can help them to regulate their emotions and cope with stress and challenging situations. Communicating their feelings: Recognizing emotions allows toddlers to communicate their feelings to others. This is important for building positive relationships with caregivers, peers, and family members, as well as for developing effective communication skills. Empathy and perspective-taking: Recognizing emotions in others allows toddlers to develop empathy and perspective-taking skills. These skills are crucial for building positive relationships, resolving conflicts, and understanding the needs and feelings of others.
Emotional regulation: Recognizing emotions is a crucial first step in learning how to regulate them. By understanding and recognizing their own emotions, toddlers can learn healthy ways to manage and cope with strong emotions, such as taking deep breaths or talking to a caregiver. Overall, recognizing emotions is an important developmental milestone for toddlers, as it helps them to develop important social and emotional skills that will benefit them throughout their lives. How not to overstimulate toddler with colors While colors can be beneficial for a child's development, it's important not to overstimulate them with too many colors. Here are some ways to avoid overstimulating a toddler with colors. Use a limited color palette: When decorating a toddler's room or choosing clothes for them, it's important to limit the number of colors used. Too many colors can be overwhelming and overstimulating for young children. Stick to a few calming, muted colors that are easy on the eyes. Use color in moderation: While colors can be helpful for stimulating a toddler's imagination and creativity, it's important to use them in moderation. Too much color can overstimulate and distract young children. Focus on using color in strategic ways, such as with toys or art materials, to create a visually appealing environment that's not overwhelming. Choose calming colors: When selecting colors for a toddler's environment, choose calming and soothing colors, such as soft blues or greens, that promote relaxation and calmness. Use natural materials: Whenever possible, choose natural materials, such as wood or cotton, over synthetic materials that can be overly stimulating for young children. Natural materials are less likely to contain bright colors or patterns that can be overwhelming. Pay attention to your child's cues: Every child is different, so pay attention to your child's reactions to different colors and stimuli. If you notice that your child becomes overly stimulated or agitated in a certain environment, try adjusting the colors or decor to create a more calming environment. Overall, it's important to create a calming and soothing environment for toddlers, using colors in moderation and choosing calming colors and natural materials whenever possible. By paying attention to your child's cues and adjusting the environment accordingly, you can help to avoid overstimulation and create a safe and nurturing space for your toddler to thrive in.
Between 20 and 2 million years ago, a giant “terrible beast” roamed the Earth. Known to science as Deinotherium giganteum, it belonged to the same order as elephants living today – but it’s been suggested that its fossilized remains may have once been confused for the Cyclops, giving rise to the legend. The word "cyclops" comes from the Greek for “round eye”, and it’s mentioned in numerous texts including the Odyssey. Rarely painted favorably, the enormous, one-eyed humanoids are often cannibalistic and highly dangerous, though occasionally thwarted by getting blinded in their one eye. It’s a beast you can’t help but love, but one we’ve regrettably never found any evidence of. However, it’s possible that, at one time, the discovery of supposed “Cyclops remains” may have spurred on the legend. The idea is mentioned in Adrienne Mayor’s The First Fossil Hunters: Paleontology in Greek and Roman Times (an author ancient warfare fans may recall from Greek Fire, Poison Arrows, And Scorpion Bombs). Compelling points include the association between fossil beds and the place of origin for some myths and legends, as well as the fact that they’re often depicted as emerging from the ground in a storm – something a big fossil just might do in rough conditions. "The idea that mythology explains the natural world is an old idea," said archaeologist Thomas Strasser, California State University, to National Geographic. "The ancient Greeks were farmers and would certainly come across fossil bones like this and try to explain them. With no concept of evolution, it makes sense that they would reconstruct them in their minds as giants, monsters, sphinxes, and so on.” Crete is a focus of Strasser’s, and is a place known for its myths and legends, but also the discovery of Deinotherium giganteum fossils. An almost complete specimen was found here in 2014, and their curious skulls provide a possible explanation for ancient humans believing in giant, one-eyed beasts. With downward-pointing tusks and a long lower jaw, it made for a peculiar-looking, elephant-like creature, and in its skeletonized form its large nasal opening may have been confused for a massive eye socket. It’s not such a leap when you consider how very wrong we have gotten the morphology of certain dinosaurs based on sparse fossil remains. The legend of the cyclops was probably also encouraged by living animals throughout history. As Snopes explains, "rumors about strange creatures like cyclops or human-animal hybrids, can be traced back to one sad but common occurrence: an animal with a birth defect."
Researchers at the University of California Santa Cruz are developing AI algorithms that could save lives by detecting and monitoring dangers along the shoreline. Your trip to the beach could someday be a lot safer, thanks to artificial intelligence. Researchers at the University of California Santa Cruz, led by Professor Alex Pang, are developing potentially life-saving AI algorithms geared toward detecting and monitoring potential dangers along the shoreline, according to the Santa Cruz Sentinel. The technology aims to alert lifeguards of potential hazards and detect rip currents or riptides, which account for 80% of ocean lifeguard interventions. What is Artificial Intelligence (AI)? Artificial Intelligence (AI) is a field of computer science that focuses on creating intelligent machines capable of performing tasks that typically require human intelligence. AI algorithms can learn from data and improve their performance over time without explicit programming. In the context of beach safety, AI algorithms can be trained to detect and monitor potential dangers, such as rip currents, to enhance lifeguard intervention and prevent accidents. Detecting Rip Currents: A Challenging Task The inspiration for the AI technology came to Professor Pang while windsurfing with friends. He realized just how difficult it is to spot rip currents with an untrained eye. Rip currents are powerful, narrow channels of water that move sand, organisms, and other materials offshore. They can be deadly, making prevention and early detection crucial. Improving Existing Models with Machine Learning To develop the AI algorithm, Pang and his team collaborated with NOAA scientist Gregory Dusek, who had already developed a forecast model to predict the probability of rip currents up to six days in advance. Pang’s detection model builds upon this existing model to offer real-time rip current detection. By using machine learning, the algorithm can continuously improve its recognition abilities over time. The researchers trained the algorithm using images to teach it how to detect rip currents. Enhancing Lifeguard Intervention The real-time detection and monitoring provided by the AI algorithm can significantly enhance lifeguard intervention. The system can send alerts to lifeguards when rip currents are detected, allowing them to take immediate action. Additionally, the technology can distinguish between people and surfers, ensuring that lifeguards are notified only if there are individuals in danger. AI and Water Safety: A Promising Combination AI has shown promise in enhancing water safety in various contexts. For example, a YMCA location in Ann Arbor, Michigan, partnered with Israeli company Lynxight to detect swimmers in distress using AI technology. This proactive approach helps prevent drownings by enabling swift lifeguard response. The development of AI algorithms for beach safety represents a significant step forward in preventing accidents and saving lives. By detecting and monitoring potential hazards, such as rip currents, lifeguards can be alerted in real-time, allowing for swift intervention. The continuous learning capabilities of AI algorithms ensure that their recognition abilities improve over time, further enhancing their effectiveness. As technology continues to advance, AI has the potential to revolutionize beach safety and make trips to the beach safer for everyone.
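The article does not publish the team's code, so the following is only a hypothetical sketch of the alerting logic described above: detect a likely rip current, check whether anyone other than a surfer is in the water, and only then notify lifeguards. The functions detect_rip_current and detect_people are invented placeholders standing in for trained computer-vision models; nothing here comes from the UCSC or NOAA systems.

# Hypothetical sketch of the alerting logic -- not the actual UCSC/NOAA system.
# detect_rip_current() and detect_people() are placeholders for trained models.

from dataclasses import dataclass

@dataclass
class FrameResult:
    rip_probability: float   # confidence that a rip current is visible
    people_in_water: int     # swimmers detected (surfers excluded upstream)

def detect_rip_current(frame) -> float:
    """Placeholder for a trained rip-current detector."""
    return 0.85  # pretend the model is fairly confident

def detect_people(frame) -> int:
    """Placeholder for a person detector that ignores surfers."""
    return 2

def should_alert(frame, rip_threshold: float = 0.8) -> bool:
    """Alert lifeguards only when a rip current is likely AND swimmers are present."""
    result = FrameResult(detect_rip_current(frame), detect_people(frame))
    return result.rip_probability >= rip_threshold and result.people_in_water > 0

if __name__ == "__main__":
    camera_frame = object()  # stands in for an image from a shoreline camera
    if should_alert(camera_frame):
        print("ALERT: possible rip current with swimmers nearby")

In a real deployment the two detectors would be retrained as new labeled footage arrives, which is the "continuous improvement" the researchers describe.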
All publicly accessible websites are seen as constituting a mammoth "World Wide Web" of information. The pages of a website are accessed from a common root URL called the homepage, and usually reside on the same physical server. The URLs of the pages organize them into a hierarchy, although the hyperlinks between them control how the reader perceives the overall structure and how the traffic flows between the different parts of the site. Some websites require a subscription to access some or all of their content. Examples of subscription sites include many Internet pornography sites, parts of many news sites, gaming sites, message boards, Web-based e-mail services, and sites providing real-time stock market data. The first on-line website appeared in 1991. On 30 April 1993, CERN announced that the World Wide Web would be free to anyone. A copy of the original first Web page, created by Tim Berners-Lee, is still preserved online by CERN. A website may be the work of an individual, a business or other organization and is typically dedicated to some particular topic or purpose. Any website can contain a hyperlink to any other website, so the distinction between individual sites, as perceived by the user, may sometimes be blurred. Websites are written in, or dynamically converted to, HTML (Hyper Text Markup Language) and are accessed using a software program called a Web browser, also known as an HTTP client. Web pages can be viewed or otherwise accessed from a range of computer-based and Internet-enabled devices of various sizes, including desktop computers, laptop computers, PDAs and cell phones. A website is hosted on a computer system known as a web server, also called an HTTP server; these terms can also refer to the software that runs on these systems and that retrieves and delivers the Web pages in response to requests from the website's users. Apache is the most commonly used Web server software (according to Netcraft statistics) and Microsoft's Internet Information Server (IIS) is also commonly used. A static website is one whose content is not expected to change frequently and is manually maintained by some person or persons using some type of editor software. There are three broad categories of editor software used for this purpose:
- Text editors, such as Notepad or TextEdit, where the HTML is manipulated directly within the editor program.
- WYSIWYG editors, such as Microsoft FrontPage and Macromedia Dreamweaver, where the site is edited using a GUI interface and the underlying HTML is generated automatically by the editor software.
- Template-based editors, such as Rapidweaver and iWeb, which allow users to quickly create and upload websites to a web server without having to know anything about HTML; they just pick a suitable template from a palette and add pictures and text to it in a DTP-like fashion, without ever having to see any HTML code.
A dynamic website is one that has frequently changing information or interacts with the user through various methods (HTTP cookies or database variables, e.g. previous history, session variables, or server-side variables such as environmental data) or through direct interaction (form elements, mouseovers, etc.). 
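To make the static/dynamic distinction concrete, here is a minimal sketch of a dynamically generated page using only Python's standard library. It is a generic illustration of the idea rather than an excerpt from any particular server product: the handler builds fresh HTML for every request, using the current time and a query-string parameter.

# Minimal sketch of a "dynamic" page: the HTML is generated per request.
# Illustrative only; real dynamic sites typically add a database and templates.

from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.parse import urlparse, parse_qs
from datetime import datetime
from html import escape

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        name = escape(query.get("name", ["visitor"])[0])   # e.g. /?name=Alice
        body = (
            "<html><body>"
            f"<h1>Hello, {name}!</h1>"
            f"<p>This page was generated at {datetime.now():%H:%M:%S}.</p>"
            "</body></html>"
        ).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DynamicHandler).serve_forever()

A static site would instead serve the same stored HTML file for every request; everything that changes per visitor in the sketch above is what makes the page "dynamic".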
When the Web server receives a request for a given page, the page is automatically retrieved from storage by the software in response to the page request, thus opening up many possibilities, including for example: a site can display the current state of a dialogue between users, monitor a changing situation, or provide information in some way personalized to the requirements of the individual user. There is a wide range of software systems, such as ColdFusion (CFM), Active Server Pages (ASP), Java Server Pages (JSP) and the PHP programming language that are available to generate dynamic Web systems and dynamic sites. Sites may also include content that is retrieved from one or more databases or by using XML-based technologies such as RSS. Static content may also be dynamically generated either periodically, or if certain conditions for regeneration occur (cached) in order to avoid the performance loss of initiating the dynamic engine on a per-user or per-connection basis. As noted above, there are several different spellings for this term. Although "website" is commonly used, the Associated Press Stylebook, Reuters, Microsoft, academia, and dictionaries such as Oxford and Merriam-Webster use the two-word, capitalised spelling "Web site". This is because "Web" is not a general term but a shortened form of "World Wide Web". An alternative version of the two-word spelling is not capitalised. As with many newly created terms, it may take some time before a common spelling is finalised. (This controversy also applies to derivative terms such as "Web master"/"webmaster".) The Canadian Oxford Dictionary and the Canadian Press Stylebook list "website" and "web page" as the preferred spellings. Types of websites There are many varieties of Web sites, each specialising in a particular type of content or use, and they may be arbitrarily classified in any number of ways. A few such classifications might include: - Affiliate: enabled portal that renders not only its custom CMS but also syndicated content from other content providers for an agreed fee. There are usually three relationship tiers. Affiliate Agencies (e.g Commission Junction), Advertisers (e.g Ebay) and consumer (e.g Yahoo). Combinations exist (e.g Adbrite). - Archive site: used to preserve valuable electronic content threatened with extinction. Two examples are: Internet Archive, which since 1996 has preserved billions of old (and new) Web pages; and Google Groups, which in early 2005 was archiving over 845,000,000 messages posted to Usenet news/discussion groups. - Blog (or Web log) site: site used to log online readings or to post online diaries; may include discussion forums. Examples: blogger, Xanga. - Business site: used for promoting a business or service. - Commerce site or eCommerce site: for purchasing goods, such as Amazon.com. - Community site: a site where persons with similar interests communicate with each other, usually by chat or message boards, such as MySpace. - Database site: a site whose main use is the search and display of a specific database's content such as the Internet Movie Database or the Political Graveyard. - Development site: a site whose purpose is to provide information and resources related to software development, Web design and the like. - Directory site: a site that contains varied contents which are divided into categories and subcategories, such as Yahoo! directory, Google directory and Open Directory Project. 
- Download site: strictly used for downloading electronic content, such as software, game demos or computer wallpaper. - Employment website: allows employers to post job requirements for a position or positions to be filled using the internet to advertise world wide. A prospective employee can locate and fill out a job application or submit a resume for the advertised position. - Game site: a site that is itself a game or "playground" where many people come to play, such as MSN Games and Pogo.com. - Geodomain refers to domain names that are the same as those of geographic entities, such as cities and countries. For example, Richmond.comis the geodomain for Richmond, Virginia. - Humor site: satirizes, parodies or otherwise exists solely to amuse. - Information site: contains content that is intended to inform visitors, but not necessarily for commercial purposes; such as: RateMyProfessors.com, Free Internet Lexicon and Encyclopedia. Most government, educational and non-profit institutions have an informational site. - Java applet site: contains software to run over the Web as a Web application. - Mirror (computing) site: A complete reproduction of a website. - News site: similar to an information site, but dedicated to dispensing news and commentary. - Personal homepage: run by an individual or a small group (such as a family) that contains information or any content that the individual wishes to include. - Phish site: a website created to fraudulently acquire sensitive information, such as passwords and credit card details, by masquerading as a trustworthy person or business (such as Social Security Administration, PayPal) in an electronic communication. (see Phishing). - Political site: A Web sites on which people may voice political views. - Pornography (porn) site: a site that shows pornographic images and videos. - Rating site: A site on which people can praise or disparage what is featured. Examples: ratemycar.com, ratemygun.com, ratemypet.com, hotornot.com. - Review site: A site on which people can post reviews for products or services. - Search engine site: a site that provides general information and is intended as a gateway or lookup for other sites. A pure example is Google, and the most widely known extended type is Yahoo!. - Shock site: includes images or other material that is intended to be offensive to most viewers. Examples: rotten.com, ratemypoo.com. - Gripe site: a Web site devoted to the critique of a person, place, corporation, government, or institution. - Web portal site: a website that provides a starting point, a gateway, or portal, to other resources on the Internet or an intranet. - Wedsite: a website that details a couple's wedding event, often sharing stories, photos, and event information. - Wiki site: a site which users collaboratively edit (such as Wikipedia). Some sites may be included in one or more of these categories. For example, a business website may promote the business's products, but may also host informative documents, such as white papers. There are also numerous sub-categories to the ones listed above. For example, a porn site is a specific type of eCommerce site or business site (that is, it is trying to sell memberships for access to its site). A fan site may be a vanity site on which the administrator is paying homage to a celebrity. Websites are constrained by architectural limits (e.g. the computing power dedicated to the website). 
Very large websites, such as Yahoo!, Microsoft, and Google, employ many servers and load-balancing equipment, such as Cisco Content Services Switches. In October 2006, Netcraft, an Internet monitoring company that has tracked Web growth since 1995, reported that a mammoth milestone had been reached: there were 100 million Web sites with domain names and content on them, compared to just 18,000 Web sites in August 1995.
One of the most exciting characteristics of philodendrons is their growth rate. These fast-growing indoor plants sprout new leaves almost daily. They're also effortless to grow as they are happy in nearly all environments and can handle a bit of neglect. This makes them an excellent choice for those who want houseplants without hassle. Philodendrons are a genus in the family Araceae. Within the genus, there are nearly 450 species. These species can be divided into climbing varieties and non-climbing, upright varieties. Because there are so many species of philodendron, there are many common names by which they're known. Each species has both a botanical name and a common name. Philodendrons are classified as flowering herbaceous perennials. Herbaceous perennials are plants without woody stems whose foliage dies in the winter and reemerges in spring and summer. Philodendrons are evergreens, however, and never completely die back. While their growth rate may slow during the winter months, most indoor environments are conducive to continued growth. The philodendron genus is additionally categorized in the following manner: Hemiepiphytes - These are plants that begin terrestrial and later become epiphytes. Epiphytes are plants that live on other plants in a non-parasitic way. They piggyback on other plants and take advantage of the resources surrounding their host. In the case of philodendrons, some begin as vines and later become epiphytic. The seedlings of these plants emerge and grow toward the nearest host, most often a tree. Once they've established themselves, the terrestrial roots die, and the remainder of the plant grows on the tree. Terrestrial Plants - A plant with roots that grow in soil are terrestrial. Evidence of philodendrons can be traced back thousands of years. They're considered tropical plants native to South America and the Caribbean. Now, species of philodendron can be found worldwide. Although the physical appearance of philodendrons can vary by species, they all share some general characteristics. Most philodendrons have light to deep green leaves. There are some species outside of this norm, however, that feature green and white leaves. While the leaves of philodendron plants are typically large, the shape can vary. Lobed, oval, and spear-shaped are a few of the many forms they may take. Philodendrons produce specialized leaves called cataphylls that help protect new leaves as they form. Types Of Philodendron There are both vining and upright, non-vining philodendrons. Both make excellent houseplants. The one you choose will depend upon your space and personal taste. Below is a list of the 5 most popular vining and non-vining species of philodendron. Popular Vining Philodendron Green Heartleaf Philodendron (Philodendron Hederaceum). Sometimes referred to as the sweetheart plant, the heartleaf philodendron is appropriately named for its signature heart-shaped leaves. This species is slower growing than most but long-living. Heartleaf Philodendron (Philodendron hederaceum 'Brasil'). Cultivar of the common heartleaf philodendron with variegated leaves. Aptly named Brasil for its resemblance to the Brazilian flag. Philodendron Brandtianum (Philodendron Brandtianum). Easy to grow with grey, mottled leaves. Philodendron Micans (Philodendron Micans). One of the most popular philodendron houseplants with velvety leaves. Oak leaf philodendron (Philodendron Pedatum). Extremely fast-growing variety with large leaves that resemble oak leaves. 
Popular Upright Philodendrons Xanadu (Philodendron Bipinnatifidum). Large, leathery leaves are the hallmark of this compact plant, often wider than tall. Pink Princess (Philodendron erubescens). A highly popular houseplant due to its bubblegum pink variegation. Gloriosum (Philodendron Gloriosum). Unique variety having large, heart-shaped leaves with white veining. Moonlight Philodendron (Philodendron 'Moonlight'). Stunning lime green and yellow leaves make this variety one of the most popular with growers. Birkin (Philodendron 'Birkin'). Slow-growing variety with red tones to its leaves and white striping. Upright philodendron plants are grown in containers. Depending on the size, they can make an impressive entryway or corner display. Smaller varieties can be displayed on tables or shelves. Philodendron climbers do well in hanging baskets where the vines can drop freely. If grown in a ground container, they'll need a moss pole or trellis to climb as they grow.
Astronomer Michael Brown led the campaign that controversially demoted Pluto in 2006 from the ninth planet of our solar system into just one of its many dwarf planets. Now, he hopes to fill the gap he created with what he predicts will be the discovery of a “Planet Nine” — a planet many times the size of Earth that might orbit the sun far beyond Neptune. “It was definitely not the intention,” said Brown, a professor of planetary astronomy at the California Institute of Technology in Pasadena and the author of the memoir “How I Killed Pluto and Why It Had It Coming.” “If I were prescient enough to have had all these ideas ahead of time and then demoted Pluto and found a new Planet Nine, then that would be brilliant — but it really is just a coincidence,” he said. A study published online in August by Brown and his colleague at Caltech, astrophysicist Konstantin Batygin, re-examines the evidence for a proposal they first suggested in 2016: that the hypothetical Planet Nine could explain anomalies seen by astronomers in the outer solar system, especially the unusual clustering of icy asteroids and cometary cores called Kuiper belt objects. The study has been accepted for publication by the Astronomical Journal, according to National Geographic. Despite years of looking, Planet Nine has never been seen. As a result, some astronomers have suggested it doesn’t exist and the clustering of objects noted by Brown and Batygin is the result of “observation bias” — since fewer than a dozen objects have been seen, their clustering might be a statistical fluke that wouldn’t be seen among the hundreds thought to exist. For their latest study, however, Brown and Batygin have added several recent observations of objects, and they’ve calculated that the clustering is almost certainly real — in fact, they found there is only a 0.4 percent chance that it’s a fluke. That would suggest Planet Nine is almost certainly there — and the new study includes a "treasure map" of its supposed orbit that tells astronomers the best places in the sky to look for it. Brown is working with data from several astronomical surveys, hoping to catch the first glimpse of Planet Nine. If that search is not successful, he hopes it might be seen in survey data from a new large telescope at the Vera Rubin Observatory in the mountains of northern Chile, which is scheduled to start full operations in 2023. One of the results of the new study is that the orbit of Planet Nine is closer to the sun than what the 2016 study proposed, with an elongated orbit only about 380 times the distance between Earth and the sun at its closest, instead of more than 400 times that distance. The closer orbit would make Planet Nine much brighter and much easier to see, Brown said, although their recalculations suggest it’s also a bit smaller — about six times the mass of Earth, instead of up to 20 times as big. “By virtue of being closer, even if it’s a little less massive, it’s a good bit brighter than we originally anticipated,” he said. “So I’m excited that this is going to help us find it much more quickly.” If Planet Nine does exist, it’s probably a very cold gas giant like Neptune, rather than a rocky planet like Earth. It would be smaller, however: Neptune is more than 17 times the mass of Earth. But roughly six to 10 times Earth’s mass is the most common size of gas giants seen by astronomers elsewhere in our galaxy, although there are none — so far — in our solar system, Brown said. 
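That 0.4 percent figure is a statement about how rarely randomly oriented orbits would look as clustered as the observed ones. The toy Monte Carlo below, built on invented angles, shows the general shape of such a test; it is an illustration only, not Brown and Batygin's published analysis.

# Toy significance test for angular clustering -- illustration only.
# The "observed" angles below are invented for the demo.
import numpy as np

rng = np.random.default_rng(0)

def resultant_length(angles_deg):
    """Mean resultant length R: 1 = perfectly clustered, ~0 = uniform."""
    a = np.radians(angles_deg)
    return np.abs(np.mean(np.exp(1j * a)))

# Pretend these are orientation angles (degrees) of 11 distant orbits.
observed = np.array([250, 265, 240, 270, 255, 260, 245, 275, 252, 268, 258])
r_obs = resultant_length(observed)

# How often does pure chance produce clustering at least this strong?
n_trials = 20_000
r_random = np.array([
    resultant_length(rng.uniform(0, 360, size=len(observed)))
    for _ in range(n_trials)
])
p_value = np.mean(r_random >= r_obs)

print(f"observed R = {r_obs:.3f}, chance probability ~ {p_value:.4f}")

With only 11 orbits, the answer is sensitive to how those orbits were selected in the first place, which is exactly the "observation bias" objection raised by the skeptics.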
While Planet Nine is unlikely to have formed at such a great distance from the disk of gas around the early sun, it seems likely it formed at about the same distance from the sun as Uranus and Neptune and was then flung off into the outer reaches of the solar system by the strong gravity of Saturn, he said. He dismissed the suggestion made by astronomers last year that Planet Nine might actually be a black hole orbiting the sun. “It was almost a joke when they wrote that paper,” he said. “It’s funny and it’s cute — but there is really zero reason to speculate that it might be a black hole.” As Brown and his colleagues renew their search for Planet Nine with a better idea of where to look, some other astronomers remain skeptical that it even exists. Physicist Kevin Napier, a graduate student at the University of Michigan, led a study published earlier this year that suggested the clustering of objects in the Kuiper belt was a statistical illusion. He said in an email that the extremely small number of orbits of objects used as evidence for the existence of Planet Nine — just 11 are known — is unconvincing. “There is only just so much statistical power one can draw from a dozen data points,” he said. That means the existence of Planet Nine can only be conjectured until more observations of the outer solar system are made. “Maybe we will discover a new planet lurking in the darkness, or maybe our discoveries will cause any evidence for clustering to disappear altogether,” he said. “Until then, we will keep searching the sky for new and interesting rocks, and by doing so pull our understanding of our solar system into clearer focus.”
Year 6 SATs KS2 SATs papers explained At the end of Year 6, children sit tests in: - English reading - English grammar, punctuation and spelling Paper 1: short answer questions - English grammar, punctuation and spelling Paper 2: spelling - Mathematics Paper 1: arithmetic - Mathematics Paper 2: reasoning - Mathematics Paper 3: reasoning These tests are both set and marked externally, and your child’s marks will be used in conjunction with teacher assessment to give a broader picture of their attainment. KS2 English reading test The English reading test will have a greater focus on fictional texts. There is also a greater emphasis on the comprehension elements of the new curriculum. The test consists of a reading booklet and a separate answer booklet. Pupils will have a total of 1 hour to read the 3 texts in the reading booklet and complete the questions at their own pace. There will be a mixture of genres of text. The least-demanding text will come first with the following texts increasing in level of difficulty. Pupils can approach the test as they choose: eg working through one text and answering the questions before moving on to the next. The questions are worth a total of 50 marks. KS2 English grammar, punctuation and spelling test The new grammar, punctuation and spelling test has a greater focus on knowing and applying grammatical terminology with the full range of punctuation tested. The new national curriculum sets out clearly which technical terms in grammar are to be learnt by pupils and these are explicitly included in the test and detailed in the new test framework. It also defines precise spelling patterns and methodologies to be taught, and these are the basis of spellings in the test.There will be no contextual items in the test. As in previous years, there are two papers, Paper 1: questions and Paper 2: spelling. Paper 1: questions consist of a single test paper. Pupils will have 45 minutes to complete the test, answering the questions in the test paper. The questions are worth 50 marks in total. Paper 2: Spelling consists of an answer booklet for pupils to complete and a test transcript to be read by the test administrator. Pupils will have approximately 15 minutes to complete the test, but it is not strictly timed, by writing the 20 missing words in the answer booklet. The questions are worth 20 marks in total. KS2 mathematics test There are 3 papers; Paper 1: arithmetic; Paper 2: reasoning; and Paper 3: reasoning. Paper 1: arithmetic replaces the mental mathematics test. The arithmetic test assesses basic mathematical calculations. The test consists of a single test paper. Pupils will have 30 minutes to complete the test, answering the questions in the test paper. The paper consists of 36 questions which are worth a total of 40 marks. The questions will cover straightforward addition and subtraction and more complex calculations with fractions worth 1 mark each, and long divisions and long multiplications worth 2 marks each. Papers 2 and 3 each consist of a single test paper. Pupils will have 40 minutes to complete each test, answering the questions on the test paper. Each paper will have questions worth a total of 35 marks. In some answer spaces, where pupils need to show their method, square grids are provided for the questions on the arithmetic paper and some of the questions on Paper 2. No calculators are allowed for any of the mathematics tests. A few weeks before the SATs there will be a mock examination week. 
The children will sit mock papers on the Monday, Tuesday, Wednesday, and Thursday of that week with exactly the same conditions as they will experience on the actual week. The purpose of the mock week is for the children to become familiar with the exam environment so they are as relaxed and comfortable as possible for the real thing. Tips on how you can help your child prepare The biggest single influence on your child’s SAT marks will be their reading ability. Good readers can read questions quickly, and understand what they need to do. Continue to encourage your child to read every day, looking at both stories and non-fiction. To help your child prepare for SATs use the CGP books and websites listed below. - www.woodlands-junior.kent.sch.uk/revision/index.html (general revision) - www.cgpbooks.co.uk/online_rev/ks2choice.asp (interactive revision activities) - www.parkfieldict.co.uk/sats/ (link page to other revision sites for all subjects)
[Image ©Luc Perrot, shown with permission. Caption: Rays through a raindrop – the bright arcs are caustics.]
[Image caption: Caustics around shadows of petals floating on water.]
Supernumeraries are diffraction effects associated with a light caustic. Light rays reflected once inside a raindrop form a primary rainbow. The once-reflected rays emerge in a number of directions. As they do so they fold over and intersect each other. The surface where the rays cluster and intersect most densely marks the rainbow’s bright rim. It is a caustic sheet that divides space into a region where there are no rays and a region where rays intersect. Caustics occur elsewhere: below sunlit wavy water, giving dancing ripples along swimming pools or in stream beds; in the atmosphere, giving strongly twinkling stars; and in strong gravitational lensing of galaxies. Close to the raindrop caustic sheet, each intersecting ray/wave pair coalesces and interferes. Coincident wave crests give brightness; out-of-phase crests give darkness. The result is a set of light and dark diffraction fringes parallel to the caustic sheet – the rainbow’s supernumerary arcs.
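In a schematic two-ray picture (a simplification; the full supernumerary pattern is described by the Airy integral), the fringe condition can be written in terms of the path difference \(\Delta\) between the two coalescing rays and the wavelength \(\lambda\):

\[
\Delta = m\lambda \quad \text{(crests coincide: bright fringe)}, \qquad
\Delta = \left(m + \tfrac{1}{2}\right)\lambda \quad \text{(crest meets trough: dark fringe)}, \qquad m = 0, 1, 2, \ldots
\]

Because \(\Delta\) depends on the wavelength, the positions of the bright and dark fringes shift slightly from one wavelength to another.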
The gusting westerly winds that dominate the climate in central Asia, setting the pattern of dryness and location of central Asian deserts, have blown mostly unchanged for 42 million years. A University of Washington geologist led a team that has discovered a surprising resilience to one of the world’s dominant weather systems. The finding could help long-term climate forecasts, since it suggests these winds are likely to persist through radical climate shifts. “So far, the most common way we had to reconstruct past wind patterns was using climate simulations, which are less accurate when you go far back in Earth’s history,” said Alexis Licht, a UW assistant professor of Earth and space sciences who is lead author of the paper published in August in Nature Communications. “Our study is one of the first to provide geological constraints on the wind patterns in deep time.” Earlier studies of the Asian climate’s history used rocks from the Loess Plateau in northwestern China to show dust accumulation began 25 million to 22 million years ago and increased over time, especially over the past 3 million years. It had been believed that these rocks reflected the full history of central Asian deserts, linking them with the rise of the Tibetan Plateau and a planetwide cooling. But Licht led previous research at the University of Arizona using much older rocks, dating back more than 40 million years, from northeastern Tibet. Dust in those rocks confirmed the region was already parched during the Eocene epoch. This upended previous beliefs that the region’s climate at that time was more subtropical, with regional wind patterns bringing more moisture from the tropics. The new paper traces the origin of this central Asian dust using samples from the area around Xining, the largest city at the northeastern corner of the Tibetan Plateau. Chemical analyses show that the dust came from areas in western China and along the northern edge of the Tibetan Plateau, like today, and was carried by the same westerly winds. “The origin of the dust hasn’t changed for the last 42 million years,” Licht said. During the Eocene, the Tibetan Plateau and Himalayan Mountains were much lower, temperatures were hot, new mammal species were rapidly emerging, and Earth’s atmosphere contained three to four times more carbon dioxide than it does today. “Neither Tibetan uplift nor the decrease in atmospheric carbon dioxide concentration since the Eocene seem to have changed the atmospheric pattern in central Asia,” Licht said. “Wind patterns are influenced by changes in Earth’s orbit over tens or hundreds of thousands of years, but over millions of years these wind patterns are very resilient.” The study could help predict how climates and ecosystems might shift in the future. “If we want to have an idea of Earth’s climate in 100 or 200 years, the Eocene is one of the best analogs, because it’s the last period when we had very high atmospheric carbon dioxide,” Licht said. Results of the new study show that the wind’s strength and direction are fairly constant over central Asia, so the amount of rain in these dry zones depends mostly on the amount of moisture in the air, which varies with carbon dioxide levels and air temperature. The authors conclude that winds will likely remain constant, but global warming could affect rainfall through changes in the air’s moisture content. “Understanding the mechanism of those winds is a first step to understand what controls rainfall and drought in this very wide area,” Licht said. 
“It also provides clues to how Asian circulation may change, since it suggests these westerly winds are a fundamental feature that have persisted for far longer than previously believed.” A. Licht, G. Dupont-Nivet, A. Pullen, P. Kapp, H. A. Abels, Z. Lai, Z. Guo, J. Abell, D. Giesler. Resilience of the Asian atmospheric circulation shown by Paleogene dust provenance. Nature Communications, 2016; 7: 12390 DOI: 10.1038/ncomms12390
- Converting hexadecimal, binary and decimal numbers.
- Hexadecimal Bee Game
- You'll learn how to write numbers as Roman numerals as well as convert Roman numerals back into standard numerals.
- Adding and Subtracting: play this game with a partner and see who can make it to the hive first!
- Students categorize numbers into the real number system.
- Number Spellings – Large Numbers: choose the correct written form of each number.
- Fewest Coins (Green): exchange quarters, dimes, nickels, and pennies for the fewest coins possible, at the highest level of differentiation.
- 2-digit multiplication: 1. Click the balloon with the correct answer. 2. Each question must be answered within 4 seconds.
- Rounding natural numbers to the nearest ten and hundred (in Turkish): round the given numbers to the nearest ten and hundred.
- Let's have fun exploring addition in the many forms it can appear in. Let's go! Instructions: 1. There are 4 options; choose the correct answer. 2. Think about your answer. 3. You can use pen and paper to help you calculate.
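The first entry above drills conversion between hexadecimal, binary and decimal. As a quick worked reference (a generic sketch, separate from the games themselves), Python's built-in int() and format() perform the same conversions:

# Converting between decimal, binary and hexadecimal with Python built-ins.

n = int("ff", 16)          # hexadecimal "ff" -> decimal 255
print(n)                   # 255
print(format(n, "b"))      # decimal 255 -> binary "11111111"
print(format(n, "x"))      # decimal 255 -> hexadecimal "ff"
print(int("11111111", 2))  # binary back to decimal: 255

# The same conversion by hand: ff = 15*16 + 15 = 255, and 255 in binary is
# 11111111 because 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255.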
ConserWater, United States
A bird’s-eye view on water use
ConserWater uses satellite data and artificial intelligence to help predict how much water farmers should provide to their crops.
Without the proper information, it’s easy to give too little or too much water to crops. Underwatering can lead to reduced yields and overwatering can lead to both water losses and crop damage. ConserWater, headquartered in California, USA, has developed an AI tool that uses NASA satellite, weather and other real-time data along with deep learning to estimate soil moisture and predict the water needs of crops. The technology allows predictions to be made anywhere, anytime – and regardless of a farm’s size. The company says that it can do this to the same level of accuracy as ground-based soil sensors and reduce water use by over 30%. The technology also helps farmers to save energy and apply more accurate amounts of fertilizers to their fields.
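ConserWater's models are proprietary and not described in detail here, so purely as an illustration of the general idea (learning a mapping from weather and satellite-derived features to soil moisture), a toy version with entirely synthetic data might look like the following. None of the feature names, coefficients or model choices come from the company.

# Toy illustration of learning soil moisture from weather/satellite features.
# Synthetic data and a generic model -- this is NOT ConserWater's method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500

# Invented features: recent rainfall (mm), air temperature (C), humidity (%),
# and a satellite-derived vegetation index in [0, 1].
rain = rng.gamma(2.0, 3.0, n)
temp = rng.normal(24, 5, n)
humidity = rng.uniform(20, 90, n)
veg_index = rng.uniform(0.1, 0.9, n)
X = np.column_stack([rain, temp, humidity, veg_index])

# Synthetic "ground truth" soil moisture with some noise.
y = 0.04 * rain - 0.01 * temp + 0.003 * humidity + 0.3 * veg_index + rng.normal(0, 0.05, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:400], y[:400])
print("R^2 on held-out samples:", round(model.score(X[400:], y[400:]), 3))

# An irrigation rule of thumb could then compare predicted moisture to a
# crop-specific target and schedule only the shortfall.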
Not all bees are bad bees, and honey bees are essential to pollination! The bugs that spread pollen from one part of the plant to the other are known as “pollinators.” Pollinators can include bugs, animals and even the wind! Most people think of honey bees, and we’ve partnered with the National Pest Management Association (NPMA) to educate the public and bolster the industry’s outlook on these insects! Pollinators in fiction In The Poisonwood Bible, the Price family moves to the Belgian Congo as missionaries. When they arrived, one of the first things they did was plant a garden to provide food for themselves and the village they resided in, but no matter how hard they tried, the garden would not produce any fruit. It took the characters a while, but eventually they figured it out. The garden wouldn’t produce fruit because there were no natural pollinators! Pollinators in real life Pollinators are an extremely important part of the food growing process. According to Penn State’s Department of Entomology, these insects carry pollen between the two reproductive parts of the plant. This allows the plant to produce other seeds and flowers which then leads to fruit. Without pollinators to help us, much of our produce section would be obsolete. Penn State also pointed out the pollinator’s major role in the ecosystem. Pollinators enable plants to produce the nuts, berries, fruits, etc. that herbivores eat. The plants also need the pollinators to produce more seeds. Without them, plants would cease to exist, and without a food source, the herbivores would become extinct and predators wouldn’t have a food source which means they would also become extinct. As you can see, we need to preserve the pollinators to keep the ecosystem balanced. How can I help the pollinators? Create a pollinator garden To create an oasis for a pollinator, plant flowers that are local to your region. If you’re unsure of what flowers those are, you can find a list of wildflowers by state here. If you wish to know more plants that aren’t wildflowers, you can consult your local nursery or an experienced gardener. Plants with lots of nectar and pollen are great. Next, plant these flowers in clusters. Big groupings of flowers are much more visible and noticeable to pollinators. Yellow, blue and purple flowers are especially attractive to bees. Finally, you should consider planting flowers with different bloom times. That way, the pollinators will have access to pollen and nectar year round. It is important to note that these gardens should be a haven for pollinators like honey bees, but unless you are trained to handle bees like a beekeeper is, you should not allow stinging insects to take up residence on your property as it can prove to be very dangerous. If honey bees have taken up residence, contact a pest management professional with a bee removal and relocation program to take the bees off your property. Become a beekeeper After creating a pollinator garden to support your hive, you can become an apiarist or beekeeper! Just check with your local zoning officials that your yard has the clearance and space to support such activities. But we know the beekeeping life isn’t for everyone. Even so, buying honey and bee byproducts from your local beekeeper can help them support their beekeeping hobby or business. Beekeepers will sometimes partner with farmers to help them pollinate their crops and are responsible for creating products like soaps, lotions and candles. 
Gregory and Pollinators Did you know that Gregory is pollinator friendly? We have a honey bee removal and relocation program in the Upstate of South Carolina, and if your honey bee problem happens to be in another part of our service area, we will try to refer you to a local beekeeper or “apiarist” to remove and relocate the bees. Matt Tancibok, one of our employees, has been very involved in getting the bee program up and running at Gregory. In order to become a “bee expert,” Tancibok has taken several bee classes, and is even responsible for bringing beekeeping classes here to Gregory! Thanks to Matt, Gregory has been able to partner with a local organic beekeeper to offer two cycles of beekeeping classes. Gary Wagner first got involved in beekeeping when he attended one of these classes. Both Tancibok and Wagner have hives in their backyards which is where Gregory has relocated some of the retrieved hives. Matt’s and Gary’s experiences and skills allow them to give expert advice on honey bee issues. One day, we got a call from a commercial account. PCT Magazine’s Residential Technician of 2017, Gus Walker, arrived on the scene to find that honey bees were swarming around bushes. Since this was a commercial account, there is more foot traffic and a more immediate need to get the bees out before they sting someone. So Wagner made his way to the location, and with Gus Walker helping, he placed the bees, along with their queen, in a temporary cardboard box. Other bees followed and began to reorient around the queen in a ball shape. “Within the hour, the bees were placed into a colony and have acclimated well” Wagner stated. Unfortunately, there are times when bee hives can’t be saved. They can be very dangerous to people with allergies, and it’s more important to keep our customers safe. Our primary concern is public safety, but we endeavor to preserve as many hives as possible.
Waste production is a reflection of modern consumption and lifestyles. Economically developed countries are producing more and more waste (900 kg a year per inhabitant in the United States), whereas developing countries produce very little (less than 17 kg a year per inhabitant in the Ivory Coast). We produce over 26 million tonnes of household rubbish in the UK and Northern Ireland every year. The nature of waste is also changing. The materials are synthetic and increasingly complex, resulting in pollution and health problems. Selective waste collection has been introduced in many developed countries. In developing countries, the recovery and re-use of waste materials has long been, and still is, a common activity. The large quantity of household waste is mainly due to the increase in the amount of packaging over the last 40 years. Other waste such as highly dangerous radioactive waste, which has a long lifespan (several thousand years) or industrial waste, can pose serious health and pollution problems. By thinking about the types of products we buy, how we use them and where we dispose of them, we can dramatically reduce the amount of rubbish our homes produce. Problems and challenges Waste, if not properly managed, can cause a change in the environment and its ecosystems. Globally, only 20% of household waste is treated (incinerated or recycled). Some countries in the OECD (Organisation for Economic Cooperation and Development) send their waste to countries in the Southern hemisphere in order to avoid the high cost of waste treatment. Although we produce waste, we do not want to have household waste incineration factories or landfill sites near our homes. However, waste is being produced at a much higher rate than it can be destroyed. Of all the possible solutions, reduction at source is the quickest and most efficient. However, it requires an effort to be made by governments, industry and consumers, and is not easy to put into practice as it means that everyone must change their habits. - Visit a rubbish tip to see the variety and quantity of things that are thrown away - Produce an account of the waste collection and treatment sites in the local district or town in order to understand how the local waste sector operates - Set up a composting machine to look at the idea of biodegradable waste and the plant waste sector - Visit a supermarket with a list of types of purchases and then carry out a study on the amount of waste produced - Look at the connection between consumption and waste production - Carry out a waste audit at your school to see what you are throwing away. Could your school reduce the amount it is putting into its dustbins? Could any of this waste be re-used or recycled?
Rhodes – The names of the island from ancient times until today
The first mention of the name of the island is made in Homer’s Iliad. The name seems to come from the homonymous flower “Rhodes“, which was the favorite flower of the Sun God. Many ancient coins of Rhodes bear the Sun God on one side and the flower on the other. From time to time, Rhodes had other names, which came from various properties and characteristics of the island. Some of them, along with their origin, are:
- Ofiousa – Oloessa: from the many snakes on the island
- Pelagia: because the island emerged from the sea
- Asteria: because of the star-filled sky
- Aithria: for its good climate
- Trinakria: because its capes form a triangle
- Poiesa: because of its vegetation
- Ataviria: after its highest mountain, Atavyros
- Ilias: due to the extensive sunshine during the year
- Korymvia: for the cone shape that ends in two different peaks
- Makaria: which means happy
- Pontia: which means sea
- Stadia: because its shape is like an ancient stadium
- Telchinis: after its mythical first inhabitants, the Telchines
Other written references to the names are found in ancient writers:
- Plinios the Elder: Ofiousa, Asteria, Aithria, Trinakria, Korymvia, Poiesa, Makaria, Oloessa
- Stravon: Ofiousa, Stadia, Telchinis
- Ammianos Markellinos: Pelagia
Nowadays, the island of Rhodes is also referred to as:
- The “island of the knights”, due to the large number of monuments left by the Order of the Knights of St. John, from the period when they occupied the island.
- The “island of the sun”, due to the extensive sunshine throughout the year.
- The “emerald island”, due to its green landscape, which recalls the gemstone.
Shingles is a viral infection that affects approximately 1 in 3 adults in the United States. Around half of all shingles cases occur in adults over 60 years old. It can occur in anyone who has had chickenpox, as both shingles and chickenpox are caused by the varicella-zoster virus (VZV). This virus remains in the body after chickenpox has cleared and can reactivate at any time, leading to shingles. Shingles symptoms tend to develop on one side of the face or body. They often affect just a small area. The most common location is on the side of the waist, although they can occur anywhere. Timeline of symptoms Several days before a rash appears, shingles may cause skin sensitivity or pain. Further early symptoms include: - general discomfort - hot skin Within the next 1 to 5 days, a red rash will normally form around the sensitive area. A few days later, fluid-filled blisters will develop at the site of the rash. The blisters will ooze before drying up, typically within 10 days of appearing. At this point, scabs will form on the skin, tending to heal within 2 weeks. There may be other symptoms accompanying the skin sensitivity and rash, including: - malaise or feeling of being unwell - sensitivity to light A person’s vision may be affected if the shingles occurs near the eyes.It should be noted that shingles symptoms range from mild to severe, with some people experiencing itching and mild discomfort and others having intense pain. Most cases of shingles resolve without causing long-term effects. However, potential complications include: Post-herpetic neuropathy (PHN) Post-herpetic neuropathy (PHN) is a common complication of shingles. It refers to nerve damage that causes pain and burning that persists after the shingles infection is gone. Some sources suggest that up to 20 percent of people who get shingles develop PHN with older adults thought to be especially at risk. Treating PHN is difficult, and the symptoms can last for years. However, most people fully recover within 12 months. It is not known why some people who have shingles go on to develop PHN. The risk factors for PHN include: - a weakened immune system - having pain during the early stages of a shingles infection - advanced age - having severe shingles that covers a large portion of the skin According to some research, older women who get severe pain and rash symptoms may have a 50 percent chance of developing PHN. Other potential complications of shingles include: - bacterial infections of the skin - facial paralysis - hearing loss - loss of taste - ringing in the ears - vertigo, a type of dizziness - vision problems It is important to see a doctor as soon as a person notices the symptoms of shingles. The National Institute on Aging recommend that people seek medical treatment no later than 3 days after the rash appears. Early treatment can limit pain, help the rash heal quicker, and may reduce scarring. Once a doctor confirms shingles, they may suggest the following treatments: These ease symptoms, speed up recovery, and may prevent complications. A course of antiviral medications is usually prescribed for 7 to 10 days. Options include: - acyclovir (Zovirax) - famciclovir (Famvir) - valacyclovir (Valtrex) Antiviral drugs are most effective when taken within 3 days of the rash onset, although they may still be prescribed within the first 7 days of the rash appearing. Painkillers and antihistamines Over-the-counter (OTC) or prescription medications may reduce pain and skin irritation. 
Options include: - anti-inflammatory drugs, such as ibuprofen (Advil) - antihistamines for itching, including diphenhydramine (Benadryl) - corticosteroids or local anesthetics for severe pain - numbing products, including lidocaine (Lidoderm) Certain antidepressant drugs have been proven effective in reducing shingles pain, as well as symptoms of PHN. Tricyclic antidepressants (TCAs) are most commonly prescribed for shingles pain, including: - amitriptyline (Elavil) - imipramine (Tofranil) - nortriptyline (Aventyl, Pamelor) It can take several weeks or months before antidepressants work for nerve pain. Although typically used to treat epilepsy, some anticonvulsant drugs may reduce nerve pain. Again, these can take several weeks to take effect. Commonly prescribed anticonvulsants for shingles include: - gabapentin (Neurontin) - pregabalin (Lyrica) Managing shingles symptoms In addition to seeking medical treatment, people can take other steps to alleviate their symptoms and reduce discomfort. These include: - getting enough sleep and rest - using a wet compress on the itchy and inflamed skin and blisters - reducing stress through a healthy lifestyle, meditation, and deep breathing exercises - wearing loose-fitting clothing made of natural fibers, such as cotton. - taking an oatmeal bath - applying calamine lotion to the skin People should avoid scratching the rash and blisters as much as they can. Breaking the skin or bursting the blisters can cause infection and further complications. Is shingles contagious? Shingles is not contagious but is the reactivation of a virus already present in the body. However, a person with shingles can give chickenpox to someone who has never had the VZV infection before. Therefore, people with shingles should avoid contact with those who have never had chickenpox until their rash has completely healed. To catch the virus, someone must have direct contact with the rash. To avoid spreading VZV, people with shingles should: - Avoid close contact with people who have never had chickenpox or been vaccinated for chickenpox. - Avoid close contact with low birth-weight infants and people with a compromised immune system, such as those on HIV medication or who have had an organ transplant. - Keep the rash covered with loose, natural clothing to avoid others coming into contact with it. - Wash their hands frequently, especially after touching the rash or applying lotions to the skin. Vaccinating against shingles People over 60 should get a vaccination against shingles. There is a vaccination available to reduce the risk of developing shingles and experiencing long-term complications, such as PHN. The Centers for Disease Control and Prevention (CDC) recommends that adults aged 60 years and older have this vaccination, as it is believed to reduce the risk of shingles by 50 percent and PHN by 67 percent. People who have already had shingles can have the vaccine to prevent future occurrences. Each vaccination protects for approximately 5 years.
1. Plain Form
Let's get the most basic one out of the way. A sentence ending in plain form has no extra ending added. It's a form that gets the minimally required message through, just enough to make the communication possible. Watashi wa ke-ki wo taberu – I eat cake. Kore wa pen da – This is a pen. If you use da after a noun, or you use the dictionary form of a verb, you speak in what’s called jyotai, or direct style, in Japanese. It is one of the most basic ways to speak or write, although it comes off a little bare-bones or informal.
2. Desu/Masu
Called the desu/masu form or keitai in Japanese, this form is distal style, and the most fundamental formal sentence ending. In rough translation, desu serves as the verb "be" in English, which is added after nouns and adjectives. For example, “I’m a student” in Japanese is watashi wa gakusei desu. Masu, on the other hand, is added to a verb. For example, “I drink coffee every day” in Japanese is watashi wa mainichi ko-hi- wo nomimasu. Notice that the verb for "drink," nomu, is conjugated and followed by the ending masu. In past tense, desu/masu is changed to deshita/mashita. For example, watashi wa ko-hi- wo nomimashita, or "I drank coffee."
3. Yone/Ne
This is an ending that indicates emphasis, agreement, or a sort of hypothetical question requesting agreement. Commonly used in friendly conversation, it is formed by adding -yone or -ne at the end of an affirmative sentence. Here are some examples. Ramen suki desu yone. – You like ramen, don't you? Kyo wa samui desu ne. – It sure is cold today! Kore wa anata no pen desu ne? – This is your pen, isn't it?
4. Desu kedo/Masu kedo
This is probably one of the trickiest ones, since it isn't a convention used frequently in English. Using this ending is like intentionally creating a pause in the middle of the sentence to let the intended listener read between the lines, leaving something implied but unspoken. Let’s look at one example case. This is something you might want to say to your boss when she has clearly forgotten you have to leave the office on time today, but she still comes to you with some extra work when you’re just about to leave. Kyo wa kaeranai to ikenain desu kedo... – I have to go home today, but... In context, this sentence roughly translates as "I must leave on time today", but the sentence ending desu kedo adds a bit of demand, or an unspoken question. You are basically saying, implicitly, "I don't know if you remember it or not, but I have to leave here on time today, so what do you say? (Do you still want me to work overtime?)" The question in parentheses is the message that silently gets delivered through this magical ending! In fact, Japanese people use this ending a lot when they ask a stranger or a store clerk for something. Sumimasen, Sakura eki ni ikitain desu kedo... – Excuse me, I want to go to Sakura Station... The implied message here is "please tell me how to get to Sakura Station.” Sumimasen, kore wo henpin shitain desu kedo... – Excuse me, I want to return this... The implied message here is something like, "I want to return this, can you help me do so?"
5. Yo
This ending doesn't have any ambiguous meaning, but is used to indicate emphasis. The difference between a sentence with yo and one without it might be marginal, but adding yo is a clear indication that you truly feel something. A simple example: Kono eiga omoshiroi! – This movie is interesting! Kono eiga omoshiroi yo! – This movie sure is interesting! 
Notice that the English translations are basically the same, but in Japanese, in the second sentence the speaker is "almost" recommending the movie to the listener. The ending yo is intended to emphasize the statement that precedes it.

6. Ka

This grammatical pattern turns a statement into a question. In fact, constructing a question is quite easy in Japanese. Just add ka at the end of a sentence, and the sentence will become a question. (Note: While subjects can usually be omitted in Japanese, as in the examples below, sometimes you need to address the person or the people you're speaking to.)

Tokyo ni sunde imasu
I live in Tokyo.

Tokyo ni sunde imasu ka
Do you live in Tokyo?

So there you have the 6 most common sentence endings in Japanese! Next time you hear someone speak in Japanese, try to pay attention to the ending, and you might get the hidden message in their sentence!
The sun's activity hit a dramatic low in 2008, a historic lull that caused a similar drop in magnetic effects on Earth, with an eight-month lag, a new study suggests. The study found that many magnetic changes on Earth are indeed strongly linked to the solar activity cycle, though not in perfect synchrony, and the findings can help scientists map out some of the causes. The speed of the solar wind (the 1-million-mph stream of particles coming from the sun), as well as the strength and direction of the magnetic fields embedded in it, helped produce the low readings on our planet, researchers said.

"Historically, the solar minimum is defined by sunspot number," said study lead author Bruce Tsurutani, of NASA's Jet Propulsion Laboratory in Pasadena, Calif., in a statement. "Based on that, 2008 was identified as the period of solar minimum. But the geomagnetic effects on Earth reached their minimum quite some time later, in 2009. So we decided to look at what caused the geomagnetic minimum."

Three big factors

The sun typically follows an 11-year cycle, with periods of high activity known as solar maximums and the lulls classified as solar minimums. Currently, the sun is in an active phase of its weather cycle. The current solar activity cycle is called Solar Cycle 24. It looks like Solar Cycle 25 could be an extremely low period, according to new research announced at the annual meeting of the solar physics division of the American Astronomical Society.

Three things help determine the amount of energy transferred from the sun to Earth's magnetosphere: the speed of the solar wind, the strength of the magnetic field outside Earth (known as the interplanetary magnetic field), and which direction this field is pointing. The research team looked at each of these factors.

The scientists found that the interplanetary magnetic field was extraordinarily low in 2008 and 2009. This was an obvious contribution to the geomagnetic minimum. But it couldn't be the only explanation, researchers said, since Earth's magnetic effects dropped in 2009 but not 2008.

Using NASA's Advanced Composition Explorer satellite, the team discovered that the solar wind remained high during the sunspot minimum in 2008. It began a steady decline later, however, consistent with the timing of the decline in geomagnetic effects.

The placement of 'coronal holes'

Further investigation revealed the cause of this decrease: phenomena called coronal holes. Coronal holes are relatively dark, cold areas within the sun's outer atmosphere. Solar wind rockets at great speeds from the centers of these holes and much more slowly from their edges. During a solar minimum, coronal holes are usually found at the sun's poles, sending to Earth only the slow-moving wind from the holes' edges, not the fast stuff from their centers.

But this wasn't the case in 2008, researchers said. Rather, the holes lingered for a while at low latitudes before finally migrating to the poles in 2009. Only then, researchers said, did the speed of the solar wind at Earth begin to slow down, leading to a decrease in geomagnetic effects, which can manifest as variably intense auroras, the brilliant light shows found near Earth's poles.

So researchers are starting to get a handle on what causes geomagnetic minimums: low interplanetary magnetic field strength, along with slower solar wind speed and coronal hole placement. "It's important to understand all of these features better," Tsurutani said.
"This is all part of the solar cycle, and all part of what causes effects on Earth." The study appeared in the May 16 issue of the journal Annales Geophysicae. Follow SPACE.com for the latest in space science and exploration news on Twitterand on.
Public Health Campaigns

Promoting public health and preventing the spread of dangerous health risks is an integral communication function in modern society. Whether the focus is on the prevention and control of acquired immunodeficiency syndrome (AIDS), cancer, heart disease, or community violence, a fusion of theory and practice in communication is urgently needed to guide effective promotion efforts. Public health campaigns involve a broad set of communication strategies and activities that specialists in health promotion engage in to disseminate relevant and persuasive health information to groups of people who need such information to help them lead healthy lives.

Public health campaigns involve the strategic dissemination of information to the public in order to help groups of people resist imminent health threats and adopt behaviors that promote good health. Typically, these campaigns are designed to raise public consciousness about important health issues by educating specific groups (i.e., target audiences) about imminent health threats and risky behaviors that might harm them. Health campaigns are generally designed both to increase awareness of health threats and to move target audiences to action in support of public health. For example, public health campaigns often encourage target audience members to engage in healthy behaviors that provide resistance to serious health threats. These behaviors can include adopting healthy lifestyles that include exercise, nutrition, and stress-reduction; avoiding dangerous substances such as poisons, carcinogens, or other toxic materials; seeking opportunities for early screening and diagnosis for serious health problems; and availing themselves of the best available health-care services, when appropriate, to minimize harm.

Frailty of Messages that Promote Health

Campaigns are designed to influence public knowledge, attitudes, and behaviors, yet achieving these goals and influencing the public is no simple matter. There is not a direct relationship between the messages that are sent to people and the reactions these people have to the messages. In addition to interpreting messages in unique ways, people respond differently to the messages that they receive. For example, having drivers use their seatbelts when they drive might seem like a very straightforward public health goal. A very simple campaign might develop the message, "Wear your seatbelt when you drive!" For this message to influence the beliefs, attitudes, and values of all drivers, the campaign planner must take many different communication variables into account. Is this message clear and compelling for its intended audience? How are audience members likely to respond to this message? Will they pay attention to it? Will they adjust their behaviors in response to it? Campaign planners must do quite a bit of background research and planning to answer these questions.

Effective communication campaigns must be strategically designed and implemented. In other words, they must use carefully designed messages that match the interests and abilities of the audience for which they are designed, and they must convey the messages via the communication channels that the target audience trusts and can easily access. A primary goal of the campaign is to influence the way the audience thinks about the health threat.
If the target audience already believes this issue is very serious and of great relevance to their lives, this will lead the campaign planner to craft messages that will support these preconceptions. If, on the other hand, members of the target audience barely recognize the health threat and are not at all concerned about it, the campaign planner must design communication strategies that will raise the audience's consciousness and concern about the topic. Generally, campaign planners want to convince target audiences to recognize and take the identified health threat seriously. They want to influence the audience's beliefs, values, and attitudes about the issue to support the goals of the campaign. Only after a communication campaign raises audience consciousness and concern about the threat can it begin to influence (or persuade) the target audience to adopt specific recommendations for resisting and treating the identified health threat. The communication strategies used to raise consciousness and the strategies used to motivate action may be quite different. Message Strategies and Communication Channels Effective public health campaigns often employ a wide range of message strategies and communication channels to target high-risk populations with information designed to educate, motivate, and empower risk reduction behaviors. For this reason, modern campaigns have become increasingly dependent on integrating interpersonal, group, organizational, and mediated communication to disseminate the relevant health information effectively to specific high-risk populations. Most campaigns use mass media (i.e., newspapers, radio, television, etc.) to convey their messages to large, and sometimes diverse, audiences. These channels for communication often have the ability to reach many people over vast geographic distances. In recognition of the multidimensional nature of health communication, the most effective public health campaigns develop information dissemination strategies that incorporate multiple levels and channels of human communication. To have the greatest potential influence on the health behaviors of the target audience, public health campaigns often employ a wide range of communication channels (e.g., interpersonal counseling, support groups, lectures, workshops, newspaper and magazine articles, pamphlets, self-help programs, computer-based information systems, formal educational programs, billboards, posters, radio and television programs, and public service announcements). The use of these different media is most effective when the campaign is designed so that the different communication channels complement one another in presenting the same public health messages to different targeted audiences. Because effective use of communication channels is so important to the success of public health campaigns, research related to health communication can perform a central role in the development of an effective campaign. Such research helps campaign planners to identify consumer needs and orientations; target specific audiences; evaluate audience message behaviors; field test messages; guide message conceptualization and development; identify communication channels that have high audience reach, specificity, and influence; monitor the progress of campaign messages; and evaluate the overall effects of the campaign on target audiences and public health. Strategic Public Health Campaign Model Developing and implementing effective public health campaigns is a complex enterprise. 
Campaign planners must recognize that mere exposure to relevant health information will rarely lead directly to desired changes in health-related behavior. Edward Maibach, Gary Kreps, and Ellen Bonaguro (1993) address the complex relationship between communication efforts and campaign outcomes in their strategic health communication campaign model. This model identifies five major stages and twelve key issues that planners of public health campaigns should consider in developing and implementing their strategic campaigns. The major elements of the model are (1) campaign planning, (2) theories for guiding efforts at health promotion, (3) communication analysis, (4) campaign implementation, and (5) campaign evaluation and reorientation. Campaign planning addresses two major issues, setting clear and realistic campaign objectives and establishing a clear consumer orientation to make sure that the campaign reflects the specific concerns and cultural perspectives of the target audiences. Realistic campaign objectives refer to the purposes of the campaign. Identifying an important public health threat or issue that can be effectively addressed by a campaign is a crucial first step. There must be an important health issue to address, and it must be a problem that poses significant risks for groups of people. Is the identified health threat likely to be reduced through the implementation of a public health campaign? Are there clearly identified and proven strategies for addressing the threat that can be promoted by the campaign? Are members of the campaign audience likely to adopt the health strategies that will be promoted in the campaign? These questions must be answered before a public health campaign is started, or the planners risk wasting time and money on a campaign that will have a minimal effect on public health. Adopting a consumer orientation means that the whole campaign is designed from the unique cultural perspective of the target audience and that members of the audience are involved as much as possible in the planning and implementation of the campaign. It is imperative that the campaign planners clearly understand the orientation and predisposition of the target audience in order to craft the most appropriate and effective campaign for that audience. Campaign planners must identify specific (well-segmented) target populations who are most at-risk for the identified health threats that will be addressed in the campaign. These populations of individuals become the primary audiences to receive strategic campaign messages. Research related to public health campaigns focuses on the effective dissemination of relevant health information to promote public health. To develop and design persuasive campaign messages that will be influential with the specific target audiences, campaign planners must conduct audience analysis research to gather relevant information about the health behaviors and orientations of the target audiences. Audience analysis also helps campaign planners learn about the communication characteristics and predisposition of target audiences. Theories for Guiding Efforts at Health Promotion Once the basic campaign planning has been completed, it must be determined which established behavioral and social science theories will be used as guides for developing overall campaign strategies and materials. The best theories have been tested in many different contexts (with different populations) and provide the campaign planners with good advice in directing campaign efforts. 
Too often campaign planners are in such a rush to provide people with health-related messages that they do not carefully design their communication strategies. Theory provides campaign planners with strategies for designing, implementing, and evaluating communication campaigns. There is a wide range of behavioral theories of communication, persuasion, and social influence that can be effectively used to guide public health campaigns. The theories that are adopted will direct the campaign planners' use of message strategies to influence key public audiences. For example, Albert Bandura (1989) developed an insightful theory concerning self-efficacy as a key variable in behavior change, which would lead campaign planners to develop message strategies that build the confidence of members of the target audience to implement and institutionalize campaign recommendations into their lives. Campaign planners who apply exchange theories to their efforts are likely to craft messages that identify the personal detriments of health risks and the benefits of adopting proposed behavioral changes. These theories direct the persuasive communication strategies used in public health campaigns.

Communication analysis identifies three critical issues in designing public health campaigns: (1) audience analysis and segmentation, (2) formative research, and (3) channel analysis and selection. Audience segmentation involves breaking down large, culturally diverse populations into smaller, more manageable, and more homogeneous target audiences for health promotion campaigns. The greater the cultural homogeneity (i.e., the more they share cultural attributes and backgrounds) of a target audience, the better able campaign planners are to design messages specifically for them. With a diverse target audience, the campaign planner is hard pressed to develop message strategies that will appeal to all parts of the population. It is far more effective to target an audience that shares important cultural traits and is likely to respond similarly to campaign message strategies. After segmenting the target audience into the most culturally homogeneous group possible, the campaign planners should gather as much information as possible about the group's relevant cultural norms, beliefs, values, and attitudes in order to guide the design of the campaign. The more complete the audience analysis process, the more prepared the campaign planners are to tailor the messages to the specific needs and predilections of the target audience. Audience analysis can take the form of surveys, focus group discussions, or consultation of existing research results that are available and describe key aspects of the population of interest.

Formative research is the process used to guide the design and development of the campaign by gathering relevant information about the ways in which representatives of the target audience react to campaign messages. In essence, it is an early method of testing the effect of the developing campaign while changes, updates, and other refinements can still be made to reflect audience responses. Formative research can also help campaign planners make knowledgeable choices about which communication channels to use in the campaign, because those channels should be the ones most likely to reach and influence the specific target audiences.

Implementation involves the long-term administration and operation of the campaign.
Campaign planners must carefully establish an effective marketing mix, a concept that originates from the field of marketing and is indicative of the growth of social applications of marketing principles (i.e., social marketing) in public health campaigns. The marketing mix is based on product, price, placement, and promotion. In other words, campaign planners try to establish clear sets of campaign activities (products) to promote objectives that audience members can adopt with minimal economic or psychological costs (price). These objectives need to be presented in an attractive manner that is very likely to reach the target audience (placement), and the message must provide the members of the audience with information about how, when, and where they can access campaign information and programs (promotion).

Campaign planners should carefully evaluate the campaign process, which involves identifying macrosocial conditions that may influence accomplishment of the campaign goals and designing strategies for promoting long-term involvement and institutionalization of campaign activities with the target audience. Process evaluation is used to keep track of and evaluate campaign activities in order to identify areas for fine-tuning campaign efforts. Since target audiences reside within and are interdependent with the larger society, campaign planners must attempt to involve these larger social systems, such as business organizations and government agencies, in the campaign activities. For example, planners for a campaign related to tobacco control used macrosocial factors to provide strong support for their efforts. They accomplished this by lobbying in support of government and corporate regulations that restricted smoking behaviors in public places and that corresponded nicely with their message strategies that encouraged people not to smoke. In this way, the macrosocial regulations on smoking supported the campaign goals of reducing smoking behaviors. Furthermore, the campaign planner should design strategies for the long-term involvement of the audience with the goals and activities of the campaign in order to ensure that the audience members institutionalize the messages and make them a regular part of their daily lives. An excellent strategy for such institutionalization is to empower members of the target audience to get personally involved with implementing and managing campaign programs so they have a greater stake in achieving campaign goals and so the campaign activities become part of their normative cultural activities. For example, campaigns that support increased physical activity and fitness have benefited from efforts to establish annual activities, festivals, and sporting events to institutionalize their campaign goals.

Campaign Evaluation and Reorientation

The final process involved in the strategic communication campaign model is evaluation and reorientation. At this point, a summative evaluation (i.e., an evaluation of campaign outcomes) is conducted to determine the relative success of the campaign in achieving its goals at an acceptable cost, as well as to identify areas for future public health interventions. The information gathered through such outcome evaluations reorients campaign planners to the unmet health needs of the target audience. Such feedback is essential in leading campaign planners back to the first stage of the model (planning), where they identify new goals for health promotion.
Through this evaluative feedback loop, the strategic communication campaign model illustrates the ongoing cyclical nature of efforts at health promotion. Campaigns and Communication Research As seen in the strategic communication campaign model, communication research performs a central role in strategic public health campaigns. Data are gathered (1) in the planning stage to identify consumer needs and orientations, (2) in the communication analysis stage to target specific audiences, evaluate audience message behaviors, field test messages to guide message conceptualization and development, and to identify communication channels with high audience reach, specificity, and influence, (3) in the implementation stage to monitor the progress of campaign messages and products and to determine the extent to which campaign objectives are being achieved, and (4) in the evaluation and reorientation stage to determine the overall effects of the campaign on target audiences and public health. The strategic communication campaign model suggests that to maximize the effectiveness of efforts at health promotion, research in the area of health communication must be used to guide the development, implementation, and evaluation of strategic public health campaigns. See also:Health Communication. Albrecht, Terrance, and Bryant, Carol. (1996). "Advances in Segmentation Modeling for Health Communication and Social Marketing Campaigns." Journal of Health Communication: International Perspectives 1(1):65-80. Bandura, Albert. (1989). "Perceived Self-Efficacy in the Exercise of Control over AIDS Infection." In Primary Prevention of AIDS: Psychological Approaches, eds. Vickie M. Mays, George W. Albee, and Stanley F. Schneider. Newbury Park, CA: Sage Publications. Kreps, Gary L. (1996). "Promoting a Consumer Orientation to Health Care and Health Promotion." Journal of Health Psychology 1(1):41-48. Lefebvre, Craig, and Flora, June. (1988). "Social Marketing and Public Health Intervention." Health Education Quarterly 15(3):299-315. Maibach, Edward W.; Kreps, Gary L.; and Bonaguro, Ellen W. (1993). "Developing Strategic Communication Campaigns for HIV/AIDS Prevention." In AIDS: Effective Health Communication for the 90s, ed. Scott C. Ratzan. Washington, DC: Taylor and Francis. Ratzan, Scott C. (1999). "Editorial: Strategic Health Communication and Social Marketing of Risk Issues." Journal of Health Communication: International Perspectives 4(1):1-6. Rice, Ronald E., and Atkin, Charles K., eds. (1989). Public Communication Campaigns, 2nd ed. Newbury Park, CA: Sage Publications. Gary L. Kreps
MODIS Thermal anomalies/fire product

Thermal remote sensing can be a useful method for tracking wildfires and assessing fire damage. MODIS satellites have been collecting thermal/fire anomaly data since 2000, and the data have been used successfully by fire ecologists to track and monitor fire activity across the globe. MODIS data can be used to monitor many aspects of fire patterns and consequences, including burn scars, vegetation composition and condition, smoke emissions, and water vapor/clouds.

Fire is detected from the MODIS images using a contextual algorithm that uses the strong emission of mid-infrared radiation as a signature of fire activity (described in detail in Giglio et al. 2003). This produces an initial categorization of pixels into six categories: missing data, cloud, water, non-fire, fire, or unknown. The initial categorization often classifies non-fire pixels as fire pixels due to differences in temperature and reflectance values across different locations. All pixels classified as potential fire pixels are then compared to neighboring pixels in a series of contextual threshold tests to determine the degree of temperature differentiation of fire pixels from the background of non-fire pixels. The resulting map identifies pixels that are likely to contain active fires, and classifies the active fire pixels based on confidence (i.e. high, nominal or low confidence that the pixel represents an active fire in reality). A simplified illustration of this contextual test is sketched at the end of this section. A database of active fire products is available with information on fire occurrence and location, the rate of thermal energy emission, and an estimate of the smoldering/flaming ratio of the fire. All MODIS fire anomaly images have a spatial resolution of 1 kilometer, and each satellite (Terra and Aqua) views the entire surface of the earth every one to two days.

Three MODIS fire products currently exist:
- MOD14 is the most basic MODIS fire product; it identifies active fires and volcanic activity. It is a level 2 product that is used to generate the higher level (MOD14A1 and MOD14A2) products.
- MOD14A1 is a level 3 (more highly processed than level 2) image of thermal activity that is composited every 24 hours and packaged into 8-day products. Images are distributed as tiles, which cover a large square geographic region, and pixel size within each tile is 1 kilometer.
- MOD14A2 is a level 3 image of thermal activity that is composited over an 8-day period as a summary product. Images are distributed as tiles and pixel size within each tile is 1 kilometer.

MODIS produces global maps of fire occurrence through the MOD14, MOD14A1, and MOD14A2 products. Burned area data that map the spatial extent of recent fires are also available (product MCD45A1).

Small or relatively cool-burning fires may not be detected via MODIS imagery. Some images have a large number of pixels that are classified as unknown, but more recent algorithms are able to classify pixels with greater confidence. Additionally, while the Terra and Aqua satellites have very high temporal frequency for a satellite imager, fires burning in arid environments (especially those dominated by fine fuels like annual grasses) may change significantly between successive satellite revisits. Remotely sensed images from the two MODIS satellites (Terra and Aqua) are processed into level 2 and level 3 MODIS thermal anomaly products, which are available for free to the public. Software and hardware requirements will depend on the use of fire anomaly products.
A GIS or remote sensing program will be required to visualize maps, and other programs may be required for additional image processing.

MODIS image of thermal anomalies in a tile of northern Australia. Gray pixels represent non-fire land areas, blue represents water, white represents active fire, and yellow pixels are unknown. This image is cloudless, but clouds would be shown in purple if cloud cover existed.

High-resolution Landsat ETM+ images showing an unburned landscape in Tanzania (upper left), burn scars from a 2000 fire (upper right), and the fire pixels detected by MODIS satellites overlaid on the burn scars (lower right). (source: Case study 5.2: Remote sensing of fire disturbance in the Rungwa Ruaha landscape, Tanzania, CBD Technical Series Number 32, Convention on Biological Diversity. Available online at http://www.cbd.int/ts32/ts32-chap-5.shtml#Ecos_qua).
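For readers who want a concrete sense of how the contextual test described above works, the following Python sketch illustrates the general idea of comparing candidate fire pixels against their local background. It is only a simplified illustration: the thresholds, window size, and classification codes used here are assumptions chosen for readability, not the operational constants of the MOD14 algorithm described by Giglio et al. (2003).

# Simplified, illustrative contextual fire test (not the operational MOD14 algorithm).
# t4 and t11 are 2-D arrays of brightness temperature (K) in the ~4 and ~11 micrometre bands;
# valid_mask is True where a pixel is land, cloud-free, and not missing.
import numpy as np

def classify_fire_pixels(t4, t11, valid_mask, window=11):
    rows, cols = t4.shape
    result = np.ones((rows, cols), dtype=np.uint8)   # 1 = non-fire
    result[~valid_mask] = 0                          # 0 = unknown / missing / cloud / water
    # Absolute test: very hot pixels are accepted as fires outright (code 2).
    result[valid_mask & (t4 > 360.0)] = 2
    # Candidate pixels: warm in the 4 um band and much warmer than in the 11 um band.
    candidate = valid_mask & (t4 > 310.0) & ((t4 - t11) > 10.0)
    half = window // 2
    for r, c in zip(*np.where(candidate & (result != 2))):
        r0, r1 = max(0, r - half), min(rows, r + half + 1)
        c0, c1 = max(0, c - half), min(cols, c + half + 1)
        background = valid_mask[r0:r1, c0:c1] & ~candidate[r0:r1, c0:c1]
        if background.sum() < 8:                     # too little valid background
            result[r, c] = 0                         # leave the pixel as unknown
            continue
        bg = t4[r0:r1, c0:c1][background]
        # Contextual test: keep the candidate only if it stands well above
        # the local background statistics.
        if t4[r, c] > bg.mean() + 3.5 * max(bg.std(), 2.0):
            result[r, c] = 2
    return result

In practice, most users would not re-implement this detection step at all; they would download the MOD14, MOD14A1, or MOD14A2 tiles and read the already-classified fire masks and confidence layers in a GIS or with an HDF-capable library.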
The breeding grounds of snow geese (Chen caerulescens) consist of low grassy tundra with flat basins within 10 km of lakes, rivers, flood plains, or seas (Ehrlich, et al., 1988). Some choose rockier terrain near grassy wet tundra and flat marshy areas protected from the north by mountains. Overall they prefer coastal lagoons, marshes, tidal flats, and estuaries, but have been known to take advantage of prairies and agricultural lands.

There are three stages of development in snow geese: the hatchlings and young, the juvenile non-breeders, and the adult breeders. The young grow rapidly and are fully fledged within forty-five days. They reach maturity in two years, which is when they usually pair up in a monogamous relationship with another snow goose. The pair begins to breed for the first time in June of the third year (Belrose, 1976). Young snow geese are precocial and receive parental care from both the male and female parent (Ehrlich, et al., 1988).

We do not have information on home range for this species at this time.

Snow geese are herbivorous; they eat roots, leaves, grasses, and sedges. They have strong bills for digging up roots in thick mud. Their most common food source in the northern breeding grounds is American bulrush. As they migrate south they feed on the aquatic vegetation in wetlands and estuaries. They also forage in agricultural fields for wasted oats, corn, and winter wheat. They eat tender shoots as they come up or feed on grass, weeds, and clover. In their Louisiana wintering grounds they feed on wild rice. Snow geese also need some sort of grit such as sand or shell fragments to aid in their digestion. Foods eaten include: saltgrass, wild millet, spikerush, feathergrass, panic grass, seashore paspalum, delta duck potato, bulrush, cordgrass, cattail, ryegrass, wild rice, berries, aquatic plants and invertebrates, and agricultural crops. (Belrose, 1976)

Major predators include arctic foxes (Vulpes lagopus) and gull-like birds called jaegers (genus Stercorarius). The biggest threat occurs during the first couple of weeks after the eggs are laid and then after hatching. The eggs and young chicks are vulnerable to these predators, but adults are generally safe. Snow geese have been seen nesting near snowy owl nests, which is likely a defense against predation. Their nesting success was much lower when snowy owls were absent, which led scientists to believe that the owls, being predatory birds, were capable of keeping predators away from the nests (Tremblay, et al., 1997). (Heyland, 2000; Tremblay, et al., 1997)

Because of their large numbers, snow geese are hunted, although there are restrictions in place in order to protect the species from overhunting. In recent decades many snow geese have become agricultural pests. They sometimes opt for easy food supplies found in farm fields with tender shoots and wasted corn, wheat, and oats. (Heyland, 2000)

Alaine Camfield (editor), Animal Diversity Web. Jessica Logue (author), Western Maryland College, Randall L. Morrison (editor), Western Maryland College.

living in the Nearctic biogeographic province, the northern part of the New World. This includes Greenland, the Canadian Arctic islands, and all of North America as far south as the highlands of central Mexico.

uses sound to communicate

living in landscapes dominated by human agriculture.

having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends.
Synapomorphy of the Bilateria. uses smells or other chemicals to communicate used loosely to describe any group of organisms living together or in close proximity to each other - for example nesting shorebirds that live in large colonies. More specifically refers to a group of organisms in which members act as specialized subunits (a continuous, modular society) - as in clonal organisms. animals that use metabolically generated heat to regulate body temperature independently of ambient temperature. Endothermy is a synapomorphy of the Mammalia, although it may have arisen in a (now extinct) synapsid ancestor; the fossil record does not distinguish these possibilities. Convergent in birds. an area where a freshwater river meets the ocean and tidal influences result in fluctuations in salinity. union of egg and spermatozoan an animal that mainly eats leaves. A substance that provides both nutrients and energy to a living thing. An animal that eats mainly plants or parts of plants. fertilization takes place within the female's body offspring are produced in more than one group (litters, clutches, etc.) and across multiple seasons (or other periods hospitable to reproduction). Iteroparous animals must, by definition, survive over multiple seasons (or periodic condition changes). marshes are wetland areas often dominated by grasses and reeds. makes seasonal movements between breeding and wintering grounds Having one mate at a time. having the capacity to move from one place to another. the area in which the animal is naturally found, the region in which it is endemic. reproduction in which eggs are released by the female; development of offspring occurs outside the mother's body. the regions of the earth that surround the north and south poles, from the north pole to 60 degrees north and from the south pole to 60 degrees south. "many forms." A species is polymorphic if its individuals can be divided into two or more easily recognized groups, based on structure, color, or other similar characteristics. The term only applies when the distinct groups can be found in the same area; graded or clinal variation throughout the range of a species (e.g. a north-to-south decrease in size) is not polymorphism. Polymorphic characteristics may be inherited because the differences have a genetic basis, or they may be the result of environmental influences. We do not consider sexual differences (i.e. sexual dimorphism), seasonal changes (e.g. change in fur color), or age-related changes to be polymorphic. Polymorphism in a local population can be an adaptation to prevent density-dependent predation, where predators preferentially prey on the most common morph. breeding is confined to a particular season reproduction that includes combining the genetic contribution of two individuals, a male and a female associates with others of its species; forms social groups. uses touch to communicate that region of the Earth between 23.5 degrees North and 60 degrees North (between the Tropic of Cancer and the Arctic Circle) and between 23.5 degrees South and 60 degrees South (between the Tropic of Capricorn and the Antarctic Circle). Living on the ground. defends an area within the home range, occupied by a single animals or group of animals of the same species and held through overt defense, display, or advertisement A terrestrial biome. Savannas are grasslands with scattered individual trees that do not form a closed canopy. 
Extensive savannas are found in parts of subtropical and tropical Africa and South America, and in Australia. A grassland with scattered trees or scattered clumps of trees, a type of community intermediate between grassland and forest. See also Tropical savanna and grassland biome. A terrestrial biome found in temperate latitudes (>23.5° N or S latitude). Vegetation is made up mostly of grasses, the height and species diversity of which depend largely on the amount of moisture available. Fire and grazing are important in the long-term maintenance of grasslands. A terrestrial biome with low, shrubby or mat-like vegetation found at extremely high latitudes or elevations, near the limit of plant growth. Soils usually subject to permafrost. Plant diversity is typically low and the growing season is short. uses sight to communicate young are relatively well-developed when born Belrose, F. 1976. Ducks, Geese and Swans of North America. Harrisburg, PA: Stackpole Books. Ehrlich, P., D. Dobkin, D. Wheye. 1988. The Birder's Handbook: A field guide to the natural history of North American birds. New York: Simon & Schuster Inc. Frerichs, T. 1997. Lesser Snow Goose. Columbia, SD: U.S. Fish and Wildlife Service. Hebert, P. 2002. "Snow Goose, Chen caerulescens" (On-line). Canada's Aquatic Environments. Accessed January 28, 2004 at http://www.aquatic.uoguelph.ca/birds/speciesacc/accounts/ducks/caerules/account.htm. Heyland, J. 2000. "Canadian Wildlife Service. Greater Snow Goose" (On-line). Accessed April 9, 2002 at www.cws-scf.ec.gc.ca/hww-fap/greatsg/gsgoose.html. Tremblay, J., G. Gauthier, D. LePage, A. Desrochers. 1997. Factors affecting nesting success in Greater Snow Geese: Effects of habitat and association with snowy owls. Wilson Bulletin, 109: 449.
More than 90% of people live in areas exceeding the WHO Air Quality Guideline for fine particulate matter. How does your air measure up?

Figure A. Population-weighted annual average PM2.5 concentrations in 2019.

Three Decades of Air Pollution

With data going back to 1990, we analyze exposure to three types of air pollution: outdoor fine particulate matter (PM2.5), ozone, and household air pollution (HAP). Exposure to fine-particle outdoor pollution remains persistently high in most places. More than half of the world's population lives in areas that do not even meet WHO's least-stringent air quality target.
Interview from: History.net

Angela Zombek, assistant professor of history at the University of North Carolina-Wilmington, grew interested in military prisons during a visit to Camp Chase, a Union facility in Ohio. Over time, her studies turned to the 19th-century penitentiary movement, in which incarcerated criminals were subjected to solitary confinement and conditions designed to evoke penitence and rehabilitation. How that tradition influenced Union and Confederate military prisons during the crisis of the Civil War is the subject of her book, Penitentiaries, Punishment, and Military Prisons.

CWT: Tell me about the pre-Civil War state of prisons.
AZ: Long-term imprisonment developed at the turn of the 19th century, when the middle class thought having a public pillory wasn't a good idea anymore, because the danger in that, or in having a public execution, is arousing sympathy for criminals. Penitentiaries and a long-term system of punishment developed. Corporal punishment and executions moved behind penitentiary walls, away from the public.

CWT: Yet sometimes the public could participate?
AZ: Penitentiary executions were advertised as spectacles; if you were basically middle class, you could buy a ticket to see that. They also decided that they could charge admission so people could see the operation and see how inmates interacted with the guards. Those things came up later in military prisons. Governor David Tod of Ohio, for example, allowed curious people to tour Camp Chase for 20 cents.

AZ: People have estimated that about 490,000 people total were incarcerated during the war. That number is just limited to military prisons that got established by the Union and the Confederate governments. That number wouldn't necessarily include people who weren't POWs or who were held on suspicion of treason.

CWT: Did prisons in the North and South face different challenges?
AZ: I think they really did. That is apparent at Andersonville, Ga., which is established late in the war. There aren't clear lines of authority. That's not just related to Andersonville. When the prison at Salisbury, N.C., gets established, the first commandant is appointed by the state governor and he doesn't know if he has the authority to do anything. That was the case at both Andersonville and Salisbury. The commandants had very little authority over prison conditions. Take Henry Wirz, the commander at Andersonville. He became a scapegoat at the end of the war when he was executed. But the things he could actually do to administer the camp were few and far between. For example, he had no power over the commissary. He had the rank of captain. There were literally some commanders of the guard regiments who outranked him.

CWT: Was there a general understanding about how POWs should be treated? You mention political philosopher Francis Lieber's 1863 General Orders No. 100 on the conduct of war.
AZ: Both prisoners and officials are looking back on penitentiary imprisonment in the earlier part of the 19th century for guideposts. That's what Lieber did because he was held as a political prisoner in Europe [in his native Prussia in the 1820s]. He was well aware of the standards that wardens in the state should use to treat incarcerated criminals, and those standards are basically transferred, in terms of food, clothing, and cleanliness, over to military prisoners.

CWT: Prisoners were also worked as laborers in some cases, although Union soldiers often refused to be clerks for the Confederacy.
AZ: Those were basically efforts by Confederate officials to make up for lack of manpower. If they wanted a prisoner to be a clerk, for example, he was made to swear allegiance to the Confederacy. If Union prisoners had to go out to collect wood, that’s one thing because it’s for survival, but if they are going to work in a position sanctioned by the government that is basically shooting arrows at their own cause. There was a lot of resentment toward prisoners who decided to take those positions in the Confederacy. AZ: Lieber wrote in the General Orders No. 100 that prisoners may be made to work for their captors. So officials on both sides used prison labor to make improvements to the camp, to build barracks or forms of shelter, to dig ditches, to clean up waste in various forms. CWT: You also mention Confederates who brought slaves to prison. AZ: That was an issue that caused controversy in Columbus, Ohio, when Confederate officers held in Camp Chase brought their slaves with them. Once the civilians in Columbus got wind of that, they were absolutely irate, had no tolerance for it, started writing editorials to local newspapers drawing attention to it and contacting the Lincoln administration to stop that practice, which they eventually did. CWT: What happens at war’s end? AZ: Officials try to send prisoners home as quickly as they could. The Union did it by rank and were much more likely to send home privates than officers. They wanted to keep closer tabs on the people who actually led the companies, maybe even led the armies. That process is slow. The prisons in the South, especially in Richmond, were taken over by the U.S. government and used to help keep order in the city. That’s basically the case with Castle Thunder. CWT: Is there a takeaway for the war’s impact on our current prison system? AZ: Number one, it got the federal government involved. Before the Civil War, there was only one federal prison, the D.C. penitentiary. It was shut down in September 1862. It got reopened toward the end of the war due to the need for space. But, in the latter part of the 19th century, we start to see the opening of federal prisons. Number two, the Civil War generates another reform wave because of Congress’ investigation in 1867 of Southern military prisons and because of what had gone on during the war. The National Prison Association forms in 1870, and is drawn again to the same issues, such as the conditions, the crowding, the food, the treatment. But again the actual reform of the institutions falls by the wayside. CWT: Anything else you’d like to add? AZ: The correspondence that prisoners in both military prisons and penitentiaries had with people at home was so moving. In correspondence from family members on both sides, I saw so profoundly that relatives of convicts and relatives of POWs say you have to trust in God and put faith in him.
Types of Solutions

Depending on the type of solvent and its combination with the solute, there can be various types of solutions.

When the solvent is a gas:
- Gas + gas, e.g. a mixture of gases, CO2 in air, N2 in air.
- Gas + liquid, e.g. moisture (water vapour), mist.
- Gas + solid, e.g. a solid sublimed into air, such as I2, NH4Cl or camphor; carbon particles in air (smoke).

When the solvent is a liquid:
- Liquid + gas, e.g. dissolved O2 in water, or soda water (dissolved CO2 in water).
- Liquid + liquid, e.g. alcohol or acetone in water.
- Liquid + solid, e.g. salt or sugar in water.

When the solvent is a solid:
- Solid + gas, e.g. a solution of hydrogen in nickel.
- Solid + liquid, e.g. sodium amalgam with mercury.
- Solid + solid, e.g. alloys (brass, German silver, bronze).

Methods of Expressing the Concentration of Solutions in Different Units

There are various methods by which we can describe the concentration of a solution quantitatively:

(i) Mass percentage (w/w): Here both solute and solvent are expressed by mass.
Mass % of a component = (mass of the component in the solution / total mass of the solution) × 100
For example, 10% glucose in water means 10 g of glucose dissolved in 90 g of water, resulting in 100 g of solution.

(ii) Volume percentage (v/v): Here both solute and solvent are expressed by volume.
Volume % of a component = (volume of the component / total volume of the solution) × 100
For example, a 10% alcohol solution in water means 10 mL of alcohol dissolved in 90 mL of water, resulting in 100 mL of total solution.

(iii) Mass by volume percentage (w/v): Here the solute is expressed by its mass while the solution is expressed by its volume.
Mass-volume % = (mass of solute / volume of solution) × 100
For example, if we say that a 10% (w/v) salt solution has been prepared, it means that 10 g of salt is dissolved in enough water to make 100 mL of solution (not in 90 mL of water). This unit is commonly used in medicine and pharmacy.

(iv) Parts per million (ppm): When the relative quantity of solute in a solution is very low (i.e. present only in traces), it is difficult to express it on the normal percentage scale. Here we count the number of parts of solute per million (10^6) parts of solution.
ppm = (number of parts of the component / total number of parts of all components of the solution) × 10^6
Like percentage concentration, ppm can also be expressed on a mass/mass, volume/volume or mass/volume basis. This unit is commonly used in environmental chemistry and pharmacy. A concentration in percent can be converted to ppm by multiplying it by 10^4. For example, if we say that 0.08% of oil is present in a contaminated water sample, it means 0.08 × 10^4 = 800 ppm of oil is in the water sample.

(v) Mole fraction (x): The mole fraction of a component (solute or solvent) in a solution is expressed as
Mole fraction of a component (x) = number of moles of the component / total number of moles of all components
Being a ratio, the mole fraction (x) is dimensionless. For example, in a binary solution (containing two components A and B), if
nA = number of moles of A and
nB = number of moles of B,
the mole fractions of A and B can be expressed as
xA = nA / (nA + nB) and xB = nB / (nA + nB)
For a solution containing i components, we have
xi = ni / (n1 + n2 + ... + ni)
It is easy to show that in a given solution the sum of the mole fractions of all components is equal to unity (i.e. 1):
x1 + x2 + x3 + ... + xi = 1

Example 1: What will be the mole fraction of ethylene glycol (C2H6O2) in a solution containing 20% of C2H6O2 by mass?
Solution: Here we assume that we have 100 g of solution. The solution will then contain 20 g of ethylene glycol and 80 g of water.
Moles of C2H6O2 = 20/62 = 0.322 mol; moles of H2O = 80/18 = 4.444 mol.
x(C2H6O2) = 0.322 / (0.322 + 4.444) = 0.068

(vi) Molarity (M): It is the number of moles of solute dissolved in one litre (or one dm3) of solution.
Molarity (M) = moles of solute / volume of solution in litres

Example 2: What will be the molarity of a solution containing 5 g of NaOH in 450 mL of solution? (A numerical check of Examples 1 and 2 is given in the short sketch below.)

(vii) Molality (m): It is the number of moles of solute dissolved per kilogram of solvent.
Molality (m) = moles of solute / mass of solvent in kg
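As a quick arithmetic check on Examples 1 and 2, the short Python sketch below recomputes the mole fraction of ethylene glycol and the molarity of the NaOH solution. The rounded molar masses (62, 18, and 40 g/mol) are assumed values used only for illustration.

# Numerical check of Examples 1 and 2 (illustrative only; molar masses are rounded).
M_GLYCOL, M_WATER, M_NAOH = 62.0, 18.0, 40.0   # g/mol

# Example 1: mole fraction of ethylene glycol in a 20% (by mass) aqueous solution.
# Take 100 g of solution -> 20 g glycol + 80 g water.
n_glycol = 20.0 / M_GLYCOL                     # ~0.322 mol
n_water = 80.0 / M_WATER                       # ~4.444 mol
x_glycol = n_glycol / (n_glycol + n_water)
print(f"x(C2H6O2) = {x_glycol:.3f}")           # ~0.068

# Example 2: molarity of 5 g NaOH dissolved in 450 mL of solution.
molarity_naoh = (5.0 / M_NAOH) / 0.450         # mol per litre of solution
print(f"M(NaOH) = {molarity_naoh:.3f} mol/L")  # ~0.278 M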
(viii) Normality (N): It is the number of gram equivalents of solute dissolved per litre of the solution.
Normality (N) = number of gram equivalents of solute / volume of solution in litres
We should recall that the gram equivalent weight of a substance is its molar mass divided by its acidity (in the case of a base), its basicity (in the case of an acid), or the charge (−ve or +ve) present on it in the case of an ionic compound.

Solubility

The maximum amount of a substance (solute) that can be dissolved in a specified amount of a particular solvent at a certain temperature is called the solubility of the substance in that solvent. Solubility depends on the intermolecular interactions between the solute and the solvent as well as on temperature and pressure. Let us consider how these factors affect the dissolution of a solid and of a gas in a liquid.

(i) Solubility of a solid in a liquid: It is a law of nature that like dissolves like. This means that polar solutes dissolve in polar solvents and non-polar solutes in non-polar solvents. For example, sodium chloride and sugar dissolve readily in water but not in benzene. On the other hand, naphthalene and anthracene, being non-polar solutes, dissolve readily in a non-polar solvent like benzene but not in a polar solvent like water.

Dissolution: When a solid solute starts dissolving in a solvent, its concentration in the solution increases; this process is called dissolution.

Crystallization: Some solute particles in the solution collide with the undissolved solid solute and separate out of the solution; this process is called crystallization.

Saturation point: At dynamic equilibrium (when dissolution and crystallization occur at the same rate), the concentration of the solution is constant and no more solute can be dissolved at the given temperature and pressure. This stage of the solution is called the saturation point. The saturation point varies for different types of solutions.

Effect of pressure: Since solids and liquids are practically incompressible, pressure has essentially no effect on the solubility of a solid in a liquid.

Effect of temperature: The effect of temperature on the solubility of a solid in a liquid at dynamic equilibrium follows Le Chatelier's principle, according to which, in a nearly saturated solution:
(a) if the dissolution process is endothermic (ΔsolH > 0), the solubility should increase with a rise in temperature;
(b) if the dissolution process is exothermic (ΔsolH < 0), the solubility should decrease with a rise in temperature.
Both of these predictions are also observed experimentally.

(ii) Solubility of a gas in a liquid: Many gases dissolve in water. O2 dissolves only to a small extent in water, yet it is because of this dissolved oxygen that all aquatic life is possible. On the other hand, hydrogen chloride gas (HCl) is highly soluble in water because it forms ions. The solubility of gases in liquids is greatly affected by pressure and temperature.

Effect of pressure: The effect of pressure on the solubility of a gas in a liquid was first explained by Henry, who gave a quantitative relation between pressure and solubility. Experimentally, the concentration of a dissolved gas increases with an increase in the pressure of the gas above the solution.

Henry's law: At constant temperature, the partial pressure of the gas in the vapour phase (p) is proportional to the mole fraction of the gas (x) in the solution. It is expressed as
p = KH x
where KH is Henry's law constant. Different gases have different KH values at the same temperature; hence KH is a function of the nature of the gas.
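Henry's law is easy to apply numerically: since p = KH x, the dissolved mole fraction is simply x = p / KH. The following Python sketch uses assumed, illustrative KH values rather than tabulated constants; its only purpose is to show that a larger KH means a smaller dissolved mole fraction.

# Henry's law sketch: x = p / K_H (the K_H values below are assumed illustrative numbers).
def dissolved_mole_fraction(partial_pressure_bar, k_h_bar):
    # Mole fraction of the gas dissolved in the liquid at the given partial pressure.
    return partial_pressure_bar / k_h_bar

x_low_kh = dissolved_mole_fraction(0.8, 4.0e4)    # K_H = 40 kbar
x_high_kh = dissolved_mole_fraction(0.8, 1.5e5)   # K_H = 150 kbar
print(f"x with smaller K_H = {x_low_kh:.2e}")     # ~2.0e-05
print(f"x with larger  K_H = {x_high_kh:.2e}")    # ~5.3e-06, i.e. less soluble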
From the above equation it is clear that, at a given pressure, the higher the value of KH, the lower the mole fraction of the gas and hence the lower its solubility.

Effect of temperature: Gas molecules, when dissolved in a liquid, are in the liquid phase, and the dissolution process is like condensation, which is exothermic (heat evolving). According to Le Chatelier's principle, the solubility of a gas in a liquid, being an exothermic process, should decrease with an increase in temperature.

Vapour Pressure of Liquid Solutions

(i) Vapour pressure of liquid-liquid solutions and Raoult's law: Let
ptotal = total vapour pressure,
p1 = partial vapour pressure of component 1,
p2 = partial vapour pressure of component 2.
The quantitative relationship between the partial pressures (p1, p2) and the corresponding mole fractions (x1, x2) of the two components 1 and 2, respectively, was given by François-Marie Raoult and is known as Raoult's law, which states:

"For a solution of volatile liquids, the partial vapour pressure of each component in the solution is directly proportional to its mole fraction."

Thus, for component 1,
p1 ∝ x1, or p1 = p1° x1
Similarly, for component 2,
p2 = p2° x2
where p1° and p2° are the vapour pressures of the pure components 1 and 2, respectively, at the same temperature. From Dalton's law of partial pressures,
ptotal = p1 + p2 = p1° x1 + p2° x2 = (1 − x2) p1° + x2 p2° = p1° + (p2° − p1°) x2
Similarly, if we put x2 = 1 − x1 into ptotal = x1 p1° + x2 p2°, it can easily be shown that
ptotal = p2° + (p1° − p2°) x1

The plot of p1 or p2 versus the mole fractions x1 and x2 for a solution is linear, hence p1 and p2 are directly proportional to x1 and x2 respectively. In the figure, the dashed lines I and II represent the partial pressures of the components, while line III represents the total vapour pressure of the solution.

Conclusions from Raoult's law:
1. The total vapour pressure over the solution can be related to the mole fraction of any one component, i.e.
ptotal = p1° + (p2° − p1°) x2 = p2° + (p1° − p2°) x1
2. The total vapour pressure over the solution varies linearly with the mole fraction of component 2.
3. Depending on the vapour pressures of the pure components 1 and 2, the total vapour pressure over the solution decreases or increases with an increase in the mole fraction of component 1 (see the plot above).

Does Raoult's law differ from Henry's law? Not fundamentally; the two describe different cases, and Raoult's law can be regarded as a special case of Henry's law in which KH becomes equal to p1°. Raoult's law describes liquid-liquid solutions in which the components are volatile, while Henry's law describes solutions in which one component is so volatile that it exists as a gas, i.e. the solution of a gas in a liquid.
p = p° x (Raoult's law)
p = KH x (Henry's law)
The equations of Raoult's law and Henry's law say the same thing: the vapour pressure of a volatile component or a gas is directly proportional to its mole fraction in the solution. Only the proportionality constant differs; it is p° in Raoult's law and KH in Henry's law, and the two may or may not be equal.

(ii) Vapour pressure of solutions of solids in liquids: We know from unit 5 (class XI) that when a liquid vapourises at a certain temperature, a pressure is exerted by the vapour of the liquid over the liquid phase at equilibrium; this is known as the vapour pressure of that liquid. In a pure liquid the entire surface is occupied by liquid molecules alone, but when we add a non-volatile solute to the liquid, a part of the liquid surface is also occupied by these solute particles.
This reduces the surface area available for the solvent molecules to evaporate and therefore lowers the vapour pressure, because only the solvent part of the solution can vapourise and create vapour pressure, not the non-volatile solute part. The decrease in vapour pressure caused by a non-volatile solute does not depend on the nature of the solute but on its quantity. Experimentally it is observed that the decrease in the vapour pressure of water on adding 1 mol of sucrose to 1 kg of water is nearly the same as that on adding 1 mol of urea to 1 kg of water.

Ideal and Non-Ideal Solutions

(i) Ideal solutions:
(A) Those solutions which obey Raoult's law, i.e. p1 = p1° x1.
(B) In which there is no change in enthalpy or temperature when the components are mixed, i.e. ΔHmix = 0.
(C) In which there is no change in volume when the components are mixed, i.e. ΔVmix = 0.
(D) In which the solvent-solvent (A-A) and solute-solute (B-B) intermolecular attractive forces are nearly equal to those between solvent and solute (A-B), i.e. A-A or B-B ≈ A-B.
(E) Which are formed by mixing components of the same or nearly the same polarity.
Examples: n-hexane + n-heptane; benzene + toluene; chlorobenzene + bromobenzene; ethanol + methanol.

(ii) Non-ideal solutions:
(A) Those solutions which do not obey Raoult's law (p1 ≠ p1° x1).
(B) For which ΔHmix ≠ 0 and ΔVmix ≠ 0; a rise or fall in temperature is observed when non-ideal solutions are formed.
(C) For which A-B ≠ A-A or B-B.
(D) When p1 > p1° x1, there is a positive deviation from Raoult's law; when p1 < p1° x1, there is a negative deviation from Raoult's law.
Plots of vapour pressure against composition for two-component systems show how a solution with positive deviation and a solution with negative deviation depart from Raoult's law.

Colligative Properties

(From the Latin: co = together, ligare = to bind.) Colligative properties are properties that depend strictly on the number of solute particles and not on their nature. We have already discussed the examples of urea and sucrose, where the vapour pressure of water is decreased by the same magnitude when an equal number of moles of either compound is added to a fixed amount of water. There are four types of colligative properties:
(i) Relative lowering of vapour pressure (RLVP)
(ii) Elevation of boiling point (EBP)
(iii) Depression of freezing point (DFP)
(iv) Osmotic pressure (OP)
For describing all four colligative properties, we assume a binary solution in which the solvent and solute are denoted 1 and 2 respectively.

(i) Relative lowering of vapour pressure (RLVP): We have already discussed that the addition of a non-volatile solute decreases the vapour pressure of the solvent. The lowering in vapour pressure is given by
p1° − p1 = p1° x2
where
p1° = vapour pressure of the pure solvent,
p1 = vapour pressure of the solution after the non-volatile solute is added,
x2 = mole fraction of the solute.
From this equation,
(p1° − p1) / p1° = x2 = n2 / (n1 + n2)
Here, n1 and n2 are the numbers of moles of solvent and solute in the solution. For dilute solutions n2 << n1, so neglecting n2 in the denominator we have
(p1° − p1) / p1° = n2 / n1 = (w2 × M1) / (M2 × w1)
where w1 and w2 are the masses and M1 and M2 the molar masses of the solvent and solute respectively. Here we see that the RLVP depends only on the mole fraction of the solute, that is, on the amount of solute, so it is a colligative property.

Example 3: The vapour pressure of pure benzene at a certain temperature is 0.850 bar. A non-volatile, non-electrolyte solid weighing 0.5 g is added to 39.0 g of benzene (molar mass 78 g mol−1).
The vapour pressure of the solution is then 0.845 bar. What is the molar mass of the solid substance?
Solution: The known quantities are: p1⁰ = 0.850 bar; p1 = 0.845 bar; M1 = 78 g mol⁻¹; w2 = 0.5 g; w1 = 39 g. Substituting these values in the equation (p1⁰ − p1)/p1⁰ = (w2 × M1)/(M2 × w1):
(0.850 − 0.845)/0.850 = (0.5 × 78)/(M2 × 39)
Therefore, M2 = 170 g mol⁻¹.
(ii) Elevation in Boiling Point (EBP): RLVP and EBP are interrelated. When the vapour pressure of a solution is decreased by the addition of a non-volatile solute, its boiling point automatically increases. For example, the normal boiling point of water is 100°C (i.e. 373 K). After the addition of a non-volatile solute, the vapour pressure of water is decreased, which means that at 373 K the vapour pressure of the solution is less than atmospheric pressure. In order to raise the vapour pressure of this solution to atmospheric pressure we have to supply more heat, which results in an increase in the boiling point of the solution. EBP is expressed as ΔTb = Kb m, where ΔTb is the elevation in boiling point, m is the molality of the solution and Kb is the molal boiling point elevation (ebullioscopic) constant.
Example 4: The boiling point of benzene is 353.23 K. When 1.80 g of a non-volatile solute is dissolved in 90 g of benzene, the boiling point rises to 354.11 K. Calculate the molar mass of the solute; Kb for benzene is 2.53 K kg mol⁻¹.
The elevation (ΔTb) in the boiling point = 354.11 K − 353.23 K = 0.88 K. Substituting these values in the expression ΔTb = Kb m = Kb × (w2/M2) × (1000/w1), we get M2 = (2.53 × 1.80 × 1000)/(0.88 × 90) ≈ 58 g mol⁻¹.
(iii) Depression in Freezing Point (DFP): When the vapour pressure of the liquid phase of a solution becomes equal to the vapour pressure of its solid phase at a certain temperature, the liquid solution starts freezing, and this temperature is called the freezing point of the solution. After the addition of a solute to a pure solvent, the freezing point of the solvent decreases; this is called depression in freezing point (ΔTf), given by ΔTf = Kf m, where ΔTf is the depression in freezing point, m is the molality of the solution and Kf is the molal freezing point depression constant, or cryoscopic constant. Kf is numerically equal to the depression in freezing point of a 1 molal solution.
Example 5: 45 g of ethylene glycol (C2H6O2) is mixed with 600 g of water. Calculate (a) the freezing point depression and (b) the freezing point of the solution.
Solution: The depression in freezing point is related to the molality. The molality of the solution with respect to ethylene glycol = (45/62) mol / 0.600 kg ≈ 1.2 mol kg⁻¹, so with Kf for water taken as 1.86 K kg mol⁻¹, ΔTf = 1.86 × 1.2 ≈ 2.2 K.
Freezing point of the aqueous solution = 273.15 K − 2.2 K = 270.95 K.
(iv) Osmotic Pressure (OP): The tendency of a solvent to flow through a semipermeable membrane into a more concentrated solution is called osmosis. A semipermeable membrane (semi = partially, permeable = able to be passed through) allows only the solvent molecules, not the solute molecules, to pass into the solution; examples are living cell membranes and parchment paper. When a sufficient pressure is applied to the solution side, the movement of solvent molecules into the solution through the semipermeable membrane stops. Thus osmotic pressure is the extra pressure required to stop the movement of solvent molecules into the solution side through the SPM. van't Hoff showed that the osmotic pressure is related to the molar concentration of the solution:
π = MRT = (n2/V)RT
where π is the osmotic pressure, M the molarity of the solution, R the universal gas constant, T the temperature in kelvin and n2 the number of moles of solute in the solution.
Of the four colligative properties, osmotic pressure is the best for determining the molar mass of a solute (especially polymers and proteins) because (i) it can be measured at room temperature, and (ii) ΔTf and ΔTb are very small values that are difficult to measure accurately.
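The molar-mass relations used in Examples 3-5 above, and the osmotic-pressure relation used in Example 6 below, can be collected into a small calculator. A minimal sketch in Python (the function names and argument order are our own; R is the value quoted later in the text):

```python
# Sketch: molar mass of a non-volatile solute from colligative-property data.

R = 0.083  # L bar K^-1 mol^-1, as used in the osmotic-pressure example below

def molar_mass_from_rlvp(p0, p, w1, M1, w2):
    """(p0 - p)/p0 = (w2/M2)*(M1/w1)  =>  M2 = w2*M1*p0 / (w1*(p0 - p))."""
    return w2 * M1 * p0 / (w1 * (p0 - p))

def molar_mass_from_delta_t(delta_T, K, w1_g, w2_g):
    """delta_T = K*m = K*(w2/M2)*(1000/w1); works for EBP with Kb or DFP with Kf."""
    return K * w2_g * 1000.0 / (delta_T * w1_g)

def molar_mass_from_osmotic_pressure(pi, V_L, T, w2_g):
    """pi = (n2/V)RT = (w2/M2)*RT/V  =>  M2 = w2*R*T / (pi*V)."""
    return w2_g * R * T / (pi * V_L)

# Example 3: benzene solvent, p0 = 0.850 bar, p = 0.845 bar
print(molar_mass_from_rlvp(p0=0.850, p=0.845, w1=39.0, M1=78.0, w2=0.5))  # ~170 g/mol
```

Called with the numbers of Example 4 (ΔTb = 0.88 K, Kb = 2.53, 90 g of benzene, 1.80 g of solute), the second helper returns about 58 g mol⁻¹; with the data of Example 6 below, the third returns roughly 61,000 g mol⁻¹.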
Example 6: 200 cm³ of an aqueous solution of a protein contains 1.26 g of the protein. The osmotic pressure of such a solution at 300 K is found to be 2.57 × 10⁻³ bar. Calculate the molar mass of the protein.
Solution: The known quantities are: π = 2.57 × 10⁻³ bar, V = 200 cm³ = 0.200 L, T = 300 K, R = 0.083 L bar K⁻¹ mol⁻¹. Using π = (w2/M2)RT/V, we get M2 = w2RT/(πV) = (1.26 × 0.083 × 300)/(2.57 × 10⁻³ × 0.200) ≈ 61,000 g mol⁻¹.
KEY CONCEPT: Reverse osmosis - the extraction of pure water from water mixed with other solutes (for example, sea water containing salts). We know that in osmosis, water (or solvent) moves into the solution through a semipermeable membrane. But when a pressure greater than the osmotic pressure is applied to the solution side, only the solvent squeezes out of the solution. This process, which is just the opposite of normal osmosis, is called reverse osmosis (see figure).
Azeotropes: A binary mixture
- that retains the same composition in the vapour state as in the liquid state when distilled or partially evaporated under a certain pressure,
- that cannot be separated by fractional distillation, and
- that has a constant boiling point
is called an azeotropic mixture or azeotrope. Azeotropes that boil at a lower temperature than either of their constituents are called minimum boiling azeotropes; maximum boiling azeotropes are those that boil at a higher temperature than either of their constituents. Examples: the ethanol-water mixture (95%) is a minimum boiling azeotrope, and the nitric acid-water mixture (68% m/m) is a maximum boiling azeotrope. Minimum boiling azeotropes show positive deviation from Raoult's law, while maximum boiling azeotropes show negative deviation from Raoult's law.
Abnormal Molar Masses and Colligative Properties: For solutes which undergo association or dissociation, the observed value of a colligative property differs from the calculated value. The ratio of the observed colligative property to the calculated colligative property is known as the van't Hoff factor (i):
i = observed colligative property / calculated colligative property.
Since a colligative property is inversely related to the molar mass of the solute, abnormal molar masses are observed for solutes which associate or dissociate. Thus we can also define i as
i = normal (calculated) molar mass / abnormal (observed) molar mass.
In fact, abnormal molar masses are determined experimentally whenever association or dissociation takes place. There are three cases:
(i) For a solute which does not undergo any association or dissociation, i = 1, so observed colligative property = calculated colligative property; e.g. glucose, urea, sucrose (when dissolved in water).
(ii) For a solute which undergoes dissociation, the number of solute particles in solution increases, so i > 1 and observed colligative property > calculated colligative property. For example, if complete dissociation occurs, i = 2 for KCl and NaNO3, and i = 3 for CaCl2, Mg(NO3)2 and Na2SO4.
(iii) For a solute which undergoes association, the number of solute particles in solution decreases, so i < 1 and observed colligative property < calculated colligative property. In the case of complete association into dimers, i = 1/2: CH3COOH and C6H5COOH, for instance, form dimers (in benzene), so the number of solute particles becomes nearly half the initial number.
Example 7: The depression in freezing point observed when 0.6 mL of acetic acid (CH3COOH), having density 1.06 g mL⁻¹, is dissolved in 1 litre of water is 0.0205°C. Calculate the van't Hoff factor and the dissociation constant of the acid.
Solution: Acetic acid is a weak electrolyte and dissociates into two ions, an acetate ion and a hydrogen ion, per molecule of acetic acid:
CH3COOH ⇌ H+ + CH3COO−
If α is the degree of dissociation of acetic acid and n is the number of moles taken, then at equilibrium we have n(1 − α) moles of undissociated acetic acid, nα moles of CH3COO− and nα moles of H+ ions.
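The rest of the worked solution is not reproduced above; the sketch below (Python) carries the calculation through under the standard assumptions for this kind of example, which are ours to state: molar mass of CH3COOH = 60 g mol⁻¹, Kf of water = 1.86 K kg mol⁻¹, and 1 litre of water taken as 1 kg of solvent.

```python
# Sketch of Example 7: van't Hoff factor and Ka of acetic acid from freezing-point data.
# Assumptions (not stated in the excerpt above): M(CH3COOH) = 60 g/mol,
# Kf(water) = 1.86 K kg/mol, and 1 L of water ~ 1 kg of solvent.

volume_mL, density = 0.6, 1.06          # acetic acid taken
n = volume_mL * density / 60.0          # moles of CH3COOH (~0.0106 mol)
m = n / 1.0                             # molality, mol per kg of water

Kf = 1.86
dTf_calculated = Kf * m                 # expected depression if no dissociation occurred
dTf_observed = 0.0205

i = dTf_observed / dTf_calculated       # van't Hoff factor
alpha = i - 1                           # degree of dissociation for a 1 -> 2 ion electrolyte

# Ka = C*alpha^2 / (1 - alpha), approximating molarity by molality
Ka = m * alpha**2 / (1 - alpha)
print(f"i = {i:.3f}, alpha = {alpha:.3f}, Ka = {Ka:.2e}")
# roughly i = 1.04, alpha = 0.04, Ka ~ 1.7e-5 (textbook rounding gives ~1.9e-5)
```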
Keystone Exams Assessment Anchors and Eligible Content
CHEM.A.1: Properties and Classification of Matter
CHEM.A.1.1: Identify and describe how observable and measurable properties can be used to classify and describe matter and energy.
CHEM.A.1.1.2: Classify observations as qualitative and/or quantitative.
CHEM.A.1.1.3: Utilize significant figures to communicate the uncertainty in a quantitative observation.
CHEM.A.1.1.5: Apply a systematic set of rules (IUPAC) for naming compounds and writing chemical formulas (e.g., binary covalent, binary ionic, ionic compounds containing polyatomic ions).
CHEM.A.1.2: Compare the properties of mixtures.
CHEM.A.1.2.1: Compare properties of solutions containing ionic or molecular solutes (e.g., dissolving, dissociating).
CHEM.A.1.2.3: Describe how factors (e.g., temperature, concentration, surface area) can affect solubility.
CHEM.A.2: Atomic Structure and the Periodic Table
CHEM.A.2.1: Explain how atomic theory serves as the basis for the study of matter.
CHEM.A.2.1.1: Describe the evolution of atomic theory leading to the current model of the atom based on the works of Dalton, Thomson, Rutherford, and Bohr.
CHEM.A.2.1.2: Differentiate between the mass number of an isotope and the average atomic mass of an element.
CHEM.A.2.2: Describe the behavior of electrons in atoms.
CHEM.A.2.2.1: Predict the ground state electronic configuration and/or orbital diagram for a given atom or ion.
CHEM.A.2.2.2: Predict characteristics of an atom or an ion based on its location on the periodic table (e.g., number of valence electrons, potential types of bonds, reactivity).
CHEM.A.2.2.4: Relate the existence of quantized energy levels to atomic emission spectra.
CHEM.A.2.3: Explain how periodic trends in the properties of atoms allow for the prediction of physical and chemical properties.
CHEM.A.2.3.1: Explain how the periodicity of chemical properties led to the arrangement of elements on the periodic table.
CHEM.A.2.3.2: Compare and/or predict the properties (e.g., electron affinity, ionization energy, chemical reactivity, electronegativity, atomic radius) of selected elements by using their locations on the periodic table and known trends.
CHEM.B.1: The Mole and Chemical Bonding
CHEM.B.1.1: Explain how the mole is a fundamental unit of chemistry.
CHEM.B.1.1.1: Apply the mole concept to representative particles (e.g., counting, determining mass of atoms, ions, molecules, and/or formula units).
CHEM.B.1.3: Explain how atoms form chemical bonds.
CHEM.B.1.3.1: Explain how atoms combine to form compounds through ionic and covalent bonding.
CHEM.B.1.3.2: Classify a bond as being polar covalent, non-polar covalent, or ionic.
CHEM.B.1.4: Explain how models can be used to represent bonding.
CHEM.B.1.4.1: Recognize and describe different types of models that can be used to illustrate the bonds that hold atoms together in a compound (e.g., computer models, ball-and-stick models, graphical models, solid-sphere models, structural formulas, skeletal formulas, Lewis dot structures).
CHEM.B.1.4.2: Utilize Lewis dot structures to predict the structure and bonding in simple compounds.
CHEM.B.2: Chemical Relationships and Reactions
CHEM.B.2.1: Predict what happens during a chemical reaction.
CHEM.B.2.1.1: Describe the roles of limiting and excess reactants in chemical reactions.
CHEM.B.2.1.2: Use stoichiometric relationships to calculate the amounts of reactants and products involved in a chemical reaction.
CHEM.B.2.1.3: Classify reactions as synthesis, decomposition, single replacement, double replacement, or combustion.
CHEM.B.2.1.4: Predict products of simple chemical reactions (e.g., synthesis, decomposition, single replacement, double replacement, combustion).
CHEM.B.2.1.5: Balance chemical equations by applying the Law of Conservation of Matter.
CHEM.B.2.2: Explain how the kinetic molecular theory relates to the behavior of gases.
CHEM.B.2.2.1: Utilize mathematical relationships to predict changes in the number of particles, the temperature, the pressure, and the volume in a gaseous system (i.e., Boyle's law, Charles's law, Dalton's law of partial pressures, the combined gas law, and the ideal gas law).
Correlation last revised: 1/22/2020
Heavy elements like atomic number 118 were created by smashing calcium-48 ions into americium-243 target atoms, shown in this illustration. Credit: Thomas Tegge, Lawrence Livermore National Laboratory A quest is underway to create larger and larger atoms with more protons and neutrons than ever before. By building these super-heavy elements, scientists are not just creating new kinds of matter – they are probing the subatomic world and learning about the mysterious forces that hold atoms together. "Of course discovering something new is always very interesting, but the main motivation is, we don’t understand how nuclei work out in these extreme limits," said Dawn Shaughnessy, a chemist at Lawrence Livermore National Laboratory in Livermore, Calif. The scientists are also working toward a tantalizing goal: They hope to discover a theoretical "island of stability" where ultra-large elements all of a sudden become easier to make. While most extremely heavy atoms disintegrate in fractions of a second, theory predicts that once elements reach a magic number of protons and neutrons, they become relatively stable again. Finding these magic numbers could also provide revealing clues about how atoms work. Heaviest one yet So far, the heaviest element ever created has 118 protons. The number of protons in an atom – called the atomic number – determines what kind of element it is. So hydrogen is any atom with one proton, while oxygen is an atom with eight protons, or atomic number eight. Generally, an atom has close to equal numbers of protons and neutrons, but this isn't always the case. And an oxygen atom can gain or lose neutrons but remain oxygen, as long as it has eight protons. The heaviest element commonly found in nature – uranium – has 92 protons. Everything heavier is generally man-made. Shaughnessy's team, in collaboration with scientists at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, discovered five of the heaviest elements known, including element 118. Their other conquests include elements 113, 114, 115, and 116. Some of their latest work indicates they may be creeping closer to the island of stability. They can tell by measuring how long their atoms last before decaying, or breaking up into smaller atoms. Most super-heavy elements last only microseconds or nanoseconds before decaying; it's hard for atoms with so many protons and neutrons to hold together. But some jumbo elements, with numbers of protons or neutrons that are close to the magic numbers, can last seconds or minutes. For example, early tests of the element 114 suggested it may have a half-life as long as 30 seconds. A half-life is the time it takes for half of the substance to decay. "Even though we're not quite to the region of stability yet, we see things that can last tens of seconds, close to minutes," Shaughnessy told LiveScience. "For these kinds of things, a minute is like an eternity." Finding elements that are relatively long-lived is exciting, not just because it hints at the island of stability, but because it provides a better chance for scientists to learn more about the element. "Once you make a few atoms of something, and if they live in the few-seconds range, you can do chemistry on it," Shaughnessy said. "You can discover its fundamental chemical properties." To create their monster elements, the teams use a particle accelerator called a cyclotron to speed up beams of calcium nuclei to about 10 percent of the speed of light. 
Then they smash these calcium ions into a target of stationary atomic nuclei. For example, to create element 118 the researchers collided calcium, which has 20 protons, with californium, the element with 98 protons. Usually, the bombarding particles will just bounce off the target, but once in a while, two nuclei will stick together and create what's called a composite nucleus. Since 98 and 20 add up to 118, the resulting fused nucleus was the element 118. To find just a handful of the ultra-heavy elements, the teams had to run their experiments for months. "In a six-month experiment, we may see three to ten atoms," Shaughnessy said. The scientists rig up special detectors primed to look for the element they're hoping to create. The detectors look for the right energy signature predicted for their goal element, while using magnets to divert any other particles. Both the Lawrence Livermore-JINR team, and a competing German team, have been searching for element 120, but so far have struck out. "We both ended up not finding anything, so we think we're hitting the limit of our current capability," Shaughnessy said. "As we go higher and higher, the event rate will get even smaller. You either have to run longer experiments or you have to improve technology sensitivity on how you detect these things." (The event rate refers to how often the target element will form.) The researchers think they may be homing in on the fabled magic numbers that create stable atoms. Element 114 lasted longer than any of the super-heavy elements just below it with fewer protons. Element 116 also had a relatively long half-life, but then element 118 turned out to be less stable, lasting less than a millisecond before decaying. This tells the researchers they might be getting close – especially to the magic number of protons. The magic number of neutrons is still thought to be a ways off. "The question is how far away are we seeing the effect?" Shaughnessy said. "We know we're not at the island of stability, but we are seeing longer half-lives." The number of particles that can easily pack into an atom's nucleus is thought to depend on the complex arrangement of both protons and neutrons within the nucleus. Just as electrons in an atom have energy states, protons and neutrons also have energy levels. Each energy level can hold a certain number of protons or neutrons; when a nucleus' highest energy levels are full, the nucleus is stable. Scientists think the magic numbers are the numbers of protons and neutrons that completely fill a set of energy levels. An atom in this configuration would feel relatively secure, and wouldn't want to lose any protons or neutrons to decay into a smaller atom.
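Since the article leans on the idea of a half-life, here is a minimal worked sketch (Python); the 30-second half-life is the figure quoted for element 114 above, while the elapsed time is an arbitrary choice of ours for illustration.

```python
# Exponential decay expressed in terms of half-life: N(t) = N0 * 0.5 ** (t / t_half).

def surviving_fraction(t_seconds, half_life_seconds):
    """Fraction of the original nuclei still present after t_seconds."""
    return 0.5 ** (t_seconds / half_life_seconds)

# With a ~30 s half-life (the value reported for element 114 above),
# only a quarter of the nuclei would remain after one minute.
print(surviving_fraction(60, 30))   # 0.25
```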
Stoicism, one of the three major schools of Hellenistic philosophy, was founded in Athens in 308 B.C.E. by Zeno of Citium (334-262 B.C.E.) and further developed by his two successors, Cleanthes (331-232 B.C.E.) and Chrysippus (c. 280-206 B.C.E.). The school got its name from the “stoa poikile,” a painted colonnade in the Agora of Athens where Zeno of Citium gave his discourses. Stoicism grew out of the teachings of the Cynics, and taught that true happiness is achieved through the use of reason to understand the events taking place around us and to detach ourselves from harmful and destructive emotions. A Stoic learned to practice self-discipline in order to grow in wisdom and virtue. Stoics believed that the universe was imbued with a divine will, or natural law, and that living in accordance with it was eudaimonia (“flourishing,” an ideal life). Students were encouraged to distance themselves from the concerns of ordinary society, while at the same time improving it through their service and example. The Stoic school flourished in Greece and Rome for almost five centuries, until its decline in the second century C.E. A second phase of the school, Middle Stoicism, developed at Rhodes under Panaetius (c. 185-110 B.C.E.) and Posidonius (c. 135-50 B.C.E.), who broadened the strict doctrines of the earlier Stoics. A large number of works survive from a third stage, Roman Stoicism, which focused largely on ethics. Its proponents include the younger Seneca (c. 1-65 C.E.), Epictetus (c. 55-135 C.E.), and Marcus Aurelius (121-180 C.E.). The early Stoics provided a unified account of the world, consisting of formal logic, corporealistic physics and naturalistic ethics. Later Stoics focused on ethics, and on progression towards living in harmony with the universe, over which one has no direct control. This is evident in the works of Epictetus, Cicero (an eclectic who shared many of the moral tenets of Stoicism), Seneca the Younger, Marcus Aurelius, Cato the Younger and Dio Chrysostom. Stoic ideas had an influence on early Christianity, and on the thought of many later Western philosophers, who were particularly interested in the Stoic theory of logic. Stoicism, which acknowledged the value of each individual, also played a role in the development of democratic government. The Stoic school was founded by Zeno of Citium (334-262 B.C.E.) in Athens, Greece, around 308 B.C.E. After studying under Crates the Cynic and several other Athenian philosophers, Zeno developed his own system of thought and began teaching in the Agora of Athens at the stoa poikile (Painted Colonnade), from which the school takes its name. Upon his death in 262 B.C.E., he was succeeded by his disciple Cleanthes (331-232 B.C.E.), and then by Chrysippus (c. 280-c. 206 B.C.E.). Chrysippus was a prolific writer, and is credited with organizing and developing the teachings of Stoicism into the form in which it continued for the next four centuries. Except for a short “Hymn to Zeus” by Cleanthes, only fragments of the written works of the early Stoics are preserved. In the early second century C.E., Flavius Arrian (c. 86–160 C.E.) composed two books, Discourses and Handbook, based on the teachings of the Greek Stoic Epictetus (55-135 C.E.). These works clearly explain the Stoic system of ethics and lay out a detailed course of exercises in self-examination and self-discipline to be followed by anyone striving to become a Stoic. The power of Stoic thought is evident in the writings of Cicero (106-43 B.C.E.)
and of the Emperor Marcus Aurelius (121-180 C.E.), who both applied Stoic theory to political life. The Stoic school declined and disappeared with the fall of the Roman Empire and the rise of Christianity. However, aspects of Stoicism have continued to be part of Western thought to the present day, including ethics and theories of logic and epistemology. Certain elements of Stoic cosmology and ethics are seen in Christian doctrine. Stoics divide philosophy into three interrelated areas: physics, logic and ethics, all of which contribute to a person’s progress towards eudaimonia (a life of flourishing). The physics of Stoicism is based on the proposition that everything, including god, the mind, reason, and the soul, is matter, or that “nothing incorporeal exists.” This concept is based on two arguments: that the universe is one and therefore we cannot make a separation between the tangible and the intangible; and that since god and the world, body and soul act on each other (the body initiates thoughts in the soul and the soul initiates actions in the body), they must be of the same substance. At the most basic level the universe is constituted of an active principle, god, and a passive principle, matter. God, or logos, is the primordial fire that generates the four elements of air, fire, earth and water. Air and fire form an active rational force called breath (Greek pneuma, Latin spiritus), which acts on the more passive earth and water (physical matter). The two aspects interpenetrate each other, meaning that they both occupy the same space at the same time (crasis). The relationship between god and the world resembles the relationship between soul and body, with the soul as a fire that permeates the whole body. Since everything originates from god, or logos, the universe is imbued with divine reason, and therefore we see harmony, beauty and order in the natural world. The concept of pneuma was central to the Stoic theory of physics. The Stoics denied the existence of void in the cosmos and instead regarded the cosmos as a single, pneuma-charged organic entity. All natural substances were organized into a hierarchy of classes based on the activity and degree of organization of the pneuma. At the most basic level was hexis, the state of inanimate objects such as stone and metal, which are simply held together by their pneuma. Organic things, such as plants, which grow and reproduce but do not have cognitive power, were said to have phusis as well as hexis. Animals, which had instincts, perception, impulses and a certain amount of cognition, were said to have psuche (soul) as well as phusis and hexis. The highest level of organization of the pneuma was the possession of reason (logos), especially characterized by the use of language. Only gods and humans possessed reason. Spiritual and intellectual qualities such as justice, righteousness and virtue were considered to be portions of pneuma. According to this view, all parts of the cosmos worked together for the benefit of the whole. Stoics believed that the universe moved through a never-ending cycle of phases, each developing according to a pre-ordained design and ending in a conflagration. The basic unit of Stoic logic was the simple proposition (axioma), a primary statement of truth or falsehood. Simple propositions could be combined into more complex conditional, conjunctive and disjunctive propositions.
According to Stoicism, individual words had a corporeal existence, but propositions and concepts belonged to a class of incorporeals called lekta. According to the Stoics the use of language was closely connected with reason, and was one of the characteristics that set human beings apart from animals. A spoken sentence had three components: the object spoken of, the words of the sentence, and the meaning of those words (lekton). Stoics believed that the mind is like a blank slate at birth, and that all our cognitive experience comes through sensual experience. They developed an elaborate explanation of the way in which the mind receives and interprets sensory impressions and stores them as concepts or memories. A Stoic learned to examine sensory impressions and evaluate their truth or falsehood before accepting (assent) and responding to them. While the Epicureans believed that the most basic human impulse was the pursuit of pleasure, the Stoics identified the instinct for self-preservation and self-awareness as the “primary impulse.” This impulse came from Nature and could be seen in every newborn creature; it explained why animals instinctively knew how to behave. Human beings were initially motivated by this same primary impulse, but as they grew to adulthood they developed rationality and the notion of duty and virtue, which took precedence over self-preservation. As a person progressed in reason and virtue, he began to understand the value of others: children, family, neighbors, members of the community and, finally, all mankind, and to alter his actions accordingly. This process was called oikeiôsis, or the doctrine of appropriation. A wise person understood his role in the family and community, and acted to fulfill those roles. The eventual goal was to “live in accordance with nature,” or eudaimonia (a flourishing life). Only virtue was good, only vice was evil. Everything else, health, wealth, honor, sickness, poverty, death, was considered an “indifferent” (adiaphora). The possession of these indifferents was irrelevant to happiness, though some, such as health, were “preferred” and some, such as poverty, were “dispreferred.” These indifferents served as subject matter for the choices each person made from birth, with every correct choice being a step towards the goal of living in harmony with nature. There might be occasions when a person, guided by reason, might choose to sacrifice health or wealth for the sake of his role in the family or nation. Suffering and unhappiness resulted from passions, which were seen as mistakes in judgment and the erroneous assignment of value to something which was really an “indifferent.” Epictetus is quoted as saying, "When I see a man in a state of anxiety, I say, what can this man want? If he did not want something which is not in his power, how could he still be anxious?" A wise man using reason did not desire anything that was not in accord with Nature. The four types of passion were categorized as distress, fear, pleasure and appetite. The Stoics believed that the development of the universe was preordained by god, or divine will, and that man was therefore unable to affect the course of history by his actions. In his Discourses, Epictetus distinguished between “what is in our power” and “what is not in our power.” It is not in our power to change events, but it is in our power to change how we perceive and judge these events and their effect on our lives.
True happiness could be achieved by learning to judge events from the point of view of Nature rather than from an individual point of view. Early Stoics said that a person was either all virtue or all vice. They categorized four main types of virtue: wisdom (sophia), courage (andreia), justice (dikaiosyne), and temperance (sophrosyne), a classification derived from the teachings of Plato. A man possessing one of these virtues automatically possessed them all. True sages, or wise men, were very rare, and almost everyone could be considered a fool. Later Stoics softened this stance and placed greater emphasis on the process of becoming virtuous. Philosophy for a Stoic was not just a set of beliefs or ethical claims; it was a way of life involving constant practice and training (or askesis, from which the term ascetic derives). Stoic philosophical and spiritual practices included logic, Socratic dialogue and self-dialogue, contemplation of death, training attention to remain in the present moment (similar to some forms of Eastern meditation), and daily reflection on everyday problems and possible solutions. The Discourses and Handbook of Epictetus elaborated a system of mental exercises intended to develop the understanding of someone wishing to become a Stoic. In Meditations, which he wrote as a personal reflection, Marcus Aurelius detailed how he applied such practices in his daily life on the battlefield and in politics. For example, in Book II, part 1, he reminds himself at the start of each day that he will meet with interfering, ungrateful and arrogant people, and that none of them can truly harm him, since they act out of ignorance of what is good and evil. Techniques like these continue to have value today in teaching how to overcome difficult circumstances and resolve conflicts. Although Stoicism was considered by many early Fathers of the Church to be a part of the philosophical decline of the ancient world, many of its elements were held in high esteem, in particular, the natural law, which is a major part of the Roman Catholic and early American doctrines of secular public morality. The central Stoic concept of logos became part of Christian thought (Christian Bible, John 1). The Stoic definition of virtue as the conformance of the will to the rational order of the world has parallels with traditional Christian morality. Long before Christianity, the Stoics taught that all human beings, including women and slaves, were of equal value, and put forth the concept of a worldwide brotherhood of mankind existing in harmony and peace. Stoic cosmopolitanism influenced Augustine of Hippo's concept of the City of God. Stoicism influenced the Christian Boethius in his Consolation of Philosophy, a book that promotes Christian morality via secular philosophy; this book was highly influential in the Middle Ages.
"20/20 vision" is commonly accepted as the standard of normal distance vision for a human being. Basically it means "good visual acuity at 20 feet." So if your vision is 20/20, you can read certain sizes of letters on a Snellen chart clearly at 20 feet or closer. But if your friend has 20/15 vision, his visual acuity is better than yours: you would have to stand 15 feet away from the chart to read the smaller letters that he can read while standing 20 feet away. Conversely, someone with 20/30 vision has worse distance vision than you. A spasm of accommodation (also known as "low accommodation" or "accommodative spasm") is a condition in which the ciliary muscle of the eye remains in a constant state of contraction. Normal accommodation allows the eye to "accommodate" for near vision. Because distance acuity can remain unaffected, many people with an accommodative spasm are assumed to have normal (20/20) vision, but they actually have difficulty reading or focusing. Myopia, or Nearsightedness: Myopia occurs when an eye is too long for the corneal curvature. As a result, light rays entering the eye do not come to a sharp focus on the retina at the back of the eye. Instead, they focus further forward, producing a blurred image. People who are highly myopic have an increased risk of a retinal detachment resulting from the "pulling" on the retina. Hyperopia occurs when an eye is too short for the corneal curvature. As a result, light rays entering the eye focus behind the retina instead of precisely on the retina, and a blurred image is produced. A person with hyperopia cannot see near objects clearly. Astigmatism is the result of having a cornea that is irregular in shape. The cornea is normally round. An astigmatic cornea is oblong or "football" shaped, resulting in a condition that generally causes eyestrain, headaches and blurred or distorted vision. Presbyopia is a condition in which the focusing ability of a person's eyes has decreased to the point where vision at his reading distance becomes blurred and difficult. You may begin to notice early signs of presbyopia around the age of 40.
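To make the comparison between 20/15, 20/20 and 20/30 concrete, the short sketch below (Python; the helper name is ours) converts a Snellen fraction into decimal acuity, where larger numbers mean better distance vision.

```python
# Decimal acuity = testing distance / distance at which a "normal" eye reads the same line.
# 20/20 -> 1.0, 20/15 -> ~1.33 (better), 20/30 -> ~0.67 (worse).

def decimal_acuity(test_distance_ft, normal_distance_ft):
    return test_distance_ft / normal_distance_ft

for denom in (15, 20, 30):
    print(f"20/{denom}: decimal acuity = {decimal_acuity(20, denom):.2f}")
```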
The concrete industry is responsible for about 5% of global carbon dioxide emissions. There is no question that something must be done to help control the effect on the planet, and the University of Exeter in England just might have the solution. The most recent issue of the journal Advanced Functional Materials includes an article from the University detailing an innovative solution to reducing construction's carbon footprint – graphene reinforced concrete.
Graphene Reinforced Concrete
Adding graphene to the concrete mixture creates a product that is twice as strong and four times more resistant to water than traditional concrete. Past efforts to alter the makeup of concrete have focused on modifying its existing components; in this case, graphene is introduced as a new component, suspended in water. The introduction of graphene cuts the carbon emissions associated with concrete by about 50%: graphene reinforced concrete requires fewer materials to make, which translates to a 446 kilogram reduction in carbon emissions for each ton of concrete products.
"This ground-breaking research is important as it can be applied to large-scale manufacturing and construction. The industry has to be modernized by incorporating not only off-site manufacturing, but innovative new materials as well," said lead author of the article, Dimitar Dimov. "Finding greener ways to build is a crucial step forward in reducing carbon emissions around the world and so help protect our environment as much as possible. It is the first step, but a crucial step in the right direction to make a more sustainable construction industry for the future."
Chinese vocabulary has been built up from about 20,000 Chinese characters since ancient times, but nowadays only about half of them are in use. However, you must not confuse words with characters: most words are formed by combining characters, so there are many more words than characters. The Hanyu Da Zidian, a compendium of Chinese characters, contains 54,678 head entries for characters, while the Zhonghua Zihai, the largest character work, contains definitions for over 85,500 characters. One of the most popular dictionaries, the Hanyu Da Cidian, records over 23,000 Chinese characters and more than 350,000 definitions. Mandarin, as the main language of China, is important; each item of its vocabulary has two parts, the sound of the word and the proper tones. It is recommended to pay special attention to the tones, because many words are written similarly but are pronounced quite differently.
Helping Kids in Foster Care Learn to Manage Their Emotions and Behavior
Helping children and young people in foster care learn to manage their thoughts, emotions and energy is a two-part process. This is especially true when foster parents, kin and other caregivers are working with children who have experienced trauma. One part of helping children and young people master self-regulation is to provide them with stability through comforting daily activities, such as establishing bedtime routines or listening to music. Part two involves reading children’s signals and helping them use calm-down strategies when they are stressed. “We tend to think of regulation as something we only have to pay attention to in those acute, critical moments,” says Margaret Blaustein, a clinical psychologist with an expertise in treating complex childhood trauma. “But the experience of having increasingly organized internal states over extended periods of time actually decreases the likelihood of those intense moments.” Blaustein is the co-author of ARC Reflections, a powerful new skill-building course that teaches foster parents how to support children healing from trauma. The Annie E. Casey Foundation collaborated with the Justice Resource Institute to develop ARC Reflections, and the resulting curriculum includes a comprehensive suite of training materials, including an implementation guide, PowerPoint presentations, facilitator guides, handouts and more. During the nine sessions of ARC Reflections, caregivers learn how trauma affects children and practice key self-regulation skills at home. Facilitators work with caregivers to:
- identify patterns and activities that either soothe or upset a child;
- develop predictable daily routines that help children feel safe; and
- use ongoing activities such as sports, arts or reading to support day-to-day functioning.
Kristine Kinniburgh, a clinical social worker and co-author of ARC Reflections, stresses that self-regulation — much like any other skill — takes time and support to use independently. “Think about how kids learn to eat with a spoon,” she says. “You don’t hand them a spoon and expect them to scoop. There are a number of steps that happen. It is that way with regulation, too.” The ARC Reflections sessions also teach foster parents about helping children during moments of distress or overwhelming emotion. Advice here includes:
- offering children opportunities for control and choice, which can help calm kids who associate powerlessness with danger;
- catching moments of distress at their earliest possible point, ideally when children still have an ability to self-regulate;
- serving as a “mirror,” which encourages caregivers to name feelings and let children know that they see them and understand their needs; and
- remaining calm, cool and connected.
“Foster parents are the backbone of the child welfare system,” says Tracey Feild, managing director of the Casey Foundation’s Child Welfare Strategy Group. “We are honored and pleased to provide a curriculum that helps them understand and support the children and teenagers in their care.”
The total amount of waste deposited in US landfills (262 million tonnes) was underestimated by the US Environmental Protection Agency by as much as 140 million tonnes in 2012, according to a paper published online in Nature Climate Change. The study suggests that the associated methane emissions from landfill may also be greater than previously estimated. Landfill sites emit major quantities of methane, a particularly potent greenhouse gas. Reducing emissions from these sites by capturing and combusting methane is a crucial part of climate change policy. Jon Powell and colleagues analysed reported rates of waste disposal and operational data from 1,200 facilities across the US. They found 262 million tonnes of waste were disposed of by the facilities in 2012, about 140 million tonnes more than the US Environmental Protection Agency previously estimated. They also found waste disposal rates have been increasing by an average of 0.3% a year between 2010 and 2013. Most of the waste ends up at open landfill facilities that are actively accepting new waste, and about 91% of landfill emissions in 2012 came from such sites. They found that the efficiency of methane capture was 17 percentage points greater at closed landfills than at open landfills, further underscoring the climatic impact of open waste sites. The results suggest that the government must target efforts to reduce methane emissions at open landfill waste facilities. Furthermore, the authors argue that this study highlights the limitations of the government’s current approach to collecting waste disposal data, with significant implications for policy-based efforts to reduce emissions and tackle climate change.
An International Tsunami Survey Team (ITST) studying the effects of the December 26 tsunami on Indonesia's island of Sumatra documented wave heights of 20 to 30 m (65 to 100 ft) at the island's northwest end and found evidence suggesting that wave heights may have ranged from 15 to 30 m (50 to 100 ft) along a 100-km (60-mi) stretch of the northwest coast. These wave heights are higher than those predicted by computer models made soon after the earthquake that triggered the tsunami. "Groundtruthing" the models, which are used to forecast tsunamis for early-warning systems and long-term planning efforts, was one of the main goals of the scientific survey. The survey was conducted from January 20 to 29 in the province of Aceh, which lies only 100 km (60 mi) from the epicenter of the earthquake and sustained what many consider the worst tsunami damage of all affected areas. About a third of the 320,000 residents of Aceh's capital, Banda Aceh, are dead or missing, accounting for much of Indonesia's toll of more than 125,000 dead and 90,000 missing. Led by Yoshinobu Tsuji of the University of Tokyo's Earthquake Research Institute, the survey team consisted of nine scientists from Japan, six from Indonesia, two from France, and two from the United States. The U.S. scientists were Andy Moore of Kent State University and Guy Gelfenbaum of the U.S. Geological Survey (USGS). The team collected information about wave heights at the beach and inland, inundation distance (how far inland the water reached), runup elevation (the water's height relative to mean sea level at its farthest reach inland), flow directions, erosion, sediment deposition, and coastal subsidence. The team gathered some of its information from eyewitness accounts. Though not always reliable (commonly, eyewitnesses are running for their lives as they observe the tsunami), the eyewitness accounts collected in Sumatra provided several consistent pieces of information. Being just landward of the subduction zone where the tsunami-generating earthquake occurred, northwestern Sumatra was struck by a "near field" tsunami. In contrast, areas across the ocean from the earthquake epicenter, such as Sri Lanka, were struck by "far field" tsunamis. The rapid arrival of the tsunami in near-field locations (just 15 to 20 minutes in Banda Aceh) means that a tsunami early-warning system, now in the planning stages for the Indian Ocean, should be accompanied by tsunami education and long-term emergency and land-use planning efforts for the most effective mitigation of tsunami hazards. Because the tsunami washed out many roads and bridges, the scientists had to hike long distances to reach some field areas, and on several occasions used makeshift rafts constructed from barrels and boards to cross rivers. Despite such complications, they were able to collect much data, which will be used to improve both the scientific understanding of tsunamis and the computer models used to predict tsunami effects. To measure wave heights, the scientists looked for water stains on buildings and broken branches and debris in trees (where buildings and trees were left standing), then used laser rangefinders to calculate the heights. The west-facing coastlines were struck by the highest waves, some more than 30 m high. Waves that wrapped around the island to hit the north-facing coastline of Banda Aceh were lower, about 10 m high, but the area's low-lying land allowed those waves to penetrate far inland.
Inundation distances in the province were so large that they were most easily measured from satellite images, where sediment deposited by the waves and vegetation killed by the saltwater are clearly visible. One such image shows that the waves that struck the villages of Lampuuk and Lho Nga on the west coast met the waves that struck Banda Aceh from the north. Items broken and bent by the tsunami waves, as well as sedimentary structures in the tsunami deposits, were used to determine flow directions. The scientists found that the large tsunami waves flowed around natural barriers, flooding low-lying areas behind them. The researchers surveyed beach profiles to document erosion (common near the coast) and deposition (common inland) by the tsunami. Sand eroded from beaches probably provided much of the sand that was deposited inland. The survey team dug trenches in the tsunami deposits to measure their thickness and to examine other characteristics that can shed light on how high the waves were and how fast the water was flowing. Data from the sediment deposits will not only tell scientists about the recent tsunami but also help them recognize and interpret the deposits of ancient tsunamis, which, in turn, will help them better understand an area's tsunami history and its likely tsunami risk (see "USGS Scientists Study Sediment Deposited by 2004 Indian Ocean Tsunami" in Sound Waves, February 2005). Models predict that the type of earthquake that caused the tsunami (a megathrust) will raise the sea floor above the fault rupture and cause subsidence near the coast. So, the team was not surprised to find evidence that coastal land had subsided in Sumatra. Trees with roots and lower trunks submerged in seawater indicate that coastal land subsided 1 to 2 m (3 to 6 ft) in some areas. Japanese team member Yuichiro Tanioka and his Indonesian graduate student Yudhicara resurveyed parts of Banda Aceh for which older elevation maps were available and discovered that the land there had subsided by 28 to 57 cm (about 1 to 2 ft). The scientists' efforts were focused mainly around the very northwest end of Sumatra, but they also collected data about 100 km (60 mi) to the south, at Kreung Sabe. Wave heights of 15 m (50 ft) at that site suggest that the tsunami waves may have been unusually high, 15 to 30 m, along the entire 100-km stretch of coast from Kreung Sabe to the northwest tip of the island. USGS scientists hope to return to Sumatra in April to test this hypothesis by measuring wave heights at intermediate points along the coastline and to collect additional data, such as nearshore bathymetry and sediment-deposit profiles.
You can find an updated version of this video here: https://youtu.be/KFn8TEQeBaA Confused about how to pronounce the sounds found in Read Write Inc. Phonics? Worry not - 5-year-old Sylvie is here to show you how! Use this guide to support your child when practising the sounds at home. Information for Parents: The Phonics Screening Check The phonics screening check is a short, simple test taken by Year 1 children in England each June to assess their reading ability. Parents can watch our short video to find out what happens during the check and why it's useful.
A small plaque designates this Boston brownstone as the former home of U.S. Senator Charles Sumner. Sumner was one of the most influential politicians in the antebellum period and is best remembered today for his opposition to slavery. An abolitionist prior to the Civil War, Sumner became an advocate of full political rights for former slaves after the war. He also called for the redistribution of land after the war, proposing that rebel slave owners had forfeited their property, which should belong to the enslaved persons who had enriched the planters for generations. In 1856 Sumner delivered what would become his most famous speech, "The Crime Against Kansas." In this speech, the Boston Senator condemned slavery and its spread to the Kansas Territory as well as the pro-slavery politicians that supported it. In response, Congressman Preston Brooks brutally attacked Sumner on May 22, 1856 in the Senate Chamber. During Reconstruction, Sumner believed that the Southern states were no longer part of the Union and could not rejoin until they followed certain guidelines made by Congress that guaranteed the rights of former slaves. Sumner's support of equal political rights for former slaves led to the passage of the Civil Rights Act of 1875. Born January 6, 1811, Charles Sumner became an attorney and served in the Senate, where he represented Massachusetts from 1851 to 1874. As a young man, Sumner was a student at the Boston Latin School. An avid reader with a keen mind, Sumner earned a degree from Harvard University in 1830. Sumner furthered his education by studying law at Harvard, completing his studies in 1833. He stayed in Boston and began his law career, but became best known for his antislavery views. In 1848, Sumner helped create the Free-Soil Party, a small political party that opposed the spread of slavery into any new U.S. territory. The following year, he represented African American residents of Boston who sued the city for integration of the public schools in Roberts v. City of Boston. After running as a Free-Soil candidate, Sumner became a senator in 1851 and continued to speak out against slavery. In 1855, Sumner supported the Republican Party, a new party formed by members of the former Whig Party who opposed the extension of slavery and resented the disproportionate political power of Southern planters. In the late 1850s, Sumner was a leading voice of those who opposed the expansion of slavery, an issue that was being debated fiercely in contrast to previous years, when the two major parties avoided it. The issue was impossible to avoid in 1856 as residents and people around the nation debated whether the Kansas Territory should join the Union as a free or slave territory and state. Not mincing words, Sumner delivered a speech in response to the attack by pro-slavery forces on the Free State residents of Lawrence. Sumner decried the "Crime Against Kansas" and also condemned the institution of slavery and its expansion. The Senator also called out proslavery advocates by name, including Andrew Butler, a senator from South Carolina. Butler was not present on this day, but his cousin Preston Brooks, also from South Carolina, was. Brooks was offended by Sumner's speech and incensed by the slander directed at Butler. Brooks wanted to defend the honor of his family and sought physical vengeance, a trademark of the honor-based culture of the antebellum South. However, he thought so lowly of Sumner that he would not challenge him to a duel.
Therefore, on May 22, 1856, Brooks entered the Senate chamber where Sumner was seated at his desk and beat Sumner with his cane while he was pinned between his desk and chair, which were both nailed to the Senate floor. Another South Carolinian, Laurence M. Keitt, stopped bystanders from ending the attack, and Brooks beat Sumner until his cane broke; Brooks coolly walked out of the chamber as Sumner lay unconscious. Sumner never completely recovered from his beating, but after some travelling, he came back to his seat in the Senate in 1859 and continued to hold the same beliefs and push for reform. When the Civil War broke out, he was named chairman of the Committee on Foreign Relations and pushed President Lincoln to quickly free the slaves in order to prevent Great Britain from siding with the Confederacy. Sumner was skeptical of Lincoln's decisions, even saying that the point of the war was to free the slaves while Lincoln said otherwise, but he still worked with the president during the war. After the war, Sumner fought for equal political and civil rights for African Americans. He believed that the Constitution gave all male citizens the right to vote. He also believed that other civil rights should be guaranteed by law, and he hoped to pass laws that would prevent the former planters from defrauding African Americans of the fruits of their labor. Sumner proposed a Civil Rights Bill in 1872, but it did not pass. Sumner persisted, reintroducing the bill and lobbying on its behalf until his death; it finally passed in 1875, largely as a tribute to the Senator. On March 11, 1874, Sumner passed away after suffering a heart attack. The Charles Sumner House was built in 1805 and was bought by Sumner's father in 1830. Charles lived in the house until 1867.
They are all reference surfaces representing a state of the ocean surface. Their oceanographic meaning is given below:
- The Mean Sea Surface (MSS) represents the mean state of the ocean and therefore includes permanent effects of global currents. MSS models are developed from data provided by altimetry satellites. The MSS is not an equipotential surface and deviates from the Geoid: it is a composite of the Geoid and the sea surface topography.
- The Mean Sea Level (MSL) refers to a 'level' water surface, which you would get if the sea were shaped by the Earth's gravity field only and were perfectly at rest. As such it is an imaginary surface that doesn't physically exist. It coincides with an equipotential surface, such as the Geoid.
- The Geoid is the equipotential surface of the Earth's gravity field that best fits the global mean sea surface; it would coincide exactly with the mean ocean surface of the Earth if the oceans were in equilibrium, at rest, and extended through the continents. It is approximated by geoid models such as EGM96 and EGM2008.
- Dynamic Ocean Topography (DOT) is the average, over 12 years, of the difference between the mean sea surface and the Geoid. It originates from the fact that the major ocean circulation has a (more or less) time-invariant non-zero component (i.e., a component that does not average to zero over time).
If the oceans were static and not affected by external influences such as wind, temperature and air pressure, then the Mean Sea Surface (MSS) and the Geoid would be the same surface. However, there are steady currents in the ocean, driven by winds and atmospheric heating and cooling, which give rise to differences in sea level around the world. These local differences between the Geoid and the MSS are described by the Dynamic Ocean Topography (DOT). The DOT values range between (approximately) -2.5 m and +1.2 m.
An example plot of the DOT, calculated as the difference between the DTU10 MSS model and the EGM2008 Geoid model, is available from the Danish National Space Institute.
Tides calculated relative to a Geoid model are therefore relative to Mean Sea Level (MSL). Tides calculated relative to the actual mean state of the local sea level are relative to the Mean Sea Surface (MSS).
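A minimal numerical sketch of the DOT definition (Python with NumPy; the grid shapes and zero-filled arrays are hypothetical placeholders, not real products): given co-registered grids of a mean sea surface model and a geoid model, the dynamic ocean topography is simply their pointwise difference.

```python
import numpy as np

# Hypothetical co-registered 1-degree global grids (latitude x longitude), in metres.
# In practice these would be interpolated from products such as DTU10 (MSS) and
# EGM2008 (geoid) onto a common grid; here they are just placeholders.
mss = np.zeros((180, 360))    # placeholder mean sea surface heights
geoid = np.zeros((180, 360))  # placeholder geoid heights

# Dynamic Ocean Topography: the part of the mean sea surface not explained by gravity alone.
dot = mss - geoid

# For real data, DOT values are expected to lie roughly between -2.5 m and +1.2 m.
print(dot.min(), dot.max())
```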
Astronomers have discovered a planet drifting through space, not orbiting a star. Such cosmic wanderers are believed to be common in the universe. But the new-found planet's proximity to our solar system - just 100 light years, or about 1000 trillion kilometers, away - and the absence of any nearby stars have allowed the international [Canadian and European] team to study the planet's properties in greater detail than ever before. Because it seems to be traveling with a group of about 30 young stars, researchers were able to determine it was the same age - between 50 and 120 million years old. Then, using computer models of planet evolution, they report it has a temperature of about 400 degrees Celsius, and a mass four to seven times that of Jupiter. These free-floating objects can help astronomers understand more about how planets and stars form and behave. Rogue planets may have coalesced from the same dust and debris as normal planets before being ejected from their solar systems, or they may be brown dwarfs - star-like objects that never grew massive enough to ignite the fusion that makes stars shine. Lead researcher Philippe Delorme of the University of Grenoble said, "If this little object is a planet that has been ejected from its native system, it conjures up the striking image of orphaned worlds, drifting in the emptiness of space."
Variance and Standard Deviation
In the "Range, Interquartile Range and Box Plot" section, it is explained that the range, interquartile range (IQR) and box plot are very useful for measuring the variability of data. There are two other measures of variability that statisticians use very often:
- Variance
- Standard deviation
Why are variance and standard deviation good measures of variability? Because they take every value of a variable into account when calculating how spread out the data are. Both come in two versions, one for a sample and one for a population. The formulas are given first; the difference between a sample and a population is discussed afterwards.
- Population variance: σ² = Σ(X − x̄)² / N
- Sample variance: s² = Σ(X − x̄)² / (n − 1)
The two formulas differ only slightly (in the denominator), so compare them carefully.
- X is an individual value
- N is the size of the population (n is the size of the sample)
- x̄ is the mean
How to calculate variance step by step:
- Calculate the mean x̄.
- Subtract the mean from each observation: X − x̄
- Square each of the resulting differences: (X − x̄)²
- Add these squared results together.
- Divide this total by the number of observations N to get the population variance σ². If you are calculating the sample variance s², divide by n − 1 instead.
- Take the positive square root to get the standard deviation.
Example (11 observations):
Mean (x̄) = 15
Sum of squared differences = 639.74
Sample variance (s²) = 639.74/10 = 63.97
Population variance (σ²) = 639.74/11 = 58.16
s = 8.00, σ = 7.6
- If the variance is high, your dataset has larger variability; in other words, more values are spread out away from the mean.
- The standard deviation represents the average distance of an observation from the mean.
- The larger the standard deviation, the larger the variability of the data.
The standard deviation is a measure of how spread out numbers are. Its symbol is σ (the Greek letter sigma) for the population standard deviation and s for the sample standard deviation. It is the square root of the variance.
Population vs. Sample Variance and Standard Deviation
The primary task of inferential statistics (estimating or forecasting) is forming a conclusion about something using only an incomplete sample of data. In statistics it is therefore very important to distinguish between a population and a sample. A population is defined as all members (e.g. occurrences, prices, annual returns) of a specified group: the population is the whole group. A sample is a part of a population that is used to describe the characteristics (e.g. mean or standard deviation) of the whole population. The size of a sample can be less than 1%, 10%, or 60% of the population, but it is never the whole population. Because a sample and a population are not the same thing, their formulas differ slightly.
A question may arise: when calculating the variance, why do we square the differences? To get rid of the negatives, so that negative and positive deviations do not cancel each other out when added together: +5 − 5 = 0.
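The same calculations can be checked with Python's standard library. The observations below are invented (the article's raw data are not shown), so the printed results will differ from the worked example above; only the distinction between the sample (n − 1) and population (N) denominators is the point being illustrated.

```python
import statistics

# Hypothetical observations; substitute your own data.
data = [2, 8, 11, 14, 15, 15, 18, 21, 22, 25, 27]

mean = statistics.mean(data)

# Sample statistics divide the sum of squared deviations by n - 1 ...
sample_var = statistics.variance(data)   # sample variance, s^2
sample_sd = statistics.stdev(data)       # sample standard deviation, s

# ... population statistics divide by N.
pop_var = statistics.pvariance(data)     # population variance, sigma^2
pop_sd = statistics.pstdev(data)         # population standard deviation, sigma

print(f"mean = {mean:.2f}")
print(f"sample variance = {sample_var:.2f}, sample sd = {sample_sd:.2f}")
print(f"population variance = {pop_var:.2f}, population sd = {pop_sd:.2f}")
```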
Mind Maps are the ideal tool for effectively accessing natural creativity and harnessing that creativity for effective problem solving. The main branches of the Mind Map can be used in a variety of ways to support thinking. The only limit to the ways in which Mind Maps can be used is the imagination. Some of the ways the main branches can be used are as follows:
Edward de Bono's Six Thinking Hats – This is a well known technique for getting 'out of the box' of habitual thinking. It originated as a way of helping groups to get away from the conflict that characterises many meetings by adopting different thinking modes. See the Mind Map and notes on the following pages.
Edward de Bono's PNI approach – This is a simple way of approaching problems by analysing points on the basis of whether something is 'Positive', 'Negative' or 'Interesting'.
Questions – Making the main branches questions can often act as an impetus for effective problem solving. The usual questions are Who, What, Where, Why, When and How.
Checklists – One way of using checklists would be to take an item and use the checklist to stimulate thinking about alternative uses. Typical branches may be: Magnify, Minify, Substitute, Rearrange, Reverse and Combine.
Forced Relationships and Analogies – One of the main challenges for anyone wishing to be creative is provoking their thinking away from existing paradigms. There are a number of ways of doing this, such as thinking of similarities to, or differences from, more or less random words. The choice of words is arbitrary, since the key here is to provoke thinking. Typical words (branches) may be: Animals, Transport, People, Textures, Shapes etc.
Attribute lists – Again, primarily used to provoke thinking by looking at existing problems, objects or situations in new ways. The way this technique works is simply to list different attributes and then use the natural process of the Mind Map to think divergently.
Most of these techniques are covered on our course – Creativity for Logical Thinkers
Bioreactor: Redirects tile water to an underground bed of wood chips where nitrate is removed naturally by microorganisms. Vegetation on top of the bioreactor can provide other benefits such as wildlife habitat.
A bioreactor is an edge-of-field treatment process that allows the producer to reduce the amount of nitrate leaving the field from a tile line, thereby improving the water quality of the receiving stream. It consists of a buried pit filled with a carbon source such as wood chips, through which tile water is diverted. The carbon provides a food source for microorganisms that use the nitrate, rather than oxygen, to metabolize the carbon, converting the nitrate to harmless atmospheric nitrogen (N2) gas. Bioreactors can reduce nitrate by an average of 43 percent (a rough worked example of what this means for a field's annual load is sketched after the reference links below).
Two control structures are used to divert tile water into the bioreactor, to control the depth of water, and to control how long each gallon of water stays in the bioreactor. The control structure at the upper end of the bioreactor determines the amount of water diverted. The structure intercepts, or "T's" into, the existing field tile. When tile flow exceeds the bioreactor's capacity, excess water bypasses the system and flows down the existing tile line, preventing backup into the field. The lower structure determines the depth of water within the bioreactor and the retention time.
Bioreactors are generally used where the drainage area is about 40 to 100 acres. The footprint is small, typically covering less than 0.05 acres. Most bioreactors installed in Iowa to date have been 100 to 120 feet long and 10 to 25 feet wide. They work well in existing filter strips. The wood chips may need to be replaced every 10 to 15 years to maintain high levels of nitrate removal.
The Iowa Soybean Association Environmental Programs and Services team has installed about 40 bioreactors. It is estimated Iowa needs about 180,000 bioreactors to reach the goals set out in the Iowa Nutrient Reduction Strategy.
For more information on bioreactors:
• Denitrifying Woodchip Bioreactors
• Woodchip Bioreactors for Nitrate in Agricultural Drainage
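A rough back-of-envelope sketch of the 43 percent figure quoted above. The annual load and the share of flow that passes through the bioreactor are hypothetical placeholders, not measured values; only the 43 percent average removal rate comes from the text.

```python
# Hypothetical numbers except for the 43 % average removal quoted above.
annual_nitrate_load_lbs = 1200.0   # assumed nitrate-N leaving a tile outlet per year
fraction_treated = 0.80            # assumed share of flow routed through the bioreactor
                                   # (high flows bypass via the upper control structure)
average_removal = 0.43             # average nitrate reduction cited for bioreactors

removed = annual_nitrate_load_lbs * fraction_treated * average_removal
print(f"Estimated nitrate-N removed: {removed:.0f} lbs/year "
      f"({removed / annual_nitrate_load_lbs:.0%} of the total annual load)")
```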
6th Grade English Language Arts - Writing Standards To work on sixth grade writing standards, click on the numbers below to visit pages of internet resources for each of the learning standards. 0601.3.1 Modes and Genres - Write in a variety of modes and genres, including description, narration, exposition, persuasion, literary response, personal expression, and imaginative. 0601.3.2 Writing Prompt - Practice writing to a prompt within a specified time. 0601.3.3 Work-Related Texts - Create somewhat complicated work-related texts, such as instructions, directions, letters, memos, and reports 0601.3.4 Topics - Develop focused, appropriate, and interesting topics for writing. 0601.3.5 Create Thesis - Create a thesis statement and include relevant facts, details, reasons, and examples that support the thesis. 0601.3.6 Audience Needs - Develop relevant details or reasons in a manner that meets the needs of the audience and purpose. 0601.3.7 Appropriate Structures - Organize writing using structures appropriate for the topic and that meet the needs of the audience (e.g., if using an anecdote to provide an example, use chronological order with sufficient time signals for the reader to follow easily). 0601.3.8 Appropriate Words - Use appropriate and effective words and phrases to indicate the organizational pattern (e.g., problem-solution, with order of steps necessary indicated in the solution). 0601.3.9 Text Features - Use text features (e.g., headings, subheadings, formatting) as appropriate to signal simple relationships between ideas. 0601.3.10 Precise Language - Use accurate and precise language to convey meaning. 0601.3.11 Figurative Language - Use strong verbs and figurative language (e.g., metaphors, similes) for emphasis or creative effect as appropriate to the purpose. 0601.3.12 Vocabulary - Use appropriate vocabulary, sentence structure, and usage to distinguish between formal and informal language. 0601.3.13 Syntactic Variety - Incorporate some variety of syntactic structures for effect when appropriate (e.g., modifying phrases, parenthetical expressions). 0601.3.14 Purpose - Edit to craft a tone that is appropriate for the topic and audience, and supports the purpose. 0601.3.15 Point of View - Use language that conveys the writer's point of view. 0601.3.16 Source Material - When other sources are used or referenced (such as in research, informational essays, or literary essays) adhere to a set of rules 0601.3.17 Note Taking - Generate notes on text, and identify main and supporting ideas. 0601.3.18 Edit - Edit writing for mechanics (punctuation, capitalization), spelling, grammar (e.g., consistent verb tense, noun and pronoun agreement). 0601.3.19 Revise - Drawing on reader's comments, revise papers to focus on topic or thesis, develop ideas, employ transitions, and identify a clear beginning and ending. 0601.3.20 Writing Rubric - Demonstrate confidence in using a Writing Assessment Rubric while evaluating one's own writing and the writing of others. 0601.3.21 Use Technology - Use relatively basic software programs (e.g., Word PowerPoint) to write more challenging texts and create graphics to present ideas visually and in writing. 0601.3.22 Publication - Identify and explore opportunities for publication (e.g., local/national contests, Internet web sites, newspapers, periodicals, school displays). SPI 0601.3.1 Identify Purpose - Identify the purpose for writing (i.e., to inform, to describe, to persuade). SPI 0601.3.2 Audience - Identify the audience for which a text is written. 
SPI 0601.3.3 Thesis - Select an appropriate thesis statement for a writing sample. SPI 0601.3.4 Logical Order - Rearrange multi-paragraphed work in a logical and coherent order. SPI 0601.3.5 Key Ideas - Select illustrations, descriptions, and/or facts to support key ideas. SPI 0601.3.6 Supporting Sentences - Choose the supporting sentence that best fits the context flow of ideas in a paragraph. SPI 0601.3.7 Flow - Identify sentences irrelevant to a paragraph's theme or flow. SPI 0601.3.8 Transition - Select appropriate time-order or transitional words/phrases to enhance the flow of a writing sample. SPI 0601.3.9 Conclusion - Select an appropriate concluding sentence for a well-developed paragraph. SPI 0601.3.10 Title - Select an appropriate title that reflects the topic of a written selection. SPI 0601.3.11 Graphic Organizer - Complete a graphic organizer (e.g., clustering, listing, mapping, webbing) with information from notes for a writing selection. SPI 0601.3.12 Appropriate Format - Select the most appropriate format for writing a specific work-related text (i.e., instructions, directions, letters, memos, e-mails, reports). Review Help Resources to help review Sixth Grade English Language Arts standards
Mojave National Preserve
This unique and isolated dune system rises more than 600 feet above the desert floor. The dunes were created by winds blowing finely grained residual sand from the Mojave River sink, which lies to the northwest. The dunes' color comes from the many golden rose quartz particles in the sand. When the dry sand grains slide down the steep upper slopes, a notable booming sound is produced. In some years, the dunes offer a nice spring wildflower display. A hike to the top and back takes approximately two hours. The area is closed to vehicles.
This extraordinary dune system has an unexpectedly mysterious history. Huge amounts of sand were needed to build Kelso's delicate wind-created sculptures, but geologists studying the Preserve discovered that no new sand is moving in to replenish the dunes. Where did the sand originally come from? What made it stop accumulating? The Kelso Dune sands remained a mystery until very recently.
By studying the mineral composition and shapes of the sand grains that make up Kelso Dunes, we know that most of the sand has traveled all the way from the Mojave River sink east of Afton Canyon. Wind blowing from the northwest gradually carried the sand southeastward. In the path of the prevailing winds lie the Providence Mountains and the pink pinnacles of the Granite Mountains. The rocky crags and sloping fans of the two ranges block the moving sand. Sand piles up at the base of the mountains and along their flanks, forming dunes and sand sheets. Where the sand piles up, researchers found that the dunes are actually made up of several sets of dunes, stacked one on top of another. Each set formed in response to some past climate change!
The Kelso Dunes depend upon times when the sand grain (sediment) supply is enhanced. This happens whenever the climate is dry enough to expose the raw material of dunes, sand, to the wind. In fact, most of the eastern part of the Kelso Dunes formed when water-filled Soda Lake and Silver Lake dried up, exposing the lake bottom sediment. The entire dune system was stacked up in five major pulses over the past 25,000 years.
Plants move in: Over the past few thousand years plants have progressively covered and stabilized areas once covered by drifting sand. As you explore the dunes, look for tracks left behind by the many creatures that call these dunes home.
Study sheds light on metabolism The way in which our cells convert food into fuel is shared by almost all living things. Now, scientists from the University have found a likely reason why. Researchers examined how cells make energy from food, by digesting simple sugars such as glucose in a series of chemical reactions. This process is almost the same for every kind of cell, including animals, plants, and bacteria. Their new study shows that this process is the most effective method to extract energy. Cells that have more energy can grow and renew faster, giving them - and the organism to which they belong - an evolutionary advantage. Physicists from the University built complex computer models to better understand why cells developed the pathways they use to convert sugar into energy. They compared models of pathways found in animals and plants with alternative mechanisms that might have evolved instead. They conducted an exhaustive search for all possible alternatives to the established biological mechanisms, which are known to have existed for billions of years. Their results suggest that the metabolic systems have evolved because they enable cells to produce more energy, compared with alternative pathways. The study, published in Nature Communications, was supported by the Carnegie Trust, Leverhulme Trust, Royal Society, and the Royal Society of Edinburgh. The key mechanisms that underpin metabolism are found in almost all plants and animals, and control the productivity of life on Earth, yet we understand little of how they came about. This study shows that our metabolic pathway is a highly developed solution to the problem of how to extract energy from our food.
You may have heard about the idea of sleep learning from books, magazines, the internet, or television, but may have raised your eyebrows at the thought that it actually worked. While the concept of sleep learning is far from new, a new study has found that it might actually work.
What is Sleep Learning?
Sleep learning, also referred to as hypnopedia, is the process of conveying information while sleeping. The thought behind the concept is that a sleeping brain is able to receive and process new information. The most common approach used in sleep learning involves audio (i.e. sound recordings). In many cases, a CD or tape is used, which contains either simple messages, subliminal statements, or hypnotic instructions. While sleep learning is used in a number of applications, it is most commonly applied to breaking a bad habit or learning another language.
Does Sleep Learning Work?
Researchers at the Weizmann Institute of Science conducted a sleep study that found that sleep actually boosts our brains' ability to not only learn, but also remember what we learned. In conducting the study, the researchers played various audible tones to sleeping study participants. After each tone was played, the researchers emitted an odor to the sleeping participants. Some odors presented after the tones were pleasant, while others were unpleasant. After getting a whiff of the odor, the reactions of the sleeping study participants were interesting. The participants that received a pleasant odor reacted with a deeper breath, while the participants who received an unpleasant smell responded with a shallow breath.
As the night carried on, the researchers decided to play the tones without the accompanying odor. What they found was quite remarkable. The sleeping participants still reacted with a "sniff", either shallow or deep, even though no odor was present. In essence, the participants were reacting to the audio tone. What's more, when the tones were played to awakened participants, the same sniffing/breathing reactions remained.
"This acquired behavior persisted throughout the night and into ensuing wake, without later awareness of the learning process. Thus, humans learned new information during sleep," wrote the researchers in the Nature Neuroscience study.
The researchers at Weizmann said that this breathing and sniffing response, which appeared even when there was no odor to sniff, indicated that the subjects' brains were processing the link between the tone and the aroma even while sleeping.
Might Want to Give it a Try
Keep in mind that even though the results of this study - showing that people can learn new information while asleep and adjust their behavior unconsciously while awake - are encouraging, what works for one individual might not work for another. Different factors come into play, such as the methods used to learn, length of sleep, stage of sleep (REM or non-REM), and so forth. That said, sleep learning may work for you. Trying sleep learning out yourself is the only way you'll know for sure if it works for you.
As we usually do not recognize the importance of air, even though we depend on it constantly, we also often fail to recognize the importance of language and characters, even though we use them at every moment of our lives. In particular, we are not accustomed to imagining the painful difficulties faced by people who have no proper characters with which to express their own language. But before Hangeul was created, our nation had to suffer exactly those difficulties.
Before Hangeul, our ancestors had to use Chinese characters for their literary life. Just as Latin served as the lingua franca in medieval Europe, Chinese characters and the Chinese language acted as the lingua franca not only in Korea but throughout East Asia, including Japan and Vietnam. Considering this, it was rather natural that our ancestors used Chinese characters and literary Chinese for their literature.
However, it was very hard to write our language with Chinese characters, because they were made for the Chinese language, which is totally different from our own. This led to a separation between the spoken language and the written language. Usually, written language reflects spoken language. In some cases, if characters have been used for a long time, they may acquire an independent function, or the written and spoken languages may even come to differ completely from each other; but usually the distance between spoken and written language is not very great. The gap between Chinese characters and our language, however, was very wide. For this reason, our ancestors had to make a great effort to learn Chinese characters and literary Chinese.
There were attempts to write our language with Chinese characters, using the so-called writing methods with borrowed characters. When our own proper nouns had to be expressed in texts written in literary Chinese, Chinese characters were borrowed, and that was the first sign of the generalization of writing with borrowed characters. This method was developed in various areas beyond the writing of proper nouns. It is natural to use one's own language to sing one's feelings sincerely, and such songs may need to be preserved in writing for a long time. So songs sung in our language were written down with borrowed Chinese characters; these songs were called "hyangga" (鄕歌) and the writing method used for them was called "hyangchal" (鄕札). Also, when reading the scriptures of Buddhism and Confucianism, the texts were read with pauses in the right positions, and postpositions or verbal endings of our own language were attached to clarify how the preceding and following expressions were related. This method was called "gugyeol" (口訣). Meanwhile, the use of the Chinese language was general in the official documents of the nation's upper class, but the middle-class people (中人), such as petty officials, did not have enough skill in literary Chinese to use it as well as the ruling class did, and there was a need to clarify the meaning of texts by adding the postpositions or verbal endings of our own language. So a modified form of written Chinese, with the word order changed according to Korean rules and grammatical elements supplemented, was used in the official documents of low-ranking officials. This is called "Yidu" (吏讀).
In this way, the writing methods with borrowed characters were developed and used, but Chinese characters remained incomplete and inefficient for expressing our own language. First, there were many words in our language that could not easily be expressed by borrowing the "eum" (音, sound) or "hun" (訓, meaning) of Chinese characters (e.g. onomatopoeic and mimetic words), and because a Chinese character generally has multiple "hun", it was often unclear which "hun" should be used to read a character written with the borrowed-character method. Because of these limitations of the writing method with borrowed characters, as the ruling class became accustomed to the Chinese language, its use declined, and its patterns of expression became simplified and routinized.
Students use the Internet to gather information on Ancient Egypt. They describe the role of pharaohs and what they wore and ate. They discuss why the Nile is important to the region and examine hieroglyphics.
Who Wants to be a Millionaire: Ancient Egypt
"Who was the most famous Egyptian?" "What were pyramids used for?" These are two of the great review questions you'll find in this fun ancient Egypt review game. Kids will use their knowledge of Egypt's past to win the game. 6th - 8th Social Studies & History
Springfield Podcasting Project: Ancient Egypt Talk Show
Sixth graders use both web and print resources to collect data on daily life in ancient Egypt. They use this data to write a script for an Ancient Egypt Talk Show which will be posted as a podcast. Note: This instructional activity is... 6th English Language Arts
Ancient Egyptian Profile
Students explore the meaning of the term "material culture" and its role in piecing together the past. They research ancient Egyptian material culture and the style of ancient Egyptian art. They then reproduce a portrait in the manner of... 1st - 6th Visual & Performing Arts
The southern sand octopus can make a quick escape by making its own quicksand. How? It shoots jets of water into the sand grains, separating them into an almost fluid state, and allowing the octopus to burrow. There under the ocean floor, it hides from predators. This is likely a good thing for an octopus with no ability to camouflage. From New Scientist:
[Research article co-author Jasper] Montana and his team first caught the octopus in the act of burrowing in 2008 when they were scuba diving at night in Port Philip Bay, south of Melbourne, Australia. When they shone a light on the octopus, the startled animal spread out its arms and repeatedly injected high-powered jets of water into the sediment using its funnel… The liquefied sand is likely to reduce drag and so allow the animal to burrow more quickly, using less energy, Montana's team speculates. They found that the animal used its arms and mantle to push the sand away and form a burrow. It also extended two arms to the surface to create a narrow chimney to breathe through. Finally, it secured the walls of its new home with a layer of mucus that kept the grains of sand together so the entire thing maintained its shape.
All systems of our body and brain are designed to constantly work to maintain life-sustaining balance that scientists call homeostasis. Unconsciously and automatically our bodies and brains maintain the functions and systems enabling us to be active and alive, such as body temperature, blood pressure, heart rate, breathing, digestion, elimination, and healing. The brain is the master controller for all voluntary and involuntary body systems and actions. It sends messages to the body and receives messages from the body by using electricity. The brain does this using a network of specialized cells called neurons, combined with specific hormones and chemicals produced by the brain and body for this purpose. The brain produces four distinct types of rhythmic electrical impulses known as brain waves, labeled with the Greek letters Alpha, Beta, Theta, and Delta. Brain waves are measured in electrical units known as Hertz. Hertz is a standard unit of measurement equal to a frequency of one cycle per second. Each brain wave has it’s own unique frequency range. Beta measures 15 Hertz and above. Alpha is 8-14 Hertz. Theta is 4-7 Hertz. Delta is less than 4 Hertz. People usually produce a mixture of brain waves frequencies at any given time. An electroencephalogram, or EEG, is a recording of brain wave activity. Brain waves are measured and recorded using an instrument known as an electroencephalograph (EEG) machine. The normal, focused waking state consists primarily of Beta. When you close your eyes during relaxation/meditation and during dreaming activity, Alpha waves tend to be produced. The slower Theta and Delta are dominant during deep sleep. If the rhythmic electrical impulses, or brain waves, produced by the brain become abnormal or out of balance, imbalances are created in the body. Examples of conditions that can result in abnormal brain wave rhythms are: open and/or closed head injury, stroke, coma, autism, epilepsy, migraine and cluster headaches, attention deficit disorder, dyslexia, learning disabilities, clinical depression, anoxia, Parkinson’s disease, and post viral damage. All-Digital, Real-Time EEG Neurofeedback is one of the most compelling examples of the body’s ability to self-regulate and bring itself into balance. Current brain research has shown that All-Digital, Real-Time EEG Neurofeedback can be an effective auxiliary treatment for the above-mentioned conditions. When there is a brain injury or irregularity, the brain tends to produce too much Theta frequency. The ratio of Theta brainwaves to Beta brainwaves becomes out of balance. All-Digital, Real-Time EEG Neurofeedback uses a special computer and amplifier to display the brain waves with less than one-thousandth of a second delay. It is this immediate and real time feedback that enables retraining of the brain. During All-Digital, Real-Time EEG Neurofeedback training, the brain learns to inhibit this abnormal amount of Theta and return to a state of balance among the four brain waves. In All-Digital, Real-Time EEG Neurofeedback training, non-invasive painless sensors, called electrodes, are placed on the surface of the scalp. These sensors enable the brain wave patterns to be amplified and displayed on a computer screen. By displaying abnormal rhythmic patterns, the brain can be trained to replace them with normal patterns. The computer assists the brain in recognizing normal rhythmic patterns by producing immediate audio and visual reinforcement when they occur. 
Because the brain inherently seeks normal brain wave rhythmic balance, the brain makes appropriate corrections immediately. All-Digital, Real-Time EEG Neurofeedback is both safe and effective. It helps to improve functions such as concentration, short-term memory, speech, motor skills, sleep, energy level, and emotional balance. Once the brain's normal rhythmic patterns have been restored, All-Digital, Real-Time EEG Neurofeedback is no longer necessary. The results of the training are permanent unless another trauma or injury occurs.
The brain is divided into two halves, known as the right and left hemisphere. Each hemisphere is also divided into sections called lobes. Many parts of the brain are interconnected and control similar functions, but each part also has unique functions. The following provides a limited explanation of some brain functions:
- Ability to feel and express emotions
- Ability to understand feelings of others
- Anxiety and panic attacks
- Control time management
- Feelings of self-worth
- Initiation of action / procrastination
- Learning from experience
- Right Temporal Lobe – fine motor control
- Left Temporal Lobe – control of aggression
No claims are being made to cure or diagnose any illness, disease, or condition using All-Digital, Real-Time EEG Neurofeedback. However, many people have reported experiencing improvement after being diagnosed with one or more of the following conditions:
- Anoxia (oxygen deprivation)
- Attention Deficit Disorder
- Attention Deficit Hyperactivity Disorder
- Closed head injury
- Open head injury
- Pervasive developmental disability
- Post-viral brain injury
There are many different forms and practitioners of EEG Neurofeedback. The information discussed in this article relates exclusively to the unique, All-Digital, Real-Time EEG Neurofeedback Neuropathways System, developed and patented by Margaret Ayers. This system is the only EEG Neurofeedback system that provides immediate audio and visual feedback with less than one-thousandth of a second delay.
Elaine Offstein is a Board Certified Educational Therapist (BCET#10151). She holds a Bachelor of Arts Degree in Psychology, a Master of Arts Degree in Special Education, an Elementary Education Credential, a Resource Specialist Certificate, a Montessori Education Certificate, and California State Non-Public Agency License #1A-19-189.
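The frequency bands quoted earlier in this article (Delta below 4 Hz, Theta 4-7 Hz, Alpha 8-14 Hz, Beta 15 Hz and above) can be expressed as a small classification helper. This is a minimal sketch; the function name and the handling of readings that fall between the quoted band edges are our own choices, not part of the Neuropathways system.

```python
def classify_brain_wave(frequency_hz: float) -> str:
    """Map an EEG frequency (in Hertz) to the band names used in this article.

    Thresholds follow the article's figures: Delta < 4 Hz, Theta 4-7 Hz,
    Alpha 8-14 Hz, Beta 15 Hz and above. Readings between the quoted edges
    (e.g. 14.5 Hz) are assigned to the lower band here for simplicity.
    """
    if frequency_hz < 4:
        return "Delta"
    elif frequency_hz < 8:
        return "Theta"
    elif frequency_hz < 15:
        return "Alpha"
    else:
        return "Beta"

# A relaxed, eyes-closed recording might be dominated by ~10 Hz activity:
print(classify_brain_wave(10))   # -> "Alpha"
print(classify_brain_wave(2.5))  # -> "Delta"
```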
Why does a clump of bumper cars move so slowly? In an inelastic collision, momentum is conserved, but kinetic energy is not. Because the objects collide and stick to each other, some of the kinetic energy is lost as heat through friction and deformation. Momentum is still conserved, however, because the friction between the colliding cars is an internal force, not an external force. When you play bumper cars in an amusement park, you might have wondered why you slow down when you crash into another bumper car. The more bumper cars in a crash, the slower the whole clump goes.
1. How are bumper cars an example of an inelastic collision?
2. Let's say that you bumped into one other bumper car. In order to use momentum to solve for the velocity of the clump consisting of you and the other bumper car, what must be assumed?
3. You crash, at 4 m/s, into another bumper car at rest whose combined mass (car plus rider) equals your own: 120 kg. Assuming that there is no net external force, what would be the speed of the clump formed by you and the other bumper car?
4. Compare the kinetic energies of the clump and your bumper car before the crash, using the same information from the previous question.
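A worked sketch of questions 3 and 4, under the stated assumption of no net external force and treating the crash as a perfectly inelastic collision. The masses and speed are the ones given in the questions; the variable names are just for illustration.

```python
# Perfectly inelastic collision: the two bumper cars stick together afterwards.
m1, m2 = 120.0, 120.0   # kg, each car plus rider (equal masses, as stated)
v1, v2 = 4.0, 0.0       # m/s, your speed and the other car's speed before impact

# Momentum conservation (no net external force): m1*v1 + m2*v2 = (m1 + m2)*v_f
v_f = (m1 * v1 + m2 * v2) / (m1 + m2)

ke_before = 0.5 * m1 * v1**2          # your kinetic energy before the crash
ke_after = 0.5 * (m1 + m2) * v_f**2   # kinetic energy of the combined clump

print(f"speed of the clump: {v_f} m/s")                     # 2.0 m/s
print(f"KE before: {ke_before} J, KE after: {ke_after} J")  # 960 J vs 480 J
# Half of the kinetic energy is lost to heat and deformation, which is why the
# clump moves more slowly after each collision even though momentum is conserved.
```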
Quaternary Time Table
Older North American glacial research proposed four major glacial advances (Wisconsin, Illinoian, Kansan, and Nebraskan) during the Quaternary Period, and the Quaternary time scale was divided into the four glacial advances separated by three interglacials (Sangamon, Yarmouth, and Aftonian). More recent evidence suggests that there were many advances and retreats, and there is a trend toward dividing the Pleistocene into Late, Middle, and Early Pleistocene based upon new dating methods instead of glacial movement. Only the last glacial and interglacial are being retained as part of the Late Pleistocene.

| Epoch | Beginning date (ka*) |
| --- | --- |
| Holocene | 10 to 12** |
| Middle Pleistocene | 750 to 775 |
| Early Pleistocene | 1,650 or 2,480*** |

* ka stands for kiloannum, or one thousand years. 10 ka would mean 10,000 years before the present, and 1,650 ka would mean 1,650,000 years before present.
** There is scientific debate on where to put this boundary; it appears at different ages around the world.
*** Two different times have been proposed for the international standard Pliocene-Pleistocene boundary. The problem is that neither of them is a clear marker throughout the world.
What was the Holocaust
The Holocaust refers to the massive killing, or genocide, of Jews during World War II by the Nazis and their allies. It is estimated that about five million Jews, including one million Jewish children, were murdered during the Holocaust. The total loss of life, including the genocide of other ethnic and minority groups (the physically disabled, mentally challenged, communists, homosexuals, etc.), is put at around eleven million. The Holocaust was a result of a widespread feeling in Germany that some groups were more valuable than others, and the term "final solution" (extermination) was coined to describe the ultimate solution to the "Jewish problem". Extermination camps were set up throughout Germany and its conquered territories. The most common methods employed included gas chambers and shootings. Many people escaped the Holocaust by fleeing to other countries, with the most notable example being Albert Einstein.
The Holocaust was a period of time starting in the 1930s, when the Nazis were in power in Germany. They targeted groups of different races, gypsies, those who had disabilities, certain groups such as the Poles and Russians, homosexuals and others, with the largest targeted group being the Jewish population. The Nazis would murder these people in numerous ways. Some were sent to concentration camps where they faced death by disease, malnutrition, and gas chambers, and some were even experimented on. It was around the 1940s that concentration camps became the "go-to place" for the victims. Nearing the end of the Holocaust, as liberation approached for those persecuted, many of the victims were moved by train or on long death marches that typically caused a great many deaths, so that fewer individuals survived to be liberated. On May 7, 1945 the Germans surrendered to the Allies, and the death marches and concentration camps came to an end.
In a computer network, a proxy server is any computer system offering a service that acts as an intermediary between the two communicating parties, the client and the server. In the presence of a proxy server, there is no direct communication between the client and the server. Instead, the client connects to the proxy server and sends requests for resources such as a document, web page or a file that resides on a remote server. The proxy server handles this request by fetching the required resources from the remote server and forwarding them to the client.
How Does a Proxy Server Work?
Whenever the client connects to a web proxy server and requests a resource (in this case, "Sample.html") that resides on a remote server (in this case, xyz.com), the proxy server forwards this request to the target server on behalf of the client, fetches the requested resource, and delivers it back to the client. An example of a client is a user-operated computer connected to the Internet.
Types of Proxy Servers and their Uses:
1. Forward Proxies
A forward proxy is the one described above, where the proxy server forwards the client's request to the target server to establish communication between the two. Here the client specifies the resource to be fetched and the target server to connect to, and the forward proxy server acts accordingly. Except for the reverse proxy (discussed in the latter part of this article), all other types of proxy servers described in this article fall under forward proxies.
2. Open Proxy
An open proxy is a type of forward proxy that is openly available to any Internet user. Most often, an open proxy is used by Internet users to conceal their IP address so that they remain anonymous during their web activity. The following are some of the web proxies that fall under the category of open proxy:
Anonymous Proxy
An anonymous proxy is a type of open proxy that conceals the IP address of Internet users so that the target server cannot identify the origin of the requesting client. However, an anonymous proxy identifies itself as a proxy server but still manages to maintain the anonymity of the users.
Distorting Proxy
This type of proxy server identifies itself as a proxy, but reveals an incorrect IP address of the client to the target server.
High Anonymity Proxy (Elite Proxy)
An elite proxy provides maximum anonymity as it neither identifies itself as a proxy nor reveals the original IP address of the client. In most cases, users have to pay for this type of proxy as it is seldom available freely on the Internet.
3. Reverse Proxy
Unlike a forward proxy, where the client knows that it is connecting through a proxy, a reverse proxy appears to the client as an ordinary server. However, when the client requests resources from this server, it forwards those requests to the target server (the actual server where the resources reside) so as to fetch the requested resource and forward it to the client. Here, the client is given the impression that it is connecting to the actual server, but in reality there is a reverse proxy sitting between the client and the actual server. Reverse proxies are often used to reduce the load on the actual server by load balancing, to enhance security, and to cache static content so that it can be served faster to the client. Big companies such as Google, which get a large number of hits, often maintain reverse proxies to enhance the performance of their servers.
It is no surprise, then, that whenever you connect to google.com, you are actually connecting to a reverse proxy that forwards your search queries to the actual servers and returns the results back to you.
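A minimal sketch of how a client routes requests through a forward proxy, assuming the third-party requests library. The proxy host and port are hypothetical placeholders; the URL reuses the Sample.html/xyz.com example from the description above.

```python
import requests  # third-party library: pip install requests

# Hypothetical forward proxy address; replace with a proxy you are allowed to use.
proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

# The client asks the proxy to fetch the resource on its behalf; the target
# server sees the request arriving from the proxy rather than from the client.
response = requests.get("http://xyz.com/Sample.html", proxies=proxies, timeout=10)
print(response.status_code)
print(response.text[:200])
```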
On the first day of Black History Month, Google is honoring the great African-American writer, social reformer and thinker Frederick Douglass with a Doodle. Born a slave in 1818 in Maryland, Douglass escaped to freedom at the age of 20 and became a leader of the abolition movement, using his skills as an orator and statesman to move others to his cause. Douglass died of a massive heart attack or stroke on February 20, 1895. “At a time when many argued that slaves did not possess the intellectual capacity to be educated, Douglass stood as stark evidence of enslaved people’s potential,” Gilder Lehman Institute director Sandra Trenholm wrote. Here are five facts you need to know about him: 1. Douglass’ Bestselling 1845 Biography Helped Push the Cause of Abolition Douglass’ 1845 autobiography, Narrative of the Life of Frederick Douglass, an American Slave, became a bestselling book and “was influential in promoting the cause of abolition.” Other books he published include My Bondage and My Freedom and Life and Times of Frederick Douglass. His second autobiography was published just three years prior to his death and it covered events during the Civil War. After one of his autobiographies was published, Douglass engaged in a two-year speaking tour of Great Britain and Ireland in order to avoid recapture by one of his former owners, who Douglass had mentioned in his book. Douglass was known for his quote “I would unite with anybody to do right and with nobody to do wrong” as he was an advocate for equality among all people. He delivered hundreds and hundreds of speeches and editorials against slavery and racism throughout his life, becoming a powerful voice of the people. According to History.com, he was the most important black American leader of the 19th century. 2. Douglass Was the First African-American Nominated for Vice President of the United States Douglass was the first African-American nominated for Vice President of the United States. He was the running mate and Vice Presidential nominee of Victoria Woodhull in 1872. It was on the Equal Rights Party ticket and the nomination was made without his approval. That same year, Douglass was presidential elector at large for the State of New York and he took the state’s vote to Washington D.C. During the Civil War, Douglass was an adviser to President Abraham Lincoln. 3. He Escaped From Slavery & Had to Move to Britain to Avoid Slavecatchers Douglass was born into slavery in Talbot County, Maryland, to his mother, Harriet Bailey. He was born Frederick Augustus Washington Bailey and later changed his last name to Douglass. Douglass’ exact birth date is unknown as he wrote in his first autobiography, “I have no accurate knowledge of my age, never having seen any authentic record containing it.” He later decided to celebrate the day on February 14. When he was eight, Douglass was sent to Baltimore to work for the family of Hugh and Sophia Auld, where he was first educated. “Sophia Auld had not owned slaves before and treated Douglass with great kindness, taught him the alphabet, and awakened his love of learning,” Trenholm wrote. Douglass said hearing her read the Bible aloud “awakened my curiosity in respect to this mystery of reading, and roused in me the desire to learn.” But Hugh Auld stopped his wife from teaching Douglass, telling her “if you teach him how to read, he’ll want to know how to write, and this accomplished, he’ll be running away with himself,” Douglass wrote in his autobiography. 
Douglass said, “from that moment, I understood the direct pathway from slavery to freedom.” Douglass was forced back to plantation life after his legal owner died when he was 15. He spent five years under harsh masters, enduring hunger and beatings, before escaping in 1838 after two unsuccessful attempts. He was still not safe. He began speaking out against slavery and published his first autobiography in 1845. He garnered fame, which put his life at risk. He moved to Ireland and spent two years there, continuing his speaking tour against slavery. A group of British supporters purchased his freedom in 1847 and he was able to return t the United States. “Upon his return, Douglass continued to advocate the abolition of slavery,” Trenholm wrote. “He also championed equal rights for all Americans, regardless of race or gender. He published two additional autobiographies, founded five newspapers, and served as the US Consul General to Haiti.” 4. Underground Railroad Member Anna Murray-Douglass Was His Longtime Wife Anna Murray-Douglass was married to her husband until her death and the couple had five children together – Rosetta Douglass, Lewis Henry Douglass, Frederick Douglass, Jr., Charles Remond Douglass, and Annie Douglass, who died at ten years old. Murray was a member of the Underground Railroad. She passed away in 1882 and in 1884, Douglass remarried, to white feminist Helen Pitts. Murray was Douglass’ wife of 44 years and she was born free unlike some of her other siblings, according to the Oxford African American Studies Center. She was working as a laundress and housekeeper when she met Douglass, who was working as a caulker. Murray actually had encouraged Douglass to escape slavery and she gave him some money to help him. The two later married in September 1838. Murray was active with the Boston Female Anti-Slavery Society and supported her husband with his abolitionist newspaper, North Star. The North Star carried on for four years until it merged with Gerrit Smith’s Liberty Party Paper, ultimately becoming Frederick Douglass’ Paper. When asked why he created the North Star, Douglass was quoted saying, “I still see before me a life of toil and trials…, but, justice must be done, the truth must be told…I will not be silent.” 5. Douglass Was an Early Supporter of the Women’s Rights Movement In addition to his importance in the abolition movement, Douglass was also an early supporter of women’s rights. He was the only African-American to attend the Seneca Falls Convention, which was the first women’s rights convention, in 1848. Douglass wrote in his autobiography, “Life and Times of Frederick Douglass”: The slayers of thousands have been exalted into heroes, and the worship of mere physical force has been considered glorious. Nations have been and still are but armed camps, expending their wealth and strength and ingenuity in forging weapons of destruction against each other; and while it may not be contended that the introduction of the feminine element in government would entirely cure this tendency to exalt woman’s influence over right, many reasons can be given to show that woman’s influence would greatly tend to check and modify this barbarous and destructive tendency. … I would give woman a vote, give her a motive to qualify herself to vote, precisely as I insisted upon giving the colored man the right to vote; in order that she shall have the same motives for making herself a useful citizen as those in force in the case of other citizens. 
In a word, I have never yet been able to find one consideration, one argument, or suggestion in favor of man’s right to participate in civil government which did not equally apply to the right of woman. Douglass was a licensed preacher in addition to his many other accomplishments. The United States’ Episcopal Church remembers Douglass annually on its liturgical calendar, every February 20. The Alpha Phi Alpha fraternity designated Douglass as an honorary member in 1921. Black History Month 2015 begins February 1. Celebrate African-American contributions to U.S. history with these important quotes by black scholars and poets.Click here to read more
Copyright © 2007 Dorling Kindersley Forestry is the management of forests with the aim of harvesting their produce, which includes timber, fuel wood, charcoal, resin, rubber, and pulp for paper. Trees also yield food in the form of fruits, nuts, and oils. Wood is an amazingly versatile material, which can be put to thousands of different uses. As well as being burned for fuel, timber is also used in buildings and to make furniture and tools. Hardwoods, such as teak and mahogany, are prized for their beautiful grain and toughness. Fast-growing softwoods, such as pine, are used mainly for making wood pulp for paper. In well-managed forests, trees are cut down singly or in strips so that the forest has time to grow back. However, many of the world’s forests are now being destroyed by large-scale logging, or deforestation. In particular, the tropical rainforests are disappearing rapidly—a disaster, since they are home to over half the plant and animal species on Earth.
A new study found that the world's lakes are warming faster than both the oceans and the air around them, creating blooms of algae that are toxic to fish and rob the water of oxygen. The rapid warming of lakes threatens freshwater supplies and ecosystems across the planet.
Henry Gholz, program director in the Division of Environmental Biology at the National Science Foundation, said that knowledge of how lakes are responding to global change has been lacking, which has made forecasting the future of lakes, and the life and livelihoods they support, very challenging. Gholz noted that these newly reported trends are a wake-up call to scientists and citizens, including water resource managers and those who depend on freshwater fisheries.
The study found that lakes are warming an average of 0.61 degrees Fahrenheit each decade. That's greater than the warming rate of the oceans or the atmosphere, and it can have profound effects, the scientists say. At the current rate, algal blooms, which ultimately rob water of oxygen, should increase by 20 percent over the next century. Some 5 percent of the blooms will be toxic to fish and animals. Emissions of methane, a greenhouse gas 20 times more powerful than carbon dioxide, will increase 4 percent over the next decade.
Co-author Stephanie Hampton of Washington State University said that warm-water lakes have experienced less dramatic temperature increases, but their waters may have already nearly reached the highest temperatures fish can tolerate.
The study is published in the journal Geophysical Research Letters.
Creating a Class
In WMI, a class is an object that describes some aspect of an enterprise, such as a special type of disk drive. After you have created a class definition, write your provider DLL to supply instances of the class and property data, and to execute the methods defined for the class. Scripts and applications can then obtain data or control the device. For more information, see Developing a WMI Provider.
A base class represents some general concept. For example, the CIM_CDROMDrive class represents all types of CD-ROM drives in WMI, and contains general properties that describe all kinds of CD-ROM drives. For more information, see Creating a Base Class.
A derived class inherits properties and methods from another class. A derived class usually represents a specific case of a base class. For example, the Win32_CDROMDrive class represents a CD-ROM drive on a Windows system. The Win32_CDROMDrive class is based on, and inherits many of its properties from, CIM_CDROMDrive. However, Win32_CDROMDrive, like other derived classes, can have additional properties that make the derived class unique. For more information, see Creating a Derived Class.
Creating a class means defining the properties that describe that class. You can also define methods that manipulate the object represented by the class. Generally, a property represents an aspect of the object, such as a serial number for a device or a size in bytes for a process, while a method represents an action that changes the state or behavior of the device or logical entity. Each class must have at least one key property. While a class may have multiple keys, you cannot create an instance of a class with more than 256 keys.
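Once a class and its provider exist, scripts can enumerate its instances. A small, hedged sketch of the script side, using the third-party Python wmi package on Windows: the Win32_CDROMDrive class is the one discussed above, but the specific properties printed here are only examples of what a derived class can expose, not an exhaustive or guaranteed list.

```python
import wmi  # third-party package (pip install WMI); Windows only

# Connect to the local WMI service and enumerate instances of the derived
# class discussed above. Properties such as Name and Drive are illustrative;
# the full set comes from CIM_CDROMDrive plus what Win32_CDROMDrive adds.
conn = wmi.WMI()
for cdrom in conn.Win32_CDROMDrive():
    print(cdrom.Name, cdrom.Drive)
```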
Human skin is a garden of microbes which is home to about 1,000 bacterial species. Most are benign but some invade the skin and cause illness – and of these, antibiotic resistant bacteria are particularly dangerous. We normally associate these resistant bugs with hospitals, but new research finds that they could be living and spreading in households and within communities, too. For a notorious resistant bug, scientists have also been able to pinpoint where in the world it first began spreading. The hope is that this knowledge will allow better way of controlling infection and stopping epidemics. The Staph of nightmares About one in five humans carries the disease causing bacteria Staphylococcus aureus, or Staph, on the skin without any problem. However, breached skin, surgical wounds, or low immunity, as in HIV infection or cancer, may allow Staph to cause diseases ranging from minor skin ailments to major threats to life. The emergence of methicillin-resistant Staphylococcus aureus (MRSA) is well-known. Originally associated with only bacterial infections in hospitals and nursing homes, MRSA is now known to colonise the skin of otherwise healthy individuals – such infections are called “community-associated” (CA-MRSA). CA-MRSA spreads by contact with an infected individual. That is why, the spread of CA-MRSA can occur in households, where the spread between house members is difficult to control resulting in high rates of recurrent infection. This is often due to contaminated household objects such as shared razors, towels and door knobs. While the presence of Staph on skin has long been associated with infection, two features make CA-MRSA riskier. It can cause severe disease in previously healthy people – in fact, in about one in every ten cases, CA-MRSA infections leads to deadly pneumonia, severe sepsis, or the dreaded “flesh-eating disease” (a.k.a. ‘necrotizing fasciitis’). It also has the ability to spread rapidly, which has resulted in a global epidemic. The global epidemic has been attributed to a single CA-MRSA microbe, known as USA300. In the US it is responsible for infection outbreaks in 38 states, and it has spread to Canada and several European countries. Studies of USA300 have found molecular evidence which points to its ability to evolve into more harmful versions. USA300’s invasion of community households is less well understood. This is what Anne-Catrin Uhlemann at Columbia University Medical Centre and her colleagues wanted to investigate. In a paper in the Proceedings of the National Academy of Sciences they have successfully used “whole-genome sequencing” on Staph cells from 161 CA-MRSA-infected residents in New York city in order to reconstruct USA300’s evolutionary history. Whole-genome sequencing takes a snapshot of an organism’s complete genetic make-up, known as the genome, and determines the DNA sequence of all genetic material. Uhlemann used genomic sequencing and health statistics to gain insights into USA300’s spread during a period covering 2009-2011. They looked for small changes in the genome, which often give clues as to how the cell evolved. After investigating more than 12,000 small changes in the USA300 genome, the authors reconstructed the genetic history. This helped them determine that USA300 first arose around 1993. The molecular signatures allowed them to also home in on the geographic location where this happened, which they determined to be in northern Manhattan. 
Detailed study of USA300’s genome showed it acquired antibiotic resistant genes from viruses that infect bacteria. This allowed the genetic adjustments necessary for USA300 to become harmful. The authors also discovered a smaller subgroup of USA300 resistant to another antibiotic-class, fluoroquinolones, which appeared to emerge around the time when fluoroquinolone prescription rates had soared in the US. All this information put together shows that USA300 evolved and spread in households and communities in New York city. The occurrence of different antibiotic-resistant bugs highlights the effects of overuse of antibiotics. But working out how CA-MRSA spread within households and inside communities may help devise an infection control strategy to break the pattern of spread and reduce the possibility of another large-scale outbreak.
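Reconstructing a pathogen's history from whole-genome data ultimately rests on counting small differences between aligned genomes. A toy, hedged sketch of that idea is shown below; the sequences are invented stand-ins, whereas real analyses such as the USA300 study compare thousands of positions across complete genomes.

```python
def snp_differences(seq_a: str, seq_b: str) -> int:
    """Count single-nucleotide differences between two aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

# Invented toy sequences standing in for aligned genome fragments.
reference = "ATGGCTAACGTTAGC"
isolate   = "ATGGTTAACGCTAGC"

print(snp_differences(reference, isolate))  # -> 2 differing positions
# Fewer differences suggest more recently shared ancestry; accumulating such
# counts across many isolates is the logic used to estimate when and where a
# lineage like USA300 began to spread.
```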
In the previous post it was demonstrated the average surface temperature of the Earth is currently 15 C while its total temperature is .9 C; leaving a ΔT of 14.1 C, which is unfortunately too small a differential to permit the useful conversion of heat into work. Ocean thermal energy conversion (OTEC) on the other hand, a process that converts stratified layers of ocean heat into work, requires a heat difference (ΔT) of at least 20 C. Since the temperature of the ocean at a depth of 1000 is universally 4 C, which is about the average of temperature of the ocean at all levels, and a surface temperature of at least 24 C is needed to derive a result from OTEC, the areas in orange to red (the OTEC zone) below are the only ones where the heat of global warming can be regulated. The graphic is sectioned into squares of 572 kilometers a side and at a power density estimated at (1kW per km2) the OTEC zone has an initial global production potential of about 30 terawatts (TW). Roughly twice the amount of energy we are currently deriving from fossil fuels. It should be noted that the graphic is a representation of annual surface temperatures, so since seasons change, and as shown below the red blob representing a surface temperature of 28 C or above can shift with the winds and/or the seasons, so in a year like 2017 the Atlantic can have a red blob of its own that can produce as severe a hurricane season as the one just witnessed. Another way of looking at the red blob is, a heat black hole. It is a consequence of trade winds driving heat to a depth of 250 meters where during the period of the warming hiatus, 1998 to 2013, a great deal of ocean heat was sequestered. The 64 trillion-dollar question then should be, why aren’t we capitalizing on a massive energy and environmental opportunity that sequesters ocean heat? In part, it is because back in 1998 a team lead by Physicist Martin Hoffert of New York University concluded that the Earth’s atmospheric CO2 content cannot be stabilized without a tenfold increase in carbon-emission-free power generation over the next 50 years and since only 1.5 TW of carbon-emission-free power was being produced at the time, they concluded that non-fossil-fuel generation would have to account for at least 50 percent of the 30 TW required by 2050. At the time, and to as late as 2007, the estimated annual available output for OTEC power was only 3 to 5 TW, and so since it takes plants of at least 100 MW capacity, costing about $USD 600 million, for OTEC to be economically viable, compared to 3 MW wind turbines costing $USD 5 million or 400-Watt Solar Panels at $1,200 a piece, at a formative stage of the carbon divestment and renewable investment era OTEC was handicapped by a lowballed potential and a false economy bias associated with small components. Assuming intermittence of about 70% for wind and solar, at ((5*(100/3))/30) or $555 million per 100 MW then wind is only slightly less costly than OTEC, which is a bargain at (100,000,000 watts/600,000,000 dollars) or $.16 per watt, compared to Solar Panels at 3$ a watt (1200/400). 
But the real elephant in the room was revealed by the late David MacKay, former Chief Scientific Adviser to the UK Department of Energy and Climate Change, in his book Sustainable Energy – Without the Hot Air and in his TED talk A reality check on renewables. He pointed out that a country like Great Britain, where per-person energy consumption runs at about 125 kWh per day, would have to cover half its land mass with wind farms, or between 20 and 25 percent of its area with solar panels, to service its citizens. MacKay used light bulbs to illustrate the UK's energy consumption, noting that it was as if every person in the country had 125 light bulbs burning continuously around the clock, and that analogy is borrowed for this post.

In 2013, OTEC's maximum annual net power production was reassessed at 30 TW, with the rider that persistent cooling of the tropical oceanic mixed layer would limit net production to about 7 TW. By the time of the reassessment, however, many pioneering OTEC researchers had moved on to other fields, and others had retired or passed away with their vision unfulfilled. The 7 TW limit, moreover, pertains only to conventional OTEC.

As the years since 2013 have demonstrated, the blob isn't a very efficient heat black hole. In El Niño years the blob spreads across the Pacific as the thermocline rises in the east and descends in the west; heat rises in the east and sloshes west, and some of it is lost from the ocean's surface to the atmosphere. To effectively sequester surface heat, it must be moved below the thermocline and retained there for as long as possible, and how this can be accomplished is the subject of this post.

Although a 30 TW potential is ten times what was predicted in 2007, the following figure from the 2013 study An Assessment of Global Ocean Thermal Energy Conversion Resources With a High-Resolution Ocean General Circulation Model demonstrates that this potential is rapidly degraded by upwelling to about 14 TW within about 100 years, or about 2 life cycles of the equipment. As the following gif shows, this degradation is the consequence of an upwelling rate of 20 meters/year and the movement of surface heat to a depth of about 100 meters (the study models heat movement to a depth of only 55 meters).

The icons in the gif above, and in the Heat Pipe OTEC gif below, are used as follows: the OTEC gif opens on the OTEC producing area, zooms into the red blob, and then tilts 90 degrees to show evaporators being serviced by burning light bulbs, heat engines and condensers at a depth of 55 meters, and cold-water pipes servicing the condensers. The interval between the frames of the gif is 1 second, one fifth of the interval used for the Heat Pipe OTEC gif, because, as Fig. 3 above shows, the upwelling rate for conventional OTEC at a power level of 30 TW is 20 meters/year, compared to a diffusion rate of only 4 meters/year for Heat Pipe OTEC regardless of the power produced. The OTEC gif shows the surface bulbs burning out at a rate of one per frame because heat moves from the surface to 55 meters and back again within about 3 years, and upwelling moves the light bulbs away from the evaporators at the same rate.
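Two of the numbers above are easy to sanity-check, and the short Python sketch below does so. It is a back-of-the-envelope check added here, not part of the original analysis, and it assumes a 40 W incandescent bulb for MacKay's analogy.

```python
# Back-of-the-envelope checks on two figures quoted above (assumptions:
# a 40 W incandescent bulb for MacKay's analogy; the 55 m model depth and
# 20 m/year upwelling rate from the 2013 study).

PER_CAPITA_KWH_PER_DAY = 125     # MacKay's figure for UK consumption
BULB_W = 40                      # assumed incandescent bulb rating

# One 40 W bulb burning continuously uses 0.96 kWh/day, so 125 kWh/day is
# roughly 125-130 such bulbs per person, always on.
bulbs = PER_CAPITA_KWH_PER_DAY / (BULB_W * 24 / 1000)
print(f"Always-on 40 W bulbs per person: ~{bulbs:.0f}")

# Conventional OTEC: heat deposited at 55 m returns to the surface by
# upwelling at 20 m/year, i.e. within about 3 years.
MODEL_DEPTH_M = 55
UPWELLING_M_PER_YR = 20
print(f"Heat return time: ~{MODEL_DEPTH_M / UPWELLING_M_PER_YR:.1f} years")
```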
The 2013 study of Rajagopalan and Nihous, per the following graphic, demonstrates that within 1000 years conventional OTEC cools the surface near the equator by about 4 degrees while the higher latitudes are warmed by the same amount, which is problematic in terms of sea level rise and for the aquaculture that depends on upwelling along the Americas' coastlines. The takeaway from the gif, therefore, should be that upwelling doesn't sequester heat. It only moves it, rapidly, to higher latitudes, and the further the heat, in the form of burning light bulbs, is removed from the evaporators, the less effective the evaporators are at transferring that heat into the heat engines where the transformation from heat to work has to take place.

The following Heat Pipe OTEC gif shows how surface heat (the light bulbs) is moved to 1000 meters at a rate as high as 75 meters/second as a consequence of the pressure difference between the evaporating and condensing ends of the heat pipe. From 1000 meters, at a diffusion rate of 4 meters/year, it takes 250 years, or 10 increments of 25 years, for the light bulbs to get back to the surface, so such systems are an effective way of sequestering heat. They are true heat black holes. Furthermore, for at least the next 500 years these black holes will draw heat from the entire surface to replenish the heat that is drawn into the evaporators. One thousand years from now, therefore, only about 0.18 degrees' worth of heat will have been relocated to higher latitudes (4 C × (4 heat cycles / 90 heat cycles)). By that time it will have become advantageous to slowly release heat from the oceans to the atmosphere, because the atmosphere will be starting to cool in the absence of radiative forcing.

It should be noted that with heat pipe OTEC, heat is converted to work at an efficiency of 7.6% (see also here). Per the following gif, therefore, only 1 of the original 15 light bulbs adjacent to each evaporator is extinguished in each 250-year cycle, while the other 14 are moved incrementally back towards the surface. Between the evaporator and the condenser, the heat is essentially in a state of thermodynamic limbo once it has passed through the heat engine, until it again becomes available to the evaporator back at the surface.

As will be discussed below, it will probably take about 200 years (((30/18) × 2) × 60) to build out the entire fleet of OTEC platforms capable of producing 30 TW of power. By that time at least as much heat will have been added to the system as has been generated to this point, so by the time today's heat resurfaces there will be another light bulb's worth of heat to add to the topmost level of the following gif. This additional heat is represented by the orange light bulbs.

Burnt-out light bulbs are waste heat, but so long as they remain within the OTEC producing area they too can be reconverted, at least in part, to work once they are back at the surface. They only become anergy outside the OTEC producing area, at which point they either melt ice or are radiated to space, which won't happen for at least 1000 years, until we have stopped adding to the atmospheric greenhouse gas load and have started drawing that load down. The above gif covers a period of only 500 years, during which only 2 light bulbs' worth of heat will have burned out, and that heat will all remain within the red blob. With the upwelling of conventional OTEC, by contrast, all 15 light bulbs are burnt out after 166 years and have migrated well beyond the blob.
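The heat-cycle arithmetic in this section can be restated in a few lines of Python. The sketch below is illustrative only; the 4 m/year diffusion rate, the 90 cycles attributed to conventional OTEC, and the 4 C equatorial cooling figure are all taken from the discussion above rather than derived independently.

```python
# A worked check (not from the original post) of the heat-cycle arithmetic
# quoted above, using the post's own figures.

CONDENSER_DEPTH_M = 1000
DIFFUSION_M_PER_YEAR = 4
HORIZON_YEARS = 1000

# Heat Pipe OTEC: heat dropped to 1000 m diffuses back to the surface at
# 4 m/year, so one full cycle takes 250 years and the heat is recycled
# only 4 times in 1000 years.
cycle_years = CONDENSER_DEPTH_M / DIFFUSION_M_PER_YEAR        # 250
cycles = HORIZON_YEARS / cycle_years                          # 4

# Relocation of heat to higher latitudes, scaled from the 4 C of equatorial
# cooling the 2013 study attributes to conventional OTEC, which cycles the
# same heat about 90 times in 1000 years.
relocated_c = 4.0 * (cycles / 90)                             # ~0.18 C

print(f"Heat pipe cycle length : {cycle_years:.0f} years")
print(f"Cycles in 1000 years   : {cycles:.0f}")
print(f"Heat relocated poleward: {relocated_c:.2f} C")
```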
With Heat Pipe OTEC, after 1000 years the original heat will have been recycled only 4 times between the surface and 1000 meters, as opposed to 90 times between the surface and 55 meters with conventional OTEC, and this has ramifications for both the amount of energy that can be extracted from the oceans and the thermal efficiency of the systems.

In the article "The Long Slow Rise of Solar and Wind", Vaclav Smil points out that every major energy source that has come to dominate the world supply, from coal to oil to natural gas and now renewables, has taken 50 to 60 years to rise to the top spot. Since 30 TW is about 1.66 times the energy currently derived from all of the world's sources combined, and no single source has ever held more than a 50% share of that market, and never for more than 75 years, it will probably take close to 200 years to truly get a handle on climate change. In the interim, and out to roughly 3,250 years ((100/7.6) × 250), we can obtain 30 TW of energy with heat pipe OTEC.

Rajagopalan and Nihous suggest that conventional OTEC cannot sustain a power level of 30 TW for more than a few decades without degrading the OTEC resource, yet in the same breath the paper says 7 TW is sustainable in perpetuity. The difference is the upwelling rate of 20 meters/year as opposed to the 4 meters/year of the heat pipe. Each power cycle leaks heat out of the OTEC producing area, so to get the maximum conversion of warming heat into work the fewest possible cycles are required, which at 7.6% converted per cycle works out to about 13. At that point an extrapolated version of the heat pipe OTEC gif would still show 3 light bulbs' worth of heat capable of producing power.

Rajagopalan's assessment is that OTEC sources and sinks relax to their pre-OTEC condition at a rate similar to that at which they are built up. At any time, therefore, before or after full OTEC capacity is attained, production can be throttled back should unanticipated problems arise, but it is highly unlikely that you would ever want to shut off such an energy source, particularly when it is cooling the surface and its curtailment would cause a rapid release of heat from the ocean into the atmosphere. It should also be noted that moving surface heat to an average depth of 500 meters significantly lowers the thermal expansion of the oceans, and thus sea level rise, a benefit that can be maintained for up to 3,250 years. OTEC therefore represents the only approach, aside from atmospheric or surface-ocean-based solar radiation management, that has the potential to help directly mitigate anthropogenic warming of the surface ocean and atmosphere.

One of the main problems confronting systems that operate in harsh environments like the deep ocean is tropical cyclones. These, however, do not form within 5 degrees latitude of the equator because of the Coriolis effect, which in turn makes this band the best place for OTEC systems to operate. The petroleum industry has over 30 years' experience in such environments and now produces oil and gas from depths of about 2,900 meters, roughly 3 times the depth of a deep-water condenser. The risers used to move gas and fluids between the deep and the surface are similar to the piping that would be used in OTEC, which would rely on tensioning cables to secure the heat pipe and the deep-water condenser to the surface, the same way oil platforms are secured to the sea floor.

Bottom line: what more do you want, beyond long-lived, abundant power that will cool the planet?
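As before, the figures quoted in this section can be checked with a few lines of Python. This is a sketch over the post's own assumptions (roughly 18 TW of current global supply, Smil's 50 to 60 year transition time, 7.6% conversion per 250-year cycle), not an independent estimate.

```python
# A worked sketch (not from the original post) of the build-out and
# extraction-horizon figures quoted above.

CURRENT_SUPPLY_TW = 18        # post's figure for current global energy supply
TARGET_TW = 30                # OTEC potential
TRANSITION_YEARS = 60         # Smil: 50-60 years for a source to reach the top

# Post's estimate of the time to build out the full 30 TW fleet.
build_out_years = (TARGET_TW / CURRENT_SUPPLY_TW) * 2 * TRANSITION_YEARS   # ~200

# With 7.6% of the circulating heat converted to work per 250-year cycle,
# the resource supports roughly 100/7.6 ~ 13 cycles, i.e. ~3,300 years
# (the post rounds this to about 3,250).
EFFICIENCY_PCT = 7.6
CYCLE_YEARS = 250
horizon_years = (100 / EFFICIENCY_PCT) * CYCLE_YEARS                        # ~3,289

print(f"Fleet build-out : ~{build_out_years:.0f} years")
print(f"Energy horizon  : ~{horizon_years:.0f} years")
```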
By knowing the genes and proteins that control a cell's progress toward the differentiated form, researchers may be able to accelerate the process, a potential boon for the use of stem cells in therapy or the study of some degenerative diseases, the scientists say. Their finding comes from the first large-scale search for genes crucial to embryonic stem cells. The research was carried out by a team at the University of California, San Francisco and is reported in a paper in the July 11, 2008 issue of Cell.

"The genes we identified are necessary for embryonic stem cells to maintain a memory of who they are," says Barbara Panning, PhD, associate professor of biochemistry and biophysics at UCSF and senior author on the paper. "Without them the cell doesn't know whether it should remain a stem cell or differentiate into a specialized cell."

The scientists used a powerful technique known as RNA interference, or RNAi, to screen more than 1,000 genes for their role in mouse embryonic stem cells. The technique allows researchers to "knock down" individual genes, reducing their abundance in order to determine each gene's normal role. The research focused on proteins that help package DNA. In the nucleus, DNA normally wraps around protein complexes called nucleosomes, forming a structure known as chromatin, which is what makes up chromosomes.

They found 22 proteins, each of which is essential for embryonic stem cells to maintain their characteristic shape, growth properties, and pattern of gene expression. Most of the genes code for multi-protein complexes that physically rearrange, or "remodel," nucleosomes, changing the likelihood that the underlying genes will be expressed to make proteins. The main player they identified is a 17-protein complex called Tip60-p400. This complex is necessary for the cellular memory that maintains embryonic stem cell identity, Panning explains. Without it, the embryonic stem cells turned into a different cell type, one with some features of a stem cell but many features of a differentiated cell. The scientists believe that Tip60-p400 is necessary for embryonic stem cells to correctly read the signals that determine cell type. These findings are not only important for understanding cellular memory in embryonic stem cells, but will also likely be relevant to other cell types, they say.

Inactivation of other genes disrupted embryonic stem cell proliferation, even though these genes were already known to have only a slight influence on the viability of mature cells in the body. This suggests that embryonic stem cells are "uniquely sensitive to certain perturbations of chromatin structure," the scientists report. If other types of stem cells are also found to be sensitive to these chromatin perturbations, this could lead to novel cancer therapies in the future, Panning says.
The species of the genus Congeria are the only freshwater bivalves in the world that live exclusively in caves. They are Miocene relicts, the only surviving members of a once-widespread genus, and are endemic to the Dinaric Karst region. There are three species, Congeria kusceri, Congeria jalzici and Congeria mulaomerovici, known from only 15 caves in Croatia, Slovenia and Bosnia and Herzegovina, and all of them are highly threatened. Congeria has evolved numerous adaptations to life in caves; among the most prominent are the loss of pigmentation and visual senses, changes in life-history strategy, and increased longevity.

Our research focuses on the loss of melanin pigmentation. The goal is to identify the molecular mechanisms that have led to albinism and to understand its evolutionary implications. To achieve this, we are comparing the three species of Congeria with closely related surface species such as the zebra mussel Dreissena polymorpha. Unlike Congeria, surface dreissenids are widespread and notoriously invasive. In a second line of research, we seek to understand the molecular changes associated with the specialization and loss of invasiveness that have occurred within the Congeria lineage.

We are also collaborating with Annette Summers Engel and Hannah Rigoni (University of Tennessee, USA) to decipher the food-web structure in Congeria caves, which might be partially based on chemolithoautotrophy. To identify potential microbial symbionts, we are working with Ana Bielen, Sandra Hudina and Marija Vuk from the University of Zagreb, Croatia.
Gender differences exist in many health conditions, and COVID-19 is no different. It appears that, with regard to the novel coronavirus, men fare worse. This global phenomenon is particularly visible in some countries: in Thailand, males account for a massive 81% of COVID-19 related deaths, while in England and Wales the figure is 61%. What are the reasons for the considerable difference between the sexes? We spoke to Dr Anthony Kaveh, MD, physician anesthesiologist and integrative medicine specialist. "Men are disproportionately affected by COVID-19 than women. From preliminary data, possible reasons include behavioural, baseline health, and genetic differences between men and women," says Dr Kaveh. Let's look at what we know about COVID-19 infections among men and women. But first, a little about how and why the sexes differ.

Men and women have vastly different biological characteristics that develop thanks to our chromosomes. A chromosome is a bundle of coiled DNA, found in the nucleus of almost every cell in the body. Humans have 23 pairs of chromosomes, and the sex chromosomes determine whether you develop as a male or a female. In humans, women have two larger X chromosomes (XX), whereas men have a single X chromosome and a much smaller Y chromosome (XY) that carries relatively few genes. When an embryo is developing in the womb, these chromosomes dictate the future sex of the baby. One of the genes found on the Y chromosome, the SRY gene, starts testicular development in an XY embryo. The testicles begin to make testosterone, which directs the embryo to develop as a male. In an XX embryo there is no SRY gene, so an ovary develops instead and makes female hormones. This basic biological variation between the sexes can affect COVID-19 infection rates.

The effects of hormones

Although essential for male health, testosterone levels are also linked to a range of medical conditions. Oestrogen is a predominantly female hormone that protects against conditions including heart disease. Men cannot benefit much from its positive health effects, as they produce only low levels. However, Dr Kaveh says that "The immunologic effects of oestrogen in protecting against COVID-19 are theoretical and don't yet provide a mechanism to explain our observations." Testosterone could nonetheless have a role to play in COVID-19 infection rates: high levels of testosterone can suppress the immune response, and researchers found that women and men with lower levels of testosterone had higher antibody responses to an influenza vaccine.

Genetics and immunity

The X chromosome has about 900 genes; the Y chromosome, just 55. Women have a genetic advantage with two X chromosomes because if there is a mutation in one, the other copy provides a buffer. Men have more sex-linked diseases, such as the blood clotting disorder haemophilia, and suffer from an increased rate of metabolic disorders. The protective XX effect helps explain why male death rates are frequently higher; the female immune system is generally stronger. Concerning COVID-19 infections, Dr Kaveh says, "Genetic factors are often considered, including the more active female immune system. While a more 'active' immune system would make sense to protect against COVID-19, it would be expected to worsen the cytokine storm we observe in severe COVID-19 infection." However, there is no evidence that cytokine storms, the potentially lethal, excessive immune responses, are more common in women.
If more men are testing positive for COVID-19, could the simple reason be that more men are tested than women? In fact, it seems the opposite is true. "Within the context of our early statistics, women are tested more frequently than men, but men have more positive tests. This may reflect a male 'stoicism' that leads to delayed care," says Dr Kaveh. Men are less likely than women to seek medical attention: the Centers for Disease Control and Prevention (CDC) reported that women were 33% more likely than men to visit a doctor, even excluding pregnancy-related visits. The higher infection and death statistics in men therefore do not appear to be due to a bias in testing.

"Obesity, diabetes, hypertension, and smoking are also predictors of COVID-19 hospitalisation, but the breakdown is difficult to correlate," said Dr Kaveh. People of either sex are more likely to suffer complications from coronavirus if they have certain pre-existing health conditions or engage in behaviours such as smoking and excessive alcohol consumption. These conditions and behaviours tend to be more common in men, which could contribute to the imbalance we see in COVID-19 infections. The association between risk factors and infection rates is not yet fully understood. "For example, hypertension is more common in men until menopause, at which point female rates quickly rise," explains Dr Kaveh. If hypertension were driving the difference, we would expect to see COVID-19 infection rates rise in women past menopausal age, yet this is not the case. "Obesity, a risk factor for diabetes, affects women more than men globally. However, diabetes is slightly more prevalent in men. These comorbid conditions don't fully explain the COVID-19 observations, and neither does smoking," says Dr Kaveh.

Smoking is a risk factor for all respiratory diseases and also for lung cancer, which is another COVID-19 risk factor. In China, about 50% of men smoke but only 2% of women, and these figures could contribute to the country's high ratio of male deaths, which is more than double the female rate. The differences in smoking and death rates are not as extreme in other countries, so risky behaviour cannot fully explain the sex bias in COVID-19 infections.

The gender impact on COVID-19

As yet, there is no definitive answer as to why more men are suffering severe COVID-19 infections, and more research is needed. "We are still very early in our global epidemiological observations of COVID-19. More complete data in the coming months will hopefully provide more clues to explain our observations," concluded Dr Kaveh.

Last updated: 30-04-2020