The probability statement P(A | B) = p has a very different meaning from the logical statement "B implies A with certainty p". The logical statement means that whenever B is true, A is true with certainty p, regardless of any other information we may have. In other words, it is modular. But the probability statement is not modular: it applies only when the only thing we know is B. If anything else is known, e.g. C, then we must refer to P(A | B, C) instead. The only exception is when we can prove that C is conditionally independent of A given B, so that P(A | B, C) = P(A | B). This point was made eloquently by Pearl (p57). He used it to show that logic based on "certainty factors" is not an adequate replacement for probability theory. In the rain/sprinkler problem, it seems obvious that we need to include C. But sometimes we drop relevant information without realizing it. Consider this example: My neighbor has two children. Assuming that the gender of a child is like a coin flip, it is most likely, a priori, that my neighbor has one boy and one girl, with probability 1/2. The other possibilities---two boys or two girls---each have probability 1/4. Suppose I ask him whether he has any boys, and he says yes. What is the probability that the other child is a girl? By the above reasoning, it is twice as likely for him to have one boy and one girl than two boys, so the odds are 2:1, which means the probability is 2/3. Bayes' rule gives the same result. Suppose instead that I happen to see one of his children run by, and it is a boy. What is the probability that the other child is a girl? Observing the outcome of one coin flip has no effect on the other, so the answer should be 1/2. In fact that is what Bayes' rule says in this case. If you don't believe this, draw a tree describing the possible states of the world and the possible observations, along with the probabilities of each leaf. 
Condition on the event observed by setting all contradictory leaf probabilities to zero and renormalizing the nonzero leaves. The two cases have two different trees and thus two different answers. This seems like a paradox because it seems that in both cases we could condition on the fact that "at least one child is a boy." But that is not correct; you must condition on the event actually observed, not its logical implications. In the first case, the event was "He said yes to my question." In the second case, the event was "One child appeared in front of me." The generating distribution is different for the two events. Probabilities reflect the number of possible ways an event can happen, like the number of roads to a town. Logical implications are further down the road and may be reached in more ways, through different towns. The different number of ways changes the probability. This property of probability theory, which is different from logic, is discussed at length by Pearl (p58). In logic, it does not matter how a proposition was arrived at. But in probability, the query cannot be ignored. Here is another example, based on Pearl's: Suppose you, a Bostonian, have entered the New Hampshire lottery along with 999 people from New Hampshire. The prize will be awarded to exactly one of the 1000 people. By sheer luck, you obtain a computer printout listing 998 participants; each name is marked "no prize", and yours is not among them. Should your chances of winning increase from 1/1000 to 1/2? Under normal circumstances, yes. But suppose while poring anxiously over the list you discover the query that produced it: "Print the names of any 998 New Hampshire residents who did not win." Since you are from Boston, the list could not possibly have had you on it. Thus it is completely irrelevant to you; your probability of winning is still 1/1000. (If you are not convinced, draw a tree as before.) What if you just have raw facts without the query that generated them? 
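The two trees for the two-children example can also be checked numerically. Here is a quick Monte Carlo sketch (illustrative code, not from the original text): it simulates families and conditions on each observed event separately, recovering the answers 2/3 and 1/2.

```python
import random

random.seed(0)
trials = 200_000

# Case 1: I ask "do you have any boys?" and he answers yes.
yes_count = yes_and_girl = 0
# Case 2: one child, chosen at random, runs by and is a boy.
boy_seen = boy_seen_other_girl = 0

for _ in range(trials):
    kids = [random.choice("BG"), random.choice("BG")]
    if "B" in kids:                   # he would answer yes
        yes_count += 1
        if "G" in kids:
            yes_and_girl += 1
    i = random.randrange(2)           # which child happens to run by
    if kids[i] == "B":                # the child I see is a boy
        boy_seen += 1
        if kids[1 - i] == "G":        # is the *other* child a girl?
            boy_seen_other_girl += 1

print(yes_and_girl / yes_count)          # close to 2/3
print(boy_seen_other_girl / boy_seen)    # close to 1/2
```

Note that the simulation conditions on the event actually generated in each scenario, exactly as the tree argument prescribes; collapsing both scenarios to "at least one child is a boy" would incorrectly merge the two cases.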
Unless you can prove that the query is irrelevant, you should average over likely queries. The only time information can safely be omitted is when it is statistically independent of the quantity of interest. This is why independence diagrams are so important for efficient probabilistic computation. The maximum entropy principle has been proposed as a way to incorporate facts without an associated query. If you are starting from a uniform distribution, the idea is to find a distribution consistent with the facts which has maximum entropy. (If you are starting from a non-uniform distribution, you find the distribution which has minimum cross-entropy from the current one.) It is a useful approximation, but only an approximation; you can do better by knowing something about the query. The maximum entropy principle essentially assumes that any state of the world consistent with the facts is equally likely to have produced them. In the two-children example, the maximum entropy distribution given "at least one child is a boy" assigns probability 2/3 to the other being a girl, which is consistent with some but not all of the different ways we might have arrived at that information. A basic assumption of probability theory is that, given enough information, the status of any event can be reduced to a certainty. Randomness is therefore the absence of information, and hence subjective. The probability distributions we assign to events always represent our own lack of information; someone with different information would assign different probabilities. Another way to say this is that all probabilities are conditional probabilities. In many derivations, these conditions are omitted for brevity, but it is important to remind oneself that they are still there. Statisticians are often asked, "Is that the real distribution?" There is no answer to such a question, because it presupposes that randomness is intrinsic, when it is not. 
A more appropriate question would be, "Does that distribution follow from the data, your stated assumptions, and the axioms of probability theory?" Distributions encode the information available to the practitioner; nothing more. Probability theory is not about absolute truth. It is about inference consistent with certain axioms. It cannot tell you how often an event will actually occur in practice; that is an objective quantity that you can only approximate, by acquiring more and more information about the random process. A common, but flawed, rebuttal to the subjectivist argument is that the success of quantum physics "proves" that some things are intrinsically random. But quantum theory no more proves intrinsic randomness than coin flipping does: coin flipping is firmly in the realm of Newton's laws, yet is best described statistically, just as random number algorithms, which are completely deterministic, pass statistical tests. The convenience of a mental model does not prove that the model is correct. This is a favorite topic of Edwin Jaynes, who focused especially on the subjectivity of entropy in physics. For example, see "Clearing up Mysteries - The Original Goal" and "Probability in Quantum Theory". To see how extra information can create or destroy dependence, suppose a, b, and s are all independent Gaussian random variables. Define x = a + s and y = b + s. Given only this information, x and y are dependent (make sure you understand why this is true). Suppose I now tell you the value of s. Conditional on this information, x and y are independent. Suppose I also tell you that the product xy is positive. This makes x and y dependent again. In addition to all this information, suppose I now tell you that both x and y are positive. This makes them independent again. Unfortunately, orthodox model selection via p-values is based on flawed conditioning of this sort. Note that the p-value is not the probability of the hypothesis. 
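The chain of dependence and independence claims in the Gaussian example above can be checked empirically. Here is a rough simulation (hypothetical code; it conditions on s = 0 for concreteness, which reduces x to a and y to b):

```python
import random

random.seed(0)
n = 100_000
a = [random.gauss(0, 1) for _ in range(n)]
b = [random.gauss(0, 1) for _ in range(n)]
s = [random.gauss(0, 1) for _ in range(n)]
x = [ai + si for ai, si in zip(a, s)]
y = [bi + si for bi, si in zip(b, s)]

def corr(u, v):
    # Plain sample correlation coefficient.
    m = len(u)
    mu, mv = sum(u) / m, sum(v) / m
    cov = sum((p - mu) * (q - mv) for p, q in zip(u, v)) / m
    vu = sum((p - mu) ** 2 for p in u) / m
    vv = sum((q - mv) ** 2 for q in v) / m
    return cov / (vu * vv) ** 0.5

c_joint = corr(x, y)
print(c_joint)       # about 0.5: x and y are dependent

# Given s (say s = 0), x = a and y = b, which are independent.
c_given_s = corr(a, b)
print(c_given_s)     # about 0

# Further condition on xy > 0: dependent again.
same = [(p, q) for p, q in zip(a, b) if p * q > 0]
c_same_sign = corr([p for p, _ in same], [q for _, q in same])
print(c_same_sign)   # clearly positive (about 0.64)

# Also condition on x > 0 and y > 0: independent once more.
pos = [(p, q) for p, q in same if p > 0]
c_both_pos = corr([p for p, _ in pos], [q for _, q in pos])
print(c_both_pos)    # about 0 again
```

Zero correlation does not prove independence in general, but for this example the correlations move exactly as the argument predicts.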
The frequentists gave it the mysterious name "p-value" because it is a mysterious quantity that means very little objectively. A bad argument that I found in the New York Times illustrates the problem. All of today's climate models, when started with the Earth's conditions a few million years ago, give low probability to the favorable conditions for life that we see today. Therefore, the article concludes, these models must be flawed, and we shouldn't believe what they say about global warming and the like. The problem is that we know from studies of the universe that favorable conditions for life are very rare. So following the logic in the article, the meteorologists on any planet must necessarily have bad models of climate. Here is another example, involving a classifier that labels an object a bowling ball whenever its weight exceeds an unknown threshold. To test the system, you place an object on the scale; it weighs 7 and so you classify it as a bowling ball. An expert on bowling balls inspects the object and lets you know that it is indeed a bowling ball. Does this give you any useful information? Yes, because it eliminates any threshold above 7. Together with your prior knowledge, this means that the true threshold is between 1 and 7, with equal probability. And so your best guess is now to use a threshold of 4. This phenomenon suggests that the error-driven training procedure used for neural nets, where only erroneous predictions can alter the classifier, is incomplete. Error-driven training does not average over all classifiers that are consistent with the data, which is necessary for making optimal inferences. Newer techniques like the support vector machine and Bayes-point machine are not error-driven and come closer to true probabilistic averaging. The effect of fulfilled predictions can be even more extreme than described here. The optimal classifier can change in an arbitrary way, with new decision boundaries appearing or disappearing, when a prediction is confirmed. The set of consistent classifiers is a convex polytope in the space of classifiers. 
The optimal classifier is the polytope's center of mass. New information cuts away pieces of the polytope, thus moving the center of mass.
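The scale example can be sketched as a one-dimensional version space (illustrative code; the prior bounds [1, 9] are my own assumption, chosen so that the confirmed prediction has something to cut away):

```python
# The classifier is a weight threshold t: anything at or above t is called
# a bowling ball.  Assume a priori that t is uniform on [1, 9].
lo, hi = 1.0, 9.0

def best_threshold():
    # Center of mass of the (one-dimensional) polytope of consistent
    # classifiers -- here just the midpoint of the interval.
    return (lo + hi) / 2

print(best_threshold())   # 5.0 before any feedback

# An object weighing 7 is classified as a bowling ball, and an expert
# confirms it.  The confirmation eliminates every threshold above 7,
# even though no error was made.
hi = min(hi, 7.0)
print(best_threshold())   # 4.0 -- the optimal classifier moved
```

In one dimension the "polytope" is just an interval, but the same cutting-and-recentering picture holds in higher dimensions, which is why a confirmed prediction can move decision boundaries arbitrarily.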
Good sleep is critical to good health. But what if someone stops breathing for short pauses in the middle of the night, breathes shallowly, snorts, snores, gasps for air, or breathes infrequently? This is known as sleep apnoea, and it could be contributing to a great number of health issues. The disruptions through the night may last from 30 seconds to several minutes and may occur hundreds of times, resulting in a lack of oxygen to all tissues, particularly the brain. It can be tricky to diagnose, as there is no blood test for it; usually someone sleeping in the same room has to notice the snoring, gasping, snorting, and pauses through the night and raise concern. It occurs in two types, both of which have major implications for health if untreated. Obstructive sleep apnoea is the most common form; central sleep apnoea is less common. Someone with obstructive sleep apnoea has an airway that becomes partially or fully blocked during sleep, due to excess weight, large tonsils compressing the area, or anatomical defects. Central sleep apnoea, on the other hand, occurs when the part of the brain that controls breathing does not correctly communicate with the muscles required for breathing, resulting in pauses or infrequent breaths while sleeping. A combination of the two can occur but is not common. In all cases the body receives less oxygen than it needs, and it responds by releasing stress hormones such as cortisol and adrenaline. The increase in these hormones, coupled with the lack of oxygen, can put a person with sleep apnoea at higher risk of high blood pressure, heart attack, stroke, irregular heartbeats, and heart failure. These people often also wake up with a headache, are very tired, and experience brain fog through the day, due to this lack of oxygen and quality sleep. 
Some people have sleep apnoea in a mild or even moderate form that can be quite subtle: their snoring or pauses don't wake others, yet they themselves wake feeling tired and unrested without knowing why. Others with more severe sleep apnoea are often told that their snoring sounds like a freight train, or their own gasping for air wakes them up feeling as though they were choking. Doctors and researchers have described the recent continuous growth of sleep apnoea as an epidemic. It is well established in men but is showing rapid increases amongst women, and it is estimated to affect somewhere between 25% and 30% of adults, though it often goes undiagnosed and untreated. The key telltale sign is snoring. Sufferers may also gasp for air and choke briefly whilst sleeping, but have no recall of it when waking up. They will usually feel sleepy during the day, be tired, and as a result often be irritable too. Historically, the main treatment for moderate to severe sleep apnoea has been to keep the airway open via a mask. The CPAP (continuous positive airway pressure) mask and machine has been around since the early 1980s and is highly effective – but there are problems. Patients often reject it for various reasons, including discomfort, dry mouth, noise disturbance, claustrophobia, and of course the fact that it doesn't travel well. Surgery is another option. It may involve removal of the adenoids or excess flesh, or one of the recent pacemaker-type implants that place a generator in the upper chest with an electrical stimulation lead to the neck to keep the airway open. This is usually reserved for severe cases that reject CPAP. Most sleep apnoea patients suffer worst when they lie on their back, which causes the tongue to fall back and obstruct the airway. The old method was a tennis ball sewn into the back of the pyjama jacket, but special pillows are now available for users of both oral appliances and CPAP. 
Dental Appliance Therapy Often called an oral appliance or mouthpiece, this treatment makes use of an M.A.D. (mandibular advancement device): a simple, well-fitting gum shield that is comfortable to wear and highly effective. It works by moving the jaw forward slightly, which opens the airway; breathing becomes smooth and continuous, and snoring stops immediately. This is by far the most popular option and the easiest to use. It shapes itself to your dental profile and has proved effective in 98% of cases. It has the added advantage of being easy to take wherever you go – either away on business, or on the family holiday. Quality mouthpieces are now highly recommended by the NHS for all who snore and for cases of mild to moderate sleep apnoea, and they are even proposed as a replacement where CPAP is not tolerated.
A long time ago, in a galaxy far, far away, two black holes spiraled towards each other, pulling closer and closer until they finally smashed together. This incredibly powerful collision unleashed ripples in the fabric of the universe that spread outwards at the speed of light. A billion years later, on September 14, 2015, they arrived at Earth and produced a faint signal at two of the most sensitive scientific instruments ever made. This detection is the result of a decades-long quest to know more about our universe, and opens up a new era in our ability to observe the cosmos. These ripples — called gravitational waves — were predicted by Albert Einstein a century ago, a result of his theory of general relativity (the same theory that explains the motion of Mercury and makes your GPS work correctly, among other things). Like sound waves in the air, gravitational waves propagate away from a source — in this case, two black holes colliding. However, whereas sound waves are variations in air pressure, gravitational waves are wrinkles in space-time. While hints and indirect evidence for the existence of gravitational waves had been seen before, no one had ever measured one directly — until now. Last week, scientists from the Laser Interferometer Gravitational-Wave Observatory (LIGO) announced that towards the end of last year, during engineering test runs before the start of the instrument’s official science observations, they had finally detected a gravitational wave. Catching a Wave When a gravitational wave passes through an object (and they pass through everything, which is important; but we’ll get to that later), the distortion in space-time jostles the particles within that object. While the particles themselves feel no force, the distances between them change as the space-time that they reside in is alternately stretched and squeezed by the passing wave. Don’t worry though. 
As mildly terrifying as LIGO’s animation of the exaggerated effects of gravitational waves on Earth is, it’s important to remember that the stretching and squeezing effect from these waves is unimaginably tiny. The ratio of change in length to original length, known as strain, from gravitational waves caused by a distant black hole collision is on the order of 10⁻²¹ — that’s a decimal point followed by 20 zeros, then a 1. Put another way, if a gravitational wave acted on the distance between the sun and the nearest star, Proxima Centauri, the change in distance would be about the width of a single strand of human hair. But you know something amazing about humans? We can measure that. Interferometry, My Dear Watson To measure the tiniest of movements, the LIGO Collaboration built the most sensitive ruler in history. Two of them, in fact — one in Louisiana, and one in Washington. Each detector consists of two “arms,” each of which is four kilometers long. Together, they make up an interferometer, an extremely sensitive device that measures the difference between two distances. As this brief video from LIGO shows, an interferometer compares two distances by splitting a laser beam down two different paths, reflecting it back, and looking at interference between the two beams. Light, along with all other forms of electromagnetic radiation, can be thought of as oscillations, or waves, in the strength of electric and magnetic fields. (We won’t get into wave-particle duality in this post.) The distance between successive peaks of the wave is called the wavelength of that light, and is a characteristic that, among other things, determines how we perceive the color of that light. Lasers are particularly useful because they have a tight beam and can have high spectral purity, meaning that the light within them has one very specific wavelength — the lasers used in LIGO, for example, have a wavelength of 1064 nanometers. 
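The hair-width comparison above is easy to check with rough numbers (back-of-envelope values assumed here, not taken from the article):

```python
strain = 1e-21                       # order of magnitude of the strain
metres_per_ly = 9.461e15             # metres in one light year
d_proxima = 4.24 * metres_per_ly     # Sun to Proxima Centauri, ~4.24 ly

delta = strain * d_proxima
print(delta)   # ~4e-5 m, i.e. roughly 40 micrometres --
               # about the width of a fine human hair
```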
When the laser beam is split, sent down each arm of the detector, bounced back, and recombined, the waves from each beam will be at a certain point along their cycle depending on the distance that they have traveled. Since the split laser beams in an interferometer recombine at one point in space, and then travel together to a detector, the waves in the two beams will potentially be out of phase if one of the beams traveled farther than the other in the interim — meaning that the peaks of one no longer line up with the peaks of the other. This phase shift of one relative to the other causes interference, resulting in changes in the amplitude (the height of the peaks) of the recombined light wave, which is observed as changes in the brightness of the light. Constructive interference happens when the peaks and valleys of the two combined waves are in the same position, and the amplitude of the resulting wave is at its highest. This results in bright light. Destructive interference, on the other hand, happens when the peaks of one wave align with the valleys of the other and they cancel out. This results in darkness. These are just the two extremes, of course; there are also an infinite number of positions in between that would result in a recombined wave of different amplitudes, as shown in the gif above. Since amplitude corresponds to brightness, this interference allows us to measure the difference in the distance traveled by the two light beams by measuring the brightness of the beam produced when they are recombined. “We did it!” Interferometry allows for extraordinarily precise measurements — in the case of LIGO, just enough for them to detect the effect of a passing gravitational wave. By making the arms of the detector so long, the designers of LIGO increased the scale on which a gravitational wave acts, thereby increasing the change in distance caused by the tiny strain of the gravitational wave and making it easier to measure. 
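The brightness-versus-path-difference relationship can be written down directly for idealized two-beam interference (a sketch of the principle, not LIGO's actual readout chain):

```python
import math

WAVELENGTH = 1064e-9   # LIGO's laser wavelength, in metres

def brightness(path_difference):
    """Relative intensity of the recombined beam, from 0 (dark) to
    1 (bright), for a given difference in arm path lengths."""
    phase = 2 * math.pi * path_difference / WAVELENGTH
    return math.cos(phase / 2) ** 2

print(brightness(0.0))               # 1.0: fully constructive, bright
print(brightness(WAVELENGTH / 2))    # ~0.0: fully destructive, dark
print(brightness(WAVELENGTH / 4))    # ~0.5: somewhere in between
```

Because the curve passes smoothly through every value between dark and bright, a tiny change in path difference shows up as a measurable change in brightness, which is exactly what the detector reads out.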
When the wave passes through the detector, stretching and squeezing space-time as it goes, it changes the relative lengths of the arms of the LIGO detectors. This change in length changes the distance traveled by the laser beams going down each arm, which in turn changes the relative position of peaks and valleys in the waves of each beam when they are recombined. The brightness of the recombined beam changes as the arm lengths change, and the passing wave has been measured. The end result is a beautiful plot showing the signal at both detectors, as well as theoretical predictions, all in agreement. The detector in Louisiana and the detector in Washington both saw waves that increased in both amplitude and frequency before suddenly stopping, as was predicted. In the words of David Reitze, the Executive Director of LIGO, during the press conference announcing the finding: “Ladies and gentlemen, we have detected gravitational waves. We did it!” Listening to the Universe The detection of gravitational waves is a triumph of experimental science, and the culmination of decades of work by thousands of people around the globe. It confirms a century-old prediction, and helps us understand a little more about the inner workings of the universe around us. The pursuit of gravitational waves, like many great quests in the history of science and engineering, has certainly been what I think President Kennedy would call a challenge that “serve[s] to organize and measure the best of our energies and skills.” But there’s more to this detection than the aspirational glory of scientific progress. The ability to detect a signal so tiny that its effect is a movement of less than one ten-thousandth of the width of a proton — a ripple in space-time that has traveled across the universe for a billion years — opens an incredible new era in our observation of the world around us. For centuries, humans have looked at distant stars with telescopes of increasing complexity. 
With each new invention, we could see a little farther, or perhaps peer into a whole new wavelength of the electromagnetic spectrum. Eventually, we spotted galaxies beyond our own, and came to realize that even the darkest parts of the sky are full of distant stars if we look hard enough. Each new way of looking showed us something we had not expected to see. Using our telescopes on the Earth and in space, we have seen some of the most beautiful and mysterious structures in our universe. However, it’s important to remember that we are still simply seeing. As physicist Kip Thorne pointed out, “all previous windows through which astronomers have looked are electromagnetic.” These electromagnetic windows — which include the optical spectrum that we can perceive with our eyes — have shown us much about the universe, but they have their limitations. For example, our view can be blocked: dust clouds or other collections of material can obscure what lies behind them. In addition, not all material in the universe radiates in a way that we can see. Gravitational waves are different. They are vibrations in the universe itself, the echoes of unimaginably energetic events. As mentioned before, they travel through all matter, so they cannot be blocked (at least, as far as we understand). They carry with them information about objects that we may not be able to see using our usual observational methods. Electromagnetic radiation may present the sights of the universe, but gravitational waves are its sounds. And, thanks to LIGO, you can hear them for yourself: By detecting gravitational waves, we are for the first time listening to the universe around us. We have had our eyes open, but now we have finally opened our ears as well.
Aquarium of the Pacific - Online Learning Center - Species Print Sheet Conservation Status: Safe for Now (Stellamedusa ventana) Cnidarians • Sea jellies This sea jelly species, which does not have an official common name, was discovered in 1990 by scientists exploring the deep ocean in Monterey Bay, California. Subsequently, photographs taken by a video camera on a remotely operated vehicle (ROV), and jellies collected by the ROV’s arm and studied in a laboratory setting, confirmed that this was a new jelly genus and species. In 2003/2004 it was named Stellamedusa ventana by the scientists who first discovered it. “Stella” refers both to the jelly’s translucent blue-white color and its trailing arms, which, to the scientists, looked like a shooting star. “Medusa” is another word for a jelly’s bell, uniquely shaped in the case of Stellamedusa ventana. The species name comes from the name of the ROV, Ventana. The jelly’s unofficial common name, “bumpy” jelly, refers to the wart-like bumps or projections that cover the oral arms and the skin (exumbrella) of the jelly’s bell. At the Aquarium This rarely seen sea jelly has a scientific name, Stellamedusa ventana, but no official common name. “Bumpy” jelly is an unofficial common name often used to refer to S. ventana because of the bumps that cover the jelly’s bell and arms. The Aquarium of the Pacific’s trustees thought that such a unique, fragile, and translucent jelly deserved a better common name and named it “comet” jelly. The trustees felt this name was in keeping with the “Stella” part of the genus name, given by the jelly’s discoverers, who thought the jelly’s trailing arms made it look like a shooting star. (Stella is the Latin word for star.) Monterey Bay, California and the Gulf of California “Bumpy” jellies are found at depths of 150-500 m (500-1800 ft). This depth range, the mesopelagic zone, is below the level to which sunlight penetrates, but above the depths of very low oxygen levels. 
The “bumpy” jelly is blue-white in color. It has a central bell or medusa with four oral arms hanging from the bottom center of the bell. Unlike many other sea jellies, it does not have tentacles. The exterior of the bell and the arms are covered with wart-like projections that contain clusters of white nematocysts, the stinging cells used to capture prey. These projections have given the jelly its informal common name, “bumpy”. The center of the jelly has a “stomach” with radial canals extending out to a circular canal that runs around the edge of the bell. To 10 cm (3.9 in) in diameter and 20 cm (9.8 in) in total length. How the “bumpy” jelly feeds in the absence of tentacles was determined in laboratory studies of live specimens captured by the robotic arm of an ROV. Pieces of shrimp placed on the bell surface were first captured and held by the stinging nematocysts on the surface of the bell. The bell then moved the prey toward its rim, where the food was transferred to one of the pairs of oral arms. The oral arms then guided the food to grooves in the arms, from which it was transferred to the jelly’s mouth. The jelly seemed to prefer larger pieces of shrimp, often releasing smaller pieces. Scientists believe “bumpy” jellies primarily prey on other sea jellies. Sexes are separate, but little else is known about the reproductive cycle of this species. It is assumed that sperm and eggs are broadcast into the water column, where fertilization takes place externally. This jelly is an agile swimmer that moves gracefully through the water, trailing its oral arms behind and releasing stinging cells, instead of dust or gas, as a comet does. Lacking nematocyst-laden tentacles with which to capture food, the “bumpy” jelly has evolved other ways to capture a meal, using the bell, oral arms, and the small wart-like, nematocyst-laden bumps to capture, hold, and transport prey. 
Rarely seen, populations of the “bumpy” jelly have not been evaluated. To be as certain as possible, scientists waited 13 years from the time of discovery of S. ventana in 1990 until naming it a new genus and species in 2003. They wanted to observe as many S. ventana as possible. During this period, this rare jelly was observed on only seven explorations of Monterey Bay waters, and twice in the Gulf of California, 4,828 kilometers (3,000 miles) away. Even though the coast of California is well studied, with two major marine research institutes and many universities and colleges with marine science programs, little is known about this rarely seen jelly.
National Zoo scientists' studies of birds begin at the Zoo and take them around the world, to help ensure that we know what birds and their habitats need to survive in the modern world. The Smithsonian Migratory Bird Center is dedicated to understanding, conserving, and championing the grand phenomenon of bird migration. Founded in 1991, it is located at the Smithsonian's National Zoological Park in Washington, D.C. Its scientists seek to clarify why migratory bird populations are declining before the situation becomes desperate, and its programs help raise awareness about migratory birds and the need to protect diverse habitats across the Western Hemisphere. Guam's birds have been decimated, and some have gone extinct, thanks to the inadvertent introduction of the brown tree snake to their island home. Efforts are ongoing to reintroduce captive-born rails and to increase the size and success of the captive breeding program for the kingfishers. Sara Hallager is studying kori bustards, large ground-dwelling birds native to sub-Saharan Africa, to improve reproduction in these birds in zoos.
Pronouns - Issues of Gender There are many instances in which you will refer to a single person, but an abstract one—an individual who has not been defined yet, meaning that it could be a he or a she. Writers long used the male third person pronoun in such cases, but as women gained more access to professions and power, relying on he alone became inadequate. (There are quite a number of English language instruction books out there that still use he in all cases, so be careful!) One acceptable option is to use “he or she”/”him or her” and “his or her”: - Example: The position of regional manager will require 50 hours of work per week on average. He or she will also travel widely, and will need to provide his or her own transportation. Some readers find this a little awkward. Instead, you may be able to use combined forms: “s/he,” “(s)he,” “him/her,” “his/her.” These may not be acceptable in all situations, so you’ll need to find out whether it’s acceptable in a given context. Some writers use a gender-neutral plural form (“they,” “them,” “their”). Because these pronouns are primarily associated with plurals, though, readers may not accept them as substitutes for third person singular pronouns. For more information, see the OWL’s other resources on pronouns.
This new research could explain the dinosaur’s small size (2 m) in relation to its giant (10 m) mainland equivalent, Plateosaurus. Like many species trapped on small islands, such as the ‘hobbit’, Homo floresiensis, of Flores and the pygmy elephants of Malta, the Bristol Dinosaur may have been subjected to island dwarfing. Geological mapping indicates that the islands were quite small and, judging by abundant remains of fossil charcoal, were often swept by fires. Thus the pygmy Bristol Dinosaur may have met its death in a wildfire. (Photo by David Whiteside.) Thecodontosaurus is one of the earliest named dinosaurs. Its bones were originally found near what is now Bristol Zoo in 1834 – some time before dinosaurs were recognised as a group. In 1975, the remains of at least 11 other individual dinosaurs were uncovered in a quarry at Tytherington, north of Bristol. Now, a collaboration between two palaeontologists – Professor John Marshall, a University of Southampton expert on fossil pollen based at the National Oceanography Centre, Southampton, and Bristol University’s Dr David Whiteside, an authority on extinct reptiles – has revealed that Thecodontosaurus lived more recently than was previously thought. Dr Whiteside emphasises that 'this is a unique equal collaboration between a palaeontologist specialising in pollen grains, which are microfossils, and a vertebrate palaeontologist working on Triassic reptiles'. He says 'I can't think of any other scientific paper where the two specialisms were combined to produce a complete palaeoenvironmental model which includes the whole community of land animals, showing the time and habitat they lived in and how they died.' 
The research, which involved a microscopic study of marine algae and fossil pollen and is published in the Geological Magazine, shows that rather than inhabiting the arid uplands of the late Triassic Period, the dinosaurs lived just before the Jurassic Period on a series of lushly vegetated islands around Bristol, the outlines of which can still be seen today in the shape of the land. Professor John Marshall said: 'The cave deposits with dinosaurs have been known for over 150 years and are world famous. You would think there would be nothing new to find. But by looking at new deposits with a fresh mind we have been able to radically change the environmental interpretation. The big surprise was discovering that these reptiles did not live on arid uplands but rather on small well-vegetated tropical islands around Bristol about 200 million years ago. It is only the microfossil pollen and algae that can tell us this. The outlines of the islands can still be seen today in the shape of the land.' Professor Marshall and Dr Whiteside further comment: ‘The deposits that contain the dinosaurs and other reptiles are very unusual. The bones are found in fossil caves, formed by Triassic rain and seawater dissolving the 350 million-year-old Carboniferous Limestone. The caves then filled with sediments, including the dinosaur bones, as sea levels rose at the very end of the Triassic Period.’ Thecodontosaurus bones have been discovered on both Cromhall Island, north of Bristol, and Failand Island, part of which is in the city of Bristol and a short distance inland from the present coast. Geological mapping indicates that the islands the dinosaurs lived on were quite small in size. The discovery that the Bristol dinosaur lived on very small islands is very important, as most researchers have believed that it was a primitive member of the prosauropods, which included some very large animals and existed before the huge sauropods such as Diplodocus of the Jurassic.
Dr Whiteside said, 'This changes the context in which we should view Thecodontosaurus. It has many similarities to the giant Plateosaurus that lived at the same time and other researchers have not taken into account the rapid changes that take place when large animals are isolated on islands of decreasing size. We believe that the Bristol dinosaur is probably a dwarfed species that derived from the giant Plateosaurus or a very similar animal.' Article: "The age, fauna and palaeoenvironment of the Late Triassic fissure deposits of Tytherington, South Gloucestershire, UK" by D. I. WHITESIDE and J. E. A. MARSHALL, Geological Magazine volume 145, part 1, 2008.
Bridges to Literacy: Early Routines That Promote Later School Success

Will this baby be “ready” for school? Will she “like” school? Will he become a reader? These questions are increasingly asked by parents, child care providers, early educators, and policy makers at every level from the neighborhood parent group to the White House. Legislatures throughout the nation are creating programs to foster reading, and governors are regularly photographed reading to children in preschools. In 1998, Federal law decreed a standard that children will recognize 10 alphabet letters before exiting the Head Start program at age 5 (Head Start Act). In elementary school, standardized tests evaluate every child’s reading status. This national momentum suggests that we should examine infant, toddler, preschool, and family routines with an eye to emergent literacy. Changes in the understanding of literacy development support this exploration. As recently as 25 years ago, people thought reading began in first grade, when children were “ready” for it. Over time, however, that view has shifted. In the 1980s, a few scholars in New Zealand, Canada, and the U.S. began to study the daily activities of families and classrooms to see which practices provide young children with a foundation for later success in reading. They called these beginnings “emergent literacy” (Schickedanz, 1999; Teale & Sulzby, 1986). About 10 years ago we began to see ads for phonics cards for 2-year-olds. In that climate, the National Association for the Education of Young Children (NAEYC) and the International Reading Association (IRA) issued a joint position statement on developmentally appropriate ways to help young children learn to read and write (NAEYC & IRA, 1998). The statement underscores the many ways that early childhood routines and experiences begin the process of creating readers.
Prompted by the widespread interest in developing initiatives to support reading and school readiness, this article describes foundations of literacy and discusses strategies that early childhood professionals can use to facilitate its development. A number of bridges to literacy can now be built with confidence! Download the PDF to read the full article.
Verb worksheets are practical for individuals who are trying to learn verbs, whether they are students in a traditional classroom, ESL learners, or any other person trying to enhance his or her study of the English language.

Verb and Verb Tense Worksheets

Here are some ideas for creating verb worksheets for classes or other individuals:
- Distribute a list of familiar verbs to the students. Ask the students to determine which verbs are regular and which verbs are irregular.
- Write the base form of some verbs on a worksheet. Have students conjugate them in the past, present, and future tenses.
- Present a handout with sentences that are missing the verb. Have students fill in the proper verb in the proper tense.
- In order to teach two lessons at once, have students circle the noun and underline the verb in sentences. Doing so will also help to teach subject-verb agreement.
- Some students struggle with the fact that singular forms of verbs often take an "s." Therefore, any worksheets that develop skills in the area of subject-verb agreement are important.
- For beginning students, create a "web of words" handout. Draw or create a circle on a computer program, and put a number of different words inside. Ask students to mark which are verbs with an underline, highlighter, etc.
If you are looking for some premade worksheets, you can look at the following list of sample sheets:
- Action verbs: Students are prompted to identify and mark off action verbs in a variety of sentences.
- Tenses: This activity requires participants to both identify the verb and state which tense it is in.
- More action verbs: In addition to finding the action verbs, students must also find the predicate.
- Chart: Students fill out a chart of verb tenses.
- Find the verb: On this worksheet, students have to find the verb. This activity is most suitable for beginning learners.
- Coloring: Included with an answer key to make grading easier for teachers, this worksheet asks students to color in boxes based on what tense of the word is being used.

Using Verb Worksheets

Verb worksheets are useful in the classroom, whether it be for new learners or to refresh the memories of more advanced students. These worksheets and worksheet ideas are also helpful for ESL students. Furthermore, professionals need to brush up on their skills once in a while as well. There is no reason why they cannot use simple worksheets too. Sometimes sticking with the basics is the best way to refresh people's minds, so that complicated and unneeded information does not come into the picture.
Who were your ancestors? Genealogy is a fascinating hobby for many people. Perhaps you heard your grandmother talk about her grandparents. If so, you may know more about your great-great-grandparents than most people. Sometimes family bibles will take records back a century or two - a few names, birth and death dates, and place of birth. That's not much, but you can be pretty sure the people mentioned had arms, walked upright, and breathed air. You would be hard pressed to find out the hair color, eye color, intelligence, height, weight, and personalities of your ancestors only a few generations back. Alex Haley's hit book and TV series Roots traced an African-American family back 10 generations to Africa. This case study asks you to think of your ancestors in some cases more than a hundred million generations ago! How much of your genealogy can you fill in? Can you trace the source of your mitochondria?

A. The table that follows lists 13 anatomical or physiological characteristics of different groups of living organisms that are not characteristics of humans. However, they are characteristics that may have been possessed by distant human ancestors. Examine the list and consider each characteristic as a separate hypothesis about your own distant ancestry.
B. Without worrying about evidence for the moment, fill in your position on each hypothesis - agree, disagree, or uncertain.
C. In the final column, write a brief justification for your position.

When you are done, raise your hand to be put in a group. Note: This will be a temporary group for today and next period.

|Non-human Characteristic (Hypothesis)|Results from initial responses|
|3. Prehensile tail| |
|7. Knuckle walker| |
|8. Egg laying| |
|11. Chitinous exoskeleton| |

A. Compare your individual responses for each hypothesis and fill in a duplicate table for the group to summarize the positions of individuals within the group.
B.
Discuss those hypotheses that lack group consensus or show the greatest amount of uncertainty. See if the group can reach consensus, agree or disagree, on each hypothesis. C. Turn in the group's table at the end of class. A. At the end of class (9/6), identify those hypotheses that lack consensus and distribute them among the group members trying to match hypotheses with individual interests. Using resources available in the Morris Library and on the Internet, find out as much as you can about your hypothesis and be prepared to present a logical argument based on data supporting or refuting it. Based on your discussions in class and further research as necessary, write a ~1-2 page argument based on evidence (provide references) that agrees, disagrees, or confirms uncertainty. Your finding will contribute to class discussion on September 11.
There's More to Sex Education than AIDS Prevention, by Mickey Kavanagh Guide Entry to 98.07.05: Designed for 9th grade classes, this unit will be taught prior to and after AIDS Week. Population explosion is a massive problem. One solution is to act on the personal and individual level to ensure that there are no more "accidental" children - the result of unintended teen pregnancies. A major contributing factor to the high teen pregnancy rate in the U.S. when compared to other countries is the national schizophrenic attitude toward sexuality. By schizophrenic I mean the separation between what adults say about sex and sexuality and what they do. It is my contention that if society focused on raising sexually healthy adolescents, those adolescents would make choices about their sexual activity which would lead to a decrease in both the rates of unintended teen pregnancy and sexually transmitted disease. The unit will raise adolescent awareness of the high incidence of teen pregnancy here compared to other developed countries. Students will analyze possible causes, contrasting national policies and practices toward sex education, access to family planning services, and media coverage. The unit will increase student understanding of humans as sexual beings from birth to death. It will define the components of effective sex education and the characteristics of sexually healthy adolescents.
The Sticks and Stones game is based on the Apache game "Throw Sticks." To play the game, students throw three sticks, each decorated on one side. Students move their pieces around the game board based on the results of the throw, as described below. Allow students to decorate three sticks on one side only; the other side should be blank. (If playing this game as part of a larger unit about Native American culture, you can allow students to decorate the sticks with tribal symbols.) Students will use these sticks to determine how far they move when playing the game. To create the game board, arrange 40 stones in a circle, preferably divided into four groups of 10. (In groups of 10, a side benefit of this game is that it helps to develop student understanding of the place-value system. For instance, if a student is currently on the seventh stone in one group of 10 and rolls a 5, she gets to move to the second stone in the next group of 10. This demonstrates modular arithmetic, because 7 + 5 = 12, which has remainder 2 when divided by 10.) As an alternative, you can use a Monopoly® game board, which consists of 10 squares on each of four sides. The rules of the game are as follows: Pair students together, and let them play the game once, for fun. Then, before playing a second time, have students make a chart of all throws that are possible. During a second game, have them keep track of their throws while playing. How many of each occurred? As an alternative, students can use the demonstration below to generate random throws. After tallying their throws during the second game, have students use sticky notes to build a bar graph. Place a large piece of paper on the wall, or draw a graph on the chalkboard, which shows the possible throws on the horizontal axis and the number of occurrences on the vertical axis. For each time a particular throw occurred during their games, students should place a sticky note on the graph.
For instance, if a student had three throws with zero sides decorated, the student should place three sticky notes in that category. Allow 4-6 students to place sticky notes on the same graph. Compiling the data in this way will give a larger sample size and should yield experimental results that are close to the theoretical probabilities; if only 1-2 students place their data on a graph, the results are more likely to be skewed. As necessary, create a new graph for each group of 4-6 students. (If possible, you can put all of the data from the entire class on one graph, but if there is too much data, the bars will get too tall.) A completed graph may look something like the following: Allow students to compare the relative heights of the bars on the graph. [The bars for one or two sides decorated are much taller, meaning that those results are more likely when the sticks are thrown. It also means that a throw with all three sides the same is less likely.] To facilitate a discussion about what the graph means, have students compare just two categories. You may want to ask the following questions:
- Which is more likely—a throw with one stick decorated or a throw with two sticks decorated? [Neither. They both occur about the same amount.]
- Which is more likely—a throw with three sticks decorated or a throw with no sticks decorated? [Neither. They both occur about the same amount.]
- Which is more likely—a throw with three sticks decorated or a throw with two sticks decorated? [A throw with two sticks decorated is about three times as likely as a throw with all three decorated.]
- Which is more likely—a throw with no sticks decorated or a throw with one stick decorated? [A throw with one stick decorated is about three times as likely as a throw with no sticks decorated.]
Be sure to use mathematical terms during this discussion, such as likely and probability.
For instance, you may want to ask students, "How much more likely is it to throw two decorated sides than to throw all three decorated? Is it twice as likely? More than twice as likely?" [From the graph, it appears to be about three times as likely, because the bar is three times as tall.] Return to the context of the game. Ask students, "Why do you think you get to move more spaces when all three sticks land on the same side?" [Throws with zero or three sides decorated are less likely than throws with one or two sides decorated. Since they are more rare, the reward for those throws is greater. On the other hand, a throw with three sides decorated is just as likely as a throw with no sides decorated, yet the reward for three sides decorated is greater; this is not a mathematical decision, but it probably has to do with human appreciation of art.] The bar graph allows students to use experimental results to discuss probability, but they should also consider the theoretical probability of each result. This can be accomplished by constructing a tree diagram that shows the results after three throws; a D represents a decorated side, and a B represents a blank side: There are eight possible outcomes, as indicated by the number of elements in the third row. The path to each of those elements indicates one possible outcome; for example, the highlighted path shows a first throw of D, a second throw of B, and a third throw of B. An organized list could also be created. The list below shows the eight possible outcomes, which verify the results of the tree diagram. Because three sticks are thrown, and because there are two possible results with each stick (D or B), it makes sense that there would be 2³ = 8 outcomes. To promote conceptual understanding, be sure to compare the items on the list to the outcomes from the tree diagram. For instance, show that the highlighted path is equivalent to DBB in the list.
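The tree-diagram count described above can be checked with a short enumeration. This is an illustrative sketch (code is not part of the original lesson), using D and B for decorated and blank sides as in the text:

```python
# Enumerate the 2^3 = 8 equally likely outcomes of throwing three
# two-sided sticks: D = decorated side up, B = blank side up.
from itertools import product
from collections import Counter

outcomes = ["".join(o) for o in product("DB", repeat=3)]
print(len(outcomes))   # 8

# Tally outcomes by how many decorated sides show.
counts = Counter(o.count("D") for o in outcomes)
for k in sorted(counts):
    print(f"{k} decorated: {counts[k]} of 8 outcomes")
```

The tally comes out 1, 3, 3, 1 for zero, one, two, and three decorated sides respectively, matching the tree diagram and the organized list.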
Based on the list and tree diagram, students should realize that three decorated sides or no decorated sides occur, on average, only once out of every eight throws, whereas one or two decorated sides each occur three times every eight throws. Ask students to compare these theoretical probabilities to the experimental results they obtained when playing the game. Finally, ask students, "On average, how many turns do you think it will take to complete a game?" Students can investigate this question by playing again and recording the number of turns, and then comparing their results with the rest of the class. Alternatively, if students are prepared for the mathematics, they can reason through the solution using basic ideas about expected value. [In eight turns, a player would be expected to get three decorated sides on one throw, two decorated sides on three throws, one decorated side on three throws, and no decorated sides on one throw. Consequently, the player will move 1(10) + 3(3) + 3(1) + 1(5) = 27 stones in eight turns, an average of 27 ÷ 8 = 3.375 stones per turn. At that rate, it will take 40 ÷ 3.375 ≈ 11.85, or about 12, turns for a player to complete the circle. Of course, it will take more if the player is passed over and sent back to the starting point.]

Questions for Students
1. What are the possible outcomes when three sticks are thrown? [There can be 0, 1, 2, or 3 sides decorated.]
2. What is the likelihood of each outcome? [Throws with zero or three sides decorated are less likely than throws with one or two sides decorated. Specifically, P(0) = P(3) = 1/8, and P(1) = P(2) = 3/8.]
3. On average, how many turns will be necessary to complete a game? [As shown above, it will take about 12-13 turns for a player to make it around the board. Since there are two players, a complete game will take approximately 25 turns.]
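The expected-value reasoning in the brackets above can also be written out directly. Note an assumption: the move values (10 stones for three decorated sides, 3 for two, 1 for one, 5 for none) are inferred from the 1(10) + 3(3) + 3(1) + 1(5) calculation, since the full rules table is not reproduced here:

```python
# Expected stones moved per turn, combining the tree-diagram
# probabilities with the move values inferred from the text.
moves = {3: 10, 2: 3, 1: 1, 0: 5}          # decorated sides -> stones moved
probs = {3: 1/8, 2: 3/8, 1: 3/8, 0: 1/8}   # from the 8-outcome enumeration

expected_per_turn = sum(probs[k] * moves[k] for k in moves)
print(expected_per_turn)                    # 27/8 = 3.375 stones per turn

turns_to_finish = 40 / expected_per_turn    # 40 stones around the board
print(round(turns_to_finish, 2))            # ~11.85, i.e. about 12 turns
```

Students comfortable with spreadsheets or code could compare this theoretical 12-turn estimate against the turn counts they record in actual games.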
- How did you ensure that students understood the relationship between the experimental results that they collected and the theoretical probability of each outcome?
- Were students actively engaged in this lesson?
- Did the game provide motivation for the mathematics, or did it provide a distraction from the objectives that were to be learned? What modifications could you make for next time so that the game is not a distraction?
- Did students meet the objectives of the lesson? If not, what should be done in subsequent lessons?
May 17, 2018

Type 1 diabetes is an autoimmune condition caused by the body attacking its own pancreas with antibodies. The damaged pancreas in people affected by type 1 diabetes doesn't make insulin. This type of diabetes gives rise to many medical risks such as diabetic retinopathy, diabetic neuropathy and diabetic nephropathy. There are even more serious risks of heart disease and stroke. As the glucose is not used efficiently and spills into the urine, people with untreated type 1 diabetes develop symptoms such as those mentioned below. As your blood sugar levels rise, fluids move out of your cells, and this makes you thirsty. Then, as you drink more and more fluids, there comes the urge to urinate more than usual, both during the day and at night. When you have diabetes, your body becomes inefficient at using the glucose in the blood due to the lack of insulin. You, in turn, feel hungry as the cells become deprived of an energy source. The glucose does not get used efficiently by the body and hence spills into the urine, causing loss of nutrients from the body and weight loss. Other symptoms can be dry and itchy skin, fatigue, and vomiting accompanied by a feeling of nausea. Unfortunately, the exact cause of the disease is unknown, but it is most likely an autoimmune disorder. It has been found that in most people the body's own immune system, which fights harmful bacteria and viruses, mistakenly destroys the insulin-producing cells in the pancreas. Type 1 diabetes can occur at any age. However, the disease is most often diagnosed in children, adolescents, or young adults. There aren't many known risk factors for type 1 diabetes; however, researchers are on the lookout for possible connections. Some of the possible risk factors include the following: A family history – Those who have a parent or a sibling with type 1 diabetes have an increased risk of developing this condition.
Genetics – Sometimes, the presence of certain genes indicates an increased risk of developing type 1 diabetes. In some cases, clinical genetic testing can help assess this inherited risk. Geography – It has also been found that the incidence of type 1 diabetes tends to increase as you travel away from the equator. People living in Finland and Sardinia have the highest incidence of type 1 diabetes. Viral exposure – If a person is exposed to Epstein-Barr virus, coxsackievirus, mumps virus or cytomegalovirus, the exposure may trigger the autoimmune destruction of the islet cells, or the virus may directly infect the islet cells. Early diet – Some research suggests that vitamin D may be protective against type 1 diabetes, while drinking cow's milk at an early age has been associated with an increased risk. The aim of the treatment of type 1 diabetes is to maintain a normal blood glucose level and delay or prevent complications due to high blood glucose. The treatment generally aims to keep blood sugar levels between 80 and 120 mg/dL (4.4 to 6.7 mmol/L) in the daytime and between 100 and 140 mg/dL (5.6 to 7.8 mmol/L) during the night. Therefore, you must understand that the treatment of type 1 diabetes is basically a lifelong commitment to taking insulin, exercising regularly, maintaining a healthy weight, eating healthy foods and monitoring blood sugar levels. There are various types of insulin, namely fast-acting insulin, regular insulin, intermediate insulin, long-lasting insulin, combinations, and insulin pens. People with type 1 diabetes generally adjust quickly to the time and attention that is needed to monitor blood sugar, treat the disease and maintain a normal lifestyle. As time goes by, the risk of complications is substantial but can be reduced greatly if blood glucose levels are strictly monitored and controlled.
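The paired mg/dL and mmol/L targets quoted above are related by the molar mass of glucose (about 180.16 g/mol, so 1 mmol/L ≈ 18 mg/dL). A quick sketch of the conversion; the helper name is ours, not from the article:

```python
# Convert a blood glucose reading from mg/dL to mmol/L.
# 1 mmol of glucose is ~180.16 mg, so per litre: mg/dL divided by 18.016.
def mg_dl_to_mmol_l(mg_dl):
    return mg_dl / 18.016

# The daytime and night-time target bounds quoted in the text:
for mg in (80, 120, 100, 140):
    print(f"{mg} mg/dL = {mg_dl_to_mmol_l(mg):.1f} mmol/L")
```

Rounded to one decimal place, the results (4.4, 6.7, 5.6 and 7.8 mmol/L) match the figures given in the treatment targets.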
Type 1 diabetes is a lifelong disease and therefore people with type 1 diabetes need regular checkups, careful daily monitoring of blood sugar levels, and insulin treatment throughout their lives. A small number of people with diabetes who require kidney transplants because of severe kidney damage from the disease can become exceptions to this rule. That's because a pancreas transplant occasionally can be performed at the same time as a kidney transplant. Since the new pancreas can make insulin, this can cure diabetes. Because organ transplantation requires people to take medicines that suppress the immune system for the rest of their lives, a pancreas transplant is not a treatment that is recommended by itself (it is only recommended for people who must have another organ transplanted and will already require those medications long-term). You must call your health care professional if you experience a sudden increase in thirst and urination, with or without vomiting, nausea, fatigue or confusion. Unexplained weight loss should always be reported to a physician. If you or your child have type 1 diabetes, see your doctor regularly (as your doctor advises) to make sure that you are keeping good control of your blood sugar, and to be checked for early signs of complications such as heart disease, eye problems and skin infections. Your doctor most likely will suggest that you also visit other specialists regularly, such as a podiatrist to check your feet and an ophthalmologist to check your eyes for signs of diabetes complications. As with its causes, there is no known way to prevent type 1 diabetes, although researchers are working on preventing the disease or further destruction of the islet cells in people who are newly diagnosed.
Makú

Makú, any of several South American Indian societies who traditionally hunted, gathered wild plant foods, and fished in the basins of the Río Negro and the Vaupés River in Colombia. The Makú comprised small bands of forest nomads. The present-day Makú are remnants of an aboriginal population who were killed or assimilated by expanding Arawak, Carib, and Tucano tribes. The Makú language is not related to others, and the several groups speak quite different dialects. It is estimated that they once numbered about 2,000, but they are now on the verge of extinction. Little is known of Makú culture. As nomadic hunters, gatherers, and fishermen, they use bows and arrows, blowguns, stone axes, and clubs. Some have recently adopted farming and live in sedentary villages. In the Brazilian Guiana Highlands, the Makú of the Uraricoera River basin speak an isolated language. They obtain European products through trade with other Indians.
Most of us have heard teachers talk about "grading on a curve." The curve they use is a simple version of a bell curve. A bell curve is a standard charting procedure for defining general trends and statistical averages. It is based on the concept of standard deviations. A bell curve visualises the apparent randomness in a data set. The result is a picture of data distribution that organises items into an overall summary of aberration and normalcy. Microsoft Excel can create bell curves based on data in a spreadsheet. The program has built-in statistical functions to calculate the parameters of the bell curve. You can use a variety of methods to construct a bell curve; after you learn the simpler techniques, you may investigate more complex strategies. Type the word "Mean" into cell E1 and "Standard Deviation" into cell G1. Type the desired mean and standard deviation for your bell curve into cells F1 and H1. The mean represents the average number from the entire data set. In a bell curve, this coincides with the median (the middle value) and the mode (the number which occurs most often). The standard deviation is a statistical property based on likelihood of occurrence. Values within one standard deviation of the mean account for about 68 per cent of all the data in a collection. By the third deviation, almost all the data is included. For example, a mean of 5 with a deviation of 2 means that 68 per cent of all the data will fall between the numbers 3 and 7, which are 2 removed from the mean of 5. Type the number "-4" into cell A2. Select the cell after entering the data by clicking on it once. The desired numbers are arbitrary so long as the subsequent formulas are entered accurately for Excel to generate normally distributed data appropriate for the desired bell curve. Click the "Edit" menu and select the "Fill" sub-menu. Choose the "Series" command from the "Fill" sub-menu. A pop-up window will appear.
Select the "Columns" option in the "Series in" section of the "Series" pop-up window. Select the "Linear" option in the "Type" section, and type "0.25" into the "Step value" field. Type "4" into the "Stop value" field and press the "OK" button. The "Step value" is customisable. Enter a smaller number to generate a curve with greater detail and more points, such as "0.1". A higher number will show fewer data points. Type the formula into cell B2. Type the formula into cell C2. These functions generate the complex distribution of data necessary to form a true statistical bell curve. Select cells B2 and C2 by clicking once on B2 and dragging the mouse to cell C2. Release the mouse. Copy the formulas down through the entire data range. Hover the mouse over the lower-right corner of cell C2. The cursor will change to a black plus sign. Click and drag the plus sign down to the last row which contains data in column A. Select columns B and C by clicking on cell B2 and dragging down to the last row that contains data, and over one column to include C. Click the "Chart" button at the top of the Excel program window. A pop-up window will appear. Select the "XY (Scatter)" chat type and press the "Finish" button. The bell curve is created. - 20 of the funniest online reviews ever - 14 Biggest lies people tell in online dating sites - Hilarious things Google thinks you're trying to search for
What Works? Research into Practice. MoE Research Monograph #41: Morphology Works

Morphology describes how words are composed of meaningful parts. It is fundamentally related to semantics, but it also provides clues about how words should be written and pronounced.
1. Both the quantity and quality of word knowledge are very important.
2. Morphological awareness predicts reading development.
3. Teaching morphology increases vocabulary and reading achievement. (Kirby & Bowers, 2012)

Improving students’ vocabulary through morphological awareness

Vocabulary knowledge and morphological awareness are intertwined. Being able to break words apart to find meaning is an important skill as students come across new words in the content areas (Green, 2015). In 2000, the National Reading Panel identified vocabulary instruction as one of the five essential components of reading instruction, and a large body of research indicates the critical role vocabulary knowledge plays in reading comprehension (Manyak, 2014). Vocabulary knowledge is critical to the long-term literacy development of all students, and high-quality vocabulary instruction should be a priority for teachers across all grades (Graves et al., 2014).
- Students from low-income and non-English-speaking families face a large deficit in English vocabulary knowledge upon entrance to and throughout their school years.
- The continuing deficit in vocabulary knowledge experienced by many students represents a major obstacle to academic achievement in vital areas such as reading comprehension (Manyak, 2014).
Approximately 70% of English words contain Greek or Latin prefixes, suffixes, or roots. By teaching students how to tap into this deep-rooted system of meaning that underlies most English words, we help them generate a more extensive and deeply grounded vocabulary (Flannigan et al., 2012).
Academic texts contain up to 200,000 different words, and the majority of words in academic texts are morphologically complex, which means that they are made of multiple units of meaning. These words “convey abstract, technical, and nuanced ideas and phenomena that are not typically examined in settings that are characterized by social and/or casual conversation.” This makes them more formal and therefore less well known (Goodwin & Perkins, 2015). As students progress through the school system, they are exposed to increasingly complex levels of content; therefore, they need more precise tools (i.e., academic vocabulary) and more knowledge of how those words are used within discipline-specific registers (i.e., academic language in content-specific texts). Research in content area vocabulary has demonstrated the effectiveness of teaching Greek and Latin word roots, especially for struggling readers (Padak, 2008). Morphologically complex words can be divided into three major categories:
- Compounds – words composed of two or more words (dragonfly)
- Inflections – words with suffixed morphemes that denote tense (walked), number (boys), and adjectival comparisons (taller, tallest); the suffix does not change the core meaning
- Derivations – words formed with roots, prefixes, and suffixes
Derivational morphological awareness is essential to word solving complex words. Green (2015) states:
- While all three categories need to be taught, derivational morphological awareness (the ability to use the understanding of word formation to gain meaning through the knowledge of roots and affixes) requires the most focus and attention, as it is the most useful for solving word meaning and identifying grammatical function.
- Teachers should focus on high utility words that have a large lexical family* with cross-curricular applications, since there are more opportunities to see the base…if the suffix is unfamiliar, root/base familiarity can assist in determining the meaning.
- Teachers need to give students multiple opportunities to connect with roots & affixes on a deeper level through interactive and hands-on activities rather than simply having them fill in worksheets.

Which roots and affixes should be taught?

Teachers, especially those teaching multiple subjects, sometimes feel overwhelmed with teaching content and wonder how they can possibly squeeze in vocabulary instruction on top of everything else they are responsible for. Spending just 5 to 10 minutes a day focusing on high-utility root words that have large lexical families, such as equi-, trans-, mono-, etc. (lists below), and morpheme-combining principles can help students quickly learn significantly more words than can be taught with traditional word lists. This not only helps students to rapidly expand their cross-curricular vocabulary, but also helps them to word-solve unfamiliar complex words and improves spelling and comprehension. Goodwin & Perkins state that 60% of words can be figured out using knowledge of the units of meaning, and that 12 Latin roots and two Greek roots can be combined with prefixes to make up 100,000 words.

- Select informational texts that contain challenging vocabulary.
- Identify complex words in the text.
- Identify the subset of these words that students need to understand the text or that represent important concepts in the content area represented by the text.
- Identify those words that students can infer the meanings of using their contextual or morphological analysis skills.
- Decide which of the words require in-depth instruction and which can be taught with brief explanations.
- Edit the lists for any given text so there is a manageable number to teach, no more than 12 and preferably somewhat fewer.
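The leverage of morpheme-combining comes down to simple arithmetic: a handful of roots and affixes multiply into a large number of candidate word forms. A minimal sketch in Python, using small, purely illustrative samples of morphemes (the lists below are not a curriculum):

```python
from itertools import product

# Illustrative (not exhaustive) samples of high-utility morphemes
prefixes = ["trans", "equi", "mono", "bi", "pre", "re"]
roots = ["port", "dict", "spect", "ject", "duct", "scrib"]
suffixes = ["", "ion", "or", "able"]

# Every prefix+root+suffix combination is a *candidate* word form.
# Only some are real English words, but the combinatorics show why
# a few taught morphemes unlock so much vocabulary.
candidates = ["".join(parts) for parts in product(prefixes, roots, suffixes)]

print(len(candidates))  # 6 * 6 * 4 = 144 candidate forms from only 16 morphemes
```

With just 16 morphemes the space already holds 144 candidate forms (including real words such as "transport"), which is the multiplicative effect behind the "12 Latin roots + 2 Greek roots" claim above.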
Words that have a large lexical family* download list: HIGH UTILITY ROOTS (from Padak et al., 2008). The Cognatarium is a website that divides all of the listed English words into their constituent parts, or morphemes. The Cognātarium contains over 2,600 morphemes, listed alphabetically. You can type in a root word and it will generate a list of words with that root. ROOT WORDS & AFFIXES: LISTS
The American Academy of Audiology is dedicated to increasing public awareness of audiology and the importance of hearing protection. With October right around the corner, what better time than now to provide a little peek into how exactly our ears work. Check out this video, posted by Schooltube: As you can see, our ability to hear relies heavily on a very precisely functioning, fine-tuned system. But that fine-tuned system is also very delicate and susceptible to damage. Hearing loss is the third most common health problem in the US, and more than half of Americans with hearing loss are under the age of 65. Exposure to excessively loud noise is one of the most common causes of hearing loss regardless of age. And recent studies have demonstrated that the incidence of hearing loss from noise exposure has more than doubled among children and young adults in the past thirty years alone. So what could be causing such a significant increase in hearing loss among our youth? Many researchers point to increased use of personal listening devices at dangerously high volumes. Prolonged exposure to any noise at 85 decibels (that of busy city traffic heard from inside a vehicle) or greater has the potential to cause permanent noise-induced hearing loss. Some mp3 players have a maximum volume capacity as great as 115 decibels, which is nearly as loud as a jet engine on takeoff. Check out the YouTube video below that illustrates how excessive noise, such as mp3 players at high volume levels, can cause hearing loss: While the hearing damage caused by noise exposure is cumulative and permanent, the good news is that it is also totally preventable. So protect your hearing! Keep the volume turned down, especially on personal listening devices. Distance yourself from noise sources whenever possible. And wear hearing protection when exposed to loud noises, especially if you'll be exposed for more than a few minutes.
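Because the decibel scale is logarithmic, the gap between 85 dB and 115 dB is far larger than the numbers suggest. A quick sketch using the standard acoustics relation (not a formula from the article itself):

```python
def intensity_ratio(db_high, db_low):
    """Ratio of sound intensities corresponding to a decibel difference.
    Every 10 dB increase means a 10x increase in sound intensity."""
    return 10 ** ((db_high - db_low) / 10)

# An mp3 player at maximum volume (115 dB) delivers 1000 times the
# sound intensity of busy city traffic (85 dB).
print(intensity_ratio(115, 85))  # 1000.0
```

So "30 dB louder" is not a 35% increase; it is a thousandfold increase in the acoustic energy reaching the ear.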
Contact an audiologist for more information on hearing, noise-induced hearing loss, and ways you can protect your hearing. Check out the links below for more information from several different resources:
Voltage Regulator Module (VRM)

Definition - What does Voltage Regulator Module (VRM) mean?

A voltage regulator module is a buck converter used by low-voltage devices such as microprocessors to step a +5V or +12V supply down to the voltage the device requires. In short, microchips with different voltage requirements can be mounted on the same motherboard using a voltage regulator module. A voltage regulator module is also known as a processor power module (PPM).

Techopedia explains Voltage Regulator Module (VRM)

A voltage regulator module is essentially an integrated circuit (IC) mounted on a motherboard that ensures each component gets its required voltage. It detects and accommodates the voltage requirements in the circuit, which makes it an essential part of a CPU motherboard. Modern CPUs require lower core voltages, typically around 1.5V. The exact voltage needed is communicated to the VRM by the processor via voltage identification (VID). The VRM initially supplies a standard voltage to the device, which replies with a specific VID code. After reading the VID, the VRM knows the voltage level to supply and regulates its output accordingly.
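The VID handshake described above can be pictured as a simple lookup: the processor sends a bit pattern, and the VRM maps it to a target core voltage. The table below is purely illustrative; real VID tables are defined per processor family by the vendor and contain many more entries:

```python
# Hypothetical VID-to-voltage table (illustrative only; actual codes
# and voltages are specified by the processor vendor).
VID_TABLE = {
    0b0000: 1.50,
    0b0001: 1.45,
    0b0010: 1.40,
    0b0011: 1.35,
}

def vrm_target_voltage(vid: int) -> float:
    """Return the core voltage the VRM should regulate to for a given VID code."""
    try:
        return VID_TABLE[vid]
    except KeyError:
        raise ValueError(f"unsupported VID code: {vid:#06b}")

print(vrm_target_voltage(0b0000))  # 1.5
```

The point of the sketch is the direction of the protocol: the processor declares its requirement, and the regulator adapts, rather than the board being hard-wired for one voltage.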
This page is intended for college, high school, or middle school students. For younger students, a simpler explanation of the information on this page is available on a separate page.

As an aircraft moves through the air, the air molecules near the aircraft are disturbed and move around the aircraft. Exactly how the air reacts to the aircraft depends upon the ratio of the speed of the aircraft to the speed of sound through the air. Because of the importance of this speed ratio, aerodynamicists have designated it with a special parameter called the Mach number, in honor of Ernst Mach, a late 19th century physicist who studied gas dynamics. For aircraft speeds which are greater than the speed of sound, the aircraft is said to be supersonic. Typical speeds for supersonic aircraft are greater than 750 mph but less than 1500 mph, and the Mach number M is greater than one, 1 < M < 3. In supersonic flight, we encounter shock waves, and the local air density varies because of compressibility effects. The first powered aircraft to explore this regime was the Bell X-1, in 1947. It and subsequent experimental aircraft proved that humans could fly supersonically. The aerodynamics of these early aircraft is used on modern supersonic fighter aircraft. There have been several efforts to develop cost-effective supersonic airliners. The Russian TU-144 and the Anglo-French Concorde went into service in the early 1970's but were financial failures. Because of the high drag associated with supersonic flight, fighter aircraft use afterburning propulsion systems. On the slide we show an F-14, which is powered by two afterburning turbofan engines. The wings of supersonic fighters are swept in planform to reduce drag. The F-14 is unique because the amount of sweep can be varied by the pilot; low sweep for good low speed performance, high sweep for supersonic flight. For Mach numbers less than 2.5, the frictional heating of the airframe by the air is low enough that lightweight aluminum is used for the structure.

- Beginner's Guide Home Page
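The Mach number is simply the ratio of flight speed to the local speed of sound, which in turn depends on air temperature. A minimal sketch using the standard ideal-gas relation; the sea-level temperature of 288.15 K is an assumption, not a value from the page:

```python
import math

GAMMA = 1.4    # ratio of specific heats for air
R_AIR = 287.0  # specific gas constant for air, J/(kg*K)

def speed_of_sound(temp_kelvin):
    """Speed of sound in air (m/s), from the ideal-gas relation a = sqrt(gamma*R*T)."""
    return math.sqrt(GAMMA * R_AIR * temp_kelvin)

def mach_number(speed_ms, temp_kelvin):
    """Mach number M = flight speed / local speed of sound."""
    return speed_ms / speed_of_sound(temp_kelvin)

# At sea level (288.15 K) the speed of sound is about 340 m/s (~761 mph),
# so an aircraft flying at 600 m/s (~1342 mph) is well supersonic.
print(round(mach_number(600.0, 288.15), 2))  # 1.76
```

Because the speed of sound falls with temperature, and temperature falls with altitude, the same true airspeed corresponds to a higher Mach number at cruise altitude than at sea level.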
We have discussed various methods to measure the distance to a star, classified into two groups: parallactic and spectrographic. Current detailed knowledge of these distances is the result of a combination of various techniques and a development over centuries. Measurements outside the atmosphere in space, both parallactic and spectrographic observations, have greatly enhanced this knowledge, in terms of quantity as well as quality. This development will continue on the basis of future space missions.

But there is more. There are many more techniques employed in astronomy to find distance, especially for objects much further away than we have discussed. And, to put the ultimate question: how do astronomers know the size of the entire visible Universe? The graph shows the whole raft of techniques that are used in astronomy today, and here we have discussed only part of the bottom half, because first we need to discuss several other topics, such as how astronomers measure velocity, stellar spectra, and stellar evolution.

What is called the Cosmic Distance Ladder is the succession of methods by which astronomers determine the distances to celestial objects. A true direct distance measurement (e.g. with parallax) to an astronomical object is only possible on a relatively small scale (in astronomical terms). The ladder analogy arises because no one technique can measure distances at all ranges encountered in astronomy. Instead, one method can be used to measure nearby distances, a second can be used to measure nearby to intermediate distances, and so on. Each rung of the ladder provides information that can be used to determine the distances at the next higher rung. Because the more distant steps of the cosmic distance ladder depend upon the nearer ones, the more distant steps include the effects of errors in the nearer steps, both systematic and random.
These propagating errors mean that distances in astronomy are generally quite imprecise, and that the precision is necessarily poorer for more distant objects. Even worse, the overall distance scale used in astronomy is prone to systematic effects in any of these individual measurement techniques, and these affect our knowledge of the scale of the entire Universe. Much discussion is still going on among astronomers today. Read more about measuring distances in astronomy and these problems in our module "Hubble's Law".
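The compounding of errors up the ladder can be sketched numerically. Assuming each rung is calibrated against the previous one and that the fractional errors of the rungs are independent, they combine in quadrature; the rung error values below are made up purely for illustration:

```python
import math

def ladder_uncertainty(fractional_errors):
    """Combined fractional distance uncertainty after climbing several rungs,
    assuming independent errors that add in quadrature."""
    return math.sqrt(sum(e ** 2 for e in fractional_errors))

# Hypothetical rung errors: 2% (parallax), 5% (a standard-candle rung),
# 10% (a far rung calibrated against the previous two).
rungs = [0.02, 0.05, 0.10]
print(round(ladder_uncertainty(rungs), 3))  # 0.114, i.e. ~11% at the top rung
```

Note that this treats only the random part; a systematic error in a lower rung shifts every distance above it in the same direction, which is exactly why recalibrations of nearby rungs ripple through the whole distance scale.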
Properties of Real Numbers
Students are presented with several sets of numbers and listen to a lecture review of the associative and commutative properties along with number identities and inverses. They also explore a math tutorial online.

Exploring Properties - What is the Stock Market?
Preteens create a foldable in order to review the properties of operations. They apply their knowledge to write equivalent equations using stock market scenarios. A neat homework page is provided that has four expressions to be... (5th - 7th Math, CCSS: Designed)

Expand Linear Expressions Using the Distributive Property
The easiest way to show algebra learners how to expand linear expressions using the distributive property is with an area model. This is the second resource in a series that applies the properties of operations as strategies to develop... (5 mins, 6th - 8th Math, CCSS: Designed)

Miss Integer Finds Her Properties in Order
Access prior knowledge to practice concepts like order of operations and exponents. Your class can play this game as a daily review or as a warm-up activity when needed. They work in groups of four to complete and correct review problems. (4th - 6th Math, CCSS: Designed)
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. 2010 July 3 Explanation: A mere 50 light-years away, young star Beta Pictoris became one of the most important stars in the sky in the early 1980s. Satellite and ground-based telescopic observations revealed the presence of a surrounding outer, dusty, debris disk and an inner clear zone about the size of our solar system -- strong evidence for the formation of planets. Infrared observations from European Southern Observatory telescopes subsequently detected a source in the clear zone, now confirmed as a giant planet orbiting Beta Pic. The confirmation comes as the planet is detected at two different positions in its orbit. Designated Beta Pictoris b, the giant planet must have formed rapidly as Beta Pic itself is only 8 to 20 million years old. With an orbital period estimated between 17 and 44 years, Beta Pictoris b could lie near the orbit of Saturn if found in our solar system, making it the closest planet to its parent star directly imaged ... so far. Authors & editors: Jerry Bonnell (UMCP) NASA Official: Phillip Newman Specific rights apply. A service of: ASD at NASA / GSFC & Michigan Tech. U.
We've all heard the timeworn speeches eulogizing the 16th president of our great nation for his iconoclastic measures to end slavery and admired his stone beard gracing the face of Mount Rushmore. We've celebrated Martin Luther King's "dream" and allowed it to inspire our own. We've applauded and lauded Rosa Parks's refusal to let her race dictate where and when she could sit on the bus. However, we may not have adequately appreciated the accomplishments of some other civil rights activists who drastically changed race relations in American history. Here are some of them:

Hiram Revels was the first African American congressman, appointed to the U.S. Senate in 1870. As Senator Charles Sumner said, "'all men are created equal' says the great Declaration, and now a great act attests this verity [sic] and [makes] the Declaration a reality." Before his political career, he carried out religious work and educated free and enslaved African Americans. In his words, he "preach[ed] the Gospel to [slaves,] and improve their moral and spiritual condition [so] even slaveholders were tolerant of [him]." His work made him popular with his fellow legislators, and his moderate and eloquent speech endeared him to both black and white voters. His position in the Senate as a self-proclaimed "representative of the State, irrespective of color" was a crippling blow to the Color Line, paving the way for equal racial representation in the American government.

Branch Rickey agonized his entire life about racial inequality in America. Unlike other sympathizers, though, he did something about it. Acknowledging that human nature prevents us from publicly admitting that we and our ancestors were wrong for decades of oppression and injustice, Rickey looked to change America from the inside out. So, he brought America Jackie Robinson.
His endorsement of Robinson forced the other Brooklyn Dodgers players to become comfortable with Robinson or face being dropped from the roster. Proximity and exposure breed acceptance and affection; it's a psychological fact. Rickey's refusal to drop Robinson, regardless of protests from other baseball managers and his own players, allowed America to understand that black people are human beings, too. His unerring support of Jackie Robinson revolutionized America just as much as, if not more than, Robinson himself did.

Whitney Young spent his entire lifetime struggling to create social equality for African Americans. He redefined the role of a social worker and labored to further civil rights for all people. He realized, too, that America couldn't be changed overnight, and so he fought smaller battles for civil rights that would eventually culminate in his position as executive director of the National Urban League. By the age of 33, Young had successfully desegregated the Atlanta public library. Six years later, Young became executive director of the National Urban League and revolutionized its inner workings to allow the league to have stronger social influence and therefore greater ability for change. In 1968, Young's fervent advocacy of the Marshall Plan, a ten-point proposal to help bridge the social and economic gap between the races, became a major influence on President Johnson's "War on Poverty." For this, Young was awarded the Medal of Freedom, the highest possible civilian honor.

Thurgood Marshall was a revolutionary lawyer, civil rights activist, and judge, but his most notable accomplishment was his title as the first ever African-American justice on the United States Supreme Court. His career was marked by major civil rights advancements, such as his victories in Sweatt v. Painter and McLaurin v. Oklahoma State Regents, which forced integration in schools, and Chambers v.
Florida, which gave African Americans the same protection from illegal mental or physical torture as any white American. Marshall was the key attorney in the renowned Brown v. Board of Education, which essentially eradicated the entire legal basis for segregation in America. For six years, Marshall served as a circuit judge, and for 24 years, he served on the Supreme Court, always working to further racial equality.

The Greensboro Four – Joseph McNeil, Jibreel Khazan, Franklin McCain, and David Richmond – entered a Woolworth's store in North Carolina, ordered food, and waited patiently at the whites-only lunch counter, only to be denied service. The police arrived but couldn't arrest the four, due to a lack of provocation. The sit-in quickly gained traction, with dozens of students sitting and protesting alongside the Greensboro Four. Within five days, 300 students were protesting at Woolworth's, and the movement had spread to other cities. Just one day later, 1,000 protesters sat in at Woolworth's. Though it still took Woolworth's almost five months more to integrate its lunch counter, the movement had already taken root in many other cities across the nation to keep the civil rights movement strong.
A genre is a specific type of literature that fits the tropes and norms of certain specifications. Horror, fantasy, action, drama: all of these genres appear in literature, television, film, even poetry. This being the case, teaching children what genres are and how they affect literary works across all kinds of media can be instrumental in the educational process. Teaching 3rd-graders what genres are might seem difficult, but it can be done easily by following a few key guidelines.

Teach the students the difference between comedy and tragedy. These are the most basic genres of literature of all kinds. Every type of literature will loosely fall into one of these two categories. Teach the students that these two basic genres can either make them laugh or cry, respectively. As long as they understand comedies and tragedies, the other genres will be much simpler.

Plan lessons where the students read short stories or excerpts of stories and then label each as a comedy or a tragedy. This will help them differentiate the two while also teaching them how to differentiate genres in general.

Teach the students the basic sub-genres: fantasy (science fiction), adventure, historical, horror, western, and suspense. Though these are not all of the possible sub-genres, they are the most common and easiest to teach younger children. Teach each one individually, offering examples of some easy-to-understand excerpts from novels or stories of each individual genre.

Assign the students to write a short story of their own in one of the genres. Alternatively, you could provide a compilation of short stories or excerpts of each genre and have the students label what each genre is.
To see how parallax works, we'll observe and measure the parallax angle of a relatively distant object such as a tree or a flagpole and use that angle to determine the distance to the object.

Materials: ruler, meter stick, Parallax Diagram

Refer to the Parallax Diagram for these steps:

I. Locate a target object, like a pole or tree, whose parallax and distance you want to measure.

4.1 Make an estimate of the distance to the target object in meters, and record your estimate. This will allow you to appreciate how well you can visualize distances that are beyond your reach. It will also help in determining whether your result at the end is reasonable or not.

II. Find an area where you can lay out a baseline about 10 meters long with these qualities: (a) you can sight the target at approximately either end of the baseline, points A and B on the diagram, (b) from near point A you can sight on the target and line up an easily seen object in the far distance, preferably a few miles or more behind the target, point C, and (c) from the other end of the baseline, near point B, you can line up the target with another easily seen object in the far distance, point D.

III. Mark positions A and B and measure the baseline distance (b) between A and B in meters. It should be in the range of 5-10 meters. Record that distance on the diagram (letter b).

4.2 Measure the parallax angle of the target by standing somewhere along the baseline where you can view both points C and D in the distance. The closer you are to the center of the baseline, the better, but any point along the baseline will work. With the help of a partner, measure the angle between points C and D (angle p' on the diagram), as follows:

a. Hold the ruler in front of your eye and measure the distance (x) between C and D.

b. At the same time have a partner with a meter stick measure the distance (y) from your eye to the ruler you are holding.
Have more than one pair of people do this measurement for the most reliable result.

c. Compute the parallax angle: p' = (x/y) × 57.3 degrees*.

4.3 Calculate the distance, d, to the pole. Assume the angle is fairly small, so you can use the following approximation: d = b × (57.3°/p')

4.4 Compare your measured distance to the value you estimated in question 4.1 above. Do you believe your measured result is reasonable?

4.5 Which step of the procedure do you believe had the most potential for error? Without doing a major error analysis, approximately what percent error do you feel there is in your result for the distance to the target?

* 57.3 is the approximate number of degrees in a radian, which is another unit of angle. A more accurate value is found by dividing 180 by pi. A radian is the angle along the circumference of a circle made by a length equal to the radius of the circle.

Investigations on Measuring Distances to Asteroids

There are two investigations (by Rich Lohman) on using the parallax technique to find distance to asteroids. Images for the investigation Distance to Asteroid 1998wt are in the folder "MoreTelescopeImages/AsteroidParallax" available on the HOU/GSS software download page.

The measurements for angles in telescopes are usually not in units of degrees, but arcminutes or even arcseconds. As with the units of time, there are 60 arcminutes (') in a degree and 60 arcseconds (") in an arcminute. The equation in step 4.3 above has the constant 57.3° (the number of degrees in a radian). If your measurements are made in arcseconds instead of degrees, then the constant to use would be 57.3° × 60 arcmin/° × 60 arcsec/arcmin = 206,265". So the equation to use becomes d = (b/p") × 206,265, where d = distance to asteroid, b = baseline, and p" = parallax angle (arcseconds).
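The formulas in steps 4.2-4.3 (and the arcsecond variant) can be checked with a short script. The numbers below are a made-up worked example, not real measurements:

```python
def parallax_angle_deg(x, y):
    """Parallax angle in degrees from ruler separation x and eye-to-ruler
    distance y (same units), using the small-angle approximation."""
    return (x / y) * 57.3

def distance_from_parallax(baseline, angle_deg):
    """Distance to the target from baseline length and parallax angle (degrees)."""
    return baseline * (57.3 / angle_deg)

def distance_arcsec(baseline, parallax_arcsec):
    """Same relation with the angle in arcseconds (57.3 * 3600 = 206,265)."""
    return (baseline / parallax_arcsec) * 206265

# Example: ruler separation x = 0.10 m held y = 0.50 m from the eye,
# observed from a 10 m baseline.
p = parallax_angle_deg(0.10, 0.50)   # 11.46 degrees
d = distance_from_parallax(10.0, p)  # 50.0 m
print(p, d)
```

Notice that the two steps combine to d = b × (y/x): the 57.3 cancels, which is a handy sanity check on your arithmetic in the field.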
The apical meristems remain active during the entire development of the plant so that longitudinal extension through cell division as well as cell expansion can occur (see examples below). In nearly all monocots, cell division in primary growth and the subsequent expansion of cells form the only means for the plant to increase in size. During the life of most dicotyledonous plants, besides primary growth, secondary growth also takes place. This type of growth, also called secondary thickening or lateral growth (lateral = to the side), arises from secondary (newly formed) meristems.

1. From the procambium in the vascular bundles, secondary cambium is formed, which produces secondary phloem and xylem.
2. In some species, cork cambium that makes cork tissue develops from parenchymatic cells in the cortex.

Variations in modes of secondary growth are illustrated in the following web pages with two model species: castor bean (POISONOUS PLANT, I.E. THE BEANS ARE EXTREMELY TOXIC!) and Dutchman's pipe.

The cambium (from the Latin word cambiare = to change) is a layer of generative tissue that consists of small thin-walled cells with the capacity to divide. In dicots, three types of cambia involved in (secondary) thickening growth can be discerned:

- The fascicular cambium, which is present in the vascular bundles. In the stem, the fascicular cambium forms phloem tissue towards the periphery and xylem tissue to the inside. What fascicular and interfascicular cambium look like and which tissues they generate can be seen from the images in the next web page.
- The interfascicular cambium, which has developed from parenchyma cells, is located between the separate vascular bundles. To the examples.
- The cork cambium, which builds layers of cork at the periphery of the bark.
More about cork

Thickening growth: cambium ring versus cambium clusters

Diagram of cross-sections through the stem of young and older dicots.

Closed cambium ring (A > B). Example: castor bean. (A) In the stem of the castor bean, a ring of cambium (indicated in green) is present from the beginning of thickening growth. (B) During maturation this cambium layer gives rise to a ring of vascular tissue consisting of xylem (red) to the inside and phloem (blue) to the outside.

Separate clusters of cambium tissue (C > D). Example: Aristolochia sp. (C) In young stems of Aristolochia, cambium is present in separate clusters in the vascular bundles (fascicular cambium, indicated in green). This fascicular cambium proceeds to the formation of a large amount of secondary tissue (phloem to the outside, indicated in blue, and xylem to the inside, indicated in red). Also, between the vascular bundles, parenchyma cells develop into interfascicular cambium. (D) In older stages, arrays of differentiated parenchyma (orange) can be found between the vascular bundles: periclinal divisions occur which give rise to dilatation tissue that compensates for the increase in circumference of the growing stem.

Fascicular cambium versus cork cambium

(E) Fascicular cambium is characterized by bilateral (two-sided) deposition of vascular tissue, namely, also here, phloem toward the periphery and xylem to the inside. (F) Cork cambium, located beneath the epidermis, is involved in cork formation and moreover is mostly active toward the outside.
1. The art of drawing solid objects on a two-dimensional surface so as to give the right impression of their height, width, depth, and position in relation to each other when viewed from a particular point: [as modifier]: a perspective drawing. See also linear perspective and aerial perspective.
- The Shrine authorities produced elevations and perspective drawings of even the most sacred buildings in order to facilitate rebuilding.
- A pin at the central vanishing point would have been as useful here as it would for perspective drawings set out mathematically.
- The illustrations in Pacioli's work were by Leonardo da Vinci and include some fine perspective drawings of regular solids.

1.1 A picture drawn in perspective, especially one appearing to enlarge or extend the actual space, or to give the effect of distance.
- There is an added design advantage inherent in steps: they have a completely different impact, depending on the viewer's perspective.
- From this distance, painted from this perspective, the waters appear calm, but he knows that the flow has the power to wear away the rocks and the might to shape the landscape.
- Clever use of perspective makes the scene appear much bigger than it actually is, and reinforces the fantasy element of the play by delineating the space between the actors and the audience.

1.2 A view or prospect.
- His landscapes offer a tilting perspective, often a view over rises or down a slope.
- The surrounding Black Sea landscape serves to further intensify the already magnificent visual perspectives.
- He moved around to get a long perspective view of the street.

1.3 Geometry: The relation of two figures in the same plane, such that pairs of corresponding points lie on concurrent lines, and corresponding lines meet in collinear points.
- He then goes on to give theorems which relate to the perspective of plane figures.
2. A particular attitude toward or way of regarding something; a point of view: most guidebook history is written from the editor's perspective.
- The artwork has to be able to point towards new perspectives and formulate new possibilities and new narratives.
- But what does a reading of these two books together do to contribute towards developing an anti-authoritarian perspective?
- It used to be a decent shelter, but from my perspective, the attitude of the management and the board is not what you want at a shelter.

2.1 True understanding of the relative importance of things; a sense of proportion: we must keep a sense of perspective about what he's done.
- It needs a common sense approach and a sense of perspective to the important things in life.
- Let's hope film-makers can acquire a similar sense of perspective before our collective memory is sold off to the highest bidder.
- Alternatively, to reflect on my death prompts a sense of perspective on what is important to do now, how to set my priorities, how to live authentically.

3. An apparent spatial distribution in perceived sound.
- There is now a clearer definition and a back-to-front perspective to the sound.

in (or out of) perspective

Showing the right (or wrong) relationship between visible objects.
- Two sides of the shrine are visible, rendered in perspective as if the building were set in the distance.

Correctly (or incorrectly) regarded in terms of relative importance: these expenses may seem high, but they need to be put into perspective.
- You have to be careful and keep this in perspective, especially in terms of apportioning blame.
- While this was a horrendous event, it is important to keep it in perspective.
- Taken together, this is a fairly revolutionary and intrusive programme, but it is important to view it in perspective.
late Middle English (in the sense 'optics'): from medieval Latin perspectiva (ars) 'science of optics', from perspect- 'looked at closely', from the verb perspicere, from per- 'through' + specere 'to look'.
The intensity and scope of the heat wave is clearly visible in this map of land surface temperature anomalies. Based on data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on the Terra satellite, the map depicts temperatures compared with the average of the same eight-day period of March from 2000-2011. Records aren't only being broken across the country, they're being broken in unusual ways. Chicago, for example, saw record-breaking temperatures above 26.6° Celsius (80° Fahrenheit) every day from March 14 to 18. For context, the National Weather Service noted that Chicago typically averages only one day in the eighties each April. Additionally, only once in 140 years of weather observations has April produced as many 80° Fahrenheit days as this March did. Meanwhile, Climate Central reported that in Rochester, Minn., the overnight low temperature on March 18 was 16.6° Celsius (62° Fahrenheit), a temperature so high it beat the record high of 15.5° Celsius (60° Fahrenheit) for the same date.
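The paired Celsius/Fahrenheit values above follow from the standard conversion formulas; a quick check (standard arithmetic, nothing specific to this article):

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

def c_to_f(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

# 80 F is about 26.7 C and 62 F about 16.7 C; the article truncates
# rather than rounds (26.6, 16.6).
print(round(f_to_c(80), 1), round(f_to_c(62), 1))
```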
Throughout the Gilded Age, America began to espouse a philosophy called the Monroe Doctrine, named for President James Monroe, who first codified the idea. In short, the Monroe Doctrine was based on the premise of hemispheric hegemony, which is the idea that major nations have the right to oversee events in their sphere of influence, while outside nations must keep their distance. Throughout the 1890s, Spain had violated this principle in its treatment of Cuba, which, at the time, was a Spanish territorial possession. At the request of Cuba, the United States began sending aid. When the American ship the USS Maine exploded while at anchor, the United States declared war on Spain. The major thrust of the war was in Cuba itself, where the United States sent the bulk of its troops. During the war, future President Theodore Roosevelt made a name for himself and his personal unit, the Rough Riders. Roosevelt's popularity during the war went a long way in helping him secure the presidency in 1901. The Spanish, outmanned and outgunned, eventually agreed to a ceasefire and, during the treaty phase of negotiations, gave up both Cuba and the Philippines to the United States. The Spanish-American War was the first time that America had demonstrated its modern military prowess and established itself as a world power.
The History and Traditions of Cinco de Mayo There are quite a few misconceptions about the history and traditions associated with the Mexican holiday “Cinco de Mayo”. Contrary to popular belief, the 5th of May is not Mexican Independence Day, nor does it have much of anything to do with Spain. We will delve into the actual history of this holiday and the traditions used to mark its passing. Cinco de Mayo History It’s time to lay down some Cinco de Mayo facts. Many people often ask how long Cinco de Mayo has been celebrated. Cinco de Mayo traditions only date back to the late 19th century – specifically, the holiday is based on events that began in 1861. For the record, Mexico had already been an independent nation for the better part of 50 years. The 19th century for Mexico was full of civil strife and conflict, and it seemed one war came directly after another. Conflicts over religion, monarchy, and debt all followed and fed into one another. Cinco de Mayo comes during the war over debt. The Mexican Reform War (centered on the separation of church and state) left the country destitute, and the national assembly declared a moratorium on all debt repayments. The major European powers were incensed at the idea that they were not being paid back and threatened invasion. Though Mexico was able to broker a deal with most of the European powers, negotiations with France fell apart, and the French army launched a 6,000-man invasion of the country. The French army fought its way inland. It eventually came upon a dug-in, albeit much smaller, Mexican force at Puebla, and a battle ensued. On May 5, 1862, the smaller Mexican forces were able to push the French army back and secure a morale-boosting victory in the war. This event was used to bolster the morale of the resistance and unify large sections of the country. A year later the French returned with 30,000 soldiers and conquered Mexico City. What followed was a three-year monarchy under Emperor Maximilian I.
When the American Civil War ended, the US refocused on its southern border and began offering aid to guerrilla fighters resisting the European puppet government. Under increasing pressure in both the European and American theaters, the French forces withdrew from Mexico. On June 5, 1867, Benito Juarez entered Mexico City and established a new legitimate Mexican government. As you can see, the history behind Cinco de Mayo is much more complicated than most believe. It has come to signify the Mexican ideals of independence and resistance to outside interference, as well as a day to commemorate national unity. To uncover how some interpret the significance of this date, read the gripping story ‘Cinco de Mayo’ by Michael J. Martineck on Geeker. How is Cinco de Mayo Celebrated? Ironically, Cinco de Mayo is more widely celebrated in the US than it is in Mexico. The biggest celebration of all takes place in Los Angeles. The city of Puebla, where the battle took place, does hold quite the little shindig to celebrate the day. They have a massive parade where people arrive in Mexican and French uniforms, and they sell traditional foods and play national music. Throughout Mexico, the US, and Canada, celebrations are held utilizing the colors of the Mexican flag, Mexican dancing, and Mexican foods. The day has become a symbol of solidarity for Mexicans throughout North America who no longer live in Mexico. Margaritas, salsa, Corona, and sombreros are probably the most cliché of these items – and you’re sure to find no shortage of them that day. Don’t limit yourself, however. The Mexican culture is rich in tradition, and there are tons of foods and activities to try out and explore. The music is probably exactly what you expect, however. Some other Mexican foods to try: Huachinango a la veracruzana is a red snapper dish, a delicious combination of African, European, and indigenous influences. Tortas ahogadas is another favorite. Picture it like a sandwich drowned in chili sauce.
If you can get your hands on some, tejuino is an alcoholic drink made from fermented corn. Morisquesta is a sausage and rice dish and my personal favorite. Birria is a spicy stew with a heavy Spanish influence. Mix it up with some chicken, corn tortillas, and lime and you have yourself a unique and tasty meal. Do a quick search and see what interesting dishes you can find. The flavors and spices in Mexican food are so varied, there is sure to be something you will like.
Histograms with equal-width bins are easy to construct from samples. It is enough to scan the given sample set and, for each value in it, determine which bin that value falls into; each bin requires only one counter. Let f be a column of a table with N rows, and let n be the number of samples from which the equal-width histogram of k bins for this column is constructed. Suppose that after scanning all sample rows the counters created for the histogram bins contain the numbers c[1],..,c[k]. Then m[i] = c[i]/n * 100 is the percentage of the rows whose values of f are expected to fall in the i-th bin. It means that if the sample rows have been chosen randomly, the expected number of rows with values of f in this bin can be approximated by m[i]/100 * N. To collect such statistics it is suggested to use the following variant of the ANALYZE TABLE command:
- 'WITH n ROWS' provides an estimate for the number of rows in the table, for the case when this estimate cannot be obtained from statistical data.
- 'SAMPLING p PERCENTS' provides the percentage of sample rows used to collect statistics. If this is omitted, the number is taken from the system variable samples_ratio.
- 'IN RANGE r' sets the range of equal-width bins of the histogram built for the column col1. If this is omitted and the min and max values for the column can be read from statistical data, then the histogram is built for the range [min(col1), max(col1)]. Otherwise the range [MIN_type(col1), MAX_type(col1)] is considered. Values beyond the given range, if any, are also taken into account in two additional bins.
- 'WITH k INTERVALS' says how many bins are included in the histogram. If it is omitted, this value is taken from the system variable histogram_size.
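The estimation described above is easy to sketch in code. Below is a minimal Python illustration (not the server's actual implementation) of building equal-width bin counters from a sample and turning them into the percentages m[i] and the row-count estimates m[i]/100 * N. The column values, the sample size n, and the bin range are all made up for the example:

```python
import random

def equal_width_histogram(samples, k, lo, hi):
    """Count sample values into k equal-width bins over [lo, hi).
    Values outside the range go into two additional bins, as the
    'IN RANGE' clause described above specifies."""
    width = (hi - lo) / k
    counts = [0] * k
    below = above = 0  # the two extra out-of-range bins
    for v in samples:
        if v < lo:
            below += 1
        elif v >= hi:
            above += 1
        else:
            idx = int((v - lo) / width)
            counts[min(idx, k - 1)] += 1  # guard against rounding at the edge
    return counts, below, above

# A made-up column f of a table with N rows, sampled with n = 1000 rows.
N = 100_000
random.seed(1)
column = [random.gauss(50, 10) for _ in range(N)]
sample = random.sample(column, 1000)

counts, below, above = equal_width_histogram(sample, k=10, lo=20, hi=80)
n = len(sample)
m = [c / n * 100 for c in counts]          # m[i]: percentage of rows per bin
estimates = [p / 100 * N for p in m]       # m[i]/100 * N: estimated row counts
```

Each estimate is only as good as the randomness of the sample, which is exactly why the text conditions the approximation on the sample rows having been chosen randomly.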
Choose a class you teach or one you hope to teach in the future and describe your classroom management plan. Because a classroom management plan refers to the things that a teacher does to organize students’ time, space, and materials so that instruction and student learning can effectively take place, your plan should discuss these factors and demonstrate your ability to create a climate conducive to learning. Your plan should also establish high expectations for student behavior and learning. At a minimum, your plan should do the following: - Explain the goal of the plan, including the grade level(s) it addresses. - Generate a set of rules and expectations. - Examine and explain the role students have in the classroom. - Examine and explain the role you play in the classroom. - Create an implementation plan (how this plan will be implemented in the classroom). - Construct a visual representation of the physical arrangement of the classroom. Your plan should be between three and five pages, in addition to a title and reference page, and be divided into sections clearly delineating what is being addressed. The items outlined above should serve as a starting point for your sections. Other sections of your choosing may be added. Given that we are teaching and learning in an increasingly connected digital society, you will also need to review the ISTE Standards for Teachers and identify at least two aspects of the standards you will address in creating an effective learning environment. Please use APA format to cite and reference at least three scholarly sources, including the course textbook, in this assignment. Preparation for Week Three Assignment: Remember, your Week Three assignment requires you to visit and/or interview two teachers, principals, or teaching support staff members. By now you should have either conducted your observations and interviews or have plans to do so.
Osteoporosis used to mean fractures caused by thin bones. About a decade ago, however, the definition changed. Now the term osteoporosis encompasses not only those who have had fractures caused by thin bones but also those who have thin bones and are at increased risk for fracture. This change made the risk factor for the disease (thin bones) equivalent to the disease (osteoporosis) itself. It also meant that many more women suddenly had osteoporosis. The catalyst for the new definition was the development of the dual-energy X-ray absorptiometry (DEXA) scan, a machine that measures bone density at the hip and spine, where fractures are most likely to occur. Before the machine was put into widespread use, an international group of medical experts met to determine a consistent method for using DEXA scan readings. They decided to use a T score, which relies on a statistical term called standard deviation. Standard deviation measures how far something is from the norm; in this case, the “norm” selected was the bone density of a healthy, average woman in her mid-20s. They also decided that a T score of –2.5 (that is, 2.5 standard deviations below the norm) would be used as the definition of osteoporosis. But not all women with a –2.5 on a DEXA scan have the same risk for breaking a bone. “Bone density is only one part of a complex part of risk factors that indicate whether someone will have fractures in the next few years, and I put it last on the list,” says bone health expert Bruce Ettinger, MD, a senior investigator at Kaiser Permanente in Oakland, California.
The first two factors that need to be taken into account, he says, are a woman’s age (an older woman is at greater risk of experiencing a fracture) and whether she has previously fractured her wrist, upper arm, or spine (fractures in these areas quadruple the risk of a future fracture). Next, it’s important to assess whether the woman is thin, smokes, or has a family history of osteoporosis, all of which increase risk. Then, the DEXA scan measurement comes into play. “After looking at all of these things,” Ettinger says, “adding bone density modifies risk a little bit, but it doesn’t change it all that much.” In essence, says Ettinger, most people think about this backwards: They start with the DEXA scan. But the other factors are equally, if not more, critical in assessing fracture risk and making treatment decisions.
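For readers unfamiliar with the T score mentioned above, it is simply a distance from the young-adult reference mean, measured in standard deviations. A tiny sketch, with illustrative numbers only (real reference means and deviations vary by device and measurement site):

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    """T score: how many standard deviations a patient's bone mineral
    density (BMD) lies from the young-adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

# Hypothetical values in g/cm^2, chosen purely for illustration.
score = t_score(bmd=0.75, young_adult_mean=1.0, young_adult_sd=0.1)
# score is approximately -2.5, the threshold the passage describes
```

A patient whose measured density sits 2.5 standard deviations below that reference mean lands exactly at the cutoff discussed in the article, which is why two women with the same scan reading can still carry very different overall fracture risk once age and fracture history are factored in.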
GMAT Example Questions GMAT Verbal Examples Critical Reasoning – Assumption Questions Our modern mass culture derives many of its dubious notions about Ancient Egypt from Hollywood films, and especially from those on Biblical subjects. Hollywood, in turn, adopted many of these misconceptions from the writings of the Ancient Greek historian Herodotus. Science has now confirmed that on one matter about which Herodotus and Hollywood were in agreement, they were both mistaken. The discovery and subsequent analysis of the characteristics of the tombs of the workers who participated in the construction of the Great Pyramid of Giza provides evidence confirming something that Egyptologists have believed for a long time: that those who raised the Pyramids were not slaves but rather paid workers – free men who, the archaeologists speculate, perhaps felt a degree of pride in participating in the construction of the tomb of their Pharaoh, but who at any rate were definitively not the teams of unwilling slaves depicted in Hollywood epics. Which of the following assumptions underlies the argument in the passage above?
- Paid workers are more suitable than are slaves to raise long-lasting constructions such as the Great Pyramid of Giza.
- The characteristics of the tombs of those who worked on the construction of the Great Pyramid of Giza are representative of those of the tombs of the workers who participated in the construction of all the other Pyramids.
- In ancient Egypt, slaves were not buried in tombs, either when the Great Pyramid of Giza was constructed or earlier on in Egyptian history.
- Hollywood adopted the view that the Pyramids were built by slaves only because that view was sustained by Herodotus.
- There was sufficient population in ancient Egypt to provide the full-time paid work-force necessary for the construction of the Great Pyramid of Giza, given that it was not built by slaves.
Critical Reasoning – Inference Questions In Botswana, the Ocavango Delta, in reality a flood-plain, is inundated by the waters of the Ocavango river for some three or four months every year, thus becoming a swamp. The large population of lions living there, far from abhorring water, has become accustomed to moving through it and has learnt to hunt in it, given that the antelopes on which its members prey spend more time feeding in the swamps than grazing on dry land. In being relatively at ease in water, these lions resemble jaguars and tigers. They have also grown a longer, fluffier coat, a local adaptation to the fact that the loss of body heat takes place twenty times faster in water than in air. Furthermore, when the plain is flooded, the various prides have come, surprisingly, to allow the incursion of other prides into their territory, since the animals on which they feed tend to move rapidly from one of these territories to another. These facts show that lions are not as immovably averse to water, and not as fiercely territorial, as is commonly thought. Which of the following can be inferred on the basis of the facts cited above? - The lions of the Ocavango Delta are on the way to developing into a rather different species of lion. - During the eight or nine months in which the Ocavango Delta is not flooded, the lions in that area revert to the forms of behaviour held to be characteristic of lions. - The antelopes on which the lions prey would be safer out of the water than in it. - The evolution of species is accelerating in the Ocavango Delta as a result of the very peculiar conditions that prevail in the area. - Adaptations to a particular environment do not necessarily depend on that environment’s being the prevailing one. The ocean nearest in size to the Atlantic is the Indian, but it is different from the other one in that it is a relatively warm body of water with few plankton and therefore comparatively little marine life. 
- but it is different from the other one in that it is a relatively warm body of water with few plankton
- but the latter is different from the former because it is a relatively warm body of water with few plankton
- but the latter is different from the former in that it is a relatively warm body of water with little plankton
- but it is different from the other because it is a relatively warm body of water with little plankton
- but the latter is different from the first in that it is a relatively warm body of water with few plankton
The first edition of Dr Roget’s Thesaurus was published in 1852, and in the subsequent editions of the work the author’s son and grandson tried improving the coherency and layout by making changes that nevertheless left intact the original scheme of the thesaurus.
- tried improving the coherency and layout by making changes that nevertheless left intact the original scheme of the thesaurus
- tried to improve the coherency and layout by making changes leaving the original scheme of the thesaurus nevertheless intact
- tried to improve the coherency and layout by making changes that nevertheless left the original scheme of the thesaurus intact
- tried improving the coherency and layout that made changes leaving the original scheme of the thesaurus nevertheless intact
- tried to improve the coherency and layout by making changes that left nevertheless intact the thesaurus’s original scheme
Resin is a plant secretion that hardens when exposed to air; fossilized resin is called amber. Although Pliny in the first century recognized that amber was produced from “marrow discharged by trees,” amber has been widely misunderstood to be a semiprecious gem and has even been described in mineralogy textbooks. Confusion also persists surrounding the term “resin,” which was defined before rigorous chemical analyses were available.
Resin is often confused with gum, a substance produced in plants in response to bacterial infections, and with sap, an aqueous solution transported through certain plant tissues. Resin differs from both gum and sap in that scientists have not determined a physiological function for resin. In the 1950s, entomologists posited that resin may function to repel or attract insects. Fraenkel conjectured that plants initially produced resin in nonspecific chemical responses to insect attack and that, over time, plants evolved that produced resin with specific repellent effects. But some insect species, he noted, might overcome the repellent effects, actually becoming attracted to the resin. This might induce the insects to feed on those plants or aid them in securing a breeding site. Later researchers suggested that resin mediates the complex interdependence, or “coevolution,” of plants and insects over time. Such ideas led to the development of the specialized discipline of chemical ecology, which is concerned with the role of plant chemicals in interactions with other organisms and with the evolution and ecology of plant antiherbivore chemistry (plants’ chemical defenses against attack by herbivores such as insects). According to the passage, which of the following is true of plant antiherbivore chemistry?
- Changes in a plant’s antiherbivore chemistry may affect insect feeding behavior.
- A plant’s repellent effects often involve interactions between gum and resin.
- A plant’s antiherbivore responses assist in combating bacterial infections.
- Plant antiherbivore chemistry plays only a minor role in the coevolution of plants and insects.
- Researchers first studied repellent effects in plants beginning in the 1950s.
Of the following topics, which would be most likely to be studied within the discipline of chemical ecology as it is described in the passage?
- Seeds that become attached to certain insects, which in turn carry away the seeds and aid in the reproductive cycle of the plant species in question
- An insect species that feeds on weeds detrimental to crop health and yield, and how these insects might aid in agricultural production
- The effects of deforestation on the life cycles of subtropical carnivorous plants and the insect species on which the plants feed
- The growth patterns of a particular species of plant that has proved remarkably resistant to herbicides
- Insects that develop a tolerance for feeding on a plant that had previously been toxic to them, and the resultant changes within that plant species
The author of the passage refers to Pliny most probably in order to
- give an example of how the nature of amber has been misunderstood in the past
- show that confusion about amber has long been more pervasive than confusion about resin
- make note of the first known reference to amber as a semiprecious gem
- point out an exception to a generalization about the history of people’s understanding of amber
- demonstrate that Pliny believed amber to be a mineral
GMAT Quantitative Examples 1) The jewels in a certain tiara consist of diamonds, rubies, and emeralds. If the ratio of diamonds to rubies is 5⁄6 and the ratio of rubies to emeralds is 8⁄3, what is the least number of jewels that could be in the tiara?
- 16
- 22
- 40
- 53
- 67
2) At a certain pizzeria, 1/8 of the pizzas sold in one week were mushroom and 1/3 of the remaining pizzas sold were pepperoni. If n of the pizzas sold were pepperoni, how many were mushroom? 4) The addition above shows four of all the different integers that can be formed by using each of the digits 2, 3, 4, and 5 exactly once in each integer. What is the sum of all these integers? 1) If k is an integer less than 17 and k – 1 is the square of an integer, what is the value of k?
|(1) k is an even number.| |(2) k + 2 is the square of an integer.|
- Statement (1) ALONE is sufficient, but statement (2) alone is not sufficient.
- Statement (2) ALONE is sufficient, but statement (1) alone is not sufficient.
- BOTH statements TOGETHER are sufficient, but NEITHER statement ALONE is sufficient.
- EACH statement ALONE is sufficient.
- Statements (1) and (2) TOGETHER are NOT sufficient.
2) A group of 49 consumers were offered a chance to subscribe to 3 magazines: A, B, and C. 38 of the consumers subscribed to at least one of the magazines. How many of the 49 consumers subscribed to exactly two of the magazines? |(1) Twelve of the 49 consumers subscribed to all three of the magazines.| |(2) Twenty of the 49 consumers subscribed to magazine A.|
- Statement 1 alone is sufficient to answer the question, but statement 2 alone is not sufficient.
- Statement 2 alone is sufficient to answer the question, but statement 1 alone is not sufficient.
- Both statements together are needed to answer the question, but neither statement alone is sufficient.
- Either statement by itself is sufficient to answer the question.
- Not enough facts are given to answer the question.
4) Let A be the set of all outcomes of a random experiment and let B and C be events in A. Let C̅ denote the set of all the outcomes in A that are not in C and let P(B) denote the probability that event B occurs. What is the value of P(B)?
- P (B ∪ C) = 0.7
- P (B ∪ C̅) = 0.9
GMAT Integrated Reasoning Examples Type 1: Graphics Interpretation Questions The graph shows the percent profit earned by two companies, P and Q, on their investments. In which year was the ratio of investment to income greatest for Company P? Select: 2008, 2002, 2005, 2007, 2004 If the income of Company P in 2006 was the same as the income of Company Q in 2003, what would be the ratio of the investment of company Q in 2003 to the investment of company P in 2006?
Select: 9:10, 10:9, 13:15, 15:13 Type 2: Two–Part Analysis Questions The following is an extract from a sports commentator’s speech, which discusses a fictitious location, called Sanura, on a playing field. “And now we can see that in the final minutes of the match, almost all the players have gathered in anticipation near the only gate where the goal can be scored and are waiting for the ball to be thrown into play. Each coach puts one defenseman from his team in Sanura. While until recently controlling Sanura was considered a good idea only at the beginning of a game when a face-to-face game was developing, now it has become clear that even in situations like this one, when the play is occurring far from Sanura, it is crucial to put some players there.” Based on the definition of the fictitious word Sanura as inferred from the extract above, which of the following events CAN happen in Sanura and which CANNOT? Make only two selections, one in each column
|0||0||Throwing a ball from the line|
|0||0||Getting a sports injury|
|0||0||Scoring a goal|
|0||0||All members of one team gathering together|
|0||0||Members of different teams meeting|
Type 3: Table Analysis Questions |Genre||Rental €||Rental Rank||Sales %||Sales Rank|
|0||0||For all genres in which Mark’s leads in rentals, it does not always lead in sales|
|0||0||All other stores combined rent more Documentaries than Mark’s does|
|0||0||No single store rents more than 25% of the town’s Drama movies|
Type 4: Multi-Source Reasoning Questions News article in a popular business publication June 7 – If current trends continue, farmed seafood will overtake ocean fishing as the world’s largest source of seafood by 2025. Aggressive overfishing of the world’s oceans and the inability of world governments to agree on fishing limits mean that farming will become critical to the industry’s ability to meet worldwide seafood demand.
Additionally, recent concerns about mercury levels in wild-caught fish have led many consumers to prefer farmed fish, further creating increased demand for this relatively new source of seafood. Interview with a well-known scientist in a technology journal July 2 – Dr. Jason Dempster, one of the world’s most outspoken critics of the seafood industry’s unwillingness to curb its output in order to protect the fish population, suggests that more than two dozen popular species may become virtually extinct in the next several decades. “I understand that consumers keep buying the seafood, and fishermen are naturally going to meet demand wherever they can find it. However, if something isn’t done to meet the demand another way, by the middle of this century even something as common as tuna may become a delicacy only the world’s wealthiest families can afford.” Article from a weekly news magazine July 20 – Demand for tilapia, one of the world’s most popular species of fish, has grown 1000% over the last decade as people around the world have discovered it as a low-cost fish that goes well with a variety of foods. This increased demand has encouraged countless tilapia farms to open in China, and American officials have expressed concern that not all tilapia imported from China meets U.S. safety standards. Some experts in the U.S. have called for creating more stringent standards for all seafood imports, but Chinese authorities warn that this may dramatically increase the cost of seafood imported into the United States. Consider each of the following statements. Does the information in the three articles support the inference as stated?
|0||0||The world’s governments usually do not agree with one another on how to deal with matters related to fishing and seafood farming.|
|0||0||An increase in worldwide demand for tilapia has driven the world’s ocean fish population to dangerously low levels.|
|0||0||Dr. Dempster supports an increase in fish farming.|
|0||0||Chinese tilapia farms have led some U.S. consumers to worry about the levels of mercury in their seafood.|
GMAT Analytical Writing Assessment Sample Essay Example: Analysis of an Argument “The recent surge in violence in the southern part of the city is a result of a shortage of police officers and an absence of leadership on the part of the city council. In order to rectify the burgeoning growth of crime that threatens the community, the city council must address this issue seriously. Instead of spending time on peripheral issues such as education quality, community vitality, and job opportunity, the city council must realize that the crime issue is serious and double the police force, even if this action requires budget cuts from other city programs.” In the argument above, the author concludes that the city council is not doing its job well and needs to focus on expanding significantly the police force in order to combat recent growth in the level of crime. The premise of the argument is that crime is expanding while the city council focuses on ostensibly unrelated matters such as education reform. However, the argument is flawed because it falsely assumes that the city council’s efforts to improve quality of life are entirely unrelated to levels of violence and it assumes that the crime problem can be solved by merely increasing the police force. First, the argument wrongly assumes that issues of educational opportunity, community vitality, and job availability have no bearing on crime. However, the author fails to support this assumption. It is entirely possible that the crime level spiked due to a recent and sizeable layoff at a major nearby factory that pushed countless citizens out of work and onto the streets. With individuals struggling to survive, it should come as no surprise that people are turning to crime.
Second, the reasoning in the editorial is flawed because it erroneously assumes that increasing the police force will directly address the root of the crime problem and reduce the level of crime. Yet, a landmark study published in early 2008 showed that increasing the size of a police force beyond a certain point provides extremely small marginal returns in the reduction of crime. Given the fact that the local police force is already above this threshold, the editorial’s author wrongly assumed that a doubling of the police force will materially decrease the crime rate. Moreover, the argument could be improved by appealing to the city’s history with fighting crime and managing the size of its police force. In particular, approximately 25 years ago, the city council faced a situation very similar to the one it faces today: a rising crime rate and growing spending on community development. The city council decided to increase the size of its after-school programs’ budget by about 75%, and this reduced crime dramatically. Faced with the same situation today, the city council should follow the path it took 25 years ago.
New kilonova has astronomers rethinking what we know about gamma-ray bursts A year ago, astronomers discovered a powerful gamma-ray burst (GRB) lasting nearly two minutes, dubbed GRB 211211A. Now that unusual event is upending the long-standing assumption that longer GRBs are the distinctive signature of a massive star going supernova. Instead, two independent teams of scientists identified the source as a so-called “kilonova,” triggered by the merger of two neutron stars, according to a new paper published in the journal Nature. Because neutron star mergers were assumed to only produce short GRBs, the discovery of a hybrid event involving a kilonova with a long GRB is quite surprising. “This detection breaks our standard idea of gamma-ray bursts,” said co-author Eve Chase, a postdoc at Los Alamos National Laboratory. “We can no longer assume that all short-duration bursts come from neutron-star mergers, while long-duration bursts come from supernovae. We now realize that gamma-ray bursts are much harder to classify. This detection pushes our understanding of gamma-ray bursts to the limits.” As we’ve reported previously, gamma-ray bursts are extremely high-energy explosions in distant galaxies lasting from mere milliseconds to several hours. The first gamma-ray bursts were observed in the late 1960s, thanks to the launching of the Vela satellites by the US. They were meant to detect telltale gamma-ray signatures of nuclear weapons tests in the wake of the 1963 Nuclear Test Ban Treaty with the Soviet Union. The US feared that the Soviets were conducting secret nuclear tests, violating the treaty. In July 1967, two of those satellites picked up a flash of gamma radiation that was clearly not the signature of a nuclear weapons test.
Just a couple of months ago, multiple space-based detectors picked up a powerful gamma-ray burst passing through our solar system, sending astronomers worldwide scrambling to train their telescopes on that part of the sky to collect vital data on the event and its afterglow. Dubbed GRB 221009A, it was the most powerful gamma-ray burst yet recorded and likely could be the “birth cry” of a new black hole. There are two types of gamma-ray bursts: short and long. Classic short-term GRBs last less than two seconds, and they were previously thought to only occur from the merging of two ultra-dense objects, like binary neutron stars, producing an accompanying kilonova. Long GRBs can last anywhere from a few minutes to several hours and are thought to occur when a massive star goes supernova. Astronomers at the Fermi and Swift telescopes simultaneously detected this latest gamma-ray burst last December and pinpointed the location in the constellation Boötes. That quick identification allowed other telescopes around the world to turn their attention to that sector, enabling them to catch the kilonova in its earliest stages. And it was remarkably nearby for a gamma-ray burst: about 1 billion light-years from Earth, compared to around 6 billion light-years for the average gamma-ray burst detected to date. (Light from the most distant GRB yet recorded traveled for some 13 billion years.) “It was something we had never seen before,” said co-author Simone Dichiara, an astronomer at Penn State University and a member of the Swift team. “We knew it wasn’t associated with a supernova, the death of a massive star, because it was too close. It was a completely different kind of optical signal, one that we associate with a kilonova, the explosion triggered by colliding neutron stars.” As two binary neutron stars begin circling into their death spiral, they send out powerful gravitational waves and strip neutron-rich matter from each other.
Then the stars collide and merge, producing a hot cloud of debris that glows with light of multiple wavelengths. It’s the neutron-rich debris that astronomers believe creates a kilonova’s visible and infrared light—the glow is brighter in the infrared than in the visible spectrum, a distinctive signature of such an event, resulting from heavy elements in the ejecta which block visible light but let the infrared through. That signature is what subsequent analysis of GRB 211211A revealed. And since the subsequent decay of a neutron star merger produces heavy elements like gold and platinum, astronomers now have a new means of studying how these heavy elements form in our universe. Several years ago, the late astrophysicist Neil Gehrels suggested that longer gamma-ray bursts could be produced by neutron star mergers. It seems only fitting that NASA’s Swift Observatory, which is named in his honor, played a key role in the discovery of GRB 211211A and the first direct evidence for that connection. “This discovery is a clear reminder that the Universe is never fully figured out,” said co-author Jillian Rastinejad, a Ph.D. student at Northwestern University. “Astronomers often take it for granted that the origins of GRBs can be identified by how long the GRBs are, but this discovery shows us there’s still much more to understand about these amazing events.” DOI: Nature, 2022. 10.1038/s41550-022-01819-4 (About DOIs).
Direct Variation (also known as Direct Proportion) The concept of direct variation is summarized by the equation y = kx. We say that y varies directly with x if y is expressed as the product of some constant number k and x. If we isolate k on one side (k = y/x), it reveals that k is the constant ratio between y and x. In other words, dividing y by x always yields the same constant output. k is also known as the constant of variation, or constant of proportionality. Examples of Direct Variation Example 1: Tell whether y varies directly with x in the table below. If yes, write an equation to represent the direct variation. To show that y varies directly with x, we need to verify whether dividing y by x always gives us the same value. Since we always arrive at the same value of 2 when dividing y by x, we can claim that y varies directly with x. This constant number is, in fact, our k = 2. To write the equation of direct variation, we replace the letter k by the number 2 in the equation y = kx. When an equation that represents direct variation is graphed in the Cartesian plane, it is always a straight line passing through the origin. Think of it as the slope-intercept form of a line, y = mx + b, where b = 0. Here is the graph of the equation we found above. Example 2: Tell whether y varies directly with x in the table below. If yes, write an equation to represent the direct variation. Divide each value of y by the corresponding value of x. The quotient of y and x is always k = -0.25. That means y varies directly with x. Here is the equation that represents its direct variation. Here is the graph. Having a negative value of k implies that the line has a negative slope. As you can see, the line is decreasing from left to right. In addition, since k is negative, when x increases the value of y decreases. Example 3: Tell whether y varies directly with x in the table. If yes, write the equation that shows the direct variation.
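The ratio test used in Examples 1–3 can be sketched in a few lines of Python. The tables below are hypothetical stand-ins (the original tables are not reproduced here), chosen to be consistent with the constants k = 2 and k = -0.25 found in the text:

```python
def constant_of_variation(pairs, tol=1e-9):
    """Return k if every ratio y/x is the same (direct variation), else None."""
    ratios = [y / x for x, y in pairs]
    k = ratios[0]
    return k if all(abs(r - k) < tol for r in ratios) else None

# Hypothetical tables consistent with Examples 1 and 2 (k = 2 and k = -0.25):
print(constant_of_variation([(1, 2), (2, 4), (3, 6)]))            # 2.0
print(constant_of_variation([(4, -1.0), (8, -2.0), (12, -3.0)]))  # -0.25
# A table whose ratios disagree, as in Example 3, is not direct variation:
print(constant_of_variation([(1, 2), (2, 4), (3, 7)]))            # None
```

Comparing the ratios with a small tolerance rather than `==` avoids false negatives from floating-point rounding.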
Find the ratio of y to x for each row, and see if we get a common answer, which we will call the constant k. It looks like the k-value on the third row is different from the rest. In order for it to be a direct variation, they should all have the same k-value. The table does not represent direct variation; therefore, we can’t write an equation for direct variation. Example 4: Given that y varies directly with x, and that y = 8 when x = 12. - Write the equation of direct variation that relates x and y. - What is the value of y when x = -9? a) Write the equation of direct variation that relates x and y. Since y varies directly with x, I would immediately write down the formula y = kx so I can see what’s going on. We are given the information that y = 8 when x = 12. Substitute the values of x and y in the formula and solve for k. Replace the k in the formula by the value solved above to get the direct variation equation that relates x and y. b) What is the value of y when x = -9? To solve for y, substitute x = -9 in the equation found in part a). Example 5: If y varies directly with x, find the missing value of x in (x, -18). We will use the first point to find the constant of proportionality k and to set up the equation y = kx. Substitute the values of x and y to solve for k. The equation of direct proportionality that relates x and y is… We can now solve for x in (x, -18) by plugging in y = -18. Example 6: The circumference of a circle (C) varies directly with its diameter. If a circle with a circumference of 31.4 inches has a radius of 5 inches, - Write the equation of direct variation that relates the circumference and diameter of a circle. - What is the circumference of a circle with a radius of 7 inches? a) Write the equation of direct variation that relates the circumference and diameter of a circle. We don’t have to use the formula y = kx all the time, but we can use it to come up with a similar setup depending on what the problem is asking.
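Example 4 can be worked through directly in code, using the values given in the text (y = 8 when x = 12, then evaluating at x = -9):

```python
def direct_variation_through(x0, y0):
    """From one known point (x0, y0), build y = kx with k = y0/x0."""
    k = y0 / x0
    return k, (lambda x: k * x)

# Example 4: y = 8 when x = 12, so k = 8/12 = 2/3.
k, f = direct_variation_through(12, 8)
print(k)      # 0.666... (= 2/3)
print(f(-9))  # -6.0
```

The same two steps (find k from one point, then evaluate y = kx) also solve Example 5, just with x as the unknown: x = y/k.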
Since the problem tells us that the circumference of a circle varies directly with its diameter, we can write the equation of direct proportionality C = kd instead. The diameter is not provided, but the radius is. Since the radius is given as 5 inches, we can find the diameter because it is equal to twice the length of the radius. This gives us 10 inches for the diameter. The equation of direct proportionality that relates circumference and diameter is shown below. Notice, k is replaced by the numerical value 3.14. b) What is the circumference of a circle with a radius of 7 inches? Since the equation requires the diameter and not the radius, we first need to convert the radius to a diameter. Remember that the diameter is twice the measure of the radius; thus, a radius of 7 inches corresponds to a diameter of 14 inches. Now, we substitute d = 14 into the formula to get the answer for the circumference.
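Example 6 can be checked numerically. A minimal sketch using the text's constant of variation, k = 3.14:

```python
PI = 3.14  # the constant of variation used in the text

def circumference(radius):
    diameter = 2 * radius      # the diameter is twice the radius
    return PI * diameter       # C = k*d with k = 3.14

print(round(circumference(5), 2))  # 31.4, the given circle
print(round(circumference(7), 2))  # 43.96, the answer to part b)
```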
A dental prophylaxis is a cleaning procedure performed to thoroughly clean the teeth. Prophylaxis is an important dental treatment for halting the progression of periodontal disease and gingivitis. Periodontal disease and gingivitis occur when bacteria from plaque colonize on the gingival (gum) tissue, either above or below the gum line. These bacteria colonies cause serious inflammation and irritation which in turn produce a chronic inflammatory response in the body. As a result, the body begins to systematically destroy gum and bone tissue, making the teeth shift, become unstable, or completely fall out. The pockets between the gums and teeth become deeper and house more bacteria which may travel via the bloodstream and infect other parts of the body. Reasons for prophylaxis/teeth cleaning Prophylaxis is an excellent procedure to help keep the oral cavity in good health and also halt the progression of gum disease. Here are some of the benefits of prophylaxis: Tartar removal – Tartar (calculus) and plaque buildup, both above and below the gum line, can cause serious periodontal problems if left untreated. Even using the best brushing and flossing homecare techniques, it can be impossible to remove debris, bacteria and deposits from gum pockets. The experienced eye of a dentist using specialized dental equipment is needed in order to spot and treat problems such as tartar and plaque buildup. Aesthetics – It’s hard to feel confident about a smile marred by yellowing, stained teeth. Prophylaxis can rid the teeth of unsightly stains and return the smile to its former glory. Fresher breath – Periodontal disease is often signified by persistent bad breath (halitosis). Bad breath is generally caused by a combination of rotting food particles below the gum line, possible gangrene stemming from gum infection, and periodontal problems. The removal of plaque, calculus and bacteria noticeably improves breath and alleviates irritation. 
Identification of health issues – Many health problems first present themselves to the dentist. Since prophylaxis involves a thorough examination of the entire oral cavity, the dentist is able to screen for oral cancer, evaluate the risk of periodontitis and often spot signs of medical problems like diabetes and kidney problems. Recommendations can also be provided for altering the home care regimen. What does prophylaxis treatment involve? Prophylaxis can either be performed in the course of a regular dental visit or, if necessary, under general anesthetic. The latter is particularly common where severe periodontal disease is suspected or has been diagnosed by the dentist. An endotracheal tube is sometimes placed in the throat to protect the lungs from harmful bacteria which will be removed from the mouth. Prophylaxis is generally performed in several stages: Supragingival cleaning – The dentist will thoroughly clean the area above the gum line with scaling tools to rid them of plaque and calculus. Subgingival cleaning – This is the most important step for patients with periodontal disease because the dentist is able to remove calculus from the gum pockets and beneath the gum line. Root planing - This is the smoothing of the tooth root by the dentist to eliminate any remaining bacteria. These bacteria are extremely dangerous to periodontitis sufferers, so eliminating them is one of the top priorities of the dentist. Medication - Following scaling and root planing, an antibiotic or antimicrobial cream is often placed in the gum pockets. These creams promote fast and healthy healing in the pockets and help ease discomfort. X-ray and examination – Routine X-rays can be extremely revealing when it comes to periodontal disease. X-rays show the extent of bone and gum recession, and also aid the dentist in identifying areas which may need future attention. 
Prophylaxis is recommended twice annually as a preventative measure, but should be performed every 3-4 months on periodontitis sufferers. Though gum disease cannot be completely reversed, prophylaxis is one of the tools the dentist can use to effectively halt its destructive progress. If you have questions or concerns about prophylaxis or periodontal disease, please ask your dentist.
Bartholomew Gosnold (1571 – 22 August 1607) was an English lawyer, explorer, and privateer who was instrumental in founding the Virginia Company of London, and Jamestown in colonial America. He led the first recorded European expedition to Cape Cod. He is considered by Preservation Virginia (formerly known as the Association for the Preservation of Virginia Antiquities) to be the "prime mover of the colonization of Virginia". He obtained backing to attempt to found an English colony in the New World and in 1602 he sailed from Falmouth, England in a small Dartmouth bark, the Concord, with thirty-two on board. They intended to establish a colony in New England. Gosnold pioneered a direct sailing route due west from the Azores to what later became New England, arriving in May 1602 at Cape Elizabeth in Maine (Lat. 43 degrees).
This slideshow page is to explain and give practice on the passive voice. It is common to use passive structures in academic writing because in many cases the agent (the person/people/organisation etc. who does the action) of an action is less important than the action itself. You form the passive by using a form of the auxiliary be (e.g. am, is, are, was, were, been, be) and the past participle of a main verb (e.g. written, spoken, listened). Past participles are also used in present perfect verbs, e.g. I have written an essay, and are sometimes different from past tense verbs, e.g. I wrote an essay. Passive structures are impossible with intransitive verbs (which do not take objects, e.g. arrive), as there is nothing to become the subject of the passive sentence (e.g. Wrong: The party was arrived at by me. Correct: I arrived at the party.). Stative verbs, which refer to states rather than actions, are also seldom used in the passive. Some other stative verbs are: seem, have, suit and resemble. Choose the correct option from the drop-down boxes, then click the 'Show Answers' button below.
The Role Enzymes Play Enzymes facilitate countless daily reactions in your body to keep you alive and thriving. They perform many functions. Enzymes catalyze chemical reactions in the body, and virtually every systemic function in the body depends on an enzymatic reaction. Enzymes for digestion - Digestive enzymes specifically work to break down the food you eat. Everyone produces enzymes naturally, but some people don't make enough due to poor diet, chronic conditions, stress, or age. Without enough enzymes, your body can't digest food properly, leading to bloating, gas, constipation, diarrhea, or other GI-related symptoms. The enzyme lipase catalyzes the breakdown of fats (lipids). Protease enzymes catalyze the breakdown of proteins. Amylase breaks down carbohydrates and starches (polysaccharides). Cellulase refers to a group of enzymes which, acting together, hydrolyze cellulose. Invertase is an enzyme that catalyzes the hydrolysis (breakdown) of sucrose (table sugar). Glucoamylase is a different type of amylase; produced within the human body, it is responsible for breaking long-chain carbohydrates and starches down into sugars that are afterwards used as fuel by the body. Alpha-galactosidase is a digestive enzyme that breaks down the carbohydrates in beans into simpler sugars to make them easier to digest. Beta-glucanase is a group of carbohydrate enzymes which break down the glycosidic bonds within beta-glucan. Beta-glucans are polysaccharides made of glucose molecules linked together into long chains that humans cannot readily digest, such as cellulose plant fiber, cereal bran fiber, and parts of certain types of fungi, yeast, and bacteria. Pectinase is an enzyme group that breaks down pectin, a structural heteropolysaccharide found in the primary cell walls of terrestrial plants and in cereals.
Xylanase is a class of enzymes which degrade the linear polysaccharide beta-1,4-xylan into xylose, thus breaking down hemicellulose, one of the major components of plant cell walls. Phytase is an enzyme that catalyzes the hydrolysis of phytic acid (myo-inositol hexakisphosphate), an indigestible organic form of phosphorus that is found in many plant tissues, especially in grains and oil seeds. Hemicellulase breaks down hemicellulose. Common fiber-rich breakfast cereals have a large amount of hemicelluloses, and hemicellulase is needed to break down these fiber-rich components. Lactase is essential to the complete digestion of whole milk; it breaks down lactose, the sugar which gives milk its sweetness. Lacking lactase, a person consuming dairy products may experience the symptoms of lactose intolerance. Bromelain is a protein-digesting enzyme mixture derived from the stem, fruit, and juice of the pineapple plant. Bromelain is also used to reduce inflammation and swelling, and for osteoarthritis. Papain is a proteolytic enzyme extracted from the raw fruit of the papaya plant. Proteolytic enzymes help break proteins down into smaller protein fragments called peptides and amino acids. Catalase can help protect the body from oxidative damage by breaking hydrogen peroxide into water and oxygen. If not broken down, peroxides accumulate in the body and, if left unchecked, cause DNA damage and inflammation. These statements have not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure, or prevent any disease.
Revision as of 20:59, 3 April 2020

On the topic of the size of the stars, see this story, which describes how the visible stars appeared truly huge. Since a heliocentric model requires distant stars, the measurable diameters of the stars show that their sizes would need to be of tremendous proportions. It was one of the early controversies in astronomy. The geocentric model's closer stars seemed more reasonable. In response, Copernicans appealed to the mystery of God, and Copernicans in later eras postulated an "optical illusion".

==The Popular Creation Story of Astronomy is Wrong==

The old tale about science versus the church is wide of the mark. “ In the early years of the 17th century, Johannes Kepler argued that the universe contained thousands of mighty bodies, bodies so huge that they could be universes themselves. These giant bodies, said Kepler, testified to the immense power of, as well as the personal tastes of, an omnipotent Creator God.
The giant bodies were the stars, and they were arrayed around the sun, the universe’s comparatively tiny central body, itself orbited by its retinue of still tinier planets. This strange view of the universe held by Kepler, the innovative astronomer who set the stage for Isaac Newton and the advent of modern physics by freeing astronomy from the perfect circles of Aristotle and working out the elliptical nature of orbital motion, was held by a number of early supporters of Nicolaus Copernicus and his heliocentric (“sun-centric”) theory. Kepler’s view was the view that science—repeatable observations of the stars and rigorous mathematical analysis of the data gleaned from those observations—demanded. It was also the Achilles’ heel of the Copernican theory. Astronomers who maintained that the Earth sits immobile, at the center of the universe, attacked the giant stars as an absurdity, concocted by Copernicans to make their pet theory fit the data. The story of this “giant stars” view of the universe has been all but forgotten. That is unfortunate. The story of Kepler and the giant stars illustrates a robust dynamism present in science from its very birth. That dynamism stands in contrast to the usual tales we are told about the birth of science, stories portraying the debates around the Copernican theory as occasions when science was suppressed by powerful, entrenched establishments. Stories of scientific suppression, rather than scientific dynamism, have not served science well. The story of giant stars does. Johannes Kepler laid out his ideas about giant stars in a book he wrote in 1606 called De Stella Nova or On the New Star. The book was about a nova, a new star that simply appeared for a while in the sky in 1604. According to Kepler, the nova outclassed all the other stars, rivaling even Sirius, the brightest of all the stars that regularly adorn the night sky. 
In On the New Star, Kepler addressed the size of the nova, concluding that its girth substantially exceeded that of the orbit of Saturn (the most distant planet known at the time). Sirius was similarly huge, and even the smallest stars were larger than Earth’s orbit. The stars were in fact the size of universes. Kepler’s former boss, Tycho Brahe, had proposed a theory of the universe which borrowed from Copernicus, but which kept Earth fixed in place at the center of the universe. Before his death in 1601, Brahe had been the “Big Science” of his day, with a big observatory, the best instruments, lots of top-notch assistants (such as Kepler), his own publishing operation, and lots of money. The sun, moon, and stars circled the immobile Earth in Brahe’s geocentric (“Earth-centric”) theory, while the planets circled the sun. The stars were located just beyond Saturn, marking the edge of the observable universe. Kepler’s sizes for the nova and Sirius were larger than Brahe’s whole universe, while his sizes for lots of other stars were comparable to such a universe. - An astronomer who believed Copernicus, and believed math, simply had to believe that all the stars were huge. Why would Kepler say that stars were universe-sized? Because the data said they were, at least if the heliocentric theory was right. In that theory, Earth circles the sun yearly. So, if at one time of the year it is moving toward a certain star, six months later it will be moving away from that same star, and its vantage point shifts by the full width of its orbit. We might expect nearby stars to shift slightly in apparent position against the background over the course of the year as Earth swings from one side of its orbit to the other. There is a name for this sort of effect: parallax. But no one could see any parallax. Copernicus had an explanation for this: The orbit of the Earth must be like a tiny point by comparison to the distance to the stars. Earth’s orbit was negligible in size as regards the stars, and Earth’s motion was negligible in effect.
As Copernicus had put it, “that there are no such [parallax] appearances among the fixed stars argues that they are at an immense height away, which makes the circle of [Earth’s] annual movement or its image disappear.” A problem lies in this negligible size and immense distance. People who have good vision and look up at the sky will see the stars as little round dots, with small but measurable apparent sizes. Astronomers dating all the way back to Ptolemy during the second century had determined that the more prominent of those star dots measure somewhere in the range of one-tenth to one-twentieth the diameter that the round moon appears to be. In On the New Star, Kepler said bright stars measure one-tenth the moon’s diameter, Sirius a bit more. The problem is, a star that appears one-tenth the moon’s diameter when seen in the sky would be one-tenth the moon’s true physical diameter only if it was the same distance away from us as the moon. But stars are more distant than the moon. Were that star then 10 times more distant than the moon, its true size would be the same as the moon—it would only appear one-tenth the moon’s size on account of greater distance. Were that star 100 times more distant, its true diameter would be 100 times that of the moon. Were it 1,000 times farther away than the moon, its true size would be 1,000 times larger. And what if that star, which appears to be one-tenth the diameter of the moon, were at the distance the Copernican theory required in order for there to be no detectable parallax? That star would be, Kepler said, as big as the orbit of Saturn. And every last star visible in the sky would be at least as big as the orbit of Earth. Even the smallest stars would be orders of magnitude larger than the sun. 
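The scaling argument above is simple enough to check numerically. A sketch with two hedged inputs: the moon's apparent diameter taken as 1800 arcseconds (as in the text), and an assumed naked-eye parallax limit of roughly one arcminute, which is an illustrative figure not stated in the text:

```python
import math

AU = 1.0  # work in astronomical units (the Earth–Sun distance)

def implied_diameter_au(angular_diameter_arcsec, distance_au):
    """Physical diameter implied by an apparent angular size (small-angle)."""
    theta = math.radians(angular_diameter_arcsec / 3600.0)
    return distance_au * theta

# A bright star measured at one-tenth the moon's 1800-arcsec diameter:
star_arcsec = 1800 / 10

# Copernican requirement: parallax too small for the naked eye to detect.
# Assume (illustratively) a naked-eye limit of about 60 arcsec.
parallax_limit = math.radians(60 / 3600.0)
min_distance = AU / math.tan(parallax_limit)

print(round(min_distance))                                       # 3438
print(round(implied_diameter_au(star_arcsec, min_distance), 1))  # 3.0
# Earth's orbit is 2 AU across: even at the minimum Copernican distance,
# a bright star comes out larger than Earth's orbit, as Kepler argued.
```

Pushing the star farther away to make the parallax smaller only makes the implied diameter grow in proportion, which is exactly the bind the Copernicans were in.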
This may seem strange to us today, because we know now that stars come in many sizes, and while a very few are larger than Earth’s orbit (the star Betelgeuse in Orion being a prominent example), the vast majority are “red dwarfs” that are far outclassed by the sun. However, in Kepler’s time this was a simple matter of observation, measurement, and math—the ordinary stuff of science. An astronomer of that time who believed Copernicus, believed the measurement data, and believed math, simply had to believe that all the stars were huge. (More on where they went wrong, in a moment). The case for huge stars was so solid that the details regarding the measurements of them did not matter. Johann Georg Locher and his mentor Christoph Scheiner would neatly summarize the giant stars problem in their 1614 astronomy book Disquisitiones Mathematicae or Mathematical Disquisitions. They wrote that in the Copernican theory the Earth’s orbit is like a point within the universe of stars; but the stars, having measurable sizes, are larger than points; therefore, in a Copernican universe every star must be larger than Earth’s orbit, and of course vastly larger than the sun itself. - We should not be surprised that people see in scientific murkiness the hand of conspiratorial establishments. Because of the giant stars, Locher and Scheiner rejected the Copernican theory, and backed Brahe’s theory. That theory was compatible with the latest telescopic discoveries, such as the phases of Venus that showed it to circle the sun. In Brahe’s theory, the stars were not so far away—just past Saturn. An astronomer in Kepler’s time who believed Brahe, believed the measurement data, and believed math, did not have to believe that the stars were huge. (Brahe had calculated that they ranged in size between the larger planets and the sun.) Locher and Scheiner were not alone—for many astronomers, including Brahe himself who first raised the issue, the giant stars were just too much. 
But Kepler had no problem with giant stars. For him, they were part of the overall structure of the universe; and Kepler, who saw ellipses in orbits and Platonic solids in the arrangement of the planets, always had an eye out for structure. He saw the giant stars as an illustration both of God’s power and of God’s intent in putting the universe together. In discussing the parts of the universe—the stars, the solar system (the system of the “movables,” as Kepler calls them), and the Earth—the words of On the New Star rise almost to the level of poetry, even in translation. - Where magnitude waxes, there perfection wanes, and nobility follows diminution in bulk. The sphere of the fixed stars according to Copernicus is certainly most large; but it is inert, no motion. The universe of the movables is next. Now this—so much smaller, so much more divine—has accepted that so admirable, so well-ordered motion. Nevertheless, that place neither contains animating faculty, nor does it reason, nor does it run about. It goes, provided that it is moved. It has not developed, but it retains that impressed to it from the beginning. What it is not, it will never be. What it is, is not made by it—the same endures, as was built. Then comes this our little ball, the little cottage of us all, which we call the Earth: the womb of the growing, herself fashioned by a certain internal faculty. The architect of marvelous work, she kindles daily so many little living things from herself—plants, fishes, insects—as she easily may scorn the rest of the bulk in view of this her nobility. Lastly behold if you will the little bodies which we call the animals. What smaller than these is able to be imagined in comparison to the universe? But there now behold feeling, and voluntary motions—an infinite architecture of bodies. 
Behold if you will, among those, these fine bits of dust, which are called Men; to whom the Creator has granted such, that in a certain way they may beget themselves, clothe themselves, arm themselves, teach themselves an infinity of arts, and daily accomplish the good; in whom is the image of God; who are, in a certain way, lords of the whole bulk. And what is it to us, that the body of the universe has for itself a great breadth, while the soul lacks for one? We may learn well therefore the pleasure of the Creator, who is author both of the roughness of the large masses, and of the perfection of the smalls. Yet he glories not in bulk, but ennobles those which he has wished to be small. - In the end, through these intervals from Earth to the sun, from sun to Saturn, from Saturn to the fixed stars, we may learn gradually to ascend toward recognizing the immensity of divine power. Other Copernicans shared Kepler’s views. Copernicans like Thomas Digges, Christoph Rothmann, and Philips Lansbergen, spoke of the giant stars in terms of God’s power, or God’s palace, or the palace of the Angels, or even God’s own warriors. And Copernicus himself had invoked the power of God in discussing the immense distances of the stars, noting “how exceedingly fine is the godlike work of the Best and Greatest Artist.” The anti-Copernicans were unpersuaded. Locher and Scheiner noted that Copernicus’s “minions” did not deny that stars had to be giant in a Copernican universe. 
“Instead,” the two astronomers wrote, “they go on about how from this everyone may better perceive the majesty of the Creator,” an idea they called “laughable.” One anti-Copernican astronomer, Giovanni Battista Riccioli, wrote that calling in divine power to support a theory “cannot satisfy the more prudent men.” Another, Peter Crüger, regarding the size of stars, commented, “I do not understand how the Pythagorean or Copernican System of the Universe can survive.” - Stories of scientific suppression, rather than scientific dynamism, have not served science well. The anti-Copernicans were not just the Party of No. Locher and Scheiner reported telescopic discoveries. They urged that astronomers engage in programs of systematic telescopic observations in order to use eclipses of Jupiter’s moons to measure the distance to Jupiter, and to use Saturn’s “attendants” (not yet understood to be rings) to probe Saturn’s motion. They worked out an explanation for how Earth might orbit the sun: by continually falling toward it, just as an iron ball might continually fall toward Earth. (This insight came decades before the birth of Newton, who would give us our modern explanation of an orbit being a kind of fall, and who would explain orbits by means of a cannon ball being fired from atop a mountain.) They also investigated the question of how any rotation of Earth might influence the trajectories of falling bodies and projectiles. In fact, other 17th-century anti-Copernicans like Riccioli would develop this idea further, theorizing about what today we call the “Coriolis Effect” (which bears the name of the scientist who described it in the 19th century) and arguing that the absence of any such effect was another piece of evidence indicating that Earth in fact does not move. When we learned in school about the Copernican Revolution, we did not hear about arguments involving star sizes and the Coriolis Effect. 
We heard a much less scientifically dynamic story, in which scientists like Kepler struggled to see scientifically correct ideas triumph over powerful, entrenched, and recalcitrant establishments. Today, despite the advances in technology and knowledge, science faces rejection by those who claim that it is bedeviled by hoaxes, conspiracies, or suppressions of data by powerful establishments. But the story of the Copernican Revolution shows that science was, from its birth, a dynamic process, with good points and bad points on both sides of the debate. Not until decades after Kepler’s On the New Star and Locher and Scheiner’s Mathematical Disquisitions did astronomers begin to come upon evidence suggesting that the star sizes they were measuring, either with the eye or with early telescopes, were a spurious optical effect, and that stars did not need to be so large in a Copernican universe. When the usual story of the Copernican Revolution features clear discoveries, opposed by powerful establishments, we should not be surprised that some people expect science to produce quick, clear answers and discoveries, and see in scientific murkiness the hand of conspiratorial establishments. We might all have a more realistic expectation of science’s workings if we instead learned that the Copernican Revolution featured a dynamic scientific give and take, with intelligent actors on both sides—and with discoveries and progress coming in fits and starts, and sometimes leading to blind alleys such as Kepler’s giant stars. When we understand that the simple question of whether the Earth moved posed scientifically challenging problems for a very long time, even in the face of new ideas and new instruments, then we will understand better that scientific questions today may yield complex answers, and those only in due course. ” The Case Against Copernicus — Scientific American On this same topic also see this Scientific American article about the history of Heliocentrism vs. 
Geocentrism, starting on p. 75: “ Copernicus’s revolutionary theory that Earth travels around the sun upended more than a millennium’s worth of scientific and religious wisdom. Most scientists refused to accept this theory for many decades—even after Galileo made his epochal observations with his telescope. Their objections were not only theological. Observational evidence supported a competing cosmology—the “geoheliocentrism” of Tycho Brahe. Copernicus famously said that Earth revolves around the sun. But opposition to this revolutionary idea didn’t come just from the religious authorities. Evidence favored a different cosmology. ” From the article: The later Copernicans argued that the visible sizes of the stars were an illusion, and cited an observation of stars winking out when touching the edge of the Moon: “ During his observations, Horrocks noted that he observed the moon passing through the stars of the constellation Pleiades. As the leading dark edge of the moon passed in front of the stars they simply winked out. They vanished suddenly, meaning they did not transition to darkness as you might expect if their disk was being slowly covered by the dark edge of the moon. This meant that the ‘measured’ size of the stellar disks was in fact spurious—due to a cause unknown at the time. ” Galileo's Star Division Experiment Interestingly, the astronomer Galileo Galilei claimed to be able to divide stars with a terrestrial experiment. Galileo claimed to see an effect which should be impossible according to the star size illusion. Strange Tales of Galileo and Proving: Splitting the Stars Left—a star seen through a telescope of very small aperture. This illustration is from the Treatise on Light by the nineteenth-century astronomer John Herschel (son of William Herschel). Center—simulated view of the star supposedly divided in half by Galileo’s distant beam.
Right—simulated view showing how, after a period of months, the Earth’s motion relative to the star might cause the position of the beam against the star to change ever so slightly, proving that Earth indeed moves. If, after one year, the star is once again divided in half by the beam, then Earth’s motion around the sun (in which it returns annually to the same place) will be clearly demonstrated. “ A second of arc is small: 1/3600 of a degree. The moon has an apparent diameter of half of one degree, or 1800 seconds of arc, so the stars and the beam when viewed through the telescope would all be but a tiny fraction of the apparent size of the moon. Were Galileo’s beam about 10 cm (4 inches) thick, and were he viewing it with his telescope from a spot on the plain about 12 miles (20 kilometers) away, it would have a width of a second or two of arc, and be about the right size to divide a typical star in the way Galileo suggests. No doubt finding the exact right spot to line everything up and make this idea work would be quite a challenge! But beyond the challenge of making the idea work, there is something strange in what Galileo has said here. You see, the disk-like appearance of stars that Galileo saw through his telescope was completely spurious. Telescopes have limitations, brought on by the fact that light is a wave. They cannot concentrate light waves down into a small enough spot to show a star truly (the scientific term for this issue is diffraction). Very small telescopes are particularly limited in this regard. That disk-like appearance of 5 arc seconds in diameter that Galileo writes about is entirely a product of his telescope. That disk is formed inside the telescope. It does not exist outside the telescope. And since it does not exist outside the telescope, it cannot be cut in half by anything outside the telescope. But Galileo did not know this. This is, in fact, how astronomers first began to figure out that the star disks were spurious. 
They watched the moon pass in front of stars. They noticed (to their surprise) that the moon did not cut into a star and gradually cover up the star’s disk. Rather, the moon had no effect on the star at all for a while, and then suddenly the star winked out all at once (when the moon finally covered the true body of the star, which is just a vanishingly small point as measured from Earth). But at the time of Galileo and the Dialogue, no one had realized this. So, if the telescopic disk of a star does not exist outside the telescope, and if it cannot be cut in half by some beam placed between the telescope and the star, then Galileo’s reference to cutting a star disk as “an effect which can be discerned perfectly by means of a fine telescope” is strange indeed. It seems Galileo just made that up. ” Galileo’s divided star could never happen! “ In science, it is not cool to just make things up! It is not cool to declare an effect that cannot happen to be perfectly discernable. In science, it is not supposed to pay to make things up (although I do not know that Galileo was ever called out on this, like he was on the tides question). It was strange, and un-cool, that Galileo did just that while trying to prove that the Earth moves. ” This observation is impossible, of course, and so Professor Graney declares that Galileo is a liar.
The environment consists of a set of environment variables and their values. Environment variables conventionally record such things as your user name, your home directory, your terminal type, and your search path for programs to run. Usually you set up environment variables with the shell and they are inherited by all the other programs you run. When debugging, it can be useful to try running your program with a modified environment without having to start gdb over again. The path directory command adds directory to the front of the PATH environment variable (the search path for executables) that will be passed to your program. The value of PATH used by gdb itself does not change. You may specify several directory names, separated by whitespace or by a system-dependent separator character (‘:’ on Unix, ‘;’ on MS-DOS and MS-Windows). If directory is already in the path, it is moved to the front, so it is searched sooner. You can use the string ‘$cwd’ to refer to whatever is the current working directory at the time gdb searches the path. If you use ‘.’ instead, it refers to the directory where you executed the path command; gdb replaces ‘.’ in the directory argument (with the current path) before adding directory to the search path. The command set environment varname [=value] sets the environment variable varname to the given value. For example, this command: set env USER = foo tells the debugged program, when subsequently run, that its user is named ‘foo’. (The spaces around ‘=’ are used for clarity here; they are not actually required.) Note that on Unix systems, gdb runs your program via a shell, which also inherits the environment set with set environment. If necessary, you can avoid that by using the ‘env’ program as a wrapper instead of using set environment; see set exec-wrapper for an example doing just that. The command unset environment varname removes the variable from the environment, rather than assigning it an empty value. Warning: On Unix systems, gdb runs your program using the shell indicated by your SHELL environment variable if it exists (or /bin/sh if it does not).
If your SHELL variable names a shell that runs an initialization file when started non-interactively—such as .cshrc for the C shell, .zshenv for the Z shell, or the file specified in the ‘BASH_ENV’ environment variable for BASH—any variables you set in that file affect your program. You may wish to move the setting of environment variables to files that are only run when you sign on, such as .login or .profile.
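Putting these commands together, a typical session might look like the following (the directory, program, and variable names are illustrative, and the show environment output is abbreviated):

```
(gdb) path /opt/myapp/bin
(gdb) set environment USER = foo
(gdb) show environment USER
USER = foo
(gdb) unset environment DISPLAY
(gdb) run
```

Here unset environment DISPLAY removes DISPLAY from the program's environment entirely, which is different from setting it to an empty string with set environment DISPLAY =.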
Question Mark (?) in English! Learn the definition and useful rules for question marks with examples, and how to use this punctuation mark in sentences, with an ESL printable infographic. The question mark (?) is an important part of the English language and was developed sometime around the 18th Century. Like the full stop (.), it is used mainly at the end of an interrogative sentence. Many people use it incorrectly or don’t use it when required. Read this article and you will understand when and how to use this punctuation mark. Question Mark Rules The most obvious and common use of the question mark is to end a direct question. Look at the following sentences. - Where are you going? - What is this? - Are you mad? - Is this the place? - How much is this phone? Most people don’t know that this punctuation mark has other uses as well. Let’s take a look. To indicate uncertainty. - He lived till 1990(?) and was buried near his house. - Gandhi ji, 2nd October 1869(?) – 1948, was a great Indian leader. In a series of questions. - What? He isn’t coming? When did you speak to him? - He’s been hospitalized? Why didn’t you tell me? Is he better now? - This is your car? When did you buy it? How much did it cost? To end a tag question (a statement followed by a question). - His phone was stolen, wasn’t it? - She’s a great painter, isn’t she? - He’s lost his job, hasn’t he? Many times, people use question marks even when they’re not required. One such situation is indirect questions; these do not require question marks. - John asked Mary to marry him. - The Principal asked him his name. - His father wondered whether the car was fine.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. May 25, 1997 Explanation: Looking like a fleet of futuristic starcruisers, NASA's highly successful series of High Energy Astrophysical Observatory (HEAO) spacecraft appear poised over planet Earth. Labeled A, B, and C in this vintage illustration, the space-based telescopes were known as HEAO-1, HEAO-2, and HEAO-3 respectively. HEAO-1 and HEAO-2 were responsible for revealing to earthlings the wonders of the x-ray sky, discovering thousands of celestial sources of high-energy radiation. HEAO-2, also known as the Einstein Observatory, was launched near the date of the famous physicist's 100th birthday (November 1978) and was the first large, fully imaging x-ray telescope in space. HEAO-3, the last in the series, was launched in 1979 and measured high-energy cosmic-ray particles and gamma rays. These satellite observatories were roughly 18 feet long and weighed about 7,000 pounds. Their missions completed, all have fallen from orbit and burned up harmlessly in the atmosphere. Authors & editors: NASA Technical Rep.: Jay Norris. Specific rights apply. A service of: LHEA at NASA/GSFC &: Michigan Tech. U.
Nursing teachers are responsible not only for filling their students’ minds with valuable information, but also for teaching them the importance of leadership, teamwork, respect and compassion. With so many skills to teach, nursing classes can become quite monotonous and dull. But with a bit of creativity, you can use interactive activities for nursing students to energize your classroom and keep your students excited. Lab Value Bingo For the lab value bingo game, you need to create bingo cards for each student in the class. The format of the card should be the same as a traditional bingo card, which features a chart with five columns and five rows. However, rather than filling the boxes in with miscellaneous numbers, you will fill the boxes with patient lab values. Before you begin, make a list of 25 laboratory tests and their normal values. For example, the normal red blood cell value is 4.5 to 5.0 million, while the normal sodium value is 135 to 145. Fill in the bingo cards with the appropriate numbers, arranging the numbers in different patterns for each card. As you play the game, call out the name of the laboratory test. Students must then mark the appropriate lab value on the bingo cards. The first student to mark five consecutive boxes in a vertical, horizontal or diagonal line wins the game. Role-Playing Divide the class into small groups of two to four students each. Assign a role to each person in the group. In groups of two, the roles will simply be patient and nurse. However, larger groups can include additional roles to make the role-playing game more challenging. For example, you might ask students to play a doctor, an overbearing parent or a spouse who does not speak English. Give the "patient" a medical condition and help him develop a challenging situation for the “nurse” to handle. It might also be necessary to assign specific tasks to the other characters as well. However, the nurse should remain unaware of the situation.
Once all characters are ready, the nurse is informed of only the patient’s medical condition. She must then properly treat the patient based on information provided by the remaining characters. The nursing Jeopardy game uses simple trivia questions to test the nursing students’ knowledge. Divide the class into groups of three and ask the first group to sit at the front of the classroom. The teacher asks a trivia question and the three students rush to the chalkboard to answer. The first student to correctly answer the question and return to her seat receives a point. The teacher should pose 10 questions to this first group before moving on to the second group. The entire process continues until all groups have participated. Then, the winning students from each group compete against one another in a final round of Jeopardy. The trust walk activity is a relatively simple game that does not require the students to use their nursing knowledge. However, the game helps teach them to trust one another and work together. In a large room, set up a simple obstacle course using traffic cones, chairs and small blocks. Arrange the students into groups of two. One partner is blindfolded and the other partner must use verbal cues to direct the blindfolded person through the course. The blindfolded students learn to trust their guides, while the guiding students learn the importance of leadership and proper instruction.
(Image: A laser guide star cast on the night sky from the William Herschel Telescope at the Roque de los Muchachos Observatory on the island of La Palma in the Canary Islands.) When astronomers look for parts of the galaxy that could contain life, they generally search for elements like oxygen and carbon. But another element essential to life could be the key to finding systems in the Milky Way that have the right conditions for living organisms. "Phosphorus is one of the six elements on which biology depends," Jane Greaves, an astronomer at Cardiff University in Wales, told Popular Mechanics in an email. "The others are carbon, hydrogen, nitrogen, oxygen and sulphur. Without phosphorus, there would be no adenosine triphosphate (ATP), which is the molecule cells use to transfer energy." Phosphorus is relatively rare in the universe, the rarest of the six elements required for life as we know it. It is created in trace amounts in some stars' natural evolution, but the majority of the universe's phosphorus is fused in supernovae. The element, atomic number 15, only accounts for about 0.0007 percent of all matter. Greaves and fellow Cardiff astronomer Phil Cigan are presenting new research at the European Week of Astronomy and Space Science in Liverpool that compares the amount of phosphorus in the stellar dust of two supernova remnants—Cassiopeia A (Cas A) in the constellation Cassiopeia, and the Crab Nebula in the constellation Taurus. The early results suggest that the Crab Nebula contains significantly less phosphorus than Cas A. The discrepancy comes as a surprise, as computer models suggested the two collections of stellar dust, created by the same type of supernova, should contain similar amounts of phosphorus. Understanding this difference could help us understand how levels of this crucial element are distributed across the stars.
"Cas A and the Crab Nebula are Core Collapse Supernovae, where the middle of the star implodes and then rebounds very fast, expelling the new elements made," says Greaves. "My guess is that Cas A had more reactions that made phosphorus because the star was more massive or denser, but that's just a guess so far." If unknown processes cause some stellar explosions to produce more phosphorus than others, then life could be isolated to phosphorus-rich areas of the galaxy. At this point, however, only Cas A and the Crab Nebula have been studied with telescope spectroscopy to determine their chemical compositions. "As far as I know, phosphorus has not been looked for in any other supernova, of any type," says Greaves. The team stresses that their research is preliminary and uses limited data. Phosphorus was detected in Cas A by a team of international astronomers in 2013. Greaves and Cigan only recently used the William Herschel Telescope in the Canary Islands to study the infrared spectrum of the Crab Nebula, measuring the proportion of phosphorus and iron to compare to that of Cas A. Observations of the Crab Nebula were somewhat hindered by cloudy weather, however, and follow-up research is needed to confirm that it is indeed lacking in the element. Another possibility is that the age difference between the two clouds of cosmic dust could explain the different amounts of phosphorus. The Crab Nebula was created by a supernova seen and documented from Earth by Chinese astronomers almost a thousand years ago, while the light from the supernova that created Cas A is thought to have reached Earth about 300 years ago, though no one is known to have observed it. "It is possible that with the older event, the Crab Nebula, that some phosphorus has disappeared from gas and [formed] into solid material, something we hope to learn more about at this scientific meeting," says Greaves. After being ejected from supernovae, phosphorus gases coalesce and are trapped in rocky objects.
These rocky, icy, and metal bodies clump together further to create rocky planets, which is how most of the phosphorus made it to Earth. However, the phosphorus that was first used in cells to transfer energy, and spark reproductive life, likely came after the planet formed and had large bodies of water, as meteorites bearing phosphorus crashed into the wet parts of the world. To find where else in the galaxy the spark of life could occur, the trick might be to look for planetary systems that came from phosphorus-rich areas. The upcoming 6.5-meter James Webb Space Telescope, designed for infrared astronomy, should be particularly suited to measuring phosphorus in supernova remnants—gases that will ultimately form stars and planets. "I'm very much looking forward to JWST, as this can potentially look for schreibersite [an iron-nickel mineral containing phosphorus] in discs around stars where new planets are forming, and it has a good wavelength range to look for this mineral we know occurs in meteorites," says Greaves. With only two supernova remnants scanned for the element, and the capability to look for schreibersite in planetary systems coming online soon, the hunt for life-bearing phosphorus could just be getting started.
A rabbit is fluffy, mud is squishy, and a balloon is stretchy. What substances can be fluffy, squishy and stretchy at the same time, and are so much fun to play with? Silly Putty, Gak and slime! These substances can be confusing, too. Most substances become harder when cooled and flow much better as they warm up. Think of how honey slowly oozes from the bottle on a cold day and rushes out on a hot day. Silly Putty, Gak and slime are different. They can feel as hard as a solid when squeezed in your fist, but as soon as you release your grip, they ooze out through your fingers like a thick liquid. Why would slime be different? In this activity you will make your own slime, play with it and discover what makes it flow! Is it a solid or a liquid? Solids consist of tightly packed particles called molecules or atoms that clasp onto each other so the solid holds its shape. Liquids have particles that can slide over and around one another, allowing the fluid to flow. Only adding or taking away heat can make some liquids, like water or oil, flow better or worse. These are called Newtonian liquids. Non-Newtonian liquids, such as ketchup and slime, are different. Manipulations like squeezing, stirring or agitating can also change how they flow. Sometimes they can become so viscous—or have such a hard time flowing—that they could easily be mistaken for a solid. One such non-Newtonian liquid can be created with white school glue, which is a polymer. A polymer is made from long chains of repeating parts called monomers. One polymer might consist of hundreds of thousands of monomers. Polymers are also called macromolecules, or large-sized molecules. Some are man-made, such as plastic and nylon. Others occur in nature, such as DNA, wheat gluten and starches. White school glue is liquid because its long polymers can slide over and along one another. It does not flow easily, though; it is quite viscous. 
The addition of some chemicals—such as a borax solution (or sodium tetraborate decahydrate dissolved in water)—can cause cross-links to form between the polymers. It is as if the very long molecules started to hold hands. Will the result still be a fluid where the polymers can glide over each other, or will it become a solid? - Elmer's glue or other polyvinyl acetate (PVA) glue - Hot water - Stirring rod or plastic spoon - Measuring spoon (one half tablespoon) - Measuring cups (one half and one quarter cups) - Goggles or eye protection (handling borax can irritate eyes) - A work space (and work clothing) that is protected and won't be damaged if sticky slime gets on it - Adult helper - Food coloring or marker (optional) - Ziplock bag or airtight container to store your slime (optional) - Protect your work space and clothing—slime can be sticky and hard to remove! - Put on goggles or glasses, as the borax solution can irritate the eyes. - Have an adult helper stir one half tablespoon of borax powder into one half cup of lukewarm water in a cup. Stir well until the solution looks clear, label the container "2 percent borax solution," and set aside. - Pour one half cup of glue and one quarter cup of warm water in a container. Note that this solution has 2/3 (or 67 percent) glue. - Optional: If you like colored slime, you can mix in a few drops of food coloring. Another option is to put the tip of a marker into the water for a short time so the ink dissolves in the water. - Stir the glue/water mixture with the stirring rod. - Add five tablespoons of the borax solution to your glue/water mix. - Stir with the stirring rod. After some stirring, you should see a substance sticking to your stirrer. Does the sticky substance look like a solid or like a liquid—or can you not tell yet? - If your substance is still watery, add more borax solution ¼ tablespoon at a time until there is very little watery solution left. 
- Collect the sticky substance in your hands and work it with your hands for about one minute. How does the slime feel? How does the stickiness and stretchiness change when you work it for a while? - Would you say the slime is a liquid, or is it more like a solid? - Squeeze your slime into an oval and use both hands to pull it apart quickly. Does it tear or elongate? - Squeeze your slime into an oval again and use both hands to pull it apart slowly. Does it tear or elongate? How thin can you get it? - Work your slime with your hands to form a ball. Try to stick your finger into it forcefully. How deep does your finger go? Does it feel like you poked your finger into something solid, or something liquid? - Now try to stick your finger into it gently. How deep does your finger go? Does it feel like you poked your finger into something solid, or something liquid? - Squeeze your slime into a ball again and put it in a container. What do you think will happen if you leave it there for a while? Will it stay in an oval, like a solid would do, or will it relax into a puddle and take the form of the container, like a liquid would do? - Optional: To keep your slime nice and soft, store it in an airtight container or ziplock bag. - Extra: Add other substances, such as shaving cream or liquid soap, to your glue/water solution. Will you still obtain slime? How will this slime feel and look different? - Extra: Leave your slime uncovered for a day. What do you think will happen? Will it become more like a solid, or start to flow easier? Why do you think this will happen? Observations and results Did the slime sometimes feel like a solid and sometimes like a fluid? This is expected. This type of slime thickens or becomes harder or more viscous when you squeeze or stir it. This happens because it is made up of very long particles that are cross-linked. When you leave the particles alone they will coil up, and the coils can slide over each other. 
When you apply pressure by squeezing or stirring, some coils unwind and become entangled, making it harder for the slime to flow. When you stirred your slime, tried to rip it apart or poked your finger into it with force, the polymers were entangled and it looked like a solid. As a result, it was hard to stir, it ripped apart, and your finger bounced back. When you left your slime alone to rest, gently pulled it apart or gently poked your finger in it, the polymers were curled up. They could slide over one another, and it felt more like a liquid. As a result, the slime took the form of the container, it could be stretched thin, and your finger could move through it. It did not flow as easily as water because it consists of long cross-linked particles, whereas water consists of small particles. When a substance keeps its volume but loses its form when left alone, scientists call it a liquid. Do not pour glue solutions or slime down a drain because they can form clogs. Instead, throw them away in the garbage. Wash all equipment with soapy water. More to explore It's a Solid... It's a Liquid... It's Oobleck!, from Scientific American The Scientific Secret of Stretchy Dough, from Scientific American Playing with Polymers, from Scientific American This activity brought to you in partnership with Science Buddies.
Introduction: 1 Bit Full Adder An adder is a digital electronic circuit that performs addition of numbers. Adders are used in every computer's processor to add various numbers, and they are used in other operations in the processor, such as calculating addresses of certain data. In this instructable, we are going to construct and test a one-bit binary full adder. The attached figure shows the block diagram of a one-bit binary full adder. A block diagram represents the desired application and its various components, such as inputs and outputs. Inputs: A, B, Carry in (Cin) Outputs: Carry out (Cout), Sum (S) IC chips: 74LS136, 74LS08, 74LS32. (Optional: 74LS00, if you do not have an XOR IC chip such as the 74LS136) 2 330-ohm resistors 2 LEDs (different colors preferred) 10 kilo-ohm resistor bank Wires as needed Step 1: Truth Table, Derived Boolean Function, and Schematic The truth table of a one-bit full adder is shown in the first figure; using the truth table, we were able to derive the Boolean functions for both the sum and the carry out, as shown in the second attached figure. Furthermore, the derived Boolean functions led us to the schematic design of the one-bit full adder. Finally, I did not have any XOR IC chips, so I used the XOR mixed-gates equivalent, which is shown in the last figure. Step 2: Implementation on a Breadboard If the switch is up, then it is off. If the switch is down, then it is on. The White wire represents A. The Blue wire represents B. The Yellow wire represents Carry in (Cin). The Green LED represents the Sum, and the Red LED represents the Carry out (Cout).
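Before wiring the chips, the derived Boolean functions can be sanity-checked in software. This short sketch (not part of the original build) evaluates S = A ⊕ B ⊕ Cin and Cout = AB + Cin·(A ⊕ B) over all eight input combinations:

```python
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """One-bit full adder: returns (sum, carry_out) for bits a, b, cin."""
    s = a ^ b ^ cin                   # sum is the three-input XOR
    cout = (a & b) | (cin & (a ^ b))  # carry out, as derived from the truth table
    return s, cout

# Print the full truth table, matching what the LEDs should show:
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            print(f"A={a} B={b} Cin={cin} -> S={s} Cout={cout}")
```

For each switch setting on the breadboard, the green LED (Sum) should match the S column of this printout and the red LED the Cout column.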
The term “Ethernet” is not used in the IEEE 802.3 standard to describe UTP cables. However, technical professionals say this all the time. Is it wrong? The answer is no, but not understanding how to use the correct terminology can make things complicated. What is Ethernet? Ethernet is not a cable; it’s an engineering standard. It’s a way of connecting a number of computer systems to form a local area network, with protocols to control the passing of information. It is common for people to call UTP cables Ethernet cables, because these networks follow the IEEE standards for Ethernet cabling. The original Ethernet standard was developed by Xerox in the 1970s. It is now managed by the IEEE (Institute of Electrical and Electronics Engineers), whose 802.3 working group develops the standards for Ethernet networks. In addition, suppliers often advertise Unshielded Twisted Pair, or UTP, cables as Ethernet cables. UTP cables are the most common cables used in networks and have become closely identified with Ethernet. An Ethernet network system is not restricted to just UTP cables. There are many grades of UTP, coaxial, and fiber optic cables that can be installed to transmit signals. Below are some common types:
- Twisted pair cables
  - U/UTP, U/FTP, F/UTP, S/UTP, SF/UTP, F/FTP, S/FTP, and SF/FTP
  - CAT5, CAT5e, CAT6, CAT6A, CAT7, and more
- Fiber optic cables
  - Single-mode and multimode
  - OS1, OM1, OM2, OM3, and OM4
- Coaxial cables
  - RG6, RG6Q, RG11, RG56, RG58, RG59, and more
UTP cables are not limited to transmitting just one type of signal. They can transmit many different types of signals, such as data, voice, serial, and audio. Describing UTP cables as Ethernet cables is acceptable, so long as the terminology is understood and used correctly.
Dear EarthTalk: Is there any way to harness volcanic energy to meet our electricity and other power needs? —Antonio Lopez, Chino, CA The short answer is yes: Heat generated by underground volcanic activity can and has been harnessed for electricity for over 100 years around the world. Utilities can capture the steam from underground water heated by magma and use it to drive the turbines in geothermal power plants to produce significant amounts of electricity. Getting at the sources is not so easy or cheap, though, as it requires drilling into unstable sections of the Earth’s crust and then harnessing the heat energy miles below the surface. Despite these difficulties, volcanic geothermal energy reserves account for about a quarter of Iceland’s energy consumption (with the rest taken up by another clean renewable resource, hydropower dams). According to statistics from the Geothermal Energy Association, the Philippines is also a big user of geothermal power: About 18 percent of that country’s electricity comes from underground volcanic sources. And in New Zealand, geothermal accounts for about 10 percent of total electricity consumption. But believe it or not, the United States is actually the world’s largest producer of volcano-derived geothermal electricity, but still only derives less than one percent of its total power from such sources. California and Nevada are the leaders in this nascent form of renewable energy domestically, but promising efforts are also underway in Oregon, Utah, Alaska and Hawaii. Some analysts believe that the U.S. has enough geothermal capacity to provide 20 percent or more of the nation’s electricity needs. Against the backdrop of diminishing oil reserves, tapping volcanic energy has become a high priority for some other regions as well. 
The war-ravaged East African nation of Rwanda is hoping to provide power for its people by harnessing the energy from volcanic gases at Lake Kivu, one of the continent’s largest lakes, covering some 1,000 square miles. The lake is one of three known “exploding” lakes subject to violent and sometimes deadly “overturns” triggered by volcanic activity. An adjacent volcano mixes methane and carbon dioxide into the lake, making it a veritable tinder box and threatening the lives and homes of some two million people in the region. In response to the risk—and also to produce energy—the Rwandan government has started using a large barge to suck up water and extract the methane gas therein. The methane is then used to fire the gas-powered Kibuye power plant. Already the system is producing 3.6 megawatts of electricity—some four percent of Rwanda’s total power supply. Within a few years, project backers hope to be generating between 50 and 100 megawatts of power from the operation. Extracting the methane also significantly reduces the risk of explosions, thus providing a measure of safety for area residents. Humans have barely put a dent in the amount of power that can be captured from volcanic activity, but analysts expect to see much more of this form of power coming online over the next few decades. The U.S. Geological Survey refers to this phenomenon as the “plus side of volcanoes.” Environmentalists and others are hopeful that volcanic geothermal energy can become a major player in meeting a significant portion of our energy needs in our increasingly carbon-constrained world.
The Earth's only natural satellite is a spectacular sight even with the naked eye. With a small telescope or pair of binoculars, the view is even more amazing. Dark, flat plains called maria, deep craters and bright rays of ejected material pepper the rugged surface. As the moon orbits Earth, it always keeps one face toward the planet. The permanently hidden part is properly called the "far" side – not the "dark" side. In fact, the part of the moon that is dark changes constantly. The part that is illuminated indicates the moon’s phase. A full cycle of phases requires 29.53 days, or a lunar month. In 1500 there were no telescopes, but Leonardo da Vinci was able to observe that the dark part of the crescent moon still has a faint glow. He correctly surmised that this was due to reflected light from Earth. As the moon orbits, it rocks back and forth a little, a phenomenon called libration. This allows people to see just a little bit over the edge, into the far side. About 59 percent of the entire lunar surface is visible from Earth. Today, the moon has been thoroughly mapped by orbiting satellites and walked upon by human visitors. Nevertheless, the view of the moon from Earth is still a breathtaking sight.
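The 29.53-day cycle of phases mentioned above can be turned into a rough phase calculator. This is a simplified sketch, not an ephemeris: it assumes a uniform cycle measured from a known new moon and divides it into eight equal bins, which is only approximately right because the moon's orbit is not perfectly uniform.

```python
# Rough moon-phase sketch based on the 29.53-day synodic month.
# Assumes days are counted from a known new moon; real ephemerides
# account for orbital irregularities that this ignores.

SYNODIC_MONTH = 29.53  # days per full cycle of phases (a lunar month)

def phase_fraction(days_since_new_moon: float) -> float:
    """Fraction of the cycle elapsed: 0 = new moon, 0.5 = full moon."""
    return (days_since_new_moon % SYNODIC_MONTH) / SYNODIC_MONTH

def phase_name(days_since_new_moon: float) -> str:
    """Name the phase by dividing the cycle into 8 bins centred on the principal phases."""
    f = phase_fraction(days_since_new_moon)
    names = ["new", "waxing crescent", "first quarter", "waxing gibbous",
             "full", "waning gibbous", "last quarter", "waning crescent"]
    return names[int((f * 8 + 0.5) % 8)]
```

For example, day 0 of the cycle is the new moon and roughly day 14.8 (half of 29.53) is the full moon.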
Heat treatments consist of three main steps: - Heating the material - Maintaining it at a defined temperature - Cooling it at a rate that follows appropriate laws The dependence of the resulting structure on temperature and cooling rate can be studied through transformation diagrams (whose lines depend on the carbon content of the steel and on the presence of alloying elements in solid solution in the austenite): - TTT (Temperature, Time, Transformation) diagrams, made by rapidly cooling the steel to various temperatures, holding each temperature constant, and recording the beginning and end of the various transformations over time; - CCC (continuous cooling curves), constructed by recording the beginning and end of the various transformations over time along various cooling trajectories. During cooling, a steel undergoes these phase transformations: - Pearlitic transformation: isothermal transformation of the austenite of a steel of eutectoid composition, leading to the formation of pearlite (constituted by aggregates of alternating laminae of ferrite and cementite); - Bainitic transformation: isothermal transformation of the austenite of a eutectoid steel, leading to the formation of bainite (upper or lower depending on the temperature), constituted by an aggregate of ferrite and iron carbide. - Martensitic transformation: a transformation that occurs in a very short time at a certain temperature (the start temperature of the austenite-martensite transformation). This transformation differs from the previous ones because it does not proceed by nucleation and growth over time: it takes place through a coordinated movement of atoms, without diffusion processes, so the chemical composition of the martensite is identical to that of the austenite from which it forms.
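The way a continuous cooling diagram is read can be sketched as a simple classification by cooling rate. The threshold values below are hypothetical placeholders: as noted above, the real lines of the diagram depend on the carbon content and on the alloying elements, so this is illustrative only.

```python
# Illustrative sketch of reading a continuous cooling diagram: the product
# of austenite decomposition depends on how fast the steel is cooled.
# The threshold rates are invented placeholders, NOT real data.

def transformation_product(cooling_rate_c_per_s: float,
                           critical_rate: float = 100.0,  # hypothetical, °C/s
                           bainite_rate: float = 10.0) -> str:  # hypothetical, °C/s
    """Very rough classification of the structure obtained on cooling."""
    if cooling_rate_c_per_s >= critical_rate:
        return "martensite"  # diffusionless transformation: very fast cooling
    if cooling_rate_c_per_s >= bainite_rate:
        return "bainite"     # intermediate cooling rates
    return "pearlite"        # slow, diffusion-controlled cooling
```

The point of the sketch is the ordering, fast to slow: martensite, then bainite, then pearlite, exactly as the three transformations are described above.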
The main steel heat treatments can be divided into: - heat treatments in which the steel is heated to a temperature above the critical temperature: ANNEALING, NORMALIZING, HARDENING; - heat treatments in which the steel is heated to a temperature below the critical temperature: softening annealing, improvement of workability, etc.; - heat treatments aimed at particular results: surface hardening, solubilization hardening of austenitic steels, etc. Let us examine below only the general aspects of some of these heat treatments. Annealing is a heat treatment characterized by heating to a high temperature, a prolonged stay at that temperature, and subsequent slow cooling. The aim of the treatment is to make the material more easily workable with machine tools, or to allow further cold deformation if the material has been work hardened by previous deformation. The purpose of annealing is also to homogenize the composition of the raw material, to obtain a specific microstructure with specific physical and mechanical properties and, in the case of quenched steels, to cancel the effects of martensitic hardening. Complete annealing is almost never performed because, in addition to being uneconomical (it requires temperatures higher than the critical one), it usually leads to a coarse-grained structure. A large grain size drastically decreases the toughness of the material (increasing its brittleness), so it is better avoided. Normalizing is performed by heating the steel to about 70 °C above the critical temperature, holding it at this temperature for a time sufficient to complete austenitization, and letting it cool freely in air. In this way you get a fine-grained steel with a uniform and homogeneous structure regardless of the initial condition.
Hardening consists in heating the steel above the critical point, holding it at that temperature long enough to obtain an austenitic structure through to the core, then cooling it faster than the critical hardening rate, so as to obtain at room temperature a martensitic structure characterized by great hardness. The fundamental conditions for a steel to attain a fully martensitic structure are: - the temperature to which the steel is heated and the residence time at this temperature must allow the starting structure to become fully austenitic; - the cooling rate must be high enough to prevent the transformations at high temperature; - the austenite-martensite transformation temperature must be higher than room temperature. To avoid overheating, and hence grain growth, the temperature to which the steel must be brought before being hardened (the hardening temperature) should be indicatively about 50-70°C above the critical temperature. The rapid cooling needed to harden the steel is achieved by immersing the piece in a quenching bath, which can be, in order of increasing severity, air, oil, water, brine (a concentrated solution of salts in water), or molten salts when a high-temperature quenching bath is needed. Since the austenite-martensite transformation occurs with an almost instantaneous volume increase, hardening induces significant internal stresses in the steel, which can lead to distortion or even internal fracture (the formation of quenching cracks). To avoid the risk of quenching cracks, it is necessary to choose carefully the least drastic quenching bath that will still harden the piece. Special heat treatments (such as bainitic hardening) have also been developed. The optimal choice (which depends on both the type of steel and the size of the piece) is not simple, and it is generally necessary to consult the steel manufacturer's indications or rely on experience.
In this regard, we attach a pdf file created by the knife maker DENIS MURA, with indications, based on his experience, for carrying out the heat treatment of some steels. Martensite is hard and strong, but brittle. Heating martensite, a treatment called tempering, makes it possible to obtain structures with an advantageous combination of hardness and toughness. Tempering consists in heating a hardened steel to a temperature below the critical one (at most 600-650°C), holding it at that temperature for an appropriate time, and then cooling it, generally in air. The magnitude of the effects obtained (a decrease in hardness with a simultaneous increase in the toughness and ductility of the steel) is a function of the tempering temperature and its duration; the results are more marked at higher temperatures and with longer treatment times. Structure and properties of metallic materials - Alberto Cigada - Città studi Edizioni
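The rule of thumb for the hardening temperature given above (indicatively 50-70 °C above the critical temperature) and the severity ordering of the quench baths can be sketched as follows. All values here are indicative only, as the article itself stresses; for a real piece one should consult the steel manufacturer's data.

```python
# Sketch of the hardening rules of thumb described above.
# The 50-70 °C margin and the media list come from the article;
# the function and default margin are illustrative conveniences.

QUENCH_MEDIA = ["air", "oil", "water", "brine"]  # in order of increasing severity

def hardening_temperature(critical_temp_c: float, margin_c: float = 60.0) -> float:
    """Indicative austenitizing temperature before quenching."""
    if not (50.0 <= margin_c <= 70.0):
        raise ValueError("margin should stay within the 50-70 °C guideline")
    return critical_temp_c + margin_c
```

For example, with a critical temperature of 727 °C (the eutectoid temperature) and the mid-range 60 °C margin, the indicative hardening temperature is 787 °C.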
“Academically, children are at varying levels when they enter kindergarten,” Karen Jokinen, a kindergarten teacher in Markham, Ont., says. “There is no expectation that children know the alphabet and numbers, or can write their names, but if they can do some of this, it helps with their confidence at school.” One of the most important things you can do with your tot, according to teachers, is read together. “Go to the library and borrow simple books with large print that don’t have many words,” suggests Vaughan, Ont., teacher Anna Elia Kontostergios. “That way, you can both point to words, count letters, and talk about how pictures help us read unfamiliar words.” Elia Kontostergios also recommends creating your own storybooks by using pictures of your child. “Use snapshots of her doing day-to-day things and create repetitive sentences to match the pictures such as, ‘I am eating. I am playing,’” she says. “Pointing to each word while reading it introduces one-to-one matching, high-frequency words they’ll be learning, and punctuation.” Aside from the alphabet and words, you’ll want to familiarize your soon-to-be scholar with numbers. Try counting games (like making groups using toys from around the house), counting objects while going for walks (the number of red cars they see, for example), and pointing out numbers on the calendar. What else does your kid need to know before starting kindergarten? Here are the skills teachers expect in junior kindergarten and senior kindergarten: Junior kindergarten checklist: – Identify their name – Recognize some letters – Hold a book correctly and flip pages – Grip a pencil properly – Respond when asked, “What’s your name?” – Play cooperatively – Follow one- or two-step directions (e.g. “Take your folder out of your backpack and bring it to your desk.”) – Understand that they cannot leave the classroom – Listen attentively for about five minutes – Answer simple questions (e.g. 
“What’s your favourite kind of animal?”) Senior kindergarten checklist: – Know the difference between letters and numbers – Count the number of words in a sentence – Write most letters – Write their name – Read high-frequency words (e.g. it, is, the, me, mom, dad) – Point to words while they’re reading simple repetitive texts – Identify numbers one to 10 and represent each number using objects What if she’s not potty trained? My kid wasn’t fully trained until 14 days before her first day. And even though Addyson could do her business on the potty, I wasn’t confident she could clean herself after doing a “job,” as my bubby used to say. “Children must be able to wipe themselves. I cannot stress how important this is,” says Cheryl King, a kindergarten teacher in Brampton, Ont. “I have had terribly distraught kids tell me they can’t wipe, and sadly, I cannot help them—bathrooms are off-limits to teachers.” If an accident does happen (and they do), students are asked to change into their spare outfit, and take their dirty clothes home. “It’s normal for children to have accidents. Sometimes they’re so engaged in their activities that they don’t want to stop and use the bathroom,” says Elia Kontostergios. (Addyson’s teacher put up signs reminding busy, forgetful students to go pee). Note that early childhood educators can help with bathroom accidents, so your child isn’t on her own if there is an ECE in the classroom. King asks parents to pack flushable wipes—they’re easier to use than toilet paper. And teach flushing and handwashing habits at home.
We are studying the transcripts that are produced when a gene is switched on, finding out how they interact with other components inside cells and how changes in these interactions can contribute to disease. Genes are the instructions for our cells, encoded within DNA. When a gene is switched on, or ‘transcribed’, it makes a transcript known as RNA. The most common type of RNA is messenger RNA (mRNA), which carries the instructions for building proteins, while other RNAs play important roles in their own right. RNAs do not float freely inside a cell. Instead, they are coated by proteins to form ribonucleoprotein complexes (RNPs). These proteins guide the RNA through the many steps of its journey through the cell. We have developed new techniques to investigate how different RNAs and proteins come together in RNPs and find out how this contributes to their functions. We want to understand the role of RNPs in the development of nerve cells and discover how these roles have evolved over time. We’re also investigating how faulty RNPs lead to conditions affecting the nervous system, such as amyotrophic lateral sclerosis (ALS, also known as motor neurone disease). We hope that our discoveries will lead to new therapies for this devastating illness.
Bibliographic information: Bernhard, D. (1993). Alphabeasts: A hide & seek alphabet book. New York: Holiday House. Annotation: This is about the letters in the alphabet, as well as an animal that starts with each of the letters. The children also get to search for the animal in the picture, as well as find all of the animals and letters in a giant picture at the end of the book, to recall what they just learned. Grade Level: K-1 Readers who will like this book: Children learning the alphabet, and children associating the alphabet with words and pictures. Personal Response and Rating: 4.5/5. I liked how there were three different aspects to the book. I think that having the animals be hidden for the children to find would keep them engaged and also help them want to read the words, to figure out what they are looking for. Text Dependent Question: Which animal was the hardest to find? How can the pictures help us to figure out and learn about the animals that we don’t know? Strategy: #2 Alphabet Books This strategy gives students a chance to practice expanding their vocabulary. Students create a book using vocabulary words. Each student picks a letter and uses a vocabulary word from the unit to describe and talk about on their page. This gets students working on their writing skills while using vocabulary, and expanding their knowledge of the word. They first need to examine alphabet books, then prepare an alphabet chart. Next they choose a letter and design the page format. Then they use the writing process to create the pages. Lastly they compile the pages to make a complete book. I think this strategy works well with this book because students are given a model of what an alphabet book looks like, and it uses animals and illustrations. They learned the names by matching the letters and pictures together, and now they will have a chance to do that in their own way fitting it to their specific lesson.
Life was good for Stone Age Norwegians along Oslo Fjord Southeastern Norway is the most populous part of Norway today. Based on an analysis of more than 150 settlements along Oslo Fjord, the area apparently also appealed to Stone Age people. Eleven thousand years ago at the end of the last ice age, Norway was buried under a thick layer of ice. But it didn’t take long for folks to wander their way north as the ice sheet melted away. The first traces of human habitation in Norway date from roughly 9500 BC. Steinar Solheim is an archaeologist at the University of Oslo’s Museum of Cultural History who has worked on numerous excavations of different Stone Age settlements around Oslo Fjord. Now he and colleague Per Perrson have investigated longer-term population trends in the Oslo Fjord region, based on 157 different Stone Age settlements. All were inhabited between 8000 and 2000 BC. The two researchers tried to determine whether the population during this time was stable, or if living conditions were better or worse for people who lived here during different periods. A newly forested landscape Solheim says that forests began to grow in this region after 9000 BC. "The climate was also quite different, and it was probably a bit warmer than it is today,” he said. “We see a lot of hazel, alder, elm, and later oak, all of which are tree species that prefer warmer environments.” This area of Norway was also much lower in elevation than it is today, since the weight of the glacial ice was enough to depress the land itself. That means the coastline at the time was also higher than it is today. Stone Age settlements were usually down by the water. The people who lived here used wood to keep their fires going, and their cooking pits and fireplaces are among the few things that archaeologists can still find after many thousands of years. But archaeological digs of the settlements also yield stone tools, residues from tool production and remainders from cooking fires. 
The charcoal from the fires can be used to date the site using radiocarbon dating. In a new study, the researchers used all available dates (512 in total) from the settlements to draw conclusions about population trends for the region between 8000 and 2000 BC. A stable life The researchers used a method that relies on radiocarbon dates as an indication of the amount of human activity in an area. The idea is to look at the temporal distribution of radiocarbon dates, to see whether the population has been stable or whether there have been major fluctuations in human activity. The researchers also used a simulation-based model to account for oversampling and to test whether the dates from the archaeological sites show a stable population over time, or whether the dates are actually more randomly distributed. Using this approach, the researchers found that there was a stable, cohesive population in the Oslo Fjord area between 8000-2000 BC. A little conundrum There is also evidence of settlements that are older than this, but researchers have not found any charcoal, which makes it impossible to accurately date the settlements. This presents a bit of a conundrum, Solheim says. "It is possible that they used something other than wood to cook with, such as blubber, but we just don’t know,” he said. Solheim says that people may have been more mobile at the beginning of this period, but they eventually settled in more permanent locations. “Eventually, you get a network of settlements, where some places are more specialized for hunting or fishing or for other resource use,” he said. Solheim says that they also find traces of more permanent hut-like structures that are surrounded by berms or embankments. A good life by the sea If there was indeed a stable population over the millennia in the region, it means that the people living here lived well, Solheim said. 
"It appears that they have managed to live quite well on the resources they found along the sea," says Solheim. These populations also managed to survive through known climate anomalies that posed problems for other settlements during the same period. One prominent example is the Finse event, also known as the 8.2 ka event, where there was a sudden and extreme drop in global temperatures starting around 6000 BC that persisted for two to four centuries. This could have been catastrophic for people who lived here, but Solheim’s analysis shows that the population in the region remained stable in spite of the sudden deep freeze. - Solheim, Perrson: Early and mid-Holocene coastal settlement and demography in southeastern Norway: Comparing distribution of radiocarbon dates and shoreline-dated sites, 8500–2000 cal. BCE.
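The summed radiocarbon-date approach described above can be sketched in a few lines: each date is treated as a probability distribution over calendar years, and the per-date densities are summed into a proxy curve for human activity. This is a deliberately simplified illustration (plain Gaussians, no calibration curve, invented dates), not the method as actually implemented in the study.

```python
# Simplified sketch of a summed-probability curve from radiocarbon dates.
# Each date is modelled as a Gaussian over calendar years; real analyses
# first calibrate each date, which this sketch omits.
import math

def summed_probability(dates, years):
    """dates: (mean_year_BC, sigma) pairs; returns one summed density per year."""
    curve = []
    for year in years:
        total = 0.0
        for mean, sigma in dates:
            z = (year - mean) / sigma
            total += math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
        curve.append(total)
    return curve
```

A flat curve over 8000-2000 BC is what would indicate the stable population the researchers report; peaks and troughs would indicate fluctuations in activity.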
How irrigation occurs – three basic steps: ONE: Get a Permit A water permit/consent must be granted. All irrigation takes in New Zealand are regulated to ensure the sustainability of our water resources. A water ‘take and use’ permit describes the site-specific conditions that must be followed. For example, a fish screen is often required before water is allowed to be removed from a river. All takes from New Zealand rivers have a ‘minimum flow’ applied to them. This means when a river’s flow drops below a certain level (the threshold at which aquatic life is maintained) the water take must stop. The permit also states how much water can be removed at any one time (a maximum rate), and over the irrigation season (a seasonal volume). The permit will also specify what the water is to be used for. In 2010 a new law was passed which requires measurement of all irrigation takes. Irrigators now have to submit their water use data, and can be fined or prosecuted if they fail to do so or renege on the conditions of their water permit. TWO: Collect and distribute water The water needs to be collected and distributed to the land. Irrigators take and store water in a number of ways: - Groundwater via wells – water is pumped from groundwater/aquifers via a well. Some wells can be over 200m deep - Run of river via pipes or channels – water is pumped or moved via gravity from the river; river takes can only occur when a river is above its minimum flow - Large scale storage – water is pumped or moved via gravity into a large dam or man-made reservoir - On farm storage – water is pumped or moved via gravity into a small storage pond on the farm - Piped systems – water is moved through an underground network of pipes - Open channel systems – water is moved through man-made waterways More than half of the irrigation water supply in New Zealand comes through irrigation schemes. An irrigation scheme provides water to a group of water users, either through pipes or open channels. 
The largest irrigation scheme in New Zealand, the Rangitata Diversion Race (RDR), supplies water to over 70,000 hectares in Ashburton District. THREE: Apply the water The last step is irrigating the land. This stage requires a lot of planning to ensure water is used responsibly and sustainably. Different technologies and irrigator types are used depending on the landscape and crop to be irrigated. The different ways a farm can irrigate: - Centre pivot and linear move irrigators - Traveling irrigators - Spray lines and long laterals - Solid set sprinklers - Micro sprinklers Did you know? Irrigators have to measure the water they take. New government regulations implemented in 2010 require all large water takes to be measured daily. The data is then provided to the local regional council so they can monitor water trends, and report back to the community.
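The permit conditions from step ONE above boil down to three checks: the river must be above its minimum flow, the instantaneous take must not exceed the permitted maximum rate, and the seasonal volume must not already be used up. A minimal sketch of that logic (all names, units and example numbers are illustrative, not from any real permit):

```python
# Sketch of the three water-permit conditions described above.
# Flows in the same units (e.g. m3/s); volumes in the same units (e.g. m3).

def take_allowed(river_flow: float, minimum_flow: float,
                 requested_rate: float, max_rate: float,
                 used_volume: float, seasonal_volume: float) -> bool:
    if river_flow <= minimum_flow:
        return False  # river at or below its minimum flow: the take must stop
    if requested_rate > max_rate:
        return False  # exceeds the maximum rate stated on the permit
    if used_volume >= seasonal_volume:
        return False  # seasonal allocation already consumed
    return True
```

With the 2010 measurement rules, daily take data is what lets a regional council audit exactly these conditions after the fact.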
Carolyn Hornik P.S. 101 Unit: Tantalizing Tangrams 1. To develop an understanding of the elements of a folk tale. 2. To understand the purpose of folk tales is to entertain and to preserve cultural values. 3. To develop an appreciation for figurative language. 4. To compare and contrast folk tales using a graphic organizer. 5. To write an original folk tale. 1. Reads and comprehends books on the same subject or in the same genre. 2. Produces a response to literature. 3. Participates in group meetings. 4. Prepares and delivers a presentation. 5. Demonstrates a basic understanding of the rules of the English language in written and oral work. 6. Analyzes and subsequently revises work to improve its clarity and effectiveness. 7. Produces work in one genre that follows the conventions of the genre. computer with Internet capabilities, printer, tangram pieces, Grandfather Tang's Story by Ann Tompert, illustrated by Robert Andrew Parker, (Crown Publishers Inc., New York) 1990, Three Pigs, One Wolf and Seven Magic Shapes by Grace Maccarone, (Scholastic, New York) 1997. Dictionary.com may be used to define these words: 1. Distribute tangram pieces. (Students can cut out their own tangrams from this pattern.) Have students name and describe each shape in terms of the number of sides and angles. 1. Read Grandfather Tang's Story and Three Pigs, One Wolf and Seven Magic Shapes. In Grandfather Tang's Story, Grandfather tells a story about shape-changing fox fairies who try to best each other until a hunter brings danger to both of them. As new characters are introduced, tangram pieces are rearranged to represent the new character. Three Pigs, One Wolf, and Seven Magic Shapes is a variation of the Three Little Pigs. The pigs are given seven magic shapes and instructed to use them wisely. Each pig turns his shapes into different objects. Only one pig succeeds in using his tangrams wisely and survives. 
Have students retell each story using tangrams on a flannel board to recreate the objects made by the characters in each story. Constructing Your Own Set of Tangrams by Tom Scavo explains how to construct a set of tangrams. Use worksheet #1 to help retell the story and worksheet #2 to draw the designs made by the tangram pieces. As the story is being retold, have students identify the setting, characters, problem, events in sequence, and solution of each folk tale. The following graphic organizers may be used: parts of a story, sequence map, character web. A Venn diagram may be used to compare and contrast the two stories. 2. Grandfather Tang's Story and Three Pigs, One Wolf and Seven Magic Shapes are folk tales. Identify elements of folk tales: Students describe how these elements apply to the two folk tales being read. Chinese Folktale Extravaganza has a link to elements of a folk tale. 3. Students, in cooperative groups, plan out an original folk tale incorporating original characters constructed with their tangrams. Roles within each cooperative group will include: tangram constructors who create the characters for the folk tale out of the tangram pieces, story boarders who make the graphic organizers, writers who write the story based on the graphic organizers, word processors who write, edit, revise and print the story on the computer, artists who illustrate the story and presenters who share the story with the class. Students use Kidspiration or Inspiration (software applications by Inspiration Software, Inc., used to create graphic organizers) to plan the character (prepare a character web), using details to describe the problem and solution. Students use their graphic organizers to write their folk tale. 
Students enter the text of their folk tales on a word processing application such as AppleWorks, Student Writing Center (Learning Company) or Microsoft Works, illustrate their stories with a drawing application such as Kidpix (Broderbund), print and share their folk tales with the class. Tangram pieces on a flannel board will be used in the recreation of the folk tales. The folk tales may be laminated and bound into a class book. Rubrics for evaluating students' folk tales can be found at: A writing checklist may be found at: http://teachnet-lab.org/ps101/chornik/checklist.htm Related Web Sites: 1. This is a condensed version of Grandfather Tang's story with pictures, presented by Dodge School Elementary, Grand Island, NE. 2. This is a lesson plan using math/literature plays. 3. Links to other Chinese folk tales can be found here. 4. A unit plan for dramatizing and writing a folk tale can be found at this site. 5. Students may use the tangram pictures on this site to create characters for their tangram stories. 6. Graphic organizers may be found at: http://eduplace.com/kids/hme/k_5/graphorg/index.html 7. Additional tangram activities are detailed at: http://eduplace.com/tview/pages/g/Grandfather_Tang_s_Story_Ann_Tompert.html The Folk Tale Problem-Solving Recipe, The Center for Applied Research in Education, 1988. This is a graphic organizer to help students plan their folk tale. As a pre-writing activity, have students brainstorm for synonyms for "said." A thesaurus or dictionary.com may be used as well. Words may include: exclaimed, proclaimed, called, cried, shouted, whispered, declared, decided. Instruct students to vary their words, as they write the dialogue in their story, by substituting these synonyms for "said."
Quasar Beam Unveils Hidden Matter In Universe Astronomers detected vast filaments of invisible hydrogen by using the light of a distant quasar (the core of an active galaxy) to probe the dark space between the galaxies. The Hubble Space Telescope Imaging Spectrograph found the spectral "fingerprints" of highly ionized intervening oxygen (which is a tracer of the hydrogen) superimposed on the quasar's light. Slicing across billions of light-years of space, the quasar's brilliant beam penetrated at least four separate filaments of the invisible hydrogen laced with the telltale oxygen. This filamentary structure extends throughout the universe, all the way out to the distance of the quasar. For simplification, this graphic isolates the filamentary structure to a specific location along the line of sight to the quasar. Illustration Credit: John Godfrey (STScI)
Summer. Vacation. "No more pencils, no more books ..." -- hold it right there! There may be a two-month vacation from the formal classroom, but the enjoyment of reading can, and should, be year-round. In fact, summertime is when parents and children can discover together that reading is part of recreation as well as learning. Encouraging children to read at home is critical to their developing into strong readers and succeeding at school. Remember, you are sending an important message to your child that reading is both a pleasure and a necessity. Here are some suggestions to help you read at home with your child: Let your child choose the books. Don't worry about overexposure to one genre. Rest assured that in school, teachers expose students to much variety. At home, success in reading will be determined by children's desire and excitement to explore what fascinates them and not by what is imposed -- especially by parents! Keep in mind that you are not just teaching your child how to read, but you are encouraging your child to want to read. Talk about the book to ensure that your child understands the content and to encourage self-expression. Before reading, discuss the book cover, the title, and the author if it is someone with whom you and your child are familiar. Tired of a book they beg you to read every night? Don't worry, as children love to hear their favourites, and the repetition reinforces learning. I can recite Dr. Seuss's ABC by heart and while it delights my child every time, I know she is learning the alphabet. During reading, make predictions about the storyline. After reading, compare the book to others your child has enjoyed (teachers call this "text-to-text connections"). Encourage your child to make connections between the book, relevant personal experiences ("text-to-self") and general knowledge ("text-to-world"). This makes reading an active and meaningful experience. Read TO your child. 
Aside from the pure enjoyment of hearing stories, you are exposing your child to language, and by exploring the content of those books you are furthering the development of critical skills in comprehension. Engaging in the wonderful world of words incorporates listening, discussing, debating, exploring, imagining, questioning and ultimately writing. Read WITH your child. Older children may prefer to read independently, or they may choose to read aloud to you. Perhaps you may have the pleasure of sharing the book and reading together. The parental support provided when reading together can also allow a child the opportunity to be challenged with a more difficult text than he or she would otherwise be able to read alone. Accept that mistakes may occur when your child reads aloud. Try not to make constant corrections that could interrupt the flow, as well as hamper enjoyment. But by all means, correct errors that have an impact upon meaning. If there are many mistakes, then gently stop reading and discuss the book. Check for comprehension before you continue reading. Read to your child daily. Whether it is a bedtime ritual, or under a shady tree in the backyard, set aside time to read every day. In Guiding the Reading Process, noted educator David Booth of the Ontario Institute for Studies in Education at the University of Toronto recommends at least 15 minutes per day for primary children (kindergarten to grade three) and 30 minutes daily for junior students (grades four to six), much of which may be done independently. However, if your child appears tired or becomes disinterested, simply stop. Enjoyment is key to a successful home reading program. You are an important influence on your child's success in reading and his or her attitude toward learning. When you make reading at home a priority, you can inspire a life-long interest in books. And memories of summer will include picking up a book as well as picking up a ball.
The Black Sea
The Black Sea (known in antiquity as the Euxine Sea; Latin: Pontus Euxinus) is an inland sea between southeastern Europe and Asia Minor. It is connected to the Mediterranean Sea by the Bosporus and the Sea of Marmara, and to the Sea of Azov by the Strait of Kerch. There is a net inflow of seawater through the Bosporus, 200 km³ per year. There is an inflow of freshwater from the surrounding areas, especially central and middle-eastern Europe, totalling 320 km³ per year. The most important river entering the Black Sea is the Danube. The Black Sea has an area of 422,000 km² and a maximum depth of 2210 m. Countries bordering on the Black Sea are Turkey, Bulgaria, Romania, Ukraine, Russia, and Georgia (including the breakaway region of Abkhazia). The Crimean peninsula is a Ukrainian autonomous republic. Important cities along the coast include: Istanbul (formerly Constantinople and Byzantium), Burgas, Varna, Constanţa, Yalta, Odessa, Sevastopol, Kerch, Novorossiysk, Sochi, Sukhumi, Poti, Batumi, Trabzon, Samsun. An equivalent of the name "Black Sea" cannot be traced to an earlier date than the 13th century. Strabo reports that in antiquity, the Black Sea was often just called "the Sea" (pontos), just as Homer was often simply called "the Poet". For the most part, Graeco-Roman tradition refers to the Black Sea as Euxeinos Pontos, "Hospitable Sea". Strabo thinks that the Black Sea was called "inhospitable" before Greek colonization because it was difficult to navigate and because its shores were inhabited by savage tribes, and that the name was changed to "hospitable" after the Milesians had colonized, making it, as it were, part of the Greek civilization. It is, however, likely that the name Axeinos arose by popular etymology, either from an Iranian axaina "dark", or from Ascanian, i.e. Phrygian.
If from axaina "dark", the designation "Black Sea" would, after all, go back to Antiquity. The motive for the name may be an ancient assignment of colors to the direction of the compass, "black" referring to the north, and "red" referring to the south. Herodotus on one occasion uses "Red Sea" and "Southern Sea" interchangeably. The Black Sea is the largest anoxic, or oxygen-free, marine system. This is a result of the great depth of the sea and the relatively high salinity (and therefore density) of the water at depth; freshwater and seawater mixing is limited to the uppermost 100 to 150 m, with the water below this interface (called the pycnocline) being exchanged only once every thousand years. There is therefore no significant gas exchange with the surface, and as a result decaying organic matter in the sediment consumes any available oxygen. In these anoxic conditions some extremophile microorganisms are able to use sulfate (SO₄²⁻) for oxidation of organic material, producing hydrogen sulfide (H₂S) and carbon dioxide. This mix is extremely toxic (a lungful would be fatal to a human), resulting in a sea that has almost all of its ecology living in that top layer down to a depth of approximately 180 m (600 ft). The relative lack of micro-organisms and oxygen has allowed deep-sea expeditions to recover ancient (on the order of thousands of years) human artifacts, such as boat hulls and the remains of settlements. Large amounts of organic material reach the bottom of the sea and accumulate in the sediments in concentrations of up to 20%. These kinds of sediments are called sapropel. While it is agreed that the Black Sea has been a freshwater lake (at least in upper layers) with a considerably lower level during the last glaciation, its postglacial development into a marine sea is still a subject of intensive study and debate.
There are catastrophic scenarios, such as those put forward by William Ryan and Walter Pitman, as well as models emphasizing a more gradual transition to saline conditions and transgression in the Black Sea. They are based on different theories about the level the freshwater lake had reached by the time the Mediterranean Sea was high enough to flow over the Dardanelles and the Bosporus. On the other hand, a study of the sea floor on the Aegean side shows that in the 8th millennium BCE there was a large flow of fresh water out of the Black Sea. The steppes to the north of the Black Sea have been suggested by some scholars as the original homeland (Urheimat) of the speakers of the Proto-Indo-European language (PIE), the progenitor of the Indo-European language family (others move the heartland further east towards the Caspian Sea, yet others to Anatolia).
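The hydrology figures quoted earlier (200 km³/yr of seawater in through the Bosporus, 320 km³/yr of river freshwater, a surface area of 422,000 km²) can be combined in a quick back-of-the-envelope sketch. The script below is illustrative only: the variable names are mine, and the "equivalent layer" it prints is an inference from simple mass balance, not a figure stated in the text.

```python
# Back-of-the-envelope water budget for the Black Sea,
# using only the figures quoted in the text above.
SEAWATER_IN_KM3 = 200.0    # net Bosporus inflow, km^3 per year
FRESHWATER_IN_KM3 = 320.0  # river/runoff inflow, km^3 per year
AREA_KM2 = 422_000.0       # surface area, km^2

total_in = SEAWATER_IN_KM3 + FRESHWATER_IN_KM3  # 520 km^3/yr

# If sea level is roughly steady, inflow must be balanced by
# evaporation plus surface outflow through the Bosporus.
# Spread over the whole surface, the inflow is equivalent to a
# layer of water this many metres deep each year:
equivalent_depth_m = total_in / AREA_KM2 * 1000.0

print(f"total inflow: {total_in:.0f} km^3/yr")
print(f"equivalent layer: {equivalent_depth_m:.2f} m/yr")
```

The roughly 1.2 m/yr layer gives a feel for how vigorously water cycles through the basin's surface even though, as the text notes, the deep water below the pycnocline is exchanged only once every thousand years.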
Animals in Space Content in this section supports the concepts of diversity and adaptations of organisms in the space environment. It includes information on the behavioral responses of animals to microgravity. The section also discusses the history of animals that have flown into space. A Brief History of Animals in Space American and Russian scientists used animals -- mainly monkeys, chimps and dogs -- to test each country's ability to launch a living organism into space and bring it back alive and unharmed. Laika, the First Dog in Space View the stamps created to honor the first dog to travel into space. Butterflies and Spiders in Space These experiments examined the life cycles of the painted lady and the monarch butterflies and the behavior of an orb-weaving spider on the International Space Station. The investigation was called the Commercial Generic Bioprocessing Apparatus Science Insert -- 03 and commonly referred to as CSI-03. Scroll down the page for background information and the results of the experiments. Be sure to check out the image of the orb-weaving spider's web. Spiders in Space -- The Sequel This experiment was the second to study spiders on the space station. The scientific investigation called Commercial Generic Bioprocessing Apparatus Science Insert -- 05, or CSI-05, allowed scientists to observe the habits of two golden orb spiders in microgravity. Golden orb spiders spin 3-D asymmetric webs, unlike the orb-weaving spiders in CSI-03 that were selected for the symmetry of their web formation. Spiders in Space -- Live! Gladys and Esmeralda became space celebrities as scientists and students watched the pair of golden orb spiders adapt to living in microgravity. Read about the results of the Commercial Generic Bioprocessing Apparatus Science Insert -- 05, or CSI-05, experiment.
Cosmically Cool Planet Research! Day Four: Researching with Digital and Print Text Continued Lesson 12 of 19 Objective: SWBAT build their knowledge about a planet by reading print and digital sources. SWBAT answer self-generated research questions about their planet, taking notes. Welcome to a series of ten lessons on planet research! This set of lessons is part of a larger unit my district is implementing about space and about books with great word choice. My grade level completes a research report or project for each of our six thematic units. This happens to be the fifth research project my students are completing this year. I loved completing these lessons because none of my students' reports came out the same - even those who researched the same planet! The design of this unit was inquiry-based, so students chose the direction of their report. Some were interested in the history of their planet - how it got its name, who discovered it, etc. Others wanted to know if there were features similar to Earth, or why their planet had so many moons. I've included the Planet Research Packet in this section of my lesson on each day. I refer to page numbers as I walk you through each day of this series of lessons; however, I left page numbers off in case there were pages you didn't want to use. You may notice that my student samples vary slightly from the packet I've provided for you. I made changes to the packet as I noticed things that could be improved. I hope you and your astronomers find these resources helpful as you research planets! Thank you! (See Resource File: Planet Research Packet) *Clipart in my lesson picture purchased from ScribbleGarden on Etsy Our note taking lesson is very quick today, as I want to give my astronomers time to finish up researching and note taking. We quickly review the note taking tips on page five in our Planet Research Packet. I also put a few exemplary student samples under our document camera for students to see.
(See Planet Research Packet) My students have had a lot of practice with note taking, but if yours haven't you'll want to do some more modeling. The students complete their research today, just as on days one and two of our research. Some students have the minimum of five index cards, and some have ten! I move around the room assisting students as needed, especially with those who are using the Readability app for the first time. Review: We review and celebrate today's learning. Peek at Tomorrow's Mission: I let the students know that we'll be choosing some really cosmically cool words to use in our reports tomorrow. I ask students to start thinking about awesome adjectives to describe their planet, as well as figurative language, such as similes and onomatopoeia. Display/Prop: As part of this research unit, the students have to create a display, or prop, at home showing the most exciting thing they learned about their planet. This is not a large-scale project, but rather a small prop that illustrates the most interesting thing they learned about their planet. At the end of the school day today, I pass out page 11 in the Planet Research Packet, which gives directions to complete the activity at home. The students and I read through the letter together; they add their name, the due date of the prop, the planet they're researching, and the most interesting thing they've found out about their planet. We'll continue to write the assignment in our agendas (daily assignment books that go home every night) each night until the prop is due. I'll also put a note in my Friday newsletter to remind parents about the small project at home. (See Resource File: Planet Research Packet - Page 11) Last Day Celebration Note Home: If you are interested in celebrating with space-themed food, here is a letter you can send home with your families. This would give them about a week's notice that you are looking for special treats to send in to the classroom.
There is more information about this in day ten of this set of lessons. (See Resource File: Cosmically Cool Treats) *Be sure to visit day ten in this series of lessons to see some photos of my students with their space projects. Here are some additional resources you may find helpful if you're working on a space-themed unit. Do We Wish Upon a Shooting Star, or Falling Rock?: This document is an informational passage that includes multiple choice questions. My students need practice with these types of questions, including those with multiple answers, questions with Part A and Part B, and fill in the blank. I teach in Illinois, and our students will be taking the PARCC Assessment beginning next year. I hope these types of tasks will help prepare my students for these tests, as well as our end-of-unit assessments, and overall mastery of the standards. The focus of this assignment is standards RI3.1, RI3.4, and RI3.7. (See Resource File: Shooting Star, or Falling Rock MC Practice)
Biota of the Arctic A transition zone exists at the northern limit of trees where coniferous forest interdigitates with treeless tundra vegetation. In North America, white and black spruce (Picea glauca and P. mariana) interface with tundra, whereas in Siberia and northern Europe larch (Larix) is the primary tree line species. Cottonwoods (Populus species) often penetrate the tundra landscape in the Low Arctic along major rivers. Major vegetation types of the Low Arctic include low-shrub tundra, dominated by species of willow (Salix) and dwarf birch (Betula); tall-shrub tundra, dominated by species of willow, shrub birch, and alder (Alnus); and combinations of sedges and dwarf shrubs, such as species of Labrador tea (Ledum), blueberry and cranberry (Vaccinium), crowberry (Empetrum), and Arctic heather (Cassiope), in wetter sites. Cushion plants (Dryas and Saxifraga species) are common on windswept uplands. Lichens and mosses are important components of the ground cover in some areas. In the Low Arctic, most land surfaces are fully vegetated, with the exception of rock outcrops, dry ridge tops, river gravel bars, and scree slopes (those slopes that have an accumulation of rocky debris at the angle of repose). The vegetation of the High Arctic is less rich than that of the Low Arctic, containing only about half the vascular plant species found in the Low Arctic. For example, more than 600 species of plants are found in the Low Arctic of North America, but in the extreme High Arctic of northern Ellesmere Island and Greenland—north of 83° N—fewer than 100 species of vascular plants grow. The shorter growing season, cooler summers, and drier conditions, as well as the distance of these landmasses from continental flora, account for this difference. More than 40 percent of vascular plant species of the Arctic are circumpolar in distribution. 
Mosses increase in importance in High Arctic plant communities, and shrub species decrease markedly, with only a few prostrate willows, dwarf birch, and other dwarf shrubs remaining. Prostrate willows, however, remain important components of plant communities that retain some winter snow cover, even in the northernmost land areas. Sedge-moss meadows occur on limited wet sites in valley bottoms watered by melting snows. Upland sites are drier and have a more sparse ground cover that merges into polar desert at higher elevations or where insufficient moisture is available for plant growth. Grasses, occasional prostrate willows, and mat-forming dryas occur in patches in the uplands and are the dominant vegetation in the polar barrens. The true polar desert generally occurs on coastal areas fringing the Arctic Ocean and on areas of a few hundred metres elevation in the extreme High Arctic where soils have not developed and the frost-free period and soil moisture are insufficient for most plant growth. The occasional plants growing there often become established in frost cracks that capture blowing snow and finer windblown soil material. Plants adapted to these conditions include species of the Arctic poppy (Papaver), some rushes (Juncus), small saxifrages (Saxifraga), and a few other rosette-forming herbaceous species. The Arctic poppy and a few of the other flowering herbs adapted to the High Arctic have flowers that are solartropic (turning in response to the Sun). Their parabolic-shaped blossoms track daily movements of the Sun, thereby concentrating solar heat on the developing ovary, warming pollinating insects that land there, and speeding the growth of embryonic seeds. Arctic ecosystems lack the diversity and richness of species that characterize temperate and tropical ecosystems. Animal as well as plant species decline in number with increasing latitude in both polar regions. 
Vertebrate species of the Arctic tundra and polar barrens are limited to mammals and birds; no amphibians or reptiles occur there. About 20 species of mammals and more than 100 species of birds are present throughout the Arctic. Most are circumpolar in their distribution as single species or closely related species; for example, the caribou of North America and the domestic and wild reindeer of Eurasia belong to the same species, Rangifer tarandus, whereas the lemmings of the Eurasian Arctic are a closely related but distinct species from those of northern North America and Greenland. This similarity in Arctic mammalian fauna is a result of the lower sea levels of the Pleistocene glaciations, when a broad land connection, known as the Bering Land Bridge, connected present-day Alaska and Siberia. Some Arctic mammalian fauna—primarily herbivores such as caribou and reindeer, muskox (Ovibos moschatus), and Arctic fox (Alopex lagopus), and species of Arctic hare (Lepus) and collared and brown lemmings (Dicrostonyx and Lemmus)—rarely occur outside the Arctic and are adapted to life in this environment. Other fauna such as species of ground squirrel (Spermophilus), vole (Microtus), shrew (family Soricidae), and red fox (Vulpes), as well as ermine (Mustela erminea), wolverine (Gulo gulo), wolf (Canis lupus), and brown bear (Ursus arctos) are common to other ecosystems but are distributed widely throughout the Arctic. A few other typical temperate species have penetrated northward into the Low Arctic where suitable habitat is available. The moose (Alces alces) and snowshoe hare (Lepus americanus) in North America are examples, and their movement into the Low Arctic may be a consequence of a warming climate and an increase of willows and other shrubs, especially in riparian habitats.
Where mountain ranges in boreal forest regions continue into the Arctic—as they do in northwestern North America and Siberia—species of mountain sheep (Ovis) and marmots (Marmota), typical of the alpine zone, have extended their distribution into the Arctic. On land areas of the extreme High Arctic, above 80° N, which include only parts of Axel Heiberg and Ellesmere islands in the Canadian Arctic, northernmost Greenland, northern portions of Svalbard, and Franz Josef Land, only a few mammal species are able to maintain viable populations. In the Canadian High Arctic the musk ox, Peary caribou, Arctic hare, and collared lemming are the only mammalian herbivores, and their predators, the wolf, Arctic fox, wolverine, and ermine, are also present. In northern Greenland these same species are found, with caribou and possibly the wolverine being absent in historical times. Only caribou and the Arctic fox are native to Svalbard, and only the Arctic fox is present on Franz Josef Land. In all these High Arctic areas the polar bear (Ursus maritimus), a creature of the sea ice that preys largely on seals, may occasionally be found on land, where females den to bear young or where they graze (rarely) the vegetation or prey on land mammals or nesting birds. Terrestrial avian fauna of the Arctic includes only a few resident species, among them the ptarmigan (Lagopus species), snowy owl (Nyctea scandiaca), gyrfalcon (Falco rusticolus), and raven (Corvus corax); the remaining species are present in the Arctic only in summer to breed and rear young, migrating to temperate, tropical, or maritime areas of more southern latitudes during winter. Although the ability to fly has allowed birds to occupy isolated and insular habitats within the Arctic that have been largely inaccessible to mammals, their distribution throughout the Arctic has been tied closely to the location of their wintering areas and annual migration routes.
These migration routes, especially those of shorebirds and waterfowl, often follow the coastlines of continents; however, some species cross extensive bodies of water. Nevertheless, the North Atlantic Ocean offers a partial barrier to the circumpolar mixing of species, and thus there is greater similarity between the avian fauna of the Arctic of western North America and eastern Eurasia than there is among the species of the Arctic areas of Europe, eastern North America, and Greenland. Shorebirds, waterfowl, and passerine species of the family Fringillidae (finches, buntings, and sparrows) are the most abundant species nesting in the Arctic. Wet sedge meadows often associated with lake margins, estuaries, and seacoasts are favoured nesting habitats of shorebirds and waterfowl. Nesting densities of passerine species are highest in shrub communities at the southern margins of the tundra and in riparian habitats; they decline rapidly in the High Arctic. Only the redpoll (Acanthis species) and snow bunting (Plectrophenax nivalis) among this group extend their range to the northernmost land areas. In addition to resident species, raptorial birds that commonly nest in the Arctic include the peregrine falcon (Falco peregrinus), rough-legged hawk (Buteo lagopus), short-eared owl (Asio flammeus), and, in mountainous terrain, the golden eagle (Aquila chrysaetos) in North America and the white-tailed eagle (Haliaeetus albicilla) in Greenland and Eurasia. The jaegers (Stercorarius species), which spend the major part of their lives at sea during most of the year, nest in tundra and polar barrens and prey on lemmings and eggs and nestlings of other birds during the breeding season. Marine birds of the Procellariidae (fulmars), Laridae (gulls and terns), and Alcidae (puffins, murres, dovekies, and auklets) that are dependent on a marine food base often nest colonially on coastal cliffs in the Arctic.
These nesting colonies are usually found adjacent to upwelling currents in the sea where invertebrates and fish that the birds feed on are most abundant. In the High Arctic, upwelling currents result in open water areas within the pack ice called polynyas; these enable seabirds to feed and nest at latitudes above 75° N.
6 The degree of sensitivity of the skin depends on the:
1. Thickness of the epidermis: the thinner the epidermis, the more sensitive the skin is to the stimulus.
2. Number of receptors present (PMR 05): the more receptors found on the skin, the more sensitive that part of the skin is.

7 Other functions of the human skin:
- Waterproofing: prevents water loss from the skin.
- Prevents the entry of microorganisms that cause illnesses.
- Removes waste products: excess water, urea and mineral salts.
- Produces vitamin D in the presence of sunlight.
- Stabilises body temperature.

8 Fill in the blanks with the suitable terms given in the box: receptors, thickness, thinner, sense of touch, touch, more, number.
The skin is an organ of the __________________. There are five types of _____ in the skin sensitive to various stimuli. The sensitivity of the skin depends on the ___________ of the epidermis and the ____________ of receptors on the skin. The ____________ the epidermis, the more sensitive it is to stimulus. The ________ receptors there are on the skin, the more sensitive it is to stimulus. Blind people use their ____ to help them read Braille.

14 Sense of smell. When we have a cold or flu, a lot of mucus is produced. The smell receptors are surrounded by this thick layer of mucus, and very little of the chemical vapour gets to them. Therefore, the smell receptors are not stimulated enough to function effectively as a sensory organ of smell.

15 The sensitivity of the nose towards stimuli is influenced by the following factors (PMR 05):
- The strength of the smell: a stronger smell is detected by the nose more easily than a weaker smell.
- The presence of mucus in the nose: a lot of mucus reduces the sensitivity of the nose.

16 Human ear. A human ear has three main parts:
the outer ear, filled with air; the middle ear, filled with air; and the inner ear, filled with liquid.

24 FUNCTIONS OF DIFFERENT PARTS OF THE HUMAN EAR
OUTER EAR
- Pinna: collects and directs sound waves into the ear canal.
- Ear canal (auditory canal): transmits sound waves to the eardrum.
- Eardrum: vibrates and transmits sound waves to the ossicles.
MIDDLE EAR
- Ossicles: intensify the vibrations of the sound waves by 22 times before transmitting them to the oval window.
- Eustachian tube: balances the air pressure on both sides of the eardrum.
- Oval window: transmits sound vibrations from the middle ear to the inner ear.
INNER EAR
- Cochlea: transforms sound vibrations into impulses.
- Semicircular canals: balance the body position.
- Auditory nerves: send messages to the brain, which interprets the messages as sound.

25 Stereophonic hearing. Stereophonic hearing is hearing using both ears. Its advantage is that it enables the direction of the source of a sound to be detected more accurately, because the ear nearer the source receives the sound louder and earlier than the other ear. Animals that have stereophonic hearing can detect the presence of prey and predators more quickly.

26 Properties of sound. Sound can be transferred through solids, liquids and gases, but cannot be transferred through a vacuum. (Particles in solids and liquids are closer to each other than the molecules in gases; a vacuum is space that does not contain any particles.)

27 The range of frequencies of hearing in man is 20 Hz to 20,000 Hz. Different people have different limits of hearing. The ranges for several animals include:
- snake: 100–800 Hz
- frog: 50–10 000 Hz
- dog: 10–50 000 Hz
- whale: 10–50 Hz

29 Reflection and absorption of sound. Sound can be reflected or absorbed by the surface of an object. Sound reflected repeatedly from a surface is known as an echo. Surfaces that are smooth, even and hard are good sound reflectors and produce a loud echo; for example, concrete, plank, metal and mirror. Surfaces that are rough, hollow and soft are good sound absorbers and produce a weak echo; for example, cloth, sponge, cork, rubber, carpet and cushion.

30 To overcome the limitations of hearing, we use:
i. the stethoscope: enables a doctor to detect the soft heartbeats of patients.
ii. hearing aids: collect sound signals before they are sent to the middle ear.
iii. amplifiers: boost weak sound signals.

39 FUNCTIONS OF DIFFERENT PARTS OF THE HUMAN EYE
- Sclera: maintains the shape of the eyeball; protects the eyeball.
- Lens (PMR 04): a transparent and elastic convex lens; refracts and focuses light onto the retina.
- Other structures: cornea, choroid, conjunctiva, iris, pupil, ciliary muscles, suspensory ligament, vitreous humour, aqueous humour, retina (PMR 2011), optic nerve.

48 Mechanism of sight (PMR 03, 07). The eye lens focuses the image onto the retina by changing its thickness; the thickness of the lens is changed by the ciliary muscles.
i. Focusing near objects: to focus near objects onto the retina, the ciliary muscles contract and the eye lens becomes thicker.
ii. Focusing distant objects: to focus distant objects onto the retina, the ciliary muscles relax and the eye lens becomes thinner.

70 Experiment: short-sightedness and long-sightedness. Instructions: First, form groups of four. In your group, discuss why some people wear glasses. Next, look at the first picture on the screen. Click and drag the picture from left to right.
Observe what happens to the image. Then, repeat the activity on the second and third pictures.

71 Questions
- Why do some people wear glasses?
- What is the cause of short-sightedness?
- Where does the image fall when a short-sighted person looks at a far object?
- What kind of lens is used to correct short-sightedness?

72 Formative Assessment: Types of Defects
For each defect, state whether close/near objects are seen clearly or blurred, whether distant objects are seen clearly or blurred, and the type of lens used to correct the defect:
1) Short-sightedness
2) Long-sightedness
3) Astigmatism

73 Complete the diagram
Short-sightedness: the image falls _______ of the retina.
Long-sightedness: the image falls ______ of the retina.

80 Blind spot
The blind spot is a spot on the retina of the eye that cannot detect a light stimulus. The image of an object formed at the blind spot cannot be seen by the eye because there are no light-sensitive cells (photoreceptors) at the blind spot.
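The animal hearing ranges tabulated in these slides can be turned into a small worked example. The sketch below is illustrative only: the function name and use of Python are mine, only the ranges legible in the original are included, and the human upper limit (missing in the original table) is filled in with the standard textbook value of 20,000 Hz.

```python
# Audible ranges in Hz, from the table in the slides above.
# Human upper bound assumed at the standard textbook 20 kHz.
HEARING_RANGE_HZ = {
    "snake": (100, 800),
    "frog": (50, 10_000),
    "dog": (10, 50_000),
    "human": (20, 20_000),
}

def can_hear(animal: str, freq_hz: float) -> bool:
    """Return True if freq_hz lies within the animal's audible range."""
    low, high = HEARING_RANGE_HZ[animal]
    return low <= freq_hz <= high

# A 30 kHz tone is ultrasonic for humans but audible to a dog,
# which is the principle behind a "silent" dog whistle.
print(can_hear("dog", 30_000))    # True
print(can_hear("human", 30_000))  # False
```

The same table lookup also shows why different animals simply fail to perceive sounds outside their range, rather than hearing them faintly.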
The main components of the cryosphere are snow, river and lake ice, sea ice, glaciers and ice caps, ice shelves, ice sheets, and frozen ground (Figure 4.1). In terms of the ice mass and its heat capacity, the cryosphere is the second largest component of the climate system (after the ocean). Its relevance for climate variability and change is based on physical properties, such as its high surface reflectivity (albedo) and the latent heat associated with phase changes, which have a strong impact on the surface energy balance. The presence (absence) of snow or ice in polar regions is associated with an increased (decreased) meridional temperature difference, which affects winds and ocean currents. Because of the positive temperature-ice albedo feedback, some cryospheric components act to amplify both changes and variability. However, some, like glaciers and permafrost, act to average out short-term variability and so are sensitive indicators of climate change. Elements of the cryosphere are found at all latitudes, enabling a near-global assessment of cryosphere-related climate changes. The cryosphere on land stores about 75% of the world’s freshwater. The volumes of the Greenland and Antarctic Ice Sheets are equivalent to approximately 7 m and 57 m of sea level rise, respectively. Changes in the ice mass on land have contributed to recent changes in sea level. On a regional scale, many glaciers and ice caps play a crucial role in freshwater availability. Presently, ice permanently covers 10% of the land surface, of which only a tiny fraction lies in ice caps and glaciers outside Antarctica and Greenland (Table 4.1). Ice also covers approximately 7% of the oceans in the annual mean. In midwinter, snow covers approximately 49% of the land surface in the Northern Hemisphere (NH). Frozen ground has the largest area of any component of the cryosphere. 
Changes in the components of the cryosphere occur at different time scales, depending on their dynamic and thermodynamic characteristics (Figure 4.1). All parts of the cryosphere contribute to short-term climate changes, with permafrost, ice shelves and ice sheets also contributing to longer-term changes including the ice age cycles. Table 4.1: Area, volume and sea level equivalent (SLE) of cryospheric components. Indicated are the annual minimum and maximum for snow, sea ice and seasonally frozen ground, and the annual mean for the other components. The sea ice area is represented by the extent (area enclosed by the sea ice edge). The values for glaciers and ice caps denote the smallest and largest estimates excluding glaciers and ice caps surrounding Greenland and Antarctica.

| Cryospheric Component | Area (10⁶ km²) | Ice Volume (10⁶ km³) | Potential Sea Level Rise (SLE) (m)ᵍ |
|---|---|---|---|
| Snow on land (NH) | 1.9–45.2 | 0.0005–0.005 | 0.001–0.01 |
| Sea ice | 19–27 | 0.019–0.025 | ~0 |
| Glaciers and ice caps (smallest estimateᵃ) | 0.51 | 0.05 | 0.15 |
| Glaciers and ice caps (largest estimateᵇ) | 0.54 | 0.13 | 0.37 |
| Ice shelvesᶜ | 1.5 | 0.7 | ~0 |
| Ice sheets | 14.0 | 27.6 | 63.9 |
| — Greenlandᵈ | 1.7 | 2.9 | 7.3 |
| — Antarcticaᶜ | 12.3 | 24.7 | 56.6 |
| Seasonally frozen ground (NH)ᵉ | 5.9–48.1 | 0.006–0.065 | ~0 |
| Permafrost (NH)ᶠ | 22.8 | 0.011–0.037 | 0.03–0.10 |

Figure 4.1. Components of the cryosphere and their time scales. Seasonally, the area covered by snow in the NH ranges from a mean maximum in January of 45.2 × 10⁶ km² to a mean minimum in August of 1.9 × 10⁶ km² (1966–2004). Snow covers more than 33% of lands north of the equator from November to April, reaching 49% coverage in January. The role of snow in the climate system includes strong positive feedbacks related to albedo and other, weaker feedbacks related to moisture storage, latent heat and insulation of the underlying surface (M.P. Clark et al., 1999), which vary with latitude and season.
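The sea level equivalent (SLE) column in Table 4.1 can be approximated from the ice volumes with a simple conversion: melt the ice to an equivalent volume of water and spread it over the global ocean. The sketch below is a rough check, not the actual calculation behind the table; the ice density (917 kg/m³) and global ocean area (3.62 × 10⁸ km²) are standard round figures I have assumed, and the estimate overshoots for Antarctica because ice grounded below sea level already displaces ocean water.

```python
# Rough sea level equivalent (SLE) of an ice volume:
# convert ice volume to an equivalent volume of water,
# then spread it uniformly over the global ocean surface.
RHO_ICE = 917.0          # kg/m^3, typical glacier ice (assumed)
RHO_WATER = 1000.0       # kg/m^3, freshwater
OCEAN_AREA_KM2 = 3.62e8  # global ocean area, km^2 (assumed round figure)

def sle_metres(ice_volume_1e6_km3: float) -> float:
    """Approximate SLE in metres for an ice volume given in 10^6 km^3."""
    water_km3 = ice_volume_1e6_km3 * 1e6 * RHO_ICE / RHO_WATER
    return water_km3 / OCEAN_AREA_KM2 * 1000.0  # km of depth -> metres

# Greenland: 2.9 x 10^6 km^3 of ice -> ~7.3 m, matching Table 4.1.
print(f"Greenland: {sle_metres(2.9):.1f} m")
# Antarctica: the same formula gives ~63 m, above the table's 56.6 m,
# because part of that ice sits below sea level and contributes no SLE.
print(f"Antarctica: {sle_metres(24.7):.1f} m")
```

Reproducing the Greenland entry this way is a useful sanity check on the units in the table (areas in 10⁶ km², volumes in 10⁶ km³, SLE in metres).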
High-latitude rivers and lakes develop an ice cover in winter. Although the area and volume are small compared to other components of the cryosphere, this ice plays an important role in freshwater ecosystems, winter transportation, bridge and pipeline crossings, etc. Changes in the thickness and duration of these ice covers can therefore have consequences for both the natural environment and human activities. The breakup of river ice is often accompanied by ‘ice jams’ (blockages formed by accumulation of broken ice); these jams impede the flow of water and may lead to severe flooding. At maximum extent arctic sea ice covers more than 15 × 10⁶ km², reducing to only 7 × 10⁶ km² in summer. Antarctic sea ice is considerably more seasonal, ranging from a winter maximum of over 19 × 10⁶ km² to a minimum extent of about 3 × 10⁶ km². Sea ice less than one year old is termed ‘first-year ice’ and that which survives more than one year is called ‘multi-year ice’. Most sea ice is part of the mobile ‘pack ice’, which circulates in the polar oceans, driven by winds and surface currents. This pack ice is extremely inhomogeneous, with differences in ice thicknesses and age, snow cover, open water distribution, etc. occurring at spatial scales from metres to hundreds of kilometres. Glaciers and ice caps adapt to a change in climate conditions much more rapidly than does a large ice sheet, because they have a higher ratio between annual mass turnover and their total mass. Changes in glaciers and ice caps reflect climate variations, in many cases providing information in remote areas where no direct climate records are available, such as at high latitudes or on the high mountains that penetrate high into the middle troposphere. Glaciers and ice caps contribute to sea level changes and affect the freshwater availability in many mountains and surrounding regions.
Formation of large and hazardous lakes is occurring as glacier termini retreat from prominent Little Ice Age moraines, especially in the steep Himalaya and Andes. The ice sheets of Greenland and Antarctica are the main reservoirs capable of affecting sea level. Ice formed from snowfall spreads under gravity towards the coast, where it melts or calves into the ocean to form icebergs. Until recently (including IPCC, 2001) it was assumed that the spreading velocity would not change rapidly, so that impacts of climate change could be estimated primarily from expected changes in snowfall and surface melting. Observations of rapid ice flow changes since IPCC (2001) have complicated this picture, with strong indications that floating ice shelves ‘regulate’ the motion of tributary glaciers, which can accelerate manyfold following ice shelf breakup. Frozen ground includes seasonally frozen ground and permafrost. The permafrost region occupies approximately 23 × 10⁶ km², or 24% of the land area in the NH. On average, the long-term maximum areal extent of seasonally frozen ground, including the active layer over permafrost, is about 48 × 10⁶ km², or 51% of the land area in the NH. In terms of areal extent, frozen ground is the single largest cryospheric component. Permafrost also acts to record air temperature and snow cover variations, and under changing climate can be involved in feedbacks related to moisture and greenhouse gas exchange with the atmosphere.
Bovine Viral Diarrhea (BVD)

BVD, or Bovine Virus Diarrhea, is an infection that can cause numerous problems in cattle, including damage to the digestive and immune systems and birth defects. BVD can cause high mortality in calves and yearling cattle. Outbreaks of this disease have devastating economic consequences for cattle producers. One survey estimated that BVD causes losses of up to $150 million annually. In 1946, Olafson and associates discovered gastroenteritis with severe diarrhea in dairy herds in the state of New York. These were the first reported cases of Bovine Virus Diarrhea. The animals were also found to have ulcers in the nasal and oral soft tissue layers. There is no common name for BVD. BVD virus, or BVDV, is the causative organism of the disease. During the 1970s, it was learned that bovine viral diarrhea virus (BVDV) is closely related to the hog cholera virus. Today BVDV infects cattle of all ages, dealing a major financial blow through production and reproductive losses.

Symptoms:
- Calves born with BVD show fever and often become dehydrated as well
- They have a continuous nasal discharge
- Diarrhea is also a definite symptom
- Affected calves as well as adults lose the ability to move about normally
- Animals often die of pneumonia due to a weakened immune system
- There are ulcers on the hoofs of affected cattle

How it Affects Cattle

Following acute BVD infection, the illness itself is usually mild. The real risks come from secondary and opportunistic infections. As the disease weakens the system, many secondary infections attack the cattle. BVDV acts as an immunosuppressant (it impairs the body's immune system) and allows bacterial infections to occur. Acute infections with BVDV are dangerous in pregnant cattle because the virus can cross the placenta and cause infections of the fetus.
Fetal infections can result in early embryonic death, abortion, birth defects, or the birth of calves chronically infected with BVDV. The BVD virus is not capable of long-term survival in the environment. The BVD virus is widespread in western Canada and throughout the world. A study in Alberta found that 41% of beef cattle carry antibodies to the BVD virus.

Risks & Dangers

Infected animals can shed the virus in discharges from the mouth, nose, eyes, or in the milk. The highest virus concentrations are found in the manure of infected animals with diarrhea. Many infected bulls also carry the virus in their semen. Infection in a pregnant cow may spread to the fetus, causing a number of different conditions depending on the age of the fetus. Unfortunately, there are no specific treatments available for any of the forms of BVD infection. All prenatally infected cattle will slowly die from the mucosal disease, but animals that are infected after birth may survive. Antibiotics may help prevent secondary infections.

Vaccines & Prevention

Natural exposure or vaccination can be useful. BVD vaccines are available either as modified live vaccines or as killed vaccines. Please note that adverse reactions commonly follow the use of modified live virus BVD vaccines. Your local veterinarian must be consulted before undertaking any vaccination or control program.
- Length: 3.9–4.7 in
- Wingspan: 7.5 in
- Weight: 0.2–0.4 oz
- The smallest, shortest-tailed chickadee
- Mésange à dos marron (French)
- The Chestnut-backed Chickadee uses lots of fur in making its nest, with fur or hair accounting for up to half the material in the hole. Rabbit, coyote, and deer hair are most common, but hair from skunks, cats, horses, or cows appears in nests as well. The adults make a layer of fur about a half-inch thick that they use to cover the eggs when they leave the nest.
- Hole-nesting birds tend to have higher nest success rates than open-cup nesters, but that doesn't mean that they are immune to predation. Chestnut-backed Chickadee nests get attacked by predators including mice, squirrels, weasels, snakes, and black bears.
- The Chestnut-backed Chickadee is not truly migratory, but it does make some seasonal movements. In late summer some birds move higher into the mountains. They move back to lower elevations when winter starts, particularly after heavy snowfalls.
- The oldest recorded Chestnut-backed Chickadee was 9 years 6 months old.

Chestnut-backed Chickadees live mainly in dense, wet coniferous forests along the Pacific Coast, including Douglas-firs; Monterey, ponderosa, or sugar pines; white firs; incense-cedar; and redwoods. They also occur in some deciduous forests, particularly willow and alder stands along streams, eucalyptus groves, open patches of madrone and shrubs, and sometimes along the edges of oak woodlands. They’re also commonly seen at backyard feeders in urban, suburban, and rural areas where extensive trees and shrubs are present. Chestnut-backed Chickadees eat about 65 percent insects and other arthropods, including spiders, caterpillars, leafhoppers, tiny scale insects, wasps, and aphids, feeding their young mainly caterpillars and wasp larvae. To a lesser extent they also eat seeds, berries, and fruit pulp.
- Clutch Size: 1–11 eggs
- Number of Broods: 1–2 broods
- Egg Length: 0.6–0.7 in
- Egg Width: 0.4–0.5 in
- Incubation Period: 12–18 days
- Nestling Period: 18–21 days
- Egg Description: White with reddish to light-brown spots.
- Condition at Hatching: Naked except for sparse tufts of down, eyes closed, clumsy.

The female builds the nest on her own. She makes a bottom layer or foundation of moss and strips of bark, particularly incense cedar when it’s available. The nest’s upper layer consists of animal fur woven with strips of bark, grass, feathers, and sometimes textile fibers. Among the kinds of fur found in these nests are rabbit, coyote, deer, skunk, cat, horse, and cattle. Adults also use fur to make a thin, warm flap to cover eggs when they leave the nest. Nest building takes 7–8 days, and the finished product can be quite variable in size: from about 1 inch to 6 inches tall. Males take the first step in choosing nest sites, approaching a possible location while the female watches. Later, the female decides on the site, enters the cavity, and accepts pieces of vegetation brought by the male. Nest sites can be holes in rotted trees, stumps, and posts soft enough for the chickadees to excavate themselves, or old woodpecker holes. These nests are commonly 1–12 feet off the ground. Chestnut-backed Chickadees also readily use nest boxes.

© René Corado / WFVZ

Chestnut-backed Chickadees hop through trees and shrubs, often starting low down and working their way up to the top, then dropping low into a nearby tree. They pick insects and seeds from bark and twigs, sometimes hovering to reach items, or darting out to catch insects like a flycatcher or redstart. Many Chestnut-backed Chickadee pairs stay together for a year or less; a smaller number stay together for 2 to 4 years. Chestnut-backed Chickadees often form flocks with other species in winter.
Where Chestnut-backed and Mountain chickadee ranges overlap, you’ll frequently find both species in a single flock, along with Red-breasted Nuthatches, Golden-crowned and Ruby-crowned kinglets, and Brown Creepers. During winter, they travel together in search of food. Flight can be direct, but is most often slightly undulating, as is common in most chickadees. Chestnut-backed Chickadees are common across their range, but populations have been gradually declining by just over 1 percent per year since 1966, resulting in a cumulative decline of 42 percent, according to the North American Breeding Bird Survey. Partners in Flight estimates the global breeding population at 9.7 million, with 64 percent living in the U.S. and 36 percent in Canada. They rate a 9 out of 20 on the Continental Concern Score, and they are not on the 2012 Watch List, although they are a U.S.-Canada Stewardship species. Chestnut-backed Chickadees nest in holes in dead limbs and trees, so forest management practices that remove these elements from a forest can make it harder for these birds to find nest sites.

- Dahlsten, Donald L., Leonard A. Brennan, D. Archibald McCallum, and Sandra L. Gaunt. 2002. Chestnut-backed Chickadee (Poecile rufescens). The Birds of North America Online (A. Poole, Ed.). Ithaca: Cornell Lab of Ornithology. Retrieved from the Birds of North America Online: http://bna.birds.cornell.edu.proxy.library.cornell.edu/bna/species/689
- Dunne, P. 2006. Pete Dunne’s essential field guide companion. Houghton Mifflin, Boston.
- Ehrlich, P. R., D. S. Dobkin, and D. Wheye. 1988. The birder’s handbook. Simon & Schuster Inc., New York.
- Patuxent Wildlife Research Center. Longevity records.
- Partners in Flight. 2012. Species assessment database.
- USGS Patuxent Wildlife Research Center. 2012. North American Breeding Bird Survey 1966–2010 analysis.

Resident. During summer months some Chestnut-backed Chickadees move to higher elevations. This species often comes to bird feeders.
Set up bird feeders in your backyard with black oil sunflower seed, suet or other mixed seeds. Find out more about what this bird likes to eat and what feeder is best by using the Project FeederWatch Common Feeder Birds bird list. If Chestnut-backed Chickadees inhabit your area, setting up nest boxes might entice them to nest on your property. Consider putting up a nest box to attract a breeding pair. Make sure you put it up well before breeding season. Attach a guard to keep predators from raiding eggs and young. Find out more about nest boxes on our Attract Birds pages. You'll find plans for building a nest box of the appropriate size on our All About Birdhouses site. Find This Bird Look for Chestnut-backed Chickadees high in the branches of coastal conifers, or lower down in shrubs around yards and park borders. When searching for Chestnut-backed Chickadees in winter, listen for its conspicuous chick-a-dee and other call notes, a great way to find this bird and the several other species that habitually forage with them.
The user is often faced with the question: What kind of pressure sensor should I use – a relative/gauge or an absolute pressure sensor? From food processing to petrochemical plants, to plastic injection moulding and many other industrial applications, pressure measurement is needed for the control of processes and machinery. In this series of articles I would like to describe the differences between the various pressure sensors available and their corresponding options for use. The main difference between gauge and absolute pressure measurement is the implemented reference pressure or in other words: the zero point of the scale. During gauge pressure measurement, the pressure is always measured in relation (“relative”) to the current ambient pressure (approx. 1.013 bar). In order to measure gauge/relative or absolute pressure, a sensor must be capable of detecting a change in the pressure of a medium and comparing it with the reference pressure (relative/gauge = compared to ambient pressure, absolute = compared to absolute vacuum). Electronic pressure sensors usually measure the change in pressure through the deformation of a diaphragm. If this diaphragm is exposed to the process pressure on one side and “vented” on the other side (and thus exposed to the ambient pressure), the deformation is reduced (or counterbalanced) by exactly this ambient pressure. Therefore, the measuring result is a pressure difference between the measured process pressure and the currently present ambient pressure. For example, in unpressurised (=vented) tanks, where liquids are stored and where the tank is freely connected to the atmosphere above the liquid (and thus is “vented”), the current liquid level can be derived from the hydrostatic pressure of the liquid column using a similarly vented gauge pressure sensor. 
It is therefore particularly important, especially for smaller tanks and containers, to eliminate the influence of the ambient pressure on the measurement through the common ventilation of sensor and vessel. Otherwise, for a constant level of liquid, the calculated liquid level in the tank will fluctuate as a function of the ambient pressure. This variation may be up to ±30 mbar due to weather conditions, and up to 200 mbar as a result of the location (the pressure difference between sea level and 2,000 m). Example: a level of 5 m of water in an open tank generates a hydrostatic pressure of +500 mbar. Thus, with an unchanging level of water, an absolute pressure sensor would indicate a fill level of between 4.7 and 5.3 metres, depending on the weather conditions. Since the fill volume is very often calculated from the tank geometry and the measured level, this may result in a substantial measurement error of the tank’s contents.
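The worked example above can be sketched in a few lines. This is a minimal illustration, not vendor code: the function names are invented for this sketch, and it uses the rule of thumb that roughly 100 mbar of hydrostatic pressure corresponds to 1 m of water column.

```python
NOMINAL_AMBIENT_MBAR = 1013.0  # "standard" atmosphere assumed when using an absolute sensor

def level_from_gauge(p_gauge_mbar):
    """A vented gauge sensor already excludes ambient pressure."""
    return p_gauge_mbar / 100.0  # metres of water column (~100 mbar per metre)

def level_from_absolute(p_abs_mbar, assumed_ambient=NOMINAL_AMBIENT_MBAR):
    """An absolute sensor must subtract an *assumed* ambient pressure."""
    return (p_abs_mbar - assumed_ambient) / 100.0

true_level = 5.0                  # 5 m of water produces ~500 mbar hydrostatic pressure
hydrostatic = true_level * 100.0

# The same tank level read through an absolute sensor at three ambient pressures:
for actual_ambient in (983.0, 1013.0, 1043.0):   # a +/- 30 mbar weather swing
    p_abs = hydrostatic + actual_ambient
    print(level_from_absolute(p_abs))            # 4.7, 5.0, 5.3 metres
```

The gauge sensor reports 5.0 m in all three cases, because the ambient pressure acts on both sides of its diaphragm and cancels out.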
Biologist Rick Howard and his colleagues have discovered a paradox that crops up when new genes are deliberately inserted into a fish's chromosomes to make the animal grow larger. While the genetically modified fish will be bigger and have more success at attracting mates, they may also produce offspring that are less likely to survive to adulthood. If this occurs, as generations pass, a population could dwindle in size and, potentially, disappear entirely. "Ours is the first demonstration that a genetically modified organism has a reproductive advantage over its natural counterpart," said Howard, a professor of biological sciences in Purdue's School of Science. "Though altering animals' genes can be good for humans in the short run, it may prove catastrophic for nature in the long run if not done with care. And we don't know just what kind of care is necessary yet, or how much." This research, which Howard conducted with William Muir of the animal sciences department and Andrew DeWoody of the forestry and natural resources department, appears in this week's (Feb. 17) online issue of the journal Proceedings of the National Academy of Sciences. Howard and Muir published a related article in the same journal in December 1999 that showed larger animals had a mating advantage, but their previous article did not relate mating advantage to genetic modification (see below URL for related news release). The most common question posed about genetically modified organisms - GMOs for short, and also called transgenics - is whether they are safe for people to eat. When GMOs were first made commercially available in 1996, many food crops, such as corn and soybeans, were altered to produce substantially more yield than they do in nature. The debate on GMOs Contact: Chad Boutin
Collecting Data for Change Over Time

Lesson 8 of 13

Objective: Students will be able to devise a way to consistently and safely measure growing seeds in order to track change over time.

The question for today's lesson is "How do scientists decide what data to collect?" As the students and I consider this together, I will engage them in thinking about what changes we expect as our seeds are germinating. What data could we gather each day? I will listen for students to discuss that length will change, as well as how the plant looks. I will then guide them to consider length today. If students don't mention length, I will bring it into the conversation by showing several germinating seeds that began at the same time, but are different lengths. To begin my modeling, I turn the students' attention to the work they did yesterday, which was to scientifically draw their 4 types of seeds. Then we discuss what they think changes over time with these seeds. I remind them of the idea of different lengths, sizes, and growth rates and explain that today we will begin recording changes in length. I also mention that we need to think about the changes a seed undergoes regarding its physical appearance. Next, I model with students how to choose a seed and describe it with detail in the science journal. I also explain that they will need to determine a way to measure the length of the seed while keeping it safe and intact. As we discuss this, I "look at" the tools on the science tray (rulers, string, measuring tape) and think aloud about how they might help me with these tasks. I then ask the students to consider, with their partners, how they will gather all of this information. As well, I stress that when we work with living things, we have the obligation to be respectful (careful) of how we handle them.
As students work with their teams to describe and measure their germinating seeds, I will circulate and prompt them to consider why they are choosing to measure in the way they are. This is an important question, as I expect students will just think they need to measure the visual length of the germinating seed. I will be listening in to determine if they are considering the whole growth, even if it curves. Aside from the science content in this lesson, there are several mathematical concepts being used, but not necessarily taught, during the lesson. As the children work and show me their strategies, I will have to teach mini-lessons along the way regarding rounding to the closest unit, starting at the right place on the tool, and reporting out the measurements using units. All of this must be done in a helpful, as-needed way so the scientists can continue their data gathering without being bogged down in too many "structured lessons".

Close and Sharing

To close, I simply remind the students that we will need to use the same strategies to measure our seed growth over time. It will be important for us to use the same strategy in order to remain consistent in our data.
Beep, Beep, Beep! You probably have one of these sitting on your kitchen counter or built into a kitchen wall. No doubt you know how to use it to reheat leftovers or make popcorn. That familiar beeping sound means that your food is done. But how does a microwave oven work? Do you know how? News You Can Use - A microwave oven produces microwaves and guides them into the oven. - The metal walls and metal mesh inside the oven door ensure that the microwaves can’t escape from the oven. Instead, the microwaves bounce around inside the oven and pass through the food. - Microwaves are electromagnetic waves, like waves of visible light. However, microwaves have lower frequencies and less energy than visible light. - If they have so little energy, how can they cook food? Watch this video to find out: http://vimeo.com/46135412 Show What You Know Learn more about microwave ovens and how they work at the link below. Then answer the following questions. - It was discovered by chance that microwaves can cook food. How did it happen? - What happens when positively charged particles are exposed to an electric field? What happens when negatively charged particles are exposed to an electric field? - A water molecule contains polar bonds. How does this affect the water molecule in terms of electric charge? - Microwaves and other electromagnetic waves have alternating electric fields. How do microwaves affect water molecules? Why does this cause water molecules and other nearby molecules to heat up? - Why does food in a microwave oven get hot? - If you put food in a glass dish and heat it in a microwave oven, the food but not the dish gets hot. Explain why. - You should never put metal or metal foil in a microwave oven. If food wrapped in foil were to be put in a microwave oven, it would not only be dangerous, but the food also wouldn’t get hot. Why not?
Many developers will have to learn all kinds of algorithms in their lives so they can write highly optimized code. Many of these algorithms have long histories and are well-tested. And one of them is the binary search method. The binary search is a fast algorithm to find a record in a sorted list of records. For most people, this is a very familiar algorithm if you have ever had to guess a value between 1 and 10, or 1 and 100. The principle is quite simple. You have some number of records and you pick the record in the middle of the list. For guessing between 1 and 100, you would pick 50 (100/2). If it is the correct value, you’ve won. If it is too high, you now know the value must be between 1 and 50, so you guess again with value 25. Too low, and you pick the value between 50 and 100, which would be 75. You should be able to guess any value between 1 and 100 in at most seven tries this way. In fact, the binary search is easiest explained as bit-wise checking of values. A single byte goes from 00000000 to 11111111, so basically all you do is a bitwise compare from the highest bit to the lowest. You start with value 10000000 (128) and if the value you search for is higher, you know that first bit is 1, else it needs to be 0. Your second guess would either be 11000000 (192) or 01000000 (64), and you would continue testing bits until you have tested all bits. However, your last test could also indicate that you guessed wrong, so the maximum number of guesses would be equal to the number of bits plus one. And that’s basically what a binary search is. But it tends to be slightly more complicated. You’re not comparing numbers from 0 to some maximum value; those numbers are generally a kind of index for an array, and you compare the value at that position in the array. You basically have a value X, which could be any data type and even a multi-field record, and you have an array of records sorted on some specific index.
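The guessing game described above can be sketched in a few lines; the function name is invented for this illustration. Halving the interval and using the too-high/too-low feedback finds any value between 1 and 100 in at most seven guesses:

```python
def count_guesses(secret, lo=1, hi=100):
    """Count how many midpoint guesses it takes to find `secret`."""
    guesses = 0
    while True:
        guesses += 1
        mid = (lo + hi) // 2
        if mid == secret:
            return guesses
        if mid < secret:
            lo = mid + 1   # guess too low: search the upper half
        else:
            hi = mid - 1   # guess too high: search the lower half

print(count_guesses(73))                              # 6 guesses
print(max(count_guesses(s) for s in range(1, 101)))   # worst case: 7
```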
And these arrays can be reasonably large. Still, the binary search allows for very quick lookups. Now, the biggest problem with the binary search is how people calculate the index value for the comparison. I already said that you could basically check the bits from high to low, but most developers will use a formula like (floor+ceiling)/2, where floor is the lowest index value and ceiling the highest index value. This can cause an interesting problem in several programming languages, because there’s a risk of overflow when you do it like this! So, overflow? Yes! If the index is an unsigned byte then it can only hold a maximum value of 11111111 (255). So as soon as you have a floor value of 10000000 (128) and a ceiling of at least 10000001 (129), the sum requires 9 bits. But bytes can’t contain 9 bits, so an overflow occurs. And what happens next is difficult to predict. For a signed byte it would be worse, since value 10000000 would be -128, so you would effectively have 7 bits to use. If the 8th bit is set, your index value becomes negative! This means that with a signed byte, your array could never be longer than 64 records, else this math will generate an overflow. (64+65 would be 129, which translates to -127 for signed bytes.) Fortunately, most developers use integers as index, not bytes, and they generally have arrays larger than 256 records anyway. So that reduces the risk of overflow. Still, integers use one bit for the sign and the other bits for the number. A 16-bit integer thus has 15 bits for the value. So an overflow can happen if the number of records has the highest bit value set, meaning any value of 16384 and over. If your array has more than 16384 records, then the calculation (floor+ceiling)/2 will sometimes generate an overflow. So people solved this by changing the formula to floor+((ceiling-floor)/2), because ceiling-floor cannot cause an overflow.
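To see the overflow concretely: Python integers never overflow, so the helper below simulates 16-bit signed two's-complement wraparound, which is what a language with fixed-width integers would do with the same sum.

```python
def to_int16(x):
    """Simulate 16-bit signed two's-complement wraparound."""
    return ((x + 0x8000) & 0xFFFF) - 0x8000

floor, ceiling = 16000, 20000                  # both are valid 16-bit indices
mid_correct = (floor + ceiling) // 2           # the intended midpoint
mid_wrapped = to_int16(floor + ceiling) // 2   # the sum 36000 wraps negative

print(mid_correct)   # 18000
print(mid_wrapped)   # -14768: a negative, invalid array index
```

The sum 36000 does not fit in 15 value bits, so a 16-bit signed integer reads it as -29536, and halving that yields a nonsense midpoint.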
It does make the math slightly more complex, but this is the formula most people are familiar with when doing a binary search! Yet this formula makes no sense if you want high performance! If you want a binary search, you should actually just toggle each bit of the index until you have found the value. To do so, you need to know how many bits you need for the highest value. And that will also tell you how many guesses you will need, at most, to find the value. But this kind of bitwise math tends to be too complex for most people. So there is another solution: you can promote the index value to a bigger type. You could use a 32-bit value if the index is a 16-bit value. Thus, you could use (int16)(((int32)floor+(int32)ceiling)/2) and the overflow is gone again. And for a 32-bit index you could promote the math to a 64-bit integer type and again avoid any overflows. It is still less optimal than just toggling bits, but the math still looks easy and you can explain why you’re promoting the values. But what if the index is a 64-bit value? There are almost no 128-bit integer types in most programming languages. So how to avoid overflows in those languages? Well, here’s another thing. As I said, the index value is part of an array. And this array is sorted and should not have any duplicate values. So if you have 200 records, you would also need 200 unique values, with each value being at least 1 byte in size. If the index is a 16-bit signed integer (15 usable bits), then the values in the array must also be at least 15 bits each and would generally be longer. Most likely, it would contain pointers to records elsewhere in memory, and pointers are generally 32 bits. (In the old MS-DOS era, pointers were 20 bits, so those systems could manage up to 1,048,576 bytes or 1 megabyte of memory.) So, let’s do the math! For an overflow to occur with a signed 16-bit index, you would need at least 16384 records.
Each record would then be at least 2 bytes in size, so you would have at least 32 kilobytes of data to search through; most likely even more, since the array is probably made up of pointers pointing to string values or whatever. But 32 KB would be the minimum for an overflow to occur when using a 16-bit signed index. So, a signed 32-bit index would need at least bit 30 set to 1 before an overflow can occur. It would also need to contain 32-bit values to make sure every value is unique, so you would have 4 GB of data to search through. And yes, that is the minimum amount of data required before an overflow would occur. You would also need up to 31 comparisons to find the value you’re searching for, which is getting a bit high already. A signed 64-bit index would have records of at least 8 bytes in size! An overflow would require 36,893,488,147,419,103,232 bytes of data! That’s 33,554,432 terabytes! 32,768 petabytes! 32 exabytes! That’s a huge amount of data, twice the amount of data reportedly stored by Google! And you need more data than this to get an overflow. And this is assuming that you’re just storing 64-bit integer values in the array; in general, the data stored will be more complex. So, chances of overflow with a 32-bit index are rare, and with a 64-bit index it would be very unlikely. The amount of data required would be huge. And once you’re dealing with this much data, you will have to consider alternative solutions instead. One alternative is hash tables. By using a hash function you can reduce any value to e.g. a 16-bit value. This would be the index into an array of pointers with a 16-bit index, so it would be 256 KB for the whole array. And each record in this array could point to a second, sorted array of records, so you would have 65536 different sorted arrays, and in each of them you could use a binary search. This would be ideal for huge amounts of data, although things can be optimized further by calculating an even bigger hash value. (E.g.
20 bits.) The use of a hash table is quite easy. You calculate the hash over the value you’re searching for and then check the list at that specific address in your hash table. If it is empty, then your value isn’t in the system. Otherwise, you have to search the list at that specific location. Especially if the hash formula distributes all possible values evenly, a hash table will be extremely effective. Which brings me to a clear point: the binary search isn’t really suitable for large amounts of data! First of all, your data needs to be sorted! And you need to maintain this sort order every time you add or delete records, or when you change the key value of a record! Hash tables are generally unsorted and have better performance, especially with large amounts of data. So, people who use a 32-bit index for a binary search are just bringing trouble on themselves if they fear overflows. When they start using floor+((ceiling-floor)/2) for their math, they’re clearly showing that they don’t understand the algorithm that well. The extra math will slow down the algorithm slightly, while the risk of overflow should not exist. If it does exist with a 32-bit index, then you’re already using the wrong algorithm to search your data. You would be maintaining an index of at least 4 GB in size, making it really difficult to insert new records. That is, if overflows can occur. The time needed to sort that much data is also considerable and, again, far from optimal. The thing is, developers often tend to use the wrong algorithms and often have the wrong fears. Whenever you use a specific algorithm, you have to consider all options. Are you using the right algorithm for the problem? Are you at risk of overflows or underflows? How much data do you expect to handle? And what are the alternative options? Finally, as I said, doing a binary search basically means toggling bits of the index.
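Before returning to bit toggling, here is a minimal sketch of the hash-table-of-sorted-lists scheme just described. The class name and the 16-bit bucket count are illustrative only; each value is hashed into one of 65,536 buckets, and each bucket is a small sorted list searched with a binary search (Python's `bisect`).

```python
import bisect

class HashedIndex:
    """Hash table whose buckets are sorted lists, binary-searched on lookup."""

    def __init__(self, values, n_buckets=65536):
        self.n_buckets = n_buckets
        self.table = {}
        for v in values:
            # Distribute each value into one of the 65536 buckets.
            self.table.setdefault(hash(v) % n_buckets, []).append(v)
        for bucket in self.table.values():
            bucket.sort()  # each bucket is kept sorted for binary search

    def contains(self, value):
        bucket = self.table.get(hash(value) % self.n_buckets)
        if bucket is None:
            return False   # empty bucket: the value cannot be present
        i = bisect.bisect_left(bucket, value)  # binary search inside the bucket
        return i < len(bucket) and bucket[i] == value

idx = HashedIndex(range(0, 1000, 3))   # multiples of 3 below 1000
print(idx.contains(999))   # True
print(idx.contains(500))   # False, not a multiple of 3
```

Because the hash spreads values over many buckets, each per-bucket binary search touches only a tiny sorted list, which is the point made above about large data sets.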
Instead of doing math to calculate the halfway value, you can just toggle bits from high to low. That way, you never even have a chance of an overflow.
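The bit-toggling search advocated above can be sketched as follows; the function name is invented for this illustration. The final position is built one bit at a time, from the highest bit down, so no floor+ceiling sum is ever formed and the overflow discussed earlier cannot occur.

```python
def bit_toggle_search(arr, target):
    """Find `target` in sorted `arr` by setting index bits from high to low.

    No midpoint sum (floor + ceiling) is ever computed, so there is
    nothing to overflow even with fixed-width index types.
    """
    n = len(arr)
    bit = 1
    while bit <= n:          # find the highest bit needed for any index
        bit <<= 1
    bit >>= 1
    pos = 0                  # count of elements known to be <= target
    while bit:
        if pos + bit <= n and arr[pos + bit - 1] <= target:
            pos += bit       # keep this bit of the position set
        bit >>= 1
    if pos and arr[pos - 1] == target:
        return pos - 1       # index of the match
    return -1                # not found

print(bit_toggle_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(bit_toggle_search([1, 3, 5, 7, 9, 11], 4))   # -1
```

The number of loop iterations equals the number of bits needed for the highest index, matching the "number of bits plus one" guess count mentioned earlier.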
Natural History Facts About the Ivory-billed Woodpecker Campephilus principalis — the ivory-billed woodpecker — is among the world’s largest woodpeckers. Only the imperial woodpecker of Mexico, now thought by many to be extinct, was larger than the ivory-bill. The ivory-billed woodpecker once ranged through swampy forests in the southeastern and lower Mississippi valley states: from North Carolina to Florida and west to eastern Texas and Arkansas, with some 1800s reports in Kentucky, Missouri and Oklahoma. John James Audubon reported ivory-bills as far north as the junction of the Ohio and Mississippi rivers around 1825. - Description of the ivory-billed woodpecker: Averaging about 20 inches in length, C. principalis is frequently mistaken for the smaller but similarly marked pileated woodpecker. Ornithologists distinguish the two by the location of the white wing feathers: the full-width white patch in the ivory-bill’s trailing wing feathers (when seen from above) folds to form a white “saddle” on its back when the bird is perched. Males have a prominent scarlet crest; the female’s crest is black. - The ivory-bill’s communication and flight: Ivory-bills communicate with a vocalization that ornithologists transcribe as “kent, kent, kent” and with the “BAM-bam” double-rap of their bills pounding on wood. Their swift, arrow-like flight through trees resembles that of the pintail duck, unlike the slower, swooping flight of the pileated woodpecker. Stiff wing feathers make the ivory-bill an especially loud flyer. People who saw the impressive ivory-bill in flight could be forgiven for shouting, “Lord God, what a bird!” — explaining why the ivory-bill is also known as the Lord God Bird. - The “Ivory” Bill The “ivory” of the ivory-billed woodpecker is a keratin sheath over the bill of bone. The broad bill continues to grow from the ivory-bill’s thick-boned skull throughout its life (potentially, up to 30 years) and is worn down by rigorous pounding on trees. 
- Habits and habitat of the ivory-billed woodpecker: Ivory-bills are believed to mate for life. They share the duties of incubating their china-white eggs and raising their young, which usually leave the parents’ territory at the end of the season. A pair of ivory-bills is estimated to need six square miles of uncut forest, roughly 36 times as much territory as pileated woodpeckers require. Ivory-bills excavate trees to make nest holes (usually oval-shaped openings between four and six inches in size, extending 20 inches or more down into the tree, and 40 feet or higher above ground level). - Food source of the ivory-billed woodpecker: Beetle larvae are the primary food source for ivory-bills, which are often the first woodpeckers on dying trees searching for these larvae. When beetle larvae bore through the bark to feed on the sapwood beneath, ivory-bills use their elongated beaks to pry bark from the trees and expose the larvae. For More Information About the Ivory-billed Woodpecker - History of the Ivory-billed Woodpecker Follow the ivory-billed woodpecker on a journey through historic America lasting one hundred and eighty-five years. - Postcards from the Field: On the Trail of the Ivory-billed Woodpecker Follow ivory-billed woodpecker author and expert Phillip Hoose as he goes in search of the Lord God Bird and reflects on the fascinating history of this species that, once lost, has been found again. - Ivory-billed Woodpecker News See ivory-billed woodpecker photos, maps of the ivory-bill's habitat, and other news and information about the rediscovery of the ivory-billed woodpecker.
The abilities to read and express oneself are crucial for a successful and fulfilling life. To most people, these skills seem basic and are even taken for granted by puberty, but nothing could be farther from the truth for those who cope with dyslexia (specific reading disability). According to the National Institutes of Health (2011), dyslexia is a “brain-based type of learning disability that specifically impairs a person's ability to read. These individuals typically read at levels significantly lower than expected despite having normal intelligence.” While dyslexia remains a topic of much debate, the NIH says common symptoms include “difficulty with spelling, phonological processing (the manipulation of sounds), and/or rapid visual-verbal responding.” Dyslexia can have far-reaching implications and is receiving increased attention from researchers and the education system. It can impact reading, comprehension of others, and spoken language -- inside and outside the classroom. Still, it has special significance for professionals in the field of learning disabilities, who need to help dyslexic students to develop learning techniques and to cope with self-esteem issues. This paper takes an in-depth look at dyslexia, examining some of the research on the causes, symptoms, diagnosis, impact on students, teachers and parents, and suggested treatments. Researchers haven’t identified the exact causes of dyslexia, but they have made progress in showing that it has a neurobiological origin. By definition, dyslexia is not caused by low intelligence, since dyslexia refers to difficulties that occur despite normal IQ and instruction. Reading depends on two component processes: word identification and language comprehension. Vellutino, Fletcher, Snowling, & Scanlon (2004) reviewed evidence from the last 40 years and concluded that dyslexia is not a visual problem, but rather a deficiency in phonological (letter-sound) skills. “Compared with normally developing readers,...
Three scientists, Jeffrey C. Hall, Michael Rosbash and Michael W. Young, were awarded the 2017 Nobel Prize in Physiology or Medicine on Monday. The US biology trio’s discoveries “explain how plants, animals and humans adapt their biological rhythm so that it is synchronized with the Earth’s revolutions,” according to the Nobel Assembly. Hall, 72, Rosbash, 73, and Young, 68, “were able to peek inside our biological clock and elucidate its inner workings,” it said. Scientists and doctors now know these day-and-night cycles keep creatures alive by regulating our alertness, sleep patterns, blood pressure, hormones, body temperature and when we eat. Scientists had known about circadian rhythms since 1729, when astronomer Jean Jacques d’Ortous de Mairan placed a mimosa plant into a dark room and noticed that the plant’s leaves still opened and closed at the same times every day. Through a series of breakthroughs, Hall, Rosbash and Young showed these internal clocks are self-regulated. In the morning, sunlight switches on the “period” gene, which begins to produce its protein. This protein accumulates in the cytoplasm, the chunky space in our cells that surrounds the nucleus where our DNA and the period gene are housed. Hall and Rosbash found that period proteins built up throughout the day until nightfall, when their levels began to gradually drop. When dawn broke, period proteins disappeared, and the cycle repeated itself. They hypothesized that the period protein was somehow crossing into the nucleus to shut off its own gene, in what they dubbed a transcription-translation feedback loop. When the period gene is active, period (PER) messenger RNA is made. This messenger RNA is transported to the cell’s cytoplasm and serves as a template for the production of PER protein. The PER protein accumulates in the cell’s nucleus, where it blocks the activity of the period gene. This gives rise to the inhibitory feedback mechanism that underlies a circadian rhythm. 
Illustration by the Nobel Assembly at the Karolinska Institutet. Young extended the work by uncovering an additional protein, named “timeless,” which was responsible for escorting the period protein into the nucleus. Young’s lab also identified a third protein — called doubletime — that controlled the timing of the destruction of the period proteins. In humans, these clock genes control the production of insulin and other hormones involved in maintaining how our bodies process food. Disruption of the genes through sleep deprivation or mutation alters brain functions and has been tied to sleep disorders, depression, bipolar disorder and memory defects. Out-of-whack circadian rhythms also increase a person’s risk for cancer, obesity, diabetes and other metabolic disorders.
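The transcription-translation feedback loop described above can be caricatured in a few lines of code: PER production is repressed by the PER level a fixed delay earlier, the delay standing in for transport of the protein into the nucleus. This is a toy discrete-time model with made-up parameters, not the laureates' actual model; its only point is that delayed negative feedback yields the rise-and-fall cycling the article describes.

```python
# Toy model: PER synthesis is repressed (Hill function) by the PER
# level `delay` steps earlier; PER also decays at a constant rate.
# All parameter values are illustrative, not fitted to real data.
def simulate(steps=200, delay=10, k=1.0, n=4, synth=1.0, degr=0.2):
    p = [0.0]  # PER protein level over time
    for t in range(steps):
        p_del = p[t - delay] if t >= delay else 0.0
        production = synth / (1.0 + (p_del / k) ** n)  # delayed repression
        p.append(p[t] + production - degr * p[t])      # net change per step
    return p

levels = simulate()
```

Because repression lags behind the current level, the protein overshoots, production then collapses, and the level falls again instead of settling monotonically.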
What is E-Safety? E-safety is the safe use of information systems and electronic communications, including the internet, mobile phones and games consoles. It is important that children and young people understand the benefits, risks and responsibilities of using information technology. - e-safety concerns safeguarding children and young people in the digital world. - e-safety emphasises learning to understand and use new technologies in a positive way. - e-safety is less about restriction and more about education about the risks as well as the benefits so we can feel confident online. - e-safety is concerned with supporting children and young people to develop safer online behaviours both in and out of school. To report a serious e-safety incident to CEOP, click here. Using the Internet safely at home Whilst many Internet Service Providers (ISPs) offer filtering systems to help you safeguard your child at home, it remains surprisingly easy for children to access inappropriate material including unsuitable texts, pictures and movies. Parents are advised to set the security levels within Internet browsers with this in mind. Locating the computer or tablet in a family area, not a bedroom, will enable you to supervise children as they use the Internet. However, don’t deny your child the opportunity to learn from the wide variety of material and games available on the Internet. Instead, set some simple rules for keeping them safe and make sure they understand the importance of these rules. Simple rules for keeping your child safe To keep your child safe, they should: - Ask permission before using the Internet - Only use websites you have chosen together or a child-friendly search engine. - Only email people they know (why not consider setting up an address book?) 
- Ask permission before opening an email sent by someone they don’t know - Do not use Internet chat rooms - Do not use their real name when using games on the Internet (create a nickname) - Never give out a home address, phone or mobile number - Never tell someone where they go to school - Never arrange to meet someone they have ‘met’ on the Internet - Only use a webcam with people they know - Ask them to tell you immediately if they see anything they are unhappy with. Using these rules Go through the rules with your child and pin them up near the computer. It is also a good idea to regularly check the Internet sites your child is visiting, e.g. by clicking on History and Favourites. Please reassure your child that you want to keep them safe rather than take Internet access away from them. Golden Rules for eSafety Some useful documents regarding e-safety - LGfL 1MG – 1 Minute Guide – Cyberbullying – May 2013.pdf - LGfL 1MG – 1 Minute Guide – eSafety and Ofsted – April 2013 (1).pdf - LGfL 1MG – 1 Minute Guide – Safe Web Use on ios devices – March 2013.pdf - LGfL 1MG – 1 Minute Guide – Sexting- Oct 2013.pdf - Safe Search Engine Child Friendly and Safe Search Engine - Advice about how to counteract extremism Lots of information about extremism and advice on how to counteract it. - Promote good use of devices with your child from a platform for good.org – Click here when your child gets a new gadget. One of these cards might help establish good and safe use. - How to set up parental controls at home Link to the Safer Internet centre’s guide to setting up parental controls at home. 
- Digital Parenting tips Click here for digital parenting tips from Vodafone by age group - Think You Know Click on the link for the Think You Know website - CEOP Click on the link to the CEOP site CEOP KS1 Film: ‘Lee & Kim’, a cartoon suitable for 5-7 year olds. If you have small children, you should let them view this short 10-minute cartoon, which is designed to keep them safe whilst online and, more importantly, teaches them in their early years. Jigsaw: for 8-10 year olds This is an assembly from CEOP’s Thinkuknow education programme that helps children to understand what constitutes personal information. The assembly enables children to understand that they need to be just as protective of their personal information online as they are in the real world.
by Julie Kray These days, most of us don’t often pause long enough to think about the value of the natural resources that sustain life as we know it. Where would we be without water, for instance? We all drink it. H2O is essential to our daily biochemical functions, and the same is true for all other living things. Without water, we would also be foodless, as everything from rice to lettuce to potato chips to steak incorporates water along the way before it arrives on our plates or in our grocery stores. And even though doing the dishes or the laundry, or flushing the toilet are fairly mundane tasks for those of us in developed parts of the world, things would quickly get pretty unhygienic and unhealthy around here without water. With 6.8 billion human beings on the planet and counting, we need clean fresh water and lots of it. What we may not often think about is exactly how much water plants use, all over the world, in agricultural and natural environments. Over large areas, water use by plant communities can really add up. Some plants have remarkable abilities to find water sources with their roots, and conserve water during dry times. Plants develop these sorts of adaptations in response to long-term climate patterns where they grow. But how might plants deal with a rapid change in rainfall amount or seasonality, as is forecast for many places around the earth as our planet warms up? This is the question that motivated my graduate research at C.S.U. It’s something that water planners think about constantly when they are trying to determine how much water is available to support growing human populations in different regions. Accurate estimates of plant water use are necessary to build water budgets that will tell us exactly how much is locally available for drinking water supplies, agriculture, or industry. But we don’t yet know how climate changes will affect plant water consumption in different regions. 
Like the plants themselves, our water budgets will probably have to adjust. I had the fantastic opportunity to study water use of native plant communities in the awe-inspiring San Luis Valley (SLV) of southern Colorado, which is famous for its organic potato production, serves as a lay-over spot for migrating sandhill cranes, and is home to the wild and wonderful Great Sand Dunes National Park. The SLV is also the most arid region in Colorado, receiving only 7-8 inches of precipitation a year, on average. There is very little surface water (streams/rivers), and yet, the SLV is not a desert… a vast aquifer stores almost 1 billion acre-feet of groundwater that is relied upon by native plants and agricultural operations. Our best estimate of current groundwater use by native plant communities in the SLV is about 355,000 acre-feet per year (or 115 billion gallons per year). This is as much water as a city of 2 million people would use in a year! Is this groundwater consumption by native plant communities likely to increase or decrease as plants adjust to future climate conditions in the SLV? And how do we begin to answer this important question? (on to part 2…)
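The acre-feet and gallons figures above can be cross-checked with a short script, using the standard conversion of one acre-foot ≈ 325,851 US gallons. The 100-200 gallons per person per day range used in the plausibility check is my own assumed ballpark for total municipal use, not a figure from the text.

```python
GALLONS_PER_ACRE_FOOT = 325_851  # one acre-foot in US gallons

acre_feet_per_year = 355_000     # estimated native-plant groundwater use in the SLV
gallons = acre_feet_per_year * GALLONS_PER_ACRE_FOOT
print(f"{gallons / 1e9:.1f} billion gallons per year")  # prints 115.7

# Plausibility of the "city of 2 million people" comparison, assuming
# (hypothetically) total municipal use of 100-200 gallons/person/day:
gallons_per_person_per_day = gallons / 2_000_000 / 365
```

The conversion lands on roughly 116 billion gallons, matching the article's "115 billion gallons per year" to rounding, and the implied per-capita figure of about 160 gallons per day is in a realistic municipal range.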
- Electric power, just discussed, is an instantaneous parameter, measured in watts or kilowatts for most customers. - If you look at your electricity meter, the power consumption is indicated precisely by the instantaneous speed* of rotation of the silvery disc visible through the glass front of the meter. - Over a given time period, the total energy consumed is proportional to the total number of turns* that the silvery disc has made. This is measured cumulatively by the meter's numerical dials and counters which are driven, via gears, by the spinning disc. - Example 1: A 100-watt light globe has a power consumption of 100 watts. After running for 24 hours it will have consumed 0.1 (kW) x 24 (hours) = 2.4 kilowatt-hours of electricity. - Example 2: A nominally 100-watt light globe has a power consumption of about 112 watts if supplied with 254.4 volts. After running for 24 hours it will have consumed 0.112 (kW) x 24 (hours) = 2.69 kilowatt-hours of electricity. * both of these are affected by voltage. How Voltage affects Customers' Electricity Bills
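The two worked examples can be reproduced with a short script. Note one assumption: the text never states the globe's rated voltage, but the numbers imply 240 V (254.4 V is 6% above 240 V, and 112 W ≈ 100 W x 1.06²), and the example treats the globe as a fixed resistance, so power scales with the square of the supply voltage.

```python
def energy_kwh(rated_watts, rated_volts, supply_volts, hours):
    """Energy used by a fixed-resistance load (e.g. an incandescent
    globe): power scales with the square of the supply voltage."""
    power_watts = rated_watts * (supply_volts / rated_volts) ** 2
    return power_watts / 1000.0 * hours

# Example 1: nominal supply voltage (assumed 240 V rating)
e1 = energy_kwh(100, 240, 240.0, 24)   # 2.4 kWh
# Example 2: 6% over-voltage, 254.4 V
e2 = energy_kwh(100, 240, 254.4, 24)   # ~2.69 kWh
```

In reality an incandescent filament's resistance rises with temperature, so the true exponent is a bit below 2, but the square law is what the meter examples above assume.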
Speech and Language Pathology Linden BP has two speech-language pathologists (SLPs), often informally known as speech therapists, on our team. SLPs are professionals educated in the study of human communication, its development, and its disorders. SLPs assess speech, language, cognitive-communication, and oral/feeding/swallowing skills to identify types of communication problems (articulation; fluency; voice; receptive and expressive language disorders, etc.) and the best way to treat them. Speech Disorders and Language Disorders A speech disorder refers to a problem with the actual production of sounds, whereas a language disorder refers to a difficulty understanding or putting words together to communicate ideas. Speech disorders include: - Articulation disorders: difficulties producing sounds in syllables or saying words incorrectly to the point that listeners can’t understand what’s being said. - Fluency disorders: problems such as stuttering, in which the flow of speech is interrupted by abnormal stoppages, repetitions (st-st-stuttering), or prolonging sounds and syllables (ssssstuttering). - Resonance or voice disorders: problems with the pitch, volume, or quality of the voice that distract listeners from what’s being said. These types of disorders may also cause pain or discomfort for a child when speaking. - Dysphagia/oral feeding disorders: these include difficulties with drooling, eating, and swallowing. Language disorders include both - Receptive disorders: difficulties understanding or processing language. - Expressive disorders: difficulty putting words together, limited vocabulary, or inability to use language in a socially appropriate way. An SLP team member is consulted on behalf of existing pediatric patients when her expertise can be additive and beneficial in clarifying diagnostic impressions, educational needs and academic planning, and/or advising on intervention services.
Microwaves in fusion plasmas In a magnetically confined plasma the ions and electrons experience a Lorentz force from the externally applied magnetic field which gives rise to gyromotion of the particles. The Lorentz force is equated with the centrifugal force and from this simple force balance it is found that, in the case of electrons, the gyration frequency is 28 GHz/T. The magnetic field in large fusion devices such as JET, ITER and W7-X is of the order of 3 to 5 T, placing the 1st harmonic of the Electron cyclotron (EC) frequency in the range of 100 to 200 GHz. Microwaves with similar frequencies are used extensively as a tool for both heating and diagnosing the plasma. In a tokamak or stellarator the strength of the magnetic field decays with a 1/R dependence, with R the radius measured with respect to the center of the machine. Therefore, the EC frequency decreases with 1/R as well. By launching microwave power at the EC frequency, referred to as Electron Cyclotron Waves (ECW), the power of the waves can be transferred to the electron population at this specific resonance location. By selecting the frequency (or the magnitude of the B-field) one thus has a mechanism for localized heating. See figure 1. Fig. 1. A simplified scheme of an Electron Cyclotron Heating system. Due to the 1/R dependence of the magnetic field that contains the plasma, the cyclotron frequencies also fall off with a 1/R dependence, with the high frequencies to the left and the low frequencies to the right. Microwaves are injected, in this example, by a double mirror arrangement inside the vacuum vessel. The microwaves travel through the plasma and are absorbed at the cyclotron resonance frequency. Localization in vertical direction is obtained by shaping the microwave beam, for simplicity drawn as a straight line in the cartoon. 
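The 28 GHz/T figure and the resonance-location argument above can be sketched numerically: f_ce = eB / (2 pi m_e), and for a tokamak-like field B(R) = B0 R0 / R the resonance sits where the launched frequency equals the local f_ce. The constants are standard CODATA values; the function names and the example geometry are my own.

```python
import math

E = 1.602176634e-19    # electron charge, C
ME = 9.1093837015e-31  # electron mass, kg

def f_ce_ghz(b_tesla):
    """Electron cyclotron frequency f = e*B / (2*pi*m_e), in GHz."""
    return E * b_tesla / (2 * math.pi * ME) / 1e9

def resonance_radius(freq_ghz, b0_tesla, r0_m):
    """Major radius where a wave at freq_ghz meets the 1st-harmonic EC
    resonance, for a field B(R) = B0 * R0 / R."""
    return f_ce_ghz(b0_tesla) * r0_m / freq_ghz

print(f_ce_ghz(1.0))  # ~28 GHz per tesla, as stated in the text
```

Launching at exactly f_ce(B0) deposits power at R = R0; a slightly lower frequency moves the deposition outward (larger R, weaker field), which is the localized-heating knob the text describes.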
The power is launched into the plasma using Gaussian beam optics which can focus to a spot size down to the order of 1.5 cm (note: the figure shows a line, not a Gaussian beam). Powerful sources are available, gyrotrons with a total beam power of 1 MW, resulting in extreme power densities at the focus of the order of a GW/m2. Increasing the local conductivity, and/or injecting under a toroidal angle, also makes it possible to drive a net current. For diagnostic purposes high power as well as low power microwaves are used. In passive systems, such as e.g. Electron Cyclotron Emission (ECE) diagnostics, the electromagnetic radiation that the electrons emit as they gyrate around the magnetic field lines is picked up. The frequency is a measure of the location in the plasma while the intensity of the ECE is a measure of the electron temperature. See figure 2 for a cartoon of the situation. Fig. 2. A simplified scheme of an Electron Cyclotron Emission receiver. Traversing the chord from right to left the frequencies drop from high to low. This spectrum is coupled into a microwave receiver by means of a transmission line, a waveguide in the case of the figure. The receiver separates the frequencies and at each frequency the microwave power is measured, which, via the Planck function, is a measure of the electron temperature. Compared to gyrotron power, the power of the ECE is extremely small. For instance, for a plasma at 100 million K the ECE power in a localised volume - say corresponding to a few cm3 in the plasma - is of the order of several hundred nW. But at very high electron temperatures, such as those expected at ITER, the total synchrotron radiation (integrated over the whole spectrum and vessel volume) is still expected to be considerable. Other microwave diagnostics are e.g. reflectometers that exploit reflection of waves depending on local density, or Collective Thomson Scattering, where photons of a microwave probing beam are scattered off electrons in the plasma.
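The quoted power density follows directly from the numbers above. As a simplification I treat the 1.5 cm focus as a uniform disc (a real Gaussian beam peaks about twice as high at its centre), which is enough to reproduce the order of magnitude:

```python
import math

power_w = 1.0e6          # 1 MW gyrotron beam (from the text)
spot_diameter_m = 0.015  # 1.5 cm focused spot (from the text)

# Average intensity over a uniform disc of that diameter:
area = math.pi * (spot_diameter_m / 2) ** 2
intensity = power_w / area  # W/m^2
print(f"{intensity / 1e9:.1f} GW/m^2")
```

This gives several GW/m², consistent with the GW/m² scale quoted for the focus.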
Asthma is an inflammatory disorder of the airways, characterized by periodic attacks of wheezing, shortness of breath, chest tightness, and coughing. Three things make it harder to breathe during an asthma attack -- the inflammation (swelling) of the lining of the airways, the tightening of the muscles around the airways, and fluid/mucus filling the airways. These factors reduce airflow and produce the characteristic wheezing sound. Most people with asthma have periodic wheezing attacks separated by symptom-free periods. Some people have chronic shortness of breath with episodes where it gets worse. In other cases, cough is the predominant symptom. Asthma attacks can last minutes to days, and can become dangerous if the airflow becomes severely restricted. In sensitive individuals, asthma symptoms can be triggered by inhaled allergens (allergy triggers) such as pet dander, dust mites, cockroach allergens, molds, or pollens. Asthma symptoms can also be triggered by respiratory infections, exercise, cold air, tobacco smoke and other pollutants, stress, food, or drug allergies. Aspirin and other non-steroidal anti-inflammatory medications (NSAIDs) provoke asthma in some patients. Asthma symptoms can decrease over time, especially in children. Many people with asthma have an individual and/or family history of allergies, such as hay fever (allergic rhinitis) or eczema. Others have no history of allergies or evidence of allergic problems. The doctor will conduct a physical exam that focuses on the upper respiratory tract, chest, and skin. The doctor will listen for wheezing and may look for nasal secretions, eczema, and similar allergy-related symptoms. The most important test for diagnosing asthma is called spirometry. A spirometer is an instrument that measures the maximum flow rate you can exhale after breathing in as much as you can. A drug called a bronchodilator is given to the patient to see whether breathing obstruction is "reversible." 
If so, this is a strong indication of asthma. No single test, or set of tests, is appropriate for every patient. Your doctor may use other tests to help rule out the possibility of other causes of your symptoms. Treatment is aimed at avoiding known allergens and respiratory irritants and controlling symptoms and airway inflammation through medication. Allergens can sometimes be identified by noting which substances cause an allergic reaction. Allergy testing may also be helpful in identifying allergens in patients with persistent asthma. Common allergens include: pet dander, dust mites, cockroach allergens, molds, and pollens. Common respiratory irritants include: tobacco smoke, pollution, and fumes from burning wood or gas. A variety of medications for treatment of asthma are available. These include: - Long-term controller medications, which are used on a regular basis to prevent attacks, not for treatment during an attack. - Inhaled corticosteroids (QVAR, Asmanex, Pulmicort, Flovent, Alvesco) - Leukotriene inhibitors (Singulair, Accolate, Zyflo) - Long-acting bronchodilators (Foradil, Serevent) (Used only in combination with an inhaled corticosteroid.) - Cromolyn sodium (Intal) or nedocromil sodium - Combination anti-inflammatory/bronchodilator (Advair, Dulera, Symbicort) - Quick-relief medications, which are used to relieve symptoms during an attack. - Short-acting bronchodilators (Proventil, Ventolin, ProAir, and others) - For attacks: - Oral or intravenous corticosteroids (such as prednisone, methylprednisolone, and hydrocortisone) for stabilizing severe episodes People with mild asthma (infrequent attacks) may use quick-relief inhalers as needed. Those with significant asthma (symptoms occurring more than twice per week) should take anti-inflammatory medications on a regular basis for long-term control. A severe asthma attack requires a medical evaluation and may require hospitalization, oxygen, and intravenous medications. 
A peak flow meter, a simple device that measures how fast air can be exhaled from the lungs, can be used at home daily to check lung function. This often helps determine when medication is needed or can be tapered in the case of an exacerbation of symptoms. Peak flow values of 50 - 80% of an individual's personal best indicate a moderate asthma exacerbation, while values below 50% indicate a severe exacerbation. Asthma symptoms can be substantially reduced by avoiding known allergens and respiratory irritants. If someone with asthma is sensitive to dust mites, exposure can be reduced by encasing mattresses and pillows in allergen-impermeable covers, removing carpets from bedrooms, and by vacuuming regularly. Exposure to dust mites and mold can be reduced by lowering indoor humidity. If a person is allergic to an animal that cannot be removed from the home, the animal should be kept out of the patient's bedroom. Filtering material can be placed over the heating outlets to trap animal dander. Exposure to cigarette smoke, air pollution, industrial dusts, and irritating fumes should also be avoided. Allergy desensitization (allergy shots) may be helpful in reducing asthma symptoms and medication use, but the size of the benefit compared to other treatments is not known.
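The peak flow thresholds quoted above amount to a simple percentage rule, which can be written out explicitly. This is an illustration of the arithmetic only, not medical advice; real asthma action plans are set individually by a doctor.

```python
def peak_flow_zone(reading, personal_best):
    """Classify a peak flow reading against the thresholds in the
    text: 50-80% of personal best indicates a moderate exacerbation,
    below 50% a severe one. Illustrative only -- not medical advice."""
    pct = 100.0 * reading / personal_best
    if pct >= 80:
        return "normal"
    elif pct >= 50:
        return "moderate exacerbation"
    return "severe exacerbation"
```

For example, a reading of 325 L/min against a personal best of 500 L/min is 65%, placing it in the moderate range.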
Hyperthermia is the general name given to a variety of heat-related illnesses. Warm weather and outdoor activity go hand in hand. However, it is important for older people to take action to avoid the severe health problems often caused by hot weather. Even in extreme weather conditions, the healthy human body keeps a steady temperature of 98.6 degrees Fahrenheit. In hot weather or during vigorous activity, the body perspires. As this perspiration evaporates from the skin, the body is cooled. If challenged by long periods of intense heat, the body may lose its ability to respond efficiently. When this occurs, a person may experience hyperthermia. In other words, hyperthermia occurs when the body's metabolic heat production or the environmental heat load exceeds its normal heat loss capacity, or when heat loss is impaired. Health Factors That Increase Risk The temperature does not have to hit 100 degrees for a person to be at risk. Both one’s general health and one’s lifestyle may increase a person’s chance of suffering a heat-related illness. Health factors which may increase risk include: - poor circulation - inefficient sweat glands, and changes in the skin caused by the normal aging process - heart, lung and kidney diseases, as well as any illness that causes general weakness or fever - high blood pressure or other conditions that require changes in diet (for example, salt-restricted diets) - an inability to perspire, caused by medications including diuretics, sedatives and tranquilizers, and certain heart and blood pressure drugs Other factors include being substantially overweight or underweight, and drinking alcoholic beverages. 
Lifestyle factors that can increase risk are: - unbearably hot living quarters - lack of transportation - which prevents people from seeking respite from the heat in shopping malls, movie houses, and libraries - overdressing - because they may not feel the heat, older people may not dress appropriately in hot weather - visiting overcrowded places - trips should be scheduled during non-rush hour times - not understanding weather conditions - older persons at risk should stay indoors on especially hot days. The two most common forms of hyperthermia are heat exhaustion and heat stroke. Of the two, heat stroke is especially dangerous and requires immediate medical attention. Heat stress occurs when a strain is placed on the body as a result of hot weather. Heat fatigue is a feeling of weakness brought on by high outdoor temperature. Symptoms include cool, moist skin and a weakened pulse. The person may feel faint. Heat syncope is a sudden dizziness experienced after exercising in the heat. The skin appears pale and is generally moist and cool. The pulse is weakened and the heart rate is usually rapid. Body temperature is normal. Heat cramps are painful muscle spasms in the abdomen, arms or legs following strenuous activity. Heat cramps are caused by a lack of salt in the body. Heat exhaustion is a warning that the body is getting too hot. The person may be thirsty, giddy, weak, uncoordinated, nauseated and sweating profusely. The body temperature is normal and the pulse is normal or raised. The skin is cold and clammy. Heat stroke can be life-threatening and victims can die. A person with heat stroke usually has a body temperature above 104 degrees Fahrenheit. Other symptoms include confusion, combativeness, bizarre behavior, faintness, staggering, strong and rapid pulse, and possible delirium or coma. High body temperature is capable of producing irreversible brain damage. Diagnosis is based on the medical history (including symptoms) and physical exam. 
If the victim is exhibiting signs of heat stroke, emergency assistance should be sought immediately. Without medical attention, heat stroke can be deadly. Heat exhaustion may be treated in several ways: - get the victim out of the sun into a cool place, preferably one that is air conditioned - offer fluids but avoid alcohol and caffeine - water and fruit juices are best - encourage the individual to shower or bathe, or sponge off with cool water - urge the person to lie down and rest, preferably in a cool place Questions to ask your doctor: Are the symptoms definitely caused by heat stress? Do any tests need to be done? Could there be some underlying cause or health factor that increases the risks? What treatment do you recommend to prevent heat stress? Can anything be done for a person with heat stroke while waiting for emergency assistance? Preventing hyperthermia is relatively straightforward: use common sense and avoid excessive activity in situations in which heat is present. Adequate intake of fluids before, during and after exercise in any situation is also essential.
Bees Alphabet Activity In this alphabetizing activity, students alphabetize a set of 10 words related to bees. The instructional activity has a reference web site for additional activities.
On Wednesday, the 5 Gyres Institute published a landmark paper that presents the first-ever global estimate of plastic ocean pollution. It is the culmination of six years of data gathered from 24 expeditions undertaken by researchers throughout the world, covering over 50,000 nautical miles. Plastic, unlike organic matter, does not biodegrade over known timescales. Instead, it fragments and disperses under the influence of sunlight and other weathering processes. When plastic breaks down into microplastic particles, it can easily pass through water filtration systems and into rivers and oceans. This is to say nothing of the macroscale plastic that washes into harbors after every rain and accumulates on beaches and in the stomachs of marine creatures. The 5 Gyres Institute is a non-profit organization dedicated to the elimination of plastic waste. In 2012, it shocked the nation when it discovered a huge concentration of microplastic in the Great Lakes, an average of 43,000 plastic particles per square kilometer. Since that time it has worked to raise awareness of the issue, even helping to craft model legislation for a federal ban on cosmetic products containing plastic microbeads. In its latest paper, published December 10 in the journal PLOS ONE, the Institute reports that there are some 5.25 trillion plastic particles floating in the ocean, with a collective weight of about 269,000 tons. “When The 5 Gyres Institute formed, we set out to answer a basic question: how much plastic is out there?” says Dr. Marcus Eriksen, Director of Research and co-founder of the 5 Gyres Institute, in a press release. “There was just no data from the Southern Hemisphere, Western Pacific or Eastern Atlantic. After six long years and a wide-reaching collaboration, we have completed the most comprehensive plastic pollution study to date. 
We’ve found microplastic ocean pollution, in varying concentrations, everywhere in the world.” This plastic pollution is not limited to the garbage patches formed in the five subtropical ocean gyres (vortices where ocean currents converge and both natural and artificial debris accumulates); it has been measured in remote regions of the planet, in coastal sediments, in the circulatory systems of mussels, in both the Northern and Southern Hemispheres and in the Arctic. “Our findings show that the garbage patches in the middle of the five subtropical gyres are not final resting places for floating plastic trash,” Eriksen said. “Unfortunately, the endgame for microplastic is dangerous interaction with entire ocean ecosystems. We should begin to see the garbage patches as shredders, not stagnant repositories.” The natural UV and oxidative fragmentation that affects plastic is exacerbated by ocean waves and the multitude of fish and sea life that graze on it in the gyres. And while the direct environmental impact of plastic is still unknown, the hazard it poses to animals is well-documented. Plastics can absorb toxins such as PCBs, DDT, pesticides, flame retardants, mercury and other organic pollutants. When marine animals mistake microplastic for plankton or other food, they absorb these toxins into their systems. When these grazers are eaten by predators, the toxins are passed on, bioaccumulating up the food chain until they end up on your plate. As Eriksen explains, “The garbage patches could be a frightfully efficient mechanism for corrupting our food chain with toxic microplastics.” Plastic in all its forms poses another hazard to marine ecosystems, as nets and other floating debris can transport microbes, algae, invertebrates and fish to non-native regions and potentially disrupt existing habitats. Of note is the fact that 5 Gyres discovered less microplastic on the surface than expected.
This “suggests removal processes are at play,” the authors of the report write, including UV degradation, ingestion by organisms, decreased buoyancy due to ingestion and defecation by organisms, and suspension in the water column. The authors acknowledge that fragmentation could be breaking microplastics down into even smaller particles than their nets could catch (smaller than 0.33 mm). Recent studies have also demonstrated that some organisms and bacteria may be able to consume and eliminate plastic naturally. As the authors note, PlasticsEurope, a trade organization representing plastic producers and manufacturers, reported 288 million tons of plastic produced globally in 2012. The 268,940 tons of plastic reported in the “All Gyres Paper” is just 0.1 percent of that volume, and 5 Gyres stresses that its estimate of global weight is in fact “highly conservative,” as it does not account for the “potentially massive amount of plastic present on shorelines, on the seabed, suspended in the water column, and within organisms.” One of the driving goals of 5 Gyres is to make companies responsible for the entire life cycle of their plastic products. In an email to Planet Experts, Dr. Eriksen wrote, “We don’t need to focus on cleaning the oceans. Most of those 5.25 trillion particles are less than the size of a grain of rice, and they are globally distributed. In time, all of that waste will rest on the sea floor. We REALLY need to focus all efforts on not making more waste.” Dr. Eriksen will be hosting a Reddit AMA today to discuss the findings of the “All Gyres Paper.” Update 12.16.14 – The American Chemistry Council, an industry trade association for chemical companies (including plastics makers), reached out to Planet Experts in response to this report. Keith Christman, ACC’s Managing Director of Plastics Markets, speaks about efforts ACC is making to curb plastic pollution in this interview with Planet Experts.
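The comparison between the floating-plastic estimate and annual production is easy to verify. A quick sketch, using only the two figures quoted in the article:

```python
# Sanity-check the article's comparison: the "All Gyres Paper" estimate
# of plastic afloat at sea versus one year of global plastic production.
ocean_plastic_tons = 268_940       # floating plastic, per the paper
produced_2012_tons = 288_000_000   # global production in 2012 (PlasticsEurope)

fraction = ocean_plastic_tons / produced_2012_tons
print(f"{fraction:.2%}")  # ~0.09%, i.e. roughly the "0.1 percent" quoted
```

The exact ratio comes out just under a tenth of a percent, which is why the paper's authors call the surface estimate "highly conservative": the overwhelming majority of plastic ever produced is somewhere other than the sea surface.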
Skin is the resilient living fabric that envelops our entire self. It can be defined as a tough membranous tissue that forms the external covering of our body. The skin protects us from invasion by microorganisms and from harmful radiation from the sun. The skin is also essential to prevent dehydration and to regulate body temperature. In addition, the skin is much more than just a covering membrane: it is the medium through which we initially introduce ourselves to others, and the tissue through which we feel and are felt. The skin can be divided into three main parts: the epidermis forming the outermost layer of the skin, the underlying dermis and the deeper located subcutis. The human epidermis is on average 50 µm thick, with a surface density of 50,000 cells per mm². The epidermis is a stratified epithelial sheet composed mainly of keratinocytes. Keratinocytes are strongly attached to each other through desmosomes. The epidermis also includes resident cells: the pigment-producing melanocytes, the antigen-presenting Langerhans cells and the neuroendocrine Merkel cells. The epidermis is separated from the dermis by the basement membrane, a complex multilayered specialized structure of attachment molecules. Residing on the basement membrane is the basal layer of keratinocytes that contains the proliferating basal cells. Keratinocytes that leave this layer undergo terminal differentiation. The prickle cell layer is located above the basal cell layer, and in this layer keratinocytes acquire more cytoplasm and well-formed bundles of keratin intermediate filaments. As the keratinocytes are pushed further outwards, the proteins that constitute the cell envelope and the keratohyalin granules of the stratum granulosum are synthesized. The end stage of epidermal keratinocyte differentiation is a dense keratinous layer, the cornified layer, consisting of flake-like squames that are eventually shed. The total epidermal renewal time is about two months.
Approximately 25-40 days are required for the designated epidermal stem cells that underlie the constantly self-renewing epidermis to develop into cells of the granular layer, and passing through the cornified layer takes an additional 14 days. The melanocyte is a neural crest-derived cell that migrates into the epidermis and hair follicles during embryogenesis. Melanocytes are characterized by their unique organelle, the melanosome, which is essential for the melanin biosynthetic pathway. Melanocytes are dendritic cells that can produce, transport and deliver melanin pigment to the keratinocytes. Melanocytes reside in the basal layer. Each melanocyte has contact with some 30-40 keratinocytes through its dendrites. Melanocytes can be stimulated by ultraviolet radiation to increase melanin production and transport. Langerhans cells are of monocytic lineage, derived from the bone marrow. Langerhans cells appear as dendritic cells residing in the prickle cell layer of the epidermis and play a crucial role in the presentation of antigens that enter the skin or are produced within the skin. In the presence of inflammation, Langerhans cells become activated, leave the epidermis and migrate to draining lymph nodes. Merkel cells reside in the basal layer and are highly innervated neuroendocrine cells involved in mechanoreception. Merkel cells are often associated with epidermal appendages and nerve fibers. Merkel cells are most dense in volar skin. Located beneath the epidermis is the dermis, a vascularized connective tissue that provides nutritional and structural support. The dermis is composed of a mucopolysaccharide gel held together by a collagen and elastin matrix. Vascular structures, nerves and mast cells are found throughout the dermis, together with the other dermal resident cells: fibroblasts, dermal dendritic cells and macrophages.
The dermis can be roughly divided into an upper papillary dermis, consisting of finer fibrillar collagen, and a deeper located reticular dermis with dense collagen fibers. Below the dermis is the subcutis, consisting of mature adipose tissue arranged in lobules separated by thin fibrous septa. The skin also includes the skin appendages (hair, nails, eccrine and apocrine sweat glands and sweat ducts). Hair follicles include sebaceous glands and are connected to arrector pili muscle fibers. The development of skin appendages is based on epithelial-mesenchymal interactions that produce miniature organs, containing cells that follow specialized routes of differentiation, adding important functions to the skin.
Jupiter may have acted like a giant wrecking ball in the newborn solar system, roaming in to destroy an early generation of inner planets before retreating to its current orbit, researchers say. This finding could help explain why the solar system is so different from the hundreds of other planetary systems that astronomers have recently discovered, and it suggests that life as we know it on Earth might be rarer than previously thought, the scientists added. In the past two decades or so, researchers have confirmed the existence of more than 1,800 planets orbiting distant stars. These discoveries have included nearly 500 systems that, like our solar system, possess multiple planets.

Our strange solar system

These findings revealed that our solar system is very unusual. The typical planetary system is made up of a few super-Earths — rocky planets up to 10 times the mass of Earth — orbiting much closer to their stars than Mercury orbits the sun. These super-Earths are usually rich not only in rock, but also in so-called volatile materials that easily vaporize when heated. This means that super-Earths "tend to have very thick and massive atmospheres with pressures that exceed that of the Earth by factors of hundreds, if not thousands," lead study author Konstantin Batygin, a planetary scientist at the California Institute of Technology in Pasadena, told Space.com. In comparison, "the atmospheres of our terrestrial planets are exceptionally thin." Moreover, planetary systems that possess giant planets similar to Jupiter and Saturn typically have them much closer to their stars than in the solar system. Giant worlds known as hot Jupiters, whose orbits lie at only about one-tenth of Mercury's distance from the sun, are among the alien worlds that scientists have detected most often. "Our solar system is looking increasingly like an oddball," study co-author Gregory Laughlin, an astronomer at the University of California, Santa Cruz, said in a statement.
Now Batygin and Laughlin find that Jupiter's migrations toward and away from the sun might explain why the solar system is an anomaly. The researchers modeled a leading scenario for the formation of Jupiter and Saturn known as the "Grand Tack," wherein Jupiter arose first and migrated toward the sun until Saturn formed, which caused Jupiter to reverse course and migrate outward to its current orbit. They calculated what might happen if a set of rocky planets formed in the inner solar system before Jupiter migrated inward. In the early solar system, the sun was surrounded by a dense disk of gas and dust, and any rocky planets forming in the inner solar system would have accreted thick atmospheres from that gas, eventually becoming super-Earths like many of the exoplanets that astronomers have seen around other stars. However, as Jupiter moved inward, its gravitational pull would have slung these nascent inner worlds into close-knit, overlapping orbits. This would have set off a series of collisions that smashed these newborn worlds into pieces. "It's the same thing we worry about if satellites were to be destroyed in low-Earth orbit," Laughlin said in a statement. "Their fragments would start smashing into other satellites and you'd risk a chain reaction of collisions. Our work indicates that Jupiter would have created just such a collisional cascade in the inner solar system." The resulting debris would then mostly have spiraled into the sun. A second generation of inner planets would have formed later from the depleted material that was left behind. This would explain why Mercury, Venus, Earth and Mars are younger than the outer planets, and why they are both smaller and have much thinner atmospheres than the inner worlds seen in other planetary systems. "The results imply that our terrestrial planets formed after Jupiter's early migration wiped the slate clean and set the stage for the formation of gas-poor objects," Batygin said.
"The fact that all of these characteristics of the solar system turn out to stem from the same process is exciting — it is as if the scattered pieces of the puzzle are finally falling together into a coherent picture." "This kind of theory, where first this happened and then that happened, is almost always wrong, so I was initially skeptical," Laughlin said in a statement. However, "there is a lot of evidence that supports the idea of Jupiter's inward and then outward migration. Our work looks at the consequences of that. Jupiter's 'Grand Tack' may well have been a 'Grand Attack' on the original inner solar system."

Implications for life on Earth...and elsewhere

Jupiter-like planets are uncommon — "only about 10% of sunlike stars host them," Batygin said. This suggests "planetary systems like our own are also expected to be rare." In addition, only the formation of Saturn in the solar system pulled Jupiter back and allowed Mercury, Venus, Earth and Mars to form. One implication of these findings is that life as it is known on Earth might be rarer in the universe than previously thought. "While Earth-mass planets may indeed be plentiful in the galaxy, truly Earth-like planets, with low atmospheric pressures and temperatures on the surfaces, are likely an exception to the rule," Batygin said. "A distant analog that comes to mind is Venus — Venusian atmospheric pressure is 90 times greater than that of the Earth and the surface temperature is about 450 degrees Celsius (842 degrees Fahrenheit)." "Even with a relatively low-mass atmosphere, Venus is not hospitable to life as we know it. One can only imagine the kinds of extreme environments that are typical of extrasolar planets," Batygin said. "This is all to say that life that has evolved on Earth is not well suited to other planets. If solar system exploration and the search for exoplanets have taught us one thing, however, it is to never underestimate the physical diversity of planetary systems.
Therefore, extrasolar life, where it exists, will differ substantially from our common definition and thrive in its own unique environment that is unlike anything we are used to." Another potential consequence of these findings is that "Jupiter-like planets and populations of super-Earths should be mutually exclusive, and as a rule will not be hosted by the same stars," Batygin said. NASA's Kepler space observatory's "Second Light" mission can scan the skies to begin testing this prediction, and NASA's planned Transiting Exoplanet Survey Satellite (TESS) can explore it further, Batygin said. Batygin and Laughlin detailed their findings online March 23 in the journal Proceedings of the National Academy of Sciences. This article was originally published at Space.com.
The vertebral column, spinal cord, and nervous system have specific and important functions; the nerves that pass through the small opening at each spinal joint serve particular organs. The vertebral column serves five main functions. The spinal cord is housed in and protected by the vertebral column, and it performs two main functions. The peripheral nervous system includes the nerves used for communication to and from the brain, spinal cord, and all other parts of the body, including the internal organs, muscles, skin, and blood vessels. There are 31 pairs of spinal nerves along the spinal cord that carry signals between the body and the brain.
The World Health Organization defines health as a state of complete physical, mental and social well-being, and not merely the absence of disease (World Health Organization, 1946). Health is largely determined by the context of people’s lives, and people may not always have the capacity to directly control some of the factors determining their health (World Health Organization, 1946). However, it is important to note that there are different ways a society can conceptualize health, and there is a connection between the manner in which a society defines health and the manner in which it pursues it. This paper investigates the ways society understands health. The paper will also identify the determinants of health in humans and the link between health policies, health determinants and health.

Determinants of Health in Humans

Human beings are complex systems, and numerous factors work together to determine their health. Whether people are healthy or not is largely determined by their environment and circumstances. Factors such as the state of the environment where we live, education and income level, genetics, and relationships with family and friends play a significant role in determining health (World Health Organization, 1946). However, it is apparent that most people continue to maintain unhealthy practices that are likely to result in health-related problems. Penny et al. (1994) highlighted that the main reason why people continue to engage in unhealthy behaviors is inaccurate perceptions regarding susceptibility and risk. Most people tend to ignore their own risk-increasing behavior and concentrate on the risk-increasing behaviors of the people around them. Blum (1974, 1981) came up with a model in which he grouped the determinants of health into four divisions: lifestyle, environment, genetics and health care services.
The type of lifestyle an individual maintains helps determine his or her health status (Hart, 1997). For instance, an individual’s behaviour and coping skills, such as diet, physical activity, consumption of drugs and substances, and stress management, play significant roles in determining health. However, despite the fact that lifestyle is often viewed as an aspect of freedom, it is also influenced by personal skills and educational level (Hart, 1997). It is crucial to note that both the pre-natal and post-natal environments have significant impacts on the health status of an individual. There exists a significant relationship between the intrauterine environment and the health status of the fetus. For instance, maternal illness, high blood pressure and smoking can give rise to low birth weight infants, while maternal gestational diabetes can give rise to overweight infants (Barker et al. 1989a). The post-natal environment is further subdivided into the physical, biological and chemical environment, and the economic, social and psychological environment. Under the post-natal environment, some of the factors that can determine health include income levels, education, poverty, accessibility to clean air, safe water, healthy working places, good relationships with family and friends, cultural beliefs, crime and violence, environmental pollution, and food and agriculture. Genetic factors entail the genetic endowment of an individual, such as sex and biological age. All human diseases contain genetic components, whereby the host response, regarding the severity and extent of the effects, is a function of genetic susceptibility (Hart, 1997). Therefore, inheritance plays a significant role in determining the likelihood of developing a disease. In addition, genetics is also considered the most powerful arena in which medical intervention can be conducted to enhance an individual’s health.
However, medical intervention cannot alter an individual’s genetic constitution to enhance health; rather, it works through a greater understanding of the relationship between environmental and genetic factors (Hart, 1997). An individual’s health can also be determined by access to good, effective healthcare services (Bunker et al., 1995). Good accessibility and use of healthcare services that treat and prevent diseases can help in minimizing vulnerability.

Connection between how a Society defines Health and how it Pursues Health

It is apparent that society considers the more affluent to live longer and to be less susceptible to disease than the less fortunate people in society (Hattersley, 1997). Such differences have generated social injustices, and they depict some of the significant influences on contemporary society. However, it is worth noting that society recognizes the fact that an individual’s own behavior plays a significant role in determining his or her health status. Therefore, people tend to behave according to their own understanding of healthy practices, based on their social status in society. It is understood that medical care can help prolong survival in case of a serious disease. Poor living conditions result in poor health, while good living conditions promote healthy living (Hattersley, 1997). However, it is important to note that most societies have for a long time maintained cultural beliefs that were supposed to guide their living, for instance, the type of diet consumed. In addition, societies have had their own approaches to dealing with common types of diseases through local skills and resources, and some major ailments, such as cancer and diabetes, were traditionally associated with the violation of taboos.
Society pursues health as a condition determined by the environment in which an individual lives, works and plays, and by social status, which is connected to the way health is defined, considering the fact that the better-off are always considered to be healthier.

Influence of Technology

Health behaviors have had a significant role as far as the health of the population is concerned. The main view is that the invention of technology has changed the manner in which society defines health. For instance, the decline of diseases such as whooping cough, smallpox and tuberculosis has been associated with the establishment of medical interventions like vaccination and chemotherapy. Increased access to technology has transformed the perception of society towards health, so that people are today more conscious of their health. People are concerned about changing their behavior and adopting healthy lifestyles, changing their traditional beliefs and coping strategies, and adhering to medical recommendations.

Connection between Health Policies, Health Determinants and Health

Policies and actions for health have to be directed towards dealing with the determinants of health, in an aim to address the causes of diseases so as to prevent them from occurring (Abel-Smith, 1994). This is all geared towards ensuring that the risk of and susceptibility to diseases is minimized. Health experts highlight that recognizing the health impacts of social and economic policies has significant implications for the manner in which society makes its developmental decisions. It is therefore essential that decision-makers at all levels recognize the importance of investing in sustainable development and health (Abel-Smith, 1994). However, this depends on clear access to up-to-date information regarding the determinants of health, on which effective health policies are grounded.
It is important to note that translating scientific information into effective health actions and policies is a complex process (Abel-Smith, 1994). This becomes more challenging, especially when the policies are aimed at changing the manner in which society perceives health. Scientific knowledge regarding the determinants of health is today accumulating at a faster rate than ever. It is therefore important that health policies are updated to ensure they address the determinants of health, with the general aim of promoting a healthy society.
Primary Sources and Science
By Mark Newman & Carrie Copp

“The results were fantastic!” said high school physics teacher Casey Veatch after incorporating a Library of Congress primary source into his science lesson. Middle school science teacher Rebecca Prince further explains. “Primary sources always create what I like to call the ‘lean in factor:’ students sit up in their seats, lean forward on their desks, and engage in the discussions that revolve around the primary sources.” Analyzing historical primary sources about science expands critical thinking and promotes student inquiry, just as it does in other disciplines. Students can learn about the history and application of various scientific discoveries through the use of primary sources. Using historical primary sources in science instruction also builds important skills, such as observation and inference, that are integral to experimentation and the scientific method. Primary sources can appeal to all learners. They promote interdisciplinary instruction and involve students in learning content as well as building skills. Rebecca Prince, a middle school science teacher at Rhodes School in River Grove, Illinois, first used primary sources during her student teaching. “I really liked the way primary sources sparked critical thinking and interest in social studies among my students,” she commented. “Once I became a science teacher, I wanted to use primary sources from the Library to create authentic scientific inquiry experiences.” Prince’s current classroom includes students who are English language learners or have special needs. She has found images to be more effective learning aids than print resources because many of her students have low reading skills. Many science experiments, however, rely primarily on written procedures, which can be difficult for her students to follow.
Prince wanted to design a learning experience that built on student strengths and created an environment that encouraged them to strive for success in science. In one example of how Prince has used Library of Congress primary sources as the basis for scientific inquiry, she gave her students Samuel Morse's sketches as a model for building their own telegraphs. Prince used photographs to have students study the uses of the telegraph. "Then, as they became more adept at using primary sources to find information," Prince explained, "they studied Samuel Morse and his telegraph sketches." Having learned how to interpret primary sources in a meaningful way, Prince's students made working telegraphs. The Morse sketches were an inspiration for their creativity. Equally important, Prince noted, “They really enjoyed their experience. That exuberance for learning extended to the lab where they used what they had learned from primary sources to build their own telegraphs,” she said. “As a teacher, it was exciting to watch as my students took over their own learning.” Casey Veatch is a high school physics teacher at Bennett High School in Bennett, Colorado. He also is the District Librarian for the Bennett School District and is an Anatomy, Physiology, and Physics instructor for Morgan Community College. He and his wife Carrie, an online social studies teacher for Vilas Online School in Colorado, developed a lesson for his high school physics class that used Alexander Graham Bell’s science notebook from the Alexander Graham Bell Family Papers in the American Memory Collection. In her social studies classroom, Carrie observed that the lesson helped “the students make a connection to their prior knowledge about Bell and sound reproduction.
They had read about Bell and sound reproduction in their textbook, but the primary sources from the Library of Congress helped students make a personal connection to the concept and the scientist.” Bell was no longer an abstract historical figure; he had become a real person to the students. Casey had his physics students recreate Bell's tuning fork experiment to show that sound can be transmitted through a wire. He used the primary sources to further teach the scientific method. As was the case with Rebecca Prince’s science students, Casey’s students used their greater skill at interpreting primary sources to recreate Bell's experiment successfully. "These primary sources from the Library led students to connect, construct, and wonder," he noted. An important lesson for both groups of students concerned the often difficult path of scientific inquiry and experimentation. Casey commented that his students “could see the writing and sketches in his [Bell’s] notebook and wondered why some of the entries were scratched out as they attempted to follow Bell’s line of thinking.” Prince explained that her students followed a path similar to that of many inventors. “Since many first attempts did not succeed, my students learned to accept failure as a challenge rather than defeat. And, it definitely inspired them to work harder. Many of the students were successful in the end and they really enjoyed their experience.” Both Rebecca Prince and Casey Veatch continue to use primary sources from the Library of Congress in their science classes. As part of her seventh grade curriculum, Prince’s students are constructing a bridge using pasta. She plans to show them primary sources showing actual bridges to serve as guides and inspiration for scientific inquiry. Veatch is currently developing a lesson on the mechanics of flight using a letter from Orville Wright to his father describing a science experiment he performed as a young student.
He sees how this document can generate excitement among his students as they attempt to interpret the meaning behind Orville Wright’s experiment. Mark Newman is Director of the Federation of Independent Illinois Colleges and Universities’ Teaching with Primary Sources project and associate professor of Curriculum & Instruction at National-Louis University, Chicago, Illinois. Carrie Copp is a project associate on the Federation TPS project and a graduate student in the Elementary Education MAT Program at National-Louis University. Selected Science Resources from the Library of Congress Presentations and Activities: From Flight to Fantasy Resources from the Library of Congress documenting the history of flight. Presentations and Activities: What in the World is That? Ingenious Inventions throughout History Throughout history, creative men and women have developed ingenious inventions that have solved problems and changed people's lives. Use your observation skills in this matching activity to learn more about some of these wonderful innovations. Themed Resources: Science and Invention Learn about the early recording efforts of Emile Berliner, Bell’s experiments with the telephone, early aviation, and the history of household technology through presentations and primary source images, notebooks and letters. Study early environmental movements and photographs. Themed Resources: Nature and the Environment Study man-made and natural disasters, the origins of the American conservation movement, and view Landsat photographs, valued for aesthetics more than their contributions to geography. Use maps to trace the growth and unique features of the National Parks. Learn about nature writers and visual artists.
For most, a wrong turn just leads to frustration on the road. But in space, a wrong turn can mean never seeing the Earth again. For thousands of years, humans have been able to navigate the Earth accurately with minimal trouble. In most populated areas, pathways could easily set a lost explorer on the right path. Exploring new territories always comes with the risk of getting lost, though humans have proven to be decent natural navigators. The sun generally rises in the east and sets in the west, helping us figure out directions. Humanity’s passionate desire for exploration meant scientists spent their time crafting new directional tools. Then, on October 4, 1957, history changed when the Soviet Union successfully launched Sputnik I. The launch sparked a race of scientific evolution that would forever change the world. Satellites intrigued scientists. Scientific instrumentation could now be used to monitor the Earth and, more importantly, photograph it. Satellites continued to evolve, and in 1978 the first GPS satellite was launched by the US military. GPS is a system of satellites arranged so that at least four are in view from any point on Earth the majority of the time. Trilateration techniques, which compute a position from the measured distances to several satellites at known locations, were, and still are, used to determine the exact coordinates of a device back on Earth. Now, with the Earth photographed in its entirety and a system that could tell a person exactly where they stand, navigating the Earth became incredibly easy. The same could not be said for space exploration. Many navigation techniques on Earth rely on the planet’s magnetic field: compasses use the magnetic poles to determine direction. Leaving the Earth, however, requires rethinking how to get around without getting lost. So how does a probe like Juno successfully get to Jupiter without getting completely lost?
Bill Nye the Science Guy gives a full explanation of how modern technology keeps probes safe and on course while en route to distant planets. Sure, losing a probe in space costs NASA or an aerospace company millions of dollars, but the stakes will only get higher as people begin traveling farther than the moon. NASA engineers looked back to the beginning of navigation to design a map of the cosmos: the stars. Using images previously taken by satellites, the Apollo Star Chart was created. Courtesy of National Air and Space Museum, Smithsonian Institution. The chart was used to train Apollo 11 astronauts for their 1969 lunar landing mission. It labelled the stars with specific coordinates, which were actively relayed to the Apollo Guidance Computer as it took sightings with a sextant. All the data was relayed back to Earth, where it was converted into instructions to keep the spacecraft on track. Backup systems monitored the thrust to determine the spacecraft's orientation and heading. The system was not entirely reliable; however, in combination with the Apollo Guidance Computer, the crew successfully reached the Moon, almost 400,000 km away. While getting people to Mars seems promising (especially with the EmDrive’s latest news), accurate navigation could prove difficult. Could a new type of GPS be developed to aid humanity’s safe travels across the universe?
Motor skills develop rapidly during the first few years of life in most children. Although gross motor skills such as running, jumping and throwing a ball are commonly used standards of development, fine motor skills such as colouring, grasping and stringing beads are equally important. Although motor skills in young children change from year to year, there are general expectations for motor skill development in 2- and 3-year-olds. Drawing and Colouring During the second year of life, one hand generally emerges as the child's dominant hand. A 2-year-old will usually hold a crayon with the whole hand and can draw circles, dots and lines. By age 3, the child will hold a crayon or pencil with three fingers rather than the entire hand, and will begin using simple shapes to draw a person with a head. To foster the fine motor skills used for drawing and colouring, give your child opportunities to create crafts with beads, buttons and string, or to paint rocks and pictures using small brushes. Rolling, Throwing and Kicking Two-year-old children generally have the ability to toss or roll a large ball on the ground. As the child moves closer to the three-year mark, this progresses to throwing a ball underhand and kicking a ball forward. Parents and caregivers can encourage gross motor development by providing opportunities for children to play with balls and other large toys that can be pushed, thrown or kicked in a safe environment. Stacking blocks is a popular activity with 2- and 3-year-old children that helps develop fine motor skills. By the time a child is 3, she should be able to stack approximately nine blocks to make a tower. Although there are many commercial stacking blocks on the market, parents can also provide alternate materials such as plastic cups, books or containers of various sizes and shapes.
Walking and Climbing Although all children develop at different rates, most can walk forward and backward at the age of 2. Additionally, 2-year-olds learn how to climb onto furniture and walk up and down stairs with the use of a railing. At 3 years of age, most children no longer require the assistance of a railing when climbing stairs and are able to hop on one foot as well as balance on one foot for at least two seconds. This type of gross motor development can be encouraged by playing a game where different animals are dramatised. For example, practice hopping like a rabbit, waddling like a duck and slithering like a snake along with your child.
Your cardiovascular system is made up of the heart, blood vessels and blood. It is a system that never rests. It pumps 5 to 6 liters of blood per minute through your body and can pump as much as 30 liters per minute during times of extreme stress. In humans, the cardiovascular system is a closed system, meaning that the blood never leaves the blood vessels. This simple-seeming closed system, however, carries out a multitude of complex tasks. It transports nutrients, oxygen and hormones where they need to go. It protects your body from infections, toxins and blood loss. It even helps regulate body temperature, fluid balance and pH. The heart is a muscular organ in your chest, which controls the movement of blood in your cardiovascular system. Acting as a pump, it rhythmically moves all the blood in your body where it needs to go, through networks of blood vessels that branch out to and from all parts of your body. The heart is divided into two sides: right and left. The blood leaving the right side of your heart creates a circuit to and from the lungs, known as the pulmonary circulation. The blood leaving the left side of your heart creates a circuit to and from the rest of your body, known as the systemic circulation. Pulmonary circulation is the route your blood takes from your heart to the lungs -- to collect more oxygen -- and back again. Oxygen-poor blood leaves the right side of the heart and travels through the pulmonary arteries toward your lungs. When the blood reaches the lungs, it releases carbon dioxide, a waste product, and picks up more oxygen. The newly oxygen-rich blood can now travel back to the left side of the heart, via the pulmonary veins, to complete the circuit. From there, the blood is ready to go into systemic circulation. In systemic circulation, the oxygen-rich blood leaves the left side of the heart and travels to all other parts of your body through the arteries.
From there, it goes into smaller and smaller blood vessels and eventually enters tiny blood vessels called capillaries. At the level of the capillaries, cells send their carbon dioxide and other wastes into the blood, and receive the blood's oxygen. From there, the oxygen-poor blood travels back to the heart via your veins. These veins empty into the right side of the heart, where the blood enters the pulmonary circulation again, completing the circuit. Blood has many components. The majority of your blood is plasma, a liquid that makes up more than half of the blood volume. It is critical for maintaining blood pressure and regulating body temperature, and it carries all the cells and nutrients throughout the body. There are three main types of cells in the blood. Red blood cells carry oxygen and carbon dioxide to and from the body's tissues. White blood cells play a role in fighting off infections. Platelets help promote clotting at sites of injury.
Lake Vostok, also called Subglacial Lake Vostok or Lake East, largest lake in Antarctica. Located approximately 2.5 miles (4 km) beneath Russia’s Vostok Station on the East Antarctic Ice Sheet (EAIS), the water body is also the largest subglacial lake known. Running more than 150 miles (about 240 km) long with a maximum width of about 31 miles (50 km), the lake is roughly elliptical in shape, and it holds nearly 1,300 cubic miles (5,400 cubic km) of water. After decades of speculation and data gathering, the existence of the lake was confirmed in the mid-1990s by a combination of seismic and ice-penetrating radar surveys. Most scientists believe that the lake is the product of volcanic activity that melted a portion of the ice overhead. Some scientists maintain that the lake was isolated from Earth’s atmosphere after the EAIS formed more than 30 million years ago. Other scientists argue that the water making up the lake may be much younger, perhaps only about 400,000 years old. Most scientists, however, agree that Lake Vostok might harbour a unique freshwater ecosystem made up of organisms that evolved independently from other forms of life on Earth. The base of the lake’s food chain would need to derive its energy from chemical sources rather than from photosynthesis, and each organism in this environment would need to endure the pressure of 350 atmospheres (about 5,150 pounds per square inch) brought on by the weight of the ice sheet above. A Russian drilling project designed to retrieve ice cores below Vostok Station was initiated in 1990; the station was later found to sit directly above the lake. After the lake’s existence was revealed, the scientists continued to drill, ultimately penetrating some 12,366 feet (3,769 metres) of ice in February 2012 to reach liquid water.
Worries over possible contamination of the lake from the drill—as well as the freeze-resistant fluids, such as Freon and kerosene, used in the drilling process—were dispelled when the drill tip punched through the final layers of ice. Pressurized water from the lake rushed up the hole, which forced the drilling fluids upward and away from the lake, before freezing into a 100–130-foot- (30–40-metre-) long ice plug. Shortly after the drill reached the plug, however, the scientists left the station to escape the onset of the coldest part of the Antarctic winter. An ice core was removed from the plug in January 2013 and studied by a Russian team of scientists. In March of that year, after preliminary analyses of the samples taken from the ice core had been completed, Russian state media announced that evidence of bacterial DNA had been found, including at least one type that did not correspond to bacteria known to science. This discovery, however, was later called into question because of possible sample contamination. Several scientists have remarked that the effort to reach Lake Vostok could be a valuable planning and implementation tool for future space missions designed to search for life on worlds containing ice-covered oceans, such as those that occur on Jupiter’s moon Europa.
The most conspicuous movement representing the social aspects of Christianity in American and Canadian Protestantism in the late 19th and early 20th centuries. Washington Gladden (1836–1918), a Congregational minister and prolific author who defended the right of working people to form unions, is known as the ‘father’ of the Social Gospel. Josiah Strong (1847–1916) organized interdenominational gatherings that promoted the movement while he was secretary of the (American) Evangelical Alliance; Walter Rauschenbusch became its foremost prophet. It was influential in the Congregational, Episcopal, Baptist, Methodist, and Presbyterian Churches. Based largely on liberal theology, the movement had a high view of human nature and its potentiality, stressed the idea of progress, was reformist in tone, and had a somewhat utopian cast. It passed its zenith after the First World War, but left an important legacy in the thought of many Churches.
The problem to be solved was simple: what are the prime factors of the number 15? But it was the method that was neat: physicists did it with quantum calculation! Professor Andrew White, from UQ's Centre for Quantum Computer Technology, together with colleagues from the University of Toronto in Canada, said that by manipulating quantum-mechanically entangled photons – the fundamental particles of light – the prime factors of the number 15 were calculated. “Prime numbers are divisible only by themselves and one, so the prime factors of 15 are three and five,” Professor White said. “Although the answer to this problem could have been obtained much more quickly by querying a bright eight-year-old, as the number becomes bigger and bigger the problem becomes more and more difficult. “What is difficult for your brain is also difficult for conventional computers. This is not just a problem of interest to pure mathematicians: the computational difficulty of factoring very large numbers forms the basis of widely used internet encryption systems.” Ben Lanyon, UQ doctoral student and the research paper's first author, said calculating the prime factors of 15 was a crucial step towards factoring much larger numbers, which could be used to crack cryptographic codes that are unbreakable using conventional computers.
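The classical side of the problem is easy to reproduce. The sketch below is a plain trial-division factorizer, illustrative only and unrelated to the photonic experiment: it instantly finds 15 = 3 × 5, but because the number of divisions it tries grows with the square root of the number's smallest prime factor, the same approach becomes hopeless for the hundreds-of-digit numbers behind internet encryption.

```python
def trial_division(n):
    """Return the prime factors of n in ascending order.

    Repeatedly divides out each candidate divisor d; any n surviving
    past sqrt(n) must itself be prime. Fine for small n, useless for
    the cryptographically large numbers mentioned above.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # the remaining cofactor is prime
    return factors

print(trial_division(15))  # [3, 5]
```

Shor's quantum algorithm, which the photonic experiment implements for this smallest interesting case, factors an n-digit number in time polynomial in n rather than exponential, which is exactly why it threatens factoring-based encryption.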
A cochlear implant is a surgical treatment for hearing loss that works like an artificial human cochlea in the inner ear, helping to send sound from the ear to the brain. It is different from a hearing aid, which simply amplifies sound. A cochlear implant bypasses damaged hair cells in the child's cochlea and helps establish some degree of hearing by stimulating the hearing (auditory) nerve directly. Hearing loss is caused by a number of different problems that occur either in the auditory nerve or in parts of the middle or inner ear. The most common type of deafness is caused by damaged hair cells in the cochlea. The cochlea is a fluid-filled canal in the inner ear that is shaped like a snail shell. Inside are thousands of tiny hairs called cilia. As sound vibrates the fluid in the cochlea, the cilia move. This movement stimulates the auditory nerve and sends messages about sound to the brain. When these hair cells stop functioning, the auditory nerve is not stimulated, and the child cannot hear. Hair cells can be destroyed by many things, including infection, trauma, loud noise, aging, and birth defects. The first piece of a cochlear implant is the microphone. It is usually worn behind the ear, and it picks up sound and sends it along a wire to a speech processor. The speech processor is usually worn in a small shoulder pouch, pocket, or on a belt. The processor boosts the sound, filters out background noise, and turns the sound into digital signals. Then it sends these digital signals to a transmitter worn behind the ear. A magnet holds the transmitter in place through its attraction to the receiver-stimulator, a part of the device that is surgically attached beneath the skin in the skull. The receiver picks up digital information forwarded by the transmitter and converts it into electrical impulses. 
These electrical impulses flow through electrodes contained in a narrow, flexible tube that has been threaded into the cochlea during surgery and stimulate the auditory nerve. The auditory nerve carries the electrical impulses to the brain, which interprets them as sound. Despite the benefits that the implant appears to offer, some hearing specialists and members of the deaf community still believe that the benefits may not outweigh the risks and limitations of the device. Because the device must be surgically implanted, it carries some surgical risk. Also, it is impossible to be certain how well any individual child will respond to the implant. After getting an implant, some people say they feel alienated from the deaf community, while at the same time not feeling fully a part of the hearing world. The sounds heard through an implant are different from those sounds heard normally, and have been described as artificial or "robot-like." This is because the implant's limited number of electrodes cannot hope to match the complexity of a human's 15,000 hair cells. Cochlear implants are, however, becoming more advanced and providing even better sound resolution. During the procedure, the surgeon makes an incision behind the ear and opens the mastoid bone (the ridge on the skull behind the ear) leading into the middle ear. The surgeon then places the receiver-stimulator into a well made in the bone and gently threads the electrodes into the cochlea. This operation takes between an hour-and-a-half and five hours. It is performed using general anesthesia. 
Because the implants are controversial, very expensive, and have uncertain results, the United States Food and Drug Administration (FDA) has limited the implants to people for whom the following is true: - individuals who get no significant benefit from hearing aids - individuals who are at least 12 months old - individuals with severe to profound hearing loss Before a child gets an implant, specialists at an implant clinic conduct a careful evaluation, including extensive hearing tests to determine how well the child can hear. First, candidates undergo a trial with a powerful hearing aid. If the hearing aid cannot improve hearing enough, a physician then performs a physical examination and orders a scan of the inner ear, because some patients with a scarred cochlea are not good candidates for cochlear implants. A doctor may also order a psychological exam to better understand the person's expectations. Patients and their families need to be highly motivated and have a realistic understanding of what an implant can and cannot do. The child may remain in the hospital for a day or two after the surgery, although with improving technology and techniques some children may go home the same day. After about a month, the surgical wounds will have healed, and the child returns to the implant clinic to be fitted with the external parts of the device (the speech processor, microphone, and transmitter). A clinician tunes the speech processor and sets levels of stimulation for each electrode from soft to loud. The child is then trained in how to interpret the sounds heard through the device. The length of the training varies from days to years, depending on how well the child can interpret the sounds heard. With the new approval for using cochlear implants in children as young as 12 months of age, the toddler may not be trained specifically to interpret the sounds in the same way an older child would. 
The specific therapy that is recommended is highly dependent on the age of the child. As with all operations, there are a few risks of surgery. These include the following: - facial paralysis (which is rare and usually temporary) - infection at the incision site Scientists are not sure about the long-term effects of electrical stimulation on the nervous system. It is also possible that the implant's internal components may be damaged by a blow to the head, which may cause the device to stop working. In general, the failure rate of the implants is only 1 percent after one year. There is increasing debate about the use of cochlear implants in infants. This is considered by some to be desirable because, if the implantation is done before a child has begun to significantly acquire language, there is some evidence that the child may be able to develop at a pace similar to hearing children of the same age. Making a decision about whether or not a child, especially a very young one, should have a cochlear implant can be very difficult. The child's doctor may be able to provide parents with resources or put them in contact with other parents who have had to make the same decision. Cochlea —The hearing part of the inner ear. This snail-shaped structure contains fluid and thousands of microscopic hair cells tuned to various frequencies, in addition to the organ of Corti (the receptor for hearing). Hair cells —Sensory receptors in the inner ear that transform sound vibrations into messages that travel to the brain. Inner ear —The interior section of the ear, where sound vibrations and information about balance are translated into nerve impulses. Middle ear —The cavity or space between the eardrum and the inner ear. It includes the eardrum, the three little bones (hammer, anvil, and stirrup) that transmit sound to the inner ear, and the eustachian tube, which connects the middle ear to the nasopharynx (the back of the nose). See also Hearing impairment.
Tish Davidson, A.M.; Carol A. Turkington
Potassium perchlorate (KClO4) is an inorganic substance belonging to the perchlorate family of salts. It is typically found as a colorless crystalline solid and is used in numerous industrial applications. KClO4 is manufactured by the reaction of KCl with sodium perchlorate. Potassium perchlorate is a strong oxidizer and reacts explosively with organic compounds (carbon-containing compounds such as sugars and plastics). It is used in the manufacture of explosives primarily for this strong oxidizing power. Potassium perchlorate is used as an antithyroid agent for the treatment of hyperthyroidism, the condition that results when the thyroid gland produces an excessive amount of hormones (thyroxine and triiodothyronine). Hyperthyroidism excites various systems of the body with a result resembling an adrenaline overdose. It overstimulates metabolism, increases the heart rate, causes anxiety and tremors, and results in diarrhea and weight loss. Potassium perchlorate suppresses the overproduction of thyroid hormones to bring the system back to equilibrium. Potassium perchlorate is a potent oxidizing agent that reacts readily with many naturally occurring substances. An oxidizing agent, or oxidizer, is a substance that transfers oxygen atoms to its reactant, thereby stimulating the combustion (burning) of organic materials. Its oxidizing properties are used in the manufacture of fireworks, safety matches, rocket propellants, signal flares and explosives. Potassium perchlorate is also used as a disinfectant, an agent that inhibits, neutralizes or destroys harmful microorganisms; it can sterilize by destroying microbial life. Potassium perchlorate is used in rocket propellants. A propellant is a fuel used by rockets for propulsion. Typical rocket fuels include paraffin, kerosene, liquid hydrogen and alcohol. Propellants require an oxidizing agent to burn and provide thrust.
Potassium perchlorate makes a good oxidizer for rocket propellants because it burns at a fast rate, burns without leaving behind dead weight (ash or residue), has a high calorific value (or heating value, i.e., the amount of heat released during combustion) that increases the efficiency of the fuel, and produces large volumes of gas for every gram of fuel combusted. Potassium perchlorate is used to make protective breathing equipment carried in fighter aircraft in the event of depressurization. Other uses include electronic tubes, nuclear reactors, additives for lubricating oils, rubber manufacture, aluminum refining, fixing dyes for fabrics, finishing and tanning leather, electroplating, and the production of enamels and paints. About the Author Natasha Gilani has been a writer since 2004, with work appearing in various online publications. She is also a member of the Canadian Writers Association. Gilani holds a Master of Business Administration in finance and an honors Bachelor of Science in information technology from the University of Peshawar, Pakistan.
Science: We learn science through doing. Based on children's interests, we set up observation stations, sensory stations, weather demos, and experiments. For example, we learn about chemistry while cooking, biology by observing animals and insects, and human aging through interactions with our grand friends. Technology: Using age-appropriate tools, children learn about lasers during spy week and design by building leprechaun traps around Saint Patrick’s Day. They play with state-of-the-art puzzles, toys, and interact regularly with our up-to-date and modern equipment. They even get to see real fire trucks and learn how they work. Engineering: Preschoolers learn engineering by developing obstacle courses, performing sink/float experiments, engaging in building projects, discovering how things work, and fixing broken materials and toys. Arts: We offer open-ended graphic art projects and child-selected art media, and we display the completed projects for other students and parents to view. We also incorporate singing, dancing and yoga into our curriculum. Storytelling is another important element of our arts program, and includes extending, retelling, and acting out stories. Math: Children have multiple opportunities to learn math skills. We measure while cooking, count objects, and learn the days of the week. In addition, sorting and pattern recognition activities help develop a foundation for learning math in elementary school.
Improving learning outcomes within a language program involves a multifaceted approach that combines diverse strategies tailored to the needs of learners. Here are several practical ways to enhance learning within a language program: Use Authentic Materials Incorporating real-life materials into language lessons is pivotal for providing learners with authentic language exposure and cultural insights. Utilizing newspapers, movies, podcasts, and social media allows learners to engage with language usage in its natural context, offering exposure to colloquial expressions, idiomatic phrases, and cultural nuances that textbooks may not cover. Newspapers provide current events and diverse writing styles, while movies and podcasts offer auditory comprehension and conversational language practice. Additionally, social media platforms present informal language usage and contemporary cultural references. By integrating these real-life materials into lessons, educators offer a holistic language learning experience, enabling learners to grasp the language’s practical application and cultural intricacies. This exposure enhances their linguistic competence, fosters cultural understanding, and equips them with the skills needed for effective communication in real-world contexts. Harnessing technology within language programs revolutionizes the learning landscape, offering an array of innovative tools that cater to diverse learning styles and preferences. Language learning apps, interactive websites, and multimedia resources serve as dynamic platforms that facilitate self-paced learning while providing varied and engaging experiences. Language learning apps, such as AppsAnywhere, offer interactive lessons, quizzes, and games, making learning accessible anytime, anywhere. These apps often adapt to learners’ proficiency levels, offering personalized learning pathways. 
Online platforms and interactive websites present a plethora of resources including videos, podcasts, forums, and live sessions with native speakers. These platforms foster immersive experiences, allowing learners to practice listening, speaking, reading, and writing skills in real-world contexts. Multimedia resources encompass a wide range of materials such as videos, audio recordings, and digital textbooks. These resources diversify learning approaches, catering to visual, auditory, and kinesthetic learners, thereby enhancing comprehension and retention. By integrating these technological tools into language programs, educators provide learners with versatile resources that complement traditional teaching methods. This fusion of technology and language learning not only fosters autonomy but also enables learners to explore and engage with the language at their own pace, ensuring a more interactive, engaging, and effective learning experience. Cultural immersion serves as a cornerstone in language learning, fostering a deeper understanding of the target language’s essence beyond its linguistic aspects. Organizing events, inviting guest speakers, or arranging field trips are invaluable methods to immerse learners in the cultural context, traditions, and practices associated with the language they are studying. Events celebrating cultural festivals, language-specific workshops, or interactive sessions featuring art, music, and cuisine serve as windows into the rich tapestry of the target culture. Guest speakers, whether natives or experts, offer firsthand insights, personal narratives, and diverse perspectives, enriching learners’ understanding of cultural nuances. Field trips to museums, cultural sites, or community centers authentically showcase the heritage and customs intertwined with the language being studied. 
These experiences offer tangible encounters with traditions, rituals, and everyday practices, allowing learners to absorb the cultural fabric in a more profound and memorable way. By facilitating such immersive experiences, language programs not only promote cultural appreciation but also enhance linguistic comprehension. Immersion in the target culture encourages learners to embrace cultural diversity, fostering empathy, open-mindedness, and a more comprehensive grasp of the language’s intricacies. Ultimately, these initiatives cultivate well-rounded language learners who can navigate language barriers while appreciating the rich heritage embedded within the language they study. Customizing lessons to cater to various learning styles and individual preferences is integral to effective teaching. Providing diverse learning opportunities enables learners to engage through methods that suit their strengths, be it visual, auditory, kinesthetic, or other styles. By incorporating self-assessment tools, learners gain autonomy in evaluating their progress and setting achievable goals, fostering a sense of ownership in their learning journey. Personalized feedback reinforces their efforts, offering specific guidance for improvement, thereby enhancing motivation and progress. This tailored approach not only acknowledges individual differences but also empowers learners, encouraging active participation and a deeper investment in their language learning experience. Ultimately, it transforms the learning process into a dynamic and fulfilling endeavor, ensuring that learners feel supported and motivated to achieve their language acquisition objectives. Language Learning from Anywhere By integrating these diverse strategies, language programs can establish an immersive and supportive environment conducive to robust learning. This comprehensive approach fosters not only active participation but also cultivates a deeper understanding of cultural nuances. 
Customizing the learning experience to suit learners’ individual needs ensures that language acquisition becomes an engaging and meaningful journey. Ultimately, this amalgamation of practical methods empowers learners, instilling confidence and a sense of accomplishment, making language acquisition a fulfilling and lifelong endeavor. Through a blend of interactive activities, technology integration, cultural exposure, and personalized learning, language programs can effectively nurture a vibrant learning ecosystem, enabling learners to thrive and excel in their language acquisition journey.