Evolutionary biologists struggled for a long time to categorize the hoatzin, perhaps the strangest bird in the Amazon basin. Studies of the hoatzin (Opisthocomus hoazin) have shown that it has no close relatives, and biologists have been debating its heritage since the species was first described by the German zoologist Statius Muller in 1776. While the taxonomic status of the hoatzin remains disputed, genetic research published in 2015 concluded that the hoatzin is the last surviving member of a genus of birds that branched off and evolved separately some 64 million years ago, around the time the dinosaurs were wiped out. Such conclusions, together with the fact that hoatzin chicks possess claws on their wings, have led some observers to describe this species as a “living fossil”. In fact, the claws its young are born with appear to be a quite recent adaptation. The hoatzin is practically flightless even as an adult, and the young use their claws to hold onto the tree branches they roost in.

The hoatzin inhabits wetland areas of the Amazon basin and the Orinoco basin in Venezuela. In Peru’s Tambopata National Reserve, it can be seen roosting in the low trees that grow around the many oxbow lakes common to this part of South America’s tropical forests. It is one of the forest’s noisiest birds, producing a quite un-birdlike grunting sound as it hops among branches, snapping off the leaves it feeds on. Unusually among the birds of the Amazon, the hoatzin is primarily an herbivore, and only very rarely will it eat insects.

The hoatzin’s unusual herbivorous digestive system has led to it also being known as the “stink bird”. Uniquely among birds, the hoatzin employs a bacterial fermentation process to break down the vegetable matter it consumes, in the same way that ruminants such as cattle digest their food. This unusual digestive system gives the bird such a foul odor that it is rarely, if ever, hunted by humans for food.
Perhaps precisely because it has never been hunted, the hoatzin appears barely disturbed by human visitors to its wetland habitat, hardly bothering to flap and clamber a few meters away when visitors’ boats approach. In fact, the hoatzin’s unique, double-chambered digestive tract takes up so much space in its sternum that its flight muscles have been displaced, meaning that the bird flies extremely poorly. While certainly not as beautiful as macaws or toucans, the hoatzin is not an unattractive bird. About the size of a turkey, it has a maroon face topped by an unusually large crest, while its plumage ranges from dark, almost black, on its flanks and the underside of its wings to a rich, reddish-brown color. However, thanks to its terrible smell and the reportedly bad taste of its flesh, the hoatzin is not endangered, and most visitors to the wetlands of Tambopata National Reserve will spot it among the lakeshore vegetation.
A supermassive black hole sits at the center of our galaxy, and it is usually fairly quiet, so astronomers took notice when the black hole, called Sagittarius A*, lit up brighter than normal in May. The closest supermassive black hole to Earth, Sagittarius A*, or Sgr A*, suddenly became 75 times brighter than normal in the near-infrared region of the light spectrum for two hours on May 13, a team of scientists has found. "The black hole was so bright I at first mistook it for the star S0-2, because I had never seen Sgr A* that bright," Tuan Do, an astronomer and lead author of the paper, told ScienceAlert. "I knew almost right away there was probably something interesting going on with the black hole." So far, nobody knows exactly why it lit up, but the leading theory involves two objects that passed by it in 2014 and 2018. The researchers think such an interaction could have caused the bright flash. Specifically, they said, an interaction with a nearby star that passed close to Sgr A* in 2018 could have disturbed gas flows at the edge of the black hole's grasp. They also pointed to a dust cloud that passed near Sgr A* in 2014 but didn't get dramatically torn apart the way astronomers thought it would. The brightness could be a "delayed reaction," they wrote.
Wetlands in warm ecoregions - overview
Med-region riparian zones differ from their mesic temperate and tropical counterparts in several key ways. Regionally, they support a dense and productive closed-canopy forest ecosystem relative to the surrounding landscape, which is typically a matrix of xeric woodland, shrub, and grassland communities. Optimal conditions of sunlight, nutrients, and water support high productivity and a forest canopy heterogeneity that is typically more complex than in adjacent upland areas. All med-regions support distinct riparian flora, although many genera have invaded across regions. Plant species in all regions are adapted to multiple abiotic stressors, including dynamic flooding and sediment regimes, seasonal water shortage, and fire. Climate change resulting from increased anthropogenic greenhouse gas emissions is projected to have a particularly strong effect on med-regions. With an average temperature rise of 2°C or more in the Mediterranean basin, decreased precipitation is projected, along with increased frequency and duration of droughts and desertification. An increased risk of inland flash floods from the intensification of extreme events, and greater fire frequency under a warmer and drier climate, can potentially affect riparian community composition and succession, vegetation structure, and carbon storage. For further reading: Stella, J. C., Rodriguez-Gonzalez, P. M., Dufour, S., & Bendix, J. (2012). Riparian vegetation research in Mediterranean-climate regions: common patterns, ecological processes, and considerations for management. Hydrobiologia. doi:10.1007/s10750-012-1304-9
Unraveling the Mysteries of Black Holes
Peering into supermassive black holes and picking through the remains of exploded stars is among the detective work the NuSTAR telescope performs. Launched in June 2012, the comparatively small telescope uses high-energy X-rays to penetrate dust and gas and get a clear look at some of the densest, hottest regions of the universe, says Fiona Harrison, the astrophysicist who developed NuSTAR and serves as the principal investigator of its NASA mission. NuSTAR recently caught a black hole in the act of blurring X-ray light. Harrison discusses how this and other new findings on the nature of black holes are shaping our understanding of how the universe formed. Fiona Harrison is the Benjamin M. Rosen Professor of Physics and Astronomy at the California Institute of Technology in Pasadena, California.
Global habitat loss still rampant across much of the Earth
As 196 signatory nations of the Convention on Biological Diversity (CBD) meet this week in Cancun, Mexico, to discuss their progress towards averting the current biodiversity crisis, researchers from a range of universities and NGOs report in the international journal Conservation Letters that habitat destruction still far outstrips habitat protection across many parts of the planet. The researchers assessed rates of habitat conversion versus protection at a 1 km resolution across the world's 825 terrestrial ecoregions (areas that contain unique communities of plants and animals) since the CBD was first ratified in 1992. They showed that while there have been considerable gains in global efforts to increase the size of the protected area estate, alarming levels of habitat loss still persist. They discovered that over half the planet can be classified as completely converted to human-dominated land use, with 4.5 million square kilometers (an area two thirds the size of Australia) converted in the past two decades alone. "As a consequence of past and recent habitat loss, almost half of the world's ecoregions must now be classified as at very high risk, as they have had 25 times more land converted than protected," said Dr. James Watson of the University of Queensland and the Wildlife Conservation Society, the study's lead author. These highly converted and poorly protected ecoregions occur across all continents and dominate Europe, south and Southeast Asia, western South and North America, western Africa, and Madagascar. "It is now time political leaders recognize that simply chasing protected area targets while ignoring the impacts of rampant habitat loss is not a good solution for much of the world's imperiled species," said Dr. Oscar Venter, of the University of Northern British Columbia, the study's senior author.
"We need to specifically target protected areas to places where habitats are disappearing, before it is too late." The researchers identify 41 ecoregions across 45 nations that are in a 'crisis state', where humans have converted more than 10 percent of the little remaining habitat in just the last two decades. "These crisis and at-risk ecoregions are clearly the places where targeted conservation interventions need to be prioritized," said Dr. Watson. "But this means a rethink in how nations do conservation planning. Nations tend to place protected areas in remote locations, where nobody else is vying to convert the land. This does not help save threatened biodiversity, and we must urgently start strategically placing new protected areas in places that will be destroyed without conservation action."
Sometimes when you do some research – actually, quite often – you find out some really interesting stuff and end up changing your mind. In my story, I had some people on the ground on Mars, and wanted a spacecraft in a geostationary orbit above them to give them communications at all times. Just for info, when talking about geostationary orbits, the accepted term for Mars is aerostationary. I’ll use geostationary and geosynchronous because it’s my blog, and although the aero prefix is accepted, it isn’t mandatory. Since I’ve written about orbits before (including around Mars), figuring out the orbital parameters is easy enough for me, but I decided to check my results against published scientific papers. I like to be thorough in my research. This resulted in a delightful piece of serendipity, which I’ll talk about in a moment, and turned up some fascinating facts about these peculiar types of orbit that I’d never heard of. To begin, I should explain the difference between geostationary and geosynchronous. I’ll use Earth as an example to make the explanations easier. The International Space Station orbits at an altitude of 250 miles and completes an orbit once every 92 minutes. The further the orbit is from the planet, the weaker the gravitational influence, and so the orbital speed is reduced. So the satellite is moving slower, but the circumference of the orbit is getting bigger, hence the orbital period – the time to complete a single orbit – gets longer. Keep moving out, and eventually you reach a point where the period of the orbit equals one day. This is a geosynchronous orbit, because the period of the orbit and the rotation of the planet are synchronised. A geosynchronous orbit keeps you over the same point of land perpetually only if it is also equatorial; that special case is a geostationary orbit. A polar orbit can also be geosynchronous, but it can never be geostationary. Subtle but important difference. So, why isn’t it simple? With Earth, there are various factors that affect the satellite.
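The relationship sketched above – period growing with altitude until it matches the planet's rotation – is Kepler's third law, and the synchronous altitude falls straight out of it. A minimal sketch in Python, using textbook values for Earth's gravitational parameter and sidereal day (my constants, not figures from this post):

```python
import math

# Kepler's third law, T = 2*pi*sqrt(a^3 / mu), rearranged to find the
# semi-major axis a of a circular orbit with a given period T.
def synchronous_altitude_km(mu_m3s2, rotation_period_s, radius_km):
    """Altitude (km) of a circular orbit whose period matches the planet's rotation."""
    a_m = (mu_m3s2 * rotation_period_s**2 / (4 * math.pi**2)) ** (1 / 3)
    return a_m / 1000.0 - radius_km

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter GM, m^3/s^2
SIDEREAL_DAY = 86164.1      # Earth's rotation period, seconds
R_EARTH = 6378.1            # Earth's equatorial radius, km

print(round(synchronous_altitude_km(MU_EARTH, SIDEREAL_DAY, R_EARTH)))  # ~35786
```

This lands at the familiar geostationary altitude of about 35,786 km; swapping in Mars' gravitational parameter and the length of a sol gives the aerostationary altitude instead.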
The Moon is one of the biggest influences – if it can affect tides on vast bodies of water even further away, it can affect a puny satellite. Venus may have a small effect at various times too, but the biggest factor turns out to be the Earth itself, because it’s not a perfectly smooth sphere. This means that gravity is not uniform all over the planet. Imagine what that means to the poor old satellite. It’s at the perfect spot for gravity to create a perfect orbit, but as it passes over another spot, the gravity changes slightly and it’s suddenly not in that perfect spot anymore. Remember also that the Earth wobbles. All these influences wreck any chance that a satellite can maintain an orbit just by being there, so the craft has to periodically adjust position by increasing speed (or decreasing it) to get back on station. The amount of speed change required to maintain station is known as delta-v. For example, a satellite may need 22 metres per second of delta-v per year to stay on station. If you can calculate how much delta-v you need to maintain an orbit, you can plan on having enough fuel on board the craft to give it a decent service lifetime. The lower the delta-v, the longer your satellite lives for a given amount of fuel. Since the variations in Earth’s gravity field are the most significant factor, clever people with letters after their name did some research and found something rather interesting. There are two points on the Earth where gravity is strongest, and they happen to be opposite each other – one on each side of the world. These two points they named the unstable points. Halfway between them are two matching points – again, opposite each other – known as the stable points. The stable points represent the lows in the gravity field. If you are at the right altitude for a geostationary orbit, and above one of these four points, your delta-v is going to be lower than anywhere else.
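Turning an annual delta-v budget like that 22 m/s figure into a fuel load is what the Tsiolkovsky rocket equation is for. A rough sketch, with a hypothetical 500 kg satellite and a typical ~220 s specific-impulse hydrazine thruster (the specific numbers are my assumptions, not from this post):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_per_year_kg(dry_mass_kg, dv_per_year_ms, isp_s):
    """Propellant burned per year of stationkeeping, via the Tsiolkovsky
    rocket equation: dv = ve * ln(m0 / m1)."""
    ve = isp_s * G0                        # effective exhaust velocity, m/s
    mass_ratio = math.exp(dv_per_year_ms / ve)
    return dry_mass_kg * (mass_ratio - 1)  # approximation: treats dry mass as constant

# Hypothetical 500 kg satellite with a 22 m/s per year budget
print(round(propellant_per_year_kg(500, 22.0, 220), 1))  # ~5 kg per year
```

A few kilograms a year, which is why a modest tank translates into a service life of a decade or more – and why shaving the delta-v budget matters so much.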
If you are between any of these points, then the satellite will tend to drift toward the stable points, increasing delta-v. It turns out that Mars has this issue too – two stable and two unstable points in the same configuration. Since the delta-v would be lowest at these points, and therefore less effort would be required to maintain station, they would be the go-to places for a geostationary Martian satellite. Mars is quite different from Earth. It doesn’t have a super-massive moon, for starters, and it’s also less spherical than Earth, but the basic issue of delta-v is the same. As I just said, it is less spherical, but the gravitational anomalies are much bigger in comparison to Earth's. There is also the fact that since Mars is so much smaller, and that it is far less dense, the geostationary orbit is much closer to Mars. Closer means those gravitational anomalies have a bigger influence. Aside from the drift away from a stable point toward an unstable point - an east or west drift - there is also a tendency to drift northward which has to be countered. Back to that delightful piece of serendipity I mentioned. It turned out that the people on the Martian surface were almost exactly on one of the unstable points. Yippee! Unfortunately, because of the wild variance in the Martian gravity field, maintaining station in a geostationary orbit around Mars turns out to be very difficult. It is simply too easy to begin sliding off-station. It would just be too much effort. So, back to the drawing board. Let’s tackle this another way. Another orbit, what you might call a ‘regular’ orbit, doesn’t have this problem. At least, not so much. One suggestion was to use a lower orbit at an elevation of 5,000km. Consider that at the geostationary orbit (17,025km), the satellite can see 75 degrees of the planet either side of the point it is above. That’s a 150 degree spread. At 5,000km, this reduces to 107 degrees. 
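The visibility figures in this discussion come from simple spherical geometry: from altitude h, the geometric horizon lies at a central angle of arccos(R/(R+h)) from the sub-satellite point. A quick sketch for Mars (the post's quoted spreads are somewhat narrower than the bare geometric horizon, possibly because they assume a minimum elevation angle above the horizon):

```python
import math

R_MARS_KM = 3396.2  # Mars' equatorial radius, km

def horizon_half_angle_deg(alt_km, radius_km=R_MARS_KM):
    """Central angle from the sub-satellite point to the geometric horizon."""
    return math.degrees(math.acos(radius_km / (radius_km + alt_km)))

def visible_fraction(alt_km, radius_km=R_MARS_KM):
    """Fraction of the planet's surface inside that horizon (a spherical cap)."""
    return alt_km / (2 * (radius_km + alt_km))

for alt in (17025, 5000, 4697):  # areostationary altitude vs the two lower options
    print(alt, round(horizon_half_angle_deg(alt), 1), round(visible_fraction(alt), 3))
```

At 4,697 km this gives a visible fraction of about 29%, matching the post's figure; the higher the orbit, the larger the cap, but with rapidly diminishing returns.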
A 5,000km orbit has a period of approx 0.26 days (that’s Earth days, not Martian ones, which are about forty minutes longer). Using my orbital calculator, I finessed the orbit down to 4,697km. That gives it an orbital period of 6hrs 9mins and 13secs. If you do the sums, you’ll see that four orbits come to approximately 24hrs 37mins – close to the length of a Martian day (about 24hrs 39.6mins). So now we have roughly four orbits per day. Any part of the planet the satellite can see is, from an observer’s point of view on the ground, a place where the satellite is above the horizon. At 4,697km the satellite can see 29% of the Martian surface, with a spread of 104.45 degrees. However, that is its visual footprint. It probably has a radio transceiver aboard to talk to objects/people on the ground, and that is very likely to have a much smaller radio footprint. On Earth, this is typically 60 degrees, and if our satellite had such a radio footprint, it would be high in the sky when radio contact could be established, reducing the ill effects of mountains etc. casting radio shadows. The satellite will be above the same point on the ground every 6hrs 9mins and 13secs. The radio footprint is 60 degrees, which is exactly one sixth of a circle. This means a ground station will be in the radio footprint for approximately one hour, and it will be in this footprint four times a day at roughly the same times each day. One hour is a pretty decent amount of time to talk to an orbiting crew, and you get four chances a day. Contact, whilst not continuous, is actually pretty good, and the orbit will only require minimal boosting once every few weeks or so, perhaps less, and for a minimal amount of fuel. It may not be as glamorous as a geostationary orbit, but technically and logistically it’s an easier one, without sacrificing much communications ability. So, yeah. I went with it.
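The four-orbits-per-sol claim is easy to check with Kepler's third law and standard values for Mars (my constants, so the result differs from the post's 6 hrs 9 mins by a minute or two, mostly down to the assumed radius and GM):

```python
import math

MU_MARS = 4.282837e13   # Mars' gravitational parameter GM, m^3/s^2
R_MARS_M = 3396.2e3     # Mars' equatorial radius, m
SOL_S = 88775.244       # length of a Martian day (sol), seconds

def period_s(alt_km):
    """Circular orbital period from Kepler's third law, T = 2*pi*sqrt(a^3/mu)."""
    a = R_MARS_M + alt_km * 1000.0
    return 2 * math.pi * math.sqrt(a**3 / MU_MARS)

T = period_s(4697)
print(T / 3600)       # ~6.1 hours per orbit
print(4 * T / SOL_S)  # ~1.0, i.e. four orbits per sol
```

Four orbits come out within half a percent of one sol, so a small trim of the altitude locks the repeat-ground-track pattern in exactly.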
One of the basic tools of the material handling industry, belt conveyors are most commonly used to transport bulk materials (grain, salt, coal, ore, sand, etc.). Belt conveyor systems consist of two or more pulleys (a.k.a. drums), with an endless loop of carrying medium – the conveyor belt – rotating about them. To move the belt and the material it carries forward, one or both pulleys are powered. The powered pulley is called the drive pulley; the unpowered one is known as the idler pulley. Belt conveyors used for general material handling, such as those moving boxes along inside a facility, form a different class from those used to transport large volumes of resources and agricultural materials. Depending on the intended use, conveyor belts are manufactured from either PVC or rubber.
Grand Canyon has so much more than pretty scenery. It contains an amazing diversity of rock formations with an abundance of fossils hidden within. What about dinosaur fossils? Not at Grand Canyon! The rocks of the canyon are older than the oldest known dinosaurs. The nearest places to see dinosaur fossils are the Triassic-aged Chinle Formation on the Navajo Reservation and at Petrified Forest National Park. It is illegal to dig up, relocate, and/or remove fossils from Grand Canyon National Park. If you find a fossil, please leave it for others to discover and scientists to study. You are welcome to take a picture or make a drawing of the fossil, then go to one of the visitor centers to see if a park ranger can help you identify it. Fossils are the preserved remains of ancient life, such as bones, teeth, wood, and shells. Trace fossils represent the presence or behavior of ancient life, without body parts being present. Footprints, worm burrows, and insect nests are examples of trace fossils. Sedimentary rock contains fossils because it was built up layer upon layer, often trapping and preserving animals, plants, footprints, and more within the layers of sediment. If all the conditions are right, fossils form as the layers of sediment turn into rock. With 32% of Earth’s geologic history and one billion years of fossil life found at Grand Canyon, this is a great place to study ancient environments, climate changes, life zones, and the geologic processes that formed the landscape as we see it today. The following are the most common and well-known groups of fossils found at the canyon. Many more await discovery. With marine environments creating many of the sedimentary rock layers in the canyon over the past 525 million years, marine fossils are quite common. Species changed over time, but similar fossils can be found in most of the marine-based rocks at Grand Canyon.
Grand Canyon’s oldest trilobites are found in the Tonto Group, which is between 525 and 505 million years old. It includes the Tapeats Sandstone, Bright Angel Shale, and Muav Limestone. These fossils are arthropods, or joint-footed animals, with a segmented body of hinged plates and shields. They could curl up into a ball for protection, sometimes fossilizing as a "rolled" trilobite. Like arthropods today, trilobites molted as they grew, shedding their old exoskeleton. These molts could fossilize, so one animal could leave several different-sized fossils behind. Even though trilobites were relatively primitive animals, they had amazingly complex eyes. Many species had faceted eyes like an insect’s, using up to 15,000 lenses in one eye. Though plant-like in appearance, crinoids, or sea lilies, were animals, sometimes described as sea stars on a stick. They had structures like “roots” that could hold them in place, collect food, circulate fluid, and even act like feet in some species so they could walk across the sea floor. They had a “stem”, or column-shaped body, created by a series of discs stacked together with a central nerve running through. At the top of the body was a cup-like head with feeding structures radiating out from it. These feathery arms had some structural support and could be used in some species for crawling or swimming, though they were primarily used for filtering and capturing food from the water. In the ancient seas these crinoids were so plentiful they formed "gardens" on the sea floor. Discs, individually or sometimes still stacked together, can be found in all the marine layers at Grand Canyon. These were the hardest parts of the animal and the most readily preserved as fossils. The most common shelled animal in the ancient seas was the brachiopod. Of the roughly 20,000 known species of brachiopods, only about 300 exist today. They are found in every Paleozoic marine layer at the canyon.
Brachiopods had two asymmetrical shells, or valves, with one larger than the other. They often fossilized whole because when their muscles were relaxed, as in death, the valves were closed. They contracted their muscles to open the valves and filter feed. They lived on the ocean floor, attaching themselves with strong threads or using the shape of the shell and/or ridges on top of the shell to stabilize them in soft mud or sand. A few species had long spines on either side that helped them to remain stable in faster currents or wave action. Bryozoans: Lacy and stick bryozoans, similar to those in our oceans today, were also found in ancient seas. These colonial animals produced “lacy” structures on hard surfaces or “stick” structures that stood up into the water column. Each animal has its own chamber within the colonial structure, from which it can extend feeding arms into the water column or retract them for protection. Bryozoans are passive filter feeders, collecting organic material and plankton from the water. Scientists sometimes refer to bryozoans as “moss animals” because when their arms are out feeding, they can look like moss covering a surface. Corals secrete a hard skeleton of calcium carbonate, which readily fossilizes under the right conditions. One type of coral found in the ancient marine layers of the canyon is the horn coral. This solitary coral lived on the sea floor, with the pointed end of its “horn” embedded in the soft sediment for stability and the wider end bearing a cup-like depression in which the animal lived. Corals have a polyp shape, similar to their relative the jellyfish. The polyp tucks its body into its skeleton and extends tentacles into the water column for feeding. Corals have a spiral of tentacles lined with nematocysts, or stinging cells, which can capture plankton floating by within reach. Sponges: Living attached to the sea floor, sponges are a colony of single-celled animals that act like a multi-cellular animal.
Each individual animal has a specific job, from filtering water for food to protection. Fossil sponges exist because of a unique skeletal structure. Microscopic silica or calcium carbonate spicules, or interlocking spines, provided structural support. When the sponge died, the spicules clumped together and formed a silica mass. When hardened into rock the mass became a chert nodule. Chert is harder than the limestone rock it is embedded in, causing the nodules to protrude from the rock as erosion occurs. With so many sponges in the ancient seas, layers like the Kaibab Limestone are actually more resistant to erosion because of the chert nodules. Trace fossils are left behind by the activities of ancient organisms. Burrows are a classic example of a trace fossil. Animals burrowed through the soft sediment at the bottom of the ancient seas. Under the right conditions, these burrows were preserved when they filled in with sediment. The animals are not usually present, but evidence of their behavior or activities is represented in the trace fossil. Several of the rock layers in the canyon are of terrestrial origin, including the Hermit Shale, Supai Group, Coconino Sandstone, and Surprise Canyon Formation. The mudstones and siltstones of the Hermit Shale and Supai Group were laid down by a meandering system of rivers and streams in a semi-arid climate about 280 million years ago. The sand grains of the Coconino Sandstone were deposited by wind across large coastal sand dunes about 275 million years ago. Each of these layers has unique trace fossils and environmental features preserved in the rock. The Surprise Canyon Formation may be the most fossiliferous formation with petrified wood and bone fragments as just a few examples of fossils found. In the red layers of the Hermit Shale, plant fossils can be found in the mudstone and siltstone left behind by an ancient river system. 
These fossils indicate a semi-arid climate, with drought-adapted seed ferns, horsetails, small pines, ginkgos, and a noticeable absence of true ferns. Most of the plant fossils are impressions, or trace fossils, with little of the plant material remaining. Oxygen was more abundant in the atmosphere during the time of the Hermit Shale deposition than it is today, probably 35% compared to the present-day 21%. Increased oxygen meant larger insects, explaining the eight-inch wingspan of a dragonfly wing impression fossil found in the Hermit Shale. Tracks: Within the dunes of wind-blown quartz sand of the Coconino Sandstone, tracks of ancient animals are the most common fossils. Even though no bones have been found, these tracks contain an abundance of information about the animals that made them. Scorpions, millipedes, isopods, spiders, and mammal-like reptiles once scurried over these dunes. Their footprints tell the stories of running or walking across the sand, traveling up or down the dunes, whether the animal dragged its tail, how big the animal may have been based on its stride length, whether it had an upright or sprawling posture, and what kinds of animals shared these dunes. The semi-arid climate and cool temperatures deep within canyon caves have combined to create a perfect environment for the preservation of more recent fossils. Pleistocene and Holocene remains have been unearthed within many of these caves, including 11,000-year-old sloth bones, dung and hair, California condor bones and eggshell fragments, and pack rat middens. These recent remains help scientists understand the more modern environmental conditions and climate changes that affected the plant and animal communities within Grand Canyon. All caves (and mine shafts), with the exception of the Cave of the Domes on Horseshoe Mesa, are currently closed to visitation.
This is for the safety of visitors, the protection of fragile resources such as fossils and unique cave formations, and the preservation of bat habitat. In the 1970s many fossils were lost due to careless visitors leaving a fire burning in Rampart Cave. These resources are irreplaceable and need all of us to help protect them. Grand Canyon fossil books are available from Grand Canyon Conservancy's online bookstore. Last updated: August 8, 2019
World Day for Laboratory Animals - 24 April
World Day for Laboratory Animals was instituted in 1979 and has been a catalyst for the movement to end the suffering of animals in laboratories around the world and to replace animal research with advanced scientific non-animal techniques. The suffering of millions of animals all over the world is commemorated on every continent. Although advanced methods are steadily replacing animal research, outdated laws still require animal tests before a product can be put on the market. Every year millions of animals suffer and die in experiments whose results can never be trusted. As a method of predicting likely effects in humans, animal research is flawed in three key areas:
- ‘Species differences’. Each species responds differently to substances, so animal tests are an unreliable way to predict effects in humans.
- Human diseases do not occur naturally in laboratory animals, so they have to be artificially created; they therefore differ from the human conditions they are attempting to mimic. This also affects results.
- Studies have shown that living in a laboratory environment can affect the outcome of an experiment, with test results differing due to the animals’ age, sex, diet and even their bedding material. So results vary from laboratory to laboratory.
Government and agency regulators, who are responsible for allowing products onto the market, are used to these standard animal tests and the estimates and ‘safety’ evaluations drawn from them. They are also aware of the potential for species differences, which may result in injury to people. Thus, a series of animal tests is followed by human trials, and this is where the problem of species differences can produce unexpected adverse reactions in people. Some examples of horrific and unexpected side effects in people, due to differences in reaction between species, include: BIA 10-2474 Drug Trial.
Clinical trials with a new drug, BIA 10-2474, went fatally wrong when it was given to human volunteers: one died, four showed evidence of brain damage, and it has since been reported that another lost his fingers and toes. The product had been tested on mice, rats, rabbits, dogs and monkeys for toxic effects on various organs as well as for reproductive toxicity. Monkeys were given doses approximately 75 times that given to the human volunteers. TGN1412 – an experimental drug was given to human volunteers and caused life-threatening reactions, yet monkeys had been given doses 500 times higher than the human volunteers and no side effects were seen. This disaster might have been avoided with the implementation of advanced technologies such as ‘microdosing’ with spectrometry analysis. Animal species differ from each other in a number of ways. For example:
- Non-human primates are distinct from us in the way they express genes in the brain. There are even big differences in gene expression between humans and chimps, although gene expression between chimps and other non-human primates is similar.
- Monkeys are frequently used in brain experiments because of their apparent similarity to humans, but they still differ from us in various ways, including the structure of their nervous systems and sense organs and, evidence suggests, in how they function.
- The way drugs break down and are excreted may be similar in monkeys and humans, but metabolism rates differ radically.
- The blood clotting mechanisms of dogs are different from those of humans.
- Guinea pigs can only breathe through their noses.
- Rats, mice and rabbits cannot vomit.
- Zebrafish – a species increasingly used to model humans – have only two heart chambers, whereas the human heart has four.
Species differences mean that animals used in research can give different results to humans:
- Aspirin causes birth defects in monkeys, but is widely used by pregnant women without the same effect.
- Parkinson's disease occurs naturally only in humans, so some of the main characteristics of the disease are not present in animals. Models are created by injecting toxins or by genetically modifying animals, but drugs found to protect the brain in animal models, including primate models, are not effective in humans.
- The anti-inflammatory drug Vioxx had unexpected effects on human patients after laboratory animal tests. Vioxx may have caused an estimated 88,000-140,000 extra heart attacks and up to 61,600 deaths.
- The cancer drug 6-azauridine can be used in humans for long periods, but in dogs small doses produce potentially lethal results in a few days.
- The cancer drug Teropterin was tested on 18,000 mice and used to treat acute childhood leukaemia, but treated children died more quickly than if they had not been treated at all.
- The heart drug Eraldin was thoroughly studied in animals and satisfied the regulatory authorities, yet none of the animal tests warned of its serious side effects in people, such as blindness, growths, stomach troubles, and joint pains.
- Opren, the anti-arthritis drug, was passed as safe in animal tests. It was withdrawn after causing more than 70 deaths and serious side effects in 3,500 other people, including damage to the skin, eyes, circulation, liver and kidneys.

Animal use is an outdated method

Advances in science and technology are evolving rapidly, providing advanced non-animal techniques that are faster, more accurate and of direct relevance to humans. A range of sophisticated, multidisciplinary techniques allows the study of the effectiveness and safety of substances on human tissue in vitro, as well as in humans. Non-animal methods also include computer analytics and human-based databases and models, which are better for science as well as for humans and animals. However, some animal researchers are resistant to moving away from the use of animals in research and towards non-animal alternatives.
Researchers at London's Institute of Neurology have been carrying out invasive brain experiments on monkeys for four decades. An investigation conducted by our campaign partner the National Anti-Vivisection Society in 1996 documented monkeys with electrodes inserted into their brains through their opened skulls to study the nerve connections between the brain and the hand muscles. These painful experiments continue today, while the same researchers also carry out studies in humans without causing such pain and suffering. Other researchers have shown that primate research is unnecessary, and that the same level of information can be obtained from human volunteers using non-invasive techniques such as MEG scanning. fMRI scanning also allows the study of neuronal networks in the brain in ways previously thought possible only with invasive methods. Neuroimaging is contributing to the detailed mapping of the human brain, providing unprecedented understanding of brain function and of the development of mental ill health and neurodegenerative diseases.

Regulations for the safety testing of all products were originally devised around animal methods, and all over the world this regulatory 'tick box' approach continues to this day. The fact that results vary between species and are inconsistent is well known but is, effectively, set aside; many tests continue simply to comply with regulations rather than for any scientific value. Product testing regulations require that such testing be carried out in at least two mammal species: a rodent species and a non-rodent "second species". Animals are burnt, blinded, scalded, poisoned, mutilated and starved, and substances are forced down their throats through tubes, so that the products we use every day can be called "safe". These may be things we use in our food (additives), in our homes (cleaners, laundry products and so on), in our cars, in our gardens, and the medicines we take; everything has been tested on animals.
Household products ingredient testing in the UK still includes the use of animals for "innovative benefit" or in line with European chemical testing rules. Animals are used to test ingredients for items such as detergents, cleaning products, air fresheners, toilet cleaners, paints, and other decorating materials, and tests on animals for garden products such as pesticides are still allowed. Each year in the UK around 3,000 dogs and more than 2,000 monkeys are subjected to painful experiments to test the safety of chemicals and drugs for human use. However, an analysis of animal toxicity data for over 3,000 drugs concluded that further data from the second species does not solve the problem of extrapolating results to humans.

Undercover investigations in the 1980s and 90s by ADI's campaign partner, the NAVS, of a number of laboratories carrying out safety testing on animals revealed dogs being force-fed weed killer, the dose given through a rubber hose pushed down each dog's throat directly into the stomach. Dogs were also subjected to Maximum Tolerated Dose studies, in which animals are dosed to a level where they show signs of toxicity, such as loss of weight and appetite, vomiting, diarrhoea and convulsions. The drug being tested was force-fed to restrained dogs before they were returned to their cages to vomit. Many years later, the NAVS again documented the same suffering in experiments testing drugs in dogs, with side effects such as foaming at the mouth, vomiting, bleeding from the gums and diarrhoea: decades of suffering despite the highly questionable validity of these tests.

Likewise in the US, animals are used to test the safety of drugs and other substances, and for cosmetics testing, now banned in the UK and Europe. Although exact figures for animal experiments are unknown, the latest statistics suggest over 800,000 animals, including more than 75,000 primates and nearly 65,000 dogs, are experimented on each year.
The actual figure, however, is likely to be millions more, as reporting omits the use of birds, rodents and farm animals, for which authorisation is not required. Advanced techniques which do not rely upon animals, and which concentrate on methods more relevant to humans, are the way forward.

Replacing the use of animals with advanced science

Animal tests can be replaced with advanced scientific methods that are faster and more relevant to people, and therefore safer; see more here.

How you can help during Lab Animal Week (April 21-28)
- Ask your MP to sign Early Day Motion #2228: Developing Innovative Science – Better for Animals TODAY, which calls on the UK Government to become a leader in the development of non-animal science. You can check if they have signed here and contact them here. The EDM backs our Declaration for Advanced Science, whose signatories pledge to support measures that accelerate the replacement of live animal procedures. Find out more about this here.
- In the US, ask your Representatives to support measures accelerating the move away from animal models towards more human-relevant research.
- Make a donation.
- Get involved in, and share, our social media campaigns throughout April.
- Organise a fundraising event – hold a bake sale, or do a sponsored walk. More ideas here!
- Write to your local newspaper; blog about it; share on Facebook.

With your support we are making progress: bans on the use of chimpanzees and wild-caught monkeys in EU labs; the phasing out of the capture of wild monkeys to stock the factory farms; the cosmetics testing ban; stopping the Colombian hunters trapping owl monkeys for malaria experiments; most airlines refusing to transport monkeys for research; the replacement of animals in teaching; and restrictions on certain painful experiments across Europe. But thousands of monkeys and dogs are still being subjected to painful safety tests.
Microtia is a congenital abnormality in which the external part of a child's ear is underdeveloped and usually malformed. The defect can affect one ear (unilateral) or both ears (bilateral). In about 90 percent of cases, it occurs unilaterally. Microtia occurs in four different levels, or grades, of severity:
- Grade I. Your child may have an external ear that appears small but otherwise mostly normal, though the ear canal may be narrowed or missing.
- Grade II. The bottom third of your child's ear, including the earlobe, may appear normally developed, but the top two-thirds are small and malformed. The ear canal may be narrow or missing.
- Grade III. This is the most common type of microtia observed in infants and children. Your child may have small, underdeveloped parts of an external ear present, including the beginnings of a lobe and a small amount of cartilage at the top. With grade III microtia, there is usually no ear canal.
- Grade IV. The most severe form of microtia is also known as anotia. Your child has anotia if there is no ear or ear canal present, either unilaterally or bilaterally.

Microtia usually develops during the first trimester of pregnancy, in the early weeks of development. Its cause is mostly unknown but has sometimes been linked to drug or alcohol use during pregnancy, genetic conditions or changes, environmental triggers, and a diet low in carbohydrates and folic acid. One identifiable risk factor for microtia is the use of the acne medication Accutane (isotretinoin) during pregnancy. This medicine has been associated with multiple congenital abnormalities, including microtia. Another possible risk factor is maternal diabetes that predates the pregnancy: mothers with diabetes appear to be at higher risk of giving birth to a baby with microtia than other pregnant women. For the most part, microtia doesn't appear to be a genetically inherited condition.
In most cases, children with microtia don't have any other family members with the condition. It appears to happen at random, and has even been observed in sets of twins in which one baby has the condition and the other doesn't. Although most occurrences of microtia aren't hereditary, in the small percentage of inherited cases the condition can skip generations. Also, mothers with one child born with microtia have a slightly increased (5 percent) risk of having another child with the condition.

Your child's pediatrician should be able to diagnose microtia through observation. To determine the severity, your child's doctor will order an exam with an ear, nose, and throat (ENT) specialist and hearing tests with a pediatric audiologist. It's also possible to diagnose the extent of your child's microtia through a CAT scan, although this is mostly done when a child is older. The audiologist will evaluate your child's level of hearing loss, and the ENT will confirm whether an ear canal is present or absent. Your child's ENT will also be able to advise you regarding options for hearing assistance or reconstructive surgery.

Because microtia can occur alongside other genetic conditions or congenital defects, your child's pediatrician will also want to rule out other diagnoses. The doctor may recommend an ultrasound of your child's kidneys to evaluate their development. You may also be referred to a genetic specialist if your child's doctor suspects other genetic abnormalities may be at play. Sometimes microtia appears alongside, or as part of, other craniofacial syndromes. If the pediatrician suspects this, your child may be referred to craniofacial specialists or therapists for further evaluation, treatment, and therapy.

Some families opt not to intervene surgically. If your child is an infant, reconstructive surgery of the ear canal can't be done yet, and if you're uncomfortable with surgical options, you can wait until your child is older.
Surgeries for microtia tend to be easier for older children, as there's more cartilage available to graft. It's possible for some children born with microtia to use nonsurgical hearing devices. Depending on the extent of your child's microtia, they may be a candidate for this type of device, especially if they're too young for surgery or if you're postponing it. Hearing aids may also be used if an ear canal is present.

Rib cartilage graft surgery

If you opt for a rib graft for your child, they'll undergo two to four procedures over the span of several months to a year. Rib cartilage is removed from your child's chest and used to create the shape of an ear. It's then implanted under the skin at the site where the ear would have been located. After the new cartilage has fully incorporated at the site, additional surgeries and skin grafts may be performed to better position the ear. Rib graft surgery is recommended for children 8 to 10 years of age. Rib cartilage is strong and durable, and tissue from your child's own body is less likely to be rejected as implant material. Downsides to the surgery include pain and possible scarring at the graft site, and the rib cartilage used for the implant will feel firmer and stiffer than ear cartilage.

Medpor graft surgery

This type of reconstruction involves implanting a synthetic material rather than rib cartilage. It can usually be completed in one procedure and uses scalp tissue to cover the implant material. Children as young as age 3 can safely undergo this procedure, and the results are more consistent than those of rib graft surgeries. However, there's a higher risk of infection and of loss of the implant due to trauma or injury, because it isn't incorporated into the surrounding tissue. It also isn't yet known how long Medpor implants last, so some pediatric surgeons won't offer or perform this procedure.

Prosthetic external ear

Prosthetics can look very real and can be worn with either an adhesive or a surgically implanted anchor system.
The procedure to place implant anchors is minor, and recovery time is minimal. Prosthetics are a good option for children who haven't been able to undergo reconstruction or for whom reconstruction wasn't successful. However, some individuals have difficulty with the idea of a detachable prosthetic, and others may have skin sensitivity to medical-grade adhesives. Surgically implanted anchor systems can also raise your child's risk of skin infection, and prosthetics do need to be replaced from time to time.

Surgically implanted hearing devices

Your child may benefit from a cochlear implant if their hearing is affected by microtia. The attachment point is implanted into the bone behind and above the ear. After healing is complete, your child will receive a processor that can be attached at the site. This processor helps your child hear sound vibrations by stimulating the nerves in the inner ear. Vibration-inducing devices may also help enhance your child's hearing. These are worn on the scalp and magnetically connected to surgically placed implants; the implants connect to the middle ear and send vibrations directly into the inner ear.

Surgically implanted hearing devices often require minimal healing at the implantation site. However, some side effects may be present. These include:
- tinnitus (ringing in the ears)
- nerve damage or injury
- hearing loss
- leaking of the fluid that surrounds the brain

Your child may also be at a slightly increased risk of developing skin infections around the implant site. Some children born with microtia may experience partial or full hearing loss in the affected ear, which can affect quality of life. Children with partial hearing loss may also develop speech impediments as they learn to talk. Interaction may be difficult because of the hearing loss, but there are therapy options that can help.
Deafness requires an additional set of lifestyle adaptations and adjustments, but these are very possible and children generally adapt well. Children born with microtia can lead full lives, especially with appropriate treatment and any needed lifestyle modifications. Talk to your medical care team about the best course of action for you or your child.
The Elements of Art

We will be exploring the 7 elements, or tools, that we use to create art. These are line, shape, color, space, texture, form and value. Students will be introduced to each of these through major artists, print references, video series, internet sources and demonstrations. The main goal is to have them recognize these elements and the importance of their use when creating fine art.
Time To Complete

I Can Statements
I will know my exploration of personal narrative writing is of high quality when:
- I can identify the elements that define personal narrative writing, such as main idea or theme, conflict, and vivid details.

Suggestions for Assessing Student Readiness to Move Forward:
- Confer with students to check their understanding of the elements of personal narrative writing.
- Provide students an anchor text and ask them to code or otherwise identify the elements of personal narrative writing.

Conduct a mini-lesson on narrative essays, describing the key organizational structure, form, and approach, as well as noting what a narrative essay is not. It's not merely a story or a series of random events, but a description of a significant event in which a lesson was learned.
- Provide several exemplars and ask students to select 2-3 to examine and create a list of similarities in structure, craft, and form. Or use a jigsaw format, where students work in groups, and each group reads texts about the genre of narrative essays and examines a different exemplar. Then reorganize the groups so that each new group has one representative from each former group, and each representative shares their learning with the others in the new group, to create a collective understanding of the genre.

Coding the Text: Ask students to read two exemplars and code them with key features that you want them to include in their narrative writing (this may vary by grade). For example, codes for lower elementary may be:

Codes for upper elementary may include:
- VD=Vivid Detail
- ML=Moral or Lesson

Codes for middle school might include:
- RA=Rising Action
- CL=Climax; Resolution

Codes for secondary school might extend to:
- LD=Literary Device (Students would name the device, such as LD-Irony or LD-Suspense.)
Resources

Describing the Genre of Narrative Essays

Resources for Exemplar and Sample Narrative Essays by Famous Authors
- http://www.pps.k12.or.us/files/curriculum/Writing_Binder_Grade_4_Section_3.pdf (see Lois Lowry PN-15)

Resources for Exemplar and Sample Narrative Essays by Students
Whether healthy or diseased, human cells exhibit behaviors and processes that are largely dictated by growth factor molecules, which bind to receptors on the cells. For example, growth factors tell cells when to divide, move, and die (a process known as apoptosis). When growth factor levels are too high or too low, or when cells respond irregularly to their directions, many diseases can result, including cancer.

"It is believed that cells respond to growth factors at extreme levels of sensitivity," said University of Illinois at Urbana-Champaign Bioengineering Associate Professor Andrew Smith. "For example, a single molecule will result in a major change in cell behavior."

In a recent paper published in Nature Communications, Smith reported the invention of a new technology platform that, for the first time, digitally counts the number of growth factor molecules entering an individual cell. Previously, researchers inferred growth factor binding from how the receiving cells responded when growth factor molecules were introduced.

"We showed the first direct cause-and-effect relationships of growth factors in single cells," he said. "We expect the outcomes to lead to a new understanding of cell signaling, how cells respond to drugs, and why cell populations become resistant to drugs, particularly toward improved treatments for cancer."

Smith's technology platform tags each growth factor molecule with a single engineered (10 nanometer) infrared fluorescent quantum dot, which can then be viewed using a three-dimensional microscope. In the study, the team counted how many epidermal growth factor (EGF) molecules bound to human triple-negative breast cancer cells that had been pre-patterned on island-like surfaces. EGF molecules typically signal cell division and lead to tissue growth, and numerous cancers have mutations in their EGF receptors.
“We used quantum dots as the fluorescent probe because they emit a lot more light compared to other conventional fluorescent probes such as organic dyes, and we can tune their wavelengths by changing their chemical composition,” said Bioengineering doctoral student Phuong Le, the lead author of the paper. “In our study, we demonstrated that quantum dots emitting light in the near-infrared wavelength allowed the most accurate counting of growth factors binding to cells.” According to Le, the team also treated the breast cancer cells with quantum dot-tagged EGF in the absence and presence of pharmaceutical drugs that inhibit EGF signaling in cells. “We found that the amount of EGF binding is inversely proportional to drug efficacy,” Le said. “This finding is significant as it means that signaling molecules present in the cancer cells’ tumor — a place where signaling molecules are often misregulated — can enhance the cancer cells’ resistance to pharmaceutical agents.”
African Penguins weigh from 2.1 kg to 3.7 kg and stand approximately 60 cm in height. They have a black stripe on their chests and a black chin. Like all penguins, African Penguins have a large head, a short, thick neck, a streamlined shape, short, flipper-like wings, and a wedge-shaped tail. They use their webbed feet for swimming. Penguins have a lighter color on the belly and a darker color on the back; this coloration helps camouflage them when they are in the water, hiding them from predators (from below, predators see the light belly; from above, they see the dark back). African Penguins have pink sweat glands above their eyes. The warmer the weather, the more blood is sent to these glands to be cooled by the surrounding air, making them pinker. Their shiny, waterproof feathers help keep their skin dry. Penguins moult annually over a period of about three weeks, losing their old feathers and growing new ones; during this time they cannot swim and do not eat. Moulting season runs from October to February, peaking in December, after which the birds head out to sea to feed. Juveniles have a light belly and blue-grey backs, and they lack the white face markings and black breast band of the adults. Juveniles have bare red patches above the eyes and a few randomly placed black spots on the chest and belly. Penguins are flightless birds.
My Home Page

Text Gradient Chart

Welcome to the wonderful world of reading. davids.victoria AT blvs DOT org

A gradient of text is an ordering of books according to a specific set of characteristics. Gradient means ascending or descending in a uniform or consistent way, so the levels of a gradient are defined in relation to each other. As you go up the gradient of text, the texts get harder; conversely, as you go down, they get easier. At each level of the gradient, there is a cluster of characteristics that helps you think about the texts at that level and how they support and challenge readers. The following gradient shows approximate corresponding grade levels.

Grade levels are not the important factor when selecting books for students. Instead, you must start where students are in their development of reading abilities, which may or may not be their grade level. The grade-level designations are useful, however, because students whose instructional levels are below their grade level need intensive daily instruction that moves them into increasingly challenging texts. Use the gradient to expand the student's breadth of experience with different types of texts and a range of content, authors, and formats. Consider the developmental appropriateness of the content as students approach levels beyond their present grade. This gradient is a large collection of titles categorized by level of difficulty. It is meant to support the effectiveness of the reading program and is to be used as a tool for the instructor.
For examples of titles at these levels, please use the following links: Level A, Level B, Level C, Level D, Level E, Level F, Level G, Level H, Level I, Level J, Level K, Level L, Level M, Level N, Level O, Level P, Level Q, Level R, Level S, Level T, Level U, Level V, Level W, Level X, Level Y, Level Z.

If you want to see how your child reads orally in grade-level materials, go to the following link: 3-Minute Reading Assessments. To see how reading develops, go to Readers and how they change over time. If you are interested in knowing how a readability level is assigned to text, go to Determining Text Difficulty.

If your child has difficulty with grade-level reading material, please contact me either through district email: vdavids AT usd458 DOT org, or virtual email: davids.victoria AT blvs DOT org, or call 913-724-1038 to schedule a time to diagnose and plan a reading program to help your child succeed.

Pinnell, G. and Fountas, I. (2002). Leveled Books for Readers, Grades 3-6. Portsmouth, NH: Heinemann.
Worm Farming and Composting

In this biology lab, students will investigate the role that worms play in the decomposition of organic materials. Continuing the lesson will show interdependence within an environment: the fertile soil created by the worms will create a rich environment for plant growth; the plant growth will attract insects, which in turn pollinate the plants and create fruit that will serve as compost for the cycle to continue.
- Students will learn vocabulary associated with decomposition and composting.
- Students will determine how to measure success in a composting pile.
- Students will record their observations and measure the success of the Red Worm Farm by tallying the number of worms produced over a period of time and their length.

Context for Use

This lab activity is appropriate for a primary classroom with approximately 20-25 students. Allow 45 minutes for this activity. Today's lesson sets up the environment for the Red Worms; we will not be adding them for a couple of weeks, so now is a perfect time to order your Red Worms. The organic material that you will need may include (but is not limited to) apple cores, fruit and vegetable peels, and leafy materials (ends of carrots, lettuce, etc.).
Resource Type: Activities: Classroom Activity
Grade Level: Primary (K-2)

Description and Teaching Materials

The following chapter books would make good extensions to a worm theme: The Word Eater by Mary Amato, Katie Kazoo: Free the Worms by Nancy Krulik, How to Eat Fried Worms by Thomas Rockwell, or Worms Eat My Garbage by Mary Appelhof.

Lay out the following materials:
- candle with a lighter
- clear tape
- a large and a small poke (nail, diaper pin, safety pin, upholstery needle, etc.)
- Sharpie
- safety blade or scissors
- awl and hammer
- one 2 L bottle
- the base from another 2 L bottle, or a storage container that can be inverted as a cover for the 2 L bottle, with its lid used as an overflow tray
- a brown paper bag or one 25 cm by 40 cm sheet of brown paper for a light block
- 15-20 red worms
- worm bedding: shredded newspaper, shredded leaves, peat moss, and straw
- worm food: organic leftovers from your kitchen, yard, or plant material

Bring out the worm bedding and worm food. Ask students what they think we can do with these items. When they are close, ask them what the other items have to do with it. Set up a chart with the headings Observe, Know, Want to Know, Hypothesize, Learn, and Further Investigating Questions. Record worm facts learned from the read-alouds and previous knowledge. Ask students if there is anything else they would like to know about worms, and jot these down under W (Want to Know).

1. Remove the label from the 2 L bottle and cut the bottle off about 10 cm below the neck (approximately a centimeter or two from where the width begins). You can either cut another base from a 2 L bottle as a cover, or use a storage container that can be inverted as a cover for the 2 L bottle, with its lid used as an overflow tray.
2. Heat a nail over a candle (be careful not to burn yourself). Around the base, poke four drainage holes, 5 mm apart, with the hot nail poke. Heating the nail makes it easier to puncture the thicker plastic at the base of the 2 L container. These holes will serve as drainage. a.
To create a nail poke: cut a soft tree branch about 3 inches long and 1 inch wide, cut the head off a nail at a sharp angle, and poke the head end of the nail into the soft core of the wood.
3. Heat a small poke over a candle. Poke two rows of eight air holes, 3 mm apart, with the small hot poke.
4. To create a dark, temporary casing, tape a dark color of construction paper (rising about 4 cm above the top of the 2 L bottle) loosely enough that you can easily pull it up. Remove the casing only when observing or taking measurements.
5. Cut about 2 pages of newsprint into 0.5 cm strips (sending them through a shredder will save some headaches). Cut these strips in half crosswise.
6. Add about ¾ cup of water to the bedding. Fluff the bedding vigorously until the strips are well separated. Add a small handful of soil to the bedding (for the microorganisms that will help break down the paper).
7. Fill the 2 L bottle 2/3 full with the bedding/soil mix. The pH of the mix needs to be in the range of 6.5-8.5; if the mix is slightly too acidic, mix in some powdered lawn lime or finely crushed eggshells. The mix must be quite moist but not saturated.
8. Add the organic food to the top of the bedding. The organic material may include (but is not limited to) apple cores, fruit and vegetable peels, and leafy materials (ends of carrots, lettuce, etc.). Cover with 1-2 cm of bedding.
9. Keep the temperature around 68-70 degrees Fahrenheit (about 20-21 °C).
10. Explain that the worms feed by eating the material and that they breathe through their skin; for these reasons, the environment must be kept moist. Food pieces should be no larger than 1-2 cm. Place the food on the bedding and cover it with about 1-2 cm of moist bedding. a. Before adding more food, always check whether the previous food is being eaten. Worms eat 2-3 times their mass in food every few days, so it is better to underfeed than to overfeed them.
If you add too much food, the environment will begin to smell sour and black fly larvae may appear; if need be, add more bedding to dilute the problem. The container should smell like freshly turned soil.
11. Continue hypothesizing about the role that worms play in soil. Ask the students how we can measure whether our composting is a success, and what we will need to do.
12. Teach students how to measure the pH. We will be measuring it every week or so, recording the data, and plotting it on a graph. a. Worm wastes are referred to as castings, and they fertilize the soil. As worms move through the earth, they carry organic matter and leaves down and bring deep soil to the surface. They are the earth's natural aerators, moving water and air to plant roots. You can expect the population to double within 3 months.

Teaching Notes and Tips

Based on activities from: Bottle Biology by Mrill Ingram, pp. 18-21, and Ugulano, Ronnie, "Vermicomposting: How to Raise Earthworms," Pontiac High School, 25 Jun 2009.

Both of my resources say to order red worms. Due to the wide variety of earthworms, those in your yard may not be the best choice: some worms prefer to burrow and be left alone, have slow reproduction rates, or release a stench. I had a difficult time locating red worms at local pet stores and bait shops, so your best bet is to order them online. While I am not sure, I am assuming that Red Wigglers and Red Worms are either the same thing or closely related. A pinch of finely ground cornmeal will help with reproduction, as will rabbit scat.

Fun extension activities:
- Using the soil to plant a flowering plant that will need to be pollinated by insects
- Obtaining and displaying a Venus Fly Trap to explain pollination
- Using different earthworms to see which does best in the compost
- Testing whether compost or soil provides a better environment for baby worms to be born
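The lesson's two quantitative checks, the 6.5-8.5 pH range for the bedding and the expectation that the worm population roughly doubles every 3 months, can be turned into a small worked example for the class data log. This is only an illustrative sketch, not part of the original activity; the function names and the starting count of 20 worms are hypothetical.

```python
# Illustrative helpers for the worm-farm data log.
# Assumptions (from the lesson text above): the population roughly
# doubles every 3 months, and the bedding pH should stay in 6.5-8.5.

def projected_population(start_count, months):
    """Estimate the worm population, assuming it doubles every 3 months."""
    doublings = months // 3          # whole doubling periods elapsed
    return start_count * 2 ** doublings

def ph_in_range(ph):
    """Return True if a bedding pH reading falls in the 6.5-8.5 range."""
    return 6.5 <= ph <= 8.5

if __name__ == "__main__":
    # Starting with 20 red worms, after 6 months (two doublings):
    print(projected_population(20, 6))   # -> 80
    print(ph_in_range(7.0))              # -> True
    print(ph_in_range(6.0))              # -> False: mix in lime or crushed eggshells
```

Older students could extend this by plotting their weekly pH readings and worm counts against these projections to see how closely the real bin follows the simple doubling model.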
Grade 2: Life Science: Interdependence in Living Systems- Natural systems have many components that interact to maintain the living system.
Meningitis
Meningitis is an inflammation of the meninges, the membranes that surround the brain and spinal cord. Meningitis can be either bacterial or viral. Bacterial meningitis is usually caused by Streptococcus pneumoniae, Neisseria meningitidis, or Haemophilus influenzae. Symptoms of bacterial meningitis can include sudden onset of fever, headache, neck pain or stiffness, painful sensitivity to strong light, vomiting (often without abdominal complaints), and irritability; not all of these symptoms may be present. The disease can quickly progress to lethargy, unresponsiveness, convulsions, and death, so prompt medical attention is extremely important. Viral meningitis is serious but rarely fatal in people with a normal immune system. The symptoms generally persist for 7-10 days, followed by complete recovery.

Sinusitis
Sinusitis is a bacterial infection that can be caused by a number of different bacteria. It is an infection of one or all of the four sinuses, the hollow cavities that are situated around the nose. A sinus infection occurs when these cavities, instead of staying empty and air-filled, become filled with pus, which produces an ideal environment for bacteria to grow. Common symptoms include yellow-green nasal discharge, nasal congestion, facial pain that may extend down into your teeth, fever, cough, and a generalized headache and ill feeling. Several antibiotics can be used to treat this infection.

Bronchitis
Bronchitis is an inflammation of the upper airways of the lung, close to the base of the trachea (the tracheobronchial tree). It often follows infections such as the common cold, influenza, respiratory syncytial virus (RSV), and whooping cough, and can be caused by different organisms including Mycoplasma pneumoniae, Chlamydia pneumoniae, Haemophilus species, or Streptococcus pneumoniae. Bronchitis typically begins as an upper respiratory infection with headache, sore throat, and cough. Treatment is usually rest, adequate fluid intake, and a vaporizer to loosen the "junk" that accumulates in the lungs.
Antibiotics, such as tetracyclines or erythromycin, are prescribed when it is a bacterial infection, although there are others that can be used.

Measles
Measles is a viral infection that spreads very quickly among people who have not had the vaccine (part of the MMR vaccine). This disease affects primarily the throat, airways, lungs, and skin. It takes 1-2 weeks after exposure before the disease becomes active. Initial symptoms include a high fever, coughing, runny nose, and red eyes. These are followed by the appearance of tiny white spots in the mouth and throat, and then a rash that spreads from the forehead all the way down the body. Because measles is a viral infection, it cannot be treated with antibiotics. The illness usually lasts around 10-14 days. For more information and appropriate diagnosis, see your doctor. Of note, the MMR vaccine provides excellent protection against the virus.

Mumps
Mumps is another viral infection that is extremely contagious. The virus concentrates in the person's saliva, and anyone standing near an infected person can become infected. Generally, it takes between two and three weeks for symptoms to appear. These symptoms generally include fever, chills, headache, and loss of appetite. After 1 or 2 days, the salivary glands on either side of the mouth may become swollen, hard, and painful. Some ear pain and painful chewing may also occur. Because the infection is viral, it cannot be treated with antibiotics, and it usually clears up in around 10 days. For more information and appropriate diagnosis, see your doctor. Of note, the MMR vaccine, of which mumps is a part, provides excellent protection against the virus.

Rubella (German Measles)
Rubella is a viral disease spread by airborne droplets or close contact with an infected person. Most cases are so mild that they are hardly noticeable. General ill feeling and tiredness are the most common symptoms. The rash is similar to that of measles but doesn't spread as much.
Fever, headaches, and mild joint stiffness and pain are other common complaints. The virus resolves on its own and there is no treatment. Of note, the MMR vaccine, of which rubella is a part, provides excellent protection against this virus.

The Common Cold
The common cold is a viral infection that usually involves the mouth, nose, and lungs. Common signs and symptoms include sneezing, coughing, runny nose, and general ill feeling. A fever is rarely present, and nasal discharge generally remains clear. Treatment is generally symptomatic; since the cold is caused by a virus, antibiotics will not help. Colds generally resolve within a week.

Croup
Croup is an illness of the voice box that most commonly affects infants and children ages three months to three years. It usually starts with a cold, cough, and sore throat. The most characteristic signs of croup are a barking cough, noisy breathing, and a hoarse voice. Croup attacks usually occur in the evening or during the night. The best thing the patient can do is stay calm. A steamy bathroom (from a hot shower) or a trip out into the cold night air can help relieve the coughing. Since croup is generally caused by a virus, antibiotics will not help; however, there are symptomatic treatments, particularly for breathing difficulties. For more information and appropriate diagnosis and treatment, talk to your doctor.

Cellulitis
Cellulitis is a bacterial infection of soft tissue, usually the skin. It can occur anywhere on the body but is most likely to be found on the arms, legs, or face. It is not contagious and can be easily cured with a course of antibiotics; however, if left untreated it could get into your bloodstream and cause a more serious infection. Symptoms around the area of infection include redness, soreness, and swelling. If the infection has spread or entered the bloodstream, you could also experience fever, chills, or sweating. For appropriate diagnosis and treatment, see your doctor.
Otitis Media (Middle Ear Infection)
Otitis media (OM) is a bacterial infection within the middle ear. OM can occur at any age but is most common between 3 months and 3 years of age. Generally, the bacteria causing the infection have migrated from the nose up the eustachian tube and into the middle ear. The first symptom is usually an earache, followed by fever, possible hearing loss, nausea, vomiting, and diarrhea. Since this is a bacterial infection, antibiotics are prescribed, and the symptoms should improve within a couple of days. See your doctor for appropriate diagnosis and treatment.

Pneumonia
Pneumonia, by definition, is an inflammation of the lower respiratory tract caused principally by either a bacterial or viral infection or by chemical irritation. Pneumonia can be described as either "typical" or "atypical". Typical pneumonia has an abrupt onset of symptoms, including fever, chills, rapid and difficult breathing, rapid heart rate, and a cough that produces sputum. Atypical pneumonia has a gradual onset of symptoms that are general and could apply to a variety of other illnesses, including fever, headache, general ill feeling, difficulty breathing, and a dry cough. Treatment of pneumonia depends on the organism causing it; the first concern is to identify whether it is viral, bacterial, or chemical in origin. Often, viral pneumonia will lead to a secondary bacterial infection. If the pneumonia is bacterial, antibiotics will be used. For all types of pneumonia, symptomatic treatment can include the use of acetaminophen to lower fever, bronchodilators such as albuterol to help open the lungs, and maintaining adequate fluid intake. For more information and appropriate diagnosis and treatment, see your doctor.

Respiratory Syncytial Virus (RSV)
RSV is a viral infection that most commonly occurs in the winter, is associated with acute respiratory distress, and is occasionally fatal, particularly in the very young.
Difficulty breathing, coughing, and wheezing are the most common symptoms associated with it, and their severity varies from case to case, ranging from mild to severe. Treatment is aimed at resolving the symptoms, particularly breathing difficulties. Occasionally, ribavirin (Virazole®), an antiviral drug, may be used to help speed recovery. For more information and appropriate diagnosis and treatment, see your doctor.

Whooping Cough (Pertussis)
Whooping cough is a very contagious bacterial disease caused by Bordetella pertussis. It is best known for the "whoop" caused by a spasmodic cough that ends in a prolonged, high-pitched, crowing inspiration. Whooping cough has three stages. It begins with sneezing, watery eyes, loss of appetite, and coughing at night. The second stage occurs after 10-14 days and involves long episodes of hard coughing, each ending in the "whoop". Excessive mucus may be coughed up during this time, and vomiting may also occur because of the large amount of mucus. The convalescent stage begins at about the 4th week as the coughing and vomiting diminish. The illness generally lasts around 7 weeks, but the coughing may return during any upper respiratory infection for several months afterwards. Antibiotics are generally needed. For appropriate diagnosis and treatment, see your doctor.

Roseola
Roseola is a viral infection that occurs in infants or very young children and is characterized by a very high fever and a distinctive rash. What is unusual about this infection is that despite a very high fever, the child is usually alert and active. Roseola generally resolves itself within a week. Treatment is symptomatic, such as acetaminophen for the fever.

Infectious Mononucleosis (Mono)
Mono is caused by the Epstein-Barr virus. The virus is found in saliva and mucus and can be transmitted through coughing, sneezing, and kissing. It is diagnosed by a blood test.
Signs and symptoms include fatigue, fever, sore throat, and enlarged lymph nodes. Generally, the patient presents with a history of fatigue, fever, and general ill feeling for over a week. The fatigue is worst in the first two to three weeks of the illness, with the fever peaking most afternoons and evenings. Enlargement of the spleen occurs about 50% of the time. Because of this, the patient should avoid contact sports or heavy lifting for about two months (or longer, based on your doctor's recommendations) because of the danger of rupturing the spleen, which can be fatal. Since this is a viral infection, antibiotics will not help; treatment is generally supportive, including rest. For more information and appropriate diagnosis and treatment, see your doctor.

Strep Throat
Strep throat is a bacterial infection caused by Streptococcus species (hence "strep"). It is characterized by an extremely sore throat and a thin whitish membrane on the back of the mouth at the base of the throat, along with a very high fever and pain on swallowing. A diagnosis can be made by performing a rapid strep test. Strep throat is treated with antibiotics, and symptoms should improve within a couple of days. For more information and appropriate diagnosis and treatment, see your doctor.

Tetanus (Lockjaw)
This acute disease is caused by Clostridium tetani, a bacillus that is found in the intestines of animals, including humans, where it is harmless. The bacillus enters a wound (usually a puncture wound) through contamination with soil, road dust, or feces. This anaerobic pathogen favors necrotic tissue and/or foreign bodies. Most cases occur within 14 days of entry. While it is uncommon in industrialized countries, this disease occurs worldwide and affects all ages. The bacillus produces a neurotoxin that causes painful muscle contractions in the cheek and neck muscles, hence the common name of lockjaw, and sometimes involves muscles of the trunk.
This disease is fatal in 30-90% of cases, depending on age and therapy. Prevention is by routine immunization with tetanus toxoid and subsequent booster shots. Infant immunization is normally given along with the diphtheria and pertussis vaccines as DPT. Booster immunizations should be given every 10 years in the absence of injury; if injured, a booster immunization may be given on the day of injury.
Essential tremor is a neurological disorder that causes uncontrollable shaking of the hands, arms, head, and other parts of the body. The lower part of the body is rarely affected, and both sides of the body may not be affected in the same way. The disorder does not usually need treatment unless it is severe enough to impede daily functioning and make the person dependent on others. However, patients suffering from this condition may often find it difficult to complete simple tasks like buttoning a shirt or writing. Essential tremor also affects the psychological make-up of a person: depression and anxiety are often associated with it, and the frustration of not being able to control the tremors may make a person withdraw from friends and family. People suffering from essential tremor also have a higher than normal risk of developing conditions such as Parkinson's disease. The exact cause of this disorder is not yet known, and hence it cannot be prevented. Genetic mutations have been credited with causing this condition in many cases, but the gene responsible for it has not yet been identified. Thus, essential tremor can be passed down from parent to child; however, the severity of the tremors and the age at which symptoms first become visible may vary. While some people show signs of tremors in their early teen years, others develop tremors only in their late 40s. These tremors can also be caused by abnormal electrical activity in the thalamus, which controls and coordinates muscle activity. Tremors caused by old age, excessive consumption of alcohol, emotional distress, etc. are not categorized as essential tremor; however, aging may make essential tremor more frequent and pronounced. Blood, urine, and other lab tests do not help in diagnosing this condition. A diagnosis is usually made on the basis of the family medical history, a physical examination, and a complete neurological exam.
In order to rule out other triggers for the tremors, a doctor may ask for thyroid tests. At present this condition cannot be cured, but medication may help reduce the symptoms and improve the patient's quality of life.

The main symptom of primary orthostatic tremor is the occurrence of a rapid tremor affecting both legs while standing. A tremor is an involuntary, rhythmic contraction of various muscles. Orthostatic tremor causes feelings of "vibration", unsteadiness, or imbalance in the legs. The tremor associated with primary orthostatic tremor has such a high frequency that it may not be visible to the naked eye, but it can be palpated by touching the thighs or calves, by listening to these muscles with a stethoscope, or by electromyography. The tremor is position-specific (standing) and disappears partially or completely when an affected individual walks, sits, or lies down. In many cases, the tremor becomes progressively more severe and feelings of unsteadiness become more intense. Some affected individuals can stand for several minutes before the tremor begins; others can only stand momentarily. Eventually, affected individuals may experience stiffness, weakness and, in rare cases, pain in the legs. Orthostatic tremor, despite usually becoming progressively more pronounced, does not develop into other conditions or affect other systems of the body. Some affected individuals may also have a tremor affecting the arms. In one case reported in the medical literature, overgrowth of the affected muscles (muscular hypertrophy) occurred in association with primary orthostatic tremor. The exact cause of primary orthostatic tremor is unknown (idiopathic). Some researchers believe that the disorder is a variant or subtype of essential tremor; others believe it is a separate entity. Some individuals with primary orthostatic tremor have had a family history of tremor, suggesting that in these cases genetic factors may play a role in the development of the disorder.
However, more research is necessary to determine the exact underlying cause(s) of primary orthostatic tremor. Primary orthostatic tremor affects females slightly more frequently than males. Because many cases of primary orthostatic tremor go unrecognized or are misdiagnosed, the disorder is believed by some to be under-diagnosed, making it difficult to determine its true frequency in the general population. Tremors, involuntary quivering, or trembling movements can occur in association with many disorders. They may occur at any age and may be rhythmic or intermittent. Tremors mainly occur in disorders of the central nervous system, especially in disorders of the cerebellum or basal ganglia. Examples of cerebellar diseases include tumors of the cerebellum, multiple sclerosis involving the cerebellum, or a degenerative disease such as spinocerebellar degeneration. Examples of disorders of the basal ganglia include Parkinson's disease, Wilson's disease, and many other rare and common disorders. Tremor may also occur as a result of anxiety or medication, or be of unknown cause (idiopathic). Orthostatic myoclonus is a rare condition that is similar to primary orthostatic tremor, but myoclonus refers to sudden, involuntary jerking of a muscle or group of muscles caused by muscle contraction or relaxation. Orthostatic myoclonus is characterized by slowly progressive unsteadiness when standing that is relieved by walking or sitting. Some affected individuals experience a bouncing stance and recurrent falls. More research is necessary to determine whether orthostatic myoclonus and primary orthostatic tremor are the same disorder or similar, yet distinct, disorders. In rare cases, orthostatic myoclonus may be associated with an underlying neoplasm. Essential tremor is a common movement disorder characterized by an involuntary rhythmic tremor of a body part or parts, primarily the hands, arms, and neck.
In many affected individuals, upper limb tremor may occur as an isolated finding. However, in others, tremor may gradually involve other anatomic regions, such as the head, voice, and tongue, leading to a quiver in the voice or difficulties articulating speech.
Mosquito (from the Spanish meaning little fly) is a common insect in the family Culicidae (from the Latin culex meaning midge or gnat). Mosquitoes resemble crane flies (family Tipulidae) and chironomid flies (family Chironomidae), with which they are sometimes confused by the casual observer. Mosquitoes go through four stages in their life cycle: egg, larva, pupa, and adult or imago. Adult females lay their eggs in water, which can be a salt-marsh, a lake, a puddle, a natural reservoir on a plant, or an artificial water container such as a plastic bucket. The first three stages are aquatic and last 5–14 days, depending on the species and the ambient temperature; eggs hatch to become larvae, then pupae. The adult mosquito emerges from the pupa as it floats at the water surface. Adult females can live up to a month – more in captivity – but most probably do not live more than 1–2 weeks in nature. Mosquitoes have mouthparts which are adapted for piercing the skin of plants and animals. They typically feed on nectar and plant juices. In some species, the female needs to obtain nutrients from a "blood meal" before she can produce eggs. There are about 3,500 species of mosquitoes found throughout the world. In some species of mosquito, the females feed on humans, and are therefore vectors for a number of infectious diseases affecting millions of people per year.
goldenrod, any of about 150 species of weedy, usually perennial herbs that constitute the genus Solidago of the family Asteraceae. Most of them are native to North America, though a few species grow in Europe and Asia. They have toothed leaves that usually alternate along the stem and yellow flower heads composed of both disk and ray flowers. The many small heads may be crowded together in one-sided clusters, or groups of heads may be borne on short branches to form a cluster at the top of the stem. Some species are clump plants with many stems; others have only one stem and few branches. Canadian goldenrod (S. canadensis) has hairy, toothed, lance-shaped leaves and hairy stems; it is sometimes cultivated as a garden ornamental. Solidago virgaurea of Europe, also grown as a garden plant, is the source of a yellow dye and was once used in medicines. The goldenrods are characteristic plants in eastern North America, where about 60 species occur. They are found almost everywhere—in woodlands, swamps, on mountains, in fields, and along roadsides—and form one of the chief floral glories of autumn from the Great Plains eastward to the Atlantic.
November 25, 2013

"These small habitats, known as microhabitats, include tree holes, logs, and plants that exist within the rainforest strata, and they provide cooler temperatures within them than the air that surrounds them," explains lead researcher Brett Scheffers of James Cook University. Scheffers and his team studied whether such microhabitats would truly provide cool refuges for animals during extreme weather by looking at 15 species of amphibians and reptiles on Luzon Island in the Philippines. "Microhabitats reduced mean temperature by 1-2 degrees Celsius and reduced the duration of extreme temperature exposure by 14-31 times," the researchers write. They found that not only were microhabitats cooler, but temperatures within them also fluctuated less. Moreover, even as temperatures outside the microhabitats increased, the insides of microhabitats warmed considerably more slowly. "Microhabitats have extraordinary potential to buffer climate and likely reduce mortality during extreme climate events," the scientists add, an assessment that agrees with other studies across the tropics. Nonetheless, microhabitats are meant as short-term refuges; they could be used to survive periodic heatwaves, but not climatic upheaval. If average temperatures rise too high, species will be forced to migrate to a different climate, such as to higher altitudes, in order to survive, instead of depending on microhabitats for short-term refuge. However, scientists fear that many species will not be able to migrate quickly enough and will be pushed to extinction. "Our study is a cautionary tale. Biodiversity is resilient and adaptive; however, with future forecasts predicting annual temperature increases of up to 4-6 degrees Celsius and, in some areas, extreme temperatures that surpass 40 degrees Celsius, there are simply no habitats cool enough to safeguard species from such extremes," notes Scheffers.
The world's governments have pledged to keep global average temperatures from rising more than 2 degrees Celsius above pre-industrial levels; however, experts say nations are currently moving too slowly to hit that target.

A glass frog in Costa Rica. Photo by: Rhett A. Butler.

Citation: Brett R. Scheffers, David P. Edwards, Arvin Diesmos, Stephen E. Williams, Theodore A. Evans. (2013) Microhabitats reduce animal's exposure to climate extremes. Global Change Biology.

AUTHOR: Jeremy Hance joined Mongabay full-time in 2009. He currently serves as senior writer and editor. He has also authored a book.
Youth violence refers to harmful behaviors that can start early and continue into young adulthood. The young person may be a victim, an offender, or a witness to the violence. While violence impacts people of all ages, it disproportionately affects youth and is the second leading cause of death for young people between the ages of 10 and 24. Because multiple factors contribute to the development of violence, a comprehensive preventive approach is needed. Youth violence prevention also requires collaboration among justice, public safety, education, public health, and human service agencies, with the support of community leaders, businesses, and faith-based organizations.
The world of computing is in transition. As chips become smaller and faster, they dissipate more heat, which is energy that is entirely wasted. By some estimates, the difference between the amount of energy required to carry out a computation and the amount that today's computers actually use is some eight orders of magnitude. Clearly, there is room for improvement. So the search is on for more efficient forms of computation, and there is no shortage of options. One of the outside runners in the race to take the world of logic by storm is reversible computing. By that, computer scientists mean computation that takes place in steps that are time reversible: if a logic gate changes an input X into an output Y, then there is an inverse operation which reverses this step. Crucially, these must be one-to-one mappings, meaning that a given input produces a single unique output. These requirements for reversibility place tight constraints on the types of physical systems that can do this kind of work, not to mention on their design and manufacture. Ordinary computer chips do not qualify: their logic gates are not reversible, and they also suffer from another problem. When conventional logic gates produce several outputs, some of these are not used, and the energy required to generate them is simply lost. These are known as garbage states. "Minimization of the garbage outputs is one of the major goals in reversible logic design and synthesis," say Himanshu Thapliyal and Nagarajan Ranganathan at the University of South Florida. Today, they propose a new way of detecting errors in computations and say that their method is ideally applicable to reversible computing and, what's more, naturally reduces the number of garbage states that a computation produces. Before we look at their approach, let's quickly go over a conventional method of error detection, which simply involves doing the calculation twice and comparing the results.
If they are the same, then the computation is considered error free. This method has an obvious limitation: if the original computation and its duplicate both make the same error, the error goes undetected. Thapliyal and Ranganathan have a different approach that gets around this problem. If a reversible computation produces a series of outputs, then the inverse computation on these outputs should reproduce the original states. So their idea is to perform the inverse computation on the output states; if this reproduces the original states, then the computation is error free. And because this relies on reversible logic steps, it naturally minimizes the number of garbage states produced along the way. There are one or two caveats, of course. The first is that nobody has succeeded in building a properly reversible logic gate, so this work is entirely theoretical. But there are a number of computing schemes that have the potential to work like this. Thapliyal and Ranganathan point in particular to the emerging technology of quantum cellular automata and show how their approach might be applied. The beauty of this approach is that it has the potential to be dissipation-free. So not only would it use far less energy than conventional computing, it needn't lose any energy at all. At least in theory. At first glance, that seems to contradict one of the foundations of computer science: Rolf Landauer's principle that the erasure of a bit of information always dissipates a small amount of energy as heat. This is the basic reason that conventional chips get so hot. But this principle need not apply to reversible computing, because if no bits are erased, no energy is dissipated. In fact, there is no known limit to the efficiency of reversible computing: if a perfectly reversible physical process can be found to carry and process the bits, then computing could become dissipation-free. For the moment, that's a wild dream.
But in the next few years, as quantum processes begin to play a larger part in computation of all kinds, we may well hear much more about reversible computing and its potential to slash the energy wasted in computing. Ref: arxiv.org/abs/1101.4222: Reversible Logic Based Concurrent Error Detection Methodology For Emerging Nanocircuits
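The inverse-computation check described above can be sketched in a few lines of Python. This is an illustration only, not the paper's actual circuit-level scheme: it uses the Toffoli (CCNOT) gate, a standard reversible gate that happens to be its own inverse, so re-applying the gate to the outputs should recover the inputs, and any mismatch signals an error.

```python
# Toffoli (CCNOT) gate: flips the target bit c when both controls a, b are 1.
# It is a one-to-one (bijective) mapping on 3-bit states and is self-inverse.
from itertools import product

def toffoli(a, b, c):
    return a, b, c ^ (a & b)

# Sanity check: the gate is a bijection on all 8 possible input states,
# i.e. no two inputs share an output (the reversibility requirement).
outputs = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
assert len(outputs) == 8

# Error detection by inverse computation: apply the inverse (here, the gate
# itself) to the observed outputs and compare with the original inputs.
def check(inputs, observed_outputs):
    return toffoli(*observed_outputs) == inputs

state = (1, 1, 0)
out = toffoli(*state)           # (1, 1, 1)
print(check(state, out))        # True: computation verified
print(check(state, (1, 0, 1)))  # False: a corrupted output is detected
```

Because the check runs the computation backwards rather than duplicating it forwards, it reuses the same reversible steps and so produces no extra garbage states, which is the property the paper exploits.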
Oakfield Junior School's curriculum is derived from the National Curriculum. Our curriculum is balanced, broad, and based on our School Vision. Our curriculum planning is detailed, inclusive, and differentiated to meet all pupils' talents, skills, and abilities. Each class teacher delivers a broad curriculum which includes the core subjects. They are supported by specialist curriculum leaders who are able to advise and work alongside them within the classroom. Help and advice is also available from supporting consultants, advisory teachers, and other school improvement advisors. At Oakfield we believe that Literacy is fundamental to all areas of learning as it unlocks the wider curriculum. We believe that it is vital for children to be able to explain their thinking, debate their ideas, and read and write at a level which will help them to develop their language skills further. Here at Oakfield, these skills are taught through discrete daily English lessons; however, they encompass the whole curriculum, and children are continually encouraged to apply them in all subjects. We provide the children with experiences that promote the development of critical and creative thinking as well as competence in listening and talking, reading, writing, and the personal, interpersonal, and team-working skills which are so important in life and in the world of work. Speaking and listening is at the heart of everything we do at Oakfield. The children are taught to talk confidently to one another, talk to an audience, and talk in role. We provide opportunities for the children to develop these speaking and listening skills across the whole curriculum. This in turn provides them with the necessary skills to take turns when speaking, listen to one another's opinions, challenge these opinions, and ask and respond to appropriate questions.
The teaching of English ensures that we provide children with the necessary skills to write in a range of genres for different audiences and purposes. Our approach ensures that children can say what they want to write confidently before they put pen to paper. We use a variety of teaching approaches to engage the children with the chosen genre and provide them with the necessary skills to write. These include speaking and listening games, acting in role, questioning and discussion, and other forms of media such as film clips. Grammar, vocabulary, punctuation, handwriting, spelling, and composition are all part of literacy. Spelling is taught through a 'little and often' approach three times a week at the start of English lessons. Pupils are then taught to apply the rules that they have learnt across the curriculum. We ensure that the children learn two fundamental skills that enable them to read confidently: word recognition (the ability to decode the words on the page) and comprehension (understanding the text that they have read). Through daily guided reading sessions, the teacher models, questions, and encourages dialogue between the children in order to develop these skills. At Oakfield, we focus on the core aims of the National Curriculum for maths: fluency, reasoning, and problem solving. Pupils are taught in ability sets within each year group to ensure their learning needs can be met. Progress is tracked regularly, which allows us to provide additional support to any children falling behind their targets. Enrichment groups are also offered for those children working beyond the expectations for their year group.
Parents may also like to refer to our Calculations Booklet when supporting their child at home. With the new National Curriculum, the emphasis is on the more traditional methods of calculation; however, at Oakfield we teach for understanding, rather than by rote, making use of practical equipment (outlined in the booklet) to help with this. We believe that competence in mental calculation and number facts (such as times tables) is essential to success in maths, with specific strategies taught in class and regular homework activities set around this. The skills covered are then applied to 'real life' investigations, with cross-curricular links exploited as often as possible.

From our solar system to viruses, and from inventing burglar alarms to understanding the human body, science at Oakfield is rooted in curiosity and practical experiences. Through this curriculum we enthuse our students, developing their scientific skills and thinking, imagination and creativity. We foster links between science and the other subjects, enabling our children to see science, and their skills, work in other contexts. Science is incredibly important at Oakfield - it is at the heart of all that is around us and all that we interact with. The children we teach today will shape the future of industry, business, medicine and innovation. A theory lesson and a practical session mean our children receive a minimum of 2 hours of science every week. We have extensive grounds, including a woodland, enabling many of our lessons to be out of the classroom. Children approach practical investigations independently, in pairs, in groups or as a whole class - we encourage dialogue and the evaluation of methods and results, whether they are right or wrong. With 5 topics and practical skills per year group, covering areas of physics, chemistry and biology, Oakfield pupils appreciate the value of science and fully engage with it.

History and Geography
History and Geography are taught together through a topic-based approach.
Creativity in the design of the curriculum means that meticulous planning is in place to ensure purposeful learning that is directed to achieve an objective. Creativity is also central to the learner, as pupils are required to think and behave imaginatively to generate original outcomes which are also secure in the National Curriculum 2014 Programmes of Study for History & Geography. Critical thinking is developed along a line of enquiry in meaningful contexts, e.g. Relocating a Lost Tribe in the Amazon (Year 6); Crime & Punishment in the Middle Ages (Year 4); How Henry VIII's desire for a son changed religion as well as daily life (Year 5). As History & Geography deal with real-world learning, they relate directly to thinking about the actions we can take in the real world (sustainability) and how we can live in harmony (SMSC) in an ever-changing world.

Art & DT
Even though Art and DT are classed as foundation subjects, they are given great importance at Oakfield and regarded as one of our strengths. The development of our brand-new Art and DT studio has further enhanced the experiences that we offer the children. Children, regardless of their ability, are nurtured and encouraged to explore their creativity in many different media. These include pencil, charcoal, ink, paint (including watercolours, acrylics and poster paint), oil and chalk pastels, textiles, clay, modroc and wire modelling. Various art works from different cultures and countries are studied and may be used as a stimulus, or as the basis for a child's own picture in the style of the artist. Some of the work produced is truly outstanding, but our aim is for each child to feel proud of what they have achieved. DT skills range from woodwork and structures to sewing and textiles, as well as food technology.

We have two music specialists who provide weekly music lessons in our very well equipped music room. The music teaching at Oakfield is both practical and fun.
We aim to inspire all children to take part in as many musical activities as possible and to develop their musical talents and skills. Music lessons give the children many opportunities to develop their musical skills of singing, playing instruments, listening to music, improvising and composing, so that by the time they leave Oakfield they have performed and played music in many different group settings. Children enjoy learning about and playing music from different cultures and times in history; often the music is related to topics the children are learning about in the classroom. All Year 3 children learn to play the recorder, and in Years 4-6 many children take up the opportunity to learn a brass instrument in small groups. The children reach very high standards and are able to play as part of a brass group in our school concerts, and many take music exams. We encourage our children to learn instruments, and Oakfield has a wide variety of instrumental teachers who teach at school. Our music clubs, which include choir, jazz band, brass band, recorder clubs and a steel pan group, are all very popular. Every opportunity is taken to perform at school concerts, Christmas plays, and local community and national events, e.g. O2 Young Voices concerts, carol singing at Polesden Lacey House and the Mid-Surrey Music Festival at The Dorking Halls. This way every child has the opportunity to participate and experience the fun of being involved in practical music making.

Physical Education takes place in the hall, on the MUGA or on the playing field and involves gymnastics, dance, swimming (at the Leatherhead Leisure Centre), outdoor adventurous activities (such as orienteering) and games. We cover a variety of games: rounders, cricket, badminton, hockey, football, rugby, netball, cross-country running and track and field events. Please note that we do not cancel outdoor lessons just because it is a bit wet or cold.
It is important that your children have clothing appropriate to the time of year to wear during these lessons. We participate in many local sports competitions. We believe the children gain a great deal from working alongside others in team games and in competing against others, both in team and individual competition. We have a fine record of achievement in local competitions. We also attend sports festivals to give the less active children the opportunity to take part in a new sport. We have a large range of clubs which provide the pupils with a great opportunity to participate in extra-curricular PE activities.

French is taught by a specialist teacher throughout the school. The topics introduced in Year 3 (by means of songs and games) are revisited and extended to enable the retention of basic vocabulary. The emphasis on fun gives children the confidence to speak, sing and later to read and write in French. Mini films about French customs heighten pupils' intercultural understanding, whilst grammatical references reinforce their knowledge of English grammar. Oakfield pupils develop good listening skills, knowledge about language and language-learning strategies, which facilitate their transition to the learning of both French and other languages at secondary school.

At Oakfield we aim to develop pupils' knowledge and understanding of Christianity and other religious traditions, along with non-religious world views. This comprehensive approach means we can discuss complex and engaging issues such as the meaning of life, the power of faith and the concept of right and wrong. Pupils not only explore their own beliefs, but they also learn from different beliefs, values and traditions, improving their ability to show respect and sensitivity to others. The development of these key personal skills further enhances the happy and safe environment offered by Oakfield. Children learn in a cross-curricular way and develop and celebrate the use of Oakfield's 5Cs.
All pupils are encouraged to learn through a varied range of activities, including discussions, debates, drama, research tasks, art and individual reflective tasks.

Personal Social Health Education
During PSHE, pupils focus on five social and emotional aspects of learning: self-awareness, managing feelings, motivation, empathy and social skills. This approach develops qualities that help to promote positive behaviour and effective learning. It is a strong foundation from which children can prepare for adult life. Pupils learn through group and paired discussion, drama, circle time, and group problem-solving challenges. In addition, pupils learn in a cross-curricular way to ensure they develop a robust understanding of personal safety (including pedestrian safety, Bikeability, drug and alcohol awareness, basic first aid, and safe relationships), e-safety, sex and relationships education, and managing finances. The skills that result from PSHE not only reinforce Oakfield's 5Cs, but they also create confident individuals who will make a positive contribution to their community.
Teaching Gifted Youth about Persistent Problems with Plastics Pollution

The recent news story about the devastating amount of plastics that has washed up onto Henderson Island, in a remote area of the Pacific Ocean, is shocking to see. However, this problem provides a great opportunity for parents and educators to engage their gifted and creative students in being part of the solutions to such crippling examples of human-inflicted damage to our planet. We know that gifted and talented students desire curriculum that is authentic and real, and that they have the abilities to engage in sophisticated problem solving, analysis, and ingenuity. The Henderson Island problem is one that these students should be aware of and should have the opportunity to pursue, whether just on the level of this one significant example, or through interests that this example might generate. Here are some potential questions that students could pursue to stimulate their thinking about Henderson Island as well as similar problems around the world.
- How is wildlife endangered by the massive use of plastics in our world?
- How is the environment affected by pollution?
- What solutions can you think of to stop, reduce, and clean up pollution?
- How do weather patterns, ocean movements, and Earth's rotation affect the transporting of debris around the world?
- What kinds of careers are involved in protecting biodiversity, sustaining the environment, and creating Earth-friendly plastics?
- What is my role in protecting the environment during my lifetime? Why should I care?
You can doubtless think of many other questions as well. Students may find that engaging in such problem solving might lead to related interests in other fields, such as scientific research, data analysis, meteorology, environmental engineering, or anthropology, to name a few. Engaging in such complex, real, and current content helps prepare our gifted students for the problems they will face now and later, as adults.
Let's give them that practice and that opportunity to engage now, as well as later. All the best,
Silicon, a lustrous, grayish-black chemical element that has both metallic and nonmetallic properties. Silicon is hard and brittle. It is a semiconductor; that is, its electrical conductivity is intermediate between that of a conductor and that of an insulator. Chemically, silicon is relatively inert at ordinary temperatures. It resists attack by all acids except hydrofluoric acid, and is not easily oxidized by air. At high temperatures, silicon can combine with many other elements. Silicon is the second most abundant element on earth (after oxygen). It makes up about 28 per cent of the earth's crust. In nature, silicon is always found combined with other elements. It usually occurs as silica (silicon dioxide) or as silicates, compounds containing silicon, oxygen, and one or more metals. Most common rocks, soils, and clays consist mainly of silicates. Silica occurs chiefly as quartz, a common igneous rock-forming mineral and the chief constituent of sand and sandstone. Silica is found in many plants, and is necessary to build strong cell walls. The shells of diatoms and the skeletons of certain sponges consist mainly of silica. Trace amounts of silica occur in such animal parts as feathers and hair. Silicon (from the Latin silex, flint) was first isolated in 1823 by the Swedish chemist Jöns J. Berzelius. Most commercial silicon is produced by heating sand with coke in an electric arc furnace. The silicon thus obtained, about 98 per cent pure, is used primarily for alloying purposes. Silicon of higher purity is usually prepared by heating silicon tetrachloride or trichlorosilane with hydrogen gas. Silicon is widely used in alloys. Copper, when alloyed with silicon, becomes stronger and easier to weld; aluminum, easier to cast; and alloy steels, harder and stronger. Ferrosilicon alloys are used as deoxidizers in steelmaking and as reducing agents in preparing such metals as magnesium and chromium.
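The two production steps described above correspond to standard textbook reactions. The following equations are supplied here for reference (they are not part of the original article, but are the conventional forms of the arc-furnace and hydrogen-reduction processes):

```latex
\mathrm{SiO_2 + 2\,C \longrightarrow Si + 2\,CO}
\qquad \text{(reduction of sand by coke in the arc furnace)}
```

```latex
\mathrm{SiCl_4 + 2\,H_2 \longrightarrow Si + 4\,HCl}
\qquad \text{(higher-purity silicon from silicon tetrachloride and hydrogen)}
```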
Highly pure silicon crystals are used as semiconductors in such devices as transistors, power rectifiers, and solar batteries. Silicon minerals and compounds have many commercial uses. Quartz is used as a flux in metallurgy, and in the manufacture of glass, enamels, mortar, and many other substances. Many silicates are important ore minerals. Some are used to make cement, brick, pottery, porcelain, electrical insulation, and heat-resistant fabrics. Certain silicates (such as emerald and topaz) and some forms of silica (such as opal and amethyst) are highly prized gems. Diatomite, a soft rock composed of fossilized diatom shells, is used as a filter for liquids, as a mild abrasive, and as heat insulation. Silica gel, a porous form of silicon dioxide, is used as a drying agent. A typical use is for keeping packaged instruments dry by taking up any moisture in the package. Silicon carbide, an extremely hard substance, is used as an abrasive for grinding, cutting, and polishing metal. Sodium silicate, also called water glass, is used as a coating to preserve eggs and as an industrial adhesive. The silicones are organic silicon compounds that are important oils, resins, and rubber-forming substances. Symbol: Si. Atomic number: 14. Atomic weight: 28.0855. Specific gravity: 2.33. Melting point: 2,570 F. (1,410 C.). Boiling point: 4,271 F. (2,355 C.). Silicon has three stable isotopes: Si-28 to Si-30. It belongs to Group IV-A of the Periodic Table and may have a valence of +2, +4, or -4.
[Photos: archaeology at Jamestowne; the Jamestowne flag overlooking the James River]

In order to understand the convergence of these three cultures, you must first understand how they all came together. The specific tribe the European settlers encountered at Jamestown was the Powhatan. This tribe and its ancestors had been living in eastern Virginia for thousands of years when the settlers first arrived in 1607. Twelve years after the Europeans arrived to join the Natives in the New World, in 1619, Africans began to be kidnapped and shipped to the New World for purposes of forced labor. These three groups coming together would form the beginnings of what became American culture.

[Photos: scenic shot of Jamestowne; cannon at Jamestowne; remains of a building; statue in honor of John Smith; reconstruction work at Jamestowne]
A heart transplant is an operation in which the diseased heart in a person is replaced with a healthy heart from a deceased donor. Transplants are done as a life-saving measure for end-stage heart failure when medical treatment and less drastic surgery have failed. Because donor hearts are in short supply, patients who need a heart transplant go through a careful selection process. They need to be sick enough to need a new heart, yet healthy enough to receive it. Most patients referred to a heart transplant center have end-stage heart failure. Of these patients, close to half have heart failure as a result of coronary artery disease. Others have heart failure caused by hereditary conditions, viral infections of the heart, or damaged heart valves and muscles due to factors such as the use of certain medicines and alcohol, and pregnancy. Most patients considered for a heart transplant have exhausted attempts at less invasive treatments and have been hospitalized a number of times for heart failure. Patients who are eligible for a heart transplant are placed on a waiting list for a donor heart. Policies on distributing donor hearts are based on the urgency of need, the organs that are available for transplant, and the location of the patient who is receiving the heart. Organs are matched for blood type and size of donor and recipient.

Other important notes
Heart transplant surgery usually takes about 4 hours. The amount of time a heart transplant recipient spends in the hospital will vary with each person. Once home, patients must carefully check and manage their health status. Patients will work with the transplant team to protect the new heart by watching for signs of rejection, managing the transplant medicines and their side effects, preventing infections, and continuing treatment of ongoing medical conditions.
Risks of heart transplant include failure of the donor heart, complications from medicines, infection, cancer, and problems that arise from not following lifelong health care plans. Lifelong health care includes taking multiple medicines on a strict schedule, watching for signs and symptoms of complications, keeping all medical appointments, and stopping unhealthy behaviors such as smoking. Survival rates for people receiving a heart transplant have improved over the past 5–10 years—especially in the first year after the transplant. About 88 percent of patients survive the first year after transplant surgery. After the surgery, most heart transplant recipients (about 90 percent) can come close to resuming their normal daily activities. Source: "Heart and Vascular Diseases." Disease and Conditions Index. The National Heart, Lung, and Blood Institute. The National Institutes of Health.
Children's teeth begin forming before birth. As early as 4 months, the first primary teeth, or baby teeth, erupt through the gums. All 20 of the primary teeth usually appear by age 3, although their pace and order of eruption varies. Permanent teeth begin appearing around age 6. This process will continue until approximately age 21.

ORAL HEALTH FOR CHILDREN
To help ensure oral health and a lifetime of good oral care habits:
- Limit children's sugar intake
- Make sure children get enough fluoride, either through drinking water or as a treatment at the dentist's office
- Teach children how to brush and floss correctly
- Supervise brushing sessions and help with flossing, which can be a challenge for small hands

MAJOR OBSTACLES TO CHILDREN'S ORAL HEALTH
"Baby bottle tooth decay"
– Wipe gums with gauze or a clean washcloth and water after feeding. When teeth appear, brush daily with a pea-sized amount of fluoride toothpaste
– Put the child to bed with a bottle of water, not milk or juice
Thumb-sucking
– Not a concern until about 4 years of age or when permanent teeth appear; after this time, it could cause dental changes
White spots on teeth
– As soon as the first tooth appears (at about 6 months), begin cleaning the child's teeth daily and schedule a dental appointment
Fear of the dentist
– Hold the child in the parent's lap during the exam
Difficulty creating an oral care routine
– Involve the whole family: brush together at the same time each day to create a good habit
Love of sweets
– Give children healthy snack options, like carrots and other fresh vegetables, plain yogurt, and cheese
Stains from antibiotics
– Speak to the pediatrician before any medication is prescribed
Braces
– Make sure that teens brush well around braces, using a floss threader to remove all food particles
Oral accidents from sports
– Encourage children to wear mouthguards during sports

The following are key preventive measures to preserve oral health through childhood:
Fluoride treatments to strengthen tooth enamel and
resist decay. This may include fluoride supplements in areas where drinking water is not optimally fluoridated. Be sure to ask your dentist about supplements to determine if they are needed. Dental sealants to provide a further layer of protection against cavities. Sealants are made of plastic and are bonded to the teeth by the dental team. A fun oral care regimen to help encourage children to brush more regularly. Kid's Crest® Cavity Protection is a fluoride toothpaste with Sparkle Fun flavor just for kids. And the Oral-B® Stages® Kids' Power Toothbrush makes brushing fun, with popular Disney characters and a patented oscillating Powerhead. Ask your dental professional how these Crest & Oral-B products can help your child:
- Kid's Crest Cavity Protection
- Oral-B Stages Kids' Power Toothbrushes
Motion control is a sub-field of automation, encompassing the systems or sub-systems involved in moving parts of machines in a controlled manner. The main components involved typically include a motion controller, an energy amplifier, and one or more prime movers or actuators. Motion control may be open loop or closed loop. In open-loop systems, the controller sends a command through the amplifier to the prime mover or actuator, and does not know whether the desired motion was actually achieved. Typical systems include stepper motor or fan control. For tighter control with more precision, a measuring device may be added to the system (usually near the end motion). When the measurement is converted to a signal that is sent back to the controller, and the controller compensates for any error, it becomes a closed-loop system. Typically the position or velocity of machines is controlled using some type of device such as a hydraulic pump, linear actuator, or electric motor, generally a servo. Motion control is an important part of robotics and CNC machine tools; however, in these instances it is more complex than when used with specialized machines, where the kinematics are usually simpler. The latter is often called General Motion Control (GMC). Motion control is widely used in the packaging, printing, textile, semiconductor production, and assembly industries. Motion control encompasses every technology related to the movement of objects. It covers every motion system, from micro-sized systems such as silicon-type micro induction actuators to macro-sized systems such as a space platform. But, these days, the focus of motion control is the special control technology of motion systems with electric actuators such as DC/AC servo motors.
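The open-loop versus closed-loop distinction above can be sketched in a few lines of code. The following is a minimal illustrative simulation, not any real controller's API: the gain, set point, and idealized actuator response are assumptions. Each cycle, the controller compares the fed-back position against the set point and commands the actuator in proportion to the error.

```java
// Minimal closed-loop (proportional) position-control simulation.
// An open-loop system would issue commands without ever reading 'position' back.
public class ClosedLoopDemo {
    // Runs the feedback loop and returns the final measured position.
    static double run(double target, double kp, int cycles) {
        double position = 0.0;                  // feedback from the measuring device
        for (int i = 0; i < cycles; i++) {
            double error = target - position;   // controller compares set point to feedback
            double command = kp * error;        // amplifier scales the control signal
            position += command;                // idealized actuator response (assumption)
        }
        return position;
    }

    public static void main(String[] args) {
        System.out.printf("final position = %.4f%n", run(100.0, 0.5, 50));
    }
}
```

With kp = 0.5 the error halves every cycle, so the position converges on the 100-unit set point; a load disturbance injected mid-run would likewise be corrected, which an open-loop system has no way to do.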
Control of robotic manipulators is also included in the field of motion control because most robotic manipulators are driven by electrical servo motors and the key objective is the control of motion. The basic architecture of a motion control system contains:
- A motion controller to generate set points (the desired output or motion profile) and (in closed-loop systems) close a position or velocity feedback loop.
- A drive or amplifier to transform the control signal from the motion controller into energy that is presented to the actuator. Newer "intelligent" drives can close the position and velocity loops internally, resulting in much more accurate control.
- A prime mover or actuator such as a hydraulic pump, pneumatic cylinder, linear actuator, or electric motor for output motion.
- In closed-loop systems, one or more feedback sensors such as optical encoders, resolvers or Hall effect devices to return the position or velocity of the actuator to the motion controller in order to close the position or velocity control loops.
- Mechanical components to transform the motion of the actuator into the desired motion, including: gears, shafting, ball screws, belts, linkages, and linear and rotational bearings.
The interface between the motion controller and the drives it controls is very critical when coordinated motion is required, as it must provide tight synchronization. Historically the only open interface was an analog signal, until open interfaces were developed that satisfied the requirements of coordinated motion control, the first being SERCOS in 1991, which has since been enhanced to SERCOS III. Later interfaces capable of motion control include EtherNet/IP, Profinet IRT, Ethernet Powerlink, and EtherCAT. Common control functions include:
- Velocity control.
- Position (point-to-point) control: There are several methods for computing a motion trajectory.
These are often based on the velocity profile of a move, such as a triangular profile, trapezoidal profile, or an S-curve profile.
- Pressure or force control.
- Impedance control: This type of control is suitable for environment interaction and object manipulation, such as in robotics.
- Electronic gearing (or cam profiling): The position of a slave axis is mathematically linked to the position of a master axis. A good example of this would be a system in which two rotating drums turn at a given ratio to each other. A more advanced case of electronic gearing is electronic camming. With electronic camming, a slave axis follows a profile that is a function of the master position. This profile need not be linear, but it must be a well-defined function of the master position.

See also:
- Match moving, for motion tracking in computer-generated imagery
- Mechatronics, the science of computer-controlled smart motion devices
- Control system
- PID controller, proportional-integral-derivative controller
- Motion controller
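The velocity profiles and the electronic gearing/camming relationships described above can be expressed compactly in code. The sketch below is illustrative only; the function names and numeric parameters are assumptions for this example, not part of any motion-control product's API.

```java
import java.util.function.DoubleUnaryOperator;

public class MotionProfiles {
    // Velocity at time t for a symmetric trapezoidal profile:
    // accelerate at 'accel' up to 'vmax', cruise, then decelerate to rest.
    static double trapezoidalVelocity(double t, double accel, double vmax, double total) {
        double ramp = vmax / accel;                        // duration of each ramp
        if (t < 0 || t > total) return 0.0;                // outside the move
        if (t < ramp) return accel * t;                    // acceleration phase
        if (t > total - ramp) return accel * (total - t);  // deceleration phase
        return vmax;                                       // constant-velocity cruise
    }

    // Electronic gearing: slave position is a fixed ratio of master position.
    static double gearedSlave(double masterPos, double ratio) {
        return ratio * masterPos;
    }

    // Electronic camming: slave position is an arbitrary cam profile,
    // i.e. a well-defined function of master position.
    static double cammedSlave(double masterPos, DoubleUnaryOperator camProfile) {
        return camProfile.applyAsDouble(masterPos);
    }

    public static void main(String[] args) {
        // 2 units/s^2 accel, 2 units/s cruise, 5 s move: each ramp lasts 1 s.
        System.out.println(trapezoidalVelocity(0.5, 2, 2, 5)); // ramping up
        System.out.println(trapezoidalVelocity(2.5, 2, 2, 5)); // cruising
        System.out.println(gearedSlave(10.0, 0.5));            // 2:1 gear-down
    }
}
```

A triangular profile is simply the degenerate case where the cruise phase has zero length (total == 2 * vmax / accel); an S-curve profile would additionally smooth the corners of the ramps.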
war, laws of

Modern Laws of War
There is no convention on the laws of war to which all the major powers of the world have acceded, and many conventions provide that their terms shall be inoperative if any of the belligerents is not a signatory or if an enemy commits a violation. Despite such provisions, many nations have adopted the laws of war, and the conditions of warfare have undoubtedly been ameliorated, particularly in the treatment of prisoners and the consideration shown to the sick and wounded. The care of the sick and the wounded is facilitated by making medical personnel noncombatants and by clearly marking hospitals and similar installations, thus sparing them from attack. Conventions restricting the use of certain weapons probably have not materially mitigated the horrors of war. For the most part, only those weapons that are of limited military use, e.g., poison gas, have been effectively banned, while efforts to prohibit militarily effective weapons, e.g., atomic weapons and submarine mines, have not succeeded. The laws of war have had as their objective the protection of civilian populations by limiting all action to the military. A distinction was made between combatants and noncombatants, the former being defined in terms of traditional military units. Thus combatants must have a commander responsible for subordinates, wear a fixed and recognizable emblem, carry arms openly, and follow the laws of war. But the development of aerial bombing in World War I and of guerrilla forces dependent on civilians has tended to make all enemy territory part of the theater of operations. New practices and categories have yet to be worked out to protect civilian centers adequately. Civilians in territory occupied by the enemy are, however, supposed to be entitled to certain protections. There may not be imprisonment without cause, and fines may not be levied upon a whole civilian population for individual offenses.
Private property also receives limited protection, and it may not be confiscated for military use unless fair compensation is paid. Special rules govern such actions against property as the taking of a prize at sea or in port, the confiscation of contraband, and the use of the blockade. Property destroyed in the course of action against the enemy is, of course, not compensable. Places of religious, artistic, or historical importance should not be attacked unless there is military need. No direct diplomatic relations exist between belligerents, but neutral diplomats are often given custody of property in enemy territory and are entrusted with negotiations. In the field of combat, passports, safe-conducts, and flags of truce permit consultations between opposing commanders. Hostilities may even be totally suspended by an armistice, which is often the prelude to surrender. Violations of the laws of war have probably occurred in all major conflicts; a nation confident of victory will frequently not be deterred even by fear of reprisals. After World War II the military and civilian leaders of the Axis Powers who were responsible for violations were tried for war crimes, and some Americans were tried for war crimes in the Vietnam War (see My Lai incident).
Researchers have designed an invisible "wall" that stops oily liquids from spreading and confines them to a certain area. The outer shell of a droplet of oil on a surface behaves like a thin skin, an effect known as the liquid's surface tension, which allows the droplet to hold its shape like a small dome. The new development, reported in the journal Langmuir, should help researchers studying these complex molecules and has future implications for the guided delivery of oil and the effective blockage of oil spreading. "Our work is based on micro/nanoelectromechanical systems, or M/NEMS, which can be thought of as miniaturized electrical or mechanical structures that allow researchers to conduct their work on the micro/nanoscopic level," says Jae Kwon, associate professor of electrical and computer engineering at the University of Missouri. "Oil-based materials or low-surface-tension liquids, which can wet any surface and spread very easily, pose challenges to researchers who need to control those tiny oil droplets on microdevices." Oil-based compounds are referred to as low-surface-tension liquids because they tend to spread on the surface of a researcher's microscope slides or microarrays where the liquids are placed. Also, as can be seen from oil spills in the Gulf of Mexico, oil can stick to and easily spread out on any surface. Using specially designed oil-repellent surfaces, Kwon and his group demonstrated invisible "virtual walls" that block the spreading of low-surface-tension liquids at the boundary line with microscopic features already created in the device. "Our newly developed surface helped keep oil, which is normally unmanageable, in predetermined pathways, making it controllable.
"We feel that oil-repellent surfaces can be widely utilized for many industrial applications, and virtual walls for low-surface-tension liquids also have immense potential for many lab-on-a-chip devices, which are crucial to current and future research techniques." In the future, oil-repellent virtual walls may be used to control the transport of oil without spillage, Kwon says.

By Jeff Sossamon
What are the Symptoms of Chronic Sinusitis? Sinusitis (also called rhinosinusitis) is the name of the condition in which the lining of your sinuses becomes inflamed. The sinuses are air spaces behind the bones of the upper face, between the eyes and behind the forehead, nose and cheeks. Normally, the sinuses drain through small openings into the inside of the nose. Anything that blocks the flow may cause a buildup of mucus in the sinuses. The blockage and inflammation of the sinus membranes can be infectious or non-infectious. The symptoms caused by sinusitis may be quite uncomfortable. The signs and symptoms may include: - Facial pain, pressure, congestion or fullness - Difficulty breathing through the nose - Discharge of yellow or green mucus from the nose - Teeth pain - Loss of the sense of smell or taste - Sore throat - Bad breath Types of Sinusitis There are two main categories of sinusitis: acute and chronic. Sinusitis is usually preceded by a cold, allergy attack or irritation from environmental pollutants. Often, the resulting symptoms, such as nasal pressure, nasal congestion, a "runny nose," and fever, run their course in a few days. However, if symptoms persist, a bacterial infection or acute sinusitis may develop. Most cases of sinusitis are acute (or sudden onset); however, if the condition occurs frequently or lasts three months or more, you may have chronic sinusitis.
The loading process consists of three basic activities. To load a type, the Java virtual machine must: (1) produce a stream of binary data that represents the type; (2) parse that binary data into implementation-dependent internal data structures in the method area; and (3) create an instance of class java.lang.Class that represents the type. The Java virtual machine specification does not say how the binary data for a type must be produced. Some potential ways to produce binary data for a type are reading it from a class file on a local disk, downloading it across a network, extracting it from a ZIP or JAR archive, or generating it on the fly in memory. The virtual machine must parse the binary data into implementation-dependent internal data structures. (See Chapter 5, "The Java Virtual Machine," for a discussion of potential internal data structures for storing class data.) The Class instance, the end product of the loading step, serves as an interface between the program and the internal data structures. To access information about a type that is stored in the internal data structures, the program invokes methods on the Class instance for that type. Together, the processes of parsing the binary data for a type into internal data structures in the method area and instantiating a Class object on the heap are called creating the type. As described in previous chapters, types are loaded either through the bootstrap class loader or through user-defined class loaders. The bootstrap class loader, a part of the virtual machine implementation, loads types (including the classes and interfaces of the Java API) in an implementation-dependent way. User-defined class loaders, instances of subclasses of java.lang.ClassLoader, load types in custom ways. The inner workings of user-defined class loaders are described in more detail in Chapter 8, "The Linking Model." Class loaders (bootstrap or user-defined) need not wait until a type's first active use before they load the type. Class loaders are allowed to cache binary representations of types, load types early in anticipation of eventual use, or load types together in related groups. If a class loader encounters a problem during early loading, however, it must report that problem (by throwing a subclass of LinkageError) only upon the type's first active use.
In other words, if a class loader encounters a missing or malformed class file during early loading, it must wait to report that error until the class's first active use by the program. If the class is never actively used by the program, the class loader will never report the error.
This May is Mental Health Month. The "Life With a Mental Illness" theme highlights the importance of speaking up and sharing what #mentalillnessfeelslike.
• If we want to break down discrimination and stigma surrounding mental illnesses, we need to start talking about mental health before Stage 4 and sharing how it feels to live with a mental illness.
• Having healthy relationships and getting on a path to good mental health begins with being able to talk about how you feel.
• Telling people how life with a mental illness feels helps build support from friends and family, reduces stigma and discrimination, and is crucial to recovery. Whether you are in Stage 1 and just learning about those early symptoms, or are dealing with what it means to be in Stage 4, sharing how it feels can be part of your recovery.
• People experience the symptoms of mental illnesses differently, and sharing how it really feels—throughout all the stages of an illness—can help others to understand whether what they are going through may be a symptom of a mental health problem.
• Mental illnesses are common and treatable, and help is available. We need to speak up early—before Stage 4—and in real, relatable terms so that people do not feel isolated and alone.
Life with a Mental Illness is meant to help remove the shame and stigma of speaking out, so that more people can be comfortable coming out of the shadows and seeking the help they need.
Robert Hargraves and Ralph Moir introduce liquid fuel reactors in APS Physics | FPS | Liquid Fuel Nuclear Reactors. The 2009 update of MIT’s Future of Nuclear Power shows that the capital cost of new coal plants is $2.30/watt, compared to LWRs at $4/watt. The median of five cost studies of large molten salt reactors from 1962 to 2002 is $1.98/watt, in 2009 dollars. Costs for scaled-down 100 MW reactors can be similarly low for a number of reasons, six of which we summarize briefly: Pressure. The LFTR operates at atmospheric pressure, obviating the need for a large containment dome. At atmospheric pressure there is no danger of an explosion. Safety. Rather than creating safety with multiple defense-in-depth systems, the LFTR’s intrinsic safety keeps such costs low. A molten salt reactor cannot melt down because the normal operating state of the core is already molten. The salts are solid at room temperature, so if a reactor vessel, pump, or pipe ruptured, they would spill out and solidify. If the temperature rises, stability is intrinsic due to salt expansion. In an emergency, an actively cooled solid plug of salt in a drain pipe melts and the fuel flows to a critically safe dump tank. The Oak Ridge MSRE researchers turned the reactor off this way on weekends. Heat. The high heat capacity of molten salt exceeds that of the water in PWRs or the liquid sodium in fast reactors, allowing compact geometries and heat-transfer loops utilizing high-nickel metals. Energy conversion efficiency. High temperatures enable 45% efficient thermal-to-electrical power conversion using a closed-cycle turbine, compared to the 33% typical of existing power plants using traditional Rankine steam cycles. Cooling requirements are nearly halved, reducing costs and making air-cooled LFTRs practical where water is scarce. Mass production. Commercialization of technology lowers costs as the number of units produced increases, due to improvements in labor efficiency, materials, manufacturing technology, and quality.
Doubling the number of units produced reduces cost by a percentage termed the learning ratio, which is often about 20%. In The Economic Future of Nuclear Power, University of Chicago economists estimate it at 10% for nuclear power reactors. Reactors of 100 MW size could be factory-produced daily in the way that Boeing Aircraft produces one airplane per day. At a learning ratio of 10%, costs drop 65% in three years.
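The learning-ratio arithmetic above is easy to check: each doubling of cumulative production multiplies unit cost by (1 − r). A short Python sketch (the daily production rate and three-year horizon come from the passage; the formula is the standard experience-curve model):

```python
# Learning-curve sketch: each doubling of cumulative units produced
# cuts unit cost by the "learning ratio" r.
import math

def unit_cost(c0, units, r):
    """Unit cost after 'units' cumulative units, starting cost c0, learning ratio r."""
    doublings = math.log2(units)
    return c0 * (1.0 - r) ** doublings

# One 100 MW reactor per day for three years ~ 1,095 units (~10 doublings).
c0 = 1.98  # $/watt, the 2009-dollar median cited above
cost_after = unit_cost(c0, units=3 * 365, r=0.10)
drop = 1.0 - cost_after / c0
print(f"cost falls by {drop:.0%}")  # ~65%, matching the passage
```

At a 10% learning ratio, ten doublings give 0.9^10 ≈ 0.35 of the original cost, i.e. the roughly 65% drop the authors cite.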
While the Bohr model does not correctly describe an atom, the Bohr radius $a_0$ keeps its physical meaning as a characteristic size of the electron cloud in a full quantum-mechanical description. Thus the Bohr radius is often used as a unit in atomic physics; see atomic units. Note that the definition of the Bohr radius does not include the effect of reduced mass, and so it is not precisely equal to the orbital radius of the electron in a hydrogen atom in the more physical model where reduced mass is included. This is done for convenience: the Bohr radius as defined above appears in equations relating to atoms other than hydrogen, where the reduced mass correction is different. If the definition of the Bohr radius included the reduced mass of hydrogen, it would be necessary to include a more complex adjustment in equations relating to other atoms. The Bohr radius of the electron is one of a trio of related units of length, the other two being the Compton wavelength of the electron, $\lambda_e$, and the classical electron radius, $r_e$. The Bohr radius is built from the electron mass $m_e$, the reduced Planck constant $\hbar$ and the electron charge $e$. The Compton wavelength is built from $m_e$, $h$ and the speed of light $c$. The classical electron radius is built from $m_e$, $c$ and $e$. Any one of these three lengths can be written in terms of any other using the fine structure constant $\alpha$:

$$ r_e = \alpha \, \frac{\lambda_e}{2\pi} = \alpha^2 a_0 . $$

The Bohr radius including the effect of reduced mass can be given by the following equation:

$$ a_0^{*} = \frac{\lambda_p + \lambda_e}{2\pi\alpha} . $$

In the above equation, the effect of the reduced mass is achieved by using the increased Compton wavelength, which is just the Compton wavelengths of the electron and the proton added together.
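These relations can be cross-checked numerically from CODATA values of the constants; a quick Python sketch (digits in the comments are rounded):

```python
# Numerical check of the trio of lengths and their alpha relations.
import math

e = 1.602176634e-19      # elementary charge (C)
me = 9.1093837015e-31    # electron mass (kg)
hbar = 1.054571817e-34   # reduced Planck constant (J s)
c = 2.99792458e8         # speed of light (m/s)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)  # fine structure constant
a0 = hbar / (me * c * alpha)                     # Bohr radius
lam_C = 2 * math.pi * hbar / (me * c)            # electron Compton wavelength
r_e = alpha**2 * a0                              # classical electron radius

print(f"a0  = {a0:.4e} m")   # ~5.2918e-11 m
print(f"r_e = {r_e:.4e} m")  # ~2.8179e-15 m
# The three lengths differ by successive factors of alpha:
print(f"lam_C / (2*pi*a0) = {lam_C / (2 * math.pi * a0):.6e}  (equals alpha)")
```

The printed ratio reproduces α, confirming that the Compton wavelength, Bohr radius and classical electron radius are linked by successive powers of the fine structure constant.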
When Orville Wright traveled to Cleveland for the dedication of the Aircraft Engine Research Laboratory in the 1940s, he had already witnessed the advancement of aircraft from his Kitty Hawk model to the winged machines that fought in World War II. Today, the lab, now known as NASA Glenn Research Center, has engineers and scientists engaged in an agency-wide effort to develop alternative aircraft designs using low-carbon propulsion technology for larger passenger aircraft that Wright may have never dreamed of. Since the beginning, commercial planes have been powered by carbon-based fuels such as gasoline or kerosene. While these provide the energy to lift large commercial jets into the world’s airspace, electric power is now seen as a new frontier for providing thrust and power for flight. Just as hybrid or turboelectric power has improved fuel efficiency in cars, boats and trains, aeronautical engineers are exploring how planes can be redesigned and configured with electrical power. One of NASA’s goals is to help the aircraft industry shift from relying solely on gas turbines to using hybrid electric and turboelectric propulsion in order to reduce energy consumption, emissions and noise. “Aircraft are highly complex machines,” says Jim Heidmann, manager for NASA’s Advanced Air Transport Technology project. “Moving toward alternative systems requires creating new aircraft designs as well as propulsion systems that integrate battery technologies and electromagnetic machines like motors and generators with more efficient engines.” Glenn researchers are looking at power systems that generate electricity in place of, or in addition to, thrust at the turbine engine, and then convert that electricity into thrust using fans at other places on the aircraft.
“These systems use electric motors and generators that work together with turbine engines to distribute power throughout the aircraft in order to reduce drag for a given amount of fuel burned,” says Amy Jankovsky, subproject lead engineer. “Part of our research is developing the lightweight machinery and electrical systems that will be required to make these systems possible.” In addition to designing better motors, generators and integrated electrical system architectures, Glenn engineers are also researching the basic materials that go into those components. Research is being performed on the conductors inside the wires and on the insulation around them. Along with studying the design of motors and the architecture of power electronics, engineers are improving magnetic materials and semiconductors to make these motors and electronics lighter and more efficient. “Our work is laying a foundation for planes that will require less fossil fuel in the future,” says Glenn engineer Cheryl Bowman, a technical lead on the project. “Considering that the U.S. aviation industry carries over 700 million passengers every year, making each trip more fuel efficient (by up to 30 percent) can have a considerable impact on the nation’s total use of fossil fuels.”
Every parent has witnessed their little ones being selfish at least once, but it turns out they may be ‘wired’ that way! Selfish behaviour can be blamed, in part, on an underdeveloped region of the brain. LiveScience reports on a new study suggesting that this could in fact be the case. The study was conducted at the Max Planck Institute for Cognitive and Brain Sciences in Germany. In the study, 146 children participated in two games, played in pairs. In the “Dictator Game,” one child offered to share a reward, and another child could only accept what was offered. In the “Ultimatum Game,” one child could propose sharing the reward, but the other child could accept or reject the offer. If the child rejected the offer, neither child received a reward. As was expected, older children were more generous than their younger counterparts, suggesting that impulse control matures with age. Brain scans were conducted on both children and adults involved in the study, which showed that “a region called dorsolateral prefrontal cortex, located in the left side of the brain, toward the front, was more developed in adults. The area is considered to be involved with impulse control.” LiveScience reports that “the results suggest that selfish behavior in children may not be due to their inability to know ‘fair’ from ‘unfair’, but rather an immature part of the brain that doesn’t support selfless behavior when tempted to act selfishly.” Understanding how a child’s brain works is the topic of the Brain Power Conference, May 3-4 in Toronto. But just as important as understanding it is giving parents tools and insights to help their kids learn and grow – and to know when not to worry, because sometimes a selfish act is all in the mind!
Your child is participating in a special kind of science fair known as an “Invention Convention”. This project is designed to promote your child’s problem-solving ability and creative thinking skills. Your child will think of a problem she wants to solve, and in the process of solving that problem, she will invent a new product, improve an existing product or develop a new method for doing something. The first step is coming up with a problem that needs to be solved. Your child may ask you if YOU need something to solve a problem. Your interest and encouragement at this stage will help make this an enjoyable learning experience. Once your child has settled on an idea, she completes an Intent to Invent form, signed by you and returned to the school. You can support your student in the following ways:
- Try to refrain from nagging your child to finish the project; set up a calendar with deadlines instead
- Allow your child to make mistakes and use these mistakes as a learning opportunity
- Set aside time for your child to rest and recharge even if deadlines are approaching
- Encourage your child to ask for help – but don’t do the entire project yourself
Have Fun and Encourage Fun!
- The process of inventing should be rewarding and fun for both you and your child. If it stops being fun, take a break and come back to it!!
- If you can, attend the local Invention Convention and experience the excitement and energy of these young inventors — and if your child is selected to go to the California State Invention Convention, support her as best you can. Learn about California Invention Convention.
What are three types of waves with wavelengths shorter than visible light? Ultraviolet waves, X rays, and gamma rays. What are three uses of ultraviolet waves? Ultraviolet radiation striking your skin enables your body to make vitamin D, which you need for healthy bones and teeth. Fluorescent materials absorb ultraviolet waves and emit the energy as visible light. Another useful property of ultraviolet waves is their ability to kill bacteria in food, water, and medical supplies. What can result from overexposure to ultraviolet waves? Sunburns, skin cancers, and damage to the surface of the eye. What are three uses of X rays? Doctors and dentists use low doses of X rays to form images of internal organs, bones, and teeth. Airport security personnel use X-ray screening devices to examine the contents of luggage. X rays can also be used to inspect for cracks inside high-performance jet engines without taking the engine apart, and to photograph the inside of machines. What can gamma rays be used for? Focused bursts of gamma rays are used in radiation therapy to kill cancer cells.
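All of the answers above follow one pattern: shorter wavelength means higher photon energy, E = hc/λ, which is why these waves can make vitamin D, image bones, or kill cancer cells. A small Python sketch using rough representative wavelengths (illustrative values, not exact band edges):

```python
# Photon energy E = h*c/lambda for representative (rough) wavelengths.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron volt

def photon_energy_eV(wavelength_m):
    """Energy of a single photon, in electron volts."""
    return h * c / (wavelength_m * eV)

for name, wavelength_nm in [("visible (green)", 500),
                            ("ultraviolet", 100),
                            ("X ray", 0.1),
                            ("gamma ray", 0.001)]:
    energy = photon_energy_eV(wavelength_nm * 1e-9)
    print(f"{name:16s} ~{wavelength_nm:g} nm -> ~{energy:.3g} eV")
```

Green light comes out near 2.5 eV while an X ray near 0.1 nm carries roughly 12,000 eV, which is why the latter penetrates tissue and luggage.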
Creating micro-scale air vehicles that mimic the flapping of winged insects or birds has become popular, but they typically require a complex combination of pitching and plunging motions to oscillate the flapping wings. To avoid some of the design challenges involved in mimicking insect wing strokes, researchers at the Georgia Institute of Technology propose using flexible wings that are driven by a simple sinusoidal flapping motion. “We found that the simple up and down wavelike stroke of wings at the resonance frequency is easier to implement and generates lift comparable to winged insects that employ a significantly more complex stroke,” said Alexander Alexeev, an assistant professor in Georgia Tech’s School of Mechanical Engineering. Details of the flapping motion proposed by Alexeev and mechanical engineering graduate student Hassan Masoud were presented on Nov. 22 at the 63rd Annual Meeting of the American Physical Society Division of Fluid Dynamics. A paper published in the May issue of the journal Physical Review E also reported on this work, which is supported in part by the National Science Foundation through TeraGrid computational resources. In nature, flapping-wing flight has unparalleled maneuverability, agility and hovering capability. Unlike fixed-wing and rotary-wing air vehicles, micro air vehicles integrate lifting, thrusting and hanging into a flapping wing system, and have the ability to cruise a long distance with a small energy supply. However, significant technical challenges exist in designing flapping wings, many motivated by an incomplete understanding of the physics associated with aerodynamics of flapping flight at small size scales. “When you want to create smaller and smaller vehicles, the aerodynamics change a lot and modeling becomes important,” said Alexeev. 
“We tried to gain insight into the flapping aerodynamics by using computational models and identifying the aerodynamic forces necessary to drive these very small flying machines.” Alexeev and Masoud used three-dimensional computer simulations to examine for the first time the lift and hovering aerodynamics of flexible wings driven at resonance by sinusoidal oscillations. The wings were tilted from the horizontal and oscillated vertically by a force applied at the wing root. To capture the dynamic interactions between the wings and their environment, the researchers used a hybrid computational approach that integrated the lattice Boltzmann model for fluid dynamics and the lattice spring model for the mechanics of elastic wings. The simulations revealed that at resonance -- the frequencies at which a system oscillates with larger amplitudes -- tilted elastic wings driven by a simple harmonic stroke generated lift comparable to that of small insects that employ a significantly more complex stroke. In addition, the simulations identified one flapping regime that maximized lift and another that maximized efficiency. The efficiency was maximized at a flapping frequency 30 percent higher than the frequency that maximized lift. “This information could be useful for regulating the flight of flapping-wing micro air vehicles since high lift is typically needed only during takeoff, while the enhanced aerodynamic efficiency is essential for a long-distance cruise flight,” noted Masoud. To facilitate the design of practical micro-scale air vehicles that employ resonance flapping, the researchers plan to examine how flapping wings can be effectively controlled in different flow conditions, including unsteady gusty environments. They are also investigating whether wings with non-uniform structural and mechanical properties and wings driven by an asymmetric stroke may further improve the resonance performance of flapping wings.
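The role of resonance can be illustrated with the simplest possible stand-in (not the authors' lattice Boltzmann model): a damped, sinusoidally driven harmonic oscillator, whose steady-state amplitude peaks when the driving frequency approaches the natural frequency, so a small input force yields a large stroke.

```python
# Toy model of resonance: steady-state amplitude of a damped oscillator
#     x'' + gamma*x' + omega0^2 * x = f0 * cos(omega * t).
# Parameters here are arbitrary illustrative values, not wing data.
import math

def amplitude(omega, omega0=1.0, gamma=0.1, f0=1.0):
    """Steady-state response amplitude at driving frequency omega."""
    return f0 / math.sqrt((omega0**2 - omega**2) ** 2 + (gamma * omega) ** 2)

# Scan driving frequencies and find where the response peaks.
freqs = [i / 1000 for i in range(1, 2000)]
peak = max(freqs, key=amplitude)
print(f"response peaks near omega = {peak:.3f} (natural frequency omega0 = 1.0)")
print(f"gain at peak vs. static:  {amplitude(peak) / amplitude(0.001):.1f}x")
```

For light damping the peak sits just below ω₀, and the response there is an order of magnitude larger than the static deflection, which is the payoff of driving flexible wings at resonance.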
John Toon | Newswise Science News
The Cassini spacecraft has been orbiting Saturn for about eight years now, sending back strange pictures pretty much the whole time. There are weird pictures of the planet’s rings, pictures of Saturn backlit by the sun, bizarre images of some kind of jets streaming from its moon Enceladus. Pictures have come back of Saturn’s largest moon, Titan, showing methane lakes and undiscovered Titanic countries where, it is proposed, life might once have spawned, died out, or could come back. Saturn was strange all along. Its rings badly confused Galileo, mainly because of the limited capabilities of his telescopes, but also because in 1610 he did not know what he was seeing. In the later 1600s, the Italian-French astronomer Giovanni Cassini observed that the rings are not one solid plane, but divided. In 2004 the Cassini spacecraft found the rings are not flat, but wavy or corrugated. The fact that we can see the rings at all is startling, since they’re less than a mile thick. The three other “gas giant” planets — Jupiter, Uranus and Neptune — have rings, too, but they’re so wispy it took spacecraft to notice them. In a small telescope the image of Saturn is tiny. It has a flat-white to faintly goldish sheen, lustrous like Jupiter and Mars. But unlike them it does not appear as a disk which your eye can comfortably turn into a sphere or globe. What baffled Galileo were the strange protrusions, which he at first thought were separate planets locked in place. But the protrusions were actually those rings. They can be disconcerting if you let them. They’re made of rock, dust and ice left over from some kind of barrage that occurred after Saturn formed about 4.5 billion years ago. Comets or asteroids (no one knows for sure what) annihilated its newly made moons and broke everything to pieces as if there had been a cosmic war. The debris settled into orbit around the giant planet, and huge, flat rings formed from fine rock and ice.
After some uncertain eon, it is believed, a second bombardment began. The present moons show the scars. Three small moons, Telesto, Calypso and Tethys, somehow ended up in the same orbit. Mimas has a crater almost a third of its whole diameter. The potato-shaped moon Hyperion angles around bent over 45 degrees on its axis, implying it was struck and half-killed by something enormous. The moons shepherd the tiny ring particles. Saturn is surrounded by rubble. In a small telescope the rubble becomes strangely beautiful. Striking and chilling. Poe, the master of weirdness, wrote that “the tone of [beauty’s] highest manifestation … is sadness,” which is a strange thing to say, but maybe you can grasp it when you see Saturn shining up there with its rings. It swings slowly around the sun once every 29 years or so, brooding. In ancient times it was the remotest known planet. The Greeks called it Cronos, the father of the gods, whose name is likely linked to the word for time, chronos, in the sense of huge, inexorable motion toward an end and chaos, which in the myths is sad — Cronos toppled his father Uranus, and then was overturned by his own son Zeus. The Romans called him Saturn, and in December celebrated Saturnalia, a festival of reckless abandon as winter closed in. Our week ends on Saturn’s day. Maybe this is not despair, but in here somewhere is an image of life’s inescapable decline. It is unlike any other phase of life, strange and disconcerting, and weirdly beautiful. Dana Wilde’s collection of Amateur Naturalist and other writings, “The Other End of the Driveway,” is available electronically and in paperback from Booklocker.com. A second collection, “Nebulae: A Backyard Cosmography,” will be available soon.
This is a 7 page quiz that includes a rubric. It is broken down into 4 parts as outlined below. Part A - Determining whether 2 addition equations are equal or not equal; matching equivalent equations. Part B - Determining the missing term example: ___ + 3 = 7 Part C - Determining the missing term example: 5 + 2 = ___ + 3 Part D - Writing addition equations for the number 10 (in the form of a word problem...students use pictures and numbers)
Latin/Lesson 7 - The Gerund and Participles
Participles are verbs which function grammatically like adjectives. English, aided by auxiliary verbs, is able to form participle phrases in many tenses. Latin participles have no such auxiliaries. This limits the usage of the participle in Latin, according to some wiki-scholars of Classical Studies.
Present Active Participles
Present participles are formed by adding -ns to the present stem of the verb.
Forming the Present Imperfect Participle:
- 1st Conjugation: infinitive amare; present imperfect participle amans
- 2nd Conjugation: infinitive monere; present imperfect participle monens
- 3rd Conjugation: infinitive regere; present imperfect participle regens
- 4th Conjugation: infinitive audire; present imperfect participle audiens
Present participles are declined like 3rd declension adjectives. In cases besides the nominative, the -ns becomes -nt-:
1. ferens, ferentis
2. capiens, capientis
3. ens, entis
Form the present participle of each of the following Latin verbs and translate it:
- meto, messui, messum, ere
- metuo, metum, ui, ere
- milito, avi, atum, are
- postulo, avi, atum, are
- sulco, avi, sulcum, are
- iacio, ieci, iactum, ere
The examples will show participles of the verb amo, amare, amavi, amatum (to love).
- present active: base + 'ns.' This forms a two-termination 3rd declension adjective. In the case of amare, the participle is amans, amantis (loving).
- perfect passive: fourth principal part, with appropriate first or second declension endings: amatus, -a, -um.
- future active: fourth principal part, minus 'm', add 'rus, -a, -um.' This forms a 1st-2nd declension adjective: amaturus, -a, -um (about to love).
In deponent verbs, the perfect passive participle is formed in the same way as in regular verbs. However, since the nature of the deponent verb is passive in form and active in meaning, the participle is translated actively. Remember that participles are adjectives, and therefore must be declined to agree with the noun which they modify in case, number and gender. The gerund is a verbal noun which is used to refer to the action of a verb. For example: ars scribendi = the art of writing. The gerund is declined as a second declension neuter noun. It is formed by taking the present stem and adding -ndum:
- amo, amare: amandum
- video, videre: videndum
- rego, regere: regendum
- capio, capere: capiendum
- audio, audire: audiendum
Meanings of the gerund
- Genitive: ars legendi - the art of reading / to read
- Accusative: ad puniendum - to punish, for punishing
- Ablative: saepe canendo - through frequently singing; in legendo - while reading
- Genitive with causa: puniendi causa - in order to punish
The gerundive is a 1st/2nd declension adjective formed the same way as the gerund, and its function overlaps somewhat with the gerund, but otherwise differs. The literal translation of the gerundive is with "to be", e.g. defendendus, -a, -um = "to be defended".
- Accusative: ad ludos fruendos - to the games to be enjoyed - to enjoy the games (Note that if this were a gerund construction, it would be ad ludis fruendum, since fruor, -i takes the ablative case. In the gerundive construction, both noun and gerundive are governed by the preposition ad.)
- Gerundive of obligation: Carthago delenda est - Carthage is to be destroyed - Carthage must be destroyed. Note that the agent (e.g. "by us": Carthago nobis delenda est) goes into the dative case.
1. Convert the following subjunctive purpose clauses into gerund or gerundive clauses with the same meaning. For example: militabat ut patriam defenderet -> militabat ad patriam defendendum or militabat patriam defendendi causa or militabat ad patriam defendendam.
Try to use each construction twice. - casam exit ut patrem adiuvet - mater in casam rediit ut cenam pararet - hostes vincebant ergo scutum abieci (I threw away my shield) ut celerius fugerem - in silvas currimus ut nos celemus - hostes in silvas ineunt ut nos invenirent - Brutus Iulium Caesarem occidit ut Romam liberaret 2. Translate into Latin. For example: I must see the temple -> templum mihi videndum est - We must build a large city. - Julius Caesar must lead an army into Greece. - Scipio (Scipio, -ionis) must defeat Hannibal.
NASA - Chandra X-ray Observatory patch. May 2, 2017 Combining data from NASA's Chandra X-ray Observatory with radio observations and computer simulations, an international team of scientists has discovered a vast wave of hot gas in the nearby Perseus galaxy cluster. Spanning some 200,000 light-years, the wave is about twice the size of our own Milky Way galaxy. The researchers say the wave formed billions of years ago, after a small galaxy cluster grazed Perseus and caused its vast supply of gas to slosh around an enormous volume of space. "Perseus is one of the most massive nearby clusters and the brightest one in X-rays, so Chandra data provide us with unparalleled detail," said lead scientist Stephen Walker at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "The wave we've identified is associated with the flyby of a smaller cluster, which shows that the merger activity that produced these giant structures is still ongoing." X-ray 'Tsunami' Found in Perseus Galaxy Cluster Video above: A wave spanning 200,000 light-years is rolling through the Perseus galaxy cluster, according to observations from NASA's Chandra X-ray Observatory coupled with a computer simulation. The simulation shows the gravitational disturbance resulting from the distant flyby of a galaxy cluster about a tenth the mass of the Perseus cluster. The event causes cooler gas at the heart of the Perseus cluster to form a vast expanding spiral, which ultimately forms giant waves lasting hundreds of millions of years at its periphery. Merger events like this are thought to occur as often as every three to four billion years in clusters like Perseus. Video Credits: NASA's Goddard Space Flight Center. A paper describing the findings appears in the June 2017 issue of the journal Monthly Notices of the Royal Astronomical Society and is available online. Galaxy clusters are the largest structures bound by gravity in the universe today.
Some 11 million light-years across and located about 240 million light-years away, the Perseus galaxy cluster is named for its host constellation. Like all galaxy clusters, most of its observable matter takes the form of a pervasive gas averaging tens of millions of degrees, so hot it only glows in X-rays. Chandra observations have revealed a variety of structures in this gas, from vast bubbles blown by the supermassive black hole in the cluster's central galaxy, NGC 1275, to an enigmatic concave feature known as the "bay." Image above: This X-ray image of the hot gas in the Perseus galaxy cluster was made from 16 days of Chandra observations. Researchers then filtered the data in a way that brightened the contrast of edges in order to make subtle details more obvious. An oval highlights the location of an enormous wave found to be rolling through the gas. Image Credits: NASA's Goddard Space Flight Center/Stephen Walker et al. The bay's concave shape couldn't have formed through bubbles launched by the black hole. Radio observations using the Karl G. Jansky Very Large Array in central New Mexico show that the bay structure produces no emission, the opposite of what scientists would expect for features associated with black hole activity. In addition, standard models of sloshing gas typically produce structures that arc in the wrong direction. Walker and his colleagues turned to existing Chandra observations of the Perseus cluster to further investigate the bay. They combined a total of 10.4 days of high-resolution data with 5.8 days of wide-field observations at energies between 700 and 7,000 electron volts. For comparison, visible light has energies between about two and three electron volts. The scientists then filtered the Chandra data to highlight the edges of structures and reveal subtle details. 
Next, they compared the edge-enhanced Perseus image to computer simulations of merging galaxy clusters developed by John ZuHone, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. The simulations were run on the Pleiades supercomputer operated by the NASA Advanced Supercomputing Division at Ames Research Center in Silicon Valley, California. Although he was not involved in this study, ZuHone collected his simulations into an online catalog to aid astronomers studying galaxy clusters. "Galaxy cluster mergers represent the latest stage of structure formation in the cosmos," ZuHone said. "Hydrodynamic simulations of merging clusters allow us to produce features in the hot gas and tune physical parameters, such as the magnetic field. Then we can attempt to match the detailed characteristics of the structures we observe in X-rays." One simulation seemed to explain the formation of the bay. In it, gas in a large cluster similar to Perseus has settled into two components, a "cold" central region with temperatures around 54 million degrees Fahrenheit (30 million Celsius) and a surrounding zone where the gas is three times hotter. Then a small galaxy cluster containing about a thousand times the mass of the Milky Way skirts the larger cluster, missing its center by around 650,000 light-years. Animation above: This animation dissolves between two different views of hot gas in the Perseus galaxy cluster. The first is Chandra's best view of hot gas in the central region of the Perseus cluster, where red, green and blue indicate lower-energy to higher-energy X-rays, respectively. The larger image incorporates additional data over a wider field of view. It has been specially processed to enhance the contrast of edges, revealing subtle structures in the gas. The wave is marked by the upward-arcing curve near the bottom, centered at about 7 o'clock. Animation Credits: NASA/CXC/SAO/E.Bulbul, et al. 
and NASA's Goddard Space Flight Center/Stephen Walker et al. The flyby creates a gravitational disturbance that churns up the gas like cream stirred into coffee, creating an expanding spiral of cold gas. After about 2.5 billion years, when the gas has risen nearly 500,000 light-years from the center, vast waves form and roll at its periphery for hundreds of millions of years before dissipating. These waves are giant versions of Kelvin-Helmholtz waves, which show up wherever there's a velocity difference across the interface of two fluids, such as wind blowing over water. They can be found in the ocean, in cloud formations on Earth and other planets, in plasma near Earth, and even on the sun. "We think the bay feature we see in Perseus is part of a Kelvin-Helmholtz wave, perhaps the largest one yet identified, that formed in much the same way as the simulation shows," Walker said. "We have also identified similar features in two other galaxy clusters, Centaurus and Abell 1795." Chandra X-ray Observatory. Image Credits: NASA/CXC The researchers also found that the size of the waves corresponds to the strength of the cluster's magnetic field. If it's too weak, the waves reach much larger sizes than those observed. If too strong, they don't form at all. This study allowed astronomers to probe the average magnetic field throughout the entire volume of these clusters, a measurement that is impossible to make by any other means. NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra's science and flight operations. ZuHone Catalog: http://gcmc.hub.yt./ Karl G. 
Jansky Very Large Array: http://www.vla.nrao.edu/ Pleiades supercomputer: https://www.nas.nasa.gov/hecc/resources/pleiades.html NASA Advanced Supercomputing Division: https://www.nas.nasa.gov/index.html For more information about Chandra, visit: http://www.nasa.gov/chandra Images (mentioned), Animation (mentioned), Video (mentioned), Text, Credits: NASA's Goddard Space Flight Center, by Francis Reddy/Rob Garner.
The following is more a guide for readers. How the spellings are applied in practice is beyond the scope of such a short description. Phonetics are in IPA. Most consonants are usually pronounced much as in English, but: - c: /k/ or /s/, much as in English. - ch: /x/, also gh. Medial 'cht' may be /ð/ in Northern dialects. loch (fjord or lake), nicht (night), dochter (daughter), dreich (dreary), etc. Similar to the German "Nacht". - ch: word initial or where it follows 'r': /tʃ/. airch (arch), mairch (march), etc. - gn: /n/. In Northern dialects /gn/ may occur. - kn: /n/. In Northern dialects /kn/ or /tn/ may occur. knap (talk), knee, knowe (knoll), etc. - ng: is always /ŋ/. - nch: usually /nʃ/. brainch (branch), dunch (push), etc. - r: /r/ or /ɹ/ is pronounced in all positions, i.e. rhotically. - s or se: /s/ or /z/. - t: may be a glottal stop between vowels or word final. In Ulster, dentalised pronunciations may also occur, also for 'd'. - th: /ð/ or /θ/, much as in English. Initial 'th' in thing, think and thank, etc. may be /h/. - wh: usually /ʍ/, older /xʍ/. Northern dialects also have /f/. - wr: /wr/, more often /r/, but may be /vr/ in Northern dialects. wrack (wreck), wrang (wrong), write, wrocht (worked), etc. - z: /jɪ/ or /ŋ/, may occur in some words as a substitute for the older ȝ (yogh). For example: brulzie (broil), gaberlunzie (a beggar) and the names Menzies, Finzean, Culzean, MacKenzie, etc. (As a result of the lack of education in Scots, MacKenzie is now generally pronounced with a /z/, following the perceived realisation of the written form, as more controversially is sometimes Menzies.) - The word final 'd' in nd and ld: often silent, but often pronounced in derived forms. Sometimes simply 'n' and 'l' or 'n'' and 'l''. auld (old), haund (hand), etc. - 't' in medial cht ('ch' = /x/) and st and before final en: often silent. fochten (fought), thristle (thistle), also 't' in aften (often), etc. - 't' in word final ct and pt: often silent, but often pronounced in derived forms. respect, accept, etc. 
In Scots, vowel length is usually conditioned by the Scots vowel length rule. Words which differ only slightly in pronunciation from Scottish English are generally spelled as in English. Other words may be spelt the same but differ in pronunciation, for example: aunt, swap, want and wash with /a/, bull, full v. and pull with /ʌ/, bind, find and wind v., etc. with /ɪ/. - The unstressed vowel /ə/ may be represented by any vowel letter. - a: usually /a/ but in south west and Ulster dialects often /ɑ/. Note final a in awa (away), twa (two) and wha (who) may also be /ɑ/ or /ɔ/ or /e/ depending on dialect. - au, aw and sometimes a, a' or aa: /ɑː/ or /ɔː/ in Southern, Central and Ulster dialects but /aː/ in Northern dialects. The cluster 'auld' may also be /ʌul/ in Ulster. aw (all), cauld (cold), braw (handsome), faw (fall), snaw (snow), etc. - ae, ai, a(consonant)e: /e/. Often /ɛ/ before /r/. In Northern dialects the vowel in the cluster -'ane' is often /i/. brae (slope), saip (soap), hale (whole), ane (one), ance (once), bane (bone), etc. - ea, ei, ie: /iː/ or /eː/ depending on dialect. /ɛ/ may occur before /r/. Root final this may be /əi/ in Southern dialects. In the far north /əi/ may occur. deid (dead), heid (head), meat (food), clear, speir (enquire), sea, etc. - ee, e(Consonant)e: /iː/. Root final this may be /əi/ in Southern dialects. ee (eye), een (eyes), steek (shut), here, etc. - e: /ɛ/. bed, het (heated), yett (gate), etc. - eu: /(j)u/ or /(j)ʌ/ depending on dialect. Sometimes erroneously 'oo', 'u(consonant)e', 'u' or 'ui'. beuk (book), ceuk (cook), eneuch (enough), leuk (look), teuk (took), etc. - ew: /ju/. In Northern dialects a root final 'ew' may be /jʌu/. few, new, etc. - i: /ɪ/, but often varies between /ɪ/ and /ʌ/ especially after 'w' and 'wh'. /æ/ also occurs in Ulster before voiceless consonants. big, fit (foot), wid (wood), etc. - i(consonant)e, y(consonant)e, ey: /əi/ or /aɪ/. 'ay' is usually /e/ but /əi/ in ay (yes) and aye (always). 
In Dundee it is noticeably /ɛ/. - o: /ɔ/ but often /o/. - oa: /o/. - ow, owe (root final), seldom ou: /ʌu/. Before 'k' vocalisation to /o/ may occur especially in western and Ulster dialects. bowk (retch), bowe (bow), howe (hollow), knowe (knoll), cowp (overturn), yowe (ewe), etc. - ou, oo, u(consonant)e: /u/. Root final /ʌu/ may occur in Southern dialects. cou (cow), broun (brown), hoose (house), moose (mouse) etc. - u: /ʌ/. but, cut, etc. - ui, also u(consonant)e, oo: /ø/ in conservative dialects. In parts of Fife, Dundee and north Antrim /e/. In Northern dialects usually /i/ but /wi/ after /g/ and /k/ and also /u/ before /r/ in some areas eg. fuird (ford). Mid Down and Donegal dialects have /i/. In central and north Down dialects /ɪ/ when short and /e/ when long. buird (board), buit (boot), cuit (ankle), fluir (floor), guid (good), schuil (school), etc. In central dialects uise v. and uiss n. (use) are [jeːz] and [jɪs]. - Negative na: /ɑ/, /ɪ/ or /e/ depending on dialect. Also 'nae' or 'y' eg. canna (can't), dinna (don't) and maunna (mustn't). - fu (ful): /u/, /ɪ/, /ɑ/ or /e/ depending on dialect. Also 'fu'', 'fie', 'fy', 'fae' and 'fa'. - The word ending ae: /ɑ/, /ɪ/ or /e/ depending on dialect. Also 'a', 'ow' or 'y', for example: arrae (arrow), barrae (barrow) and windae (window), etc.
Basic Numeracy
Rote learning may be a handy tool to teach the names and order of numbers, but it needs to be supplemented with hands-on activities so that children can gain an understanding that each number refers to a set amount or group of objects. In order to perform higher-order calculations such as addition and subtraction, children must first be able to recognize and create concrete examples to represent numbers.
Conservation of Number
One important concept that young children often have difficulty understanding is conservation. Visual cues are very important for young children. If a tightly clustered group of objects is spread out, they often believe that the number of objects has become greater. It is important to teach children that no matter how many times the layout of a group of objects is changed, the number of objects remains the same.
One-to-One Correspondence
In order for children to grasp the concept of conservation, they need to be given the tools to prove the theory for themselves. This is where one-to-one correspondence comes in. Simply put, one-to-one correspondence is the process of touching one object for each number that is counted aloud. This may seem simple, but many children initially find it difficult to coordinate their counting with the movement of their hand.
Counting by two's
Unlike many dystopian novels, which are set in distant and unfamiliar futures, 1984 is convincing in part because its dystopian elements are almost entirely things that have already happened, as Orwell drew from first-hand experience in creating the world of Oceania. For example, “2 + 2 = 5” was a real political slogan from the Soviet Union, a promise to complete the industrializing Five-Year Plan in four years. Orwell satirizes the slogan here to demonstrate the authoritarian tendency to suspend reality. Prior to writing the novel, Orwell had watched the communist revolution in Russia and volunteered to fight against the Fascist government in the Spanish Civil War. At first supportive of the Russian Revolution, Orwell changed his opinions after realizing that behind the veneer of justice and equality lurked widespread famines, forced labor, internal power struggles, and political repression. While fighting in the Spanish Civil War, Orwell became disillusioned with elements within the resistance forces that he felt wanted to replace the Fascist government with an authoritarian regime of their own. These experiences provide much of the political satire of 1984. The Spanish Civil War catalyzed Orwell and made him highly critical of authoritarian tendencies on the left. Much of the Party’s brutality, paranoia, and betrayals are drawn from the Great Purges of 1936–1938 in the Soviet Union. Over 600,000 people died in an official purge of the Communist Party, in an era that also included widespread repression of the public, police surveillance and execution without trial, and an atmosphere of fear. In 1984, Goldstein is the stand-in for Leon Trotsky, the revolutionary figurehead whom Stalin cast out of the party and denounced as a traitor to the cause. Jones, Aaronson, and Rutherford symbolize people who were executed or sent to forced-labor camps. 
Trotsky’s manifesto, The Revolution Betrayed, has much in common with Goldstein’s book, from the tone of writing to the subjects discussed. The rise of Hitler and the scapegoating of Jews and other “undesirables” also had a profound effect on Orwell. He realized that mass media was a key factor in Hitler’s rise, enabling prominent figures and organizations to shape public opinion on a broad scale. The intrusive telescreens and the Party’s frequent parades and events are drawn from Nazi Party public propaganda and its marches and rallies. When 1984 was written, World War II had ended only a few years prior, and many people believed a World War III was inevitable, making the wars of the novel feel not just realistic but unavoidable. Additionally, 1984 was written three years after the U.S. dropped atomic bombs on Hiroshima and Nagasaki, and Orwell references nuclear-powered wars happening in different parts of the world. The idea of three superstates came from the 1943 Tehran Conference, where Stalin, Winston Churchill, and Franklin D. Roosevelt discussed global “areas of influence” and how they should exercise their influence on the rest of the world. Orwell also included everyday life experiences from World War II London. The unappetizing food, inconsistent electricity, and scarcity of basic household goods in 1984 come from Orwell’s experiences with wartime rationing. Frequent bombing raids on London appear in 1984 as well, an echo of the Blitz campaign carried out by Germany on London and the surrounding areas, in which 40,000 people died and almost a million buildings were destroyed.
The X-bar theory was first proposed by Noam Chomsky (1970). It postulates that all human languages share certain structural similarities, including the same underlying syntactic structure, which is known as the "X-bar". The Default Grammar is based on a modified version of the X-bar approach, as indicated below. Constituency grammars are a method of sentence analysis that divides a sentence into major parts, which are in turn further divided into smaller parts in a process that continues until irreducible constituents are reached, i.e., until each constituent consists of only a word or a meaningful part of a word. The end result is presented in a visual diagrammatic form that reveals the hierarchical immediate constituent structure of the sentence at hand. For example: This tree illustrates the manner in which the entire sentence is divided first into two constituents (NP and VP, i.e., subject and predicate), which are further divided into immediate constituents (V, NP, A), and so on, until the smallest constituents (N, V, D, A) are reached. The X-bar is a specific implementation of constituency grammars: it is a method of sentence analysis that divides the sentence into constituents, but it states some very specific rules for doing that: - the topmost node (S, in the diagram above) is called XP (X-phrase) and is considered to be the maximal projection of a head X. This means that the whole process must be understood bottom-up (from a head to its projections) instead of top-down. - the "X" is actually a variable that must be replaced by any of the possible heads: noun (N), verb (V), adjective (J), adverb (A), etc. In that sense, there is no real XP, but NP's, VP's, JP's, etc. A VP (verbal phrase) is the maximal projection of a verb (V); a NP (noun phrase) is the maximal projection of a noun (N); and so on. 
The sentence above, for instance, can be considered to be the maximal projection of a V (killed) and, therefore, constitutes a VP (verbal phrase), instead of "S". The use of the "X" (and therefore "XP") comes from the fact that one of the claims of the theory is that all these phrases (NP, VP, JP, etc.) share the same underlying structure, i.e., a NP is a specific implementation of a general XP. - projections are always binary, i.e., the tree cannot have more than two branches at a time. In the example above, for instance, there is a VP (killed the man yesterday) with three branches. This is not possible in X-bar. In order to avoid this, the head may have intermediate projections before the maximal projection. These intermediate projections are called XB (from X-bar), and again must be replaced by the specific categories of the head (NB is the intermediate projection of N, VB is the intermediate projection of V, etc.). - the maximal projection is "maximal", i.e., there can be one single maximal projection of the same head. If we simply replace "S" by "VP" in the example above, the VP would project a VP, which is not possible according to the X-bar. We then have to multiply the intermediate projections (VB's in our case). One head can have as many intermediate projections as necessary, but it can have one single maximal projection. - there can be four different types of arguments inside the X-bar structure: the head, which projects the whole structure ("killed", in the example above); the complement (or comp), that complements the head ("the man"); the adjunct (or adjt), that modifies the head ("yesterday"); and the specifier (or spec), that determines the head ("they"). The head can have as many complements and adjuncts as necessary, but it can have one single specifier. - The intermediate projections (NB's, VB's, JB's, etc.) are actually combinations of the head with its complements and adjuncts, if any, whereas the maximal projection (NP, VP, JP, etc.) 
is the combination of the head with its specifier, if any. - the complements, adjuncts and specifiers are themselves maximal projections (of different categories other than the head). For instance, the complement of "killed" is not simply a noun but a NP ("the man"), which is the maximal projection of the noun head ("man"). Likewise, the adjunct of "killed" is not the word "yesterday" but the AP ("yesterday"), which is the maximal projection of the adverbial head "yesterday". If we apply all the changes indicated above to our previous tree, we get the X-bar representation of the sentence, which is the following: In the above: - "they" is a NP, i.e., a maximal projection of a nominal head (the personal pronoun "they"). It plays the role of the specifier of the VP. - "killed" is the head of the VP, which is actually a projection of it. - "the man" is a NP, i.e., a maximal projection of a nominal head (the noun "man"). It plays the role of the complement of the VP (and, therefore, constitutes a VB). - "the" is a DP, i.e., a maximal projection of a determiner head (the article "the"). It plays the role of the specifier of the NP, which is inside the VB. - "yesterday" is an AP, i.e., a maximal projection of an adverbial head (the adverb "yesterday"). It plays the role of the adjunct of the VP (and, therefore, constitutes a VB). - VP, NP, DP, AP are maximal projections - VB and NB are intermediate projections The X-bar abstract configuration is depicted in the diagram below: - X is the head, the nucleus or the source of the whole syntactic structure, which is actually derived (or projected) out of it. The letter X is used to signify an arbitrary lexical category (part of speech). When analyzing a specific utterance, specific categories are assigned. Thus, the X may become an N for noun, a V for verb, a J for adjective, a P for preposition, etc. 
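The X-bar analysis just described can also be sketched as a small data structure. The following Python sketch is illustrative only and not part of the Default Grammar itself; the `Node` class and the `words` helper are names invented for this example:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    label: str                      # e.g. "VP", "VB", "V", "NP"
    word: Optional[str] = None      # lexical content, for leaf nodes only
    children: List["Node"] = field(default_factory=list)

# Maximal projections acting as spec, comp and adjt of the verb:
they = Node("NP", children=[Node("N", "they")])
the  = Node("DP", children=[Node("D", "the")])
man  = Node("NP", children=[the, Node("NB", children=[Node("N", "man")])])
yday = Node("AP", children=[Node("A", "yesterday")])
kill = Node("V", "killed")

# Binary projections of the head V:
vb1 = Node("VB", children=[kill, man])    # head + comp: "killed the man"
vb2 = Node("VB", children=[vb1, yday])    # VB + adjt: "killed the man yesterday"
vp  = Node("VP", children=[they, vb2])    # spec + VB: the single maximal projection

def words(n: Node) -> List[str]:
    """Collect the leaves left-to-right."""
    if n.word is not None:
        return [n.word]
    return [w for c in n.children for w in words(c)]

print(words(vp))  # ['they', 'killed', 'the', 'man', 'yesterday']
```

Note that every non-leaf node has at most two children, reflecting the binary-branching constraint, and that the head V projects exactly one VP through a chain of VB's.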
- comp (i.e., complement) is an internal argument, i.e., a word, phrase or clause which is necessary to the head to complete its meaning (e.g., objects of transitive verbs) - adjt (i.e., adjunct) is a word, phrase or clause which modifies the head but which is not syntactically required by it (adjuncts are expected to be extra-nuclear, i.e., removing an adjunct would leave a grammatically well-formed sentence) - spec (i.e., specifier) is an external argument, i.e., a word, phrase or clause which qualifies (determines) the head - XB (X-bar) is the general name for any of the intermediate projections derived from X - XP (X-bar-bar, X-double-bar, X-phrase) is the maximal projection of X. The head, the complement, the specifier and the adjunct are said to be the constituents of the syntactic representation and define the four general universal syntactic roles. In the X-bar diagram depicted above, the letter X is used to signify an arbitrary category. Thus, the X may become an N for noun, a V for verb, and so on. In the Default Grammar, there are eight different types of heads: - N = nouns and nominals: personal pronouns, demonstrative pronouns, nominalizations, etc - V = verbs - J = adjectives - A = adverbs - P = prepositions - D = determiners: articles, demonstrative determiners, possessive determiners, quantifiers - I = auxiliary verbs - C = conjunctions Specifiers are used to narrow the meaning intended by the head. They include: - articles: the (book), a (book), etc. - possessive determiners: my (book), your (book), etc. - demonstrative determiners: this (book), that (book), etc. - quantifiers: no (answer), every (hour), etc. - intensifiers (emphasizers, amplifiers, downtoners): very (expensive), quite (well), nearly (under), etc. Complements are used to complete the meaning intended by the head. 
They may be: - direct objects: (do) something, (give) something - indirect objects: (laugh at) something, (give to) someone - complement of deverbals (i.e., nouns deriving from verbs): (construction of) the city, (arrival of) Peter - complement of adjectives: (loyal) to the queen, (interested) in Chemistry - complement of adverbs: (contrarily) to popular belief, (independently) from her - complement of prepositions: (under) the table, (after) today - complement of conjunctions: (and) Peter, (I don't know if) he'll come Adjuncts are used to modify the meaning intended by the head: - adjectives: beautiful (table) - adverbs: (speak) slowly - prepositional phrases: (table) of wood In the X-bar theory, the heads (X) project two different types of structures: - XB (x-bar) is the intermediate projection, and is derived from the combination of the head or any of its intermediate projections with complements and adjuncts - XP (x-bar-bar, or x-phrase) is the maximal projection, and is derived from the combination of the topmost intermediate projection and the specifier There can be as many intermediate projections as adjuncts and complements, but any head projects one single maximal projection, because it may have one single specifier. 
The heads define the nature of the intermediate and maximal projections, thus: - A head N projects NB's and a Noun Phrase (NP) - A head V projects VB's and a Verbal Phrase (VP) - A head J projects JB's and an Adjective Phrase (JP) - A head A projects AB's and an Adverbial Phrase (AP) - A head P projects PB's and a Prepositional Phrase (PP) - A head D projects DB's and a Determiner Phrase (DP) - A head I projects IB's and an Inflectional Phrase (IP) - A head C projects CB's and a Complementizer Phrase (CP) Specifiers, complements and adjuncts are themselves complex syntactic structures (i.e., maximal projections, or XP's) which are combined to form the sentence structure:
Branching is binary
A key assumption of X-bar theory is that branching is always binary, if it occurs. This means that there should be as many XB's as complements and adjuncts.
Order is parametrized
The order of the constituents (specifiers, complements and adjuncts) is subject to language-specific parametrization and may vary: The following conventions have been adopted in the Default Grammar. They do not correspond to the current assumptions of the X-bar theory, and derive rather from extralinguistic issues (such as machine-tractability). In the original X-bar approach, branching is always binary. In the Default Grammar, this is also true, except for coordination, where branching is ternary. In any case, the coordinated constituents always project a structure of the same category (two coordinated NP's project a NP, two coordinated NB's project a NB, and so on). In case of a coordination of more than two constituents, the coordination must be represented in separate steps (i.e., branching cannot be greater than 3).
Surface and Deep Structures
The Default Grammar differentiates between the surface syntactic structure and the deep syntactic structure. The former preserves the order of the words in the sentence; the latter preserves the dependency relations. 
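The projection rules above (one XB layer per complement or adjunct, and exactly one maximal projection per head) can be summarized in a few lines of illustrative Python. The function names are invented for this sketch and are not part of the Default Grammar:

```python
def intermediate(cat: str) -> str:
    return cat + "B"          # N -> NB, V -> VB, ...

def maximal(cat: str) -> str:
    return cat + "P"          # N -> NP, V -> VP, ...

def project(head_cat: str, comps=(), adjts=(), spec=None):
    """Return the sequence of projection labels produced by one head:
    one binary XB layer per comp and per adjt, then the single XP
    (which absorbs the at-most-one spec, if present)."""
    labels = []
    for _ in comps:
        labels.append(intermediate(head_cat))
    for _ in adjts:
        labels.append(intermediate(head_cat))
    labels.append(maximal(head_cat))
    return labels

# "killed" + comp "the man" + adjt "yesterday" + spec "they":
print(project("V", comps=["the man"], adjts=["yesterday"], spec="they"))
# ['VB', 'VB', 'VP']
```

A bare head with no dependents still projects its single XP, e.g. `project("N")` yields `['NP']`.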
The deep structure is converted into the surface structure, and vice-versa, through the movement of the constituents. This may entail different configurations for the same sentence, depending on the type of the representation. The Default Grammar adopts the following general configuration, where CP is the topmost category, IP is the complement of CP, and VP is the complement of IP. CP is the maximal projection of a conjunction. It is also used to represent topicalization (i.e., movement of a constituent out of its original position to the beginning of the sentence). Differently from the current X-bar approach, the Default Grammar considers a clause to be an instance of CP (instead of DP). However, CP is represented only in two cases: - When there is a subordinating conjunction, which will be the head of CP; and - In the surface structure, when there is any topicalization. In this case, the topicalized constituent is represented at the position of adjunct of CP (even if the head of CP is empty). CP is not represented when there is no subordinating conjunction or topicalized element. IP is the maximal projection of an auxiliary verb. Differently from the current X-bar approach, only auxiliary verbs may occupy the position of the head of IP. IP is represented in the following cases: - When the sentence contains an auxiliary verb, which will be the head of IP; and - When there is a CP and the sentence is finite (i.e., it is inflected in tense, aspect or mood). IP is not represented when there is no CP nor auxiliary verb. The subject of a sentence is represented at the position of the spec of IP whenever the sentence contains an auxiliary verb. If this is not the case, and the subject is not topicalized, the subject is always represented at the position of the spec of VP. VP is the maximal projection of a main verb or a copula (but not of an auxiliary verb). 
It may contain one single specifier (the subject of the clause) and as many adjuncts and complements as necessary. The position of the spec of VP is occupied only if there is no auxiliary verb (otherwise the subject is represented as the spec of IP) and the subject is not topicalized (otherwise it is represented as the adjunct of CP). There is no structural difference between complements (either direct or indirect) and adjuncts. They are always represented as branches of the intermediate projection. Predicates are represented as complements of copula (linking) verbs. NP is the maximal projection of a noun. It may contain one single specifier (DP), and as many adjuncts and complements as necessary. JP is the maximal projection of an adjective. It may contain one single specifier (AP), and as many adjuncts and complements as necessary. AP is the maximal projection of an adverb. It may contain one single specifier (another AP), and as many adjuncts and complements as necessary. PP is the maximal projection of a preposition. It may contain one single specifier (AP), and as many adjuncts and complements as necessary. DP is the maximal projection of a determiner. It may contain one single specifier (another DP) and adjuncts. A DP may not contain complements. The X-bar may be represented in two different formats: - Projection-driven, where all relations are represented in terms of intermediate and maximal projections (i.e., XB's and XP's) - Head-driven, where all relations are represented by reference to the head (i.e., specifiers, complements and adjuncts) The projection-driven representation is used when the tree structure is important; the head-driven representation, when the network representation is required. XP(XB(XB(HEAD;COMP);ADJT);SPEC) = XS(HEAD;SPEC)XC(HEAD;COMP)XA(HEAD;ADJT) - ↑ Chomsky, Noam (1970). Remarks on nominalization. In: R. Jacobs and P. Rosenbaum (eds.) Readings in English Transformational Grammar, 184-221. Waltham: Ginn. 
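The equivalence between the two formats can be sketched in code: walking down the projection line of a projection-driven tree and emitting one head-driven relation (XS, XC or XA) per projection step. The tuple encoding and function names below are assumptions made for this example, not part of the Default Grammar specification:

```python
# Encoding assumed for this sketch: each non-leaf node is a tuple
# (label, sub_projection, dependent, relation), and leaves are strings.

def head_of(node):
    """The lexical head is reached by descending the projection line."""
    while not isinstance(node, str):
        node = node[1]
    return node

def relations(node):
    """Flatten a projection-driven tree into head-driven relations."""
    if isinstance(node, str):
        return []
    label, sub, dep, rel = node
    tag = {"spec": "XS", "comp": "XC", "adjt": "XA"}[rel]
    return relations(sub) + [(tag, head_of(sub), dep)]

# XP(XB(XB(HEAD;COMP);ADJT);SPEC) for "they killed the man yesterday":
tree = ("VP",
        ("VB",
         ("VB", "killed", "the man", "comp"),
         "yesterday", "adjt"),
        "they", "spec")

print(relations(tree))
# [('XC', 'killed', 'the man'), ('XA', 'killed', 'yesterday'), ('XS', 'killed', 'they')]
```

Every relation points back to the same head, "killed", which is what makes the head-driven format a network rather than a tree.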
- ↑ In the X-bar theory, differently from the UNLarium approach, adverbs are subsumed by prepositions and are not considered to be an independent lexical category.
About this lesson Teaching the Pentatonic scales is a great way to introduce improvisation to your student. They are easy to learn, easy to apply, and can produce some great sounds. This resource gives your student a handy reference sheet they can refer to when using either the Major or Minor Pentatonic Scales. A comprehensive guide on introducing these scales has already been created in INT-01 Introduction to the Pentatonic Scale, so this resource does not include lesson instructions. Before you give this material to your student, make sure they feel comfortable with techniques such as hammer-ons, pull-offs, slides, bends and vibrato.
Twenty-first-century classrooms are becoming increasingly culturally, ethnically, and racially diverse and are looking more and more like microcosms. Consequently, students and some educational stakeholders are demanding the inclusion of race, culture, justice, and equality in the curricula and pushing the envelope for more inclusive pedagogy. Central to the concept of inclusive pedagogy are the values of fairness and equity. These values have elicited concerns throughout the educational system regarding how instructors and facilitators serve all learners' academic needs in their academies. Proponents of inclusive pedagogy have indicated that numerous variables influence pedagogy, particularly inclusive pedagogy. However, there is no consensus on what constitutes inclusive pedagogy in higher education (HE) or if inclusive pedagogy even exists in that space. Therefore, educational institution leaders need to re-conceptualize their thoughts on inclusive pedagogy. This paper reviews some of the existing literature applicable to inclusive education and inclusive pedagogy. It proposes inclusive pedagogy dimensions that instructors in HE need to consider to effectively implement inclusive pedagogy practice (IPP) in the classroom. It concludes with a conceptual framework for inclusive pedagogy in practice (IPIP) in HE and suggestions of how administrators, faculty members, and course designers can advance the IPIP framework across their campuses. Livingston-Galloway, M., & Robinson-Neal, A. (2021). Re-conceptualizing inclusive pedagogy in practice in higher education. Journal of the Scholarship of Teaching and Learning for Christians in Higher Education, 11(1), 29-63. https://doi.org/10.31380/sotlched.11.1.29
Informational (nonfiction), 132 words, Level F (Grade 1), Lexile 320L What do animals do at night? In Night Animals, students learn about animals that are active at night and how their senses help them find food in the dark. Students have the opportunity to identify the main idea and supporting details as well as to connect to prior knowledge. Detailed, supportive photographs, high-frequency words, and repetitive phrases support emergent readers. More Book Options Kurzweil 3000 Format Use of Kurzweil 3000® formatted books requires the purchase of Kurzweil 3000 software at www.kurzweiledu.com. Teach the Objectives Use the reading strategy of connecting to prior knowledge to understand text Main Idea and Details: Identify main idea and details Final Consonants: Discriminate final consonant /t/ sound Consonants: Identify final consonant Tt Grammar and Mechanics Verbs: Recognize and use verbs High-Frequency Words: Understand, use, and write the high-frequency word this Think, Collaborate, Discuss Promote higher-order thinking for small groups or whole class
In this challenge we will create a game for the BBC micro:bit. Imagine you have been asked to bring a cupcake to Her Majesty Queen Elizabeth II. You have picked the best cupcake from the kitchen and placed it at the centre of a silver tray. You will have to carry the cupcake on its tray to the Queen, walking through the many rooms and corridors of Buckingham Palace. Watch out: if you tilt the tray, you may end up dropping the Queen’s cupcake! For this challenge we will replace the silver tray with a BBC micro:bit. The cupcake will be a sprite positioned in the centre of the LED screen; the tray will be the grid of 5×5 LEDs on the micro:bit. The player will have to carry the micro:bit flat on the back of their hand and carry it around the room, making sure they keep it as flat as possible. Our program will use the built-in accelerometer input sensor of the micro:bit to find out if the micro:bit is leaning (forward, backward, to the left or to the right). If so, the sprite (cupcake) will slide in the corresponding direction (the LED light will move on the 5×5 grid). The game will end when/if the cupcake is at the edge of the 5×5 LED grid and the micro:bit is still tilted: the cupcake is falling off the grid/tray. Checking the code above, answer the following questions: - Can you identify the block of code used for the micro:bit to detect if it has been tilted to the left? - Can you identify the block of code used for the micro:bit to know when the cupcake is falling off the tray? - Can you explain how the micro:bit decides to stop the game by running the last block to display the Game Over message? - Can you explain the purpose of the variable called tolerance? - Why do we need a tolerance? - What would be the impact on the game if the tolerance were 500 instead of 200? (Would it be easier to play or harder? Why?) - What would be the impact on the game if the tolerance were 50 instead of 200? (Would it be easier to play or harder? Why?)
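To reason about the tolerance questions above, here is the tilt-and-slide logic sketched in plain Python. This is only a sketch: the actual challenge uses MakeCode blocks on the micro:bit; the `step` function and the sample accelerometer readings are illustrative, and only the 5×5 grid and the `tolerance` variable come from the game description.

```python
GRID_SIZE = 5  # the micro:bit's 5x5 LED grid

def step(x, y, accel_x, accel_y, tolerance=200):
    """Move the cupcake sprite one LED in the direction of tilt.

    accel_x / accel_y stand in for accelerometer readings in milli-g;
    a reading beyond +/- tolerance counts as a tilt in that direction.
    Returns (new_x, new_y, game_over).
    """
    if accel_x > tolerance:
        x += 1   # tilted right
    elif accel_x < -tolerance:
        x -= 1   # tilted left
    if accel_y > tolerance:
        y += 1   # tilted backward
    elif accel_y < -tolerance:
        y -= 1   # tilted forward
    # The cupcake falls off when it leaves the 5x5 grid
    game_over = not (0 <= x < GRID_SIZE and 0 <= y < GRID_SIZE)
    return x, y, game_over
```

Running this by hand shows why a larger tolerance makes the game easier: a reading of 300 moves the cupcake when the tolerance is 200, but is ignored when the tolerance is 500, so small wobbles of the player's hand do nothing.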
Extension Task 1 Add some more code to this game to record how long the player/waiter took to deliver the cupcake. The micro:bit should record the running time from the start (just after the “3-2-1-Go” message) and allow the user to stop the game when they press button A. The player will have to follow a set route (e.g. around the classroom) and if they manage to complete the route without dropping the cupcake, they should press button A to stop the counter, and the micro:bit should display their time. Extension Task 2 Tweak the code so that if a player has been playing for 10 seconds without dropping the cupcake, the tolerance then changes to make the game harder to play (e.g. the tolerance changes from 200 to 100)
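The logic behind both extension tasks can be sketched in plain Python before wiring it into MakeCode blocks. These helper names are ours, not part of the micro:bit API; on the real device you would read the running-time clock and a button A event instead.

```python
def delivery_time_s(start_ms, stop_ms):
    """Extension Task 1: seconds between the '3-2-1-Go' message
    (start_ms) and the player pressing button A (stop_ms)."""
    return (stop_ms - start_ms) / 1000

def current_tolerance(elapsed_s, easy=200, hard=100, threshold_s=10):
    """Extension Task 2: after threshold_s seconds without dropping
    the cupcake, switch to the harder (smaller) tolerance."""
    return hard if elapsed_s >= threshold_s else easy
```

For example, a run starting at 1000 ms and stopped at 13500 ms took 12.5 seconds, and since that is past the 10-second threshold the tolerance would already have dropped to 100.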
Habituation is a decrease in response to a stimulus after repeated presentations. For instance, a new sound in your environment, like a new ringtone, may initially draw your attention or even become distracting. Over time, as you become familiar with this sound, you pay less attention to it, and your response to the sound will diminish. This diminished response is habituation. To understand how habituation works, it is often helpful to look at a few different examples. This phenomenon plays a role in many different areas, from learning to perception. Habituation in Exposure Therapy: Exposure therapy uses habituation to help people overcome their fears. A person who is scared of the dark might begin by simply imagining being in a dark room. Once they have become habituated to this experience, they expose themselves to increasingly close approximations of the real source of their anxiety until they finally confront the fear itself. Eventually, the individual becomes so habituated to the stimulus that they no longer experience the fear response. What Are the Characteristics of Habituation? Habituation doesn’t always occur in the same way, and there are a variety of factors that influence how quickly you become habituated to a stimulus. Some of the key characteristics of habituation include: Change: Changing the stimulus’s intensity or duration may result in a recurrence of the original response. So if that banging noise grew louder over time or stopped abruptly, you would be more likely to notice it again. Duration: If the habituation stimulus isn’t presented for a long enough period before a sudden reintroduction, the response will once more reappear at full strength. So if that noisy neighbor’s loud banging (from the example above) were to stop and start, you would be less likely to become habituated to it. Frequency: The more frequently a stimulus is presented, the faster habituation will occur.
If you wear that very same perfume every day, you’re more likely to stop noticing it sooner each time. Intensity: Very intense stimuli tend to result in slower habituation. In some cases, with deafening noises such as a car alarm or a siren, habituation may never occur (a car alarm wouldn’t be very effective as an alert if people stopped noticing it after a couple of minutes, for example). Why Does Habituation Occur? Habituation is an example of non-associative learning; there is no reward or punishment associated with the stimulus. You are not experiencing pain or pleasure as a result of that neighbor’s banging noises. So why do we experience habituation? There are a few different theories that seek to explain why habituation occurs: The comparator theory of habituation suggests that our brain creates a model of the expected stimulus. With continued presentations, the stimulus is compared to the model and, if it matches, the response is inhibited. The dual-factor theory of habituation suggests that there are underlying neural processes that regulate responsiveness to different stimuli. Our brains decide that we do not need to worry about that banging noise because we have more pressing things on which to focus our attention.
Food For Thought Food disparity impacts thousands of families in Northern Virginia and across the country, leaving many children from diverse economic and ethnic backgrounds at a disadvantage in school. February 13, 2019 Each day, over 180,000 students make their morning trek to a Fairfax County public school for their seven-hour school day. For many families, their daily routine is marked by after-school practices, homework sessions, and carpool, but getting a decent meal is rarely a concern. For the almost 24,000 teenagers in Fairfax County who are food insecure, however, every day is a countdown to mealtime. What is food insecurity? According to the U.S. Department of Agriculture (USDA), the term food insecurity is defined as “a lack of consistent access to enough food for an active, healthy life.” A family or individual coping with food insecurity may have to reduce the quality of their meals, feed their children an unbalanced diet or skip meals completely so that their children may eat, according to the local nonprofit Food For Others Fairfax County. The organization works to minimize this disparity by helping families who are “unable to make ends meet and need to supplement their inadequate food supplies.” Food insecurity affects an estimated 40 million Americans across all United States communities, including those in Vienna and near Madison. There is no single, isolated cause; despite the relationship between food insecurity and income level, Feeding America notes that even individuals living above the poverty line can experience food insecurity. Housing costs, disabilities, social isolation and education level are all factors that can contribute to one’s access to adequate nutrition. “It cuts across every economic status,” Assistant Principal Liz Calvert said.
“Even if a family has not found access to food difficult, economic conditions that can change overnight, like the government shutdown, do impact families and their abilities to make ends meet.” An unbalanced diet can result in increased hospitalizations, iron deficiency and behavioral problems such as aggression, anxiety, depression and attention deficit disorder, according to the American Psychological Association. Additionally, while many experiencing food insecurity may appear underweight, food insecurity occurs disproportionately among families who present a high risk for obesity. “A lot of times, we can determine if a patient is food insecure by what we call their body habitus,” Vienna Family Medicine practitioner Dr. Sandra Tandeciarz said. “Physical elements that you see in the exam can give you an idea; patients who are either obese or too thin may be affected [by food insecurity].” The physical consequences of food insecurity can greatly affect education for children, who lack the nutrients necessary to sustain them throughout the school day. A proper diet plays an important role in cognition, and children from homes that do not have consistent access to food are more likely to receive lower test scores and repeat a grade level, according to Feeding America. Living with Food Insecurity In the 2017-2018 school year, 247 Madison students were on the free/reduced lunch program, but only 44.7% of Madison students are familiar with the term food insecurity. “[Madison] has a really diverse population, including economically,” Calvert said. “There are groups of kids [at Madison] who are on free or reduced lunch who don’t have access to food outside of school.” Calvert has seen how food insecurity negatively influences Madison students. “Lack of access to food outside of school affects attendance,” Calvert said. “It affects the ability to pay attention. 
In many cases, it affects a child’s overall success in school.” Kathy Coles, a current sixth grade teacher at Cunningham Park Elementary School in Vienna, has also observed the consequences of food insecurity within her own classroom, and estimates that about one-third of the students at Cunningham Park are affected by some level of food insecurity. “Food insecurity ends up showing up in all sorts of ways within the classroom,” Coles said. “Kids are tired, often because they have not eaten enough. They are just not quite feeling themselves. Sometimes they go down to the clinic, which means class time is lost. Because they’re hungry, they’re not listening; all of this impacts their education.” Many of the families experiencing food insecurity source the majority of their groceries from the dollar store, 7-Eleven or CVS and severely lack fresh fruits and vegetables, Coles also explained. Some of her students and their families do not even have consistent access to transportation to get to the store. Because of these factors, Coles is making a conscious effort to alleviate some of the effects of food insecurity on her students. In addition to providing snacks during the school day for those lacking adequate nutrition, Coles initiated a community garden at Cunningham Park about three years ago. What started as an experiment blossomed into a popular school and community interest after Coles showed students vegetables from the garden. “The kids were fascinated by the fact that I was bringing them fresh vegetables,” Coles said. “We have a gardening committee, and kids and families are always welcome to come. They’re in there digging in that dirt and watching things grow. They’re excited about it and then they try it.” The essential components of an elementary-school-aged child’s diet are protein, fruits, vegetables, grains and essential minerals. Protein is critical for the building and repairing of tissues and bones. Grains and other carbohydrates provide energy.
“Dairy also provides protein, and fruits and vegetables add vitamins and minerals,” Dr. Tandeciarz said. Families in a position of food insecurity may struggle to include essential dietary elements in their meals. In the Northern Virginia area, fruits and vegetables are most commonly omitted from the diets of children lacking access to adequate nutrition because of the higher costs of fresh produce, Tandeciarz explained. On average, healthier and perishable foods, such as fresh fruits and vegetables, cost nearly twice as much per serving when compared to unhealthy packaged foods, according to a study by Drexel University. “There are a lot of minerals and vitamins that you get specifically through fruits and vegetables that you won’t necessarily get through other foods unless they’re supplemented,” Tandeciarz said. “Kids basically end up having too much of one thing and not a balance. And when they have too much of one thing, it tends to be foods that have a lot of fat, a lot of salt and not as many vitamins and minerals. So you’re really overcompensating on one side and not providing a balance of what you need.” The Supplemental Nutrition Assistance Program (SNAP) is the current government-run nutrition assistance program, helping low-income families and individuals meet their food-related needs. SNAP states that for each individual on the program, the average monthly benefit is $125.07. Yet the New Food Economy Organization estimates that it costs $143 per week to feed the average teenager. This leaves a tremendous hunger gap that needs to be addressed. Although this issue may be global, change can begin on a local level. In addition to redistributing usable food from Vienna grocery stores and food establishments to those in need, Food For Others organizes food drives to collect non-perishable foods for donation. Efforts to address food insecurity in the community can also be seen right here at Madison. 
Assistant Principal Calvert founded a food pantry program at Madison after seeing how a lack of regular access to food reduces student performance and success at school. “There were several other high schools in the country which had started food pantries previously,” Calvert said. “We [Calvert, PTSA President and parent volunteer] went over to Oakton High School to find out how they did outreach, identified kids, received donations for their food pantry. They had put together a handbook that we adapted to provide nutritional support for our kids.” This article was originally published in the Jan. 30 edition of The Hawk Talk.
Hazard Mapping is a process that Head Start programs can use after an injury occurs. It helps to: 1) identify location(s) for high risk of injury; 2) pinpoint systems and services that need to be strengthened; 3) develop a corrective action plan; and 4) incorporate safety and injury prevention into ongoing-monitoring activities. Hazard mapping is employed effectively in emergency preparedness planning related to natural disasters. It also is used to isolate locations of disease outbreaks and determine where prevention efforts are most needed. See PDF version: Hazard Mapping for Early Care and Education Programs Goals and Benefits of Hazard Mapping Hazard mapping provides: - An easy method for ongoing, systematic data collection and analysis about where injuries occur in Head Start programs - A way to identify the “how,” “what,” “when,” “who,” etc. by building on injury and incident reports - A strategic approach to safety and injury prevention problems by studying patterns of injury rather than isolated incidents - Compelling visual data for decision makers, staff, and families to make informed decisions about solutions Instructions for Hazard Mapping Step One—Identify high risk injury locations - Create a map of the home, classroom, center, family child care home, Head Start bus or playground area. Label the various places and/or equipment in the location(s) that is being mapped. Make the map as accurate as possible. - Have staff, administrators, or anyone who observed the incident place a “dot” or “marker” on the map to indicate where the specific incident and/or injury occurred. - Depending on the size of the program and number of injuries reported, use data from injury/incident reports for the past three to six months. Add more “dots” or “markers” to identify additional locations where injuries occurred. - Establish a safety and injury prevention committee to review and analyze incident data. 
The committee may include administrators, staff, Head Start parents/families and community partners. Programs may use their Health Services Advisory Committee or some of its members as their Safety and Injury Prevention Committee. - Analyze and chart the findings. To do this, count the number of incidents in each location. - Count how many of the incidents resulted in an injury and the level of severity of each injury. Use incident and/or injury reports to collect this additional data. - Determine where most incidents occur and where to focus initial efforts for a corrective action plan. Step Two—Pinpoint systems and services that need to be strengthened - To identify and understand patterns of injuries at locations throughout the program, review additional information from injury and/or incident reports. - Who was involved in each injury? (child/children; staff, volunteers, parents) - Where did the injury occur? - What happened? (What was the cause?) - What was the severity of each injury? - When did each injury occur? - Who – e.g., what staff were present and where were they at the time of each injury? - How could each injury have been prevented? - Using your program plan, determine areas where systems and services affect these findings. - Translate these findings into recommendations that strengthen systems and services. Step Three—Develop a Corrective Action Plan - Review all of the findings and recommendations regarding injuries and incidents. - Prioritize and select specific activities/strategies to resolve problem areas. These should focus on the everyday service delivery level and the higher systemic level. - Develop an action plan to correct the problem areas you identified. Include each of the activities/strategies selected in this corrective action plan. Identify the steps, the individuals responsible, and the dates for completion.
- Create a plan for sharing the corrective action plan with management, staff, and families to get buy-in for injury and/or incident responses. Step Four—Incorporate Hazard Mapping in Ongoing-Monitoring Activities - Based on an analysis of these data, determine what action(s) needs to be taken to avoid future injuries in the location(s) identified. Determine if any additional questions should be added to injury/incident report forms to obtain this missing information. - When developing corrective action plans, consider prioritizing more serious injuries, even if they have occurred less often. - A reduction in injuries and/or incidents happens over time if the correct set of interventions is selected based on analysis of the data about patterns of injuries. - Continuously review incident and/or injury data to make sure that interventions are reducing the number of incidents and the severity of injuries. They may include: - Educational opportunities about safety and injury prevention for staff - Environmental modifications - Procedures to monitor compliance with program policies, and/or - Other necessary corrective actions. - Discuss how to share injury data from ongoing monitoring activities and the self-assessment process with staff, families, the Health Services Advisory Committee, and Governing Board and Policy Council. - How will managers share the results of hazard mapping activities with all staff to advise them of risks or hazards that may exist at their center or location? - How will managers share the hazard mapping and incident and/or injury report results with the program’s Health Services Advisory Committee (HSAC) (when it is not the same as the Safety and Injury Prevention Committee) to problem-solve the issues that are identified?
- How will managers use hazard mapping as part of ongoing-monitoring activities to (1) develop and maintain corrective action plans, (2) assure continuous program improvement, and (3) reduce the incidence of future injuries to enrolled children? Resources to Learn More National Council for Occupational Safety and Health. (2012). “Mapping” Health and Safety Problems. Los Angeles, CA: National Council for Occupational Safety and Health. Retrieved August 13, 2012 from: https://www.coshnetwork.org/sites/default/files/Mapping%20NLC.pdf Injury Prevention Program Division. (2012). UCLA Injury and Illness Prevention Program (IIPP). Los Angeles, CA: University of California, Los Angeles. Retrieved August 13, 2012 from: https://ora.research.ucla.edu/OBFS/Documents/VC_Research_IIPP.pdf National Centers: Health, Behavioral Health, and Safety Last Updated: March 29, 2021
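The tallying in Step One (count incidents per location, count resulting injuries, then pick where to focus first) can be sketched in a few lines of Python. This is only an illustration: the field names and sample records are ours, and a real program would pull this data from its own injury/incident reports.

```python
from collections import Counter

# Hypothetical incident records, as they might be transcribed
# from a program's injury/incident reports.
incidents = [
    {"location": "playground slide", "injury": True,  "severity": "minor"},
    {"location": "playground slide", "injury": True,  "severity": "moderate"},
    {"location": "classroom sink",   "injury": False, "severity": None},
    {"location": "bus steps",        "injury": True,  "severity": "minor"},
]

# Count incidents per location (the "dots" on the hazard map)
by_location = Counter(r["location"] for r in incidents)

# Count how many incidents resulted in an injury
injuries = sum(1 for r in incidents if r["injury"])

# The location with the most dots is where to focus the
# initial corrective action plan
hotspot, count = by_location.most_common(1)[0]
```

With the sample data above, the playground slide collects the most dots, so under Step One it would be the first target of the corrective action plan.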
Language and Math Model Reality A phrase is one or more words. A term is a distinct concept, represented by a phrase. A phrase may be used to refer to more than one term. Which term is intended in a particular case must be inferred. A standard definition is a description of the terms a phrase conventionally refers to. A local definition is a description of the term a phrase refers to within a well-defined context. E.g., legal contracts and technical standards use local definitions that only apply within themselves. Usually there are alternate descriptions which could be used to define a term. To aid memory, local definitions are often assigned to phrases with related standard definitions. E.g., my local definitions of the words “local definition” and “standard definition” are related to definitions more broadly and to the software programmer’s use of locally defined entities as well as standard entities provided and used by a larger group. Such documents often stylize the words that refer to these locally defined terms. E.g., they bold or italicize them. This practice prevents ambiguity if words may refer to either local or standard definitions, but I think it also muddles the documents. Thus, I only italicize locally defined terms sparingly after their first use. When I want to refer to words, as opposed to terms, I place the words in quotes. Standard definitions change, usually slowly. Local definitions, if provided as a precaution due to this change, may even be identical to the standard definition used at a point in time. Several standard definitions may be attached to a term simultaneously, by different groups of people. The groups can be separated geographically. E.g., American English differs subtly from British English. They can also be separated in time. E.g., a younger generation using a term differently than their parents.
While a standard definition can be thought of as a local definition whose context is a particular group of people, I will not use the term in this way. An individual may be aware of multiple standard definitions, and may use them appropriately within context. A local definition cannot be wrong. It can only be useless. A standard definition can be wrong if it is not what is conventionally meant by a group when they use a term. Authors don’t always make their local definitions explicit. Sometimes authors want their local definition to become the standard definition. Since standard definitions change, reading older documents can be difficult. E.g., Shakespeare’s plays are difficult to read because they use old unknown words and because they use words whose meaning has changed. To interpret documents like their original readers we must share their standard definitions. This can be a legal concern. E.g., when interpreting the original intent of the U.S. Constitution. It can also be a religious concern. E.g., when interpreting the original intent of the words of inspired religious figures. Language and mathematics let us describe things. Any mathematical proposition could be restated with words. A model is a simplified description of what exists. Most models are incomplete in the sense that there exist verifiable questions that could be asked of the thing described that can’t be answered by the model. Most models are also approximate; there exist verifiable questions that could be asked of the thing described that would be answered incorrectly by the model. There is a tradeoff between how easy a description is to use and how complex it is. Two models are equivalent if they allow one to answer the same set of questions about the thing and they give the same answers. A model is more detailed than another if it can answer everything the other model can answer, as well as additional questions.
There can be many models of the same thing, each answering questions about a different facet of the thing. There can conceivably be complete models. A complete model is sufficient to answer any verifiable question about the thing. There may be non-verifiable metaphysical questions about a thing. Such questions aren’t necessarily invalid. E.g., why are the laws of physics the way they are? What do we mean by a thing? Even the demarcation of the thing is a model, and often an approximate one. A simple solution to this is to say the entire universe is one thing—one big quantum wave function. I believe this is one interpretation of quantum physics. How would we know if a model is complete? Typically, you don’t. The laws of physics aim to be a complete model of how the universe’s state changes. It is not a complete model of the universe, which would require knowledge of its initial state too. There isn’t enough space in the universe to create a complete model of itself. We don’t know if the laws of physics completely describe how the universe’s state changes. The laws could be different in other parts of the universe, at different times, or even in different local or global states. E.g., the gravitational constant could shift over time. If matter is arranged a certain way, say into a brain-like structure, the laws of physics could change within the brain. More fantastical examples are also possible; it could be the case that if a golden box with a particular shape were created then the laws of physics would change throughout the universe. A complete model of a thing is not the thing. Terms are Models Terms are models. The statement, “there are two yellow pillows on my couch,” is a model. Most of our terms are only good enough for the standard ways they are used in our language. Most standard uses of language don’t require particularly complex models. Specialists tend to need more granular models.
Thus, they produce local definitions within their fields to supplement the library of standard terms. E.g., lawyers, doctors, engineers, and philosophers have their own lingo. Children learn their first terms using examples. Adults often learn new terms using other terms they are already familiar with, but not always. E.g., your friend may hold up a new fruit at the grocery and tell you its name. Many abstract terms must be learned using existing terms. E.g., could democracy be taught using sensory examples? Since terms are defined with other terms, their definitions tend to be circular. A term can be more or less understood. This can break the circularity.
Plague (Yersinia Pestis) What Is It? Plague is caused by Yersinia pestis bacteria. It can be a life-threatening infection if not treated promptly. Plague has caused several major epidemics in Europe and Asia over the last 2,000 years. Plague has most famously been called "the Black Death" because it can cause skin sores that form black scabs. A plague epidemic in the 14th century killed more than one-third of the population of Europe within a few years. In some cities, up to 75% of the population died within days, with fever and swollen skin sores. Worldwide, up to 3,000 cases of plague are reported to the World Health Organization (WHO) each year, mostly in Africa, Asia and South America.
Sent in by: Sara of Schenectady, NY and Uriah of Reading, OH Fingers can feel a heartbeat; but to SEE it, use a straw! - paper and pens for charting - watch or timer that measures seconds - Seeing your heartbeat makes it easier to measure. - Draw a chart to record your pulse rates. Write your name (and your friends' names) across the top in separate columns. Write "standing" in the top row on the left side. - To make a drinking straw pulse measurer, you first need to find your pulse with your fingers. Put two fingers on the side of your neck, near the front, and move them around until you can feel something thumping under your skin. That's your pulse. What you're feeling is your blood being pumped around your body by your heart. - Put a piece of clay over your neck where your pulse feels the strongest. - Stick a straw into the clay so that it's sticking straight out from your neck. You might need a friend's help for this part. - To get your pulse rate, count how many times the straw moves in one minute. To save time, you can also count the number of times the straw moves in 15 seconds and then multiply that by four. You can also find a pulse on your arm, temples, and even your ankle. Try it! - Write your pulse rate under your name on the chart. Is your rate faster or slower than your friends' rates? - Do you think your pulse rate is always the same? Could you do something to change it? Come up with different activities that you think might change your pulse rate. Write them in separate columns along the left side of your chart. - First try the activities you think will slow your pulse rate down. Then try the ones you think will speed it up. Calculate your pulse rate for each activity and write it under your name in the row for that activity. - Is there a difference between your pulse rates for each activity? Were you right about which activities speed up and slow down your pulse? Why do you think there is or isn't a difference between pulse rates?
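The time-saving shortcut in the steps above (count the straw's moves for 15 seconds, then multiply by four) is just a unit conversion to beats per minute. Here is a tiny Python version of that conversion; the function name is ours, not part of the activity.

```python
def pulse_rate_bpm(beats, seconds):
    """Convert a beat count over `seconds` seconds to beats per minute.

    Counting for 15 seconds and multiplying by 4 is the special
    case seconds=15, since 60 / 15 = 4.
    """
    return beats * (60 / seconds)
```

So a count of 18 moves in 15 seconds gives 18 × 4 = 72 beats per minute, the same answer you would get by counting for the full minute.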
Here's the sci scoop on why different activities speed up or slow down your pulse rate: When your heart beats, it pumps blood to all the different parts of your body. The blood brings fresh oxygen to your muscles, which your muscles need for energy. Blood also takes away waste. When your muscles are working hard, they need more oxygen for energy, so your heart speeds up so it can pump more blood. When your body is relaxing, your muscles need less oxygen for energy so your heart doesn't need to pump as quickly. What else could you do to change your pulse rate? Would standing on your head or taking a cold shower make a difference? Test your ideas, and send your results to ZOOM. Alina, age 11 of New York, NY wrote: Unfortunately this experiment didn't work for me. The straw didn't move. Tori, age 15 of Jessup, PA wrote: My grade is holding a health fair for the 5th grade at my school and we are using this experiment to show them their pulse. I will send in the results Malique, age 10 of Fall River, MA wrote: I did it the same way as you did it, but it was'nt hard, and I did'nt have a stopwatch. Brooke, age 12 of Fitchburg, MA wrote: I used play-doh instead of clay. The straw didn't stay in right, so it didn't work. I didn't have any clay! Eric of Perth Andover, NT wrote: My pulse was 47 then when I ran it was 135 I was surpised. Ian, age 13 of New Brunswick, NB wrote: My pulse is 80 then when I ran it was 169. Samantha, age 16 of Indianapolis, IN wrote: First my pulse was 52. l ran 1 mile then my pulse was 234. Rebekah, age 11 of MI wrote: When I took my normal pulse it was 48. After runing for a min my pulse was 120. And when I relaxed my pulse was higher 56. Jeremy & Kayla, age 11 of Montreal wrote: Me:Normal:(15 secs)12x4=48 Relaxed:9x4=36 After exercising:25x4=100 MySister:Normal:13x4=52 Relaxed:11x4=44 After excersising:22x4=88 Slater, age 5 of Hutchinson, KS wrote: My mother is a fitness trainer and thought it very interesting. 
She had a good laugh watching me run around the house and then watching the straw bounce up and down on my neck.
Caroline, age 10 of Mississauga wrote: My pulse was 122.
Tiffany, age 9 of San Jose, CA wrote: I ran around the room for one minute. I checked my pulse and it was thirty-three, but when I multiplied it by four it was 102.
Claire, age 8 of Lake Oswego, OR wrote: On the show you tried changing your pulse by running and relaxing. Why don't you try changing it by cooling off or heating up your body?
Alonso, age 14 of MI wrote: I noticed my heart was beating really hard and would move the straw farther every now and then. At rest my heartbeat was 54, but after running up and down our driveway barefoot for 5 min I could make my heartbeat go up to 187! I have a stethoscope too. Me and my friends used it in the experiment and found that I have a heart murmur that sounds like a washing machine between beats. And my friend's heart would beat really fast, slow down, and beat fast again while at rest. The human heart is an amazing organ, isn't it?
Haley, age 10 of Sherwood, AR wrote: My pulse was 122 when I sat and relaxed and 198 when I ran. It was sooo cool when I saw that straw going up and down.
Ali, age 12 of Freeport, ME wrote: When I first tried it my regular heartbeat was 60. I had measured my fear heartbeat and it was 180 times!!! That was so sweet! My friends think I'm nuts, but it is so cool!!!
Scotty, age 6 of Needham, MA wrote: First I jogged for 1 minute and my pulse was 132. Then I rested for 1 minute and my pulse was 112.
Ahmaad, age 8 of Smyrna, GA wrote: You know about your drinking straw pulse measurer? I tried drinking cold water to make it go slow, then hot water to make it go fast. Why don't you try it?
Abby, age 9 of Geneseo, IL wrote: I know a different way you can do it. Use 5 min. times 10. Try doing it after you run, then write it down. Rest, then try again. See how much you raised when you ran.
Angela, age 11 of Chicago, IL wrote: My heart beat was 73 beats per minute.
Morgan, age 11 of Rocky Mount, NC wrote: I placed it on my neck. My friend counted how many times the straw moved in like 10 sec. It went 9 times, which means my prediction came true!!
Kayla, age 14 of Jackson, MS wrote: I tried running around and after that I put the clay and straw on me and it kept coming off, but I got used to it and I finally did it right.
Christina, age 10 of Cambridge wrote: When I did it, it didn't work as well as I thought it would because whenever I put the straw on my pulse it kept sliding off of my neck. The same thing happened when I tried it on the other side too. What can I do?
Tara, age 16 of Concord, NC wrote: I had my friend count how many times the straw moved before and after watching a scary movie. It was a 26 beat difference.
Emily, age 11 of Carlisle, MA wrote: When I was angry my pulse quickened.
Torrie, age 12 of Paynesville, MN wrote: That was really cool. My bpm was 87 when I sat down. My bpm was 134 when I ran, and when I slept it was 72. This was really fun.
Maria, age 12 of St-Laurent, QC wrote: When I ran, my pulse went up to 214!!!
Brina, age 10 of Huber Heights, OH wrote: It moved!!! Me and my brother did it. When I did it standing it was 69; when I did it running it was 101!!! For my brother, when he stood it was 69 and when he ran it was 77!!! This is incredible!!!
Amber, age 8 wrote: At rest it was 104. Then I ran and it was 144. After that I did 10 jumping jacks and it was 152. Then I ran in place and it was 160. Then I rested for 5 minutes and it was 116.
3D printing is an additive manufacturing technique. This contrasts with subtractive manufacturing, which involves removing material from a solid block using machining tools. Subtractive techniques limit the creativity of 3D designers, as it is not easy to reproduce complex original designs by cutting material away. Additive manufacturing instead creates a 3D model by layering materials that fuse to form the end product. It is widely appreciated by 3D designers because it can print even the most complex forms. 3D printing technology was originally invented by Charles "Chuck" Hull in the US in 1983, and his company, 3D Systems, sold its first machine a few years later. Since then, many other technologies have been developed, both in plastics and in metals. Additive manufacturing has been widely adopted across industries to make prototypes and functional parts, as it can save considerable time and money. One of the most common technologies is FDM (Fused Deposition Modelling). This filament-based technology is very popular with the general public and with educators as it is relatively affordable, but printing times are lengthy and the resolution is rather rough. Ideal for professionals and industry, SLA (stereolithography) is based on the principle of photopolymerization: liquid resin is cured layer by layer by a light source, typically an ultraviolet laser. It delivers high surface quality and very good value for money. Still expensive but useful for many industrial applications is SLS (Selective Laser Sintering), a technique that binds powdered material in layers using a laser beam. SLM (Selective Laser Melting) is a very similar additive manufacturing technique used on metals. Finally, used in the aeronautical and military markets, EBAM (Electron Beam Additive Manufacturing) is a technique whereby strategic metal wire (alloys, titanium, tungsten) is melted layer by layer by an electron beam in a high vacuum.
Above all, it is used to produce large parts (several metres in size). In recent years, 3D printing has met with tremendous success thanks to its many benefits. In most cases, it reduces production costs and lead times. The technique can create objects of almost any shape, with far fewer technical constraints than machining. 3D printing usually uses plastics, but metal and other materials (wood, food, organic matter, etc.) are also being tested. 3D printing is a key process for competitive companies because the technology enables them to:
You were 18 years old and writing your first-ever final exam at university. You had three sharpened yellow pencils, one blue ballpoint pen, your student ID, a reusable metal water bottle, and an apple to keep your energy levels up. You were so nervous that your sweaty palms smeared the ink as you answered the questions on the test paper. You couldn't wait to hand in your paper, walk out of the exam room, and forget about that 3-hour ordeal. In anticipation, you glanced at the loudly ticking clock every 20 minutes, and every time you did, time seemed to be moving more slowly than before. Even now, years after walking out of that exam room alive, you remember every detail of the experience. But you don't remember the information you spent all semester learning, the information that actually appeared on that final exam. Both kinds of memories described above are consciously remembered, declarative memories, and these types of memories are naturally prone to forgetting. Why, then, do we remember information about our experiences differently from, and often better than, information we actively tried to remember? It is because our declarative memory is divided into two types: episodic and semantic memory. Episodic memories store information from specific personal experiences, while semantic memories store knowledge and facts we have learned. Together, they allow us to apply newly learned knowledge, combined with experiential knowledge, to new situations. Regardless of type, memory naturally degrades over time. Usually, we forget semantic memories much more quickly than we do episodic memories, but we forget nonetheless. While forgetting cannot be entirely avoided, it can be slowed by improving memory encoding techniques. While repetition can indeed "refresh" our memory, memories, especially of semantic information, are stronger when processed more elaborately.
This levels-of-processing effect essentially means that information learned multidimensionally creates stronger memories that take longer to fade. This explains why we may remember something like writing our first-ever final exam (an episodic memory) better than the information we were required to study for that exam (a semantic memory): our episodic memories are often processed with an emotional dimension, such as anxiety or excitement, which helps deepen the level of processing. Emotion is also one of the strongest triggers for memory. Now, this doesn't mean you have to squeeze out tears of joy or sadness to avoid forgetting something. Instead, making yourself work a little harder to learn or remember something can get the job done. For example, engaging multiple senses with images and audio, mapping conceptual connections between facts, telling a story about an idea, and practicing problem-solving rather than reviewing flashcards can all reinforce memory more strongly. Whether you are learning independently or creating learning materials for an audience, creating opportunities for deep, multidimensional processing can improve long-term retention.
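The levels-of-processing idea can be caricatured with a toy forgetting curve. This is only an illustrative sketch: the exponential form echoes the classic Ebbinghaus curve, but the "stability" values below are invented for the example, not fitted to any data.

```python
import math

def retention(t_days, stability):
    """Toy forgetting curve: fraction of material still retained after
    t_days. Higher 'stability' stands in for deeper, more
    multidimensional encoding (emotion, multiple senses, stories)."""
    return math.exp(-t_days / stability)

# One month later: a shallowly crammed fact vs. a vivid, emotionally
# charged episode (stability values are assumed, for illustration only).
shallow = retention(30, stability=5)
deep = retention(30, stability=60)
```

With these made-up parameters, the shallowly encoded material is almost entirely gone after a month while most of the deeply encoded material survives, which is the qualitative pattern the passage describes.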
Seadrome was a proposal by Edward R. Armstrong, promoted from 1927 through 1946, to build a series of floating airports in the Atlantic to enable trans-ocean passenger flights between the U.S. and Europe before long-distance, unrefueled flight was possible. Aircraft would land, refuel, and fly on to the next Seadrome in a series of hops. Two things stopped it from happening: the Great Depression and improvements in aircraft range. Nonetheless, the Seadrome design appears to be very interesting and thoughtful. It was designed to be maximally stable in open ocean waves.
- Overall specifications (length, width and height are about the same as a modern aircraft carrier, plus 10% or so):
- Length: 1200 feet
- Width: 400 feet at center, 200 at the ends (i.e., narrower at fore and aft decks, much like an aircraft carrier). We could build to different shapes.
- Draft: variable, from 50 feet with ballast/heave plates retracted into the vertical float columns, to a 160-foot draft with ballast/heave plates fully deployed
- Air gap: 70 feet (or possibly a 70-foot deck height, but images look like the deck may be 20 to 30 feet above that)
- Displacement: 64,000 tons fully deployed (compared to the 100,000-ton displacement of a Nimitz-class aircraft carrier)
Note that there were at least two slightly different designs for Seadrome and the specifications may vary between them. Seadrome has trussed upper and lower decks, like a large double-deck bridge. The upper flight deck is a flat, open aircraft runway except for a hotel and control tower. The deck shape is very similar to an aircraft carrier in plan (top-down) view. Like an aircraft carrier, Seadrome was to have an aircraft elevator to a hangar deck below the flight deck. The lower deck also had hotel space, lifeboats, living quarters, generators, machinery, etc. Flotation is by about 30 large vertical floats, 15 feet in diameter, running from the deck to some feet below the nominal waterline.
Some feet under water, the columns widen outwards to form buoyancy tanks that are 30 feet wide and contain air, fuel and water tanks. Buoyancy and levelling were to be trimmed (adjusted) by pumping the contents of those tanks between columns. Below those tanks, an even narrower column extends further down, to about 160 feet below the waterline, ending in iron ballast. The ballast was shaped as a simple cylinder in early versions, and as a wide, inverted mushroom in later versions. The mushroom shape formed a heave plate about 40 feet in diameter. The heave plate was pointed at the bottom, to allow the column to fall more easily than rise, and flat on top, to resist rising, for example due to a wave reaching the column. The lower columns were designed to retract into the upper columns to reduce the draft to about 50 feet when maneuvering in shallower waters; they would be lowered again in deeper waters. So the draft is variable depending on needs. The structure may be less stable with the ballast retracted, but winds and waters may be calmer closer to shore, and deployment from shore could be scheduled for more stable weather than might eventually be encountered when stationed in the open ocean. Since the buoyancy tanks and ballast/heave plates are inline in the same column, forces from the rising and falling wave water surrounding the column are mostly handled within each relatively strong column shape. Most of the forces may be handled locally within a given column and not transmitted to the deck or larger structure. Based on our model testing, it would seem that the heave plate may have a very significant effect on damping wave motion. Where a plain column would tend to bob up and down in heave after a vertical displacement, the Seadrome column settled immediately in something that may approach critical damping. Measurements would be useful to confirm this.
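The near-critical damping described above can be sketched with a one-degree-of-freedom free-decay model of a column in heave. The damping ratios below are illustrative guesses, not measured Seadrome values: a bare column has little hydrodynamic damping, while the flat heave plate adds large drag and pushes the damping ratio toward 1.

```python
def simulate_overshoot(zeta, omega=1.0, z0=1.0, dt=1e-3, t_end=30.0):
    """Free decay of z'' + 2*zeta*omega*z' + omega^2 * z = 0, released
    from rest at displacement z0 (semi-implicit Euler integration).
    Returns the largest rebound |z| after the column first passes
    equilibrium -- a simple measure of how much it 'bobs'."""
    z, v = z0, 0.0
    crossed = False
    overshoot = 0.0
    t = 0.0
    while t < t_end:
        a = -2.0 * zeta * omega * v - omega**2 * z
        v += a * dt
        z_new = z + v * dt
        if not crossed and z > 0.0 >= z_new:
            crossed = True          # first pass through equilibrium
        if crossed:
            overshoot = max(overshoot, abs(z_new))
        z = z_new
        t += dt
    return overshoot

# Assumed damping ratios for illustration only:
plain = simulate_overshoot(zeta=0.05)   # bare column: rebounds strongly
plate = simulate_overshoot(zeta=0.95)   # with heave plate: settles at once
```

With these assumed values the bare column rebounds to most of its initial displacement, while the heavily damped column's rebound is negligible, matching the qualitative behavior reported from the model test.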
The large underwater buoyancy tank raises the center of buoyancy, since it sits near the surface. The deep ballast plates lower the center of gravity. Both of these features increase hydrodynamic and hydrostatic stability, especially taken together. Because the buoyancy tank is underwater, it is less affected by waves, which mostly pass over it and interact with the thinner float column above. The thinner column offers less area for wave interaction. Because the heave plate is very deep underwater, it probably operates in relatively stable water. Both features reduce the response of each individual column, and thus the overall structure, to waves. The columns are doubly trussed together horizontally, both above and below the water. There are also diagonal cable stays between the trussed columns, so translational forces are shared between the columns and the overall structure via the trussing and cable stays. This makes the whole stronger than the sum of its parts. All of these design features together probably result in a structure that's strong, relatively light, and could offer a very favorable wave response. Model testing and/or software simulation should be done to confirm the design features and refine the design parameters. Seadrome is an old design, but a very thoughtful and interesting one that could form a successful basis for seasteads.
- Safety - should be relatively stable in heave, pitch, and roll, with minimal wave response in all directions. The 70-foot air gap is nearly double ClubStead's, but possibly still too low for rogue waves. Decks could be made higher, for example by scaling up the entire design.
- Comfort - could be very stable in waves. The large horizontal deck area may feel like land or a large city block.
- Cost - ten million dollars per Seadrome was quoted in the 1930s. Call it half a billion dollars or more now.
- Pretty - industrial looking, kind of like a giant pier floating in the ocean, a stationary aircraft carrier, or a really wide oil platform.
- Modular - probably not modular. It may be possible to raft Seadromes together or join them with bridges, which may require precise station keeping; that should be possible with electric thrusters, accelerometers, computer control, etc. If joined or bridged, dispersal for storms should be quite viable.
- Cargo - designed for housing people and storing aircraft, fuel, water, food, and supplies.
- Free Floating - yes.
- Scalable - smaller versions can be built, but for the best response to full-size ocean waves, it needs to be built at full scale. Subsections can also be built; originally, a middle third of Seadrome was meant to be built as a demonstration prototype. A scaled-down third of Seadrome may make a good Baystead (seastead prototype model).
- Standards - unprecedented design overall; however, steel construction techniques and standards could be very similar to ClubStead, semisubmersible oil platforms, large bridge trusses, etc.
- Mobile - designed to be slowly mobile, possibly self-deploying. Meant to be anchored, pointing the runway into the prevailing wind and swinging around a massive anchor. Seasteads may not need an anchor.
- Draft - variable draft. Ballast/heave plates retract up into the vertical columns (nesting mostly inside them) for operation in shallower waters, such as when launching from construction near shore, reducing the draft to about 50 feet. Fully deployed draft is 160 feet.
Jeff Chan and friends built a 1/100 scale model of a foredeck at Ephemerisle 2009, but the materials were too heavy for its scale. Lightening efforts are underway, but a model with lighter materials would be a better test. The heave response of a single model column was excellent, settling immediately, possibly near critical damping; measurements, for example by digitizing video of the model moving, would be useful to confirm this.
- Spar buoys, ClubStead, semi-submersible oil platforms, and Seadrome
- The 2009 Seasteading Conference untalk by Jeff Chan has images, etc.
- The Wikipedia entry on Edward R. Armstrong includes descriptions, links, historical media coverage, etc.
With schools across the country reevaluating literacy practices in light of the best available evidence, educators are taking a close look at the use of an uninterrupted, 90-minute literacy block for early reading instruction. This brief provides educators with the background, rationale, and evidence for implementing an uninterrupted, 90-minute literacy block as part of a comprehensive approach to teaching reading in elementary schools. As of 2017, the literature continues to suggest that an uninterrupted block of at least 90 minutes is an effective practice for early literacy instruction, even though the practice falls at the "demonstrates a rationale" evidence level of ESSA. The available research includes:
- The pedagogy of literacy strategies that engage students
- The descriptive and causal implementation research on school effectiveness in the 1990s
- Studies about the importance of time use and allocation, balanced against the reality of misperceptions among teachers about how best to allocate time for literacy instruction
- Studies about the negative effects of interruptions on student outcomes
- The Reading First evaluations that report schools' perceived successes with the strategy
Transmissible spongiform encephalopathies (TSEs) are chronic degenerative diseases that affect the central nervous system of the infected animal. They are known to occur in cattle (bovine spongiform encephalopathy or BSE), sheep and goats (scrapie), deer and elk (chronic wasting disease), mink (transmissible mink encephalopathy) and domestic cats (feline spongiform encephalopathy). Cattle suffering from BSE are often irritable and can react violently to stimuli that normally do not affect healthy animals. As the disease progresses, animals become dizzy and disoriented and eventually lose the ability to walk. This erratic behavior is responsible for the disease being commonly referred to as "mad cow disease." Unlike foot-and-mouth disease (which spreads rapidly from animal to animal and from herd to herd), there is no evidence that BSE is contagious or spreads by contact between cattle, or by contact between cattle and other species. BSE is not believed to be caused by a bacterium, virus, parasite, fungus, toxin, or chemical. Currently, the most accepted theory is that BSE is caused by a modified form of a normal nerve-cell surface component known as a "prion protein." If eaten, these modified prion proteins can accumulate in the brain and other tissues, including the spinal cord, subsequently causing normal prion proteins to change to the modified form. These modified proteins continue to accumulate in the brain to the point where they damage brain cells, eventually leading to neurological disease and death. Why or how this substance changes to become disease-producing is still unknown. Whether normal or abnormal, prion proteins are found primarily in neurological tissue, including the brain and spinal cord. Thus, the disease spreads when a susceptible animal eats the brain, spinal cord or nervous tissue of an infected cow where the abnormal prion protein has accumulated, or when a susceptible animal eats proteins rendered from these tissues.
BSE was first diagnosed in the United Kingdom in 1986, and was likely caused when cattle were fed rendered protein that contained prions from the carcasses of scrapie-infected sheep or of cattle with a previously unidentified transmissible spongiform encephalopathy (TSE). The practice of using products such as meat-and-bone meal as a source of protein in cattle rations had been common for several decades. Restrictions on ruminant protein in feed for ruminant animals were first imposed in England in 1988. In August 1997, the US Food and Drug Administration (FDA) established regulations that prohibit the feeding of most mammalian proteins to ruminants. Similar restrictions were put in place in Canada at the same time. For more information, visit US FDA Federal Initiatives. It is believed that the BSE-infected cow in Washington state was exposed to feed containing the prion proteins before the feed bans went into place in North America.
Land-Grant University Web Pages
- Iowa State University Extension — BSE Information Sources
- NDSU Extension Service — BSE Frequently Asked Questions
Federal Agency and Association BSE Websites
- Food and Drug Administration (FDA)
- Centers for Disease Control (CDC) — BSE and Creutzfeldt-Jakob Disease (vCJD)
International Agency BSE Web Pages
- BSE – British Department for Environment, Food and Rural Affairs (DEFRA)
- BSE Statistics from Britain
- BSE – Canadian Food Inspection Agency
- BSE – Geographical Distribution (OIE)
Lovers of antiquity and the classical world know very well that Asia Minor–modern Turkey–was formerly inhabited by a variety of non-Turkic peoples. Most of these peoples spoke Indo-European languages and included the Hittites, Phrygians, and Luwians (Troy was probably a Luwian city). After the conquests of Alexander the Great, Asia Minor was mostly Hellenized and remained solidly Greek until the 11th century, with Armenians forming the majority in the eastern parts of the region, as they had since antiquity. In the second half of the first millennium CE, Turkic peoples gradually streamed into most of Central Asia from their original homeland in the Altai mountains of western Mongolia. They gradually displaced or assimilated both the settled and nomadic Iranian-speaking peoples. But how did they get all the way to Turkey, which has the largest concentration of Turkic peoples today? In the 11th century, Turks began appearing at the edges of Asia Minor (Anatolia), which was then controlled by the Greeks. Many of the Turks were mercenaries in the employ of local Arab and Persian rulers to the east of the Byzantine Empire and Armenia, the dominant states in Asia Minor. In 1037, the Seljuk Empire, a Turkic state, was founded northeast of Iran in Central Asia and quickly overran much of Persia, Iraq, and the Levant. By the 1060s, the Seljuk Empire bordered Byzantine Asia Minor. It should be noted that the Turks were a minority, ruling a Persian, Arab, and Kurdish majority. The main strategic threat to the Turks was the Fatimid Caliphate based in Egypt. The Fatimids were Ismaili Shia and at that time ruled over Jerusalem and Mecca, while the Turks upheld Sunni Islam. The Sunni Caliph in Baghdad was their puppet: by this time, the Caliph had ceased to exercise any political role, while the Seljuk sultans held the reins of power.
As with many empires, problems arose from the conflict between nomadic rulers and a sedentary population. Many of the Turkic tribes under Seljuk rule actually posed a problem for the Seljuks, since they were restless and sometimes raided the settled populations the Seljuks governed. As a result, many Turkic tribes and families were placed on the frontiers of the Seljuk Empire, including the frontier with the Byzantine Empire. Turkish raids into Asia Minor commenced, greatly annoying the Byzantines. The Byzantines had conquered Armenia in 1045, but their frontier with the Seljuks was not particularly strong or pacified, as a result of the intermittent warfare there. Additionally, many Armenians did not like the Byzantines and did not help them resist the Turkish raids. Eventually, by 1071, the Byzantines, exasperated at constant Turkish raiding, decided to move a large army to their border to eliminate the Turkish threat once and for all. Unfortunately, this was not a particularly good idea, because the Byzantines' strength lay in manning border forts against lightly armed tribal warriors; by seeking a pitched battle, they risked total defeat. Furthermore, the Seljuk Turks did not want to antagonize the Byzantines. Their state apparatus was directed against Egypt; it was only tribes barely under central Seljuk control that were raiding the Byzantines. Romanus IV Diogenes, the Byzantine Emperor, created a previously non-existent threat for the Seljuks by moving some 40,000 troops to his eastern border, thus alerting the Seljuk Sultan Alp Arslan to a danger from Asia Minor. Thus the Byzantines, by diverting the Turks' attention from Egypt, brought a Turkic army to Asia Minor from Persia and Central Asia. The Seljuk and Byzantine armies met at Manzikert in eastern Turkey, where the Byzantines were crushed.
This is arguably one of the most decisive battles in history, as it resulted in the eventual establishment of Turkish power in Asia Minor. The battle was likely lost by the Byzantines due to treachery: units commanded by generals belonging to rival court factions in Constantinople simply never showed up for the battle, despite being in the vicinity, and returned home afterwards. Sultan Alp Arslan captured Emperor Diogenes and offered him generous terms before sending him home. Shortly afterwards, however, the Byzantine Empire fell into civil war between Diogenes and other contenders for the throne, and his treaty with the Turks was broken. This left Asia Minor devoid of soldiers and gave the Turks good reason to occupy it. By 1081, they were across the Bosphorus Straits from Constantinople. Although the Byzantines and Crusaders later recovered some territory in Asia Minor, from then on the majority of the region remained under Turkish control. But groups of Turks ruled over many states in the Middle East and South Asia at this point in time. Why did they become the majority in Turkey? After the Seljuk victory, many Turks poured into Asia Minor, establishing little statelets and ruling over the native population. Following the subsequent Mongol invasions, even more poured in, fleeing from their former lands in Persia and Central Asia. Unlike in many other cases, where a dominant minority eventually became assimilated into the majority population, the Turks did not assimilate, because of the unstable, chaotic frontier situation. Indeed, many locals (ethnic Greeks and Armenians) attached themselves to Turkish warlords for protection as clients. This client-patron relationship, spread out over many bands and tribes across Asia Minor, ensured that the majority of the population assimilated into the Turkish religion (Islam), language, and culture instead of vice versa.
This is a cultural process known as elite dominance, wherein a minority imposes its culture on the majority. The Turkification of Asia Minor is evident in the fact that genetically, the majority of today’s Turks are most closely related to Greeks and Armenians rather than Central Asian Turkic peoples, like the Uzbeks and Kazakhs. Thus, while the Turkic culture dominated in Asia Minor, the Turks themselves quickly merged genetically into the native population. This is not to say that there is no actual Central Asian genetic component among today’s Anatolian Turkish population. Genetic studies show that some 9 to 15 percent of the Turkish genetic mixture derives from Central Asia. Asia Minor was the most populous part of the Byzantine Empire, its heartland. Without it, the empire simply didn’t have enough resources to compete in the long run. Turkification was also helped by the fact that the Greeks were of a different religion than the Turks. Greeks converting to Islam would often do so by “going Turk,” a phenomenon not possible in already Muslim Arab and Persian regions. Furthermore, in the later Ottoman Empire, the Turkish language prevailed at the official level, and not local languages. As a result of all these factors, densely populated Asia Minor became the region of the world with the largest concentration of Turkic-speaking peoples, far away from their original homeland in Central Asia. This event had a major impact on global geopolitics for centuries to come.
Popular opinion holds that there are several benefits to music education, but is there truth to that statement? Several studies have examined the link between learning music and increased brain activity. While we generally think that learning how to play an instrument or sing well is beneficial for children, it provides more perks than you might expect. One study showed that children who received a minimum of three years of music training outperformed their peers, showing enhanced verbal ability and non-verbal reasoning skills. Non-verbal reasoning involves analyzing and comprehending visual data, such as understanding the relationships, differences and similarities between various patterns and shapes. The same study showed that the children performed better in fine motor skills and auditory discrimination abilities. Another study showed that those who were given a formal education in music saw improvements in reading skills and academic achievement. You might think that these learning areas are as distant as can be from musical training, so it's wonderful to see how instrumental and vocal training can push a child to develop an important set of skills. Among children between the ages of two and nine, many who take music classes see extraordinary boosts in language development, which is crucial for their age range. Kids are born with the ability to decipher words and sounds, but learning music skills greatly enhances these inborn abilities.
Social Behavior Benefits and IQ
Taking group classes helps children develop necessary social skills that they will bring with them as they grow into adulthood. According to a study published in Psychological Science, those who regularly take music classes see an increase in IQ as well. A group of children who were given nine months of voice and piano lessons tested, on average, three IQ points higher than children who weren't given any training at all.
Want to know more about how music education can boost our brain power? Send us a message and we’d love to chat with you!
New Mexico Symbols
Aircraft, Amphibian, Animal, Answer, Ballad, Balloon Museum, Bilingual Song, Bird, Butterfly, Cookie, Colors, Cowboy Song, Fish, Flag, Flower, Fossil, Gem, Grass, Guitar, Historic Railroad, Insect, Motto, Necklace, Nicknames, Poem, Quarter, Question, Reptile, Seal, Slogan (Business, Commerce, and Industry), Song, Spanish Language Song, Symbol, Tie, Tree, Vegetables
National & State Symbols
New Mexico Early History
First Early Inhabitants of New Mexico
Early history examines the archaeological record that tells the story of the first inhabitants of New Mexico. Learn about the prehistory and culture of the first early inhabitants, and what lessons they might teach us about the early history of New Mexico.
New Mexico First Early Inhabitants Timeline
Early History of Native Americans in New Mexico
The Indigenous People of New Mexico
The Clovis-Paleo Indians discovered the eastern plains of New Mexico, the same expansive romping grounds of the dinosaurs, around 10,000 B.C. The river valleys west of their hunting grounds later flooded with refugees from the declining Four Corners Anasazi cultures. Sometime between A.D. 1130 and 1180, the Anasazi drifted from their high-walled towns to evolve into today's Pueblo Indians, so named by early Spanish explorers because they lived in land-based communities much like the villages, or pueblos, of home. A culturally similar American Indian people, the Mogollón, lived in today's Gila National Forest. The Anasazi occupied the region where present-day Arizona, New Mexico and Colorado meet. They were among the most highly civilized of the Native American cultures. They raised corn and cotton, and tamed wild turkeys, using the meat for food and the feathers for clothing. In the winter, the Anasazi wore garments fashioned from turkey feathers.
US History Overview

The United States of America is located in the middle of the North American continent, with Canada to the north and the United Mexican States to the south. The United States ranges from the Atlantic Ocean on the nation's east coast to the Pacific Ocean bordering the west, and also includes the state of Hawaii, a series of islands located in the Pacific Ocean; the state of Alaska, located in the northwestern part of the continent above the Yukon; and numerous other holdings and territories.
A Mercury Colony? There is a good reason for colonizing another planet, which is to avoid extinction if the Earth is hit by a 10 km or larger asteroid, as has happened many times in the Earth's history. Colonization of Mercury appears to be a very real and practical possibility, whereas colonization of Mars or the other planets, moons or asteroids is really more in the realm of fantasy. The first thought about Mercury is that it would have very high temperatures and no water, because the equatorial surface temperature ranges between −183 °C and 427 °C as the planet rotates. But an analysis of temperature vs. latitude and depth shows that the temperature is nearly constant at room temperature (22 ± 1 °C) in underground rings circling the planet's poles, at depths greater than 0.7 m below the surface. Similar results are found using numerical techniques in an Icarus paper, Vol. 141, 179–193 (1999). Agriculture would be possible with 2×10^13–10^15 kg of water covered by 5.65×10^9 m^3 of carbon-rich hydrocarbons. Crops would provide food and oxygen, and consume the carbon dioxide we exhale. All human habitation and agriculture would be underground to avoid temperature extremes, ionizing radiation, and the loss of oxygen, water and carbon dioxide to the surface. Filtered light could be used for crops, but it is likely that rapidly growing crop varieties could be developed which would take advantage of the high light intensity and the long Mercury day, where sunrise to sunset lasts for 88 Earth days.
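The claim that temperatures become steady below about 0.7 m can be sanity-checked with the standard thermal skin-depth formula for a periodically heated surface. The regolith thermal diffusivity below is an assumed, lunar-like value (the text does not give one), so this is a rough sketch rather than the paper's actual calculation:

```python
import math

# Periodic surface temperature swings decay as exp(-depth / skin_depth),
# where skin_depth = sqrt(2 * kappa / omega) for forcing frequency omega.
kappa = 1e-8                      # m^2/s regolith thermal diffusivity (ASSUMED, lunar-like)
solar_day = 176 * 24 * 3600.0     # Mercury's solar day (sunrise to sunrise) in seconds
omega = 2 * math.pi / solar_day   # angular frequency of the day/night cycle

skin_depth = math.sqrt(2 * kappa / omega)

def swing_fraction(depth_m):
    """Fraction of the surface temperature swing that survives at a given depth."""
    return math.exp(-depth_m / skin_depth)

print(f"skin depth: {skin_depth:.2f} m")
print(f"swing remaining at 0.7 m: {swing_fraction(0.7):.1%}")
```

With these assumptions the skin depth comes out near 0.2 m, so at 0.7 m only a few percent of the enormous day/night swing survives, consistent with the near-constant subsurface temperatures described above.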
X-ray spectrometry for Si, Mg, Al, S, Ca, Ti, Fe, Cl, Cr and Mn, gamma-ray spectrometry for K, Th and U, and gamma-ray spectrometry for Al, Ca, S, Fe and Na from Mercury MESSENGER shows the following average composition of Mercury's soil compared to Earth (weight %):

| | O | Si | Mg | Al | S | Ca | Ti | Fe | Cl | Cr | Mn | Na | K | Th | U | C | N | H |
| Mercury (tens of cm depth) | 42.3 | 24.6 | 12.5 | 7.1 | 2.3 | 5.9 | 0.2 | 1.9 | <0.2 | <0.5 | <0.5 | 2.9 | 0.1 | 0.00002 | 0.00001 | ? | ? | ? |
| Earth (continental crust) | 47 | 28 | 2.5 | 8 | 0.04 | 4 | 0.5 | 5 | 0.02 | 0.02 | 0.1 | 2 | 2 | 0.0007 | 0.0002 | 0.1 | 0.002 | 0.1 |

Several other aspects of Mercury make it a good prospect for a colony. One very important advantage is the high solar light intensity, which is stronger than on Earth by a factor of 10.6 at perihelion and 4.6 at aphelion. This strong light intensity would provide virtually unlimited power via photovoltaic solar arrays, and the resulting vertical temperature gradients of ~200 °C/m would provide even more unlimited power via thermal solar arrays. With such an unlimited and inexpensive power source, almost anything needed for survival could be produced. The gravity on Mercury is 38% that of Earth, which is strong enough to avoid the reduction in bone mass that occurs in very low gravity and weightless environments. There are no temperature variations over periods longer than the Mercury day (like Earth's seasons), which avoids the need for heating/cooling equipment within the 22 ± 1 °C underground rings mentioned above. This occurs because Mercury's orbit is synchronized with its rotation such that the 0° and 180° longitudes always experience midnight and noon at perihelion, whereas the 90° and 270° longitudes always experience midnight and noon at aphelion. The rings would be about 5000 km long, similar to the diameter of the planet. They would be only 20–60 km wide because of horizontal temperature gradients of 0.035–0.097 °C/km. This results in a total area of about 40 × 5000 = 200,000 km^2 of 22 ± 1 °C temperature around each pole.
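The intensity factors of 10.6 and 4.6 quoted above follow directly from the inverse-square law and Mercury's orbital distances (roughly 0.307 AU at perihelion and 0.467 AU at aphelion), which a few lines of arithmetic confirm:

```python
def intensity_factor(distance_au):
    """Solar intensity relative to Earth (at 1 AU), by the inverse-square law."""
    return (1.0 / distance_au) ** 2

MERCURY_PERIHELION_AU = 0.307
MERCURY_APHELION_AU = 0.467

print(round(intensity_factor(MERCURY_PERIHELION_AU), 1))  # 10.6
print(round(intensity_factor(MERCURY_APHELION_AU), 1))    # 4.6
```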
The rings could also be extended hundreds of floors downwards, essentially by making underground skyscrapers. And the entire area between the rings and the poles could also be populated (albeit more sparsely) simply by using abundant solar power. Now, an underground existence may sound undesirable to many people. However, the fact is that most people spend 95% of their lives indoors, and from a quality-of-life perspective there is little difference between indoors above ground and indoors below ground. And the colony could still have natural areas, trees, flowers, parks, lakes, wild animals, and so on. In fact it would probably need all of these things to maintain the ecosystem. The only difference from Earth is that they would be in man-made underground greenhouses instead of on the planet surface. Mars automatically comes to mind when discussing planetary colonization, and manned missions to Mars have been the long-term focus of US space exploration plans since 2004. But despite all the hype, Mars is really a poor prospect for colonization. The solar light intensity on Mars is 0.43 times that of Earth, which makes solar power and agriculture much less practical than on Mercury. The gravity of Mars is 38% that of Earth, essentially equal to Mercury's. The magnetic field of Mars is 0.1% that of Earth's, and its atmospheric density is 2% that of Earth's, so protection from ionizing radiation would require underground habitation, the same as on Mercury. The average equatorial surface temperature of Mars is about −45 °C (−50 °F), which would be the constant temperature underground. And of course the temperature gets much lower away from the equator. Such low temperatures can be withstood by machines such as the Spirit, Opportunity and Curiosity Mars rovers, but not by people. Human habitation of Mars would be problematic because of the very low temperatures, limited solar power capacity, and a biological history which precludes oil, gas and coal deposits.
Human habitation would probably be impossible without nuclear power, and uranium mining and nuclear plants would be very challenging in an airless, cold environment. Also, concentrated uranium deposits are probably less common than on Earth, because they depend on sedimentary and hydrothermal processes which are more prevalent on Earth. The other planets, moons and asteroids have even worse drawbacks than Mars. Asteroid impacts of 5 km diameter or greater occur roughly once every 10 million years, and those of 10 km or greater occur roughly once every 100 million years. In the past 540 million years there have been 5 extinction events in which more than 50% of the Earth's species were killed off, including the Permian-Triassic extinction, where 90% of the species were lost. Most scientists think that some of these were caused by asteroid impacts. A well-proven example is the Chicxulub impact, which resulted from a 10 km asteroid striking at the Cretaceous-Tertiary boundary 65 million years ago and caused the extinction of 70% of the Earth's species, including the dinosaurs. Even larger impacts have occurred at earlier times, of which only a few are known because their impact craters get erased by the Earth's geological processes over time. It is thought that a 20 km or larger asteroid would cause the extinction of all higher-order animals and plants, leaving only microorganisms. While the likelihood of such an event is very small in any given year, it could happen at any time, and it is almost guaranteed to happen eventually. Given the facts above, it appears that the focus of US space exploration plans should be shifted from Mars to Mercury. In particular, the US has already had four successful Mars rovers, so how about a Mercury rover mission? Such a mission could focus on a detailed analysis of the water ice, the dark material covering the water ice, and the soil, either on-site or by bringing samples back to Earth, as proposed by one scientist.
Analysis of these materials would be critical for a Mercury colony, and it would also be of interest from a purely scientific standpoint. How deep are the water ice deposits? On-site echo-sounding measurements would provide a much better estimate than the existing measurements, which really just give a lower bound. What is the isotopic composition of the water ice, which would give clues about its origin? What other materials are mixed in with it? Would the water need to be purged of poisonous contaminants before it could be used for drinking or agriculture? Is the dark covering material made of hydrocarbons, as several scientists have suggested? How much of this material is there, and could it be used as a source of carbon for agriculture? What is the soil concentration of carbon and nitrogen, elements that could not be measured by MESSENGER's gamma-ray and X-ray spectrometers? What minerals are present, and are some of the elements critical for agriculture locked up in minerals which cannot be metabolized by plants? Perhaps it would be best to land the rover near a small crater with water deposits, so it could hide in the crater from the hot sun but project solar cells or a mirror over the edge. Some good landing sites could probably be found among the large number of high-resolution images taken by MESSENGER. The main motivation for investigating Mercury is its potential for hosting a self-sustaining human colony, which would protect humanity from extinction in the event of a catastrophic asteroid impact. A second motivation is simply to increase our scientific understanding of the solar system. It is very unlikely that Mercury could ever be a practical source of minerals or energy to be transported back to Earth, or that Mercury would ever have any other Earth-serving economic value. But surely preservation of the human species and scientific curiosity are better motivations than economic benefit.
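The impact-frequency figures above translate into rough odds of an extinction-scale strike. Modeling a 10 km impact as a once-per-100-million-years event and each year as an independent trial:

```python
# Probability of at least one 10 km impact over a time horizon, treating
# impacts as independent yearly trials with p = 1 / 100,000,000.
P_PER_YEAR = 1.0 / 100_000_000

def prob_at_least_one(years, p=P_PER_YEAR):
    return 1.0 - (1.0 - p) ** years

for horizon in (100, 1_000_000, 100_000_000):
    print(f"{horizon:>11,} years: {prob_at_least_one(horizon):.4f}")
```

Over a human century the chance is on the order of one in a million, but over 100 million years it climbs to roughly 63%, which is the sense in which a catastrophic impact is "almost guaranteed to happen eventually."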
Humans are part of a universe where time is measured in billions of years. We need to take a long-term view, and consider the future of the human species over the next thousand, million and billion years, not just the next 10 or 100 years. A Mercury colony would be a challenging and costly effort, for sure. The voyage to Mercury might take 6.5 years, like the MESSENGER trip, because of the large velocity change involved, and the spacecraft would require heavy shielding against ionizing radiation. Much planning and preparation would be needed to ensure that the colony could get through the first weeks, months, and years with little or no resupply from Earth. However, a Mercury colony appears to be a real possibility using current technology, not a fantasy for the distant future. “People joke about it, but it’s not so crazy, really,” said David A. Paige, a professor of geology at U.C.L.A. involved with the water ice discovery. In fact, if we delay until the distant future, or even 50 years or so, such an effort will probably become impossible. This is because we humans will consume the Earth's non-renewable energy and mineral resources almost completely within the next 50–100 years, severely reducing our discretionary income for costly activities such as space travel. We should be pursuing a Mercury colony now, before it is too late.
This post will explore digital certificates. Consider it the result of a need to learn in more detail about how certificates work. Cryptography solves many key computer problems (like identification). Traditional security based on physical devices is useless in the digital world; in order to protect information and identity, complex algorithms are needed to hide the content and protect the user's identity. Early symmetric encryption solved the secrecy problem, but had a big problem with how far it could safely transfer its secret keys. It became too difficult and expensive to safeguard the symmetric keys. Beyond this, this model of trust required new keys for each new trust relationship, and it implied prior knowledge of the other person. Since both parties know the secret key, there is no way to know which of the two parties might have modified content or impersonated the other. The shared key does not indicate identity, and therefore cannot be used to authenticate. Public key encryption was first created in the mid-1970s. For the first time it was possible to transfer secret data without exposing the secret key. There are two keys (one public and one private) that do the opposite of each other: if you use the public key to encrypt, only the private key can decrypt, and likewise, if data is encrypted using the private key, it can only be decrypted by the public key. The public key is made widely available for all to use; the private key is available only to the owner. SSL uses the public keys of servers to encrypt traffic to the web: the SSL protocol requests the public key from the server directly as part of its negotiation. If a user wants to send a private message to another person, they use the public key of that person to encrypt the message, and the recipient uses their private key to decrypt it. The power of this is that the public key can be used by anyone, yet the content sent to the private key owner cannot be decrypted by anyone else (assuming that the encryption is strong enough).
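The "two keys that do the opposite of each other" can be demonstrated with the classic textbook RSA numbers. This is a toy sketch with tiny primes, for illustration only; real keys are thousands of bits long and use padding schemes:

```python
# Toy RSA: n = p * q and e are public; d is the private exponent.
p, q = 61, 53
n = p * q        # 3233, the public modulus
e = 17           # public exponent
d = 2753         # private exponent, chosen so (e * d) % lcm(p-1, q-1) == 1

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # only the private-key holder can decrypt
print(recovered)  # 42

# The reverse direction also works: "encrypt" with d, recover with e.
assert pow(pow(message, d, n), e, n) == message
```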
Since only one person owns the private key, it follows that if you can decrypt data with a user's public key, you can be guaranteed that it came from that person. This introduces the concept of digital signatures. Sometimes it is only necessary to prove that someone sent data. The actual data does not necessarily need to be encrypted, but the producer and the consumer want a guarantee that nothing has changed and that the content came from the producer. This is used extensively in code signing. The quick summary is that a hash algorithm is run against a section of data and the resulting digest (such as a SHA-1 hash) is encrypted using the private key of the producer. The consumer receives the data, decrypts the digest, and compares it against the digest calculated from the received data. If they do not match, something has been altered that should not have been. The problem, however, is that just because a public key is published it does not necessarily mean that the owner is the person you want to trust. Someone could send you a public key that actually belongs to someone else. There is a need to guarantee trust, and digital certificates are the answer. Similar in nature to a passport or driver license, they uniquely identify you based on a trusted organisation. The user's public key is signed by a certificate authority (think VeriSign) that can vouch for you being a valid user or company. They only sign public keys they trust, through a process of proving identity. Years ago I needed a certificate for Citrix and had to prove to VeriSign that we were who we said we were. It can be an extensive process, depending on the level of verification. Digital certificates contain a number of data items to help verify identity and trust. This includes the concept of expiration: proof of identity only lasts for a specific time (usually a year). Certificates can be allocated based on a trust hierarchy.
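The hash-then-sign flow just described can be sketched end to end by combining a real SHA-256 digest with the same kind of toy textbook RSA numbers. A real code-signing system uses large keys and standardized signature padding, so treat this strictly as an illustration of the digest-compare idea:

```python
import hashlib

# Toy signing keys (illustrative textbook RSA numbers, not secure).
n, e, d = 3233, 17, 2753

def digest_int(data: bytes) -> int:
    """SHA-256 digest of the data, reduced into the toy key's range."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    # Producer "encrypts" the digest with the private exponent d.
    return pow(digest_int(data), d, n)

def verify(data: bytes, signature: int) -> bool:
    # Consumer "decrypts" with the public exponent e and compares digests.
    return pow(signature, e, n) == digest_int(data)

payload = b"installer.exe contents"
sig = sign(payload)
print(verify(payload, sig))  # True
# Any change to the payload alters its digest, so verification then fails.
```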
Trust is relative to the parent, and can be chained through many layers before it gets back to a certificate authority root. Web browsers have a list of certificate authorities they trust, and that list is the basis for determining trust with HTTPS sites. Only trusted CAs are allowed. More in the next post…
= Smallpox in America In 1507 smallpox was introduced into the Caribbean island of Hispaniola and to the mainland in 1520, when Spanish settlers from Hispaniola arriving in Mexico brought smallpox with them. Smallpox devastated the native Amerindian population and was an important factor in the conquest of the Aztecs and the Incas by the Spaniards. Settlement of the east coast of North America in 1633 in Plymouth, Massachusetts was also accompanied by devastating outbreaks of smallpox among Native American populations, and subsequently among the native-born colonists. Some estimates indicate case fatality rates of 80–90% in Native American populations during smallpox epidemics. Source :[link] = Great Plague of Spain Three great plagues ravaged Spain in the 17th century. They were: -The Plague of 1596-1602 (Arrived in Santander by ship from northern Europe, most likely the Netherlands, then spread south through the centre of Castile, reaching Madrid by 1599 and arriving in the southern city of Seville by 1600.) which claimed 600,000 to 700,000 lives -The Plague of 1646-1652 ("The Great Plague of Seville"; believed to have arrived by ship from Algeria, it was spread north by coastal shipping, afflicting towns and their hinterlands along the Mediterranean coast as far north as Barcelona.) -The Plague of 1676-1685 Factoring in normal births, deaths, plus emigration, historians reckon the total cost in human lives due to these plagues throughout Spain, throughout the entire 17th century, to be a minimum of nearly 1.25 million. As a result, the population numbers of Spain scarcely budged between the years 1596 and 1696. Hetalia, England aph, Spain aph, Egypt aph, Italy aph, Macau aph & Netherlands aph belonged to Himaruya H. Spice Islands is Maluku Islands in Indonesia. Indonesia aph, Malaysia aph & Portugal aph based on Himaruya's sketch.
([link]
India OC & Mughal OC design by dinosaurusgede
Philippines OC design by
Mexico OC design by
Japan 17th century design by
Twilight is the interval before sunrise or after sunset during which the sky is still somewhat illuminated. Twilight occurs because sunlight illuminates the upper layers of the atmosphere: the light is scattered in all directions by the molecules of the air, reaches the observer and still illuminates the surroundings. The map shows which parts of the world are in daylight and which are in night. If you want to know exactly when dawn or dusk occurs in a specific place, that information is available in the meteorological data. Why do we use UTC? Coordinated Universal Time, or UTC, is the main standard of time by which the world regulates clocks and time. It is one of several close successors to Greenwich Mean Time (GMT). For most common purposes, UTC is synonymous with GMT, but GMT is no longer the most precisely defined standard for the scientific community.
One of the most basic skills we have is reading. Without it, life can be very difficult, and unfortunately, far too many children grow up and remain illiterate. There are many ways this can be changed, improving their lives in the process. With the help of phonics books on letters and sounds, you can change the life of your child, making sure that he or she has a better future thanks to the knowledge that comes with improved reading skills. Certainly, your own knowledge of how to read and of the rules regarding grammar and sounds has much to do with it. As a parent, in order to give your child the opportunity to excel, you will also need a basic understanding of phonics. Every word is made up of letters; when you break a word down, you end up with its basic elements. Either alone or in combination, these letters represent sounds. Understanding the different sounds that each letter or combination of letters makes leads to a better understanding of the language. Through these teaching techniques, a child learns to make the associations between letters and sounds. Of course, every language, and English is no exception, has rules and exceptions to those rules. This can be very confusing for young readers, so sight words are introduced so that young readers can identify them quickly. These are words that are irregular from a phonetic standpoint but appear very regularly in literature. Once children understand and recognize them easily, sight words will increase your child's reading skills and help him or her move on. In essence, it is better that children memorize these sight words to simplify reading at this and future levels. Using phonics programs consisting of books and worksheets will help your child improve through the understanding that comes from the various exercises.
These exercises can be in the form of games and activities, making it more interesting and more fun for your child to grasp the concept and move on. As we all know, learning is a lot easier for children when it is done in a fun way. The important thing to realize is that every child needs a little bit of help when it comes to improving their reading skills. As a parent, it is imperative that you recognize the importance of reading, and more so, the basics of reading. Once you do understand the important role that it plays, you are certain to use these tools to help your child along the way.
The operation of the ear has two facets: the behavior of the mechanical apparatus and the neurological processing of the information acquired. The mechanics of hearing are straightforward and well understood, but the action of the brain in interpreting sounds is still a matter of dispute among researchers.

Fig. 1 Parts of the ear
1. Auditory canal
2. Ear drum
3. Hammer
4. Anvil
5. Stirrup
6. Round window
7. Oval window
8. Semicircular canals
9. Cochlea
10. Eustachian tube

The ear contains three sections: the outer, middle, and inner ears. The outer ear consists of the lobe and ear canal, structures which serve to protect the more delicate parts inside. The outer boundary of the middle ear is the eardrum, a thin membrane which vibrates in sympathy with any entering sound. The motion of the eardrum is transferred across the middle ear via three small bones named the hammer, anvil, and stirrup. These bones are supported by muscles which normally allow free motion but can tighten up and inhibit the bones' action when the sound gets too loud. The leverages of these bones are such that rather small motions of the ear drum are very efficiently transmitted. The boundary of the inner ear is the oval window, another thin membrane which is almost totally covered by the end of the stirrup. The inner ear is not a chamber like the middle ear, but consists of several tubes which wind in various ways within the skull. Most of these tubes, the ones called the semicircular canals, are part of our orientation apparatus. (They contain fine particles of dust; the location of the dust tells us which way is up.) The tube involved in the hearing process is wound tightly like a snail shell and is called the cochlea.

Fig. 2 Schematic of the ear
This is a diagram of the ear with the cochlea unwound. The cochlea is filled with fluid and is divided in two the long way by the basilar membrane.
The basilar membrane is supported by the sides of the cochlea but is not tightly stretched. Sound introduced into the cochlea via the oval window flexes the basilar membrane and sets up traveling waves along its length. The taper of the membrane is such that these traveling waves are not of even amplitude the entire distance, but grow in amplitude to a certain point and then quickly fade out. The point of maximum amplitude depends on the frequency of the sound wave. The basilar membrane is covered with tiny hairs, and each hair follicle is connected to a bundle of nerves. Motion of the basilar membrane bends the hairs, which in turn excite the associated nerve fibers. These fibers carry the sound information to the brain. This information has two components. First, even though a single nerve cell cannot react fast enough to follow audio frequencies, enough cells are involved that the aggregate of all the firing patterns is a fair replica of the waveform. Second, and probably most importantly, the location of the hair cells associated with the firing nerves is highly correlated with the frequency of the sound. A complex sound will produce a series of active loci along the basilar membrane that accurately matches the spectral plot of the sound. The amplitude of a sound determines how many nerves associated with the appropriate location fire, and to a slight extent the rate of firing. The main effect is that a loud sound excites nerves along a fairly wide region of the basilar membrane, whereas a soft one excites only a few nerves at each locus. The mechanical process described so far is only the beginning of our perception of sounds. The mechanisms of sound interpretation are poorly understood; in fact it is not yet clear whether all people interpret sounds in the same way. Until recently, there has been no way to trace the wiring of the brain, no way to apply simple stimuli and see which parts of the nervous system respond, at least not in any detail.
The only research method available was to have people listen to sounds and describe what they heard. The variability of listening skills and the imprecision of the language combined to make psycho-acoustics a rather frustrating field of study. Some of the newest research tools show promise of improving the situation, so research that is happening now will likely clear up several of the mysteries. The current best guess as to the neural operation of hearing goes like this: We have seen that sound of a particular waveform and frequency sets up a characteristic pattern of active locations on the basilar membranes. (We might assume that the brain deals with these patterns in the same way it deals with visual patterns on the retina.) If a pattern is repeated enough we learn to recognize that pattern as belonging to a certain sound, much as we learn a particular visual pattern belongs to a certain face. (This learning is accomplished most easily during the early years of life.) The absolute position of the pattern is not very important, it is the pattern itself that is learned. We do possess an ability to interpret the location of the pattern to some degree, but that ability is quite variable from one person to the next. (It is not clear whether that ability is innate or learned.) What use the brain makes of the fact that the aggregate firing of the nerves more or less approximates the waveform of the sound is not known. The processing of impulse sounds (which do not last long enough to set up basilar patterns) is also not well explored. Most studies in psycho-acoustics deal with the sensitivity and accuracy of hearing. This data was intended for use in medicine and telecommunications, so it reflects the abilities of the average untrained listener. It seems to be traditional to weed out musicians from such studies, so the capabilities of trained ears are not documented. I suspect such capabilities are much better than that suggested by the classic studies. 
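The mapping from frequency to position along the basilar membrane described earlier is commonly modeled with Greenwood's empirical function for the human cochlea; the formula and its constants come from the standard psychoacoustics literature rather than from this article:

```python
def greenwood_frequency(x):
    """Characteristic frequency (Hz) at fractional distance x along the
    basilar membrane, with x = 0 at the apex and x = 1 at the base
    (Greenwood's fit for the human cochlea)."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}: {greenwood_frequency(x):8.0f} Hz")
```

The fit spans roughly 20 Hz at the apex to 20 kHz at the base, matching the familiar range of human hearing.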
The ear can respond to a remarkable range of sound amplitude. (Amplitude corresponds to the quality known as loudness.) The ratio between the threshold of pain and the threshold of sensation is on the order of 130 dB, or ten trillion to one. The judgment of relative sounds is more or less logarithmic, such that a tenfold increase in sound power is described as "twice as loud". The just noticeable difference in loudness varies from 3 dB at the threshold of hearing to an impressive 0.5 dB for loud sounds.

Fig. 3 Perceived loudness of sounds

The sensation of loudness is affected by the frequency of the sound. A series of tests using sine waves produces the curves shown. At the low end of the frequency range of hearing, the ear becomes less sensitive to soft sounds, although the pain threshold as well as judgments of relatively loud sounds are not affected much. Sounds of intermediate softness show some but not all of the sensitivity loss indicated for the threshold of hearing. At high frequencies the change in sensitivity is more abrupt, with sensation ceasing entirely around 20 kHz. The threshold of pain increases in the top octave also. The ability to make loudness judgments is compromised for sounds of less than 200 ms duration. Below that limit, the loudness is affected by the length of the sound; shorter is softer. Durations longer than 200 ms do not affect loudness judgment, beyond the fact that we tend to stop paying attention to long unchanging tones. The threshold of hearing for a particular tone can be raised by the presence of another noise or another tone. White noise reduces the loudness of all tones, regardless of absolute level. If the bandwidth of the masking noise is reduced, the effect of masking loud tones is reduced, but the threshold of hearing for those tones remains high. If the masking sound is narrow-band noise or a tone, masking depends on the frequency relationship of the masked and masking tones.
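The loudness figures above are easy to check: decibels are ten times the base-10 logarithm of a power ratio, so a pain-to-sensation ratio of ten trillion to one is 130 dB, and a tenfold power increase (heard as roughly "twice as loud") is 10 dB:

```python
import math

def db(power_ratio):
    """Convert a sound power ratio to decibels."""
    return 10.0 * math.log10(power_ratio)

print(round(db(10_000_000_000_000)))  # 130  (threshold of pain vs. sensation)
print(round(db(10)))                  # 10   (a tenfold increase in power)
```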
At low loudness levels, a band of noise will mask tones of higher frequency than the noise more than those of lower frequency. At high levels, a band of noise will also mask tones of lower frequency than itself. People's ability to judge pitch is quite variable. (Pitch is the quality of sound associated with frequency.) Most subjects studied could match pitches very well, usually getting the frequencies of two sine waves within 3%. (Musicians can match frequencies to 1%, or should be able to.) Better results are obtained if the stimuli are similar complex tones, which makes sense since there are more active points along the basilar membrane to give clues. Dissimilar complex tones are apparently fairly difficult to match for pitch (judging from experience with ear training students; I haven't seen any studies on the matter to compare them with sine tone results). Judgment of relative pitch intervals is extremely variable. The notion of the two to one frequency ratio for the octave is probably learned, although it is easily learned given access to a musical instrument. An untrained subject, asked to set the frequency of a tone to twice that of a reference, is quite likely to set them a twelfth or two octaves apart or find some arbitrary and inconsistent ratio. The tendency to land on "proper" intervals increases if complex tones are used instead of sine tones. Trained musicians often produce octaves slightly wider than two to one, although the practical aspects of their instrument strongly influence their sense of interval. (As a bassoonist who has played the same instrument for twenty years, I have a very strong tendency to place G below middle C a bit high.) Identification of intervals is even more variable, even among musicians. It does appear to be trainable, suggesting it is a learned ability. 
Identification of exact pitches is so rare that it has not been properly studied, but there is some anecdotal evidence (such as its relatively more common occurrence among people blind from birth) suggesting it is somehow learned also. The amplitude of sound does not have a strong effect on the perception of pitch. Such effects seem to hold only for sine tones. At low loudness levels pitch recognition of pure tones becomes difficult, and at high levels increasing loudness seems to shift low and middle register pitches down and high register pitches up. The assignment of the quality of possessing pitch in the first place depends on the duration and spectral content of the sound. If a sound is shorter than 200 ms or so, pitch assignment becomes difficult with decreasing length until a sound of 50 ms or less can only be described as a pop. Sounds with waveforms fitting the harmonic pattern are clearly heard as pitched, even if the frequencies are offset by some additive factor. As the spectral plot deviates from the harmonic model, the sense of pitch is reduced, although even noise retains some sense of being high or low. Recognition of sounds that are similar in aspects other than pitch and loudness is not well studied, but it is an ability that everyone seems to share. We do know that timbre identification depends strongly on two things: the waveform of the steady part of the tone, and the way the spectrum changes with time, particularly at the onset or attack. This ability is probably built on pattern matching, a process that is well documented with vision. Once we have learned to identify a particular timbre, recognition is possible even if the pitch is changed or if parts of the spectrum are filtered out. (We are good enough at this that we can tell the pitch of low sounds when played through a sound system that does not reproduce the fundamentals.) We are also able to perceive the direction of a sound source with some accuracy.
Left and right location is determined by perception of the difference of arrival time or difference in phase of sounds at each ear. If there are more than two arrivals, as in a reverberant environment, we choose the direction of the first sound to arrive, even if later ones are louder. Localization is most accurate with high frequency sounds with sharp attacks. Height information is provided by the shape of our ears. If a sound of fairly high frequency arrives from the front, a small amount of energy is reflected from the back edge of the ear lobe. This reflection is out of phase for one specific frequency, so a notch is produced in the spectrum. The elongated shape of the lobe causes the notch frequency to vary with the vertical angle of incidence, and we can interpret that effect as height. Height detection is not good for sounds originating to the side or back, or lacking high frequency content. Peter Elsea 1996
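The left-right cue described above, the difference in arrival time at the two ears, can be approximated with a simple geometric model for a distant source. A hedged sketch: the ear separation and speed of sound are assumed round numbers, not values from the text.

```python
import math

EAR_SEPARATION = 0.2     # metres between the ears (assumed round number)
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def itd_seconds(angle_deg: float) -> float:
    """Interaural time difference for a distant source at the given
    angle from straight ahead, using ITD = d * sin(angle) / c."""
    return EAR_SEPARATION * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

print(round(itd_seconds(90) * 1e6))  # source hard to one side: ~583 microseconds
print(round(itd_seconds(0) * 1e6))   # source straight ahead: 0
```

Delays of a few hundred microseconds are all the auditory system has to work with, which is consistent with the text's point that localization is most accurate for sounds with sharp attacks.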
Healthy practices can assist in defense against swine flu

The hot, current news item is the flu. In this case, it is influenza H1N1, more commonly known as "swine flu." H1N1 is one of several types of influenza (flu) viruses that cause respiratory disease that can spread between people. Most people infected with this virus in the United States have had mild cases, but some have been severely ill, and there have been at least three deaths. The symptoms of swine flu in people are similar to those of regular seasonal influenza and include fever, lethargy, lack of appetite and coughing. Some people with swine flu have also reported runny nose, sore throat, nausea, vomiting and diarrhea. There are some simple, precautionary steps that we can take to protect our health. The Centers for Disease Control and Prevention offer the following hints for optimizing health:
• Avoid close contact with people who are sick. When you are sick, keep your distance from others to protect them from getting sick too. If possible, stay home from work, school and errands when you are sick. You will help prevent the spread of illness to others.
• Cover your mouth and nose with a tissue when coughing or sneezing. It may prevent those around you from getting sick. Wash your hands thoroughly after coughing, sneezing or blowing your nose. Germs are often spread when a person touches something that is contaminated with germs and then touches his or her eyes, nose or mouth. Use warm water and soap and scrub for at least 20 seconds. Hand sanitizer can be used as an extra line of defense but should be used with care with young children.
• Practice other good health habits, including:
— Eat nutritious foods – whole grains and fruits and vegetables in particular are rich in phytonutrients that are beneficial to a healthy immune system, which improves your ability to fight disease. And for the record, you cannot get this flu from eating pork!
— Drink plenty of fluids, especially water.
Using your body weight as a guide, divide your weight by 2 and drink that many ounces of water. Example: if you weigh 150 pounds, strive for 75 ounces of water daily. Yes, that sounds like a lot of water, but many people go through each day in a state of near dehydration. And yes, you will make more trips to the bathroom as your body adjusts to the new level of hydration.
— Get plenty of sleep – eight hours per night for optimum health. Inadequate sleep stresses the immune system.
— Get daily physical activity – 30 minutes per day is a good goal. This not only improves cardiovascular, lymphatic and joint function, but is also good for stress management.
— Manage your stress – stress hormones work in conflict with a healthy immune system. Quiet time, prayer, meditation or relaxing hobbies contribute greatly to overall health.
More information on influenza is available online at pandemicflu.gov.
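The water rule of thumb above (body weight in pounds divided by 2, taken as ounces per day) is simple arithmetic; a trivial sketch, with a function name of our own choosing:

```python
# Daily water target per the article's rule of thumb:
# weight in pounds / 2 = ounces of water per day.
def daily_water_ounces(weight_lbs: float) -> float:
    return weight_lbs / 2

print(daily_water_ounces(150))  # 75.0 ounces, matching the article's example
```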
dd is a common Unix program whose primary purpose is the low-level copying and conversion of raw data. You can use this command to back up whole hard drives, create a large file filled with only zeros, create and modify image files at specific points, and even do conversions to upper case. The dd command can strip headers, write to the middle of a disk, and extract parts of binary files, and it is used by the Linux kernel makefiles to make boot images. dd is also capable of copying and converting magnetic tape formats, converting between ASCII and EBCDIC, and swapping bytes.

Syntax: dd if=inputfile of=outputfile bs=blocksize

Here we will create a fixed 10 MB file using the dd command:

[root@localhost ~]# dd if=/dev/zero of=testfile10MB bs=10485760 count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.253435 seconds, 41.4 MB/s

In the above example, "if" means input file (where the data is read from), "of" means output file (where the data is written to), and "bs" is the block size, the number of bytes read and written at a time. By default, Linux takes the block size (bs) as 512 bytes if you don't specify the bs option.

Yes, the dd command can also be used to wipe data. Note: be careful while executing this command.

[root@localhost ~]# dd if=/dev/zero of=/root/anaconda bs=1024 count=5
5+0 records in
5+0 records out
5120 bytes (5.1 kB) copied, 0.00161641 seconds, 3.2 MB/s

In the above example we are writing null bytes to the /root/anaconda file; the command overwrites all of the file's content with nulls. In a similar way you can wipe an entire disk/partition and the MBR content by executing the following command:

[root@localhost ~]# dd if=/dev/zero of=/dev/ (name of the disk or partition)

Note: the Master Boot Record resides in the first sector of the disk, which is 512 bytes; 446 of those bytes are used for storing boot loader information. To wipe only the boot loader code, count=1 is needed, otherwise the whole disk is overwritten in 446-byte blocks. Again, be careful while trying this.

[root@localhost ~]# dd if=/dev/zero of=/dev/hda bs=446 count=1

Or you can also use the urandom device:

# dd if=/dev/urandom of=/dev/hdc
Physical Features of Brazil

The Amazon Basin
- Covers more than 2 million square miles; much of it is covered by the world's largest rain forest, which floods every 6 months.
- Its wet lowlands cover most of the country's northern and western areas.
- Most of Brazil lies in the Tropics, the area between the Tropic of Cancer and the Tropic of Capricorn.
- Areas along the equator and the Amazon River have a tropical rain forest climate: every day is warm and wet. The average daytime temperature is 80°F, but it feels hotter because the wet rain forest makes the air humid.
- Strong winds called monsoons bring a huge amount of rain (more than 120 inches!) each year, causing the Amazon River to flood.
- These areas also have a "dry" season. During the dry season, forest fires are a danger even in a rain forest.

The Highlands
- South and east of the Amazon Basin; this region has many plateaus.
- The western part of the highlands is largely grasslands.

Atlantic lowlands
- About 125 miles wide in the north, becoming even narrower in the southeast.

The Amazon River
- The Western Hemisphere's longest river; it carries the largest amount of water of any river and is the second longest river in the world.
- Begins in the Andes (Peru) and flows east to the Atlantic Ocean; has over 1,000 tributaries.
- Wide enough and deep enough for ships to pass and deliver cargo.

Resources
- Brazil is the world's largest producer of coffee, sugarcane, and tropical fruits. The country also produces great amounts of soybeans, corn, and cotton.
- Brazil has rich mineral resources that are only partly developed. They include iron ore, tin, copper, bauxite, gold, and manganese.
- Forests cover about 60 percent of Brazil, accounting for about 7 percent of the world's timber resources. The rain forest's mahogany and other hardwoods are highly desirable for making furniture. The rain forest is also a source of natural rubber, nuts, and medicinal plants.
- Logging, mining, and other development have become a major environmental issue.
As swine flu spread from Mexico to Texas and then fanned out farther in the United States, Americans began to alter their behavior. Families kept children home from school, postponed trips to the mall, and stayed home instead of eating out. In so doing, the American population may have inadvertently altered the behavior of the pathogen itself. How human behavior changes the spread of emerging infectious diseases, and how the spread of disease simultaneously changes human behavior, will be among the topics discussed by scientists at a meeting at the National Institute for Biological and Mathematical Synthesis (NIMBioS) at the University of Tennessee, Knoxville, June 7-9. Ecologists, epidemiologists, economists, and mathematicians will comprise a NIMBioS Working Group to tackle the topic, "Synthesizing and Predicting Infectious Disease While Accounting for Endogenous Risk" or SPIDER. Accounting for endogenous risk means jointly considering how human behavior influences disease and how disease influences human behavior, explained Eli Fenichel, workshop co-organizer and assistant professor at Arizona State University. "When people perceive risks, they alter their behavior, which in turn, alters the risk. It's a feedback loop between people, the pathogen, and the risk." Most current attempts to model the risks of emerging infectious diseases look at the disease itself and human behavior. The SPIDER Working Group aims to build on that classic view by also considering the economic impact of human decisions about risk. "Epidemiological science has gotten good at modeling and projecting risk. The next major frontier is how do we manage risk in a cost effective way," Fenichel said. "It's a way of thinking about how resources get allocated to address emerging pathogens like the flu now. For example, if we believe that people will behave in a certain way given certain information sets, we might be able to find better ways to distribute medicine." 
Another avenue for investigation is how the global food trade system would be affected if it becomes the source of a pathogen, Fenichel said. "One of the questions is how do we set up inspections in a cost effective way if we cannot reasonably inspect everything. We need to look at how to best balance the risks and the costs." The group aims to develop predictive models to forecast the risks associated with emerging infectious diseases in humans, livestock, wildlife, and plants, and to collaborate in developing risk management strategies. NIMBioS Working Groups are composed of 10-15 invited participants and focus on specific questions related to mathematical biology. Each group typically meets two to three times over the course of two years at the Institute. Source: University of Tennessee at Knoxville
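The behaviour-risk feedback loop Fenichel describes can be illustrated with a toy model. This is our own minimal sketch, not the SPIDER group's model: a discrete-time SIR system in which the contact rate falls as prevalence rises, so more infection means more distancing, which in turn means less spread. All parameter values are arbitrary assumptions.

```python
def simulate(beta0=0.5, gamma=0.2, alpha=10.0, days=100):
    """Discrete-time SIR with an endogenous behavioural response:
    the transmission rate beta0 is scaled down by 1/(1 + alpha*I),
    so risk changes behaviour and behaviour changes risk."""
    S, I, R = 0.99, 0.01, 0.0
    history = []
    for _ in range(days):
        beta = beta0 / (1.0 + alpha * I)   # behavioural response to prevalence
        new_inf = beta * S * I
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append(I)
    return history

peak_with_behaviour = max(simulate(alpha=10.0))
peak_without = max(simulate(alpha=0.0))
print(peak_with_behaviour < peak_without)  # True: the response flattens the epidemic curve
```

The design choice worth noting is that alpha, the strength of the behavioural response, is itself something an economist would want to estimate or influence, which is exactly the coupling the working group set out to study.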
Light-emitting diodes (LEDs) present opportunities for horticultural lighting because of their high electrical efficiencies and tunable light spectra. Today, LED arrays are as efficient as, or more efficient than, conventional lamps such as high-pressure sodium (HPS) or fluorescent, and their efficiencies continue to increase. The wavelengths, or colors, of commercially available LEDs vary widely, which allows unique mixtures of colors to be developed that can result in desirable growth responses. For example, manipulating light quality to produce short, well-branched plants could reduce the need for plant growth retardants or other height-suppressing strategies. In Part I of this article (January GG), we presented results from our indoor LED lighting experiment with orange, red and hyper red light. We concluded that because plants grew similarly under different colors of red light, red LEDs for horticultural applications could be chosen based on their longevity, efficiency and cost, without affecting plant quality. In our second experiment, tomato ‘Early Girl,’ salvia ‘Vista Red,’ petunia ‘Wave Pink’ and impatiens ‘SuperElfin XP Red’ were grown in growth chambers, beginning after germination, for four weeks at 68°F. The photosynthetic light intensity was fixed at 160 µmol·m⁻²·s⁻¹ and was delivered for 18 hours each day. Light was delivered using different percentages of blue (B; peak of 446 nm), green (G; peak of 516 nm) and red (R) light. The red light was delivered by equal intensities of two different LEDs with peak wavelengths of 634 or 664 nm. The LED treatment percentages were: B25+G25+R50 (25 percent of light from blue and green LEDs and 50 percent from red LEDs), B50+G50, B50+R50, G50+R50, R100 and B100. Plants were also grown under cool-white fluorescent lamps to serve as a control treatment. A variety of data were collected, including the fresh weight of the shoots, seedling height and total leaf area.
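As a quick arithmetic check, the treatment percentages above translate directly into per-channel photon flux out of the fixed 160 µmol·m⁻²·s⁻¹ total. A small illustrative sketch (the function name is ours):

```python
TOTAL_PPFD = 160  # total photosynthetic photon flux, µmol per m^2 per s

def channel_flux(percentages):
    """Split the total flux among color channels given percentage shares."""
    return {color: TOTAL_PPFD * pct / 100 for color, pct in percentages.items()}

# The B25+G25+R50 treatment from the article:
print(channel_flux({"B": 25, "G": 25, "R": 50}))  # {'B': 40.0, 'G': 40.0, 'R': 80.0}
```

So in the B25+G25+R50 treatment the plants received 40 µmol·m⁻²·s⁻¹ each of blue and green light and 80 µmol·m⁻²·s⁻¹ of red.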
Plants Grown Under Different Color Light Finished At Various Heights Plants were 40 to 60 percent shorter when grown under at least 25 percent blue light as compared to plants grown under only red light (Figures 1 and 2). Plants grown under 50 percent green light were shorter than those under all red, but taller than those grown with some blue light. The leaf area of plants grown under only red light was 50 to 130 percent greater than with 25 percent or more blue light. Similarly, plants had 50 to 110 percent greater shoot fresh weight under only red light than plants under 25 percent or more blue light (Figure 3). Plants grown under fluorescent lamps often had fresh weight and leaf area similar to that of the plants grown under only red light, while their height was similar to plants under the G50+R50 treatment. Impatiens was the only species to produce flower buds during the experiment, but only under at least 25 percent blue light. The number of leaflets of tomato that developed edema was greatest under only red light, and edema decreased as the percentage of blue light increased. In a third experiment, the same species of bedding plants were grown for four weeks, also at 160 µmol·m⁻²·s⁻¹, under different percentages of blue and red light. The LED treatments were all red (R100), all blue (B100) or the following percentages: B6+R94, B13+R87, B25+R75 and B50+R50. The red light was delivered by two different red LEDs as in the previous experiment. Plants grown under at least 6 percent blue light were 25 to 50 percent shorter than plants without any blue light (Figure 4). Leaves of impatiens, salvia and petunia were approximately twice as large when grown under only red light compared to leaves on plants under at least 50 percent blue light. Similar to the second experiment, providing an increasing percentage of blue light resulted in more flower buds on impatiens and less edema on tomato.
LEDs Can Help Growers Produce Young Plants With Desired Growth Attributes We can conclude that adding blue light to the spectrum inhibits stem elongation and leaf expansion, while growing plants under only red light can increase stem length and leaf size. Twenty-five percent green light can substitute for 25 percent blue light without affecting fresh weight, but plants will be taller. However, the electrical efficiency of green LEDs on the market today is much lower than that for blue LEDs, so the economics of using green LEDs in young plant production are probably unfavorable. Adding blue light to a red-dominant environment stimulated flowering in impatiens and decreased the incidence of edema in tomato. The ratio of blue and red light can be adjusted to produce seedlings with desired leaf sizes and stem lengths. Red light increased leaf size and stem length, which resulted in plants that had the greatest biomass. Plants under at least some blue light were more compact and generally were of greater horticultural quality, but leaf size was also reduced, which subsequently suppressed shoot growth. This information can be used to help growers produce young plants with desired growth attributes, which can’t easily be done with conventional lighting technologies. Light spectra could even be fine-tuned during crop production; for example, the proportion of blue light could be increased if seedlings are getting too tall. Editor’s Note: The authors thank Mike Olrich for his technical assistance, funding from Osram Opto Semiconductors, the USDA Floriculture and Research Initiative and private companies that support MSU Floriculture.
Polonium (Po), a radioactive, silvery-gray or black metallic element of the oxygen group (Group 16 [VIa] in the periodic table). The first element to be discovered by radiochemical analysis, polonium was discovered in 1898 by Pierre and Marie Curie, who were investigating the radioactivity of pitchblende, a uranium ore. The very intense radioactivity not attributable to uranium was ascribed to a new element, named by them after Marie Curie’s homeland, Poland. The discovery was announced in July 1898. Polonium is extremely rare, even in pitchblende: 1,000 tons of the ore must be processed to obtain 40 milligrams of polonium. Its abundance in the Earth’s crust is about one part in 10¹⁵. It occurs in nature as a radioactive decay product of uranium, thorium, and actinium. The half-lives of its isotopes range from a fraction of a second up to 103 years; the most common natural isotope of polonium, polonium-210, has a half-life of 138.4 days. Polonium usually is isolated from by-products of the extraction of radium from uranium minerals. In the chemical isolation, pitchblende ore is treated with hydrochloric acid, and the resulting solution is heated with hydrogen sulfide to precipitate polonium monosulfide, PoS, along with other metal sulfides, such as that of bismuth, Bi2S3, which resembles polonium monosulfide closely in chemical behaviour, though it is less soluble. Because of the difference in solubility, repeated partial precipitation of the mixture of sulfides concentrates the polonium in the more soluble fraction, while the bismuth accumulates in the less soluble portions. The difference in solubility is small, however, and the process must be repeated many times to achieve a complete separation. Purification is accomplished by electrolytic deposition. It can be produced artificially by bombarding bismuth or lead with neutrons or with accelerated charged particles. Chemically, polonium resembles the elements tellurium and bismuth.
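The 138.4-day half-life quoted above translates directly into a decay curve. A minimal sketch using the standard half-life formula N(t) = N₀ · (1/2)^(t / t½), with the function name our own:

```python
T_HALF = 138.4  # days, half-life of polonium-210 as quoted above

def fraction_remaining(days: float) -> float:
    """Fraction of an initial polonium-210 sample left after the given time."""
    return 0.5 ** (days / T_HALF)

print(fraction_remaining(138.4))  # one half-life: 0.5
print(fraction_remaining(276.8))  # two half-lives: 0.25
```

After about a year (2.6 half-lives), less than a fifth of a sample remains, which is why polonium-210 sources have short working lives.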
Two modifications of polonium are known, an α- and a β-form, both of which are stable at room temperature and possess metallic characteristics. The fact that its electrical conductivity decreases as the temperature increases places polonium among the metals rather than the metalloids or nonmetals. Because polonium is highly radioactive—it disintegrates to a stable isotope of lead by emitting alpha rays, which are streams of positively charged particles—it must be handled with extreme care. When contained in such substances as gold foil, which prevent the alpha radiation from escaping, polonium is used industrially to eliminate static electricity generated by such processes as paper rolling, the manufacture of sheet plastics, and the spinning of synthetic fibres. It is also used on brushes for removing dust from photographic film and in nuclear physics as a source of alpha radiation. Mixtures of polonium with beryllium or other light elements are used as sources of neutrons.

melting point: 254 °C (489 °F)
boiling point: 962 °C (1,764 °F)
oxidation states: −2, +2, +3(?), +4, +6
What is conservation?
Conservation involves the maintenance of biodiversity, including diversity between species, genetic diversity within species, and maintenance of a variety of habitats and ecosystems.

Human threat to biodiversity
An increasing human population poses a threat to the maintenance of biodiversity through:
- over-exploitation of wild populations for food, sport and commerce. This leads to species being harvested more quickly than they can reproduce.
- habitat disruption and fragmentation. This can result from intensive agricultural techniques, increased pollution or widespread building.
- new species being introduced to an ecosystem. These can out-compete native species.

Why is conservation important - ethics
Some people believe that every species has value in its own right, irrespective of its value to humans:
- every species has the right to survive
- humans have an ethical responsibility to look after them
The arguments against this approach are economic. For example, burning fossil fuel has negative consequences for the environment but is economically essential.

Why is conservation important - economic reasons
Many species have a direct economic value when harvested:
- valuable food source
- collection of drugs, e.g. aspirin from willow
- natural predators of pests can act as biological control agents
Many species have an indirect economic value:
- wild insects are responsible for pollinating crops
- some communities preserve water quality, protect soil and break down waste products
- ecotourism has financial value, drawn from the aesthetic value of living things. It depends on biodiversity.

Preservation is important in protecting areas of land, as yet unused by humans, in their untouched form.
Ask anyone to name an animal you would find living in a Scottish river or loch and they’d probably say some sort of fish, maybe a frog or toad, possibly an otter. Very few people would give much thought to the hundreds of tiny water beasties that live, hidden for the most part, beneath the surface. But it is these water beasties, or invertebrates as they are known, that play an essential role in maintaining our aquatic systems and help us to protect the water environment. A huge variety of water invertebrates live in our burns, rivers, lochs, canals and ponds; from the more easily recognisable snails, worms and freshwater shrimps, to the slightly alien looking mayfly and stonefly larvae. All are fascinating in their own right. Some spend all their lives in the water, while others are only there as juveniles or adults. Some are voracious predators hunting smaller water beasties, while others have adapted their own unique ways to catch food like the caddisfly larvae, which spins a net to trap food as it floats past. Some are common in rivers across the UK, while others are of international conservation importance like the endangered freshwater pearl mussel, with half of the worldwide breeding population found in Scottish waters. Regardless of shape or size, what they eat, how long they live in the water environment and whether they are rare or not, all invertebrates are important in maintaining aquatic ecosystems. Plants and algae grow in the river and some invertebrates graze directly on these. For others, coarse materials like leaves and wood fall into rivers providing a vital food source. Some invertebrates are able to shred and eat this material and in the process break it down into smaller bits. Other animals can then eat these smaller particles using specialised mouthparts or, for the really tiny particles, filter them from the water. 
This process means nutrients entering rivers and other water bodies are able to be used within the system to support large numbers of invertebrates. These in turn are food for other invertebrates and larger animals such as fish, birds and even bats, which hunt for insects that have emerged from the water. Without water invertebrates the scope for life in our rivers and lochs would be limited. Not only are water invertebrates essential in maintaining life in our water environment but they can help us to protect it too. Invertebrates provide us with valuable information about how healthy our rivers and lochs are and we can use them as a means of assessing the quality of our freshwaters. Some species, such as the stonefly, are extremely sensitive to pollution and can only live in clean water. Others, like worms, can tolerate highly polluted conditions. In between these two extremes there are a range of species with different degrees of sensitivity. Some prefer rivers with clean gravel and hardly any silt, while others thrive in more sluggish, silty conditions where silt has accumulated. This could be due to excess run off from the land or as a result of low flow rates that are unable to flush the silt downstream. By taking a sample of the invertebrates living in a river and identifying what species are present we can get a quick and accurate picture of the river’s health. It also helps us to decide whether other changes, such as siltation, are affecting the life in the river. If we find pollution sensitive species and those that like silt free conditions, we can be confident that the water quality is good. If, on the other hand, all we find are those species that are tolerant of pollution or like silty conditions, we can assume that the water environment has become polluted or affected in some way. Sampling and monitoring the invertebrates in our rivers and lochs has formed the basis for assessing the quality of freshwater in the UK and worldwide for many years. 
We have a network of hundreds of sites across Scotland that our ecologists regularly visit to take samples. Along with other information recorded at the site, such as the colour and clarity of the water and the condition of the banks, we can judge the quality of the river against a ‘reference level’ - the ideal state of a river where there is no human impact other than what you would expect to find. We can then grade the river as bad, poor, moderate, good or high, based on how close it is to the reference condition. This valuable information then helps us target our action to protect and improve Scotland’s water environment. So the next time you look at a river or loch, give a thought to the water beasties living beneath the surface – there’s more to them than meets the eye.
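The sampling-and-scoring idea described above can be sketched in code. This is a deliberately simplified, hypothetical index: the sensitivity scores and grade thresholds below are illustrative inventions, not the real scoring tables used by monitoring agencies, though the principle (sensitive taxa score high, tolerant taxa score low, average score per taxon gives a grade) is the one the text describes.

```python
# Hypothetical sensitivity scores: 10 = clean-water specialists like stonefly,
# 1 = highly pollution-tolerant taxa like worms.
SENSITIVITY = {
    "stonefly": 10,
    "mayfly": 8,
    "caddisfly": 7,
    "freshwater shrimp": 5,
    "snail": 3,
    "worm": 1,
}

def grade(sample):
    """Grade a river from the taxa found in a sample, using the
    average sensitivity score per taxon (thresholds are illustrative)."""
    scores = [SENSITIVITY[taxon] for taxon in sample if taxon in SENSITIVITY]
    average = sum(scores) / len(scores)
    if average >= 7:
        return "good"
    if average >= 4:
        return "moderate"
    return "poor"

print(grade(["stonefly", "mayfly", "caddisfly"]))  # clean-water community: good
print(grade(["worm", "snail"]))                    # pollution-tolerant community: poor
```

Finding only tolerant taxa drags the average down, which mirrors the article's point: the community composition itself is the measurement.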
Definition of Unit Rate
Unit Rate is the ratio of two measurements in which the second term is 1.

More About Unit Rate
The two measurements involved in a unit rate are always of different units.

Example of Unit Rate
If Nancy earns $180 in 20 hours, then the unit rate of her earning is given as 180/20 = $9 per hour.

Solved Example on Unit Rate
Ques: William can pack 60 toys in 4 hours. Find the unit rate at which he packs toys.
- A. 16 toys/hour
- B. 15 toys/hour
- C. 14 toys/hour
- D. 13 toys/hour
Correct Answer: B
- Step 1: Unit rate is a ratio of different units of measurement with the second term being equal to 1.
- Step 2: So, Unit Rate = toys/hours = 60/4
- Step 3: = (60 ÷ 4)/(4 ÷ 4) [Divide both the numerator and the denominator by the denominator.]
- Step 4: = 15/1 = 15 [Simplify.]
- Step 5: So, William can pack 15 toys/hour.
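The calculation in the steps above reduces to a single division; a small illustrative sketch (the function name is ours):

```python
# A unit rate divides the first quantity by the second so the
# second term becomes 1 (e.g. dollars per 1 hour, toys per 1 hour).
def unit_rate(amount: float, per: float) -> float:
    return amount / per

print(unit_rate(180, 20))  # Nancy's earnings: 9.0 dollars per hour
print(unit_rate(60, 4))    # William's packing: 15.0 toys per hour
```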
The Latin saying Ad fontes is an essential motto of humanism – the intellectual current of the Renaissance – and can be translated as “To the sources”. Renaissance humanism, whose origins lie in Italy in the fifteenth and sixteenth centuries, supported a comprehensive reform of education. Its highest goal was the formation of the human being, bringing the mental abilities of the individual to full development. The cultivation of linguistic expression was important, meaning that a central role was played by the use of language and correct expression – both oral and written – in Latin. The guiding principle of Ad fontes is that in studying, one should rely on the original texts and sources of the Greek and Roman poets and philosophers in order to grasp the background of theories, world views and literary works. The motto was coined in 1511 by the humanist Erasmus of Rotterdam. The theologian, philosopher, philologist and author Erasmus of Rotterdam (c. 1466/67–1536), who is regarded as an essential shaper of humanism, acted as a multiplier for these ideas. More than 150 books are from his pen, written exclusively in Latin, and his oeuvre gained tremendous attention during his lifetime. Erasmus of Rotterdam took the view that people are not born as human beings, but are educated as such. He assumed that the study of the ancient scholars, especially the Greek philosophers, and the restoration of their original texts, was essential for this. Thus he writes in De ratione studii ac legendi interpretandique auctores (1511): Sed in primis ad fontes ipsos properandum, id est graecos et antiquos. Translation: Above all, one must hurry to the sources themselves, that is, to the Greeks and the ancients. This work of Rotterdam’s became programmatic for humanism, the essential foundation describing the aims of the movement.
The essential aspect is that Rotterdam points out that one should consult the original source of a matter in order to grasp it properly. This very approach also inspired the theologian Martin Luther. During the Middle Ages, especially the Latin translation of the Bible, the Vulgate, was used, which since late Antiquity had prevailed against other translations of the Gospels. When, in the course of the Reformation, Martin Luther translated the Bible into German, he did not use this translation; following the approach of ad fontes, he relied instead on ancient Hebrew, Aramaic and ancient Greek sources. This reasoning is subsequently found in numerous authors of humanism. Thus the philologist, philosopher, humanist and theologian Philipp Melanchthon (1497–1560), for example, demanded of the students of the Wittenberg university that they learn Greek in addition to Latin so that, when reading the philosophers, the theologians, the historians, the orators and the poets, they could go to the matter itself and not merely “embrace its shadow”.

Short overview: the most important points
The Latin saying Ad fontes means “To the sources”. It is considered a motto of humanism and was shaped above all by Erasmus of Rotterdam.
The saying means that when studying a text, one should refer to the original in order to understand the essentials and not rely on false assumptions.
The phrase is related to the phrase Ab initio, which can be translated as “from the beginning”, meaning that a thing is developed or learned from the beginning.
Also related is the phrase Ab ovo, literally “from the egg”, which describes a narrative that starts at the very beginning and shows the prehistory of the action.
The zero of a function is any replacement for the variable that will produce an answer of zero. Graphically, the real zero of a function is where the graph of the function crosses the x‐axis; that is, the real zeros of a function are the x‐intercept(s) of the graph of the function.

Find the zeros of the function f(x) = x² – 8x – 9.

Find x so that f(x) = x² – 8x – 9 = 0. f(x) can be factored, so begin there:

f(x) = x² – 8x – 9 = (x – 9)(x + 1) = 0

So x – 9 = 0 or x + 1 = 0, giving x = 9 or x = –1. Therefore, the zeros of the function f(x) = x² – 8x – 9 are –1 and 9. This means f(–1) = 0 and f(9) = 0.

If a polynomial function with integer coefficients has real zeros, then they are either rational or irrational values. Rational zeros can be found by using the rational zero theorem.
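The factoring result can be checked numerically with the quadratic formula; a small sketch (the function name is ours, and it assumes real roots, i.e. a non-negative discriminant):

```python
import math

def quadratic_zeros(a: float, b: float, c: float):
    """Real zeros of ax^2 + bx + c via the quadratic formula,
    assuming the discriminant is non-negative."""
    disc = b * b - 4 * a * c
    root = math.sqrt(disc)
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])

print(quadratic_zeros(1, -8, -9))  # [-1.0, 9.0], matching the factoring above
```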
There is unequivocal scientific evidence that our planet is warming, primarily as the result of human activity since the mid-20th century. The planet’s average surface temperature has risen by about 2.0 degrees Fahrenheit (1.1 degrees Celsius), mainly in the past 35 years. Warming oceans, shrinking ice sheets and glacial retreat are all being observed today. The sea level has risen by about 8 inches during the last century. These changes are largely driven by increased carbon dioxide and other human-made emissions into the atmosphere. While at the global level we are still arguing over the sources and effects of climate change on the planet and people, often driven more by self-interest than scientific evidence, we in the developing world continue to suffer, even when we are not the culprits. Carbon dioxide emissions, at 11.2 tonnes per capita in the high human development countries, were 27 times higher than in low human development countries in 2010. For us climate change is real. The case in point is Zimbabwe. Just as for the planet, Zimbabwe’s climate is warming. We experience more hot days and fewer cold days than in the past. Our average temperature will be 0.5 to 2°C warmer by 2030. The implications for livelihoods, well-being and human development are huge. Most Zimbabweans rely on rain-fed agriculture and livestock for a living, so frequent droughts and floods have serious implications for their crop yields and, hence, their incomes. The Global Hunger Index (GHI) already puts Zimbabwe in the “serious” category on the GHI severity scale. Moreover, even a slight change in temperature and precipitation might increase the frequency of vector-borne diseases, including malaria, dengue and yellow fever epidemics, as well as water-borne diseases such as diarrhoea and typhoid fever. Zimbabwe is already experiencing frequent outbreaks of cholera and typhoid. Hunger is low on average but rises sharply in some seasons.
So, it is not a surprise that a million people in Zimbabwe are vulnerable to adverse climate shocks. The recent drought, which was followed by violent floods, put over 4.2 million Zimbabweans – more than a quarter of the total population – in need of food assistance. These shocks will constrain or even reverse the gains in human development that Zimbabwe has made over the years. It is in this context that we produced the Zimbabwe Human Development Report: towards building a climate resilient nation. The report was launched in March 2018 at a high-profile gathering that included the UNDP Administrator Mr. Achim Steiner and the Minister of Environment, Water and Climate, Hon. Oppah Muchinguri-Kashiri. The report aims to prompt policy makers to take concrete action towards climate mitigation and adaptation. Key recommendations of the report include, among others: Adaptation and mitigation - Strengthen national capacity to adapt to the effects of climate change, including efficient management and use of water, the promotion and rehabilitation of water-related infrastructure such as irrigation, and investments in water harvesting techniques. - Invest in climate-smart agricultural technologies, including developing drought-tolerant, high-yielding varieties of crops. - Map land and crop suitability and diversify crop production, including promotion of drought-resistant small grains and exploration of biofortified crop production. - Adopt adaptive interventions for livestock production, including planned de-stocking, encouraging the rearing of heat-tolerant indigenous breeds, and improving livestock feed during droughts. - Introduce school-based health and nutrition programmes. - Promote renewable energy adoption and the reduction of wild forest fires. - Strengthen early warning systems to monitor, detect, forecast and provide up-to-date information on climate change issues for timely action.
- Strengthen disaster risk management to deal with flood-related and other disasters efficiently and limit the danger to human life. - Protect wetlands as a buffer against flood waters and regulate property development in wetlands. - Develop a strategy for climate-proofing infrastructure, including through the choice of location and design of new infrastructure. - Establish a comprehensive social insurance and social safety net to support poor people affected by natural disasters. - Promote diversification of livelihoods, and improved and diversified production, as a climate resilience measure. - Improve financial and non-financial channels for remittances to help manage temporary or permanent shocks and to escape climate-induced poverty. - Build social capital through collective work, such as food-for-work programmes, to absorb the stress of food insecurity. - Introduce weather-based insurance to minimize smallholder farmers’ risk of losing their investments. - Integrate climate change issues into the development planning process at all levels, including national, district and local. - Improve coordination on climate change issues among public institutions, the United Nations, development partners, the private sector and civil society. - Use grassroots structures as a core building block of climate governance for context-specific local climate action and effective and timely responses. - Strengthen disease surveillance systems for early detection of disease outbreaks and action. The national human development report has so far been widely accepted by stakeholders. Live Twitter messages reached over 4.5 million users on the launch day itself. The Minister of Environment, Water and Climate, Hon. Oppah Muchinguri-Kashiri, wants to have a partnership built to jointly implement the recommendations of the report.
A high-level panel discussion on “Towards building a climate resilient nation” is planned for the 18th of June 2018 to further discuss the findings of the report and the actions to be taken. Plans are also underway to hold “Green Talks” with the youth so they can share their perspectives on climate change and its impacts on human development. This would also help inculcate environmentally-friendly attitudes among young people. The HDialogue blog is a platform for debate and discussion. Posts reflect the views of respective authors in their individual capacities and not the views of UNDP/HDRO. HDRO encourages reflections on the HDialogue contributions. The office posts comments that support a constructive dialogue on policy options for advancing human development and that are formulated with respect for other, potentially differing views. The office reserves the right to exclude contributions that appear divisive. Photo: UNDP Zimbabwe
Symptoms vary from person to person, but the typical lupus patient is a young woman experiencing fever, swollen lymph nodes (glands), a butterfly-shaped rash on her face, arthritis of the fingers, wrists or other small joints, hair loss, chest pain and protein in the urine. Symptoms usually begin in only one or two areas of the body, but more may develop over time. Corticosteroids can also cause or worsen osteoporosis, a disease in which bones become fragile and more likely to break. If you have osteoporosis, you should eat foods rich in calcium every day to help with bone growth. Examples are dark green, leafy vegetables (spinach, broccoli, collard greens), milk, cheese, and yogurt, or calcium supplements that contain vitamin D. Preventive measures are necessary to minimize the risks of steroid-induced osteoporosis and accelerated atherosclerotic disease. The American College of Rheumatology (ACR) guidelines for the prevention of glucocorticoid-induced osteoporosis suggest the use of traditional measures (e.g., calcium, vitamin D) and the consideration of prophylactic bisphosphonate therapy. The NIH National Institute of Arthritis and Musculoskeletal and Skin Diseases (2014) notes that symptoms may vary depending on the type of lupus and the person. Symptoms tend to ‘come and go’, ‘flare’ from mild to severe intensity, and new symptoms of lupus can arise at any stage (NIH, 2014). Better Health Channel (n.d.) states that lupus may even become life-threatening, for example, should it damage major organs such as the kidneys or brain. Dermatomyositis (DM) and polymyositis (PM): While almost all people with lupus have a positive ANA test, only around 30 percent of people with DM and PM do. Many of the physical symptoms are different as well. For instance, people with DM and PM don't have the mouth ulcers, kidney inflammation, arthritis, and blood abnormalities that people with lupus do.
Headache is a common neurological problem in people with SLE, although the existence of a specific lupus headache and the optimal approach to headache in SLE cases remain controversial. Other common neuropsychiatric manifestations of SLE include cognitive dysfunction, mood disorder, cerebrovascular disease, seizures, polyneuropathy, anxiety disorder, psychosis, depression, and in some extreme cases, personality disorders. Steroid psychosis can also occur as a result of treating the disease. SLE can rarely present with intracranial hypertension syndrome, characterized by elevated intracranial pressure, papilledema, and headache with occasional abducens nerve paresis, absence of a space-occupying lesion or ventricular enlargement, and normal cerebrospinal fluid chemical and hematological constituents. Lupus nephritis is one of the most common complications of lupus. (13) People with lupus nephritis are at a higher risk of developing end-stage renal disease, requiring dialysis or a transplant, says Kaplan. Symptoms of the condition include high blood pressure; swelling of the hands, arms, feet, legs, and area around the eyes; and changes in urination, such as noticing blood or foam in the urine, needing to go to the bathroom more frequently at night, or pain or trouble urinating. High mobility group box 1 (HMGB1) is a nuclear protein participating in chromatin architecture and transcriptional regulation, and elevated expression of HMGB1 has been found in the sera of people and mice with systemic lupus erythematosus. Recently, there has been increasing evidence that HMGB1 contributes to the pathogenesis of chronic inflammatory and autoimmune diseases due to its inflammatory and immune-stimulating properties. We conducted a systematic evidence-based review of the published literature on systemic lupus erythematosus. After searching several evidence-based databases (Table 1), we reviewed the MEDLINE database using the PubMed search engine.
Search terms included “lupus not discoid not review not case” and “lupus and treatment and mortality,” with the following limits: 1996 to present, abstract available, human, and English language. One author reviewed qualifying studies for relevance and method. To determine which people with positive ANA tests actually have lupus, additional blood work can be done. Doctors look for other potentially troublesome antibodies, so they will test for anti-double-stranded DNA and anti-Smith antibodies. These tests are less likely to be positive unless a patient truly has lupus. However, a person with negative results on these tests could still have lupus, whereas a negative ANA test makes lupus unlikely. Joint pain is the most common reason for seeking medical attention, with the small joints of the hand and wrist usually affected, although all joints are at risk. More than 90 percent of those affected will experience joint or muscle pain at some time during the course of their illness. Unlike rheumatoid arthritis, lupus arthritis is less disabling and usually does not cause severe destruction of the joints. Fewer than ten percent of people with lupus arthritis will develop deformities of the hands and feet. People with SLE are at particular risk of developing osteoarticular tuberculosis. Chronic cutaneous (discoid) lupus: In discoid lupus, the most common form of chronic cutaneous lupus, inflammatory sores develop on your face, ears, scalp, and other body areas. These lesions can be crusty or scaly and often scar. They usually don't hurt or itch. Some patients report lesions and scarring on the scalp, making hair re-growth impossible in those areas. Most people with discoid lupus do not have SLE. In fact, discoid lupus is more common in men than in women. Conventional lupus treatment usually involves a combination of medications used to control symptoms, along with lifestyle changes — like dietary improvements and appropriate exercise.
It’s not uncommon for lupus patients to be prescribed numerous daily medications, including corticosteroid drugs, NSAID pain relievers, thyroid medications and even synthetic hormone replacement drugs. Even when taking these drugs, it’s still considered essential to eat an anti-inflammatory lupus diet in order to manage the root causes of lupus, along with reducing its symptoms. If you are a young woman with lupus and wish to have a baby, carefully plan your pregnancy. With your doctor’s guidance, time your pregnancy for when your lupus activity is low. While pregnant, avoid medications that can harm your baby. These include cyclophosphamide, cyclosporine, and mycophenolate mofetil. If you must take any of these medicines, or your disease is very active, use birth control. For more information, see Pregnancy and Rheumatic Disease. Inflammation of the lining surrounding the lungs, or pleuritis, can occur in people with lupus. This can cause symptoms such as chest pain and shortness of breath, says Luk. The pain can worsen when taking a deep breath, sneezing, coughing, or laughing. (18) Pleural effusion, a buildup of fluid around the lungs, may also develop and can cause shortness of breath or chest pain, says Caricchio. Fad diets can be tempting, as they offer a quick fix to a long-term problem. However, they can put your health at risk. You should follow advice from a doctor or dietician when seeking to change your diet. The best way to lose weight and keep it off is to make healthier choices, eat a nutritionally balanced and varied diet with appropriately sized portions, and be physically active. For advice on exercising with lupus, see our article on the subject. Landmark research has shown clearly that oral contraceptives do not increase the rate of flares of systemic lupus erythematosus. This important finding is opposite to what has been thought for years.
Now we can reassure women with lupus that if they take birth-control pills, they are not increasing their risk for lupus flares. Note: Birth-control pills or any estrogen medications should still be avoided by women who are at increased risk of blood clotting, such as women with lupus who have phospholipid antibodies (including cardiolipin antibody and lupus anticoagulant). A monocyte is a mononuclear phagocytic white blood cell derived from myeloid stem cells. Monocytes circulate in the bloodstream for about 24 hours and then move into tissues, at which point they mature into macrophages, which are long-lived. Monocytes and macrophages are one of the first lines of defense in the inflammatory process. This network of fixed and mobile phagocytes that engulf foreign antigens and cell debris was previously called the reticuloendothelial system and is now referred to as the mononuclear phagocyte system (MPS). Chronic kidney disease, or failure, is a progressive loss of kidney function that sometimes occurs over many years. Often the symptoms are not noticeable until the disease is well advanced, so it is essential that people who are at risk of developing kidney problems, such as those with diabetes, have regular check-ups. Regulatory T cells (Tregs) are a population of CD4+ T cells with a unique role in the immune response. Tregs are crucial in suppressing aberrant pathological immune responses in autoimmune diseases, transplantation, and graft-vs-host disease after allogeneic hematopoietic stem cell transplantation. Tregs are activated through the specific T-cell receptor, but their effector function is nonspecific and they regulate the local inflammatory response through cell-to-cell contact and cytokine secretion. Tregs secrete interleukin-9 (IL-9), interleukin-10 (IL-10), and transforming growth factor-beta 1 (TGF-beta 1), which aid in the mediation of immunosuppressive activity.
Systemic sclerosis (SSc): Symptoms shared by SSc and lupus include reflux and Raynaud's phenomenon (when your fingers turn blue or white in the cold). One difference between SSc and lupus is that anti-double-stranded DNA (dsDNA) and anti-Smith (Sm) antibodies, which are linked to lupus, don't usually occur in SSc. Another differentiator is that people with SSc often have antibodies to an antigen called Scl-70 (topoisomerase I) or antibodies to centromere proteins. For arthritic symptoms, take a natural anti-inflammatory agent containing ginger and turmeric. Get the right kind of regular exercise; swimming or water aerobics are best for those who have arthritis symptoms. Investigate traditional Chinese medicine and Ayurvedic medicine, both of which often do well with autoimmune conditions. Definitely try one or more mind/body therapies, such as hypnosis or interactive guided imagery. Since other diseases and conditions appear similar to lupus, adherence to classification criteria can greatly contribute to an accurate diagnosis. However, the absence of four of these criteria does not necessarily exclude the possibility of lupus. When a physician makes the diagnosis of SLE, s/he must exclude the possibility of conditions with comparable symptoms, including rheumatoid arthritis, systemic sclerosis (scleroderma), vasculitis, dermatomyositis and arthritis caused by a drug or virus. Immunoglobulins are formed by light and heavy polypeptide chains (classified by molecular weight), made up of units of about 100 amino acids. These chains determine the structure of antigen-binding sites and, therefore, the specificity of the antibody to one antigen. The five types of immunoglobulins (IgA, IgD, IgE, IgG, IgM) account for approximately 30% of all plasma proteins. Antibodies are one of the three classes of globulins (plasma proteins) in the blood that contribute to maintaining colloidal oncotic pressure.
However, three placebo-controlled studies, including the Exploratory Phase II/III SLE Evaluation of Rituximab (EXPLORER) trial and the Lupus Nephritis Assessment with Rituximab (LUNAR) trial, [124, 125] failed to show an overall significant response. Despite the negative results in these trials, rituximab continues to be used to treat patients with severe SLE disease that is refractory to standard therapy. (See Cortés-Hernández, J.; Ordi-Ros, J.; Paredes, F.; Casellas, M.; Castillo, F.; Vilardell-Tarres, M. "Clinical predictors of fetal and maternal outcome in systemic lupus erythematosus: a prospective study of 103 pregnancies." Rheumatology 41 (6): 643–650. doi:10.1093/rheumatology/41.6.643. PMID 12048290.) Inflammation of the lining of the lungs (pleuritis), with pain aggravated by deep breathing (pleurisy), and of the heart (pericarditis) can cause sharp chest pain. The chest pain is aggravated by coughing, deep breathing, and certain changes in body position. The heart muscle itself can rarely become inflamed (carditis). It has also been shown that young women with SLE have a significantly increased risk of heart attacks due to coronary artery disease. Kidney inflammation in SLE (lupus nephritis) can cause leakage of protein into the urine, fluid retention, high blood pressure, and even kidney failure. This can lead to further fatigue and swelling (edema) of the legs and feet. With kidney failure, machines are needed to cleanse the blood of accumulated waste products in a process called dialysis. Neonatal lupus is a rare form of temporary lupus affecting a fetus or newborn. It's not true lupus: it occurs when the mother’s autoantibodies are passed to her child in utero. These autoantibodies can affect the skin, heart, and blood of the baby. Fortunately, infants born with neonatal lupus are not at an increased risk of developing SLE later in life.
Considering the destruction wreaked by the magnitude 9.0 earthquake in Japan in 2011, is a 10.0 possible? — M.B., Nicholasville, Kentucky In theory, yes, but it’s extremely unlikely. Earthquakes are caused by the sudden slippage of faults, and their magnitude is partly based on the length of those faults. No known faults are long enough to generate a megaquake of magnitude 10 or more. (The largest quake ever recorded was a magnitude 9.5.) California, the Pacific Northwest and Alaska have frequent quakes, but even the notorious San Andreas Fault is not long and deep enough to cause an earthquake matching the one in Japan. The 1906 quake that devastated San Francisco had a magnitude of “only” 7.9. (The main factors affecting damage are the intensity of the shaking, which is variable, and the design of the structures involved.) According to the U.S. Geological Survey, computer models indicate the San Andreas Fault is capable of producing earthquakes up to about magnitude 8.3. Probably the area with the most catastrophic potential in North America is the Cascadia Subduction Zone, which runs about 60 miles offshore along the Pacific coast from northern California to Vancouver Island, putting it close to major cities such as Portland, Seattle and Vancouver. This fault is believed to be capable of unleashing an earthquake with a magnitude of up to 9.3.
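For context on why a single whole-number step in magnitude matters so much: under the standard energy-magnitude relation, released seismic energy scales roughly as 10^(1.5·M), so each full step corresponds to about 32 times more energy. A minimal sketch of that comparison (the function name is illustrative, and this is an approximation, not a damage model):

```python
def energy_ratio(m1: float, m2: float) -> float:
    """Approximate ratio of seismic energy released by a magnitude-m1
    earthquake versus a magnitude-m2 earthquake, using the standard
    energy-magnitude scaling E ~ 10**(1.5 * M)."""
    return 10 ** (1.5 * (m1 - m2))

# A hypothetical magnitude 10.0 versus the 2011 Japan quake (9.0):
print(round(energy_ratio(10.0, 9.0), 1))  # → 31.6
# The 2011 Japan quake (9.0) versus the 1906 San Francisco quake (7.9):
print(round(energy_ratio(9.0, 7.9), 1))   # → 44.7
```

So a magnitude 10.0, if a long enough fault existed, would release roughly 32 times the energy of the 2011 Japan earthquake.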
Synesthesia is a condition in which attributes associated with one sense (say, colour with sight) can be experienced in another, inappropriate sense (say, colour with the perception of musical notes). There are many kinds, and rare ones are still being discovered. There is no longer any question that these are ‘real’ perceptions and not hoaxes. Synesthesia seems to have its roots at the sensory level and is a bottom-up rather than top-down phenomenon. There is evidence for heightened sensory activity levels and for additional connectivity between sensory modalities. A lack of normal ‘pruning’ is one of the possible causes. It is no longer a question whether the condition is inherited: it is. But the specific type of synesthesia is not inherited; rather, the genetic tendency is for any one or more of some 60 varieties. Brang and Ramachandran (see citation) discuss the possible reasons for this condition not being eliminated during evolution. Perhaps it has no disadvantage; perhaps it is a side-effect of a useful gene or genes; perhaps it is the extreme of a normal distribution that includes us all. Another possible explanation is that synesthesia simply represents the tail end of a normal distribution of cross-modality interactions present in the general population. Partial evidence supporting this idea comes from the fact that sensory deprivation and deafferentation (i.e., loss of sensory input through the destruction of sensory nerve fibers) can lead to synesthetic-like experiences. For example, after early visual deprivation due to retinitis pigmentosa, touch stimuli can produce visual phosphenes, and after loss of tactile sensation from a thalamic lesion, sounds can elicit touch sensations. More remarkably, arm amputees experience touch in the phantom limb merely by watching another person's hand being touched.
Long-standing evidence has also demonstrated that hallucinogenic drugs can cause synesthesia-like experiences, suggesting the neural mechanism is present in all or many individuals but is merely suppressed. However, no research has yet established the relationship between these acquired forms and the genetic variant, or whether the same neural mechanism is responsible for both. And perhaps synesthesia is actually advantageous. What are some possible plus points? Synesthesia may assist creativity and metaphor: it is more frequent in creative people and is a little similar to metaphor. It may assist memory: there is some evidence from savants. And there is enhanced sensory processing, such as finer discrimination of colours. These demonstrations of enhanced processing of sensory information suggest a provocative evolutionary hypothesis for synesthesia: synesthetic experiences may serve as cognitive and perceptual anchors to aid in the detection, processing, and retention of critical stimuli in the world; in terms of memory benefits, these links are akin to a method-of-loci association. In addition to facilitating processes in individual sensory modalities, synesthetes also show increased communication between the senses unrelated to their synesthetic experiences, suggesting that benefits from synesthesia generalize to other modalities as well, supporting their ability to process multisensory information. Furthermore, others have argued that synesthesia is the direct result of enhanced communication between the senses as a logical outgrowth of the cross-modality interactions present in all individuals. The puzzle of how, genetically and physiologically, and why synesthesia arises will be very illuminating for the questions of how qualia are bound to objects and why we have the vivid conscious experience that we have. Brang, D., & Ramachandran, V. (2011). Survival of the Synesthesia Gene: Why Do People Hear Colors and Taste Words?
PLoS Biology, 9 (11) DOI: 10.1371/journal.pbio.1001205
Cereal landraces genetic resources in worldwide GeneBanks: a review. Keywords: barley, cereal landraces, conservation, documentation systems, GeneBanks, germplasm, oats, origin, rye, wheat. Since the dawn of agriculture, cereal landraces have been the staples for food production worldwide, but their use dramatically declined in the second half of the last century, replaced by modern cultivars. In most parts of the world, landraces are one of the most threatened components of agrobiodiversity, facing the risk of genetic erosion and extinction. Since landraces have tremendous potential in the development of new cultivars adapted to changing environmental conditions, GeneBanks holding their genetic resources potentially play an important role in supporting sustainable agriculture. This work reviews the current knowledge on cereal landraces maintained in GeneBanks and highlights the strengths and weaknesses of existing information about their taxonomy, origin, structure, threats, sampling methodologies and conservation, and GeneBanks’ documentation and management. An overview of major collections of cereal landraces is presented, using the information available in global metadatabase systems. This review on winter cereal landrace conservation focuses on the following points: (1) The traditional role of GeneBanks is evolving beyond their original purpose of conserving plant materials for breeding programmes. Today’s GeneBank users are interested in landraces’ history, agro-ecology and traditional knowledge associated with their use, in addition to germplasm traits. (2) GeneBanks therefore need to actively share their germplasm collections’ information using different channels, to promote unlimited and effective use of these materials for the further development of sustainable agriculture.
(3) Access to information on the 7.4 million accessions conserved in GeneBanks worldwide, of which cereal accessions account for nearly 45%, particularly information on cereal landraces (24% of wheat, 23% of barley, 14% of oats and 29% of rye accessions), is often not easily available to potential users, mainly due to the lack of consistent or compatible documentation systems, their structure and registration. (4) The sustainable use of landraces maintained in germplasm collections can be enhanced through the effective application of recent advances in landrace knowledge (origin, structure and traits) and documentation using internet tools and data-providing networks, including the use of molecular and biotechnological tools for material screening and the detection of agronomic traits. (5) Cereal landraces cannot be exclusively conserved as seed samples maintained under ex situ conditions in GeneBanks. The enormous contribution of farmers in maintaining crop and landrace diversity is recognised. Sharing of benefits and raising awareness of the value of cereal landraces are the most effective ways to promote their conservation and to ensure their continued availability and sustainable use. (6) Evaluation of the costs and economic benefits attributed to sustainable use of cereal landraces conserved in GeneBanks requires comprehensive studies conducted on a case-by-case basis that take into consideration species/crop resources, conservation conditions and quality, and GeneBank location and functions.
Oxygen debt describes a situation the body encounters, usually during or after vigorous exercise, that creates a short supply of oxygen to many bodily systems. Under normal conditions, the body receives a sufficient supply of oxygen to complete automatic tasks involving the muscles, tissues, lungs and bodily fluids. When a vigorous exercise routine is undertaken, certain systems must work harder to supply the body with oxygen, therefore using more oxygen than is readily available. This is what causes you to be short of breath or leaves you gasping for air after a brisk run, workout routine or other short burst of exercise. Variances in Oxygen Debt: The recovery of oxygen to offset the debt depends on several factors. Most of the recovery processes last a few minutes to a few hours, but some can take several days. Changes in diet or additional muscle training may alleviate some of the oxygen debt, leading to less shortness of breath while exercising. Body weight is a factor as well. Maximum oxygen uptake is determined by sex, age and weight, meaning the amount of oxygen required to replenish the oxygen debt will vary. This may be the reason older people and those who have problems with obesity have a more difficult time with physical exercise. The Process of Recovery: When the muscles experience rapid movement and muscle pressure, such as with exercise, the increase in the speed at which normal events take place quickly uses up the body's steady flow of oxygen. A small portion is used to re-oxygenate myoglobin, which is a pigment in the muscles that acts as a small storage facility for oxygen. Most of the oxygen supply, however, goes toward the conversion or breakdown of lactic acid. Lactic Acid and Oxygen Debt: One of the main jobs of the body after vigorous exercise is to take care of the excess lactic acid that has been produced.
Lactic acid is a result of muscular use without the presence of oxygen, and it must be either converted into glycogen and glucose for use by the body, or broken down into carbon dioxide and water. Lactic acid is formed to maintain energy levels that the body needs to continue activity. It comes from a temporary conversion of pyruvate, and breaks down glucose for faster energy production. There are other systems in the body that will need to recover from oxygen debt. ATP, or adenosine triphosphate, is one of the components necessary for conversion of pyruvate into lactic acid. Stores of ATP need to be replenished with oxygen, as does the supply of glycogen. Glycogen is a molecule that serves as a secondary and long-term storage of energy, and one of its uses is for situations just like the one of sudden and vigorous exercise. As the storage depletes, glycogen is made on the fly, and will be reproduced and stored over a period of 2 hours to a few days, depending upon the level of oxygen debt.
Chloramphenicol: an oral antibiotic (trade name Chloromycetin) used to treat serious infections (especially typhoid fever). Source: WordNet 2.1. Chloramphenicol: Chloromycetin; 2,2-dichloro-N-(beta-hydroxy-alpha-(hydroxymethyl)-P-nitrophenethyl)-, D-threo-(-)-acetamide. A primarily bacteriostatic antibiotic with a wide spectrum of activity against gram-positive and gram-negative cocci and bacilli, first isolated from cultures of Streptomyces venezuelae. It binds to the 50S subunit of the ribosome and inhibits bacterial protein synthesis. Reserved for serious infections caused by organisms susceptible to its antimicrobial effects, especially Haemophilus influenzae, Streptococcus pneumoniae, and Neisseria meningitidis. It is used only when less potentially hazardous therapeutic agents are ineffective or contraindicated, because it rarely causes the potentially lethal complication of aplastic anemia. (NCI) Source: Diseases Database. Chloramphenicol: antibiotic first isolated from cultures of Streptomyces venezuelae but now produced synthetically; has a relatively simple structure and was the first broad-spectrum antibiotic to be discovered; it acts by interfering with bacterial protein synthesis and is mainly bacteriostatic. Chloramphenicol is listed as a treatment for 11 conditions.
The common cold is caused by many different types of viruses. Usual symptoms can include sore throat, runny nose and watering eyes, sneezing, chills, and a general, all-over achiness. Colds may be spread when a well person breathes in germs that an infected person has coughed, sneezed, or breathed into the air, or when a well person comes in direct contact with the nose, mouth, or throat secretions of an infected person (for example, when a well person's hands touch a surface that the infected person has coughed or sneezed on). To prevent the spread of colds: - Make sure that all children and adults use good handwashing practices. - Clean and disinfect all common surfaces and toys on a daily basis. (See Cleaning and Disinfection section.) - Make sure the child care facility is well ventilated, either by opening windows or doors or by using a ventilation system to periodically exchange the air inside the child care facility. - Make sure that children are not crowded together, especially during naps on floor mats or cots. - Teach children to cover coughs and wipe noses using disposable tissues in a way that secretions are contained by the tissues and do not get on their hands. Excluding children with mild respiratory infections, including colds, is generally not recommended as long as the child can participate comfortably and does not require a level of care that would jeopardize the health and safety of other children. Such exclusion is of little benefit since viruses are likely to be spread even before symptoms have appeared. Daycare.com would like to thank the Centers for Disease Control and Prevention (CDC) and their contributors for this information in striving to make daycare and childcare a more productive and efficient service.
Building Community Capacity by Yelena Mitrofanova, Extension Educator. Often when we think of the term community, we think in geographic terms. Our community is the physical location (i.e. city, town, village or neighborhood) where we live. It means there are defined boundaries that are understood and accepted by community members. Defining communities in terms of geography, however, is only one of the possible ways of looking at them. Communities can also be defined by common cultural heritage, language and shared interests. These are sometimes called communities of interest. In urban metropolitan areas, communities are often defined in terms of particular neighborhoods. Most of us belong to more than one community, whether we are aware of this or not. For example, a person can be part of a neighborhood community, a religious community, an ethnic community and a community of shared interests at the same time. For each of us, however, relationships with the land or with people define a community. All people and communities have a certain amount of capacity. No one is without capacity, but often we need to develop it. Community capacity building involves many aspects and considerations. There is no clear agreement about what should or should not be included when discussing capacity building. Most often it refers to the skills, knowledge and ability of community members, but it can also include such things as access to community resources, leadership, infrastructure, time and commitment. What is important to realize is that the heart of capacity building is people. If neighborhood or development groups cannot mobilize people, gather resources (which cannot be done without people) and help people learn to work on problems and issues effectively, few people and neighborhoods will benefit. Capacity is simply the ways and means needed to do what should be done to improve the quality of life in a particular community or neighborhood.
Most often, it includes the following components:
- people who are willing to be involved/citizen participation
- skills, knowledge and abilities
- inclusiveness of the community's diversity
- understanding of community history/community values
- ability to identify and access opportunities
- motivation to carry out initiatives
- infrastructure, supportive institutions and physical resources
- economic and financial resources
- community leadership
- community organizing
- inter-organizational collaboration/social networks
- partnership among organizations, constituency, funders and "capacity builders"
- flexibility and the use of a variety of approaches
- acknowledgment of contributions/celebration of successes
- encouragement of new people and organizations to become involved/expanding your energy pool
- good communication throughout the process/exchanging, transferring and understanding information
There is a common misconception that capacity building is just another way to describe community training and skills development programs. It has a wider meaning than just training and development of individuals; the long-term goal of capacity building is for the community to take control and ownership of the process. Capacity building is much broader than simply skills, people and plans. It includes commitment, resources and all that is brought to bear on the process to make it successful. Give people time to express themselves, to adapt to change and to learn. This is best done when the community members have a voice and are in charge of the process. "Real capacity building involves giving groups the independence to manage resources, not just training them how to work on committees. Training is often helpful, but it is not sufficient in its own right." (Jupp, B. (2000) Working Together: Creating a Better Environment for Cross-Sector Partnerships)
Sources: Frank, F. & Smith, A. (1999) The Community Development Handbook; Jupp, B. (2000) Working Together: Creating a Better Environment for Cross-Sector Partnerships; Mayer, S. (2002) Building Community Capacity: How Different Groups Contribute. For more information, contact: Yelena Mitrofanova, Extension Educator, University of Nebraska-Lincoln Extension in Lancaster County, 444 Cherrycreek Road, Suite A, Lincoln, NE 68528.
Part of why you don't see colors in astronomical objects through a telescope is that your eye isn't sensitive to colors when what you are looking at is faint. Your eyes have two types of photoreceptors: rods and cones. Cones detect color, but rods are more sensitive. So, when seeing something faint, you mostly use your rods, and you don't get much color. Try looking at a color photograph in a dimly lit room. As Geoff Gaherty points out, if the objects were much brighter, you would indeed see them in color. However, they still wouldn't necessarily be the same colors you see in the images, because most images are indeed false color. What the false color means really depends on the data in question. What wavelengths an image represents depends on what filter was being used (if any) when the image was taken, and on the sensitivity of the detector (e.g., a CCD) being used. So, different images of the same object may look very different. For example, compare this image of the Lagoon Nebula (M8) to this one. Few astronomers use filter sets designed to match the human eye. It is more common for filter sets to be selected based on scientific considerations. General-purpose sets of filters in common use do not match the human eye: compare the transmission curves for the Johnson-Cousins UBVRI filters and the SDSS filters to the sensitivity of human cone cells. So, a set of images of an object from a given astronomical telescope may include images at several wavelengths, but these will probably not be exactly those that correspond to red, green, and blue to the human eye. Still, the easiest way for humans to visualise this data is to map these images to the red, green, and blue channels in an image, basically pretending that they are. In addition to simply mapping images through different filters to the RGB channels of an image, more complex approaches are sometimes used. See, for example, this paper (2004PASP..116..133L).
So, ultimately, what the colors you see in a false color image actually mean depends both on what data happened to be used to make the image and on the mapping method preferred by whoever constructed the image.
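As a rough illustration of that channel mapping, the sketch below (Python with NumPy; the filter names and pixel values are hypothetical stand-ins, not real observations) normalizes three single-filter exposures and stacks them as the red, green, and blue channels of a display image:

```python
import numpy as np

def false_color_rgb(img_r, img_g, img_b):
    """Map three single-filter exposures to the R, G, B channels of a
    display image, normalizing each channel to the 0-1 range."""
    channels = []
    for img in (img_r, img_g, img_b):
        img = img.astype(float)
        lo, hi = img.min(), img.max()
        channels.append((img - lo) / (hi - lo) if hi > lo else np.zeros_like(img))
    return np.dstack(channels)  # shape (H, W, 3)

# Hypothetical exposures through three filters; none of them needs to
# match the human eye's red/green/blue response.
h_alpha = np.random.rand(64, 64) * 1200
oiii    = np.random.rand(64, 64) * 800
sii     = np.random.rand(64, 64) * 300

rgb = false_color_rgb(h_alpha, oiii, sii)
```

Any permutation of the three inputs produces an equally "valid" false-color image, which is exactly why the same object can look so different in different published images.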
Eastward dispersal from Southwest Asia was slower than that into Europe
A new study, published in the open-access journal PLoS ONE, has considered the eastward spread of agriculture from Southwest Asia. This has been less well studied than the westward expansion into Anatolia and Europe. Researchers conducted a statistical analysis of radiocarbon dates for 160 Neolithic sites in western and southern Asia. The locations of these sites suggest that the dispersal of farmers eastwards from the Zagros followed two routes: a northern route via northern Iran, southern Central Asia and Afghanistan, and a southern route via Fars through the interior of southern Iran. Analysis of the radiocarbon dates indicated an eastward expansion at an average speed of 0.65 km per year, rather slower than the 1 km per year documented for Europe. The authors of the report considered this to be unsurprising. Firstly, the arid climate and complicated topography of the region are less favourable for agriculture; because of this, the early Neolithic settlements in Iran were relatively small and widely separated. Secondly, the European expansion was aided by the Danube, the Rhine and the Mediterranean coastline, but there are no major rivers in Afghanistan or Iran that could play a similar role. The authors were encouraged that the fairly simple 'wave of advance' model captured the salient features of the data, but stressed the need for a more detailed analysis that would consider local environments and climatic conditions. 1. Gangal, K., Sarson, G. & Shukurov, A., The Near-Eastern Roots of the Neolithic in South Asia. PLoS ONE 9(5), e95714 (2014).
Here are some ideas for things to do at home to build, strengthen, and support your child's early literacy skills! Phonemic Awareness is the ability to recognize that a language is made up of separate and distinct sounds. In order for a child to experience success with literacy, she must have these skills in place. This link will take you to samples of activities that can help contribute to your child's phonemic awareness development. High-Frequency Words are the words that are most commonly used in reading. They are also often considered "sight" words, or words that cannot be sounded out but have to be remembered based on their visual structure. Students need to be able to recognize these words automatically in order to read fluently. This link will take you to a list that you can use to help your child build his sight vocabulary. Fluency is the ability to read text accurately and quickly with appropriate expression. A child's reading should mirror the way he or she talks. The best strategy for improving fluency is to provide opportunities for your child to read the same passage orally several times. Research shows that such repeated readings lead to improved comprehension. This link will take you to a list of easy things you can use to help your child develop reading fluency. Kids, parents, and teachers can listen to stories, download songs, watch videos, and play games with lovable PBS characters! Tons of fun reading activities for phonemic awareness, phonics, sight words, fluency, vocabulary, and comprehension. There are also links here to book websites, book talks, and author websites. Great all-around help! From Readquarium, "The Site that Swims with Learning Fun": free online games for kids to practice their keyboard skills! Play fun games that help develop important early literacy skills! Over 200 audio stories to download and enjoy!
The term "domain name" has several senses: a name that is entered into a computer (e.g. as part of a Web site or other URL, or an e-mail address) and then looked up in the global Domain Name System, which informs the computer of the IP address(es) associated with that name; the product that registrars provide to their customers; or a name looked up in the DNS for other purposes. Domain names are sometimes colloquially (and incorrectly) referred to by marketers as "Web addresses". The authoritative definition is that given in the RFCs that define the DNS. Domain names are hostnames that provide more easily memorable names to stand in for numeric IP addresses. They allow any service to move to a different location in the topology of the Internet (or another internet), which would then have a different IP address. Each string of letters, digits and hyphens between the dots is called a label in the parlance of the domain name system (DNS). Valid labels are subject to certain rules, which have relaxed over the course of time. Originally, labels had to start with a letter and end with a letter or digit; any intervening characters could be letters, digits, or hyphens. Labels must be between 1 and 63 characters long (inclusive). Letters are ASCII A–Z and a–z; domain names are compared case-insensitively. Later it became permissible for labels to begin with a digit (but not for domain names to be entirely numeric) and to contain internal underscores, though support for such domain names is uneven. These are the rules imposed by the way names are looked up ("resolved") by DNS. Some top-level domains (see below) impose additional rules, such as a longer minimum length, on some labels. Fully qualified domain names (FQDNs) are sometimes written with a final dot. By standing in for numeric addresses, domain names allow Internet users to locate and visit Web sites.
Additionally, since more than one IP address can be assigned to a domain name, and more than one domain name can be assigned to an IP address, one server can have multiple roles, and one role can be spread among multiple servers. One IP address can even be assigned to several servers, as with anycast and hijacked IP space.
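The label rules described above can be sketched as a small validator. This is a simplified reading of those rules (Python; it deliberately ignores underscore labels and other unevenly supported relaxations):

```python
import re

# A label: letters, digits and hyphens, 1-63 characters,
# not starting or ending with a hyphen.
LABEL = re.compile(r"[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?")

def is_valid_hostname(name):
    """Check a domain name against the label rules described above."""
    name = name.rstrip(".")            # an FQDN may carry a final dot
    if not name or len(name) > 253:    # overall length limit on the name
        return False
    labels = name.split(".")
    if labels[-1].isdigit():           # reject names that look entirely numeric
        return False
    return all(LABEL.fullmatch(label) for label in labels)

print(is_valid_hostname("www.example.com"))   # True
print(is_valid_hostname("-bad-.example"))     # False: label starts with a hyphen
print(is_valid_hostname("a" * 64 + ".com"))   # False: label longer than 63 chars
```

Because domain names are compared case-insensitively, a validator needs no case handling; only lookups and comparisons would normalize case.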
VARIABLES AND RELATIONSHIPS Reality is very complex, and economists, like other scientists, use models to analyze reality. A model is a simplified version of the real world and only includes the elements that we believe are the most important. For example, we think that prices are the most important factor in determining the demand for bread. We also think that income and population are important. Other economic factors will be left out of the model. Each of these elements of a model is a variable. Our model for the demand for bread has four variables: (1) the quantity of bread demanded, (2) the price of bread, (3) the income of consumers, and (4) the number of consumers. Each variable can be expressed by numbers. The dependent variable is the tail of the dog -- its number value depends on the number values of the other variables. In our model, the dependent variable is the quantity of bread demanded (#1). The other three variables are the independent variables and their number values, taken together, will determine the quantity of bread that consumers want to buy. Models can be expressed using mathematical notation. We often use y for the dependent variable and x for the independent variables. We use f to represent the actual mathematical relationship (usually a linear polynomial). y = f (x) In the demand for bread, we would use Qd for quantity demanded, P for price, Y for income, and N for population. The + and - signs show direct and inverse relationships. Qd = f (-P,+Y,+N) Among the independent variables, the price of bread (#2) is the most important, so we match the quantity (#1) and price (#2) variables together in tables and graphs. The table containing these numbers is called a schedule, and the graph of these numbers is called a curve. Since we are not including income and population, we have to assume that these variables don't vary! We call this condition "ceteris paribus" which means that income and population are held constant. 
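The notation above can be sketched as a small numerical example (Python; the linear form and the coefficients are hypothetical, chosen only so that the signs in Qd = f(-P, +Y, +N) are visible):

```python
def quantity_demanded(price, income, population,
                      a=100.0, b=2.0, c=0.01, d=0.5):
    """A hypothetical linear demand for bread:
    Qd = a - b*P + c*Y + d*N, matching the signs in Qd = f(-P, +Y, +N)."""
    return a - b * price + c * income + d * population

# Demand schedule: hold income and population constant (ceteris paribus)
# and vary only price. Quantity demanded falls as price rises.
for p in (1.0, 2.0, 3.0):
    print(p, quantity_demanded(p, income=40_000, population=1_000))

# An increase in income raises quantity demanded at every price:
# the whole schedule shifts ("increase in demand").
low  = quantity_demanded(2.0, income=40_000, population=1_000)
high = quantity_demanded(2.0, income=50_000, population=1_000)
assert high > low
```

Holding income and population fixed while varying price reproduces the schedule/curve distinction: moving along the printed schedule is a "change in the quantity demanded", while changing income produces a new schedule, i.e. a "change in demand".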
If income changes, for example, we will need a new set of quantity numbers for our schedule, and the location of our curve will change. The relationship of the dependent variable and each of the independent variables can be direct or inverse. In a direct relationship, a higher value of the independent variable is related to a higher value of the dependent variable (or vice-versa). Mathematically, a direct relationship is also a positive relationship. In an inverse relationship, a higher value of the independent variable is related to a lower value of the dependent variable (or vice-versa). Mathematically, an inverse relationship is also a negative relationship. [The word "indirect" does not mean inverse!] In our example, the quantity of bread demanded (#1) is inversely related to the price of bread (#2). These two variables are used for the demand schedule and the demand curve. In the schedule, higher values of price are linked to lower values of quantity demanded. In the demand curve, the curve will slope downward to the right (a "negative" slope). When there is a change in price, we say there has been a "change in the quantity demanded". The demand for bread is directly related to income (#3). If income takes higher values, then the demand for bread will also take higher values. In the demand schedule, the quantity demanded at each price will be higher. In the demand curve, the quantity demanded will be further to the right at each price level. We say that there is an "increase in demand" and "the curve shifts to the right". If income takes lower values, the process is reversed. We say that there is a "decrease in demand" and "the curve shifts to the left". We call these shifts in the demand curve a "change in demand". [The demand for bread is also directly related to changes in population (#4).]
TWO CAUSATION FALLACIES
Statistics lets economists use real world data to identify these types of relationships for our models. But sometimes, data can be misleading.
For example, consumption spending by households and gross domestic product move up and down together. This is a positive (direct) correlation. Variables that are directly related will also show a positive correlation. There are two questions: (1) are these variables related, and (2) which are the independent and dependent variables? Economists believe consumption spending and GDP are related, and that consumption is the dependent variable. In fact, this relationship of consumption to output is the "consumption function" developed by John Maynard Keynes to help explain the causes of the Great Depression of the 1930s. Natural gas use and ice cream sales show a negative (inverse) correlation -- when gas sales are high, ice cream sales are low, and vice-versa. Are these two variables inversely related? Economists argue that these two variables are not related to each other at all. If anything, we are observing the impact of seasonal changes in the weather. Related variables will be correlated variables; correlated variables may not be related variables. Another type of data problem arises from the timing of events. This is sometimes called the post hoc, ergo propter hoc fallacy. It assumes that a later event is always due to an earlier event. If event A is followed by event B, are we observing related events or just a coincidence? For example, we observe (A) an increase in the money supply, followed by (B) an increase in the price level. Can we conclude that the price level is a dependent variable which is directly related to the money supply, which is an independent variable? Economists assert that this relationship does exist, and it is the important "equation of exchange" which we use to explain the power of monetary policy. In early 1997, (A) Madonna had a baby. In late 1997, (B) the economies of Asia collapsed. Are these events related, or just a coincidence? In 1981, the Reagan Administration cut personal income taxes by 25 percent. By the mid-1980s, the federal government deficit was over $200 billion per year.
Did the Reagan tax cuts create the later deficit? In 1981 and 1982, the economy was in a deep recession, which also reduced tax revenues. A dependent event will be a later event; not all later events are dependent events.
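The gas/ice-cream example can be made concrete with a small computation (Python; the monthly figures are invented seasonal curves, not real data). Two series that merely track the seasons show a strong negative correlation even though neither causes the other:

```python
import math

# Hypothetical monthly data: natural gas use peaks in winter,
# ice cream sales peak in summer. Neither causes the other;
# both simply track the seasons.
months = range(12)
gas       = [50 + 30 * math.cos(2 * math.pi * m / 12) for m in months]
ice_cream = [20 + 10 * math.sin(2 * math.pi * (m - 3) / 12) for m in months]

def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(gas, ice_cream)
print(round(r, 3))  # strongly negative, yet the variables are unrelated
```

The correlation coefficient alone cannot distinguish this spurious case from a genuine inverse relationship; only a model of the underlying mechanism (here, the weather driving both series) can do that.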