Ever look at a baby and wonder what she's thinking? Well, there's a lot more going on in there than previously thought. According to the newest brain research, babies' brains begin crackling with activity before they're even born. At birth, an infant's brain houses 100 billion nerve cells, or neurons. Immediately, connections between the cells, called synapses, form as the baby experiences her surroundings and makes attachments to caregivers. This network of neurons and synapses controls various functions, such as seeing, hearing, and moving. By the age of three, a child's brain has about 1,000 trillion synapses, twice as many as an adult's. But if a child's brain is not stimulated from birth, these synapses don't develop, impairing her ability to learn and grow.

What does this mean for parents? "Basically, the latest research confirms the importance of what many parents do instinctively, such as reading, cuddling, and talking to their children," says Angie Dorell, director of curriculum at La Petite Academy, the nation's second-largest preschool chain. She says these five parenting practices will help ensure a child's healthy brain development.

- Be warm, loving, and responsive: Studies show that children who receive responsive caregiving, such as touching, rocking, talking, and smiling, cope with difficult times more easily when they are older. They are more curious, get along better with other children, and perform better in school than kids who are less securely attached.

- Talk, read, and sing to your child: Communicating with your child gives him a solid basis for learning later. Talk and sing about daily events. Read stories in a way that encourages older babies and toddlers to participate by answering questions, pointing to what they see in a picture book, or repeating rhymes and refrains.

- Encourage safe exploration and play: While many of us think of learning as simply acquiring facts, children learn through playing. Blocks, art, and pretending all help children develop curiosity, confidence, language, and problem-solving skills. Let your child choose many of her own activities. If she turns away or seems uninterested in an activity, put it aside and let her pick it up again later when she's interested.

- Use discipline as an opportunity to teach: It is normal for children to test rules and to act impulsively at times. Parents need to set limits that help teach children, rather than punish them. For example, tell your child what behavior is acceptable and communicate positively: say, "Feet belong on the floor, please," instead of "Get off the chair!"

- Choose quality childcare and stay involved: Research shows that high-quality childcare and early education can boost children's learning and social skills when they enter school. For free tips on how to choose quality care, call Child Care Aware at 800-424-2246. After choosing your provider, stay involved: drop in unannounced, and insist on progress reports.
by Karie Nugent

The Mourning Dove is a member of the phylum Chordata, class Aves, order Columbiformes, and family Columbidae. Other members of the dove family include the stock dove, diamond dove, African collared dove, spotted dove, turtle dove, eared dove, and white-winged dove. The name "pigeon" can sometimes be used interchangeably with "dove": "pigeon" is usually given to the larger species of doves, while "dove" is given to the smaller, more delicate ones.

Mourning Doves are medium in size and brownish in color. They have a long, white-tipped tail that tapers to a point; this long tail helps to distinguish the Mourning Dove from the White-winged Dove. The males are usually slightly larger than the females and tend to be brighter in color.

The range of these birds covers North America and most of South America. There are about five hundred million in North America alone. They can live just about anywhere on the continent in the warm months, but prefer to make their homes in small towns, farms, open woods, roadsides, and grasslands. Most choose to build their nests in tree branches, although a few make their nests on the ground. They migrate to the South during the winter months, flying over a thousand miles to find a new home for the winter.

Mourning Doves are monogamous and are usually seen in pairs; some pairs even stay together for the long trip down South. They mostly eat different types of seeds, waste grain, fruits, and insects. They usually prefer to take seeds from the ground, but will resort to eating seeds from bushes and trees when food on the ground becomes scarce. Occasionally, Mourning Doves will eat agricultural crops such as corn, barley, rye, and oats, which can cause economic problems for farmers and their cash crops.

Mourning Doves are very unusual in that they can produce a milk similar to that of mammals. During breeding, special glands in the crop of both the male and female enlarge to produce a thick, milky substance, known as crop milk, to feed their young.

Unfortunately, because of their large population, the Mourning Dove is North America's most sought-after species during hunting season. Luckily for them, they have acute vision and can fly at up to fifty-five miles per hour.

Written spring 2004, as a service learning project for Dr. Gary Coté's Biology 102 class at Radford University. Copyright Pathways for Radford.
The endoplasmic reticulum is a network of membranes inside a cell through which proteins and other molecules move. Proteins are assembled at organelles called ribosomes. When proteins are destined to be part of the cell membrane or to be exported from the cell, the ribosomes assembling them attach to the endoplasmic reticulum, giving it a rough appearance. Smooth endoplasmic reticulum lacks ribosomes and helps synthesize and concentrate various substances needed by the cell.

The endoplasmic reticulum can be either smooth or rough, and in general its function is to produce proteins that the rest of the cell needs to function. The rough endoplasmic reticulum has ribosomes on it, which are small, round organelles whose function is to make those proteins. Sometimes, when those proteins are made improperly, they stay within the endoplasmic reticulum. They're retained, and the endoplasmic reticulum becomes engorged because it seems to be constipated, in a way, and the proteins don't get out where they're supposed to go. Then there's the smooth endoplasmic reticulum, which doesn't have those ribosomes on it. That smooth endoplasmic reticulum produces other substances needed by the cell. So the endoplasmic reticulum is an organelle that's really a workhorse in producing proteins and substances needed by the rest of the cell.

William Gahl, M.D., Ph.D.
Clinical Director, NHGRI Medical Genetics Branch; Head, Human Biochemical Genetics Section

Dr. Gahl studies rare inborn errors of metabolism through the observation and treatment of patients in the clinic, and through biochemical, molecular biological and cell biological investigations in the laboratory. His group focuses on a number of disorders, including cystinosis, Hermansky-Pudlak syndrome, alkaptonuria and sialic acid diseases. Dr. Gahl has a long-standing research interest in cystinosis, a lysosomal storage disorder caused by a mutation in the CTNS gene. Over the past two decades, Dr. Gahl's laboratory has elucidated the pathogenesis of this disease and demonstrated the safety and efficacy of cysteamine (beta-mercaptoethylamine) therapy, a treatment that depletes cells of cystine.
The difference between a domain name and a URL is easy to understand. First we need to know what a URL is and what a domain name is.

A URL (Uniform Resource Locator, previously Universal Resource Locator) is the unique address for a file that is accessible on the Internet. A common way to get to a website is to enter the URL of its homepage in the web browser's address line. However, any file within that website can also be specified with a URL. Such a file might be any web (HTML) page other than the home page, an image file, or a program such as a common gateway interface (CGI) application or Java applet. On the Web (which uses the Hypertext Transfer Protocol, or HTTP), an example of a URL is: http://www.google.com

A domain name is an identification label that defines a realm of administrative autonomy, authority, or control on the Internet. Domain names are hostnames that identify Internet Protocol (IP) resources such as websites, and they are formed by the rules and procedures of the Domain Name System (DNS). Every domain name has a suffix that indicates which top-level domain (TLD) it belongs to, and there are only a limited number of such domains. For example, .gov is the extension used for government agencies, .edu for educational institutions, and .org for organizations.

The major differences between a domain name and a URL are relatively simple to express. Essentially, the domain name is the overall name of the website, while the URL is the extended path to one specific resource within it. To break this down: if you want your own website, you purchase a name from a registrar, and that name is the domain name for your company or organization. If instead your website is hosted as part of another site, say XYZ, then the full link to your pages is a URL. A domain name is therefore used within a URL. Whenever you browse the web, send an e-mail message, or open a webpage, a domain name is involved. For example:

The URL http://www.abc.com.my contains the domain name abc.com.my
The e-mail address [email protected] contains the domain name abc.com.my

Hence, the difference is that the domain name is just the name of the website (together with its TLD extension), while the URL is the complete address of a particular resource within it.
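The relationship is easy to demonstrate programmatically. Below is a minimal Python sketch using only the standard library's urllib.parse module; the URL it parses is a hypothetical example, not one taken from a real site. It splits a URL into its components and pulls out the hostname, which carries the domain name:

    from urllib.parse import urlparse

    # A URL addresses one specific resource; the domain name identifies the host.
    url = "http://www.abc.com.my/products/list.html?page=2"  # hypothetical example

    parts = urlparse(url)
    print(parts.scheme)    # http                 (the protocol)
    print(parts.hostname)  # www.abc.com.my       (hostname containing the domain)
    print(parts.path)      # /products/list.html  (a file within the website)
    print(parts.query)     # page=2               (extra parameters)

    # Stripping the "www" host label leaves the registered domain name.
    print(parts.hostname.removeprefix("www."))  # abc.com.my

Every URL a site serves differs in its path or query, but all of them share the same domain name, which is the sense in which a domain name is "used in" a URL.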
An international consortium of researchers has assembled a database of human genetic variations, creating a tool that could revolutionize the search for genes that cause many common diseases. But without careful self-regulation, the geneticists say, the information could also result in a flood of misleading or inconclusive results.

Called the HapMap, the database catalogs more than three million points of genetic variation based on samples from 269 people in Nigeria, China, Japan, and Utah. More than 200 scientists in Canada, China, Japan, Nigeria, the United Kingdom, and the United States participated in the project. The first phase of the project, reporting more than one million differences, was published in the October 27 issue of Nature, based on data analysis led by Peter Donnelly of the University of Oxford in England and David Altshuler, director of the program in Medical and Population Genetics of the Broad Institute of Harvard and MIT in Cambridge, MA.

"We need this background information on variation in the human genome just to begin to address the questions that we want to ask, like what are the genes involved in breast and prostate cancer and diabetes," says Brian E. Henderson, dean of the Keck School of Medicine at the University of Southern California. "It's a very powerful tool," agrees Charles Langley, a population geneticist at the University of California, Davis. "Human medical genetics is finally addressing a much bigger public health issue, which is the genetic basis of common diseases."

Approximately six billion chemical building units, called nucleotides, make up the human genome (counting both copies of each chromosome in a cell). Although roughly 99.9 percent of the sequence of those nucleotides is identical between any two humans, that still leaves millions of differences at individual points in the DNA, called single nucleotide polymorphisms, or SNPs. It is these variations that account for many of the genetically determined differences between humans. Researchers could find which of these changes relate to a particular disease by sequencing and comparing entire genomes (and every SNP) among thousands of affected and unaffected people. However, in practice, this would be expensive and time consuming.

In 2001, Mark J. Daly, then at the Whitehead Institute, now an associate member at the nearby Broad Institute, found that such genetic differences are inherited in large blocks, called haplotypes (hence the term "HapMap"). While there may be hundreds of SNPs within a region of DNA, all of them are linked, so that everyone who has an "A" nucleotide rather than a "G" at a particular location in a chromosome will have the same genetic variants at other SNPs in that region. And for many haplotypes, only three or four patterns of variation exist. With a catalog of these blocks, geneticists could more effectively identify gene variants involved in common diseases such as diabetes, cancer, heart disease, and psychiatric illnesses.
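To make the haplotype-block idea concrete, here is a small Python sketch using invented toy alleles rather than real HapMap data. Because the SNPs within a block are inherited together, reading one or two well-chosen "tag" SNPs is enough to tell which of the few possible patterns a chromosome carries:

    # Toy haplotype block: each string lists one chromosome's alleles at
    # five linked SNP positions. Invented data, for illustration only.
    chromosomes = [
        "AGTCA", "AGTCA", "AGTCA",  # pattern 1
        "GCTTA", "GCTTA",           # pattern 2
        "GCACG", "GCACG", "GCACG",  # pattern 3
    ]

    # Only a few distinct patterns (haplotypes) actually occur, far fewer
    # than the 4**5 = 1024 combinations five independent positions allow.
    print(sorted(set(chromosomes)))  # ['AGTCA', 'GCACG', 'GCTTA']

    # Two tag positions identify the whole pattern: position 0 separates
    # pattern 1 (A) from 2 and 3 (G); position 3 separates 2 (T) from 3 (C).
    def haplotype(chrom):
        if chrom[0] == "A":
            return "pattern 1"
        return "pattern 2" if chrom[3] == "T" else "pattern 3"

    for c in chromosomes:
        print(c, "->", haplotype(c))

This is the economy the HapMap offers: rather than sequencing every SNP in every study subject, researchers can genotype only the tag SNPs and infer the rest of each block.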
25 Years of Archaeological Research on the Sands and Gravels of Heslerton
by Dominic Powlesland

On a bleak windswept day early in the spring of 1978 I visited a small sand and gravel quarry between the villages of East and West Heslerton on the southern side of the Vale of Pickering. The broad band of sand and gravel which separates the edge of the wetlands and the foot of the Yorkshire Wolds, often referred to as the sandy lands, is now considered to be of poor agricultural value and might seem an unlikely place for past settlement. In fact, the situation is quite the reverse. Here, a quarry worker, Jim Carter from West Heslerton, had discovered a number of Early Anglo-Saxon burials following the removal of ploughsoil in preparation for the extraction of sand and gravel. Cook and Son's quarry, like many others throughout Britain, was the setting for a chance discovery that was to change our understanding of the past forever.

The light sandy soils covering the sand and gravel deposits that span the gently sloping gap between the base of the Yorkshire Wolds and the wetlands that once covered most of the central part of the Vale of Pickering provided an ideal setting for prehistoric and later settlement. Throughout the 1960s and 1970s an increase in the scale of sand and gravel extraction for urban regeneration, housing and road building led to the discovery of hundreds of 'new' archaeological sites. These 'sites' became clearly visible once the covering blanket of ploughsoil had been removed prior to mineral extraction, revealing dark marks in the soil which indicated ancient field ditches, burial mounds, post-settings for timber buildings and the regular outlines of ancient graves. The well-drained sands and gravels often provided a perfect setting for the formation of crop marks showing the layout of buried features, giving archaeologists the opportunity to plan a rescue excavation programme in advance of mineral extraction. At Cook's Quarry a layer of blown sand that lay between the archaeological features and the modern ploughsoil restricted the formation of crop marks, thus hiding evidence of human activity spanning nearly 7000 years. Although a burial accompanied by a jet necklace, dated to about 1800 BC, had been found in the quarry ten years earlier, the discovery had not been followed up.

Archaeology was not, however, unknown to the population of West Heslerton. Jim Carter immediately understood the importance of the fragments of metalwork and bone that he found on the quarry surface. He had attended the village primary school and been taught by a remarkable character. Tony Brewster, who later gave up teaching to undertake a series of major archaeological excavations on the Wolds, was a deeply passionate archaeologist who had spent his years as a teacher using archaeology in his lessons at every opportunity. Jim and others in the school had worked on his excavations on Staple Howe, a defended farmstead dating to about 1000 BC and situated just to the west of the village. The site itself had been discovered by two of Jim's contemporaries, Mick Stones and Chick Milner, whilst they were playing in the woods. When, in the autumn of 1978, a small team of us arrived at West Heslerton to undertake a small dig at the quarry, we found ourselves working in perhaps the only village in England where archaeology had been a principal subject at the local primary school throughout the 1950s.
We may have had long hair, we may have been students, but everyone understood why we were here and encouraged and supported us from the very first day. We have now benefited from that support and understanding for 25 years.

[Photo: Cropmarks showing the ditches around prehistoric burial mounds in Rillington]

The story of the Heslerton Digs is not just one, but many stories reflecting the hard work, often in bad weather, of more than a thousand 'diggers' excavating huge areas over hundreds of weeks of fieldwork. Days of great excitement and discovery more than matched those of despair. Funding has always been difficult to secure, particularly in the first decade when the funds rarely outlasted the digging season. Since then we have been lucky not only to have secured regular funding from English Heritage, but also to have been given the freedom to develop our excavation and recording techniques, which have earned Heslerton an international reputation. Although we can now reflect upon 25 years of fieldwork, our work is no longer driven by the need to rescue a chance discovery, but by local and national research questions. By combining excavation with other forms of fieldwork we are learning more every day about how people have lived in and changed this remarkable landscape. We hope that through a better understanding we can help develop ways in which parts of this remarkable heritage can be preserved for future generations. It would take over a thousand years to dig all the sites that we have recorded through surveys from the air and on the ground.

The Vale of Pickering - a most unusual valley

[Map: Lake Pickering at its maximum extent, about 14,000 years ago]

The Vale of Pickering is unique in England, for here the principal river, the Derwent, flows not towards the coast but inland. It rises just a few kilometres from the sea on the North Yorkshire Moors and runs not east to the sea, but west through the Vale of Pickering and then south to join the River Ouse more than 80km from its source. This unusual drainage pattern of the Vale of Pickering results from changes to an established river valley that originally drained into the North Sea south of Filey. During the last glaciation, which ended about 14,000 years ago, the North Yorkshire Moors appear to have provided a barrier against glaciers pushing south, which divided to create a major glacier running down the coast and a second filling the Vale of York. As the glaciers receded, high ridges of glacial till known as lateral moraines were left behind, one blocking off the Derwent's outflow to the sea. With nowhere else to go, the meltwaters of the receding glaciers filled the blocked valley to form Lake Pickering, which rose to around the 65 metre contour. Finally the lake overflowed, cutting a deep gorge between the Howardian Hills and the Yorkshire Wolds at Crambeck. By about 10,000 BC the water level in the valley had stabilised at about 25 metres above sea level, leaving Lake Flixton, contained between two moraines at the eastern end of the valley and draining into a much-reduced Lake Pickering. As the waters drained away, sands and gravels deposited around the edge of the valley were left exposed whilst, in the centre of the valley, Lake Pickering appears to have been reduced to a large number of smaller lakes that were often separated by long, slightly raised gravel ridges running from east to west. Around these lakes extensive reed marshes and woodland developed, filling the base of the valley.
Early Prehistoric Hunters - The Late Palaeolithic and Mesolithic

Heslerton is not the only major archaeological site in the Vale of Pickering. The earliest evidence of human occupation in the area comes from the Carr lands at the eastern end of the valley, in particular from Star Carr, where Lake Flixton drained into Lake Pickering, and Seamer Carr, where the site is now buried under the Scarborough municipal rubbish dump. The sites at Star Carr and Seamer Carr, hidden beneath the dark smelly peat that is indicated by the name Carr, have produced worked flint and bone tools, and butchered animal bones dating back as far as 11,000 BC. These flint blades and points were lost or discarded during regular visits to the edges of post-glacial Lakes Flixton and Pickering, which were ideally placed for hunting. By burning the reeds at the edge of the lake, new shoots were encouraged to grow, attracting animals to feed at the lake edge where they were easy prey for the skilled hunters of the Late Palaeolithic or Old Stone Age.

As the climate warmed and became wetter, extensive reed beds formed around the lake edges and over thousands of years a thick layer of peat formed. Peat, which comprises partially decomposed plant material, develops in stagnant waterlogged areas where there is insufficient oxygen in the water to support the microbes which would otherwise consume the plant remains. Peat is an important resource that acts as a natural sponge, holding as much as 90% of its volume in water. In the past, the extensive areas of peat in the valley bottom would have moderated the risk of flooding in the Vale of Pickering, expanding to hold more water during very wet times and then gradually shrinking as the water drained away during periods of drought. Peat is of great importance to the archaeologist because within it pollen, insect remains and other organic materials can survive for thousands of years. Pollen found in peat deposits is the best source of information for the reconstruction of ancient environments which, in turn, can help us understand past climates.

Excavations following the edge of Lake Flixton, carried out by the Vale of Pickering Research Trust over more than 20 years, have shown that the water level in Lake Flixton remained static for over 5000 years, spanning the period from the end of the Palaeolithic into the Mesolithic or Middle Stone Age. During this period the lake edge provided an important and constantly re-visited hunting ground. We can be certain that the situation was similar elsewhere in the valley, associated with the many lakes and pools which were the surviving remnants of Lake Pickering. Until less than 200 years ago there were widespread peat deposits over much of the Vale of Pickering. As the boggy Carr lands have been drained the peat has decayed away, the ground level has sunk and, with the introduction of intensive arable agriculture into these areas, most of the early prehistoric sites have been badly damaged or lost altogether to the plough.

The earliest evidence of human activity so far discovered in the excavations in and around Heslerton dates to the Late Mesolithic at about 5000 BC, when tiny 'microlithic' flint blades were used to make complex composite tools. Rather than make a spearhead using a single large flint blade, multiple small blades were set into bone, antler or wooden hafts using tree resin as an adhesive.
Although we have found only a few microliths, their discovery alongside an ancient stream channel emerging from the foot of the Wolds may indicate that this area was also used for hunting, and probably also as a routeway linking the centre of the Vale to the lower slopes of the Wolds. It is difficult to get a clear picture of the Vale and its people during early prehistory. Fragments of worked flint show that the Vale supported at least a small population, living as hunter-gatherers on a mixed diet of fish, fowl, meat, fruit and nuts, using their flint tools to work wood and leather, and producing woven materials such as baskets and matting. Although there is a tendency to think of the landscape at this time as being covered by dense primeval forest, it is likely that extensive areas had already been cleared, sometimes by fire following lightning strikes and at other times deliberately, using slash and burn techniques, leaving open areas and areas of scrubby re-growth, particularly on the sandy lands where tree cover would take a long time to regenerate. The Mesolithic population were little different to people today and knew that forest clearings made perfect locations for trapping and killing animals for food.

Although the discovery of flint tools shows that the area supported a population during the early prehistoric periods, the people who made these objects were nomadic and their settlement sites are difficult to identify. We do not really know what these settlements may have looked like; it is most likely that the people used tents which were progressively moved from one camp site to another as the year progressed. Camp sites may have existed on the slightly elevated sand hills around the lakes; Star Carr, for instance, was once interpreted as a permanent settlement site, although this view is no longer accepted. At sites like Star Carr they hunted elk, red deer and horse in addition to wild fowl and presumably fish. During the Mesolithic period the land-bridge which linked the British Isles to the European mainland was breached as the climate became warmer and sea level rose; it is likely that many important Mesolithic sites lie beneath the North Sea. A detailed study of the sands and silts in the ancient stream channel examined during excavations at Cook's Quarry in Heslerton indicates that the blown sands, which become an important feature of the archaeology of the Heslerton area, were already starting to form by the end of the Mesolithic, showing that some open areas must have existed.

The Agricultural Revolution - The Neolithic

During the Neolithic or New Stone Age (between about 4000 BC and 2200 BC) settled agriculture was first established and, although hunting and gathering would still have provided an important source of foodstuffs, the growing of crops and the domestication of livestock made permanent settlement possible for the first time. During the Neolithic we see the first pottery introduced and the construction of large monuments in the landscape. Eastern Yorkshire is rich in evidence from this period, during which the landscape must have supported a considerable population. The direct evidence of settlements is still very sparse, but the burial evidence is both striking and widespread. New technologies were introduced, including polished stone axes which were traded widely throughout Britain and were efficient tools for both felling and splitting trees.
Their distribution in the Vale of Pickering shows that during this period woodland was being cleared aggressively, particularly in the centre of the valley. The construction of large burial mounds, including both long and round barrows, using stone, antler and bone tools, seems to have become widespread, indicating that society was organised and that survival was not merely based upon subsistence. There was time and labour to spare from food production to allow for the construction both of complex burial structures and of other monuments, in particular avenues of huge posts made from complete tree trunks, and circular monuments surrounded by ditches known as henge monuments. These structures, which derive their name from Stonehenge, are distinguished by a circular enclosure surrounded by a bank and ditch with one to four entrances aligned on the principal points of the compass.

In Heslerton a huge long barrow was constructed on the top of the Wolds during the first half of the Neolithic. This monument, which was partially excavated during the 1960s, was over 120m long and would originally have stood over three metres high. The mound, made of chalk quarried from ditches on either side of the monument using pickaxes made from antlers, must have been a most imposing structure. At its eastern end the barrow was fronted by a curved setting of huge tree-trunk posts. Although this structure was built on the top of the Wolds, it was set back from the edge and would only have been visible from limited parts of the landscape. Although part of the eastern end of the barrow had been quarried away, probably in the 19th century, and most of the eastern half was subsequently almost completely levelled by ploughing, the western half of the barrow still survives to a height of over a metre.

During the excavations at Heslerton a number of important Neolithic features have been examined. These fall into three groups: structures associated with burial; landscape monuments, including three hengiform monuments; and a large number of 'avenues' comprising posts and pits that often contain burnt hazelnut shells, animal bone and pottery which may be associated with domestic activity. The pits themselves may have originally been dug as storage pits for hazelnuts, which were a valuable food source.

The first evidence of what we can now show to be widespread Neolithic activity was discovered at Cook's Quarry, the first site we were to excavate. The light sandy soils which sit atop the sand and gravel deposits must have provided an ideal setting for early agriculture, in which the first cereal crops of einkorn and emmer wheat were probably planted individually using simple digging sticks, in much the same way as a modern gardener uses a dibber. These soils, whilst easy to work, would have rapidly become less fertile, and it is likely that during much of the Neolithic relatively small areas were under cultivation. The absence of fertiliser, and what would then have been relatively thin topsoils, would have supported a few years' growth before productivity dropped and new areas were cleared; as far as we know there were no formalised fields during this period. The excavations at Cook's Quarry produced evidence of tree felling or wood working in the form of fragments of polished axe which may have broken off during use. The first of a number of hazelnut storage pits were also found here.
These features, which seem to be associated with relatively Late Neolithic activity, often include finely decorated pottery which is classified according to the style of decoration employed. A remarkable discovery, the importance of which has only recently become clear as a result of a radiocarbon date, was the carbonised remains of what appears to have been the blade of a wooden shovel, surviving as fragments of charcoal in the sand. Neolithic shovels found on other sites in Britain have generally been made using the shoulder blades of elk or deer; wooden objects so rarely survive that we tend to forget that alongside all the stone and bone tools there would have been many others made of organic materials.

A large number of massive post pits seem to have provided the settings for possible totem poles, sometimes arranged in regular pairs to form avenues but in other cases dotted seemingly at random around the landscape. During the last three years a number of new examples of these pits have been found as we work ahead of Cook's Quarry, recording the archaeology prior to sand and gravel extraction. The posts in these pits, measuring 50cm-1m in diameter, must have made imposing monuments. In three cases they seem to have provided a focus for cremation burials.

We have examined three henge monuments: two large examples measuring more than 50 metres in diameter with entrances to the east and west, and a third, much smaller example, about 28 metres in diameter with a single entrance to the north. The largest of the three, located in East Heslerton and measuring over 65m in diameter, was first identified through air photography and then mapped in more detail following a geophysical survey. Its situation, partially enclosing a low chalk knoll overlooking the valley, is almost identical to that of one fully excavated in the mid-1980s at West Heslerton. These monuments, the function of which remains unclear, appear to mark the first indication of human reworking of the landscape; there is no indication that they served either defensive or settlement purposes. In the case of the larger henges, they were built on the very edge of the light sandy soils, in slightly elevated positions where they would be seen from afar.

The smaller monument excavated at Cook's Quarry in 2002 appears to be aligned on a short avenue of massive posts extending northwards towards the edge of the wetland that still filled the bottom of the valley. The term henge, when applied to this small monument, is perhaps too grand; we have yet to confirm its date using radiocarbon dating and it may ultimately turn out to be a new class of burial monument. There were virtually no finds associated with this feature, but near the centre two massive pits had contained posts more than 50cm in diameter; next to one, a cremation had been buried in some sort of bag. As the posts decayed, large amounts of cremated bone fell into the voids left by the rotting posts. One is tempted to wonder if these pits had contained inverted tree trunks like those found in 1999 at Seahenge in Norfolk, on top of which the cremations had originally been placed. An alternative mode of burial, excarnation, where the body is left exposed to the elements until only the bones are left, is hinted at in nearby Barrow 1R, where fragments of disarticulated bone were scattered around what appears to have been a timber mortuary house.
Unfortunately only half of this monument was excavated in 1982 and by 2002, when the remaining half was examined, some 5m of the monument had been removed by quarrying and we were therefore only able to excavate the front and back of the mortuary house.

[Figure: Geophysical survey of a henge monument in East Heslerton, over 65m in diameter]
[Photo: The Site 2 henge at West Heslerton during excavation]

The construction of these mysterious monuments may represent the first demonstration of people's increasing control over the landscape. As with so much in archaeology, we are tempted to see these as 'ritual' monuments; whatever the case, it appears that the henges and the great post-pits are related and that they form part of a constructed landscape, a giant architectural adventure incorporating large monuments of earth, stone and timber. They, and the large number of round barrows that were constructed during the Late Neolithic and Early Bronze Age, dominate the areas of light soils that provided the setting for early agriculture. It is likely that they were maintained, possibly by grazing with sheep and goats, since they are re-visited and re-used for thousands of years.

During the late Neolithic we have the first evidence of settlement. Associated with, but some fifty metres away from, the henge in West Heslerton, a series of pits containing pottery known as Grooved Ware and a number of small post-holes may be the first glimpse we have of settlement; sadly the area opened was very small and there is insufficient evidence to identify the type or shape of the buildings supported by the timbers set in the post-holes. Although we do not have stone monuments like the great stone circles found elsewhere in Britain, wooden post circles, circular enclosures, and large and imposing burial mounds seem to have been dotted around the landscape in an organised fashion, concentrated on the sandy slopes overlooking the great wetland and overlooked from the Wolds to the south, where truly massive earth monuments such as Duggleby and Willy Howe overlooked another fertile valley, the Great Wold Valley. The broken polished axe fragments and imposing monuments indicate that much of the sandy land between the foot of the Wolds and the wetlands was probably open; the monuments were not only visible to those living and working in the landscape, but each would have been visible from the others.

A great post-circle or, more correctly, horse-shoe arrangement of tree-trunk-sized posts next to the henge in West Heslerton may have had a role as a kind of observatory or seasonal clock, much like Stonehenge; a single tiny post situated at the centre of the arc may have provided a line of sight for identifying the arrival of spring or midsummer using the stars, which were of course much more visible at a time when there was virtually no light or other type of pollution.

For the archaeologist it is often the period of transition between one major period and another that is most exciting to study. The Neolithic was a period of great change; new polished stone tool technology, the widespread use of pottery, the emergence of settled agriculture, and the construction of monumental architecture for both burial and social or political reasons changed the landscape for ever.
The First Industrial Revolution: The Bronze and Iron Ages

We have seen how in the Neolithic the landscape was fundamentally changed by the intervention of people, people whose lives were concerned primarily with taking control of the landscape and managing food production. Early farming, when combined with hunting and gathering, must have produced sufficient surplus to leave time for major construction works and for the development of new technologies like polished stone tools and pottery. There is no indication of great population pressure and, although polished axes and mace heads may be seen as weapons, they are more likely to have been symbols of power. Societies throughout Britain were connected by trade networks, bringing polished axes to Eastern Yorkshire from as far away as Cumbria and Cornwall; by today's standards these items must have been immensely valuable. Not until the end of the Neolithic do we see the construction of the first monuments that might be interpreted as defensive. One is tempted to see the landscape as being made up of a large number of linked tribal lands held and maintained through a type of shared ownership, a populated landscape but with room to spare.

During the Bronze Age this all changes, and we see wealth and power articulated through weapons, the dividing up of the landscape into large 'estates', and the construction of defensive sites, either as a demonstration of power or out of a need for one community to defend itself from others. The Early Bronze Age is in many ways indistinguishable from the Late Neolithic. An increasing range of pottery styles, some with continental European influences, may have arrived here as a consequence of migration. Whether it was the people who moved, or simply the Beaker pottery tradition, this 'new age' was heralded by the introduction of metalworking, first copper, rapidly followed by bronze (nowadays frequently referred to as copper alloy).

[Photo: Early Bronze Age Beaker and Food Vessel]

By the end of the Neolithic period burial in what appear to be family monuments becomes widespread; round barrows up to 30m in diameter are constructed in cemeteries which are often clustered around the large earlier monuments. This tradition continued into the Early Bronze Age, leaving a large number of barrows which, in areas where they were built mostly of stone such as the chalk Wolds or Downs, survived relatively intact until the introduction of mechanised farming. These barrows in particular were to catch the attention of some of the great early archaeologists such as JR Mortimer or Canon Greenwell, who excavated hundreds on the Wolds and the Moors at the end of the 19th century. There has been a tendency to see these barrows as the burial mounds of tribal leaders, made obvious by covering the burial with a great mound of earth or stone; the excavations at Cook's Quarry and on the Dawnay Estate tell a different but more interesting story.

In the period between about 2200 BC and 1600 BC, when Beaker pottery styles are found all over Britain, often occurring alongside more localised pottery traditions such as the Food Vessels found in Yorkshire and Scotland, two distinctive barrow building styles can be seen in Heslerton. Large barrows of about 30m in diameter are found in cemeteries located in the centre of the sandy lands, whereas smaller barrows feature around the large henges on the southern margins of the sandy areas.
The large barrows, of which we have excavated three (1M, 1L and 1R), were all buried beneath blown sands which, in the case of Barrow 1L, had hidden the monument from view since the Roman period at the latest. It is exceptionally rare in Britain to find prehistoric monuments which have been protected from the effects of agriculture or the attentions of early archaeologists and antiquarians. Because these sites were hidden from view, their eventual discovery the result of a chance find by a quarry worker, the details of their form and construction are well preserved.

[Photo: Barrow 1L with most of the mound removed and visible in the baulks; a number of graves can be seen as dark patches in the sand]
[Photo: Food Vessel burial during excavation on Site 2]

In all three cases it is clear that the barrow mounds were a very late feature; each had started life as a small flat cemetery surrounded by a shallow marker ditch or gully. Multiple burials were made within the enclosed area, and included both cremations and inhumations. When there was only limited space for further burials, a much deeper enclosing ditch was cut and an earth mound thrown up over the graves; a small number of later burials were then either cut into the mound or, as in the case of 1L, cut into the partially filled-in ditch. In 1L a grave was cut into the mound within only a few weeks or months of the mound being constructed; we could see where the spoil removed whilst digging the grave had been thrown across the open ditch and the area then cleaned up when the grave was filled in. In the case of barrow 1R, where the barrow was established around the Neolithic mortuary house and over one of the great totem pole post pits, radiocarbon dates show that one grave had been disturbed some 800 years after it was first cut, the body being carefully stacked to one side to make way for a Beaker burial. The western side of barrow 1R, excavated in 2002, was defined not by a ditch but by a meandering stream channel. The constant wetting and drying of the sands into which the burials had been cut had created an environment in which not a single scrap of bone survived and, despite very careful excavation, nothing was found in eleven features which, given their size, shape and location in the barrow, must have been graves.

The smaller barrows, including two examples found within the large henge excavated at West Heslerton, measured 10-12m in diameter and contained inhumation burials that had been placed in tree-trunk coffins, made by splitting a large oak tree in half and hollowing it out. Most were accompanied by pottery vessels, particularly Food Vessels. Each barrow contained only three or four grave pits, in two of which later cremation burials had been inserted into the top of the main grave pit. Not all burials at this time were covered by barrow mounds, nor were those buried beneath barrows necessarily better furnished. A fine Beaker burial accompanied by a jet button was found in what appears to have been a flat grave a few metres from the two barrow mounds within the henge, whilst just outside it a cremation of a young child was found beneath an inverted Food Vessel Urn. Although we have plentiful evidence of burial during the Early Bronze Age, the evidence for settlement is very limited.
The inclusion of grave goods with many of the burials indicates a belief in the afterlife, the dead being buried fully clothed and accompanied by short, wide-mouthed Food Vessels, which probably did contain food, or tall and slender Beakers, which are thought to have contained liquid. Examination of pollen grains found in Beakers in Scotland indicates that they may have contained an alcoholic beverage like mead. By examining the skeletons we can show that these people were relatively healthy and of similar stature to people today, and that in some cases they survived into old age.

The picture of domestic life is much more difficult to assess; bog-oaks preserved in the peat in the eastern end of the valley, together with the pollen record, show that the climate became warmer and wetter. Much of the North York Moors would have been good agricultural land at this time; the rise in temperature and rainfall accelerated peat formation. The countryside teemed with wildlife, and the lakes and pools in the centre of the valley would have been a good source of fish and wildfowl. Small circular arrangements of small post or stake holes identified during excavation at Heslerton may be the remains of Beaker period houses, but they lack the domestic refuse that would confirm this interpretation.

[Figure: Artist's impression of the Late Neolithic-Early Bronze Age monument complex at Heslerton, around 1800 BC]

It is not until the Middle Bronze Age, between 1600 BC and 1100 BC, that we see the first clearly recognisable domestic structures in the excavations at Heslerton. Also at about this time there seems to be a change in burial arrangements; new cemeteries are established on long narrow ridges towards the middle of the Vale of Pickering. These cemeteries continue to be utilised until they reach their maximum extent during the middle of the Iron Age, at about 500 BC. Like the possible round houses identified from the Early Bronze Age, the Middle Bronze Age structures are circular, measuring about 10m in diameter with walls supported on small stakes. These were difficult to see, as they had been set in very small stake holes in an area of chalk gravel. These buildings do however have distinctive porches, constructed with much larger posts some 2m to the east of the wall line. Three of these structures have so far been identified, and it is likely that the walls were made of wattle and daub and that the thatched roof extended to cover the porch post-holes. A small group of cremations over 200m to the west of these structures may be contemporary. Two of them were contained within pottery vessels, but all were in shallow pits that had subsequently been badly truncated by ploughing.

The Bronze Age is best known for the metalwork after which it is named, yet we have so far found only two tiny copper awls. We have been shown a number of examples of metalwork found elsewhere in the Valley, including a bronze socketed axe that was being used to open paint tins in a farm workshop. In contrast to stone tools, which could only be re-sharpened a few times before being discarded, bronze is recyclable. Utilising tin from Cornwall, copper from Ireland and lead probably from the Peak District, this material was too valuable to discard; when a bronze tool broke it would be kept until it could be melted down and re-made, probably by a travelling smith.
Bronze Age metalwork is most commonly found in hoards, deliberate deposits made for religious purposes; a probable example is a Bronze Age sword found in the Costa Beck, near Pickering. Some pieces are occasionally found during farming, and others sadly come to light when found by treasure hunters, who destroy the setting which, if excavated properly, could give us valuable information about the deposit and the people who made these items.

Towards the end of the Bronze Age another major transition occurs in the landscape. At this time we have the first clear and detailed evidence of settlement, both on the sandy lands and on chalk knolls overlooking the Vale of Pickering on the north-facing slope of the Wolds. This is a period for which we have no excavated burial evidence. During the Late Bronze and Early Iron Age, between about 1100 BC and 800 BC, new societies emerge which appear, on the basis of domestic items, weaponry and dress fittings, to have had strong links with communities elsewhere in northern Europe, particularly northern France; they are often referred to as La Tène cultures. The constant influx of European influences throughout the Bronze and Iron Ages, whether through migration or trade, reflects a sophisticated and connected society in which barriers of distance and language were evidently overcome.

During the Late Bronze to Early Iron Age transition we see the first large-scale land enclosures, associated with extensive open settlements and small fortified sites. The increasing use of wheeled vehicles, an increase in population, and the need to manage stock led to the development of formalised track-ways, their limits defined by hedges and ditches. The skeleton of fields that we see today starts to form and, indeed, some of the landscape boundaries established 3000 years ago survive today as field boundaries and, in the case of West Heslerton, as parish boundaries. By the end of the Iron Age much of Eastern Yorkshire was covered with a great network of enclosures defined by single and multiple banks and ditches; that the banks supported hedges has been demonstrated through the discovery of snail shells from species that live only in a shady hedge habitat. This network of enclosures could have been constructed to define land ownership; however, it is much more likely that it was built to assist with stock management and to prevent cattle rustling; wealth was measured in terms of stock, not of land. Much of the landscape was relatively open, although the slopes of the Wolds would still have been heavily wooded. The amount of manpower required to establish this system must have been considerable. The first phase of construction took place at some time between about 1000 and 800 BC, when massive pit-alignments were built. These bizarre features comprise pits roughly two metres square, two metres deep, and spaced at metre intervals, with an earth bank on either side. We have traced one of these for over 5 kilometres using a combination of excavation and survey, whilst others joining it can be traced using aerial photography running up to and across the Wolds.

Tony Brewster excavated two important sites of this period: Staple Howe in Scampston Parish and Devils Hill in West Heslerton. The characteristic pottery associated with these sites is known as Staple Howe ware and is decorated around the rim and body with fingertip or fingernail impressions.
Both sites were small hilltop palisaded enclosures, situated on chalk knolls that had in geological time broken loose from the face of the Wolds and slipped down to form small, steep-sided hillocks measuring no more than 85 by 40 metres on the top. The location of these sites, which were surrounded by a massive timber palisade and ditch, provided a commanding view across the valley. Each contained a large grain storage structure and post-holes from possible houses. They may be interpreted as high-status properties or, alternatively, as refuges; they were probably both. The discovery of Staple Howe pottery associated with a large number of grain storage buildings and round houses at Cook's Quarry, in a completely unenclosed area adjacent to both a trackway and one of the early landscape boundaries, shows that these two fortified or enclosed sites represent just one aspect of a well developed landscape.

By the middle of the Iron Age a new culture had emerged, once again with links to northern France: the Arras culture, best known for its square-ditched barrow cemeteries and, in particular, the so-called chariot burials. Square barrows, defined by a shallow square ditch and containing a single inhumation in a coffin, are most common in Britain in Eastern Yorkshire. A huge square barrow cemetery has been recorded using air photography on one of the gravel spurs in the wetland area; unfortunately the ground conditions in this area have prevented the survival of human bone and all that remains are the stains left by the plank coffins that contained the burials. We have yet to excavate any settlement evidence associated with these cemeteries, but we believe they are associated with a new trackway established following the edge of the wetlands on the northern edge of the sandy lands. This trackway was the prehistoric equivalent of the current A64 and ran for many kilometres along the edge of the wetland; it continued to act as the main focus of settlement for about 1000 years, through the end of the Iron Age and the whole of the Roman period.

[Photo: Excavation of an Iron Age 'chariot' burial by the British Museum at Wetwang]

By the end of the Iron Age it appears that a new form of burial monument replaces the square barrow and, from their distribution and number, we can see that there was a large farmstead with its own cemetery situated at roughly 250 metre intervals all along the trackway. These burial features are like tiny barrows, each containing a cremation within an area defined by a steep-sided ditch or slot measuring as little as two metres in diameter. It is possible that the slot contained timbering that retained a shallow mound, no more than a metre high, made up of the material derived from the slot. We excavated parts of ten examples in Sherburn in 2001; they had been severely plough-damaged and we have yet to confirm their date using some of the tiny fragments of cremated bone and teeth recovered from the surrounding slots; they were, however, associated with Late Iron Age pottery.

During the Iron Age we see the establishment of the first extensive field systems, defined by relatively slight ditched boundaries. Sheep and goats become an increasingly important part of the economy, producing both milk and wool. From the Iron Age until the medieval period, wool production for textile manufacture in eastern Yorkshire was to be a cornerstone of the agricultural economy. Loomweights and bone weaving combs are common finds on Iron Age settlements.
By the time that the Roman legions established the fort at Malton, the Vale of Pickering was well populated, with ribbon development along major trackways following the edge of the wetlands on both sides of the valley. Whilst the centre of the valley remained wetland, and there were large tracts of open downland on the Wolds, the sandy lands supported a large population and extensive field systems. The trackways linking the Vale with areas beyond formed the basis of widespread trade networks. Tribal societies covering large areas of countryside were fully established and were perhaps administered from high-status sites like the hill fort that lies hidden beneath Scarborough Castle. A massive series of banks and ditches cutting off the spur that connects the North Yorkshire Moors to Scarborough above Snainton, on the opposite side of the valley to East Heslerton, may form the first line of defence for Scarborough; the multiple banks and ditches would not only have stopped chariots, but would also have made cattle rustling extremely difficult.

The Impact of Empire: The Roman Period

There is reason to believe that the tribal leadership in Eastern Yorkshire struck some sort of deal, or at least established a special relationship, with the Roman invaders. As far as we can see from the archaeological record, the impact of Rome on the population of the Vale was minimal. The rural economy, the settlements and the trackways continued in use as before, the most significant changes being the introduction of mass-produced pottery and other domestic goods, and the coinage with which to purchase them. Evidence from soil analysis shows that during the Roman period the lower slopes of the Wolds, which are on heavier soils, were probably ploughed for the first time, while woodland was cleared on the higher slopes. Perhaps it was necessary to open up new areas using improved Roman ploughing technology to generate the extra produce required to pay Roman taxes or to supply the Roman garrison and town at Malton. The sophisticated stone buildings of this period at Malton, and the few known villas, were the exception rather than the rule; most of the rural population lived in round-houses with wattle and daub walls and thatched roofs.

Apart from textile production, other industries developed as a result of increased specialisation. Pottery kilns were established at Knapton late in the Roman period, and their products were traded widely throughout the region. The relatively crude Knapton storage and cooking vessels were outshone by higher-quality table wares produced at Norton and at Crambeck. The results of our geophysical surveys between Sherburn and Heslerton show that the trackway that linked the many Iron Age farmsteads became built-up for much of its length, with networks of overlapping enclosures forming stock and settlement enclosures linked to adjacent fields. Linear settlements of this type, termed ladder settlements on account of their appearance in crop marks, are found along the Great Wold Valley as well as in the Vale.

Although you may read that the Roman period ends in AD 410, the archaeological evidence from the Vale of Pickering and many other sites in Britain indicates that by the middle of the 4th century AD many of the towns, and the Roman economy in Britain, were in a state of collapse. The romantic picture so often painted of life in Roman Britain ending in a sudden collapse leading to the Dark Ages represents a view that can no longer be accepted.
During the 4th century the climate seems to have become considerably wetter, so much so that the ladder settlement was under major threat from flooding; a massive dyke to the north of the ladder settlement, found through geophysical survey, seems to represent a Roman period flood defence. During the excavation of the Early Anglo-Saxon settlement at West Heslerton, our largest excavation, a late Roman stone building was uncovered. This building, cut back into the side of a dry valley whose base had been deliberately terraced by dumping many tons of earth and stone, appears to have been some sort of shrine or small temple. This structure, and fragments of others found on the sides of the dry valley and blocking off its entrance, were associated with large areas of worn surfaces and paths linking the head of the valley to a spring at the bottom. It was not possible to fully excavate the late Roman and earlier deposits on this site, so they have been re-buried and secured for future study. Fragments of other evidence found in later features, and many superimposed pebble surfaces, indicate that this dry valley had been some sort of religious centre, possibly since the Iron Age. It seems most likely that this cult site relating to the spring was not visited continuously but only at certain times of the year. Analysis of some of the many oyster shells found in one part of the valley indicates that they were harvested in March; it is possible that they derive from a food stall run for visitors to the site. A series of bread ovens indicates that a bakery may also have been established. A single large rectangular structure, built using a timber frame supported on spreads of chalk rubble, blocks the entrance to the valley from view and may be interpreted as a `hotel' or `hostel'. If the site was indeed used for a single festival occurring in March each year, then it may relate to the beginning of the Roman new year on the 1st of March; alternatively, if in early April, to Ceres, the goddess of agriculture and fertility.
Artist's impression of how the shrine complex may have looked at around AD 350
Occasional coins recovered from the Roman deposits here indicate that the main building was constructed after AD 340, when a coin was lost during the laying of the foundations, and that it continued to be a focus of activity for the rest of the century. The only structure in this complex that can easily be interpreted as a domestic building is a single roundhouse in one of a number of rectangular enclosures around the spring head. It is possible that this could have been the home of a site keeper who guarded and maintained the site between festivals. The exact function of the enclosures established in the late Roman period is not understood. It is possible that they were stock enclosures and that the use of the site in the spring may relate to a fertility cult in which people brought their stock to the site for some religious purpose. Whatever activity was going on, it left more than 30,000 sherds of Roman pottery which, together with the worn pebble surfaces and food debris, indicates that large numbers of people visited the site.
Lighting up the Dark Ages: The Early Anglo-Saxon period
West Heslerton is unique in that it is the only place in England where a complete Early Anglo-Saxon cemetery and its associated settlement have been excavated and recorded using modern techniques. The examination of these two sites involved excavations covering more than 20 hectares (nearly 50 acres).
The Early Anglo-Saxon cemetery, discovered in Cook's Quarry in 1977, started the whole programme of excavation in Heslerton, with large scale field-work on the settlement running from 1987 until the winter of 1995-6. The excavation of the settlement was amongst the largest undertaken in Europe and was made possible by the labours of over a thousand volunteers, mostly working over the summer months. Rarely do we have an opportunity to examine both a cemetery and a settlement together; even more rarely do we have the chance to uncover the full extent of both. The results of this work do, however, justify the investment. The investigation of the cemetery, which re-used the site of the Neolithic henge and smaller Bronze Age barrows, produced a wealth of new evidence, particularly relating to textiles and clothing. The cemetery, in which grave goods including dress fittings, weapons, pottery and wooden vessels accompanied most of the burials, contained the remains of about 250 people buried between the end of the 4th and the middle of the 7th centuries AD. The cemetery appears to have been laid out in family groups, each containing a broadly comparable group of burials. Many were well furnished with grave goods - one burial was accompanied by a sword, shield and spears - whilst others had relatively few accompanying objects. Analysis of trace elements absorbed from the food eaten early in life indicates that a small percentage of the people may have come from southern Sweden, an area from which some of the dress fittings seem to originate. Another group clearly grew up locally, whilst a third group appears to originate elsewhere in Britain. Analysis of the ancient DNA preserved in the teeth indicates that some of the burials accompanied by weapons may be female and others with brooches and beads may be male, contrasting with the traditional method of sexing burials on the basis of the type of grave goods that accompany them. The cemetery was in use only for the first half of the life of the village. We do not know where the later cemetery is located; it may lie near an as-yet undiscovered church since, by this time (after AD 650), Christianity was beginning to be adopted in Anglo-Saxon England. Prior to the excavation of the settlement in West Heslerton it was thought that Early Anglo-Saxon settlements were small, comprising only a few farmsteads with a limited life span; these would be abandoned when a new farmstead was built nearby. This view is completely at odds with the evidence recovered here; not only does the settlement appear to be laid out on a grand scale, with areas for housing, craft and industry, animal husbandry and crop processing, but it also appears to develop during the dying years of the Roman period. Early Anglo-Saxon settlement in the region has in the past been thought to have begun after AD 450, leaving a Dark Age gap between the end of the Roman period and the beginning of the Anglo-Saxon period. We found no evidence of this empty period; rather, it appears that the Early Anglo-Saxon settlement emerged as the Roman administration was collapsing. Early Anglo-Saxon sites are exceptionally difficult to date precisely; coinage, which provides good dating evidence during the Roman period, goes out of use soon after AD 400 and is not re-introduced until the late seventh century. A dating system based mostly on the brooches and other metalwork found in cemeteries is widely used, but it relies upon relative dates based largely on changing artistic styles rather than absolute dates.
Radiocarbon dating techniques have only recently provided sufficiently accurate dates for this period, and we are still awaiting confirmation of some of the Early Anglo-Saxon dates produced using this method. The archaeological evidence can be used to demonstrate that the Anglo-Saxon settlement, which can correctly be called a village, was established directly following the demise of the Roman administration. Had there been a significant gap between the end of Roman activity and the beginning of the Anglo-Saxon settlement phase, we would expect to see a build-up of blown sands and other soils separating the Roman from the Anglo-Saxon layers on the site; this was not the case. It appears on present evidence that the village was established at around AD 400, and that it was laid out on a grand scale covering an area nearly 500 metres square. It was established around the spring which had been a focal point of the Roman ritual landscape associated with the shrine and other buildings. Interestingly, the terraced dry valley area, which had contained the various shrines linked by pebble paths and surfaces, was not used for settlement and seems to have continued in use as a protected space during the occupation of the village, which ended at about AD 850. Early Anglo-Saxon settlements are quite different to the villages or farmsteads that existed in the ladder settlement during the Roman period. Not only are the architectural styles quite different, but the mass-produced wheel-made pottery, so common in Roman Britain, disappears, and hand-made pottery, similar to that in use during the Iron Age, becomes the norm. It appears that the collapse of the Roman administration was accompanied by complete economic failure. Early Anglo-Saxon settlements are rarely found built over late Roman domestic settlements and, although the site at West Heslerton takes over a Roman site, it was not a normal domestic settlement. There seems to have been a deliberate break with the Roman world; the creation of new villages away from the established Roman settlements may also reflect other factors. We have already seen that the climate changed during the late Roman period; the settlement on the edge of the wetlands was increasingly under threat from flooding and a rising water table. We have often wondered why this settlement was not deserted sooner; it may be that it could not move simply because all the land was owned or controlled by tribal chiefs or the Roman administration. Once this was no longer in place, new settlements could be established in more suitable positions. The Early Anglo-Saxon village was occupied for more than 400 years, during which more than 220 buildings were built on the site. The most distinctive structure type, the `grubenhaus', from the German meaning `hole-in-the-ground house', is of a class not seen in Britain before this period, although commonplace on the continent. These buildings, which seem to have served many purposes, were probably not dwellings. The houses were large rectangular buildings based around an often elaborate series of post settings. Whilst the grubenhäuser are clearly continental in origin, the rectangular buildings seem to combine both native and continental building traditions. The grubenhäuser are very distinctive during excavation, the principal feature being a large rectangular hole in the ground with post-holes at either end to take the timbers that supported the roof.
When archaeologists first started to excavate these features, which are invariably filled with rubbish, it was thought that they represented squalid hovels in which people eked out a poor existence in the bottom of a pit covered with a simple roof. Excavation at another similar settlement at West Stow in Suffolk, during the late 1960s and early 1970s, produced evidence that these buildings had raised floors and that the hole provided a dry air space; this cavity floor construction would keep the building above dry and make it last a great deal longer. The discovery of large numbers of loom weights in the rubbish fillings of these abandoned structures has been used to argue that they were weaving sheds. However, they are invariably mixed in with masses of animal bone and other rubbish deposits, and it is much more likely that these loom weights, which were mostly made from unfired clay, were simply discarded in the pit along with all the other rubbish. A detailed examination of the material found in these features, which amounts to more than half of all the million-plus finds recovered during the excavation of the settlement, shows that it has nothing at all to do with the building which once stood over the hole, but that it was gathered up from some other part of the site and tipped into the hole to fill it up after the building had been dismantled. The evidence indicates that domestic, industrial and butchery waste was gathered in muck heaps on the edges of the village and was used for night-soiling, spread about the fields as fertiliser. It appears that when one of these structures was abandoned, material was gathered from the heaps, perhaps as part of a cleaning-up process after the muck-spreading season, and used to fill up the holes, which in some cases were more than a metre deep. If these enigmatic structures served neither as homes nor as weaving sheds, then what was their function, and how were they built? One type of structure - the grain store - is conspicuously absent from the settlements of the Early Anglo-Saxon period. If one is to grow crops then it is essential to be able to store sufficient seed for use in the following year; it is increasingly likely that many of these buildings served as grain storage structures, buildings in which a ventilated under-floor space would be almost essential to prevent damp from causing the seed either to rot or to germinate. It seems that these buildings performed multiple functions; whilst some were grain stores, others would have been used for general storage purposes. In one part of the village, area 2D, these were the only type of building present. This area seems to have been a craft or processing area, with evidence for metalworking, butchery and a malting kiln found in between and around the structures, which were probably used to store all the associated equipment required to undertake these tasks. The presence of spinning and weaving equipment reflects the importance of textile manufacture, the equipment being stored in these buildings when not in use. No matter how we reconstruct these buildings, they would have been dark inside and therefore useless as weaving sheds; such work was most likely undertaken outside or in the larger timber houses or halls. Evidence from the excavated cemetery 500m to the north of the village shows that a variety of woollen twills and linen cloth were used in Early Anglo-Saxon clothing.
One of the most difficult tasks for an archaeologist is the reconstruction of now lost building styles based on the limited evidence found in the ground. At the site of West Stow, experimental reconstructions in wood and thatch have been built to help our understanding of how these buildings were constructed. We now believe that, rather than being built of wood, grubenhäuser were constructed with turf walls, later pushed into the holes left when the floors were removed. Turf-walled buildings can be remarkably strong and, with the turf growing on the outside face of the walls absorbing any moisture, very dry inside - exactly what is needed for storing seed grain. Fine grey silty soils, found partially filling the under-floor holes left when the buildings had gone out of use, seem to be derived from wall material pushed into the holes. In a number of cases, concentrations of prehistoric worked flints found in these deposits were probably contained in the turf when it was cut somewhere out in the fields. The roofs were probably made of reed thatch and heather. Until we find a well preserved, burnt down or waterlogged grubenhaus, we will remain unsure as to exactly how they were built.
Reconstruction of a grubenhaus at West Stow, Suffolk; the figures show dress styles based on evidence recovered from the Heslerton Cemetery
The village was large, covering more than 12 hectares, with a population estimated at ten extended families, or about 75 people. Whilst the grubenhäuser contain a wealth of rubbish which can be used as evidence of daily life, craft, industry and agriculture, the timber halls or houses and their surroundings were apparently kept very clean. These buildings were constructed in a number of styles and sizes, the smaller examples perhaps performing the same functions as the grubenhäuser. They were built of timber with a raised floor, but lacked the dug-out air-space beneath. All are rectangular and range in size from 4m to 13.5m long and 3m to 5.5m wide. Although in some areas blown sands had buried and preserved the Anglo-Saxon land surface, we have found no evidence of ground-level floors as we might with later medieval houses, which had earth or mortar floors, and we must therefore conclude that these buildings also had raised floors. The timber hall buildings are easily recognised from the rectangular arrangement of post-holes into which the upright timbers forming the main timbers of the walls and supporting the roof were placed. The larger buildings were clearly very elaborate, indicating a high level of carpentry skills; each post-hole contained a pair of cut planks separated by a gap one plank-width wide. It seems most likely that horizontal planks were slotted between the uprights and that these were jointed at the corners, where corner posts were absent. The combination of the paired uprights and horizontal planking would have made a strong and rigid platform to support a raised floor in buildings which may well have had an upper half-storey. Smaller posts marking a dividing wall, usually found at the western end of the building and defining a space no more than 1.5 metres wide, may have supported a staircase to the upper level. These buildings, like the reconstructions at West Stow, were probably mostly built of timber; had wattle and daub been used to fill the gaps between the main timbers, as occurs in medieval houses, we would have expected to find a lot of it on the site, but we did not.
Plan view of two grubenhäuser and associated post-holes of timber houses
Towards the end of the life of the settlement there is a change in construction techniques, and the larger buildings are constructed with the posts placed in a continuous trench rather than in individual post-holes. At this time the number of grubenhäuser falls, and post-built granaries, based on closely set groups of six or eight large posts, are seen for the first time. It is possible that the roofs of some of the buildings may have been made using shingles - rectangular split wooden tiles - as we found an iron tool of a type still used to split shingles today. In the centre of the village a spring fed a stream that had been deliberately channelled and which seems to have been dammed to form a pond, the waters of which drove a horizontal-wheeled water mill. It was not possible to excavate this, but a concentration of quernstone fragments and the results of a geophysical survey indicate its presence. An Anglo-Saxon mill of this type was excavated in Tamworth during the 1970s; its timbers had been preserved by waterlogged conditions. Beyond the mill the waters drained away following the ancient stream channel which defined one edge of the Anglian cemetery and had been a focus of human activity from the Mesolithic period onwards. A large enclosure to the west of the mill contained debris from threshing, as well as carbonised grain which appears to derive from the late granaries, where some of the grain would have become burnt during the process of drying it prior to storage. The largest concentration of timber houses was found in the north-eastern part of the village, laid out without any surrounding property boundaries on a chalk knoll overlooking the valley to the north. All were aligned east-west with either single or double entrances in the centre of the long walls. In the southern half of the site, around the spring, a complex of enclosures, some of which re-use those originally established during the late Roman period, seems to have served to keep stock away from the fresh water emerging from the spring and to provide enclosures in which stock may have been kept over winter. The evidence indicates that most of these enclosures relate to activity during the second half of the period of occupation, between AD 650 and AD 850, the Middle Saxon period. In one location, 12AE, a large Early Anglo-Saxon building had been replaced by a Middle Saxon building located on a raised platform that overlooked the whole village and was surrounded by a timber fence on three sides. The self-sufficient nature of the village is reflected in the discovery of a number of iron-working furnaces, bread and grain drying ovens and a malt kiln. Pottery was made on site using local clays but, in contrast to the wheel-made pots of the Roman period, these vessels were hand-made and fired in bonfire kilns which have left no evidence. Two bone stamps, used to decorate the pots prior to firing, were discovered amongst the animal bone. Bone and antler and, no doubt, wood and leather were worked on site, the bone and antler being used to make gaming counters, elaborate combs and spoons, as well as the spindle whorls, thread pickers, pins and needles used in textile manufacture. It is most likely that animals and textiles were traded for foreign goods, including quern stones made of lava imported from Germany and some rare items such as a cowrie shell from the Red Sea; wine was probably also imported, in wooden barrels that do not survive.
About a million animal bones were recovered during the excavation, giving us a detailed picture of animal husbandry during this period. They indicate that surplus cattle were commonly slaughtered for their meat at the end of their first or second year, or as their productivity as breeding stock or traction cattle declined. They were probably pastured on the heavy, damp soils of the Vale, where vegetation was lush and water plentiful. Surplus young sheep were also culled during the autumnal months, with additional meat sources coming from older breeders that would have been used primarily for wool production. Tooth wear and indications of disease affecting the bones suggest that the sheep were grazed on the coarser pasture of the Wolds. A high frequency of `penning elbow' indicates that sheep were occasionally corralled, whether for breeding, lambing or shearing. Other species such as pigs, domestic fowl and geese offered additional meat sources, although geese and domestic fowl were apparently reared primarily for their eggs. Perhaps the most interesting aspect of the analysis of the animal bones has been the discovery that there are too few bones of cattle of `market age'; it seems likely that these animals were traded or paid as taxation.
Anglo-Saxon dogs were like deerhounds
The village at Heslerton was not a high status site, but one of a number of similar communities around the Vale of Pickering which would probably have paid taxes to the kings of Northumbria. The evidence from the cemetery and the village does not reflect a strong social hierarchy; there is no single building that is much grander than the others and, whilst we assume that there must have been some sort of village leader or chieftain, the evidence is not obvious. The skeletons in the cemetery show relatively good health, with people of similar stature to ourselves but with a shorter life expectancy, most people not living much beyond middle age. By AD 850, when the site was deserted, the village would have been a thriving community linked to others in the valley and beyond by trade, with travelling artisans bringing specialised goods and skills which could be traded with the community. The end of the village was both deliberate and sudden; the buildings were dismantled, the good timbers no doubt being kept to build a new village, probably located beneath the present village of West Heslerton. Roofing materials and other debris were evidently burnt, leaving a layer of burnt sand - probably blown sand which had been trapped in the thatched roofs - filling the latest features. It is most likely that the village was deliberately moved to a more protected location at the onset of the Viking raids. At East Heslerton we have discovered what appears to be a settlement of this date, established behind the later medieval village, where it would have been sheltered from view by the wooded slopes of the Wolds, in contrast to the early settlements that were effectively in open country.
Artist's impression of how the northern part of the Heslerton Anglo-Saxon village may have looked at around AD 550
Modern Times: The Medieval and later periods
The desertion of the settlement coincides with the beginnings of the English state, in which the role of the individual Anglo-Saxon kingdoms declined until the kings of Wessex dominated the English nation. We have yet to uncover evidence from the Late Saxon period, although carved stonework of this period survives in Sherburn church nearby.
It is probably at this time that the new village gets the name Heslerton, the hamlet amongst the hazel trees; we have no idea what name was used to refer to the excavated village. The two settlements of Heslerton Magna (West) and Heslerton Parva (East) emerge during the medieval period, although only Heslerton is mentioned in the Domesday Book. There has been little opportunity to investigate the medieval landscape. A deserted or `crept' village is situated to the south of East Heslerton, and West Heslerton likewise seems to have moved downhill and out beyond the protective slopes of the dry valley in which the church is situated. Geophysical surveys undertaken in the last two years show that the medieval villages of East and West Heslerton had very similar plans, including large rectangular enclosures based around the church. The medieval landscape was dominated by rig and furrow field systems, in which broad rigs, in some places over a metre high, formed strip fields which were shared amongst the population. Very little of these field systems can be seen today; a small area above the crept village at East Heslerton is an important survival. Although most of the rig and furrow has been ploughed flat over the last 100 years, we are able to detect large areas of these fields through geophysical survey and are gradually reconstructing the shape of the medieval landscape.
The deserted medieval villages of East (left) and West Heslerton (right) revealed by geophysical survey
Small-scale excavations undertaken ahead of development in West Heslerton uncovered parts of a medieval building of probable 14th or 15th century date; this may have been the site of the village brewery known to have existed during the 19th century. The medieval village of West Heslerton was not much smaller than the present village and would have been centred on the church, which was sadly heavily `restored' and rebuilt during the 19th century, removing any evidence of early stonework or of the wall paintings with which the interior would probably have been covered. Near the church stood the manor house in its own enclosure. This arrangement is easier to see at East Heslerton, where a series of crofts and tofts, rectangular enclosures each containing a single property with a garden area, are defined by an enclosing bank and ditch. The houses in the village would have been made either of chalk blocks or of timber with wattle and daub or mud-and-stud construction on a chalk base, and would have had thatched roofs. During the late 18th or 19th century a brick kiln was established on the estate, and the surviving chalk block buildings were encased in brick, with tile and ultimately slate roofs replacing the thatch. It would not have been possible to describe the evolution of this landscape and its people in this booklet, drawing on the results of many hectares of excavation and many tens of hectares of survey, without the continued support of English Heritage. Not only have they supported most of the work, but they have also encouraged us to develop fieldwork, recording and analytical techniques that have earned Heslerton a worldwide reputation. More recently, we have also received support from Cook's Quarry, who are funding excavation ahead of sand extraction to ensure that important evidence is not lost altogether. At a time when television programmes show digs, supported by huge resources, that are `completed' in three days, it may be difficult to appreciate the time required to undertake large excavations.
During the past 25 years we have spent more than 2000 days in the field; but that's another story!
A gregarious species (2) (7), the avocet breeds from April to August in large colonies of between 10 and 70 pairs (5). Sexual maturity is reached at two years old, when the avocet will find a breeding ground, often different to where it was reared. This breeding ground is where the avocet will return each year to breed (7). The nest of the avocet consists of a scrape made in sand, mud or short vegetation on the ground (5), into which three or four eggs are laid (8). The nests within the colony are usually only one metre away from each other (2), with some recorded only 20 to 30 centimetres apart (5). The male and female avocet stay together for the breeding season (7), sharing responsibility for incubating the eggs for between 23 and 25 days (8). The chicks fledge the nest after 35 to 42 days (8). The pair bond between the male and female is only sustained for one breeding season, after which they separate and join a flock to begin migration (7). The migration of northern populations begins between August and October, with the avocets heading south in a flock, stopping in certain areas in great numbers (5). Several thousand individuals may roost together and groups of between 5 and 30 forage collectively (5). The diet of the avocet is primarily composed of aquatic invertebrates, such as insects, crustaceans, worms and molluscs, as well as small fish and plants (5). It usually takes food from exposed mud or from water (2), using a characteristic foraging technique that involves a sweeping motion of the beak, and it also upends in deep water to reach prey (2) (3). A highly territorial bird, the avocet will chase away any unwelcome visitors while breeding, lunging towards them with a lowered head and neck, and may even drive away much larger birds such as the common shelduck (Tadorna tadorna) (2).
Have you and your children been struggling to learn the math facts? The game of Math Card War is worth more than a thousand math drill worksheets, letting you build your children's calculating speed in a no-stress, no-test way. Math concepts: greater-than/less-than, addition, subtraction, multiplication, division, fractions, negative numbers, absolute value, and multi-step problem solving. You will need several decks of math cards. Don't rush to look for these at your school supply store or try to order them through your favorite website. Math cards are normal, poker-style playing cards with the jack, queen, king, and jokers removed. Make one deck of math cards per player. A math deck contains 40 cards, so a single game of Addition War lets a child work 20 problems, and he hears his opponent work 20 more—and if your children are like mine, they will rarely want to stop at just once through the deck. As my students learn their math facts, they need extra practice on the hard-to-remember ones like 6 × 8. With a normal deck of cards, however, I find they turn up far too many problems like 1 × 9 or 2 × 7. To give a greater challenge to older children, I make each player a double deck of math cards, but I remove the aces, deuces, and tens. This gives each player a 56-card deck full of the toughest problems to calculate. [This is an old, classic children's game. I've often been amazed how such a simple thing can keep my kids occupied for hours. In our variations, because the math card decks are only 4/5 the size of a regular card deck, we give each player his own pack of cards. We don't shuffle the decks together at the beginning, although I suppose you could—that would be more like the traditional game, which (at least in our house) is usually played with a single deck shuffled and split between the players.]
How to Play
Basic War—Each player turns one card face up. The player with the greatest number wins the skirmish, placing his own and all captured cards into his prisoner pile. Whenever there is a tie for greatest card, all the players battle: each player lays three cards face down, then a new card face up. The greatest of these new cards will capture everything on the table. Because all players join in, someone who had a low card in the initial skirmish may ultimately win the battle. If there is no greatest card this time, repeat the 3-down-1-up battle pattern until someone breaks the tie. The player who wins the battle captures all the cards played in that turn. When the players have fought their way through the entire deck, count the prisoners. Whoever has captured the most cards wins the game. Or shuffle the prisoner piles and play on until someone collects such a huge pile of cards that the others concede. For most variations, the basic 3-down-1-up battle pattern becomes 2-down-2-up. For advanced games, however, the battle pattern is different: in case of a tie, the cards are placed in a center pile. The next hand is played normally, with no cards turned down, and the winner of that skirmish takes the center pile as well.
Addition War—Players turn up two cards for each skirmish. The highest sum wins.
Advanced Addition War—Turn up three (or four) cards for each skirmish and add them together.
Subtraction War—Players turn up two cards and subtract the smaller number from the larger. This time, the greatest difference wins the skirmish.
Product War—Turn up two cards and multiply.
Advanced Product War—Turn up three (or four) cards and multiply.
Fraction War—Players turn up two cards and make a fraction, using the smaller card as the numerator. Greatest fraction wins the skirmish.
Improper Fraction War—Turn up two cards and make a fraction, using the larger card as the numerator. Greatest fraction wins.
Integer Addition War—Black cards are positive numbers; red cards are negative. The greatest sum wins. Remember that -2 is greater than -7.
Integer Product War—Black cards are positive numbers; red cards are negative. The greatest product wins. Remember that two negative numbers make a positive product.
Wild War—Players turn up three cards and may do whatever math manipulation they wish with the numbers. The greatest answer wins the skirmish.
Advanced Wild War—Black cards are positive numbers; red cards are negative numbers. Players turn up four cards (or five) and may do whatever math manipulation they wish with the numbers. The greatest answer wins the skirmish.
Reverse Wild War—Players turn up three cards (or four, or five) and may do whatever math manipulation they wish with the numbers. The answer with the lowest absolute value (closest to zero) wins the skirmish.
Math War Trumps
The biggest problem with Math War is that it's really just a worksheet in disguise. Children enjoy it more than a worksheet because of the social interaction, but there's no choice or strategy to the game. But you can introduce strategic thinking into your number practice by playing Math War Trumps:
- Players draw two cards from their deck and look at them.
- The player whose turn it is calls the trump: which math operation to do, and whether the low or high answer takes the trick.
- Then all players reveal their cards and calculate.
For even more strategy, let players draw three cards and choose which two to reveal. Then they draw two more to replenish their hand for the next turn.
Update: Math War with Special Decks
Check out all the wonderful ways for middle and high school students to play Math War. Algebra, geometry, and trig decks created by teachers and shared free for your use!
More Ways to Play
Multi-Digit War—Turn up two or three cards and create a 2-digit or 3-digit number.
Multi-Digit Subtraction War—Turn up three cards. Make two of them into a 2-digit number, then subtract the third. Example: Suppose you turn up 3, 4, and 5. Should you arrange them as 54-3 or 45-3 or 35-4 or . . . ?
Multi-Digit Product War—Turn up three cards. Make two of them into a 2-digit number, then multiply by the third. Example: Suppose you turn up 3, 4, and 5. Should you arrange them as 5×43 or 4×53 or 3×54 or . . . ? (A brute-force sketch appears after this list.)
My Closest Neighbor—Instead of turning up cards at random, each player draws a hand of five cards. Then turn up a target card such as "Closest to 1/2" and try to make a fraction from two cards in your hand that will be near the target but not equal to it. Chris posted a set of printable target cards at her blog. I modified the game for regular playing cards: Fraction Game: My Closest Neighbor.
Logarithm War—Requires a special deck of cards. Download from Kate's blog: This Game Really Is Worth 1000 Worksheets in doc or pdf format. Sadly lost to the time-monster who eats old pages on the internet.
Logs and Trig War—Jim extended Kate's logarithm war to include trig functions. Double the cards, double the fun! Download from Jim's blog: War: what is it good for?
Speed Racer—For two players of evenly-matched ability. Each player turns up one card, and the first player who calls out the correct sum (or difference, or product) of those two cards wins the pair.
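Weighing the possible arrangements in Multi-Digit Product War is a nice exercise in itself, so here is a minimal brute-force sketch in Python; the best_product helper is my own illustration, not part of the game's published materials.

```python
from itertools import permutations

def best_product(cards):
    """Try every way of turning three cards into (2-digit number) x (third
    card) and return the arrangement with the greatest product."""
    best = None
    for a, b, c in permutations(cards):
        value = (10 * a + b) * c
        if best is None or value > best[0]:
            best = (value, f"{c} x {10 * a + b}")
    return best

value, expr = best_product([3, 4, 5])
print(expr, "=", value)  # 5 x 43 = 215 beats 4 x 53 = 212 and 3 x 54 = 162
```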
- Can you think of another variation to share?
If you enjoyed this post, check out my Math You Can Play book series featuring math games for all ages.
Hat tips: Marni suggested the Multi-Digit variation in the comments section below, but I didn't think to add it as an update until Mary from the Albany Area Math Circle suggested the Multi-Digit Product War variation in a comment on another post. And then her extension of the game made me think of the Multi-Digit Subtraction War variation. Math tutor and games enthusiast Nancy Rooker suggested the Trumps variation in an email. Amy suggested the Speed Racer variation.
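For families who enjoy a bit of programming alongside the cards, here is a rough Python sketch of a single Addition War skirmish as described under How to Play above. The deck-building helper and all the names are my own inventions, and the tie-breaking battle pattern is left out for brevity.

```python
import random

def make_math_deck():
    """A 40-card math deck: ace (1) through ten in four suits."""
    return [rank for rank in range(1, 11) for _ in range(4)]

def addition_war_round(deck_a, deck_b):
    """Each player turns up two cards; the greater sum wins the skirmish."""
    hand_a = [deck_a.pop(), deck_a.pop()]
    hand_b = [deck_b.pop(), deck_b.pop()]
    print(f"A: {hand_a} -> {sum(hand_a)} | B: {hand_b} -> {sum(hand_b)}")
    if sum(hand_a) != sum(hand_b):
        return "A" if sum(hand_a) > sum(hand_b) else "B"
    return "tie"  # a real game now plays out the battle pattern

deck_a, deck_b = make_math_deck(), make_math_deck()
random.shuffle(deck_a)
random.shuffle(deck_b)
print("Skirmish winner:", addition_war_round(deck_a, deck_b))
```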
Analyzing the features of exponential graphs through the example of y=5ˣ. Created by Sal Khan and Monterey Institute for Technology and Education.
- Can anyone explain to me why a negative power is always a fraction?
- Technically, an exponent expresses multiplication, and is shown with a positive number. The opposite of a positive number is (obviously) a negative number, so in keeping with the rules of exponents, this must somehow be the "opposite" of multiplication, which happens to be division. Therefore, a negative exponent always expresses a division (written as a fraction). That's the reason.
- What's a slope?
- The answers above are all right, but there's another idea about slope: the slope of a line can be expressed as a trigonometric function. It's the tangent of the angle formed between the x-axis and the line itself.
- Why did Sal draw curved lines between the points? Why not straight lines?
- Exponential functions do not change in a constant manner. Linear equations have a constant slope. With exponential equations, the change accelerates as the exponent increases. See this video: https://www.khanacademy.org/math/algebra/introduction-to-exponential-functions/exponential-vs-linear-growth/v/exponential-vs-linear-growth
- So no matter what, the graph can't go below the x-axis?
- If a is a positive number, then a^x is greater than 0 for all x. So, the graph of a^x will never intersect, nor go below, the x-axis.
- How do you do negative exponents?
- A negative exponent tells you to take the reciprocal of the base raised to the positive exponent. For example, 5^-2 = 1/(5^2) = 1/25. Rewritten that way, you can plug it into your calculator.
- Don't positive exponential functions always rise upward from the x-axis, while negative exponential functions slide downward to the x-axis? In the positive function, both x and y values increase, I presume, and in the negative the x value increases while the y value decreases?
- What is the difference between exponents and indices?
- T Ross, exponents are notations that indicate a base number is raised to a power, or multiplied by itself a given number of times. In writing or word processing programs that allow it, exponents are written as superscript (above the base number). In a plain text editor (like this one), exponents are noted using the ^ symbol. 2^3 means 2 multiplied together 3 times: 2*2*2 = 8. x^4 means x multiplied together 4 times: x*x*x*x. Indices are a notation that indicates the position of an element in a sequence, array, or matrix. In a word processing program that allows it, indices are shown in subscript (below the name or variable assigned to the sequence). In a plain text editor, indices are indicated using the _. In the example in the video, Sal uses the sequence of numbers -2, -1, 0, 1, 2 for the x values. If you were to call this sequence X, then X_1 = -2, X_2 = -1, X_3 = 0, X_4 = 1, X_5 = 2.
- Sorry if this question is stupid, but I really struggle with math: where does the two in the equation come from?
- If you are talking about the 2 in the column labeled x, that is a value that Sal selected. He selected all of the x values and used them to calculate the y values.
- Why does any number to the 0 power always become a 1?
Why not 0?
- As the saying goes, "Do the Math!" At this stage you may not know all you need to with regard to the properties of exponents. Here is one explanation that requires knowing that (x^a)/(x^b) = x^(a-b). You know that, for example, 5/5 = 1, correct? It is because the numerator and denominator are equal. Suppose you had (5^6)/(5^6). Since the numerator and denominator are equal, this is also equal to 1. Now, using the exponential property that (x^a)/(x^b) = x^(a-b), we have (5^6)/(5^6) = 5^(6-6) = 5^0. And since (5^6)/(5^6) = 1 and (5^6)/(5^6) = 5^(6-6), that means 5^0 = 1 as well. You will know lots more about exponential functions when you finish this course!
- At 4:06, the video said that the more negative the exponent, the closer we get to zero, but never quite zero. So, will we ever be able to reach zero on the number line?
- Not with a "normal" exponential function, because 0 is a horizontal asymptote. We can shift the exponential function down by subtracting a number at the end, such as y = a(b)^x - 3; this shifts the asymptote down 3, which gives us an x-intercept, but then the curve will get really close to -3 without ever reaching it. There is an old conundrum that if you are 10 feet from a wall and you go 1/2 the way there every minute, will you ever reach the wall? The theoretical answer is no, because you just keep dividing a number by 2, but practically you quickly cannot measure what 1/2 of the way is.
We're asked to graph y is equal to 5 to the x-th power. And we'll just do this the most basic way. We'll just try out some values for x and see what we get for y. And then we'll plot those coordinates. So let's try some negative and some positive values. And I'll try to center them around 0. So this will be my x values. This will be my y values. Let's start first with something reasonably negative but not too negative. So let's say we start with x is equal to negative 2. Then y is equal to 5 to the x power, or 5 to the negative 2 power, which we know is the same thing as 1 over 5 to the positive 2 power, which is just 1/25. Now let's try another value. What happens when x is equal to negative 1? Then y is 5 to the negative 1 power, which is the same thing as 1 over 5 to the first power, or just 1/5. Now let's think about when x is equal to 0. Then y is going to be equal to 5 to the 0-th power, which we know anything to the 0-th power is going to be equal to 1. So this is going to be equal to 1. And then finally, we have-- well, actually, let's try a couple of more points here. Let me extend this table a little bit further. Let's try out x is equal to 1. Then y is 5 to the first power, which is just equal to 5. And let's do one last value over here. Let's see what happens when x is equal to 2. Then y is 5 squared, 5 to the second power, which is just equal to 25. And now we can plot it to see how this actually looks. So let me get some graph paper going here. My x's go as low as negative 2, as high as positive 2. And then my y's go all the way from 1/25 all the way to 25. So I have positive values over here. So let me draw it like this. So this could be my x-axis. That could be my x-axis. And then let's make this my y-axis. I'll draw it as neatly as I can. So let's make that my y-axis. And my x values, this could be negative 2. Actually, make my y-axis keep going. So that's y. This is x. That's a negative 2. That's negative 1. That's 0. That is 1. And that is positive 2. And let's plot the points. x is negative 2. y is 1/25.
Actually, let me make the scale on the y-axis. So let's make this. So we're going to go all the way to 25. So let's say that this is 5. Actually, I have to do it a little bit smaller than that, too. So this is going to be 5, 10, 15, 20. And then 25 would be right where I wrote the y, give or take. So now let's plot them. Negative 2, 1/25. 1 is going to be like there. So 1/25 is going to be really, really close to the x-axis. That's about 1/25. So that is negative 2, 1/25. It's not going to be on the x-axis. 1/25 is obviously greater than 0. It's going to be really, really, really, really, close. Now let's do this point here in orange, negative 1, 1/5. Negative 1/5-- 1/5 on this scale is still pretty close. It's pretty close. So that right over there is negative 1, 1/5. And now in blue, we have 0 comma 1. 0 comma 1 is going to be right about there. If this is 2 and 1/2, that looks about right for 1. And then we have 1 comma 5. 1 comma 5 puts us right over there. And then finally, we have 2 comma 25. When x is 2, y is 25. 2 comma 25 puts us right about there. And so I think you see what happens with this function, with this graph. The further in the negative direction we go, 5 to ever-increasing negative powers gets closer and closer to 0, but never quite. So we're leaving 0, getting slightly further, further, further from 0. Right at the y-axis, we have y equal 1. Right at x is equal to 0, we have y is equal to 1. And then once x starts increasing beyond 0, then we start seeing what the exponential is good at, which is just this very rapid increase. Some people would call it an exponential increase, which is obviously the case right over here. So then if I just keep this curve going, you see it's just going on this sometimes called a hockey stick. It just keeps on going up like this at a super fast rate, ever-increasing rate. So you could keep going forever to the left, and you'd get closer and closer and closer to 0 without quite getting to 0. So 5 to the negative billionth power is still not going to get you to 0, but it's going to get you pretty darn close to 0. But obviously, if you go to 5 to the positive billionth power, you're going to get to a super huge number because this thing is just going to keep skyrocketing up like that. So let me just draw the whole curve, just to make sure you see it. Over here, I'm not actually on 0, although the way I drew it, it might look like that. I'm slightly above 0. I'm increasing above that, increasing above that. And once I get into the positive x's, then I start really, really shooting up.
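If you would like to reproduce Sal's table without punching each value into a calculator, here is a small Python sketch that evaluates y = 5^x for integer x from -2 to 2, using exact fractions so the negative-exponent rows print as 1/25 and 1/5 rather than decimals.

```python
from fractions import Fraction

base = 5
for x in range(-2, 3):
    # A negative exponent means the reciprocal: 5**-2 == 1/(5**2) == 1/25.
    y = Fraction(base) ** x
    print(f"x = {x:>2}  ->  y = {y}")
```

Extending the range further to the left shows the values crowding toward 0 without ever reaching it - the horizontal asymptote discussed in the questions above.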
Includes extension activities for Wrinkle in Time that help students build math, science and art skills.
-Compare Wrinkle in Time planets (worksheet)
-Explore the size of the Universe (teacher-led activity)
-Create your own planet (worksheet)
-Instructions for paper-mache planets
-Build a tesseract
-Create an outfit for Mrs. Who
-STEM Challenge: Build a capsule that can travel through time
The biggest fish in the ocean, whale sharks, are incredible animals. They can reach lengths of over 18 metres and weigh more than 19,000kg. Each shark has a unique pattern of spots on its body, like a fingerprint. The number of whale sharks in our oceans has been in decline for years, and as a result the species is endangered. Recently, efforts to conserve the animals through ecotourism have been severely impacted by the pandemic. Marine protected areas (MPAs), where human activities like fishing are restricted, are an important tool when it comes to the global conservation of many animals in the sea, including whale sharks. But our new study shows these areas might not be the safe haven we once thought they were. We looked into how long whale sharks spend within an MPA and how this is impacted by injuries. We found injuries from boat strikes delayed the animals' development, making them spend longer in the area before going out into the wider ocean. This is the first study to link human activity with a change in whale sharks' life stages. Our work poses difficult questions for the regulation of boat traffic and wildlife tourism within protected areas. In the past, whale sharks were easy targets for fisheries, which harvested their meat and the oil from their fatty livers. International demand for shark fins means the fish are still hunted in many areas of the world. To conserve whale sharks, charities and activists have aimed to shift the focus away from hunting and towards ecotourism. Despite their size, whale sharks are filter feeders, with throats incapable of swallowing anything larger than a thumb-sized sprat. The slow-moving grace of the creatures, and their seeming indifference to the presence of humans, ensures their position on the bucket lists of many snorkelers, divers and swimmers. Nowhere is this more evident than in the South Ari Atoll MPA (Sampa) in the Maldives. There, whale shark-related tourism was a cornerstone of the economy prior to the pandemic, bringing in $9.4 million (£6.9m) a year. Unfortunately, anecdotal reports suggest the loss of income from tourism in 2020 has led to an upsurge in illegal hunting and finning, threatening conservation efforts. The waters in Sampa are one of very few places in the world where the usually transient whale sharks take up semi-permanent residence. Most of the sharks in the area are immature males, so the area is referred to as a developmental habitat. Places like Sampa give young sharks somewhere to build up strength before moving off into the wider ocean. For marine biologists and conservationists, this provides a rare opportunity to study the behaviour of individual animals for long periods of time. Much of whale shark ecology remains a mystery. We are yet to identify where they give birth to their young, or even to confirm where they mate, which is a roadblock for conservation. Previously, it was assumed tourist attention had very little impact on sharks. But in recent years, mounting evidence has emerged suggesting human presence is altering their behaviour. Our study is based on 15 years of dedicated surveying by the Maldives Whale Shark Research Programme (MWSRP), along with citizen science data. We estimated whale shark abundance in Sampa and found it has been decreasing steadily each year, falling from 48 sharks in 2014 to 32 in 2019. The overall decline in whale shark abundance in Sampa falls in line with global trends for the endangered species. Worryingly, we found that 61% of sharks in the study had severe injuries.
While some sharks arrive in the MPA with injuries, others acquire them during their residency. We did not directly observe boat strikes happening within the MPA. However, based on the modelling, we can say sharks likely acquired injuries during their residency in the area. We modelled how long whale sharks were staying in the area, and how this related to injuries. For the first time, we found sharks with severe injuries spend longer in the developmental habitat than those without. Our study suggests injured sharks remain in the area because they have access to food and warm water, which supports their recovery. Unfortunately, human activities within the developmental habitat mean sharks continue to acquire injuries while they are there. Whale sharks spend a lot of time cruising just below the surface of the ocean, feeding on plankton and small animals. This puts them right in the path of boats. Most injuries we observed were clearly caused by humans, ranging from abrasions from hulls to fins cut off by propellers. The majority of boat traffic within Sampa is related to tourism, with vessels carrying snorkelers and divers. Sharks are amazing healers; given time, they can recover from severe injuries. However, even if injuries don't kill them, they alter their life history. Whale sharks journey across vast distances during their long lifetimes, which can see them reach ages of more than 130 years. This means they often cross political jurisdictions and are subject to various levels of exploitation. To ensure sharks have a safe space to recover, we suggest changes to the management of the MPA. Speed limit zones would help prevent further injuries. Converting the current voluntary guidelines for tourist encounters into enforceable regulations would also help safeguard the animals. The survival and health of juvenile animals is paramount to the future of the species. Protecting them at this formative part of their life cycle can have a global impact.
- The voltage of the battery
The nominal voltage of each cell of the battery is 2V, and the actual voltage varies with charging and discharging. At the end of charging the voltage is 2.5~2.7V, and it then drops slowly to a steady state of about 2.05V. If the battery is used as the power source, the voltage drops quickly to about 2V when discharge starts, and then falls slowly, holding between 1.9V and 2.0V. When the discharge is close to the end, the voltage drops quickly to 1.7V; once the voltage falls below 1.7V, discharge should be stopped, otherwise the electrodes will be damaged. After use is stopped, the battery voltage can recover to 1.98V by itself.
- The capacity of the battery
(1) The concept of battery capacity. The amount of electricity that a fully charged lead-acid battery can deliver when discharged to a specified end voltage under given discharge conditions is called the battery capacity, represented by the symbol C. The commonly used unit is the ampere-hour (A·h). The discharge time rate is usually indicated as a subscript to C: for example, C10 denotes the capacity at the 10-hour discharge rate, and C120 the capacity at the 120-hour rate. Battery capacity is divided into theoretical capacity, actual capacity and rated capacity. The theoretical capacity is the highest capacity value calculated from the mass of the active material according to Faraday's law. The actual capacity is the amount of electricity the battery can output under given discharge conditions; because side reactions occur alongside the main cell reaction, and for various other reasons, the utilisation rate of the active material cannot reach 100%, so the actual capacity is far lower than the theoretical capacity. The rated capacity (abroad also called the nominal capacity) is the minimum amount of electricity the battery is required, at the design stage, to deliver under the discharge conditions laid down in national or departmental standards; for communication batteries these generally stipulate discharge at the 10-hour rate to the termination voltage in a 25°C environment.
(2) Factors affecting the actual capacity of the battery. The actual capacity of the battery is mainly related to the quantity and utilisation of the positive and negative active materials. The utilisation rate of the active materials is mainly affected by the discharge regime, the electrode structure and the manufacturing process. During use, the actual capacity is affected by the discharge rate, the discharge regime, the termination voltage and the temperature.
- The discharge rate of the battery
According to the size of the discharge current, the discharge rate is expressed either as a time rate or as a current rate; both forms are in common use. The time rate is the length of time taken, under specified discharge conditions, to discharge to the end-of-discharge voltage. According to the IEC standard, the discharge time rates are the 20, 10, 5, 3, 1 and 0.5 hour rates, identified as 20h, 10h, 5h, 3h, 1h, 0.5h, etc. The higher the discharge rate, the greater the discharge current, the shorter the discharge time, and the less capacity the battery can deliver.
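As a quick worked illustration of the time-rate arithmetic, here is a minimal Python sketch assuming a hypothetical battery rated C10 = 100 A·h; the figures are illustrative only.

```python
# Sketch: nominal discharge current at each IEC time rate, assuming a
# hypothetical battery with a 10-hour-rate capacity of 100 Ah.
RATED_C10_AH = 100.0  # assumed rated capacity at the 10h rate

for hours in (20, 10, 5, 3, 1, 0.5):
    current = RATED_C10_AH / hours  # rate current = capacity / time
    print(f"{hours:>4}h rate -> nominal discharge current {current:.1f} A")

# Caveat: the capacity a real battery can deliver shrinks as the rate gets
# faster, so the short-rate currents computed from C10 are optimistic.
```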
- The end voltage of the battery
The termination voltage is the lowest working voltage to which a battery should be discharged; below this point the battery is no longer suitable for discharging (at the limit it can still be recharged and used repeatedly). To prevent damage to the plates, the various standards stipulate the termination voltage of the battery when discharging at different rates and temperatures. For backup power series batteries the termination voltage for 10-hour rate and 3-hour rate discharge is 1.80V/cell, and for 1-hour rate discharge it is 1.75V/cell. Because of the characteristics of lead-acid batteries, continuing to lower the termination voltage releases little extra capacity, while too low a termination voltage does great damage to the battery - especially if it is discharged to 0V and not recharged in time - and greatly shortens its life. For solar batteries, the end-of-discharge voltage is designed differently for different models and uses; the final voltage depends on the discharge rate and the requirements. Generally, for small-current discharges at rates longer than 10h the termination voltage is set slightly higher, and for large-current discharges at rates shorter than 10h it is set slightly lower.
- Cycle life of battery
One complete charge and discharge of the battery is called a cycle. Under specified discharge conditions, the number of cycles the battery can withstand before its usable capacity falls to a specified level is called the cycle life. For backup power supplies the battery life is generally measured by the float-charge life - for example, the float life of a valve-regulated sealed lead-acid battery is generally more than 10 years - but the cycle life can also be used. For a backup power battery cycled at 100% DOD, that is, discharged to the end voltage of 1.8V per cell each time, the cycle life is generally 100~200 cycles; after that many cycles the capacity delivered down to 1.8V falls below 80% of the rated capacity, and at that point the battery's life is considered ended. The main factor affecting the cycle life of the battery is the performance and quality of the product, followed by the quality of the maintenance work. Battery life depends on a combination of factors: not only internal factors of the plates, such as the composition of the active material, the crystal type (high-temperature or room-temperature curing), the plate size and the grid material and structure, but also external factors, such as discharge rate and depth, working conditions (temperature, pressure, etc.) and the standard of maintenance.
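The 80%-of-rated-capacity end-of-life rule can be illustrated with a toy model in Python. The fade-per-cycle figure below is an assumption chosen only so that the result lands inside the 100~200 cycle range quoted above; it is not a measured value.

```python
# Toy model: repeated 100% DOD cycles, with an assumed fixed capacity
# fade per cycle; end of life when deliverable capacity < 80% of rated.
RATED_AH = 100.0
FADE_PER_CYCLE = 0.0015  # assumed 0.15% of rated capacity lost per cycle

capacity = RATED_AH
cycles = 0
while capacity >= 0.8 * RATED_AH:
    capacity -= RATED_AH * FADE_PER_CYCLE
    cycles += 1

print(f"End of life after about {cycles} full cycles")  # ~134 with these numbers
```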
The ohmic resistance is composed mainly of the resistances of the electrode material, the separator, the electrolyte, and the terminals, and is also related to the battery's size, structure, and assembly. Polarization internal resistance is caused by electrochemical polarization and concentration polarization; it is the resistance generated by polarization at the two electrodes during the chemical reactions of discharge and charge. Polarization resistance is related not only to the battery's manufacturing process, electrode structure, and the activity of the active materials, but also to factors such as the operating current and the temperature. The internal resistance of the battery strongly affects its operating voltage, operating current, and output energy, so the smaller the internal resistance, the better the battery's performance; the sketch below illustrates the voltage sag it causes under load.
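A minimal numeric illustration of that claim; the resistance values here are assumptions chosen for the example, not figures from the text, and real values depend on cell size, state of charge, and temperature.

```python
# Terminal-voltage sketch: under load, the cell voltage sags by roughly
# I * (R_ohmic + R_polarization) below the open-circuit EMF.
emf_v = 2.05        # open-circuit volts per cell (steady state, from the text)
r_ohmic = 0.0015    # ohms, assumed
r_polar = 0.0010    # ohms, assumed

for current_a in (1, 10, 100):
    sag_v = current_a * (r_ohmic + r_polar)
    print(f"{current_a:>4} A: drop {sag_v * 1000:6.1f} mV "
          f"-> terminal {emf_v - sag_v:.3f} V")
```

With these numbers the drop is negligible at 1 A (2.5 mV) but reaches 250 mV at 100 A, matching the "hundreds of millivolts" figure quoted above.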
Eye color is a hereditary trait that depends on the genes of both parents, as well as a little bit of mystery. The color of the eye is based on the pigments in the iris, a colored ring of muscle at the center of the eye (around the pupil) that helps control the amount of light that comes into your eye. Eye color falls on a spectrum that can range from dark brown, to gray, to green, to blue, with a whole lot of variation in between. The genetics of eye color are anything but straightforward. In fact, children are often born with a different eye color than either of their parents. For some time the belief was that two blue-eyed parents could not have a brown-eyed child; while it's not common, however, this combination can and does occur. Genetic research into eye color is an ongoing pursuit, and while researchers have identified certain genes that play a role, they still do not know exactly how many genes are involved or to what extent each gene affects the final eye color. Looking at it simply, the color of the eye is based on the amount of the pigment melanin in the iris. Large amounts of melanin result in brown eyes, while blue eyes result from smaller amounts of the pigment. This is why babies born with blue eyes (who typically have smaller amounts of melanin until they are about a year old) often experience a darkening of their eye color as they grow and develop more melanin in the iris. In adults, the most common eye color worldwide is brown, while lighter colors such as blue, green, and hazel are found predominantly in the Caucasian population.
Abnormal Eye Color
Sometimes the color of a person's eyes is not normal. Here are some interesting causes of this phenomenon. Heterochromia, for example, is a condition in which the two eyes are different colors, or part of one eye is a different color. It can be caused by genetic inconsistencies or by issues that occur during the development of the eye, or it can be acquired later in life due to an injury or disease. Ocular albinism is a condition in which the eye is a very light color due to low levels of pigmentation in the iris, the result of a genetic mutation. It is usually accompanied by serious vision problems. Oculocutaneous albinism is a similar mutation in the body's ability to produce and store melanin that affects skin and hair color in addition to the eyes. Eye color can also be affected by certain medications. For example, a certain glaucoma eye drop is known to darken light irises to brown, as well as lengthen and darken eyelashes.
Eye Color - It's More Than Meets the Eye
It is known that light eyes are more sensitive to light, which is why it might be hard for someone with blue or green eyes to go out into the sun without sunglasses. Light eyes have also been shown to be a risk factor for certain conditions, including age-related macular degeneration (AMD).
Color Contact Lenses
While we can't pick our eye color, we can always play around with different looks using colored contact lenses. Just be sure that you get a proper prescription for any contact lenses, including cosmetic colored lenses, from an eye doctor! Wearing contact lenses that were obtained without a prescription could be dangerous to your eyes and your vision.
This is the first course in a series of four that will give you the skills needed to start your career in bookkeeping. If you have a passion for helping clients solve problems, this course is for you. In this course, you will be introduced to the role of a bookkeeper and learn what bookkeeping professionals do every day. You will dive into the accounting concepts and terms that will provide the foundation for the next three courses. You will learn how to work your way through the accounting cycle and be able to read and produce key financial statements. By the end of this course, you will be able to:
- Define accounting and the concepts of accounting measurement
- Explain the role of a bookkeeper and common bookkeeping tasks and responsibilities
- Summarize the double-entry accounting method
- Explain the ethical and social responsibilities of bookkeepers in ensuring the integrity of financial information
No previous bookkeeping or accounting experience required.
- An adjective is a type of word which usually tells about the properties of people, things, and other nouns (e.g., new, small, red, & bad are adjectives). Usually it can be used before a noun or after a linking verb. It can usually be graded and described by adverbs.
- He uses many adjectives in his stories to give us a clear picture.
- Some combinations of adjective and noun are quite common.
The focal length is a measure of how a lens converges light. It can be used to determine the magnification factor of the lens and, given the size of the sensor, to calculate the angle of view. A standard reference used for comparisons is the 35 mm format, which is a sensor of size 36×24 mm. A standard wide-angle lens would be around 28 to 35 millimeters in 35 mm format terms. The smaller the number, the wider the lens.
The native focal length of the sensor cannot be used for comparisons between different cameras unless the sensors have the same size; therefore, the focal length in 35 mm terms is a better reference. For the same sensor, the smaller the number, the wider the lens.
Indicates the type of image stabilization this lens has:
The horizontal field of view in degrees this lens is able to capture when using the maximum resolution of the sensor (that is, matching the sensor aspect ratio and not using sensor cropping).
The vertical field of view in degrees this lens is able to capture when using the maximum resolution of the sensor (that is, matching the sensor aspect ratio and not using sensor cropping).
Shows the magnification factor of this lens compared to the primary lens of the device (calculated by dividing the focal length of the current lens by the focal length of the primary lens). A magnification factor of 1 is shown for the primary camera; ultra-wide cameras have magnification factors less than 1, and telephoto cameras have magnification factors greater than 1.
Physical size of the sensor behind the lens in millimeters. All other factors being equal (especially resolution), the larger the sensor, the more light it can capture, as each physical pixel is bigger.
The size (side) of an individual physical pixel of the sensor in micrometers. All other factors being equal, the larger the pixel size, the better the image quality: each photoreceptor can capture more light and can potentially better differentiate the signal from the noise, yielding better image quality, especially in low light.
The maximum picture resolution at which this sensor outputs images in JPEG format. Sometimes, if the sensor can also provide images in RAW (DNG) format, those can be slightly larger because of an additional area used for calibration purposes (among others). Unfortunately, firmware restrictions for third-party apps also mean that the maximum picture resolution exposed to third-party apps might be considerably lower than the actual resolution of the sensor, so the resolution shown here is the maximum resolution third-party apps can access from this sensor.
The available output picture formats this camera is able to deliver:
The focusing capabilities of this camera:
Displays whether this lens can be set to focus at infinity or not. Even if the camera supports autofocus and manual focus, the focus range the lens can adjust to may not include the infinity position. This property is important for astrophotography, as in such low-light scenarios the automatic focus does not work reliably.
The distance from which objects that are further away from the camera always appear in focus.
Therefore, if the camera is set to focus at infinity, any object further away than this distance will appear in focus.
The range of supported manual exposure in seconds (minimum or shortest to maximum or longest). This camera might support exposures outside this range, but only in automatic mode and not in manual exposure mode. Also, note that this range is the one third-party apps have access to, as the first-party app preinstalled on the phone by the manufacturer might have privileged access to the hardware and offer longer or shorter exposure times.
The range of supported manual sensitivity (ISO). This camera might support ISO sensitivities outside this range in automatic mode. Also, note that this range is the one third-party apps have access to, as the first-party app preinstalled on the phone by the manufacturer might have privileged access to the hardware and offer an extended manual sensitivity range.
The maximum ISO sensitivity possible in manual mode is usually reached by digital amplification of the signal from the maximum supported analog sensitivity. This information, if available, lets you know the maximum analog sensitivity of the sensor.
The data in this database are provided "as is", and FGAE assumes no responsibility for errors or omissions. The User assumes the entire risk associated with its use of these data. FGAE shall not be held liable for any use or misuse of the data described and/or contained herein. The User bears all responsibility in determining whether these data are fit for the User's intended use.
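The quantities described in these entries are related by simple geometry. The sketch below is an illustration under assumed numbers (an invented ultra-wide module), not data from this database: the 35 mm-equivalent focal length scales the native focal length by the ratio of sensor diagonals, the field of view follows from focal length and sensor dimensions, the magnification factor is the ratio of focal lengths, and the hyperfocal distance uses an assumed circle of confusion.

```python
import math

FULL_FRAME_DIAG = math.hypot(36.0, 24.0)  # 35 mm format diagonal, ~43.27 mm

def equivalent_focal_mm(focal_mm, sensor_w_mm, sensor_h_mm):
    """Focal length in 35 mm terms: scale by the ratio of diagonals."""
    crop = FULL_FRAME_DIAG / math.hypot(sensor_w_mm, sensor_h_mm)
    return focal_mm * crop

def field_of_view_deg(focal_mm, sensor_dim_mm):
    """Angle of view along one sensor dimension, in degrees."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

def hyperfocal_mm(focal_mm, f_number, coc_mm=0.002):
    """Distance beyond which everything appears in focus (assumed CoC)."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Assumed ultra-wide module: 2.2 mm lens at f/1.8 on a ~4.6 x 3.5 mm sensor,
# with a 5.6 mm primary lens for the magnification comparison.
f, w, h, f_primary = 2.2, 4.6, 3.5, 5.6
print(f"35 mm equivalent: {equivalent_focal_mm(f, w, h):.1f} mm")
print(f"horizontal FoV:   {field_of_view_deg(f, w):.1f} deg")
print(f"vertical FoV:     {field_of_view_deg(f, h):.1f} deg")
print(f"magnification:    {f / f_primary:.2f}x  (< 1, so ultra-wide)")
print(f"hyperfocal:       {hyperfocal_mm(f, 1.8) / 1000:.2f} m")
```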
The immune system faces numerous challenges in protecting us against attacks by microorganisms. It must respond quickly and must locate and destroy microbes that can enter any part of the body. To do this it relies on several kinds of protective cells with different natures and functions. Read on to find out what these immune system cells are and what each one does.
Cells of the immune system
Almost all immune system cells are derived from hematopoietic cells in the bone marrow, which then differentiate to generate the different populations. A summary of the main types of immune system cells, with their respective functions, follows.
- Monocytes and macrophages
Monocytes are cells that make up 5 to 10 percent of white blood cells. They are found lining the walls of blood vessels in organs such as the liver and spleen, where they capture microorganisms circulating in the blood. When monocytes leave the bloodstream and enter tissues, they change shape and size and become macrophages. Macrophages are cells whose primary function is to ingest microbes through the process of phagocytosis and then kill them. To do so, macrophages enclose the microbe in a cytoplasmic organelle (a phagosome) and fuse it with lysosomes. The resulting compartment contains reactive nitrogen and oxygen species that are toxic to microbes; together with proteolytic enzymes, these constitute a fundamental mechanism for eliminating pathogens. Macrophages are activated by microbial substances and recruit other immune cells to the site of infection, amplifying the immune response. For example, they serve as antigen-presenting cells to activate T lymphocytes. In addition, macrophages can ingest the body's own necrotic cells and spent immune cells, just like neutrophils; this is part of the cleanup process that occurs after an infection.
- Neutrophils
Neutrophils are a type of cell found in the bloodstream that can quickly ingest and kill microorganisms. They are the most abundant circulating immune cell population and play a major role in innate inflammatory reactions. Once inflammation occurs, neutrophils rapidly travel to the site of infection, where they perform their primary function, phagocytosis, particularly of microbes that have undergone opsonization. In addition to phagocytosis, neutrophils can attack pathogens in other ways: they can release granules filled with enzymes and aggressive substances such as defensins, and they can export net-like traps (neutrophil extracellular traps, NETs) into the extracellular medium.
- Dendritic cells
Dendritic cells (DCs) are immune cells that fulfill a unique communication role between the innate and adaptive immune responses. On the one hand, they act as sentinels, detecting the presence of microbes and initiating innate defense reactions; on the other, they activate adaptive responses by capturing microbial peptides and presenting them to T lymphocytes. Dendritic cells can fulfill this dual function because they carry several types of receptors. For example, TLRs respond to microbial molecules; when these receptors bind, cytokines are released and other immune cells are rapidly recruited to the site of infection. Furthermore, all dendritic cells express MHC class I and class II molecules, which explains their ability to engage the adaptive immune system by binding to T lymphocytes.
- Lymphocytes
Lymphocytes are the primary immune cells of the adaptive response.
All lymphocytes are similar in shape, and their appearance does not reflect their variety of functions. These are the cells responsible for generating antibodies and for immunological memory, which is why they play a unique role in the transfer of immunity. A fascinating feature of lymphocytes is their diversity of receptors with different specificities: there are millions of lymphocyte clones in the body, each specific for a certain antigen. This diversity of recognition is generated as lymphocytes develop; when a clone encounters its antigen it proliferates, a process known as clonal expansion. There are two main classes of lymphocytes, B and T cells. B lymphocytes recognize many different antigens and evolve into antibody-secreting cells. The antibodies they produce are essential for neutralizing microbes, activating complement, and promoting engulfment. T lymphocytes can have various functions and subtypes. For example, they can act as helpers, recognizing antigens on presenting cells and stimulating other immune system responses. They can also evolve into cytotoxic T lymphocytes that recognize antigens on infected cells and kill those cells directly, and some fulfill a regulatory function, preventing an immune response against the body's own cells.
- NK cells
Natural killer (NK) cells are a subtype of lymphocytes that play fundamental roles in the innate immune system. They are so named because they readily kill virus-infected cells and do not require the thymic education that T cells require. NK cells are cytotoxic; they contain small granules in their cytoplasm with particular proteins such as perforin and proteases known as granzymes. NK cells are derived from the bone marrow and are present in relatively low numbers in the bloodstream and tissues. They are essential in defending against viruses and preventing cancer.
- Eosinophils, basophils, and mast cells
Eosinophils, basophils, and mast cells are three additional cell types of the immune system that share the property of having cytoplasmic granules filled with inflammatory and antimicrobial molecules. These immune cells play essential roles in fighting parasites and in allergic diseases. Mast cells are derived from the bone marrow and are found in the skin and mucous epithelia. They have granules filled with histamine, and when activated they promote inflammation. Basophils are rare cells, generally found circulating in the blood (they represent about 1% of circulating immune cells). Although their function is not entirely clear, they are known to have mast cell-like granules and to become activated upon binding of IgE to antigen. Eosinophils are granulocytes that contain enzymes capable of damaging the cell walls of parasites. They are found in the blood and mucous membranes, where they perform essential defense functions in the digestive and respiratory systems.
In summary, cells of the immune system can be classified as lymphocytes (T cells, B cells, and NK cells), granulocytes (neutrophils, basophils, etc.), and monocytes/macrophages; these are all types of white blood cells. Each type of cell has different functions, and they work together with other elements such as signaling proteins (cytokines), antibodies, and complement proteins. The cells of the immune system act in a coordinated way to elicit rapid immune responses, both innate and adaptive.
LEAD EXPOSURE / PROBLEM
A global health crisis affecting society's most vulnerable members
Children around the world are exposed to lead daily. Hundreds of millions of children suffer due to lead poisoning. Because of a lack of awareness of the harm and of the sources of lead exposure in the areas where the risk of contamination is highest, children are unknowingly compromised each day. There are countless sources of lead poisoning; a main concern is the informal recycling of lead-acid batteries, from which lead can also make its way into certain consumer products. In communities across the world, particularly in low- and middle-income countries, lead can be found throughout the environment in which children live: in the air they breathe, the water they drink, the food they eat, and even in the soil they walk and crawl on.
The dangers to developing children: lifelong impact on health and productivity
How children are exposed
Lead exposure can happen in innumerable ways. The most common are breathing in dust or fumes containing lead and consuming tainted food and water. This exposure intensifies in proximity to informal and unregulated lead-acid battery recycling sites, a leading cause of lead poisoning. These often open-air facilities exist close to homes and schools, with children sometimes even on-site. In these facilities, smelting sends toxic fumes into the air and furnaces spill lead dust onto the ground. Parents who work at these sites bring contamination home on their clothes, shoes, skin, and hair, exposing their children to the toxic substance.
What we are doing to help
We are educating communities and working with governments and service providers to prevent and address childhood lead exposure.
Scientist: All other things being equal, the intensity of heat increases as the distance from the heat source decreases. Knowing this, most people conclude that the Earth's seasons are caused by the Earth's changing distance from the sun. In other words, winter occurs when the Earth is far from the sun, and summer occurs when the Earth is close to the sun. However, we know that as North America experiences summer, South America experiences winter, even though the difference in the continents' distances to the sun is negligible. Therefore, the Earth's changing distance from the sun does not cause the seasons. In the argument, the two portions in boldface play which of the following roles?
[Option] The first describes a belief to which the scientist subscribes; the second is evidence in support of this belief.
Natural Disasters: Exploring Volcanoes
This week students will be reading Exploring Volcanoes. As they engage with this high-interest topic, students will learn to identify the most important information in a nonfiction text in order to summarize it. Students will learn to connect the skill of summarizing with the strategy of synthesizing by applying the skill and strategy before, during, and after reading. They will also learn to study a diagram and determine what they learn from each part: the title of the diagram, the labels, and the drawing.
Other Important Notes:
These ice cubes are beginning to melt.
- The definition of melt is to turn from a solid to a liquid as a result of exposure to heat.
- An example of melt is what an ice cube does when exposed to the sun.
- An example of melt is what you do to an ice cube when you put it in the microwave.
- An example of melt is the effect of adding salt or sugar to ice to lower the freezing point.
- Melt is defined as to become more emotional or loving, or to cause someone to become more emotional or loving.
- An example of melt is when you see a little puppy dog and your heart gets full.
- An example of melt is what the little puppy dog does to your heart.
- to change from a solid to a liquid state, generally by heat
- to dissolve; disintegrate
- to disappear or cause to disappear gradually: often with away
- to merge, or appear to merge, gradually; blend: the sea melting into the sky at the horizon
- to soften; make or become gentle and tender: a story to melt our hearts
Origin of melt: Middle English melten, from Old English meltan (intransitive) and mieltan (transitive), from Indo-European an unverified form meld-, soft, from base an unverified form mel-, to grind, from source mill
- a melting or being melted
- something melted
- the quantity melted at one operation or during one period
- a dish, esp. a grilled sandwich, containing or covered with a layer of melted cheese: a tuna melt
melt in your mouth
- to require little or no chewing: said of tender foods
verb melt·ed, melt·ing, melts
- To be changed from a solid to a liquid state especially by the application of heat.
- To dissolve: Sugar melts in water.
- To disappear or vanish gradually as if by dissolving: The crowd melted away after the rally.
- To pass or merge imperceptibly into something else: Sea melted into sky along the horizon.
- To become softened in feeling: Our hearts melted at the child's tears.
- Obsolete: To be overcome or crushed, as by grief, dismay, or fear.
- To change (a solid) to a liquid state especially by the application of heat.
- To dissolve: The tide melted our sand castle away.
- To cause to disappear gradually; disperse.
- To cause (units) to blend: "Here individuals of all races are melted into a new race of men" (Michel Guillaume Jean de Crèvecoeur)
- To soften (someone's feelings); make gentle or tender.
- A melted solid; a fused mass.
- The state of being melted.
- a. The act or operation of melting. b. The quantity melted at a single operation or in one period.
- A usually open sandwich topped with melted cheese: a tuna melt.
Origin of melt: Middle English melten, from Old English meltan; see mel-1 in Indo-European roots.
(countable and uncountable, plural melts)
- Molten material, the product of melting.
- The transition of matter from a solid state to a liquid state.
- The springtime snow runoff in mountain regions.
- A melt sandwich.
- A wax-based substance for use in an oil burner as an alternative to mixing oils and water.
- (UK, slang) An idiot: Shut up, you melt!
(third-person singular simple present melts, present participle melting, simple past melted or rarely molt, past participle melted or molten)
- (ergative) To change (or to be changed) from a solid state to a liquid state, usually by gradual heat: I melted butter to make a cake. When the weather is warm, the snowman will disappear; he will melt.
- (intransitive, figuratively) To dissolve, disperse, vanish: His troubles melted away.
- (figuratively) To soften, as by a warming or kindly influence; to relax; to render gentle or susceptible to mild influences; sometimes, in a bad sense, to take away the firmness of; to weaken.
- (intransitive, colloquial) To be very hot and sweat profusely: Help me! I'm melting!
From Middle English melten, from Old English meltan ("to consume by fire, melt, burn up; dissolve, digest") and Old English mieltan ("to melt; digest; refine, purge; exhaust"), from Proto-Germanic *meltaną ("to dissolve, melt") and Proto-Germanic *maltijaną ("to dissolve, melt"), both from Proto-Indo-European *(s)mel- ("to beat, crush, grind"). Cognate with Icelandic melta ("to melt, digest").
Analogies, or relationships, are all around us. Most of the time, teachers only introduce analogies as students get older, when they need to prepare their students because analogies are going to be on the standardized tests. I remember seeing analogies for the first time in 6th grade; they were part of our vocabulary time. I was always good at English, so I just muddled through trying to figure out analogies. When I was being trained in Thinking Maps, I saw how Dr. Hyerle uses the bridge map to explain analogies. I thought to myself, "If only I had known this as a kid!" Young children can understand analogies, and this understanding helps with their critical thinking skills and reading comprehension (they become quicker at understanding relationships and comparisons). We can start building children's critical thinking skills from an early age by helping them to see analogies. This lesson is going to be helpful because later in the year some of your students may be ready to incorporate analogies into their writing, too. When students read analogies in complex text, they are also engaging with figurative language. Students need to grapple with figurative language, determining what the author is trying to convey to them. This represents a new approach in my teaching as a result of the Common Core standards: I now require students to use rigorous, higher-order thinking skills when doing things like reading and creating analogies. Today, as students problem-solve to determine the analogous relationships among words, they will have to think about what the relating factor is and then think of words and the relationship among them that fits with that relating factor. This addresses standards L1.5a and L1.5b. To get a better understanding of what I mean by this, check out the video in the guided practice section. For today's lesson you'll need either the Smartboard lesson Introducing Thinking Maps or the Activboard lesson Introducing Thinking Maps. You'll also need to make a copy of the Bridge Map Student Copy Bridge Map for each of your students. I brought my students to the carpet and had them sit in front of the Smartboard where I had my Bridge Map projected. I said, "Today we are going to learn about something called analogies. An analogy is where we compare two things and explain what they have in common." Then, because I wanted my students to feel really confident, I said, "When I was a kid I didn't learn analogies until I was in 6th grade, and here you are learning them in 1st grade. You guys are hot stuff." I introduced the map by saying, "This is a Bridge map. We use a Bridge map to describe analogies. I am going to model some analogies and we'll work together on a few first. Let's look at this line called the relating factor. The relating factor tells us what our two objects have in common. I am going to write 'is the opposite of' on this line, because the first analogy we will practice will show how two things are the opposite of each other. If you look at the map you will see that there are straight lines and things that look like mountains. The straight lines are bridges in between the mountains because they show how the ideas are connected." I proceeded to show the students how we would use and read the Bridge Map. There is a great deal of instruction here, and, for the sake of not being too wordy, I've created a video for you that shows the analogies we worked on in this section and how to read the map.
You can also see an example of what you can expect of your students in the independent practice section. For further guidance, just watch here: Guided Practice Bridge Map. We worked together on our analogies. The analogies we worked on were:
I explained these in detail in our video; I just wanted to reiterate the three analogies we worked on together in this section. I sent the students back to their seats and passed out the bridge maps to them. I pulled up a blank bridge map on the Smartboard and on the relating factor line I wrote "is the color of". I had my students write this on their relating factor lines. Then I said, "Your job right now is to come up with some analogies where you show the relationship of the object with the color. You will keep going until your whole bridge map is completed. When I see that everyone is done, we will partner up and you will read your analogies to a partner." I circulated around the room, supporting students with segmenting words so they would be able to spell them. Some of my students still say to me, "How do you spell ? ....." I help my students segment words and tap the sounds out on their fingers so they can be independent with their work. I don't want my students seeing me as the supreme master of knowledge; they need to see just how much they can achieve on their own. After my students were done with their maps, I partnered them up. One person was Person 1 and one was Person 2. Person 1 took a turn reading their map first while Person 2 listened; then Person 2 took a turn reading their map as Person 1 listened. The students were engaged, and because I had told them my expectations for speaking and listening, we didn't have any fights over turn-taking. I like my closures to be short and sweet. I just wanted to ask my students some questions that referred back to our goals for the lesson. The questions I asked my students were:
Then I asked my students if anyone would like to share their maps with the class. This was just one more way that I could assess my students' understanding and give them a chance to consolidate their learning. If, after you've done this lesson, you decide you'd like to incorporate more Thinking Maps into your daily lessons, I have some resources for you here. The first is this article. I also have these Examples of Thinking Maps and these Bridge Map Examples for you. I also have a pdf that shows you how to incorporate Thinking Maps into reader response activities: Reader Response With Thinking Maps. Finally, I have a video that shows you how you can save and modify the maps on the Smartboard lesson so you don't have to keep remaking the maps when designing future lessons: How to Save and Modify the Thinking Maps.
Gene editing hasn’t cured disease in a human being yet—but it is getting closer. With the debut of CRISPR a few years ago, advances in the technology have been happening at a breakneck pace. Here are a few of the remarkable things that gene editing did in 2017. In August, researchers at Oregon Health and Science University, led by Shoukhrat Mitalipov, reported the first known attempt at genetically modifying human embryos in the U.S. They injected CRISPR into embryos that carried a genetic mutation responsible for an often fatal hereditary heart condition. CRISPR was able to correct the mutation in about three-quarters of the embryos. Researchers in China had previously edited human embryos with CRISPR, but this was the largest attempt at altering embryos to date. CRISPR and other genome-editing methods work by making double-stranded breaks in DNA, which is especially useful if you want to insert or delete entire genes. But some genetic aberrations happen at an even more minute level of the genome. Sometimes you might only want to edit, delete, or insert a single letter of DNA, not a whole gene. This idea, called “base editing,” got closer to reality in 2017. Researchers modified CRISPR to target a single base letter instead of a gene in human cells. With the new tool, they were able to convert an A-T base pair into a G-C pair. The feat is noteworthy because mistakes in a single base pair account for about half of the 32,000 mutations known to be linked with human diseases. This year, startup eGenesis, a spinout from the lab of Harvard Medical School geneticist George Church, took a step forward in modifying pigs using CRISPR. The company wants to fill a huge need for organ transplants by making pig organs safe enough to be used in humans with heart, liver, kidney, or lung failure. One problem is that pigs harbor innate viruses in their DNA that could be transferred to human recipients and cause disease. Researchers at eGenesis used CRISPR to disable these viruses in pig embryos. When the pigs were born, all 37 of them were healthy and virus-free. A 44-year-old man became the first person to receive a gene-editing treatment designed to directly modify his cells. The therapy uses an older DNA editing approach called zinc finger nucleases, rather than CRISPR. The approach is meant to correct an error in a gene that causes a rare metabolic disorder that slowly destroys the body’s cells. Sangamo Therapeutics, the company that makes the experimental therapy, previously used the same technology to edit the cells of HIV patients outside the body and then infuse them back into patients in an attempt to eliminate the virus. The treatment didn’t have a lasting effect, but Sangamo hopes that directly injecting the gene-editing tools into the body will have a better chance of working.
A photoelectric cell, also known as a photocell, is a type of device that uses variations in light to change electrical properties such as current, voltage, and resistance. According to Encyclopedia.com, photoelectric cells are currently used in a wide range of items, including solar batteries, automatic door openers, and burglar alarms.
The most basic variety of photoelectric cell consists of little more than two electrodes separated by a semiconductor with light-sensitive properties, states NASA. The electrodes of such a photocell are attached to a battery, which provides a steady level of current. This current increases when the semiconductor is exposed to light; the greater the intensity of the light, the greater the increase in current.
A more sophisticated type of photoelectric cell, known as a photovoltaic cell, is capable of generating current using light alone, so no source of external voltage is needed. This allows the direct conversion of light into electrical energy; in other words, photovoltaic cells provide a means of harnessing solar energy. Photovoltaic cells are made up of two dissimilar materials separated by a semiconductor. When light hits the semiconductor, it generates a potential difference between the two sides, caused, according to NASA, by electrons that are jarred loose from the semiconductor. A back-of-envelope sketch of the resulting output power is given below.
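The direct light-to-electricity conversion described above can be estimated very roughly as irradiance times cell area times conversion efficiency. The numbers below are illustrative assumptions, not figures from the article.

```python
# Rough photovoltaic output estimate: P = irradiance * area * efficiency.
irradiance_w_m2 = 1000.0   # bright sunlight is roughly 1 kW per square metre
area_m2 = 0.01             # a 10 cm x 10 cm cell (assumed)
efficiency = 0.20          # typical modern silicon cell (assumed)

power_w = irradiance_w_m2 * area_m2 * efficiency
print(f"Estimated output: {power_w:.1f} W")  # -> 2.0 W
```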
Maser, device that produces and amplifies electromagnetic radiation mainly in the microwave region of the spectrum. The maser operates according to the same basic principle as the laser (the name of which is formed from the acronym for "light amplification by stimulated emission of radiation") and shares many of its characteristics. The first maser was built by the American physicist Charles H. Townes and his colleagues in 1953. The name is an acronym derived from "microwave (or molecular) amplification by stimulated emission of radiation." A maser oscillator requires a source of excited atoms or molecules and a resonator to store their radiation. The excitation must force more atoms or molecules into the upper energy level than in the lower, in order for amplification by stimulated emission to predominate over absorption. For wavelengths of a few millimetres or longer, the resonator can be a metal box whose dimensions are chosen so that only one of its modes of oscillation coincides with the frequency emitted by the atoms; that is, the box is resonant at the particular frequency, much as a kettle drum is resonant at some particular audio frequency. The losses of such a resonator can be made quite small, so that radiation can be stored long enough to stimulate emission from successive atoms as they are excited. Thus, all the atoms are forced to emit in such a way as to augment this stored wave. Output is obtained by allowing some radiation to escape through a small hole in the resonator. The first maser used a beam of ammonia molecules that passed along the axis of a cylindrical cage of metal rods, with alternate rods having positive and negative electric charge. The nonuniform electric field from the rods sorted out the excited from the unexcited molecules, focusing the excited molecules through a small hole into the resonator. The output was less than one microwatt (10⁻⁶ watt) of power, but the wavelength, being determined primarily by the ammonia molecules, was so constant and reproducible that it could be used to control a clock that would gain or lose no more than a second in several hundred years. This maser can also be used as a microwave amplifier. Maser amplifiers have the advantage that they are much quieter than those that use vacuum tubes or transistors; that is, they add very little noise to the signal being amplified. Very weak signals can thus be utilized. The ammonia maser amplifies only a very narrow band of frequencies and is not tunable, however, so that it has largely been superseded by other kinds, such as solid-state ruby masers. Solid-state and traveling-wave masers Amplification of radio waves over a wide band of frequencies can be obtained in several kinds of solid-state masers, most commonly crystals such as ruby at low temperatures. Suitable materials contain ions (atoms with an electrical charge) whose energy levels can be shifted by a magnetic field so as to tune the substance to amplify the desired frequency. If the ions have three or more energy levels suitably spaced, they can be raised to one of the higher levels by absorbing radio waves of the proper frequency. The amplifying crystal may be operated in a resonator that, as in the ammonia maser, stores the wave and so gives it more time to interact with the amplifying medium. A large amplifying bandwidth and easier tunability are obtained with traveling-wave masers.
In these, a rod of a suitable crystal, such as ruby, is positioned inside a wave-guide structure that is designed to cause the wave to travel relatively slowly through the crystal. Solid-state masers have been used to amplify the faint signals returned from such distant targets as satellites in radar and communications. Their sensitivity is especially important for such applications because signals coming from space are usually very weak. Moreover, there is little interfering background noise when a directional antenna is pointed at the sky, so the highest sensitivity can be used. In radio astronomy, masers made possible the measurement of the faint radio waves emitted by the planet Venus, giving the first indication of its temperature. Generation of radio waves by stimulated emission of radiation has been achieved in several gases in addition to ammonia. Hydrogen cyanide molecules have been used to produce a wavelength of 3.34 mm. Like the ammonia maser, this maser uses electric fields to select the excited molecules. One of the best fundamental standards of frequency or time is the atomic hydrogen maser introduced by the American scientists N.F. Ramsey, H.M. Goldenberg, and D. Kleppner in 1960. Its output is a radio wave whose frequency of 1,420,405,751.786 hertz (cycles per second) is reproducible with an accuracy of one part in 30 × 10¹². A clock controlled by such a maser would not get out of step more than one second in 100,000 years; a check of this arithmetic is sketched below. In the hydrogen maser, hydrogen atoms are produced in a discharge and, like the molecules of the ammonia maser, are formed into a beam from which those in excited states are selected and admitted to a resonator. To improve the accuracy, the resonance of each atom is examined over a relatively long time. This is done by using a very large resonator containing a storage bulb. The walls of the bulb are coated so that the atoms can bounce repeatedly against the walls with little disturbance of their frequency. Another maser standard of frequency or time uses vapour of the element rubidium at a low pressure, contained in a transparent cell. When the rubidium is illuminated by suitably filtered light from a rubidium lamp, the atoms are excited to emit a frequency of 6.835 gigahertz (6.835 × 10⁹ hertz). As the cell is enclosed in a cavity resonator with openings for the pumping light, emission of radio waves from these excited atoms is stimulated.
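The drift claim follows directly from the quoted fractional accuracy; a quick check, using only numbers stated in the text:

```python
# A fractional frequency accuracy of one part in 30 x 10^12 bounds how far
# a clock can drift: error = fractional_accuracy * elapsed_time.
fractional_accuracy = 1 / 30e12
seconds_per_year = 365.25 * 24 * 3600
years = 100_000

drift_s = fractional_accuracy * years * seconds_per_year
print(f"Maximum drift over {years:,} years: {drift_s:.2f} s")  # ~0.1 s
```

The accumulated error stays around a tenth of a second, comfortably within the "no more than one second in 100,000 years" figure; the looser "one second in several hundred years" quoted for the early ammonia maser corresponds to a much lower accuracy.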
Titration is a method for estimating the strength of a given substance in analytical chemistry. An acid-base titration is specifically meant to estimate the strength of acids, bases, and related salts. The principle behind acid-base titration is neutralization: when an acid reacts with a base, it forms salt and water. This is called a neutralization reaction because, theoretically, the pH at the end point is 7. In an acid-base titration, a known quantity of acid is used to estimate an unknown amount of a base, or vice versa. A reactant of known concentration is taken in a burette and the test solution in a beaker (flask). The reactant from the burette is added drop by drop while the beaker is swirled to promote the reaction, and this is continued until the end point is reached. The reactant in the burette is called the titrant, while the solution in the flask is called the analyte (or titrand). An acid-base indicator is used to signal the end point of the reaction; these indicators change the color of the solution at the end point. In modern labs, pH meters are often used instead of indicators to detect the end point.
5 Types of Acid Base Titration
Acid-base titration can be done in both aqueous and non-aqueous media.
A. Aqueous acid-base titration
These are ordinary titrations between acids and bases dissolved in water, hence the name. They are widely used in academic labs and for standardization.
1) Strong acid v/s strong base
Here a strong acid reacts with a strong alkali to form salt and water. A reaction of this type is swift and complete, and it proceeds stoichiometrically: each molecule of acid reacts with a corresponding molecule of the base. At the end of the reaction no free acid or base remains, as every molecule has reacted to form the salt, so the end point (equivalence point) is precise and sharp. Examples of strong acids are HCl, H2SO4, HNO3, HBr, and HClO4 (perchloric acid); examples of strong bases are NaOH, KOH, and Ca(OH)2.
Reaction example: HCl + NaOH → NaCl + H2O
Either a known quantity of acid is taken in the burette to react with an unknown quantity of strong base in the flask (beaker), or vice versa. The pH at the end point is neutral, i.e., 7, so indicators that change color around pH 7 are used here.
2) Strong acid v/s weak base
Here a strong acid reacts with a weak base to form salt and water. Since the reaction uses a strong acid, the pH at the end point lies on the acidic side, i.e., below 7.
Reaction example: HCl + NH4OH → NH4Cl + H2O
The salt formed, NH4Cl, is slightly acidic, so indicators that change color at lower pH are employed. During the reaction, a known concentration of strong acid is taken in the burette and allowed to react drop by drop with the base in the beaker.
3) Weak acid v/s strong base
Here the reaction happens between a weak acid and a strong base. The weak acid is taken in the beaker and a known quantity of strong base is dropped in from the burette until the end point.
Reaction example: H2CO3 + 2NaOH → Na2CO3 + 2H2O
The salt formed is slightly basic, so the pH at the end point is above 7, and an indicator that changes color at higher pH is used.
4) Weak acid v/s weak base
Here both the acid and the base are weak, so such titrations are mostly avoided because of their imprecise end points. At the end point the pH is theoretically 7, but it cannot be located as precisely as in the strong acid-strong base case, and an extra amount of titrant may be needed to reach the end point because the reaction is incomplete.
Reaction example: H2CO3 + 2NH4OH → (NH4)2CO3 + 2H2O
The end point is neutral, as the salt is neutral, but because of the excess titrant needed, the measured pH can be pulled to one side.
Back titration (indirect titration)
Even a substance which is not itself acidic or basic can still be estimated. For this, the substance is first converted by means of a suitable reaction and then estimated by the back-titration method.
Example: estimation of aspirin. Aspirin is a weakly acidic drug, so a known volume of sodium hydroxide is added to the aspirin sample in a beaker and allowed to react with heating. Once the reaction is over, the unconsumed sodium hydroxide is estimated by back titration with standard hydrochloric acid.
Step 1) Aspirin + NaOH (excess) → alkaline hydrolysis products + remaining NaOH
Step 2) Remaining NaOH + HCl → NaCl + H2O
A worked version of this calculation is sketched after this section.
B. Non-aqueous titration
These are conventional non-aqueous titration methods. Here, instead of water, glacial acetic acid is used as the solvent for the reactants. They otherwise resemble the acid-base reactions above. Since many drugs are water-insoluble and only slightly acidic or basic, they are analyzed by non-aqueous titration, which is extensively used for the quality control and analysis of drugs.
Reaction example: pseudoephedrine + HCl
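A minimal worked example of the aspirin back titration described above. The volumes and concentrations are invented for illustration, and the 2:1 NaOH-to-aspirin stoichiometry reflects the alkaline hydrolysis of the ester group plus neutralization of the acid group.

```python
# Back titration: NaOH consumed by the sample equals NaOH added minus the
# excess NaOH found by titrating with HCl (NaOH + HCl -> NaCl + H2O, 1:1).
naoh_m, naoh_vol_l = 0.5, 0.050   # 50 mL of 0.5 M NaOH added (assumed)
hcl_m, hcl_vol_l = 0.5, 0.030     # 30 mL of 0.5 M HCl used in back titration (assumed)

mol_naoh_added = naoh_m * naoh_vol_l
mol_naoh_excess = hcl_m * hcl_vol_l
mol_naoh_consumed = mol_naoh_added - mol_naoh_excess

mol_aspirin = mol_naoh_consumed / 2          # 2 mol NaOH per mol aspirin
mass_g = mol_aspirin * 180.16                # molar mass of aspirin, g/mol
print(f"Aspirin in sample: {mass_g:.3f} g")  # -> 0.901 g
```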
Continuous Functions and the Algebra of Continuous Functions
Definition of continuity. A function f(x) defined on an interval around a is continuous at a if three conditions are satisfied: f(a) is defined; the limit of f(x) as x approaches a exists; and that limit equals f(a). Informally, a function is continuous when sufficiently small changes in the input produce arbitrarily small changes in the output; otherwise it is discontinuous, and a is called a point of discontinuity. One-sided versions are defined the same way: f is right-continuous at a if the limit of f(x) as x approaches a from the right equals f(a). A function is continuous on an open interval (a, b) if it is continuous at every point of (a, b), and a continuous function with a continuous inverse is called a homeomorphism. Simply drawing the graph can suggest whether a function is continuous, but not every function is easy to draw, and sometimes the definition itself must be used.
Catalogue of continuous functions. All elementary functions (polynomials, rational functions, power functions, exponential functions, and logarithmic functions) are continuous at every point where they are defined. For example, x² is continuous everywhere, while 1/x is continuous everywhere except at x = 0, because division by 0 is an excluded operation; x = 0 is a point of discontinuity.
Algebra of continuous functions. Proving continuity directly from the definition can be tedious, so one uses closure properties instead. If f and g are continuous at a, then so are the pointwise sum f + g, any scalar multiple cf, and the product fg; the quotient f/g is continuous at a provided g(a) ≠ 0. Composition also preserves continuity: if g is continuous at a and f is continuous at g(a), then f(g(x)) is continuous at a. One consequence is that the set of continuous functions on an interval is closed under addition and scalar multiplication, and therefore forms a subspace of the space of all functions on that interval; closure under multiplication makes it an algebra.
These closure properties lead into functional analysis. C(T) denotes the Banach algebra, with sup norm, of bounded continuous complex-valued functions on a space T, and the bounded uniformly continuous functions on a metric space form a similar algebra. A semi-simple commutative Banach algebra can be realized as an algebra of continuous functions on its space of maximal ideals, and the continuous functional calculus of operator theory and C*-algebra theory allows continuous functions to be applied to normal elements of a C*-algebra. In measure theory, the open sets generate the Borel σ-algebra; since preimages of open sets under a continuous function are open, continuous functions are Borel measurable.
A related classical result is the Fundamental Theorem of Algebra (Gauss): every non-constant polynomial with complex coefficients has at least one complex root.
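The two central statements above, the pointwise definition and the closure properties, can be written compactly; a minimal sketch in LaTeX, restating only facts already given in the text:

```latex
% Epsilon-delta definition of continuity at a point, and the algebra of
% continuous functions; both restate results quoted in the passage above.
\[
  f \text{ is continuous at } a
  \iff
  \forall \varepsilon > 0 \;\exists \delta > 0 :\;
  |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon
\]
\[
  f, g \text{ continuous at } a
  \;\Longrightarrow\;
  f + g,\ cf,\ fg \text{ continuous at } a,
  \qquad
  \tfrac{f}{g} \text{ continuous at } a \text{ when } g(a) \neq 0
\]
```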
RESOURCES + MATERIALS
- Markers or crayons
- Pictures of flowers and insects
- Ask students to draw the same number of flowers as their age. The flowers should be fairly large.
- Students can include some insects in their picture.
- To include a butterfly, draw a circle for the head, an oval shape for the thorax, and a longer, skinnier oval for the abdomen. Next add the eyes and antennae. Butterflies have two wings on each side of their bodies. Draw a large circular shape on each side of the thorax. Under that shape draw a large U-shape that touches the bottom of the body and curves around to the edge of the first wing shape. The wings need to be symmetrical, another vocabulary term that students can be introduced to.
- Color the picture with crayons. If time allows, students can add a sun, clouds, or a rainbow. Finally, for extra pizzazz, trace the drawings with a black crayon.
- Print name legibly on the front of the paper (at least an inch from the edge so the name will not be cut off).
THE BALKAN BORGIAS

One of the forgotten political dynasties of Roman history is the so-called Sapaioi dynasty, installed by Rome as ‘Kings of Thrace’ in an attempt to legitimize her rule in the region. Following the Roman conquest, which culminated in the campaigns of 29/28 BC by M. Crassus against the Bastarnae and the Scordisci tribes (Dio Cass. 51. 26-27), a Thracian puppet government, drawn from members of the Odrysae tribe, who had collaborated with Rome (loc cit), was installed to preside over the Romanization of Thrace. This Thraco-Macedonian tribe had ruled large parts of Thrace until the arrival of the Celts in the 4th/3rd c. BC, and after the Roman conquest of Thrace at the end of the 1st c. BC, members of this dynasty were chosen by Rome to rule Thrace (under Roman patronage) until direct imperial rule was finally established in 46 AD (on the Odrysae Sapaioi dynasty see also the ‘Behind the Golden Mask’, ‘The Scordisci Wars’ and ‘Artacoi’ articles).

From the very beginning it was clear that these Odrysae ‘kings’ had no popular support, and were despised by the population of Thrace as traitors. The first ‘Thracian’ king of whom we have extensive numismatic testimony is Rhoemetalces (Ῥοιμητάλκης) I, who came to the throne in 12 BC. Rhoemetalces was a loyal ally of the Roman emperor Augustus, and had initially been the guardian of King Rhescuporis I (his nephew). In 13 BC the local population, led by a priest-chieftain, Vologaeses of the Bessi tribe, had rebelled against this new Roman-backed dynasty. The young Rhescuporis was executed by the rebels, and the new king Rhoemetalces I’s reign began rather shamefully when his Thracian army deserted to the rebels, forcing him to flee Thrace – ‘He (Vologaeses) conquered and killed Rhascyporis, the son of Cotys, and afterwards, thanks to his reputation for supernatural power, he stripped Rhoemetalces, the victim’s uncle, of his forces without a battle and compelled him to take flight. In pursuit of him he invaded the Chersonese, where he wrought great havoc’ (Dio Cass. 54. 34). Rhoemetalces I was finally restored to his throne when a Roman army led by the governor of Pamphylia, Lucius Piso, arrived and brutally put down the rebellion (loc cit).

Rhoemetalces I ruled Thrace until his death in 12 AD. Augustus then divided his realm into two separate kingdoms, one half for his son Cotys to rule, and the other half for Rhoemetalces’ remaining brother Rhescuporis II. Tacitus states that Cotys received the cultivated parts, most of the towns and cities of Thrace, while Rhescuporis received the wild and savage portion: ‘That entire country had been in the possession of Rhoemetalces, after whose death Augustus assigned half to the king’s brother Rhescuporis, half to his son Cotys. In this division the cultivated lands, the towns, and what bordered on Greek territories, fell to Cotys; the wild and barbarous portion, with enemies on its frontier, to Rhescuporis’ (Tacitus, Annals 2:64).

Rhescuporis was apparently unsatisfied with his part of this deal, and set out to annex Cotys’ territory. Inviting his nephew to a banquet to falsely ratify a treaty between them, he arrested and imprisoned Cotys, seizing his part of the kingdom. Cotys died while incarcerated in 18 AD, allegedly by suicide. His wife, Tryphaena, and their children subsequently fled Thrace to Cyzicus to escape Rhescuporis.
Cotys had four children by Tryphaena: Rhoemetalces II, who later ruled Thrace with his mother Tryphaena (see below); another son, Cotys IX, who became Roman Client King of Lesser Armenia from 38 AD to circa 47 AD; and two daughters, Gepaepyris, who married the Roman Client King Tiberius Julius Aspurgus of the Bosporan Kingdom, and Pythodoris II (or Pythodorida II). In 38 AD, after the death of Rhoemetalces II, Tryphaena abdicated the throne at the request of the Roman Emperor Caligula. Pythodoris II married her cousin Rhoemetalces III, and they ruled Thrace as Roman Client Rulers from 38 AD until 46 AD (see below).

Little is known of the life of Gepaepyris. She married the Roman Client King of the Bosporan Kingdom, Tiberius Julius Aspurgus, who was of Greek and Iranian ancestry. The Bosporan Kingdom was the longest-surviving Roman Client Kingdom known. Aspurgus was the son of Bosporan Queen Dynamis from her first marriage to General and Bosporan King Asander. Gepaepyris bore Aspurgus two sons: Tiberius Julius Mithridates, named in honor of Mithridates VI of Pontus, and Tiberius Julius Cotys I, named in honor of his late maternal grandfather Cotys VIII. When Aspurgus died in 38 AD, Gepaepyris ruled the Bosporan Kingdom with their first son Mithridates until 45 AD. Later, her other son (confusingly called Cotys I) succeeded her and Mithridates.

After the murder of Cotys by his uncle Rhescuporis, the Roman Emperor Tiberius opened an investigation into Cotys’ death, putting Rhescuporis on trial in the Roman Senate. Tiberius invited Cotys’ widow Tryphaena to testify at the trial, during which she accused the defendant of murdering her husband. Rhescuporis was found guilty, and Tiberius sent him to Alexandria. However, en route, Rhescuporis ‘tried to escape’ and was killed by the Romans: ‘Rhescuporis was removed to Alexandria, and there attempting or falsely charged with attempting escape, was put to death’ (Tac. Ann. Book 2:67). His son, who would later rule Thrace as Rhoemetalces III (see below), was spared by Tiberius and allowed to return to Thrace. In the meantime Tiberius returned the whole Thracian Kingdom to Tryphaena and appointed Rhoemetalces II, her eldest child with Cotys, as co-ruler.

Rhoemetalces II proved to be as unpopular as his predecessors had been. Shortly after he took power, he was besieged in his capital at Philippopolis (Plovdiv) by the local population, intent on executing him as a traitor, and had to be rescued by a Roman legion who arrived at the last minute and massacred the ‘rebels’. The Thracian king is referred to by the Romans as ‘a loyal friend and ally’, and took part in the Roman campaign of 26 AD led by Sabinus against the Celtic Artacoi tribe in the Haemus (Balkan) mountains. Rhoemetalces proved himself an inept military leader, and after leading a campaign of murder and destruction against the local population, his Thracian forces were massacred during a surprise attack by the barbarians (see ‘Artacoi’ article – History section).

THE LAST ‘THRACIAN KING’

Upon Rhoemetalces II’s death in 38 AD, Rhoemetalces III (the son of Rhescuporis II, who had been murdered by the Romans) ‘ruled’ in association with his cousin-wife Pythodoris II. The last Thracian king shared the fate of many of his predecessors, and was himself murdered in 46 AD. It is unclear whether he died at the hands of insurgents, or on the orders of his wife.
Following his death, another major uprising by the local population was brutally put down by the Romans, and Thrace subsequently became a province of the empire.
Synchronized methods are methods that are used to control access to an object. A thread only executes a synchronized method after it has acquired the lock for the method's object or class. Synchronized statements are similar to synchronized methods. A synchronized statement can only be executed after a thread has acquired the lock for the object or class referenced in the synchronized statement.
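A minimal sketch in Java illustrating both forms (the Counter class and its field are hypothetical examples, not part of the original text):

  public class Counter {
      private int count = 0;

      // Synchronized method: a thread must acquire the lock on this
      // Counter instance before the method body runs.
      public synchronized void increment() {
          count++;
      }

      // Synchronized statement: acquires the same instance lock, but
      // holds it only while the statements inside the block execute.
      public int read() {
          synchronized (this) {
              return count;
          }
      }
  }

Because both forms lock the same object, increment() and read() can never interleave on the same Counter instance. A synchronized statement can also name a different object, or a class literal such as synchronized (Counter.class), when class-level locking is wanted; a static synchronized method likewise locks the Class object rather than an instance.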
Joint attention is the ability to shift your attention between an object or event and your communication partner. For example, if a little girl notices an ice cream truck coming down the street, she may look at it, then turn to her father with a hopeful smile before turning back to stare at the ice cream truck. Or, a little boy may be spinning an empty water bottle on the floor, enjoying the movement and shifting colors as the bottle spins. He may look up at his mother and point to the bottle, sharing his enjoyment with her. Many autistic children do not develop joint attention skills on their own. Their parents often describe them as being “in their own world,” absorbed in playing with whatever has caught their attention, and seeming oblivious to everything else around them. But if kids are only focusing on one thing at a time (typically an object rather than a person), they’re missing out on so much! They aren’t learning to communicate with others about shared experiences. They aren’t learning the new vocabulary and concepts that they could if they were paying attention to others during a shared experience. They aren’t learning to read facial expressions and understand gestures. They aren’t learning about other people’s emotions. That’s why it’s so crucial to help autistic kids develop their joint attention skills. Studies show that young children’s joint attention skills greatly impact their future language skills. How can you start helping autistic kids learn to shift their attention between an object/event and a person? Here are some easy, fun ways to work on early joint attention skills: - Do goofy, unexpected actions with toys. For example, if they’re playing with matchbox cars, take one that they’re not using, and pop it on your head. Tilt your head back as you dramatically pretend that you’re about to sneeze: “Aaah, aaah, aaah…” then suddenly pop your head down and let the car fall off into your lap as you “sneeze” with a loud “CHOO!” Then hand the car to the child. - Pretend to eat some of the child’s food. During snack or meal time, hold out your hand to ask for a little bit of food, then pretend to eat the food. Be very loud and exaggerated: “YUM YUM YUM YUM!!” while smacking your lips and making fake chewing sounds. Then hand the food back, intact, to the child. - Introduce toys that the child needs help with. Bubbles and balloons work great for this. Blow up a balloon, pinch it tight, and say, “Ready, set…GO!” and release the balloon. It will fly quickly around the room, making noise as the air is expelled. The child will typically not be able to blow up the balloon. Help them learn to give it to you. As you blow up the balloon again, blow slowly, waggle your eyebrows, and take breaks to hold out the balloon and admire it (“Wow! Look! It’s big!”) before continuing to blow it up. (Note: please make sure the child does not have a latex allergy before trying balloons.) Remember, your goal is to show the child that YOU are just as fun and engaging as his toys! The more fun you are, the more readily the child will switch focus, back and forth, between the object and you. Success! Want to learn more? There are many books that are good resources for learning more about how to develop joint attention skills. Two of my favorites are An Early Start for Your Child with Autism (Rogers, Dawson, & Vismara) and More Than Words: Helping Parents Promote Communication and Social Skills in Children with Autism Spectrum Disorder (Sussman).
What's used to steer Jefferson Lab's electron beam? Although it may not look like it at first, the Jefferson Lab accelerator really works much like your TV set. Electrons are accelerated to higher energies by precision-tuned ever-changing electrical fields (the "RF fields"). To steer them, one uses electromagnets to push them to the side (or vertically). The basic type is the "dipole" with a "north" and "south" pole. They act much like a prism does with light. A prism will bend white light and separate it into its different colors. The dipole magnet will "bend" the electron beam and separate the electrons of differing energy. As in life, nothing is that simple, so one also has to use "quadrupole" (two sets of north and south poles) and "sextupole" (six poles) magnets to keep the beam focused just as lenses are used to focus light. The electron beam begins its first orbit at the injector and proceeds through the underground race track-shaped accelerator tunnel at nearly the speed of light. The accelerator uses superconducting radio-frequency technology to drive electrons to higher and higher energies. A refrigeration plant, called the Central Helium Liquefier or CHL, provides liquid helium for ultra-low-temperature (-456°F) superconducting operation. After the electrons are accelerated around Jefferson Lab's accelerator, they are steered by the electromagnets in the "beam switchyard" into the three Halls. The electron beam can be split for use by three simultaneous experiments in the end stations, which are circular, domed chambers with diameters ranging from 98 to 172 feet. Special equipment in each hall records the interactions between incoming electrons and the target materials. A continuous electron beam is necessary to accumulate data at an efficient rate yet ensure that each interaction is separate enough to be fully observed. Carl Zorn, Detector Scientist (Other answers by Carl Zorn)
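As an addendum to the answer above, the prism analogy can be made quantitative with the textbook bending relation for a charged particle in a dipole field (a general formula, not a Jefferson Lab machine specification). Equating the magnetic force on the electron to the centripetal force gives

  q v B = \frac{\gamma m v^{2}}{r} \quad\Longrightarrow\quad r = \frac{p}{q B}

so in a fixed dipole field B, electrons of higher momentum p follow a larger bending radius r. That is exactly how a single dipole both steers the beam and separates electrons of differing energy.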
Classes and objects

12.1 User-defined compound types

Having used some of Python's built-in types, we are ready to create a user-defined type: the Point.

Consider the concept of a mathematical point. In two dimensions, a point is two numbers (coordinates) that are treated collectively as a single object. In mathematical notation, points are often written in parentheses with a comma separating the coordinates. For example, (0, 0) represents the origin, and (x, y) represents the point x units to the right and y units up from the origin.

A natural way to represent a point in Python is with two floating-point values. The question, then, is how to group these two values into a compound object. The quick and dirty solution is to use a list or tuple, and for some applications that might be the best choice.

An alternative is to define a new user-defined compound type, also called a class. This approach involves a bit more effort, but it has advantages that will be apparent soon.

A class definition looks like this:

  class Point:
    pass

Class definitions can appear anywhere in a program, but they are usually near the beginning (after the import statements). The syntax rules for a class definition are the same as for other compound statements (see Section 4.4).

This definition creates a new class called Point. The pass statement has no effect; it is only necessary because a compound statement must have something in its body.

By creating the Point class, we created a new type, also called Point. The members of this type are called instances of the type or objects. Creating a new instance is called instantiation. To instantiate a Point object, we call a function named (you guessed it) Point:

  blank = Point()

The variable blank is assigned a reference to a new Point object.

12.2 Attributes

We can add new data to an instance using dot notation:

  >>> blank.x = 3.0
  >>> blank.y = 4.0

This syntax is similar to the syntax for selecting a variable from a module, such as math.pi or string.uppercase. In this case, though, we are selecting a data item from an instance. These named items are called attributes. After these assignments, the variable blank refers to a Point object that contains two attributes, and each attribute refers to a floating-point number.

We can read the value of an attribute using the same syntax:

  >>> print blank.y
  4.0
  >>> x = blank.x
  >>> print x
  3.0

The expression blank.x means, "Go to the object blank refers to and get the value of x." In this case, we assign that value to a variable named x. There is no conflict between the variable x and the attribute x. The purpose of dot notation is to identify which variable you are referring to unambiguously.

You can use dot notation as part of any expression, so the following statements are legal:

  print '(' + str(blank.x) + ', ' + str(blank.y) + ')'
  distanceSquared = blank.x * blank.x + blank.y * blank.y

The first line outputs (3.0, 4.0); the second line calculates the value 25.0.

You might be tempted to print the value of blank itself:

  >>> print blank
  <__main__.Point instance at 80f8e70>

The result indicates that blank is an instance of the Point class and it was defined in __main__. 80f8e70 is the unique identifier for this object, written in hexadecimal (base 16). This is probably not the most informative way to display a Point object. You will see how to change it shortly.

As an exercise, create and print a Point object, and then use id to print the object's unique identifier. Translate the hexadecimal form into decimal and confirm that they match.
12.3 Instances as arguments

You can pass an instance as an argument in the usual way. For example:

  def printPoint(p):
    print '(' + str(p.x) + ', ' + str(p.y) + ')'

printPoint takes a point as an argument and displays it in the standard format. If you call printPoint(blank), the output is (3.0, 4.0).

As an exercise, rewrite the distance function from Section 5.2 so that it takes two Points as arguments instead of four numbers.

12.4 Sameness

The meaning of the word "same" seems perfectly clear until you give it some thought, and then you realize there is more to it than you expected. For example, if you say, "Chris and I have the same car," you mean that his car and yours are the same make and model, but that they are two different cars. If you say, "Chris and I have the same mother," you mean that his mother and yours are the same person.

So the idea of "sameness" is different depending on the context. When you talk about objects, there is a similar ambiguity. For example, if two Points are the same, does that mean they contain the same data (coordinates) or that they are actually the same object?

To find out if two references refer to the same object, use the is operator. For example:

  >>> p1 = Point()
  >>> p1.x = 3
  >>> p1.y = 4
  >>> p2 = Point()
  >>> p2.x = 3
  >>> p2.y = 4
  >>> p1 is p2
  False

Even though p1 and p2 contain the same coordinates, they are not the same object. If we assign p1 to p2, then the two variables are aliases of the same object:

  >>> p2 = p1
  >>> p1 is p2
  True

This type of equality is called shallow equality because it compares only the references, not the contents of the objects. To compare the contents of the objects, which is called deep equality, we can write a function:

  def samePoint(p1, p2):
    return (p1.x == p2.x) and (p1.y == p2.y)

Now if we create two different objects that contain the same data, we can use samePoint to find out if they represent the same point.

  >>> p1 = Point()
  >>> p1.x = 3
  >>> p1.y = 4
  >>> p2 = Point()
  >>> p2.x = 3
  >>> p2.y = 4
  >>> samePoint(p1, p2)
  True

12.5 Rectangles

Let's say that we want a class to represent a rectangle. The question is, what information do we have to provide in order to specify a rectangle? To keep things simple, assume that the rectangle is oriented either vertically or horizontally, never at an angle.

There are a few possibilities: we could specify the center of the rectangle (two coordinates) and its size (width and height); or we could specify one of the corners and the size; or we could specify two opposing corners. A conventional choice is to specify the upper-left corner of the rectangle and the size.

Again, we'll define a new class:

  class Rectangle:
    pass

And instantiate it:

  box = Rectangle()
  box.width = 100.0
  box.height = 200.0

This code creates a new Rectangle object with two floating-point attributes. To specify the upper-left corner, we can embed an object within an object!

  box.corner = Point()
  box.corner.x = 0.0
  box.corner.y = 0.0

The dot operator composes. The expression box.corner.x means, "Go to the object box refers to and select the attribute named corner; then go to that object and select the attribute named x." box now refers to a Rectangle object whose corner attribute refers to an embedded Point object.

12.6 Instances as return values

Functions can return instances. For example, findCenter takes a Rectangle as an argument and returns a Point that contains the coordinates of the center of the Rectangle:

  def findCenter(box):
    p = Point()
    p.x = box.corner.x + box.width/2.0
    p.y = box.corner.y - box.height/2.0
    return p

To call this function, pass box as an argument and assign the result to a variable:

  >>> center = findCenter(box)
  >>> printPoint(center)
  (50.0, -100.0)

12.7 Objects are mutable

We can change the state of an object by making an assignment to one of its attributes. For example, to change the size of a rectangle without changing its position, we could modify the values of width and height:

  box.width = box.width + 50
  box.height = box.height + 100

We could encapsulate this code in a function and generalize it to grow the rectangle by any amount:

  def growRect(box, dwidth, dheight):
    box.width = box.width + dwidth
    box.height = box.height + dheight

The variables dwidth and dheight indicate how much the rectangle should grow in each direction.
Invoking this function has the effect of modifying the Rectangle that is passed as an argument. For example, we could create a new Rectangle named bob and pass it to growRect:

  >>> bob = Rectangle()
  >>> bob.width = 100.0
  >>> bob.height = 200.0
  >>> growRect(bob, 50, 100)

While growRect is running, the parameter box is an alias for bob. Any changes made to box also affect bob.

As an exercise, write a function named moveRect that takes a Rectangle and two parameters named dx and dy. It should change the location of the rectangle by adding dx to the x coordinate of corner and adding dy to the y coordinate of corner.

12.8 Copying

Aliasing can make a program difficult to read because changes made in one place might have unexpected effects in another place. It is hard to keep track of all the variables that might refer to a given object.

Copying an object is often an alternative to aliasing. The copy module contains a function called copy that can duplicate any object:

  >>> import copy
  >>> p1 = Point()
  >>> p1.x = 3
  >>> p1.y = 4
  >>> p2 = copy.copy(p1)
  >>> p1 is p2
  False
  >>> samePoint(p1, p2)
  True

Once we import the copy module, we can use the copy function to make a new Point. p1 and p2 are not the same point, but they contain the same data.

To copy a simple object like a Point, which doesn't contain any embedded objects, copy is sufficient. This is called shallow copying.

For something like a Rectangle, which contains a reference to a Point, copy doesn't do quite the right thing. It copies the reference to the Point object, so both the old Rectangle and the new one refer to a single Point. If we create a box, b1, in the usual way and then make a copy, b2, using copy, the two Rectangles end up sharing a single embedded corner Point.

This is almost certainly not what we want. In this case, invoking growRect on one of the Rectangles would not affect the other, but invoking moveRect on either would affect both! This behavior is confusing and error-prone.

Fortunately, the copy module contains a function named deepcopy that copies not only the object but also any embedded objects. You will not be surprised to learn that this operation is called a deep copy.

  >>> b2 = copy.deepcopy(b1)

Now b1 and b2 are completely separate objects.

We can use deepcopy to rewrite growRect so that instead of modifying an existing Rectangle, it creates a new Rectangle that has the same location as the old one but new dimensions:

  def growRect(box, dwidth, dheight):
    import copy
    newBox = copy.deepcopy(box)
    newBox.width = newBox.width + dwidth
    newBox.height = newBox.height + dheight
    return newBox

As an exercise, rewrite moveRect so that it creates and returns a new Rectangle instead of modifying the old one.
Sunday, March 6, 2016 | 2 a.m.

The average person spends one-third of his or her life sleeping, but an estimated 50 million to 70 million adults in the United States suffer from a sleep or wakefulness disorder. In September, the Centers for Disease Control and Prevention declared insufficient sleep, in addition to diseases and other health concerns linked to a lack of sleep, a public health problem.

Sleep disorders can be caused by neurological disease or environmental factors. Regardless of the cause, the fact remains that most of us just aren't getting enough sleep.

Why do we need sleep?

Some mystery remains about the exact purposes and mechanics of sleep, but it's a critical time for the body and brain to restore, process and strengthen. While scientists may not be able to pinpoint what happens while we're sleeping and why we need sleep, they are able to identify the consequences of not getting enough sleep:

• Anxiety, irritability, moodiness, impulsivity
• Forgetfulness, memory and cognitive impairment
• Inability to focus, decreased alertness
• Hypertension (high blood pressure)
• Increased risk of heart attack
• Increased risk of stroke
• Depression and mood disorders

Common sleep disorders

Insomnia

Insomnia is the most common sleep disorder and can be acute or chronic. It causes wakefulness, difficulty falling asleep and difficulty going back to sleep. Insomnia can be caused by a medical or psychiatric problem, excessive stress or environmental influences. In some cases, however, it can occur for no obvious reason at all.

Narcolepsy

Narcolepsy is a condition in which the sufferer is unable to control when he or she falls asleep. As a result, the person experiences excessive daytime sleepiness. Narcolepsy can be paired with other symptoms such as sleep paralysis and hallucinations. Though a relatively uncommon disorder — it affects about 200,000 people in the United States — it can be damaging to a person's quality of life.

Restless leg syndrome

Restless leg syndrome is a neurological disorder that makes people feel an overwhelming urge to move their legs while resting. There are many reasons someone might have restless leg syndrome. Pregnancy may spark bouts of restless leg syndrome, and some medications may cause it as well.

Sleep apnea

Obstructive sleep apnea occurs when a person stops breathing for several seconds while sleeping. This is caused by a blockage in the upper respiratory system. During sleep, soft tissues in the throat relax and collapse, blocking the airway. This can result in snoring and in the brain partially waking up to get more oxygen, which can lead to insufficient and/or reduced quality of sleep.

How disorders are diagnosed

Some sleep disorders, such as insomnia, can be self-diagnosed or diagnosed by a doctor through a series of questions. Sleep apnea, restless leg syndrome and narcolepsy all need to be diagnosed by a medical professional and may require more extensive testing.

Technology's impact on sleep

Darkness sends important signals to the brain that it's time to start winding down for sleep. Bright light sends signals to the brain telling it to wake up. Because of increased use of personal technology such as cellphones, laptops and TVs, those signals are getting interrupted for many people. The bright glow of a screen can trick your body into thinking it's supposed to be awake, even if you're trying to fall asleep. People who use devices late into the night often have a harder time falling asleep and staying asleep.
Tips for getting better sleep

• Sleep in an environment that's dark, quiet and cool. Light and noise can keep you awake or wake you up throughout the night. The temperature of the room also can affect your quality of sleep. Many people sleep best in a room that's kept between 65 degrees and 68 degrees.
• Go to bed at the same time every night and wake up at the same time every morning. Sleeping in on the weekends may feel good, but it can throw off your body's internal clock. If you do sleep in, keep it to an hour at most.
• Don't eat large meals close to bedtime.
• Stop using electronic devices at least two hours before bedtime.
• Avoid consuming caffeine and alcohol close to bedtime.
• Avoid nicotine entirely. It's a stimulant.
A statistical analysis for comparing three or more data sets depends on the type of data collected. Each statistical test has certain assumptions that must be met for the test to work appropriately. Also, which aspects of the data you compare will affect the choice of test. For example, if each of the three data sets has two or more measurements, you will need a different type of statistical test.

One of the more common statistical tests for three or more data sets is the Analysis of Variance, or ANOVA. To use this test, the data must meet certain criteria. First, the data should be numerical. Ordinal data -- such as 5-point scale ratings, called Likert scales -- are not numerical data, and the ANOVA will not yield accurate results if used with ordinal data. Second, the data should be normally distributed in a bell curve. If these assumptions are met, the ANOVA test can be used to analyze the variance of a single dependent variable across three or more samples or data sets. Remember, the dependent variable is the factor you are measuring in the study.

In cases where the assumptions for ANOVA are met but you want to measure more than one dependent variable, you will need the Multivariate Analysis of Variance, or MANOVA. The dependent variables are the factors you are measuring and want to examine. The independent variable or variables affect the dependent variable. For example, assume you were measuring the effects of strenuous exercise on blood pressure, weight loss and heart rate. The independent variable is the exercise, and the dependent variables are blood pressure, weight loss and heart rate. In this situation, you would use MANOVA. This statistical test is very complicated to calculate and will require the use of a computer and special software.

Non-Parametric Inferential Statistics

There are many different non-parametric tests, but generally non-parametric statistics are used when the data are ordinal and/or not normally distributed. Non-parametric tests include the sign test, chi-square and the median test. These tests are often employed when you are analyzing survey data where the respondents had to rate different statements; for example, a scale of "strongly disagree, disagree, agree, strongly agree" would qualify as ordinal data. These tests are often easy to calculate by hand, although a spreadsheet helps.

In addition to inferential tests, you can also use simple descriptive statistics to provide a quick and simple look at the data sets. You can report the averages, standard deviations and percentages for each of the three data sets. Descriptive statistics help provide a quick look at the data but cannot be used to draw conclusions. For example, if one of the three data sets has a variable that is 20 percent higher than the other two data sets, you cannot say that the difference is "statistically significant" without using some inferential statistical test, such as ANOVA, MANOVA or a non-parametric test.
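As a hedged illustration of how such tests are run in practice (the three samples below are made-up numbers, and SciPy is one common tool choice rather than anything the article prescribes), a one-way ANOVA and its non-parametric counterpart can be computed like this in Python:

  from scipy import stats

  # Three hypothetical numerical data sets, e.g., a measurement taken
  # from three independent groups. ANOVA assumes the data are numeric
  # and roughly normally distributed within each group.
  group_a = [82, 75, 90, 68, 77, 84]
  group_b = [88, 93, 79, 85, 91, 90]
  group_c = [70, 65, 74, 72, 69, 71]

  # One-way ANOVA: tests whether at least one group mean differs.
  f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)
  print("ANOVA: F = %.2f, p = %.4f" % (f_stat, p_anova))

  # Kruskal-Wallis test: a non-parametric alternative for ordinal or
  # non-normally distributed data across three or more samples.
  h_stat, p_kw = stats.kruskal(group_a, group_b, group_c)
  print("Kruskal-Wallis: H = %.2f, p = %.4f" % (h_stat, p_kw))

A small p-value (commonly below 0.05) is what would let you call the difference "statistically significant." MANOVA, with several dependent variables, needs a fuller statistics package such as statsmodels or R.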
Math Standards and Coherence Maps The Utah State Board of Education (USBE) adopted the K-12 Utah Core Standards for Mathematics in January 2016. The Core Standards are the mathematical content that should be taught in each grade level. The Utah Core Guides provide a description of the Core Standards, including concepts and skills to master, critical background knowledge and academic vocabulary. The coherence maps show how the Core Standards build on each other and indicate necessary prerequisites for each individual standard. They can be helpful as you plan scaffolding for math instruction and assist in identifying required background knowledge during RtI.
A turtle begins its life as an egg and, once hatched, is called a hatchling. Turtles take care of themselves from birth as they grow from hatchlings to juveniles and then into adults. Adult turtles mate, and the females then migrate to beaches to lay their eggs, which are covered in sand until they hatch. Teach elementary schoolchildren about the life cycle of a turtle through multi-sensory activities and books.

Cut and Paste

Cut and paste activities give students the opportunity to visually see the life cycle of a turtle and manually arrange the life cycle in the correct order. Cut and paste activities develop fine motor and visual processing skills, and help tactile and visual learners comprehend new material. Print out a cut and paste activity from worksheets available on TimeForKids.com or make your own.

Graphic organizers are comprehensive learning tools for students in all grades. Break students into groups and have them fill in a graphic organizer that shows the turtle's life cycle. Younger students can draw pictures to portray the life cycle. Older students can draw and add captions. You can make your own graphic organizer by drawing seven circles or boxes and numbering them. The students can fill in each stage: hatching, crawling to water, migration, crawling to land, finding a spot to nest, digging a hole and laying eggs. You could also use a pre-made graphic organizer, such as the one also available on TimeForKids.com.

Books are valuable learning tools for visual and auditory learners. Use books, such as "One Tiny Turtle" by Nicola Davies and Jane Chapman or "Sea Turtles" by Gail Gibbons, to pique students' interest and introduce them to the life cycle of a turtle. Take the students on a picture walk through the book first, by showing them the pictures and having them describe what they see. Next, read the story to them and see if they accurately described what was happening in each picture.

Use sentence strips that describe each stage of the turtle's life cycle to involve children in a hands-on sequencing activity. Make the sentence strips large and colorful, and have students work in groups to try to put the steps in order. Children can lay the strips out on the rug or use magnets to attach them to a magnetic board. This involves physical learners, who learn through movement and by doing.

Children learn through questioning and observation. Create journals with your class by stapling together lined paper with a construction paper cover. Have children decorate the journals with pictures of turtles. As you learn about each part of the life cycle of the turtle, have children record new facts they have learned, write any questions and make drawings of what that stage looks like. Include leading questions for children to respond to if you want to assess their learning, such as "Where does a turtle make her nest?" and "What does a hatchling do when it is born?"
Alzheimer’s disease is a progressive condition that presents with degradation of memory and difficulty with communication, language, orientation in time and recognition of familiar people. It is an age-related disease that affects the brain in various ways, and it typically worsens over time, resulting in increasing brain impairment and memory loss. As Alzheimer’s disease worsens, the person may withdraw and become behaviorally isolated from family and society.

Alzheimer’s disrupts thinking and memory and ultimately kills brain cells. According to the Alzheimer’s Association, Alzheimer’s disease affects about 5 million people in the United States.

What is Alzheimer’s Disease?

Alzheimer’s disease, also known simply as Alzheimer’s, is a chronic neurodegenerative disease. It usually starts slowly and develops with time, and it has been observed to be the cause in most cases of dementia. The commonest symptom to develop early is short-term memory loss: the inability to remember, or difficulty remembering, recent events and other details. Alzheimer’s affects specific parts of the brain first, which can interfere with a person’s cognition and behavior. Symptoms such as problems with language, mood swings, disorientation, behavioral issues and neglect of self-care may develop, appear, or worsen as the disease advances. With progression, Alzheimer’s affects greater areas of the brain, producing more symptoms.

What Causes Alzheimer’s?

While Alzheimer’s clearly affects the brain, its cause is poorly understood. About 70% of the risk is believed to be genetic in nature, involving many genes. Other risk factors currently known are depression and head injuries. The disease is marked by the development of tangles and plaques in the brain. Its initial symptoms are often mistaken for the effects of normal aging; however, these symptoms worsen later on and reveal the presence of Alzheimer’s.

How Does Alzheimer’s Affect the Brain?

Alzheimer’s disease is neurodegenerative in nature, which means it affects brain cells in a destructive manner. The disease destroys brain cells, leading to various psychological issues and secondary problems, so it is important to know how Alzheimer’s affects the brain. The main ways in which it is thought to do so are the following.

Alzheimer’s Affects the Brain by Formation of Plaques

In Alzheimer’s disease, a protein called beta-amyloid is deposited in the brain. It accumulates in sticky clusters, or plaques, which interrupt brain signals and disturb the normal functioning of the brain.

Alzheimer’s Affects the Brain Due to Impaired Nutrition

In Alzheimer’s, a protein called tau, which in healthy brain tissue helps stabilize the microtubules, collapses into tangles. Microtubules are important for normal functioning of the brain and for transporting nutrients to the cells. As Alzheimer’s disease damages these structures, the brain cells that fail to receive nutrients eventually die out.

Exchange of Information in the Brain is Affected in Alzheimer’s Disease

Alzheimer’s disease also greatly affects the transmission and exchange of information inside the brain.
The process of thinking and memory depends on the transmission of signals among and across neurons. Alzheimer’s affects the brain by interfering with signal transmission within the cells and with the activity of neurotransmitters, the brain’s chemical messengers. This results in faulty signaling and impairs the ability to communicate, learn and remember.

Alzheimer’s Disease Affects Memory

Alzheimer’s disease causes significant damage to the hippocampus, which plays an important role in the memory process. The disease causes shrinking of the hippocampus, which affects the brain’s ability to create new memories and recall old ones.

Alzheimer’s Disease Results in Inflamed Brain Cells

Alzheimer’s disease affects the brain in such a way that brain cells recognize the amyloid plaque as cell injury. This stimulates an inflammatory reaction, which results in further damage to the brain cells. Thus, Alzheimer’s disease can also leave brain cells inflamed.

Alzheimer’s Disease Can Affect the Brain Size

When advanced, Alzheimer’s disease causes shrinking in the size and structure of the brain. In advanced Alzheimer’s, the cortex, the surface layer that covers the cerebrum, degrades and shrinks. This massive damage to the brain greatly affects a person’s ability to recall, plan ahead and concentrate. As a patient’s condition worsens, they are likelier to withdraw and become isolated from family as well as society. Bodily functions are gradually lost, and the condition ultimately leads to death.

It is believed that Alzheimer’s disease begins affecting the brain long before the condition is diagnosed. While the speed of the disease’s development can vary, the average life expectancy from the time of diagnosis ranges from three to nine years. No current medications or supplements are known to decrease the risk of this disease, but regular mental exercise, physical exercise, a diet rich in healthy fats and weight management may turn out to reduce the risk of Alzheimer’s disease. While some treatments may improve the symptoms for a while, none can reverse or stop its progression. As Alzheimer’s disease continues to affect the brain, patients increasingly rely on other people for assistance.
Lost for about a thousand years, writings of Archimedes are now being revealed. An X-ray microfluorescence system and other imaging systems are allowing faint remnants of ink to be deciphered, even when covered up by gold leaf, paint, and other layers of ink.

This tenth-century manuscript is the unique source for two of Archimedes' treatises, The Method and Stomachion, and it is the unique source for the Greek text of On Floating Bodies. Discovered in 1906 by J.L. Heiberg, it plays a prominent role in his 1910-15 edition of the works of Archimedes, upon which all subsequent work on Archimedes has been based. The manuscript was in private hands throughout much of the twentieth century, and was sold at auction (for two million dollars) to a private collector on the 29th October 1998. The owner deposited the manuscript at The Walters Art Museum in Baltimore, Maryland, a few months later. Since that date the manuscript has been the subject of conservation, imaging and scholarship.

The Walters Art Museum

A palimpsest is a book that results from scraping the ink from the surface of a manuscript and writing new text over the old. Scholars believe a copy of Archimedes' treatises was made in the tenth century; that manuscript was palimpsested into a prayer book about two hundred years later. Johan Ludvig Heiberg, the world's authority on Archimedes, was intrigued by the undertext. Heiberg visited the Metochion in 1906 and discovered the truth: this book contained the unique source for The Method, the Stomachion, and On Floating Bodies in Greek.

The latest page to be deciphered, with help from the Stanford Linear Accelerator Center, was revealed live yesterday on the internet.

Archimedes was born in Sicily in 287 BC. His father was an astronomer and a mathematician. After mastering everything his teachers knew, Archimedes went to Alexandria, Egypt, where Euclid in his "Elements" had compiled 2,000 years of geometrical knowledge. Archimedes then returned to Syracuse and pursued a life of thought and invention. He endeared himself to King Hiero II, discovering solutions to problems that vexed the king.
HUMANS may have conquered the world, but not without a big helping hand from climate change. A model of the last 120,000 years of our history reminds us that, while humans are adaptable, our species is ultimately at the mercy of the climate. Homo sapiens evolved in Africa about 200,000 years ago, but only left the continent some 70,000 years ago. After that we rapidly went global, colonising Europe and Asia, then Australasia and the Americas. So why did our ancestors linger so long in Africa, and what spurred them to finally move on? Geneticist Andrea Manica at the University of Cambridge, and his colleagues teamed up with climate modellers to simulate changes in temperature and rainfall across the planet over the past 120,000 years. This allowed them to calculate changes in the vegetation in different regions, which gave an estimate of the amount of food available there. In turn, the food-supply data drove a model of population and migration, which assumed human migration patterns follow food distribution. The exercise accurately reproduced the pattern and timings of human expansion out of Africa and across the continents – so far as it is known from the archaeological record – suggesting climate and food supply were key factors. The model also revealed that climate changes probably had a key role in lifting four major roadblocks to humanity’s global takeover (see map). The first and greatest of these was the Arabian peninsula, a once-impassable desert that trapped humans in Africa for tens of thousands of years. Around 70,000 years ago it became wetter, and coastal areas more fertile. H. sapiens followed the food and initially clustered in what is now Iraq. “Climate is a really good explanation for why they didn’t make it out earlier,” says Manica. The Arabian peninsula desert trapped us in Africa for tens of thousands of years – until it got wetter There are several conceivable routes out across the Arabian peninsula, but the model suggests most humans would have followed the Arabian coast. From Iraq, one group expanded east and south-east into Indonesia, where they hit a second roadblock: high sea levels made many islands, and ultimately Australia, inaccessible. Waters fell 60,000 years ago and again 15,000 years later, as successive glaciations trapped more water at the poles. This shortened Asian sea crossings. The climate and vegetation model suggests H. sapiens probably only reached south-east Asia 45,000 to 50,000 years ago, which would rule out a crossing when sea levels fell for the first time. The model suggests humans reached Siberia by 30,000 years ago, where they were met by a vast ice sheet – the third roadblock. Not until 15,000 years ago did that shrink, allowing them into the Americas. Back in Europe and Asia, populations faced the final obstacle between them and world domination: local ice sheets, which fluctuated with the ice ages between 55,000 and 15,000 years ago. During warm periods H. sapiens crept north into Scandinavia and northern Asia, but were forced south when the ice encroached again. “The study fills in many of the links that have only been assumed or guessed at,” says Rick Potts of the Smithsonian National Museum of Natural History in Washington DC. “There are inconsistencies,” says John Stewart of Bournemouth University in the UK. “But the results are still remarkably good.” To see just how sensitive our species has been to climate, Manica ran the model several times, varying the strength of climate’s effect on populations. 
In parallel, he also modelled the history of genetic variation, and compared that with real data on the genetic make-up of modern populations. Strikingly, he could reproduce known migration timings, and real-world genetic data, only if populations in his model were highly sensitive to the climate. Even in recent history, societies have declined or collapsed thanks to climate change (New Scientist, 4 August, p 32). “Climate has been a major determinant in the fate of human populations,” says Manica, “and that’s not going away.”
Snakes, crocodiles, lizards and tortoises can be found in vastly different sizes. Determining which reptiles are the biggest depends on whether weight or length is considered; some reptiles may have a smaller length but a greater weight than others. None of the living reptiles, however, are as large as their now-extinct ancestors were.

The average male saltwater crocodile measures about 17 feet and weighs around 454 kilograms, though specimens have been found that measure around 23 feet and weigh over a ton. The species is located in east India, Southeast Asia, and northern Australia. Because their hides are valued, they are often targets of poachers.

The title of "largest snake" is under debate. The green anaconda is the largest snake as measured by weight, weighing around 249 kilograms. Its body can be around a foot in diameter, and the snake extends to 29 feet in length. The females are much larger than the males of the species. The green anaconda is found in swamps and marshes.

While the green anaconda is the heaviest snake in the world, the reticulated python takes the title of the longest, measuring about 32 feet. It does not have as large a diameter, and therefore is not as big around as the anaconda. The nonvenomous snake features a complex geometric pattern, and can be found throughout Asia.

The longest venomous snake is the king cobra, which can reach about 18 feet in length. During confrontation, king cobras can raise their bodies straight off the ground and meet a human at eye level. The amount of venom they deliver per bite can be enough to kill twenty humans, though most will attempt to avoid humans when possible.

The largest of the lizards can grow to ten feet in length and weigh over 136 kilograms, making it the heaviest lizard on earth. The Komodo dragon has a varied diet, consuming deer, pigs and even human remains. The species, located in Indonesia, is endangered due to destruction of habitat and poaching.

While the Komodo dragon is the heaviest lizard, the water monitor can grow to greater lengths than the Komodo dragon. Water monitors have a varied diet and are noted for their swimming skills. They are not considered endangered, though they are constantly targeted for hunting.

The largest tortoise, measuring about five feet long and weighing around 249 kilograms, has a lifespan of about 100 years. The animal is endangered as a result of nonnative species threatening food supplies and eggs. The tortoise consumes grass, leaves and cactus, and can survive up to a year without food or water.

Located in North and South America, the American crocodile can reach up to 15 feet and weigh about a ton. Specimens in South America, however, have been measured at over 20 feet long. The species is endangered, and various laws have been enacted to protect it, though they are rarely enforced.

Also known as a gharial, at a possible length of over 15 feet and a weight of 998 kilograms, the gavial makes the list of the largest reptiles. This endangered species features a long snout, which adds length to its measurement from snout to tail. It is a carnivore, and lives for up to 60 years.

While current living reptiles reach great sizes, none are as large as their ancestors. The Argentinosaurus stretched to about 120 feet, the length of three buses end to end, and weighed around 100 tons. It holds the record for largest dinosaur ever discovered, mostly due to its long tail.
“Compressed air cars” are cars with engines that use compressed air instead of the regular gas used in conventional fuel cars. The idea of such cars is greatly welcomed in the 21st century, when pollution caused by petrol and diesel is an extremely worrying factor.

Engine and Technology

The engine installed in a compressed air car runs on compressed air stored in the car's tank at a pressure as high as 4500 psi. The technology used by air car engines is totally different from that of conventional fuel cars: the engine uses the pressure released by the expansion of the compressed air to drive its pistons. The result is no combustion pollution, since air is the only working fluid the engine uses to produce power, and the waste product is air itself.

Air Storage Tank/Fueling

Engineers and designers expect the storage tank to be made of carbon fiber, both to reduce the car's weight and to prevent an explosion in case of a direct collision. Carbon-fiber tanks are capable of containing air pressure up to 4500 psi, something steel tanks are not capable of. To fuel the car, a compressor is plugged into it, which fills the compressed air tank using the surrounding air. This could be a slow process, at least until air cars are in common use, after which high-capacity compressors at gas stations could fill the tank in very little time.

Emissions

An air-powered car would normally emit only air, since that is all it uses. The cleanliness of its exhaust, however, depends entirely on the purity of the air put into the tank: if impure air is filled into the tank, the emissions will be just as impure, so emission quality depends on the location and time of filling.
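For a rough sense of the energy such a tank holds, an idealized isothermal estimate (for illustration only; real figures depend on the tank, the temperature, and the efficiency of the expander) gives the work recoverable from air expanding from tank pressure P_1 and volume V_1 down to ambient pressure P_0 as approximately

  W \approx P_1 V_1 \ln\frac{P_1}{P_0}

so the stored energy grows with both tank volume and tank pressure, which is why designs push storage pressures as high as the tank materials safely allow.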
Competition Among Plants

by Ron Kurtus (revised 3 May 2015)

Just as humans and animals compete to win a prize or gain an advantage, there is also competition among plants. This competition occurs both within a species and against other types of plants and even animals. Plants seek the rewards of nutrients, water, sunlight, and territory necessary for survival.

One type of competition is comparing performance with other plants: the "winner" grows the best. Another type of plant competition is head-to-head against other plants; in this case, the winner squeezes out the others to get the most sunlight or other resources. The third type of competition is where a plant grows at the expense of another plant.

If there are sufficient ingredients, the plants will compete by their performance for reproductive ability. If the plants are in proximity, there may be a limited amount of these essential ingredients, resulting in a head-to-head competition for as much of a share as possible. In some cases, parasitic plants will compete with host plants for the nutrition owned by the host.

Questions you may have include:
- How do plants compete for reproduction efficiency?
- How do plants compete for survival?
- What type of competition does a parasite create?

This lesson will answer those questions.

Plants that have sufficient nutrients, water, sunlight, and territory for survival and healthy growth will compete against each other to show which ones can reproduce the best. For example, the ones with the most attractive flowers to insects will be able to be pollinated and reproduce better than those of their species with less attractive flowers. Also, competition between species can be determined by which one creates the most seeds and has the best method of dissemination.

Performance competition with plants is just doing what is natural for them. Although some plants can sense when a predator, such as a caterpillar, is eating nearby plants, they really are not aware of who is the "winner" in a contest, as humans and animals are.

Plants that are close to each other may compete for nutrients, water, sunlight, and territory necessary for survival. Some plants go mainly on the offense, trying to get as much as they can. Other plants use defensive methods to stifle their opponents from getting needed nutrients.

They spread their roots to gather nutrients and water necessary for survival and growth. In this competition, there is only so much of these ingredients available, so the stronger or better competitor may be so efficient that it does not allow the other plant enough for survival or much growth. But it is also possible that neither plant will grow much in such a competition.

Offense and defense

Another area of competition is in gathering available sunlight. Plants that grow rapidly and have big leaves may be able to gather sunlight at the expense of nearby, less aggressive plants. Some plants use other defensive tactics to prevent opponents from competing. Some put toxins in the ground nearby, so competitors cannot get too close.

There are plants that seek nutrients owned by another. Parasitic plants will compete with host plants for the host's nutrients. The parasite is on the offense, trying to take nutrients directly from the victim of the attack. The victim plant is on the defense, trying to fend off the attack and succeed in surviving. Although this seems like a one-sided competition, if the plant is able to prevent the parasitic plant from getting its nutrients, the parasite may wither and even die.
But if the host plant dies, the parasite may be in trouble and even die itself. In this type of competition, one may survive and grow, while the other leads a weakened life.

Summary

Plants with sufficient nutrients, water, sunlight, and territory compete by their performance for reproductive ability. If the plants are in proximity and there is a limited amount of essential ingredients, a head-to-head competition for as much of a share as possible results. In some cases, parasitic plants will compete with host plants for the nutrition owned by the host.
Mission Critical: Preventing Antibiotic Resistance

Can you imagine a day when antibiotics don't work anymore? It's concerning to think that the antibiotics we depend upon for everything from skin and ear infections to life-threatening bloodstream infections could no longer work. Unfortunately, the threat of untreatable infections is very real.

Antibiotic resistance occurs when germs outsmart drugs. In today's healthcare and community settings, we are already seeing germs stronger than the drugs we have to treat them. This is an extremely scary situation for patients and healthcare workers alike.

So, what is fueling antibiotic resistance, you may ask? We're finding that widespread overuse and incorrect prescribing practices are significant problems. In addition to driving drug resistance, these poor practices introduce unnecessary side effects, allergic reactions, and serious diarrheal infections caused by Clostridium difficile. These complications of antibiotic therapy can have serious outcomes, even death.

According to CDC's National Healthcare Safety Network, a growing number of healthcare-associated infections are caused by bacteria that are resistant to multiple antibiotics. These include: MRSA; vancomycin-resistant Enterococcus; extended-spectrum cephalosporin-resistant K. pneumoniae (and K. oxytoca), E. coli and Enterobacter spp.; carbapenem-resistant P. aeruginosa; and carbapenem-resistant K. pneumoniae (and K. oxytoca), E. coli, and Enterobacter spp.

So, what can we do to prevent antibiotic resistance in healthcare settings? Patients, healthcare providers, healthcare facility administrators, and policy makers must work together to employ effective strategies for improving antibiotic use—ultimately improving medical care and saving lives.

Patients can:
- Ask if tests will be done to make sure the right antibiotic is prescribed.
- Take antibiotics exactly as the doctor prescribes. Do not skip doses. Complete the prescribed course of treatment, even when you start feeling better.
- Only take antibiotics prescribed for you; do not share or use leftover antibiotics. Antibiotics treat specific types of infections. Taking the wrong medicine may delay correct treatment and allow bacteria to multiply.
- Do not save antibiotics for the next illness. Discard any leftover medication once the prescribed course of treatment is completed.
- Do not ask for antibiotics when your doctor thinks you do not need them. Remember antibiotics have side effects.
- Prevent infections by practicing good hand hygiene and getting recommended vaccines.

Healthcare providers can:
- Prescribe antibiotics correctly – get cultures, start the right drug promptly at the right dose for the right duration. Reassess the prescription within 48 hours based on tests and patient exam.
- Document the dose, duration and indication for every antibiotic prescription.
- Stay aware of antibiotic resistance patterns in your facility.
- Participate in and lead efforts within your hospital to improve prescribing practices.
- Follow hand hygiene and other infection control measures with every patient.

Healthcare Facility Administrators and Payers Can:

To protect patients and preserve the power of antibiotics, hospital CEOs/medical officers can:
- Adopt an antibiotic stewardship program that includes, at a minimum, this checklist:
- Leadership commitment: Dedicate necessary human, financial, and IT resources.
- Accountability: Appoint a single leader responsible for program outcomes. Physicians have proven successful in this role.
- Drug expertise: Appoint a single pharmacist leader to support improved prescribing.
- Action: Take at least one prescribing improvement action, such as requiring reassessment within 48 hours to check drug choice, dose, and duration.
- Tracking: Monitor prescribing and antibiotic resistance patterns.
- Reporting: Regularly report prescribing and resistance patterns, and the steps being taken to improve them, to staff.
- Education: Offer education about antibiotic resistance and improving prescribing practices.
- Work with other healthcare facilities to prevent infections, transmission, and resistance.

New Investment Needed

Expanding upon current patient safety efforts and goals, the FY 2015 President's Budget requests funding for CDC to increase the detection of antibiotic-resistant infections and improve efforts to protect patients from infections, including those detailed in today's CDC reports. Additionally, the President's Budget requests an increase for the National Healthcare Safety Network to fully implement tracking of antibiotic use and antibiotic resistance threats in U.S. hospitals.
The grasslands of the North Pennines are of outstanding importance for birds: they provide breeding sites for ground-nesting species and feeding sites for birds that breed in adjacent habitats such as heather moorland. The majority of North Pennines moorlands are fringed by large enclosed grass allotments and pastures. These areas contain abundant springs and flushes and so support wetland vegetation such as Sphagnum moss and large amounts of rush. The mix of tall and short vegetation, and the presence of both wetland vegetation and remnant moorland vegetation, is ideal for many species of ground-nesting bird. Nowhere else in England are these birds found in such abundance; this is one of the truly magical features of the area.

The curlew, Europe's largest wading bird, breeds in large numbers in these grasslands. With its characteristic down-curved beak and rich bubbling song, the curlew is one of the most distinctive birds of early summer in the North Pennines. Three other species of wader breed in important numbers in fellside grasslands.

Lapwings select areas of short vegetation in which to nest, as they require all-round visibility in order to watch out for predators. Lapwings are hard to miss: with their bold white and deep iridescent green plumage, long crest and dramatic plunging display flight, their return to the North Pennines in spring is a welcome sight.

Preferring to breed in areas with slightly longer wetland vegetation, the redshank is a shyer species. These medium-sized waders are generally seen when they stand on walls or fence posts, calling loudly to announce the approach of danger.

Shyer still are snipe. Beautifully striped and with a very long beak, these waders nest in dense vegetation and tend only to be seen when accidentally put to flight, unless, that is, their display flight is witnessed. During the breeding season, snipe display above the pastures in which they breed, first flying up high and then plunging downward, vibrating their outer tail feathers to produce a curious whirring sound known as "drumming".

The sweet stream of song from a skylark hovering so high above as to be almost invisible is the epitome of English summertime. Sadly this bird, along with many other once-common farmland species, has declined rapidly in England over recent decades. Largely thanks to the persistence of traditional land management practices, skylarks remain common in North Pennines grasslands, their songs mingling with the calls and displays of the waders, which also breed here in high numbers against a national trend of steep decline.

Areas of short vegetation and tightly grazed turf near the moorland edge provide the favoured habitat of the wheatear. Commonly associated with drystone walls and rocky areas, the wheatear is unmistakable in flight thanks to its bright white rump. Almost always seen on the ground, either standing and calling or pecking at insects in the short grass, wheatears are summer migrants from central Africa and one of the first heralds of spring in the North Pennines.

By contrast, the tall vegetation of North Pennines grasslands provides an ideal habitat for the grey partridge. Another species in national decline, grey partridges can often be seen in early summer on North Pennines roadside verges as they shepherd their tiny chicks to insect-rich meadows to feed.

Whilst the importance of hay meadows for scarce and declining wild flowers may be relatively well known, their importance for birds is seldom recognised.
This is an important habitat for birds nonetheless. The most characteristic bird of the North Pennines hay meadow is the yellow wagtail. This summer visitor from Africa is now becoming scarce, and unfortunately the reasons for this are not yet known. A bird with a bright yellow breast and a long tail, the yellow wagtail is most commonly seen walking along the drystone walls bordering hay meadows. Yellow wagtails have a distinctive sibilant call and nest within the tall grass of the hay meadow.

Hay meadow plants are believed to provide an important source of food for the twite during the breeding season. The twite is a scarce and easily overlooked small brown bird, but one with a distinctive twanging call. Twite nest in areas of long vegetation and particularly favour bracken banks or patches of tall heather. Little is known of their movements in the North Pennines, but their use of hay meadows is thought to be significant.

Compared to the springtime comings and goings of the many species breeding here, all North Pennines habitats are quiet during the autumn and winter. Two bird species do arrive in the autumn, however, to inject colour and life into the otherwise hushed landscape: the fieldfare and the redwing. These members of the thrush family breed in Scandinavia and northern parts of Europe and migrate to the UK to winter in the comparatively milder climate. They generally form large mixed flocks that gather in the tops of trees and then fly down to feed in the rich pastures below. The loud clacking calls of the fieldfare and the high, thin "tseep" of the redwing as they fly over bring a breath of excitement to any winter day.
Evolutionary adaptations that protected the Vikings from infestations of parasitic worms may have left behind genetic traits that increase vulnerability to certain lung diseases. In ancient times the side effects of this adaptation were probably harmless, but as people later began smoking and living longer, the removal of certain anti-inflammatory mechanisms appears to have increased carriers' susceptibility to pulmonary complications such as emphysema.

Emphysema occurs when air sacs in the lungs, called alveoli, become damaged, causing them to merge into one large air chamber instead of many small ones. This reduces the surface area of the lungs, which consequently become less efficient. Alveoli can be damaged by enzymes called proteases, which are secreted by cells involved in inflammation, one of the body's key immune processes. To keep these enzymes under control, a protein called alpha-1 antitrypsin (A1AT) acts as a protease inhibitor and is therefore vital in keeping the lungs protected. People who suffer from A1AT deficiency are consequently more prone to developing lung diseases, particularly if they smoke, since smoking increases inflammation and therefore triggers the release of more proteases. A1AT deficiency is caused by a particular heritable genetic mutation, which results in an altered form of the protein.

Archaeological studies of Viking latrines have found evidence of massive infestations of parasitic worms, while genetic analyses of fecal matter obtained during these excavations reveal that a particular mutation of the A1AT gene was prevalent in the population. A new study in the journal Scientific Reports has now put two and two together, suggesting that this mutation may have protected the Vikings from these parasites.

Using blood plasma from donors carrying both the regular and mutated forms of the A1AT gene, the researchers sought to determine how antibody levels were affected when these worms were present. Conducting their experiments in a laboratory setting, they found that certain compounds released by the parasites destroyed an antibody called immunoglobulin E (IgE) in the plasma of non-mutant donors, but not in that of mutant donors. This suggests that the mutated variant of A1AT protects IgE from these damaging compounds, which likely helps the body fight the parasites: the role of IgE is to bind receptor sites on the surface of certain immune cells, activating a response against invading pathogens. By providing such protection, the mutated A1AT ensures that IgE can fulfill its function and initiate the body's natural defenses.

However, a trade-off is that these variants lose some of their protease-inhibiting power, making the tissue of the lungs more susceptible to damage. This would appear to explain the high rates of A1AT deficiency, and the corresponding prevalence of emphysema, among present-day Scandinavians.
Explain, using an example, how each of the following forces helped push Europe toward war in 1914: nationalism, imperialism, militarism.
Nationalism is love of country, or patriotism; an example is the Serbian nationalism that inflamed the Balkans. Imperialism is taking over another country by force to exploit its resources; the European powers' rivalry over colonies, for example in Africa, is one instance. Militarism is a build-up of military power, exemplified by the naval arms race between Britain and Germany.

Explain the role of the Franco-Prussian War in the rise of the alliance system in Europe.
Germany's victory and unification upset the European balance of power, so the powers established alliances: Germany sought allies to isolate France, and France in turn sought allies to limit German power.

Identify the Black Hand and explain why its members wanted to assassinate Archduke Francis Ferdinand.
The Black Hand was a terrorist group in Serbia that was behind the assassination of the Archduke. Its members wanted to end Austro-Hungarian rule over Bosnia and unite its South Slav population with Serbia.

Describe why the method of warfare on the Western Front during World War I led to a stalemate.
Trench warfare prevented movement, so neither side was getting anywhere. During Christmas there was even a day of peace: neither side wanted to fight, and the soldiers met in no man's land and celebrated together.

Explain why World War I was considered a global conflict even though most of the fighting took place in Europe.
Because of the world powers involved. Nearly every country was allied with a country in the war, and there were treaties that said if one country went to war, the others would back it up. Eventually everyone got involved. It was called the war to end all wars, yet they were fighting over very little.
James Watt Facts

The British instrument maker and engineer James Watt (1736-1819) developed an efficient steam engine which became a universal source of power and thereby provided one of the most essential technological components of the early industrial revolution.

James Watt was born on Jan. 19, 1736, in Greenock, Scotland, the son of a shipwright and merchant of ship's stores. He received an elementary education in school, but of much more interest to him was his father's store, where the boy had his own tools and forge and where he skillfully made models of the ship's gear surrounding him. In 1755 he was apprenticed to a London mathematical instrument maker; at that time the trade primarily produced navigational and surveying instruments. A year later he returned to Scotland, and by late 1757 Watt was established in Glasgow as "mathematical instrument maker to the university." About this time Watt met Joseph Black, who had already laid the foundations of modern chemistry and of the study of heat. Their friendship was of some importance in the early development of the steam engine.

Invention of the Steam Engine

In the meantime, Watt had become engaged in his first studies on the steam engine. During the winter of 1763-1764 he was asked to repair the university's model of the Newcomen steam engine. After a few experiments, Watt recognized that the fault with the model lay not so much in the details of its construction or in its malfunctioning as in its design. He found that a volume of steam three or four times the volume of the piston cylinder was required to make the piston move to the end of the cylinder. The solution Watt provided was to keep the piston at the temperature of the steam (by means of a jacket heated by steam) and to condense the steam in a separate vessel rather than in the piston. Such a separate condenser avoided the large heat losses that resulted from repeatedly heating and cooling the body of the piston, and so engine efficiency was improved.

There is a considerable gap between having a good idea for a commercial invention and reducing it to practice, and it took a decade for Watt to solve all the mechanical problems. Black lent him money and introduced him to John Roebuck of the Carron ironworks in Stirlingshire, Scotland. In 1765 Roebuck and Watt entered into a partnership. However, Watt still had to earn his own living, and his employment as surveyor of canal construction left little time for developing his invention. Watt did manage to prepare a patent application on his invention, and the patent was granted on Jan. 5, 1769. By 1773 Roebuck's financial difficulties brought not only Watt's work on the engine to a standstill but also Roebuck's own business. Matthew Boulton, an industrialist of Birmingham, England, then became Watt's partner, and Watt moved to Birmingham, where he was at last able to work full time on his invention. In 1775 Boulton accepted two orders to erect Watt's steam engine; the two engines were set up in 1776, and their success led to many other orders.

Improvements in the Steam Engine

Between 1781 and 1788 Watt modified and further improved his engine. These changes combined to make as great an advance over his original engine as the latter was over the Newcomen engine.
The most important modifications were a more efficient utilization of the steam, the use of a double-acting piston, the replacement of the flexible chain connection to the beam by a rigid three-bar linkage, the provision of a mechanical device to change the reciprocating motion of the beam end into rotary motion, and the provision of a centrifugal governor to regulate the speed.

Having devised a new rotary machine, the partners next had to determine the cost of constructing it. These rotary steam engines replaced animal power, so it was only natural that the new engine should be measured in terms of the number of horses it replaced. Using measurements made by millwrights who set up horse gins (animal-driven wheels), Watt found the value of one "horse power" to be 33,000 pounds lifted one foot high per minute, a value which is still that of the standard American and English horsepower. The charge for erecting the new type of steam engine was accordingly based upon its horsepower.

On Watt's many business trips there was always a good deal of correspondence that had to be copied. To avoid this irksome task, he devised letter-press copying, in which, by writing the original with a special ink, copies could be made by simply placing another sheet of paper on the freshly written sheet and then pressing the two together. Watt's interests in applied chemistry led him to introduce chlorine bleaching into Great Britain and to devise a famous iron cement. In theoretical chemistry, he was one of the first to argue that water was not an element but a compound.

In 1794 Watt and Boulton turned over their flourishing business to their sons. Watt maintained a workshop where he continued his inventing activities until he died on Aug. 25, 1819.

Further Reading on James Watt

Excellent biographies of Watt are H. W. Dickinson and Rhys Jenkins, James Watt and the Steam Engine (1927), and Dickinson's James Watt (1936). Eric Robinson and A. E. Musson, James Watt and the Steam Revolution (1969), is a documentary history that commemorates the bicentenary of Watt's patent for the separate condenser in his steam engine and includes extracts from Watt's personal letters and other documents not before published. For background material see H. W. Dickinson, A Short History of the Steam Engine (1939), and T. S. Ashton, The Industrial Revolution (1948).
We all know that nerves are those things that allow us to sense the world and control the muscles in our bodies. Nerves are a means of sending information, and the signals they conduct run in both directions. Signals can start in the brain and travel down the brain stem to the spinal cord, then out to the nerves of the body to tell muscles to contract. Similarly, signals like touch or pain can begin out in the body and travel up the spinal cord to the brain to let you know you've stepped on a nail. Nerves are important lines of communication that help coordinate the body and deliver important information.

Damage can be done to nerves, resulting in various dysfunctions. Sometimes the consequences are simple annoyances, like when your foot "falls asleep" from sitting in an awkward position. Damage to particular nerves can cause your lungs or heart to stop working. It just depends on which nerve is affected. Luckily, the really bad stuff only rarely happens. The most common kinds of nerve damage, which just about everyone experiences at some point, are called "Nerve Entrapment Syndromes". Nerve entrapments occur when a nerve is pinched or encroached upon by some structure in the body. This commonly happens at the canals a nerve must pass through in order to get to where it's going. Here is a list of some of the common spots where this can happen.

The spinal canal

This is the space where the spinal cord travels down the center of the spine. A disc herniation, tumor, or fracture located here can put pressure on the spinal cord. This can cause severe problems like loss of bladder or bowel control, paralysis, and in some cases death. Luckily, this is very rare.

The intervertebral foramen

This is the space between two consecutive vertebrae where branches of the spinal cord called nerve roots exit the spine. Herniations can pinch the nerves here, but so can arthritic changes and misalignments of the spine (which are all related in a condition called spondylosis). Pinching the nerve here results in weakness, pain and/or numbness in the extremities. This is in fact the most common cause of sciatica.

The thoracic outlet

This is a space located in the neck and shoulder region just outside of the spine, running down to the armpit area. Many people have never heard of the thoracic outlet. This is where the brachial plexus lies (the group of nerves going from the neck to the arm). There are three main areas: the brachial plexus first travels between the neck muscles, then between the clavicle and first rib, then underneath the pectoralis minor muscle to reach the axilla (armpit). Pinching of the brachial plexus anywhere in this region is called Thoracic Outlet Syndrome, or TOS for short. This can create weakness, pain and/or numbness in the upper limb and can also create problems with blood flow to the upper limb.

The cubital tunnel

This is the space in the elbow typically referred to as the funny bone. The funny bone isn't actually a bone at all; it is the area of the elbow where the ulnar nerve passes from the upper arm on its way to the forearm and hand. That's why, when you smack your elbow on something, you feel pain shoot down your arm and into your pinky and ring finger. Not funny at all. Imagine having a condition where you felt that all the time! This is cubital tunnel syndrome.

The carpal tunnel

I like to point out that everybody has a carpal tunnel in each wrist, but not everybody has carpal tunnel syndrome. The carpal tunnel is the canal in the wrist where nerves, blood vessels, and tendons travel from the forearm to the hand.
If this space is diminished by trauma, inflammation, or out-of-place wrist bones, then you may find yourself with carpal tunnel syndrome. This causes a particular pattern of pain and numbness in the palm of the hand, the thumb, and two to three fingers (the index, middle, and sometimes ring finger). People also often notice that they drop things because the muscles are weak. Another sign of carpal tunnel syndrome is that shaking out the hand brings relief.

The ulnar tunnel

Also called "Guyon's Canal" or the "Tunnel of Guyon", this space occurs at the wrist as well; however, it is the passage for the ulnar nerve at the wrist. As you can guess, this produces the same kind of symptoms as the cubital tunnel syndrome above, with pain and numbness in the last two fingers of the hand and weakness in the hand. Most people refer to this entrapment as Ulnar Tunnel Syndrome or Ulnar Tunnel Entrapment.

The tarsal tunnel

This is a space along the inside of the ankle where the tibial nerve and artery pass from the lower leg into the foot. Inflammation, irritation, trauma, misalignment of the bones of the foot, and arthritis can all cause compression of the nerve and/or artery in this space. Symptoms are typically heel and foot pain with numbness, but if the entrapment occurs high enough, the entire foot can become involved. It is more common for pain to radiate into the big toe or the first three toes.

The piriformis

This is of course not a space, but it is nevertheless worth mentioning. Many people who believe they have sciatica are actually suffering from Piriformis Syndrome. You see, the sciatic nerve passes right next to (and in some people, right through) the piriformis muscle, one of the muscles in the buttocks region. The piriformis sometimes becomes overactive, tight, and enlarged in people who have weak glute groups, because it has to work harder when the gluteus muscles aren't doing their jobs. There are other reasons why the piriformis does this, but compensation for weak glutes is the most common. This causes the sciatic nerve to be compressed or strangled in its passage through the pelvic region into the upper leg. Symptoms are nearly identical to an intervertebral foramen type of entrapment (sciatica) and can often be difficult to differentiate, especially considering that both can occur at the same time. Keep this muscle stretched!

Well, there you go. These aren't all of the nerve entrapments, but they are the most common. I want to mention here that physical medicine like chiropractic and rehab can help or even fix all of these Nerve Entrapment Syndromes except one: in cases where the spinal cord is being compressed and it is an emergency situation, emergency procedures, typically involving surgery, are required. But, as previously mentioned, this is very rare. If you are having any of these symptoms, please seek out the care of a trained specialist like a Chiropractic Doctor before considering surgery. It is likely that these problems can be fixed without all of the scarring and recovery time involved with cutting.
Brain aneurysm repair

Definition: Brain aneurysm repair is a surgical procedure to correct an aneurysm, a weak area in the wall of a blood vessel that causes the vessel to bulge or balloon out. An aneurysm can leak blood and cause a stroke or bleeding into the area around the brain (also called a subarachnoid hemorrhage).

See also: Aneurysm in the brain

Alternative names: Aneurysm repair - cerebral; Cerebral aneurysm repair; Coiling; Saccular aneurysm repair; Berry aneurysm repair; Fusiform aneurysm repair; Dissecting aneurysm repair; Endovascular aneurysm repair - brain

You and your doctor will decide the best way to perform surgery on your aneurysm. There are two common methods used to repair an aneurysm:
- Clipping is the most common way to repair an aneurysm. This is done during an open craniotomy. See also: Brain surgery (craniotomy)
- Endovascular repair, most often using a "coil" or coiling, is a less invasive way to treat some aneurysms.

During aneurysm clipping:
- You are given general anesthesia and a breathing tube.
- Your scalp, skull, and the coverings of the brain are opened up.
- A metal clip is placed at the base of the aneurysm to prevent it from breaking open (rupturing).

During endovascular repair of an aneurysm:
- The procedure is usually done in the radiology section of the hospital.
- You may have general anesthesia and a breathing tube. Or, you may be given medication to relax you, but not enough to put you to sleep.
- A catheter is guided through a small cut in your groin to an artery and then to the small blood vessels in your brain where the aneurysm is.
- Thin metal wires are put into the aneurysm. They then coil up into a mesh ball. Blood clots that form around this coil prevent the aneurysm from breaking open and bleeding.
- During and right after this procedure, you may be given a blood thinner called heparin.

Why the Procedure Is Performed:

If an aneurysm in the brain ruptures, it is an emergency that needs medical treatment, and often surgery. Endovascular repair is more often used when this happens. A person may have an aneurysm without any symptoms; this kind of aneurysm may be found when an MRI or CT scan of the brain is done for another reason.
- Not all aneurysms need to be treated right away. Those that are very small (less than 3 mm) are less likely to break open.
- Your doctor will help you decide whether it is safer to have surgery to block off the aneurysm before it can break open (rupture).

Risks for any anesthesia are:

Possible risks of brain surgery are:
- Blood clot or bleeding in the brain
- Brain swelling
- Infection in the brain, or in parts around the brain such as the skull or scalp
- Surgery on any one area of the brain may cause problems with speech, memory, muscle weakness, balance, vision, coordination, and other functions. These problems may be mild or severe, and they may last a short while or they may not go away.

Signs of neurological problems include:

Before the Procedure:

This procedure is often performed on an emergency basis. If it is not an emergency:
- Tell your doctor or nurse what drugs or herbs you are taking and if you have been drinking a lot of alcohol.
- Ask your doctor which drugs you should still take on the day of the surgery.
- Always try to stop smoking.
- You will usually be asked not to eat or drink anything for 8 hours before the surgery.
- Take the drugs your doctor told you to take with a small sip of water.
- Your doctor or nurse will tell you when to arrive.
After the Procedure:

A hospital stay for endovascular repair of an aneurysm may be as short as 1 to 2 days if there was no bleeding beforehand. The hospital stay after craniotomy and aneurysm clipping is usually 4 to 6 days. When bleeding or other complications occur before or during surgery, the hospital stay can be 1 to 2 weeks or more. You will probably have an x-ray test of the blood vessels in the brain (angiogram) before you are sent home. Ask your doctor if it will be safe for you to have MRI scans in the future.

After successful surgical treatment for a bleeding aneurysm, it is uncommon for it to bleed again. The outlook also depends on any brain damage that occurred from bleeding before, during, or after the surgery. Most of the time, open surgery or endovascular repair can prevent a brain aneurysm that has not caused symptoms from becoming larger and breaking open.
A news report from the International Business Times:

By HANNAH OSBORNE
November 1, 2012 1:55 PM GMT

Extreme weather could eventually lead to the extinction of butterfly populations in the UK because they take so long to recover after a drought. The sensitivity and recovery of UK butterflies following extreme drought are affected by the area and fragmentation of habitat types, research by the NERC Centre for Ecology & Hydrology has found. With more droughts during summer months predicted, scientists believe butterflies face eventual extinction.

Co-author Dr David Roy, from the centre, said: "The delayed recovery of butterfly populations is worrying given that severe summer droughts are expected to become common in some areas of the UK, for example, south east England. If populations don't recover by the time the next drought hits, they may face gradual erosion until local extinction."

The scientists looked at data on the Ringlet butterfly collected between 1990 and 1999, a period that spanned a severe drought in 1995. They found marked declines in insect species after the 1995 drought; Ringlet butterfly populations crashed severely in drier regions. The researchers also found that habitat structure in the wider area influenced populations, with those in larger and better-connected patches of woodland recovering faster.

Lead author Dr Tom Oliver said: "Most ecological climate change studies focus on species' responses to gradual temperature rise, but it may be that extreme weather will actually have the greatest impact on our wildlife. We have provided the first evidence that species responses to extreme events may be affected by the habitat structure in the wider countryside."

Dr Tom Brereton, from Butterfly Conservation and study co-author, added: "Our results suggest that landscape-scale conservation projects are vital in helping species to recover from extreme events expected under climate change. However, conversely, if we do nothing, the high levels of habitat fragmentation will mean species are more susceptible."

There are only around 60 species of butterfly regularly seen in the UK. As butterflies pollinate plants, their extinction could have a huge impact on the ecosystem.
The surface layer of a liquid has a thin elastic "skin" called surface tension. You can see surface tension at work in a drop of water: it forms a little "bead," like a little dome, instead of flattening out.

Water is made up of two kinds of atoms, hydrogen and oxygen. The chemical formula for the water molecule is H2O: each water molecule has 2 hydrogen atoms and 1 oxygen atom. Water molecules are attracted to each other because hydrogen atoms and oxygen atoms are attracted to each other, and they hug close together really tight. This is called cohesion. The molecules hug so close together that they don't want to touch other molecules around them. That's why a bubble is round and only rests a small part of itself on a surface when it lands.

When you blow air into soap bubble solution, the liquid molecules want to attract to each other again, so they wrap around the burst of air until they can attach to each other again; this is what makes the round bubble shape. The air inside pushes the molecules of the soap bubble solution apart, but the attraction between the molecules is so great that the bubble doesn't pop: the molecules are hugging each other too tight.

To experiment with bubbles you need a good bubble recipe. Below are some simple recipes to try. Each of the recipes uses water and dish soap. The "other" ingredient can be baking powder, corn syrup, glycerin (sold at the pharmacy), or sugar. We had the best luck with baking powder. The baking powder recipe made some HUGE bubbles.

Science Project Idea: Mix different formulas of bubble mix and test them to see which one makes the best bubbles. Use the same amount of water and the same amount of dish soap in at least three different buckets. Choose one "other" ingredient and add it in different amounts to each of your trial buckets. To be fair, you should hold the bubble wand in front of a fan instead of trying to blow on it; that way you know the amount of air being blown to make the bubble will be exactly the same. Test the three formulas several times and record your results on a chart. Decide before you begin what property you are looking for in the bubbles. Are you going to test which formula makes the biggest bubble, the bubble that lasts the longest without popping, or the formula that makes the most bubbles?

Here are some books and websites that will help you understand and have fun with bubbles:
- Exploratorium: Bubbles
- Homemade Bubble Solutions
- The Ultimate Bubble Book. There are 3 fun activities in this book: Person Inside a Bubble (page 41), Giant Bubbles (pages 48-51), and Printing With Bubbles (pages 52-53).
- Google Preview: Science Experiments That Explode and Implode: Stink Bombs (bubbles that stink! pages 22-23)
- Prize Winning Science Projects for Curious Kids: The Bubble Olympics (pages 102-103)

More IndyPL Experiments about Surface Tension:
- IndyPL Kids' Blog: Surface Tension – Pepper Scatter
- IndyPL Kids' Blog: Surface Tension – Sand Castles

Words to Know:
Surface Tension – The film that forms on the surface of a liquid, caused by the attraction of the particles in the surface layer.
Cohesion – The attraction between like molecules; to stick together.
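For students who also enjoy computers, the record-keeping step of the science project idea above can be handled by a short program. Here is a minimal sketch in Python; the formula names and bubble sizes are made up purely for illustration, not real measurements:

# Record several trials for each bubble formula, then compare averages.
trials = {
    "baking powder": [22.0, 25.5, 24.0],  # bubble size in centimeters (hypothetical)
    "corn syrup":    [15.0, 14.5, 16.0],
    "glycerin":      [18.5, 17.0, 19.5],
}

for formula, sizes in trials.items():
    average = sum(sizes) / len(sizes)
    print(f"{formula}: average size {average:.1f} cm over {len(sizes)} trials")

# A fair test: every formula gets the same number of trials, measured the same way.
best = max(trials, key=lambda f: sum(trials[f]) / len(trials[f]))
print("best formula:", best)

Averaging several trials per formula is the same idea as recording results on a chart: one lucky bubble doesn't decide the winner.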
The most important technical achievement of the twentieth century may well be that of Fritz Haber and Carl Bosch. They found a replacement for, well, the polite word is bird droppings. They replaced guano - that's the name for the nitrogen-rich excrement of seabirds - which 19th-century farmers used to fertilize fields. "Nitrogen-rich" is the key: we need to eat nitrogen-containing amino acids for our bodies to grow. If the soil is rich with nitrogen, plants will absorb it and make these amino acids, which we can then digest.

The problem with using guano was that it was non-renewable. For nature to generate large piles of guano requires a place with huge numbers of nesting birds, abundant fish stocks for these birds, and a rainless climate. Islands near Peru are one of the few places that meet these conditions. Nineteenth-century merchants leveled the guano cliffs there, transporting tons to the farmers of Europe. By 1870, with the guano piles gone, a desperate search began for a replacement.

They turned to the air. Air is filled with nitrogen, millions of times more than any human being would need in a lifetime. But you can't just put air in the ground. You must remove the nitrogen and combine it with other elements to make a liquid, which can then be added to a field. For years chemists tried to remove nitrogen from the air; it was a German chemist, Fritz Haber, who found the secret: extremely high pressure. He found that he could take nitrogen from the air and form ammonia if he ran his chemical reactions in a vessel pressurized to 100 times normal atmospheric pressure.

But using these high pressures presented great problems in making large quantities. That kind of pressure can be contained on a small scale, but make a vessel large enough to produce tons of ammonia, pressurize it to 100 times atmospheric pressure, and you have a huge bomb if the vessel fails. This is where Carl Bosch enters the picture. When he learned of Haber's process in a meeting, Bosch said, "I believe it can go. I know exactly what the steel industry can do." Using this knowledge he designed vessels to contain the pressure and created a factory that made ammonia by the ton. So important was this work that Bosch won the Nobel Prize for Chemistry; the citation called his ammonia factory "a technical advance of extraordinary importance." Indeed, nitrogen from the Haber-Bosch process provides the nutrition for about 60% of the world's people.

Although making ammonia to supply nitrogen doesn't sound as exciting as space travel or the latest computer, I'd still nominate the work of Haber and Bosch as the most important technology of the last century. I mean, feeding the world is a pretty big thing.

Copyright 2002 William S. Hammack Enterprises
A web browser is software that allows you to display and interact with hypertext documents hosted on remote web servers. When you access a document using a browser, the document is transferred to your local machine and displayed there. Viewing websites requires a browser program to view files, download images, and much more.

There are two broad kinds of browser: text-based and graphical. Text-based browsers are simple and require less sophisticated computers and terminals than graphical browsers. Graphical browsers are mouse- and icon-oriented programs that run under graphical systems such as Windows or Macintosh. They automatically display formatted text in various fonts, along with pictures, sounds, and movies, with a simple click of the mouse: they provide internet multimedia.

Text-based browsers read the same HTML files as graphical browsers, but they display them without formatting. They do not display inline pictures from the document being read, but they allow some pictures and sound files to be downloaded and viewed or played on the local computer at a later time, provided the computer has the proper software and hardware.

Browsers allow you to directly enter a uniform resource locator (URL) to go to a specific document. When started, most graphical browsers automatically retrieve and display a home page. This default home page is often the home page of the site where the browser was developed, but you can change it to any other page for which you have a URL. Netscape Navigator and Internet Explorer are the most popular graphical browsers.
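To make the retrieval step concrete, here is a minimal sketch of the first thing any browser does: fetch a document from a URL and, like a text-based browser, show its text without formatting. It uses only Python's standard library, and the URL shown is just an example:

import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect a page's text content while ignoring markup, as a text-based browser would."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

url = "http://example.com/"  # any page for which you have a URL

# Transfer the document from the remote web server to the local machine.
with urllib.request.urlopen(url) as response:
    html = response.read().decode("utf-8", errors="replace")

# Display the hypertext document's text, stripped of formatting.
parser = TextExtractor()
parser.feed(html)
print("\n".join(parser.chunks))

A graphical browser performs the same retrieval; the difference lies in the final step, where it lays out fonts, pictures, and other media instead of printing plain text.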
A team of researchers has found traces of a new species of prehistoric shark called Megalolamna paradoxodon, which dates back 20 million years and resembles the modern white shark. The discovery reveals a never-before-seen shark that lived during the Miocene epoch and belongs to the Lamniformes, the group that includes many modern sharks. Researchers discovered Megalolamna paradoxodon thanks to a series of teeth measuring up to about 1.77 inches, found in the eastern and western United States, Japan, and Peru.

"The fact that such a large shark with such a wide geographic distribution had evaded recognition until now indicates just how little we still know about the Earth's ancient marine ecosystem," said Kenshu Shimada, a paleobiologist from DePaul University in Chicago and the study's first author.

Can researchers calculate the size of a shark by measuring its teeth?

Sharks are predators that sit at the top of marine food chains and can represent a real threat to animals that share space with them. Researchers discovered the first trace of an ancient shark when analyzing fossils of a "spiny shark" dating back more than 420 million years. Since then, over 500 different species of shark have been found, some living at depths of up to about 6,560 feet (2,000 meters). Shark species vary widely in size: the smallest, the dwarf lanternshark, measures only 6.6 inches, while the title of the world's largest shark goes to the whale shark, which grows up to 39 feet long.

The team of researchers led by Professor Shimada first thought that Megalolamna paradoxodon belonged to a genus of sharks called Lamna, which includes today's salmon shark, due to its large teeth. However, after analyzing the ancient teeth and comparing them to those of modern sharks, the researchers found that this genus didn't fit the fossils' measurements. According to Shimada's report, the teeth were too robust and fit better with the genus Otodus. The researchers concluded that Megalolamna was in fact Otodus's closest relative, despite a roughly 45-million-year gap between the two lineages.

This conclusion emerged when the researchers compared the fossils with evidence belonging to Carcharocles megalodon, history's largest shark. Carcharocles megalodon grew up to almost 60 feet long, and according to studies its bite may have been even more powerful than that of a T. rex. The megalodon, like the newly discovered M. paradoxodon, belongs to the family Otodontidae. Shark researchers have debated whether the megalodon should be placed in the genus Otodus because of its large size, and the comparison with Megalolamna now gives them new evidence.

The researchers note that M. paradoxodon had front and back teeth that worked like sharp knives, allowing the predator to seize and slice its prey. Initial studies suggest the newly discovered species lived in shallow, mid-latitude coastal waters such as those where the team found the fossils: in North Carolina, California, Japan, and Peru. Shimada's team has concluded that the shark measured up to 12 feet long, a little smaller than its modern counterpart, the great white shark. However, some scientists don't agree with this estimate.
John-Paul Hodnett, a shark specialist at Philadelphia's Saint Joseph's University, said that calculating the size of a shark from its teeth alone could lead to mistakes, Live Science reported. "For teeth, you should always be cautious of the fact that it is possible to have very large or small teeth in a shark's jaw, which do not represent the true aspect of the shark's body," said Hodnett, who pointed to the modern whale shark, whose small teeth belie the animal's size (12 meters).

Professor Shimada's findings have been published in the journal Historical Biology and should shed some light on the oceanic history of these long-studied predators.

Source: DePaul University
Figure TS-4: Ranges of percentage changes in crop yields (expressed by the vertical extent of the bars only) spanning selected climate change scenarios, with and without agronomic adaptation, from paired studies listed in Table 5-4. Each pair of ranges is differentiated by geographic location and crop. Pairs of vertical bars represent the range of percentage changes with and without adaptation. Endpoints of each range represent collective high and low percentage change values derived from all climate scenarios used in the study. The horizontal extent of the bars is not meaningful. On the x-axis, the last name of the lead author is listed as it appears in Table 5-4; full source information is provided in the Chapter 5 reference list.

The response of crop yields to climate change varies widely, depending on the species, cultivar, soil conditions, treatment of CO2 direct effects, and other locational factors. It is established with medium confidence that a few degrees of projected warming will lead to general increases in temperate crop yields, with some regional variation (Table 5-4). At larger amounts of projected warming, most temperate crop yield responses become generally negative. Autonomous agronomic adaptation ameliorates temperate crop yield loss and improves gain in most cases (Figure TS-4).

In the tropics, where some crops are near their maximum temperature tolerance and where dryland agriculture predominates, yields would generally decrease with even minimal changes in temperature; where there is a large decrease in rainfall, crop yields would be even more adversely affected (medium confidence). With autonomous agronomic adaptation, it is established with medium confidence that crop yields in the tropics tend to be less adversely affected by climate change than without adaptation, but they still tend to remain below baseline levels.

Extreme events also will affect crop yields. Higher minimum temperatures will be beneficial to some crops, especially in temperate regions, and detrimental to other crops, especially in low latitudes (high confidence). Higher maximum temperatures will be generally detrimental to numerous crops (high confidence). [5.3.3]

Important advances in research since the SAR on the direct effects of CO2 on crops suggest that beneficial effects may be greater under certain stressful conditions, including warmer temperatures and drought. Although these effects are well established for a few crops under experimental conditions, knowledge of them is incomplete for the suboptimal conditions of actual farms. Research on agricultural adaptation to climate change also has made important advances. Inexpensive, farm-level (autonomous) agronomic adaptations, such as altering planting dates and cultivar selections, have been simulated extensively in crop models. More expensive, directed adaptations, such as changing land-use allocations and developing and using irrigation infrastructure, have been examined in a small but growing number of linked crop-economic models, integrated assessment models, and econometric models.

Degradation of soil and water resources is one of the major future challenges for global agriculture. It is established with high confidence that those processes are likely to be intensified by adverse changes in temperature and precipitation. Land use and management have been shown to have a greater impact on soil conditions than the indirect effect of climate change; thus, adaptation has the potential to significantly mitigate these impacts.
A critical research need is to assess whether resource degradation will significantly increase the risks faced by vulnerable agricultural and rural populations. [5.3.2, 5.3.4, 5.3.6]

In the absence of climate change, most global and regional studies project declining real prices for agricultural commodities. Confidence in these projections declines farther into the future. The impacts of climate change on agriculture are estimated to result in small percentage changes in global income, with positive changes in more developed regions and smaller or negative changes in developing regions (low to medium confidence). The effectiveness of adaptation (agronomic and economic) in ameliorating the impacts of climate change will vary regionally and depend a great deal on regional resource endowments, including stable and effective institutions. [5.3.1, 5.3.5]

Most studies indicate that mean annual temperature increases of 2.5°C or greater would prompt food prices to increase (low confidence) as a result of a slowing in the expansion of global food capacity relative to growth in global food demand. At amounts of warming smaller than 2.5°C, global impact assessment models cannot distinguish the climate change signal from other sources of change. Some recent aggregated studies have estimated economic impacts on vulnerable populations such as smallholder producers and poor urban consumers. These studies indicate that climate change will lower the incomes of vulnerable populations and increase the absolute number of people at risk of hunger (low confidence). [5.3.5, 5.3.6]

Without autonomous adaptation, increases in extreme events are likely to increase heat-stress-related livestock deaths, although winter warming may reduce neonatal deaths at temperate latitudes (established but incomplete). Strategies to adapt livestock to the physiological stresses of warming are considered effective; however, adaptation research is hindered by the lack of experimentation and simulation. [5.3.3]

Confidence in specific numerical estimates of climate change impacts on production, income, and prices obtained from large, aggregated, integrated assessment models is considered to be low because several uncertainties remain. The models are highly sensitive to some parameters that have been subjected to sensitivity analysis, yet sensitivity to a large number of other parameters has not been reported. Other uncertainties include the magnitude and persistence of the effects of rising atmospheric CO2 on crop yield under realistic farming conditions; potential changes in crop and animal pest losses; spatial variability in crop responses to climate change; and the effects of changes in climate variability and extreme events on crops and livestock. [Box 5-3]
It is time to bring to a close this story of the 18th-century transits of Venus and the often amazing expeditions to the ends of the earth that they engendered. The purpose in measuring and timing the passage of Venus across the face of the sun on the very rare occasions it is seen to do this was to establish the scale of the solar system (and eventually the scale of the universe itself). Observers had to be sent to very distant parts of the earth because the longer the baseline between them, the more accurate would be the result, and in the ill-explored world of the 1760s this would cost more than one of them his life. But before we turn to the ultimate results of these undertakings we must look at one more of the expeditions, the most famous of them all: the British expedition to the South Pacific for the 1769 transit.

Early analysis of the 1761 transit observations was not entirely satisfactory, and it was expected that the 1769 transit (the last for more than a century) would offer better results. By 1765 Thomas Hornsby, Savilian Professor of Astronomy at Oxford, was urging the European powers to prepare their expeditions: "Posterity must reflect with infinite regret their negligence or remissness; because the loss cannot be repaired by the united efforts of industry, genius, or power." Calculation showed that the South Pacific, as yet hardly explored by Europeans, would be a desirable station, and in case science should not prove attraction enough, Hornsby noted that it would be a "worthy object of attention to a commercial nation to make a settlement in the great Pacific Ocean." Thus it was that the British expedition to the Pacific would have far more hopes behind it than merely establishing the scale of the solar system. Commerce, politics and empire were not to be denied.

The Royal Society of London's estimate that £4,000 would be needed to mount the expedition met with little argument, and an appeal to the 30-year-old King George III was launched. "The Memorialists, attentive to the true end for which they were founded by Your Majesty's Royal Predecessor . . . conceived it to be their duty to lay their sentiments before Your Majesty with all humility, and submit the same to Your Majesty's Royal Consideration." Royal Consideration quickly arrived at acquiescence.

The Society had among its fellows just the man to command such an expedition: Alexander Dalrymple, a former professional sailor with much experience in eastern seas and an adept geographer and navigator. But where to find a ship? Clearly the Royal Navy must be the answer, as it had been for Mason and Dixon years before. And then a major snag. The Admiralty, it seemed, had never forgotten the last time it had allowed an astronomer, Edmond Halley, to command one of its ships on a scientific expedition (see Marginalia, January-February 1986). The result had been mutiny and the near loss of the ship. The First Lord of the Admiralty, Sir Edward Hawke, rather extravagantly announced he would sooner suffer his right hand to be cut off than sign another such commission. So Dalrymple was out. The Admiralty would find its own man. They picked a junior officer then doing marine survey work on the St. Lawrence River in Quebec. His name was James Cook; the ship he was to command, the Endeavour.

The next question was just where in the South Pacific the expedition should go.
Such reports of islands in the vast ocean as existed were not entirely reliable as to latitude and longitude, and one would not, as did Le Gentil in 1761, want to find oneself at sea when the crucial moment arrived. But even as the Endeavour was being fitted out there arrived back from the Pacific the good ship Dolphin. And what news! It had found an island that was a virtual paradise on earth, an island "such as dreams and enchantments are made of . . . ." An island where not only the surroundings were paradisiacal, but where the local culture was also utterly different from that of Europe. In particular, the sailors had discovered, no doubt within minutes of arrival, that the women were extraordinarily free with their sexual favors. The gift of anything metallic would hasten proceedings even further. The captain of the Dolphin had feared his ship would sink at her moorings as her crew enthusiastically ripped the nails from her decks. Her navigators had taken particular care in determining the island's latitude and longitude. Its name was Tahiti. The Endeavour would sail for Tahiti. Considerable quantities of nails would be among her cargo.

So on August 26, 1768 the Endeavour sailed from Plymouth, bearing southwest for Rio, then 'round the horrors of Cape Horn and across some 7,400 kilometers of the Pacific to Tahiti, arriving with almost two months in hand before the transit. Joseph Banks (26, later Sir Joseph, and eventually one of the Royal Society's most colorful presidents), who had joined the expedition as scientific leader and botanist, found previous reports to be accurate:

Soon after my arrival at the tent 3 hansome girls came off in a canoe to see us . . . and with very little perswasion agreed to send away their carriage and sleep in [the] tent, a proof of confidence which I have not before met with upon so short an acquaintance.

But cultural differences went well beyond sexual mores. Ownership seemed a very fuzzy concept, and casually stolen goods became a sore point, particularly when an important astronomical instrument disappeared and had to be hunted down at gunpoint. The English crew set a poor example, as Banks noted in his journal after a near-perfect observation of the transit:

We also heard the melancholy news that a large part of our stock of Nails had been purloind by some of the ships company during the time of the Observation . . . . This loss is of a very serious nature as these nails if circulated by the people among the Indians will much lessen the value of Iron . . . .

The transit observations concluded, Cook, as per instructions, set off southwestward in search of the great southern continent postulated by philosophers of the day as the counterpart to the great land masses of the northern hemisphere. Instead, he discovered New Zealand and spent six months charting its coasts. Setting off westward once more, he ran into the east coast of Australia and worked northward, charting 3,000 kilometers of coast as he went. That took them out into the channel between the coast and nearly 2,000 kilometers of the Great Barrier Reef. Despite the crew's desperately careful sailing, the beautiful but treacherous reef claimed the Endeavour, and although they eventually got her off they had to beach her for many weeks on the desolate Queensland coast to make repairs. With supplies running low, the Endeavour put in to Batavia (Jakarta) for refreshment and more permanent repairs.
So far the crew's health had been fine; indeed, Cook, with his insistence on sauerkraut as a defense against scurvy, was famous for protecting the well-being of his crews, but he had no defense against the malaria and dysentery ("the bloody flux") of tropical Batavia. By the time the ship set off to cross the Indian Ocean, round southern Africa and sail the length of the Atlantic, nearly half the crew had died and most of the remainder were severely stricken. But finally, on July 13, 1771, more than two years after the transit, the survivors, weak and shaken, arrived home. Among those they left behind was Charles Green, the expedition's official astronomer. It was reported that he "had been ill some time … [and] in a fit of phrensy got up in the night and put his legs out of the portholes, which was the occasion of his death." It says something of Cook the man that he would undertake two more expeditions to the Pacific despite these previous experiences. It was, of course, to cost him his life. So another chapter in the history of the transits of Venus was closed. No one alive then would see another. It remained only to determine how well they had done in arriving at their goal of calibrating the astronomical unit, the distance between the earth and the sun.

The Black Drop

Three problems permeated the analysis: first, the curious and unexpected phenomenon called "the black drop"; second, uncertainties in the distances between observers; and third, the problem of how to combine redundant observations. The black-drop problem surprised observers. They were trying to determine the exact moment when the edge of Venus's disk was just tangent to the edge of the sun's disk as Venus began or ended its transit, but what they saw was an elongated black ligament joining the two edges and persisting even when Venus's disk was clearly within that of the sun. This so surprised and unsettled the observers that even when two of them were standing alongside each other their reported times could be half-a-minute apart, when they were expecting agreement to within a few seconds. As Cook himself reported,

This day prov'd as favourable to our purpose as we could wish, not a Clowd was to be seen the whole day and the Air was perfectly clear . . . [yet] we very distinctly saw a . . . dusky shade round the body of the Planet which very much disturbed the times of the Contacts . . . . We differ'd from one another in observeing the times of the contacts much more than could be expected.

Today we understand this as being the result of sunlight refracting through the very dense atmosphere of Venus, but it certainly degraded the timing of the transits. The accuracy of the final results also depended directly on knowing the length of the baseline between observers—in effect knowing accurately the latitude and longitude of each observer. But since methods for determining longitude in the 1760s were inadequate, to say the least, these baselines were not well determined, and the accuracy of the final results was correspondingly diminished. The third problem reminds us that although this series of articles has described those few expeditions that went to remote parts of the earth to observe the transits in 1761 and 1769 (and their observations carried the most weight), there were additionally many other observers who saw the transits from home, if home happened to be in the right hemisphere at the right time.
The initial analysts of the data faced the problem of getting the best single answer from multiple locations and observations, when in principle only two observations were needed. Methods for combining redundant observations were only in their infancy and would not come to fruition until the work of Legendre, Gauss and Laplace in the early 19th century led to the method of least-squares. Thus contemporary analyses of the 1760s data yielded a variety of answers. Typical was the analysis of Lalande in 1771, who found values of the earth's mean distance from the sun (the astronomical unit) in the range of 152 to 154 million kilometers (Mkm). But more than a century later in 1891, when locations had been much better determined and mathematical methods improved, Simon Newcomb, dean of late-19th-century American astronomy, from the same data determined a value of 149.7 ± 0.9 Mkm, and when he combined the 1761 and 1769 transits with those of 1874 and 1882, he found an overall transit value of 149.59 ± 0.31 Mkm. Before we compare this to the latest determination, let it be said we now know that of the methods developed after the last transit of Venus up to the mid-20th century (which included trigonometric parallaxes of asteroids, gravitational perturbations by the sun and improvements in the constant of stellar aberration), none would surpass in accuracy (although often in claimed precision) the results of the transits of Venus. Modern astronomy has turned back to Venus to calibrate the astronomical unit, but now in quite a different way. Today giant radiotelescopes are used as radar guns, pumping out a tremendously powerful radio signal directed at Venus, and minutes later, switched to receiver mode, detecting the faint echo returning from the planet, the round-trip time being measured by atomic clocks. This interval, together with the speed of electromagnetic waves, yields the distance of Venus at that moment—and thus, through Kepler's third law (see Part I of this series), the value of the astronomical unit. The current value stands at 149,597,870.691 ± 0.030 kilometers. This astonishing result, if taken at its claimed precision, almost defies comprehension. It is the equivalent of measuring the distance between a point in Los Angeles and one in New York with an uncertainty of only 0.7 millimeter! So when the next transit of Venus finally comes along in about five years (June 8, 2004), we are not likely to expect new exactitude in determining the astronomical unit, but we might give thought to the words of William Harkness, a key American figure in the 19th-century transits, writing just after those transits:

There will be no other [transit of Venus] till the twenty-first century of our era has dawned upon the earth, and the June flowers are blooming in 2004. When the last [18th century] transit occurred the intellectual world was awakening from the slumber of ages, and that wondrous scientific activity which has led to our present advanced knowledge was just beginning. What will be the state of science when the next transit season arrives God only knows.

© J. Donald Fernie
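A note on the arithmetic of the radar method described above. The following is a minimal Python sketch, not an actual reduction: the echo delay is a hypothetical value chosen near inferior conjunction, the circular-orbit shortcut ignores the real elliptical orbits, and only the speed of light and Venus's orbital period are standard figures.

```python
# Minimal sketch of the Venus radar method (illustrative numbers only).
C_KM_S = 299_792.458            # speed of light, km/s

round_trip_s = 276.7            # hypothetical echo delay near inferior conjunction
earth_venus_km = C_KM_S * round_trip_s / 2.0    # one-way distance from timing

# Kepler's third law (P^2 = a^3 with P in years, a in AU) fixes Venus's
# orbital radius relative to Earth's: P = 224.701 days -> a ~ 0.7233 AU.
P_VENUS_YR = 224.701 / 365.25
a_venus_au = P_VENUS_YR ** (2.0 / 3.0)

# Near inferior conjunction Venus lies between Earth and sun, so the gap
# is roughly (1 - 0.7233) AU; assuming circular orbits, solve for the AU.
au_km = earth_venus_km / (1.0 - a_venus_au)
print(f"Earth-Venus: {earth_venus_km:,.0f} km  ->  1 AU ~ {au_km:,.0f} km")
```

With the delay chosen here the sketch lands near 150 million kilometers; real reductions track the true elliptical orbits of both planets, which is why a single well-timed echo can pin down the scale of the whole system.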
A sentence needs to make sense. Look at these words: dog bone the its ate. They don't make sense like this, but if we swap them round and add a capital letter and a full stop we get a proper sentence. The dog ate its bone. In this worksheet, you can practise making proper sentences out of jumbled words. Here is a jumbled sentence. Swap the words round so that it makes sense and write the new sentence in the answer box. Don't forget to put in a capital letter and full stop (not a question mark)! yellow bananas are
Using Transitions in the Classroom

Have a routine
Having a routine in your classroom can help. For instance, have a starter activity ready on the desks so that students can begin as soon as they enter the room. That way they have no excuse that they "have nothing to do". Transitions need preparation like any other activity. Plan for them and enjoy the results.

Types of Transitions
- Movement Transitions - Different ways to move from place to place
- Calming Transitions - Activities that will calm the tone of the class and redirect the focus and energy of the classroom
- Action Breaks - Opportunities to release a little bit of extra energy, e.g. aerobics, movement, dance breaks, action songs
- Thinking Time - Thought-provoking activities where students are challenged to think creatively, or are offered a problem-solving or open-ended task
- Musical Breaks - Songs, finger plays, poems

Why do we need to incorporate Transitions into our teaching day?
Transitions are not simply a means of controlling or managing a group; they are interesting, engaging, open-ended activities with a definite structure. When you incorporate transition activities into your timetable, you give children a way to re-engage and provide them with a series of tasks that don't take long to complete. This can help students feel that they are making progress and so keep them engaged. Identify where the transitions from one activity to another will be. These are the times when students will talk. If a student genuinely doesn't have anything to do because they are still waiting for the worksheets to be handed out, can you really expect them not to chat?
- Make transitions fun and meaningful
- Keep a collection of finger-plays, games and songs in your teaching bag of tricks for instant activities
- Use a timer to indicate when a transition will happen and for how long
- Tell students what is going to happen next and what they will be doing after the transition. This helps students to understand what is expected of them and look forward to the next change
- Invite students to create their own versions of transition activities

Suggested Attention-Getting Signals
- Musical instruments (bells, drum, tambourine, triangle etc.)
- Music on the IWB or a video clip
- Sound effect recording (can be used on the IWB)
- Hand clapping, hand motions or sound language
Metrics & Indices that Describe Space Weather

Scientists who study weather on Earth measure wind speed, rainfall amounts, and temperature. They use special terms, like dew point, relative humidity, and barometric pressure. Scientists who study space weather do the same sorts of things. They also use special terms and measure certain traits of the "weather" in space. The first place to look when measuring space weather is the Sun. Sunspots are visible forms of active regions on the Sun. We have sunspot records over long time periods, so sunspot counts are an important metric for tracking activity levels on the Sun. Solar flares, gigantic explosions on the Sun, are classified using letters and numbers; X-class flares are the most powerful. Coronal mass ejections (CMEs) are a third type of solar phenomenon for which scientists have developed classification schemes. A second realm for which measures of space weather are needed is interplanetary space. The solar wind is characterized by its speed, particle density, pressure, and temperature. The interplanetary magnetic field (IMF) is described in terms of the magnitude of the magnetic force and the direction of the field's polarity. The third region in which we need quantities that describe space weather phenomena is Earth and near-Earth space (geospace). Some measures describe the strength and orientation of Earth's magnetic field at various places on and near Earth. Other metrics relate to characteristics of Earth's atmosphere, especially the various layers of the ionosphere. A third set of metrics describes the flow of electrical currents in the upper atmosphere and the magnetosphere. Finally, indices that describe features of auroras round out the set of metrics needed to track space weather phenomena in geospace.
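To make the letters-and-numbers scheme for flares concrete: classes A, B, C, M and X correspond to decade steps in peak soft X-ray flux (watts per square meter in the GOES 0.1–0.8 nm band), with a trailing number giving the multiplier within the class. A minimal Python sketch, using the standard thresholds:

```python
# Sketch: classify a solar flare from its peak soft X-ray flux
# (GOES 0.1-0.8 nm band), in W/m^2. Each letter is a decade step.
def flare_class(peak_flux: float) -> str:
    bands = [("A", 1e-8), ("B", 1e-7), ("C", 1e-6), ("M", 1e-5), ("X", 1e-4)]
    letter, base = bands[0]
    for name, lower in bands:
        if peak_flux >= lower:
            letter, base = name, lower
    return f"{letter}{peak_flux / base:.1f}"

print(flare_class(2.0e-4))   # X2.0 -- among the most powerful flares
print(flare_class(5.6e-6))   # C5.6 -- a common, modest flare
```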
The known history of the Arabs goes back about three thousand years. They occupied the northern part of the peninsula that bears their name: Arabia. Their idiom, the Arabic language, met an extraordinary fate from the 7th century onward, with the advent of Islam. Today, Arabic is the official language of 22 countries (250 million inhabitants), to which may be added Malta, whose language is essentially of Arabic origin. Arabic is also practised quite widely in several non-Arab Muslim countries such as Iran and Turkey, whose languages have interacted with it in various ways throughout history. Before the advent of Islam, the Arabic language was in contact with languages belonging to the Semitic family, such as Akkadian, Phoenician, Hebrew, Aramaic, Assyrian, etc. These languages form two subgroups: northern and southern. Although Arabic belongs to the latter, it shares some characteristics with the former. Among the main characteristics it shares with the southern group are:
- A phonological system close to Ancient Semitic, with a high proportion of back (guttural) consonants.
- A morphological system structured by derivation: affixed verbs and internal plurals.
Among the main characteristics it shares with the northern group:
- The nasalized suffixation of the masculine plural.
- The internal passive.
- The diminutive.
This seems to give Arabic the status of a synthesis, or junction, between the two groups, making it the still-living language closest to ancient Semitic. We know the ancient states of this language through inscriptions dating back to the 8th century B.C., but it is above all literary production, notably poetic and essentially oral, that allows us to grasp the specificities of Arabic in the pre-Islamic era. The refinement of ancient poetry proves that it is the fruit of a long maturation. Moreover, it appears to have been conveyed by a common language close to the West Arabian dialects of the Hijaz. This common language seems to have been elaborated around the speech of the tribe of Quraysh, who lived in Mecca, the city of the prophet Muhammad. Mecca was already a holy city and a place of pilgrimage, and it was there, and in this idiom precisely, that the revelation took place at the beginning of the 7th century AD. Thanks to the immense Quranic contribution and the religious production it engendered, so-called Classical Arabic succeeded Ancient Arabic. This mutation is attested in particular by the development of writing, enriched by diacritical and vowel signs with phonological grapheme value, and by the codification of the language with the appearance of the first dictionaries and grammar treatises. The language of the Quran, sacred and "inimitable", was taken as an immutable standard, though with a margin of flexibility, since the dialectal features of seven dialects deemed representative of Good Usage (fasaha) were recovered and integrated into the standard, following the tradition of the Prophet, who ordered the faithful to address people, especially candidates for conversion, in the language familiar to them. Language activity, initially linked to the Quran and its exegesis, contributed at the same time to the normalization of Classical Arabic and to its dissemination in the countries where Islam succeeded in penetrating, in Asia, Africa and Europe. It supplanted Coptic in Egypt and the other Semitic languages in the Middle East, while being influenced by them, as well as by Persian and Turkish.
These various influences are at the origin of the Arabic varieties which have evolved into the dialects currently spoken in Arab countries. After several centuries of intense influence, which made Arabic the world's foremost language of culture, a period of stagnation began, limiting its function to literary expression and the conservation of heritage. This decline coincides with the disintegration of the Muslim Empire and of the Abbasid Caliphate under the blows of the Mongols and then the Tatars. The evolution of Classical Arabic and its dialects over a millennium eventually gave rise to Modern Standard Arabic, with the Renaissance, especially in the 19th century C.E. This linguistic and cultural renaissance was carried out jointly by a return to the sources and a rediscovery or rereading of the heritage, and by a great openness to European civilization through translation, borrowing and calque. These influences are most noticeable in the language of the press, not to mention the more or less unconscious influence of the local dialects. The literal/dialectal opposition, at the root of the phenomenon of diglossia, is a fundamental fact of the current linguistic situation in Arab countries. How does this linguistic evolution manifest itself structurally?

Structures of the Arabic language
The general structure of modern literal Arabic does not differ significantly from the classical, which remains the norm, represented by the correct reading of the Quran (tajweed). Setting aside the regional and local peculiarities of the literal due to dialect influences, we will present the main structural characteristics of the classical norm. Arabic has 28 consonantal phonemes, including two semi-consonants (wa and ya), and 6 vocalic phonemes (u-a-i, doubled by duration). The attached table illustrates this system. This classical standard of reference has, according to the grammatical tradition, undergone notable variations in dialectal use and even in the modern literal:
- The emphatic dad, considered so specific to Arabic as to give the language its name, is no longer realized as a lateral interdental. Falling outside the system, it has become either a dental, as in Egypt; assimilated to the interdental, as in Tunisia; or a lateral, as in a few rare Arabic dialects.
- The interdental order, maintained in Tunisia (except in Mahdia), has given dentals as in Morocco and sibilants as in Egypt.
- The same is true of the velar occlusive: its voiced realization /g/ continues to characterize the dialects of Bedouin origin, and its voiceless realization /q/ the urban dialects. These changes at the level of speech have ended up affecting the pronunciation of the literal.
- At the syllabic level, the changes affecting the dialect are important. More varied combinations have given a richer syllabic system, making it possible to integrate borrowings easily.
- Intermediate apertures also enrich the vowel system of many Arabic dialects (notably /e/ and /o/, as in most Tunisian dialects).

Arabic morphological system
The main characteristic of the Arabic morphological system resides in its derivational structure, which makes it a paradigmatic system of schemes, combining complexity and rigour. Indeed, some patterns are analogically predictable; others, on the contrary, are customary and often have variants (e.g.
bank gives the plural bunuk in Tunisia and abnak in Morocco). Schemes are generated by amalgamating a root, which functions as a virtual consonantal lexical unit, with one or more affixes (a toy sketch of this mechanism follows the list of differences below). The roots, belonging to the lexicon, form an open system, while the schemes, belonging to the morphology, form a relatively closed one. These roots are essentially triconsonantal. Arabic, which has always borrowed words, like all modern languages, must integrate them into one of its schemes. One thus comes to identify a virtual, fictitious root that could become derivationally productive. But this is not always possible, as with the word "strategy", which is integrated but not productive. For a borrowed word, integration into an internal plural scheme is the sign of a more advanced integration (e.g. faylasuf / falasifa, "philosopher / philosophers"). While the noun schemes are quite numerous and not very predictable, those of the verbs are, on the contrary, fewer and rigorously predictable. This regularity does not lack complexity, however, because 40% of the triliteral verbs form subsets, each having an internal regularity. This apparent deviation from the norm is due to the presence in the root of an unstable consonant such as the hamza or one of the two semi-vowels (wa and ya). Arabic verbs have less a temporal value than an aspectual one (accomplished / unaccomplished; the latter has three moods: indicative, subjunctive and jussive). The advantage of such a system is that it allows, on the one hand, reading without vowel signs, thanks to the regularity of the patterns, and on the other, the generation of new units and the use of units held in reserve in the system, because no root covers all the schemes, whether verbal or nominal. But this advantage is not without limits, imposed by the Arabic derivational mechanism, which marginalizes any unit that does not fit into the schematic mould. Arabic also has a dual, a suffixed morpheme "an" in the nominative and "ayn" in the accusative and genitive. One of the most important characteristics of Arabic is that it is a case-inflected language for nouns and adjectives (the subject marked by the suffix "u", the direct object by "a" and the indirect object by "i"); this inflection gives words a large margin of distributional freedom in the Arabic sentence. To recognize a word or look it up in the dictionaries, one must know its consonantal root and its scheme, which requires a minimum prior knowledge of the Arabic linguistic system. The departure of the dialect from the norm of the literal is significant, making them typologically two languages, although clearly related ones.
- The most notable difference is the disappearance of the inflectional endings, with the corollary of limiting the distributional freedom of words in the sentence.
- The dual has practically disappeared; only fossilized traces remain.
- The verbal system has changed, especially at the vocalic level. Indeed, the literal seems to have a predilection for vowel alternation, notably a/i and i/a in the past/present opposition, whereas the dialect seems to prefer uniformity as the dominant tendency (the vowel in question is that of the second radical consonant, the pivot and most stable element of the scheme and of the word).
- Syllabic variation has resulted in greater schematic variation, greater structural flexibility in the word, and an ability to integrate borrowings and neologisms.
- The various Arabic dialects have left their mark on the varieties of literal Arabic.
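Here is the promised toy sketch of root-and-scheme derivation. It is an illustrative simplification, not a full account of Arabic morphology: digits 1, 2 and 3 in a pattern stand for the slots of a triconsonantal root, and the transliterated patterns below are chosen only to show the mechanism.

```python
# Toy sketch of Semitic root-and-pattern derivation: digits 1-3 in a
# pattern are slots for the three root consonants. The patterns are
# illustrative transliterations, not a complete morphology.
def derive(root: str, pattern: str) -> str:
    c1, c2, c3 = root.split("-")
    return pattern.replace("1", c1).replace("2", c2).replace("3", c3)

root = "k-t-b"                      # the root associated with 'writing'
patterns = {
    "1a2a3a": "perfect verb: 'he wrote'",          # kataba
    "1aa2i3": "active participle: 'writer'",       # kaatib
    "ma12a3": "noun of place: 'office, desk'",     # maktab
    "1u2i3a": "internal passive: 'it was written'" # kutiba
}
for pat, gloss in patterns.items():
    print(f"{derive(root, pat):10s} {gloss}")
```

The same mechanism explains why a borrowing can be "integrated but not productive": a loanword may be fitted into a noun scheme without yielding a root that generates further verbal or nominal derivatives.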
The dialect differences from one Arab country to another, however, are much greater than the differences in the literal, which in no way constitute an obstacle to mutual understanding, especially between the literate. In this connection, the current development of the means of communication, in particular the media, increasingly favours the factors of convergence.

Sociolinguistic consequences of this evolution
The term diglossia is applied to the literal/dialectal duality. It is evident that the dialect, given its everyday, lively and utilitarian character, has evolved more rapidly than the classical norm over the centuries. This difference in the pace of evolution has further widened the gap between the two registers of Arabic. The famous Arab socio-historian Ibn Khaldoun was already speaking of two languages in the 14th century. If we apply the typological approach to them today, at all levels of analysis, we can go so far as to consider them two languages, despite their indisputable kinship. However, their own dynamics and the reciprocal influence they exert on each other favour the emergence of intermediate levels, observable in all Arab countries thanks to the media, mass culture and literacy. These intermediate levels, little described, increasingly make them registers or levels of the same language. In everyday life, all social categories use the dialect. When they approach intellectual subjects, the literate enrich it with words drawn from the literal lexical stock, which the dialect easily integrates. Using the literal in everyday life would be more than pedantic: it would be ridiculous. On the other hand, in class, in a public speech or on a media channel, the speaker chooses the register that suits him according to his skills and his audience. We can therefore say that there is a complementary distribution between the registers, with almost inevitable interference. If the dialect is dominant in speech, the literal prevails in writing, because the transcription of the dialect is not standardized; in writing, the specialization is even sharper than in speech. For these reasons, newspapers almost exclusively use Modern Standard Arabic, while radio and television channels use all the registers, varying by speaker and program. In literary and artistic production, most publications are in the literal, except for theatre and popular poetry. In film production the dialect is dominant, but in historical films and in dubbing the literal is quite present. This situation is complicated in some countries by bilingualism or even multilingualism. In Algeria and Morocco, a significant proportion of the population is Berber-speaking (unilingual or not). In these two countries, along with Tunisia, French is widely used, not as a foreign language, but as a vehicular second language, in education and in important administrative, economic and cultural sectors. In its written form, the Arabic language is one of the oldest languages still living today, even as it develops dialectal or intermediate levels. The Arabs revere their language and regard it as the symbol of their cultural and spiritual unity. Instead of breaking up (like most classical languages) into regional or dialectal variants that subsequently accede to the status of standard languages, as was the case for the Romance languages, Arabic has developed in parallel two registers which tend more and more to complement each other in their geographical and social distribution.
Without exaggerating, one could even say that Arabic increasingly forms a continuum that is at once geographic, socio-cultural and structural.
Sea Level Rise

The systematic warming of the planet is directly causing global mean sea level to rise in two primary ways: (1) mountain glaciers and polar ice sheets are increasingly melting and adding water to the ocean, and (2) the warming of the water in the oceans leads to an expansion and thus increased volume. Global mean sea level has risen approximately 210–240 millimeters (mm) since 1880, with about a third coming in just the last two and a half decades. Currently, the annual rise is approximately 3 mm per year. Regional variations exist due to natural variability in regional winds and ocean currents, which can occur over periods of days to months or even decades. But locally other factors can also play an important role, such as uplift (e.g. continued rebound from Ice Age glacier weight) or subsidence of the ground, changes in water tables due to water extraction or other water management, and even the effects of local erosion. Rising sea levels create stress not only on the physical coastline, but also on coastal ecosystems. Saltwater intrusions can contaminate freshwater aquifers, many of which sustain municipal and agricultural water supplies and natural ecosystems. As global temperatures continue to warm, sea level will keep rising for a long time because there is a substantial lag in reaching an equilibrium. The magnitude of the rise will depend strongly on the rate of future carbon dioxide emissions and future global warming, and the speed might increasingly depend on the rate of glacier and ice sheet melting. This page supports the exploration of our changing seas through analysis of Historical Sea Surface Temperatures and Historical Sea Level Anomalies (satellite observations), and Future Sea Level Rise Projections (model-based). Projected Coastal Inundation due to Mean Sea Level Rise and Projected Coastal Inundation due to Mean Sea Level Rise + Storm Surge show potential flood risk maps across scenarios and through to 2150. Use the dropdowns to select the variable, time period and scenario. Storm surge can be turned on using the toggle, and the volume of storm surge can be selected with the slider under the map.
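As an aside on how a rate like "approximately 3 mm per year" is typically estimated: a minimal Python sketch, using a synthetic series standing in for real satellite altimetry anomalies, of fitting a linear trend to annual global-mean sea level values.

```python
# Sketch: estimate a linear sea-level trend (mm/yr) from annual
# global-mean anomalies. The series below is synthetic stand-in data,
# built with a 3 mm/yr trend plus noise, not real altimetry.
import numpy as np

years = np.arange(1993, 2021)
rng = np.random.default_rng(0)
anomaly_mm = 3.0 * (years - years[0]) + rng.normal(0.0, 4.0, years.size)

rate, intercept = np.polyfit(years, anomaly_mm, 1)   # least-squares line
print(f"trend ~ {rate:.2f} mm/yr")                   # recovers ~3 mm/yr
```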
Affecting fewer than 5 out of every 10,000 people1, recurrent autoinflammatory diseases (also known as periodic fever syndromes) are not common, and are classified as rare diseases. Although they involve the immune system, they are not autoimmune diseases. They can be caused by a change to the genetic recipe (DNA) of the non-specific innate immune system, which quite simply means the immune system activates to attack invaders that are not present in the body, damaging the body in the process. Initial symptoms of these autoinflammatory diseases include recurrences of fevers lasting more than 24 hours, often accompanied by rashes and joint pain. In the long term, damage may occur due to the effects of continuous (chronic) inflammation.1 Diagnosing a recurrent, periodic fever can be difficult and complicated, not only because these conditions occur rarely, but because the symptoms are common to many other diseases. Successful diagnosis of a periodic fever may involve several diagnostic tests before the correct cause or condition is identified. A healthy immune system is essential to fight off invading germs that can cause illness or disease – common viruses like the flu, or bacteria such as streptococcus. When germs enter the body, the immune system responds in two ways. With autoinflammatory disease, the innate immune system activates "on its own" in response to germs that are not actually present in the body. This results in an inflammatory response that affects the entire body,1 causing a disease flare with typical symptoms including fever, rash, joint swelling, pain, and fatigue.1 These periodic flares can occur recurrently in some people or continuously (chronically) in others. There are various causes of recurrent periodic fever syndromes and autoinflammatory diseases:1 CAPS: cryopyrin-associated periodic syndromes; FMF: familial Mediterranean fever; TRAPS: tumor necrosis factor receptor-associated periodic syndrome; HIDS/MKD: hyperimmunoglobulinemia D syndrome/mevalonate kinase deficiency. Interleukin-1 beta (IL-1β) is a messenger of the nonspecific innate immune system. It plays a special role in many autoinflammatory diseases, promoting the inflammatory response2 and signalling immune cells to come to where the supposed invaders are in the body. From a practical and emotional point of view, periodic fever syndromes can be quite overwhelming to live with for everyone involved, with many challenges to overcome. But as we understand more and more about these recurrent diseases, improvements are being made that make diagnosis quicker and more accurate. As a result, there is more support and, indeed, there are more support groups for people affected by periodic fever syndromes and autoinflammatory diseases than ever before. Because of the lack of knowledge and understanding around rare diseases, such as recurrent autoinflammatory conditions like SJIA, AOSD and periodic fever syndromes like FMF, TRAPS, HIDS/MKD and CAPS,1 more than 25% of people with rare diseases can wait anywhere from 5 to 30 years from when their symptoms first appear to when they receive an appropriate diagnosis. And up to 40% of people with rare diseases are initially diagnosed incorrectly.2 This has implications not only for that person's health, but also for their social and personal lives.
Raising awareness and educating people about rare diseases is one of the most important things we can do for people who are affected by these conditions: it helps ensure they receive the right diagnosis at the right time and get the understanding and support they need. Your help in spreading the word about rare autoinflammatory conditions such as periodic fever syndromes, and about the challenges associated with diagnosing these diseases, is vital. Follow us on Facebook and share the video above with the people around you. This booklet will help you understand what hereditary recurrent fever syndromes are, their most common symptoms, how these autoinflammatory diseases work, how they are diagnosed, and how you or anyone affected by periodic fever syndromes can learn to live with these life-changing conditions. This helpful glossary should explain all of the terms, abbreviations and acronyms you will discover as you learn about periodic fever syndromes and autoinflammatory conditions. Dr. Michaël Hofer is a pediatric immunologist and rheumatologist based at CHUV, the University of Lausanne, Switzerland. He is an expert in rheumatic diseases affecting children and teenagers, including autoinflammatory syndromes. Here, he outlines the difficulties as children transition from childhood to adulthood. When it comes to autoinflammatory disease, a specialist doctor can provide you with all of the information, advice and care that you and your child need. Here are some things you'll want to consider in choosing a specialist: 1) Specialist or primary care physician? Your child's regular doctor, pediatrician or primary care physician will always play an important role in their care, but it's important to remember that they are not experts in every single disease, so they may not have a deep understanding of your child's specific rare autoinflammatory disease. A specialist will have the expert knowledge you need and be able to offer individual, tailored advice to you and your child. 2) What kind of specialist? There are a number of different specialists that may be able to help, depending on your child's condition. If your child has systemic juvenile idiopathic arthritis (SJIA), for example, a rheumatologist, who specializes in joints and arthritis, may be recommended. Genetic specialists or counsellors could also be important to consider, as they can provide information on the cause of the disease and can help in understanding how other family members may be affected. The best thing to do is ask your child's primary care physician, as they should have an idea of the kind of specialist care your child will benefit from most. If your child has multiple health needs, they may need multiple specialists. In this case, it will be important to put each specialist in touch with the other members of your child's specialist team, so that they can make sure all suggested treatments are complementary and ensure your child is receiving safe, effective care. 3) What if you have to travel? Travelling with an unwell child can be difficult, so it's important to find a specialist not too far away. Some specialist centers provide transport, and some insurance policies may cover the cost of travel too, so be sure to check before making your decision. 4) Does your insurance cover the care? If you live in a country that requires health insurance, it's important to keep an open line of communication with your provider, as much as possible.
Before you book an appointment with a specialist, check that your policy will cover it and what information your provider requires. Talk to your provider if you have any doubts or questions. Speak to your primary care physician first, as they are likely to know about a number of different experts in your area and can use their connections to find out more. Resources such as Orphanet can also help. Orphanet provides access to a directory of expert services and centers for rare diseases or groups of diseases. We would always recommend checking with your doctor once you've found a specialist through Orphanet, to make sure they agree with the choice. Your child is special, and finding them a specialist doctor can help ensure they are receiving all the care they need. Good communication with doctors and other healthcare professionals is important in helping you fully understand what your child's diagnosis means, what treatment options are available, and how to care for your child. Here's a step-by-step guide on what to do before, during and after medical appointments: During your appointment: Doctor's appointments don't have to be scary, stressful or negative experiences. Once you take the right steps to prepare, you can leave an appointment feeling well informed and positive about you and your child's next steps. When you understand what all of the complicated-sounding terms and medical language to do with autoinflammatory diseases mean, you will be able to have much better conversations with doctors about them. Dr. Isabelle Koné-Paut, currently Head of the Department of Pediatric Rheumatology at the Paris SUD, Bicêtre University Hospital and coordinator of the National Reference Centre for Autoinflammatory Disorders, answers them all here. If you don't see the question you want to ask, please visit our Facebook page and post it there. Marco Cattalini, Head of Pediatric Rheumatology at the Pediatric Clinic, Spedali Civili di Brescia and Assistant Professor of Pediatrics at the University of Brescia, Italy, shares his insights on the signs of periodic fever syndromes and how to find a specialist treatment center that fits your child's needs. The correct diagnosis of a periodic fever syndrome can be a challenge to doctors and physicians for three main reasons:

Start a "Fever Diary"
One of the main characteristics of periodic fever syndromes, from the mild to the more severe, is the recurrence of symptoms, such as high fever. In our experience, it is the recurrence of identical episodes that attracts the attention first of the family and then of the referring physicians. For this reason, if you suspect your child has an autoinflammatory disease, one of the best things you can do is to keep a "fever diary." It is not unusual, when we first see a child with "recurrent fevers" and cannot come to a definitive diagnosis, that we ask the parents to come back in a matter of months (depending on the severity of the clinical manifestations) with the fever diary completed. In this diary you should note: It is also very important that you ask your pediatrician to visit your child every time he/she has a fever and to help you fill in the diary with the precise signs associated with the fever.

Keep track of lab tests
During an attack, your physician may ask for laboratory tests (complete hemogram, ESR, CRP, urine analysis) that may be useful to see if inflammatory markers (an indication of inflammation in the body) are high during fever attacks and to rule out infections (i.e., pharyngeal swab, urine culture).
If your pediatrician suspects an autoinflammatory disease, and there is a history of recurrent fever episodes with high levels of inflammatory markers during the attacks, it is also very important to take laboratory tests between the attacks, to see whether the inflammatory markers normalize. One inflammatory marker that may be useful to check is Serum Amyloid A (SAA). Persistently elevated SAA levels may suggest a chronic inflammatory state, and it will be important to rule out a periodic fever syndrome. If your diary shows recurrent episodes of systemic inflammation (i.e., fever, with elevated inflammatory markers and signs/symptoms of organ inflammation detected by your doctor) without evidence of a bacterial origin, probably the most useful thing to do is to consult a physician with expertise in periodic fever syndromes for further work-up. (You can find a link to the Orphanet directory of centers in the Links and Downloads section of this website.) If people live in remote regions where specialist treatment centers are not as accessible as they are for those who live in large cities, what are the top tips you can give them so they don't miss a moment in their care? Periodic fever syndromes are very rare diseases, and many physicians are not familiar with them. As specialized centers are not always easily accessible, a few strategies may help to optimize the care of every child with autoinflammatory disease: 1. Ask for a medical letter: The final diagnosis of a periodic fever syndrome is usually made in a referral center. At that time, ask the medical team to provide you with a detailed medical letter that includes the following: 2. Get contact details: Ask the medical team for contacts (phone numbers, email addresses) through which you or your primary care physician can reach the center in case of need, and provide the referral center with the contacts of your primary care physician. Specialized centers can provide you and your primary care physician with all the information necessary to support you and your child. After a diagnosis is made: 1. Ask for a patients' association contact: patients' associations for periodic fevers usually have their headquarters in referral centers, to be easily accessible, and are an invaluable help in managing your child's needs. Referral centers can usually also suggest websites that provide verified information on periodic fever syndromes. 2. Call your doctor: Promptly contact your primary care physician and share all the information gathered. As with all rare diseases, the parents of children with periodic fever syndromes often feel isolated and helpless, even after the diagnosis. It is very important to understand that, although these diseases are rare, there are other families facing the same difficulties, and people working constantly to help you and your child have the best possible lives. Keep in contact with them and help this community grow stronger. 230 pages of patient stories and first-hand experiences. Get advice to help you if you know someone with a recurrent fever syndrome. Read about the multifaceted burdens of rare autoinflammatory diseases. Navigating rare autoinflammatory diseases: our commitment to you. Uncover the complexities of autoinflammatory diseases and learn how we are focused on helping individuals through awareness, connection, community, progress and change.
The ocean abyss hasn't warmed significantly since 2005, according to a new NASA study, further deepening the mystery of why global warming has apparently ground to a halt in the past couple of years. The researchers stress, however, that the findings do not indicate that there isn't any man-made climate change; sea levels are still rising. It's just the fine details that are currently escaping scientists.

Global warming still heating the planet. This image shows heat radiating from the Pacific Ocean as imaged by NASA's Clouds and the Earth's Radiant Energy System instrument on the Terra satellite. (Blue regions indicate thick cloud cover.) Image: NASA

Today, there are more greenhouse gases, like CO2 or methane, released into the atmosphere than ever before, yet global surface temperatures have stopped following the emissions curve for some time. Clearly, the heat is there somewhere, but where? Recent estimates have calculated that 26 percent of all the carbon released as CO2 from fossil fuel burning, cement manufacture, and land-use changes over the decade 2002–2011 was absorbed by the oceans, which act like a huge carbon sink. (About 28 percent went to plants and roughly 46 percent to the atmosphere.) The heat causes the water to expand and melts glaciers – both factors cause sea levels to rise. Sure enough, the waters have heated up, but temperature readings suggest they haven't warmed fast enough to account for the stalled air temperatures. Some scientists, backed by climate models, suggest the excess heat may be found in the ocean abyss – below the 1.24-mile mark.

Global heat increase by absorbing medium.

Scientists at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, analyzed satellite and direct ocean temperature data from 2005 to 2013 to test the idea. To probe the waters' temperature directly, a network of 3,000 floating temperature probes called the Argo array was deployed. The researchers reached their conclusion after applying a surprisingly simple subtraction calculation. Because water expands when heated, the team calculated the total amount of sea level rise, then subtracted the amount of rise due to expansion in the upper ocean and the amount of rise that came from added meltwater. What's left should correspond to deep-ocean warming, yet the figure was insignificant. "The deep parts of the ocean are harder to measure," said JPL's William Llovel, lead author of the study published Sunday in the journal Nature Climate Change. "The combination of satellite and direct temperature data gives us a glimpse of how much sea level rise is due to deep warming. The answer is — not much."
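The bookkeeping behind that subtraction can be sketched in a few lines. The rates below are round, illustrative placeholders rather than the study's published figures; the point is only that whatever total rise is not explained by upper-ocean expansion and added meltwater must come from deep-ocean warming.

```python
# Sketch of the sea-level budget used to bound deep-ocean warming.
# All rates are illustrative placeholders (mm/yr), not the study's values.
total_rise       = 2.8   # from satellite altimetry
upper_ocean_expn = 0.9   # thermal expansion of the upper ocean (Argo floats)
meltwater_added  = 2.0   # glaciers and ice sheets

deep_ocean_contrib = total_rise - upper_ocean_expn - meltwater_added
print(f"deep-ocean contribution ~ {deep_ocean_contrib:+.1f} mm/yr")
# A residual near zero, as here, implies little deep-ocean warming.
```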
Infectious bronchitis virus (IBV) is a highly infectious avian pathogen which affects the respiratory tract, gut, kidney and reproductive systems. It belongs to the Coronavirus family, genus Gammacoronavirus, which has a single-stranded RNA genome surrounded by a nucleocapsid and envelope. IBV causes infectious bronchitis, targeting the respiratory and uro-genital tracts. IBV mainly causes respiratory disease in infected birds, but it also lowers egg production and can cause kidney damage.
- Young chickens are depressed and huddle under the heat source
- Gasping, coughing and nasal discharge
- Drop in egg production
- Poor quality eggs; misshapen or soft-shelled eggs with watery content
- When the kidneys are affected, increased water intake, depression, scouring and wet litter are commonly observed.
IBV is spread by aerosol or by ingestion of contaminated feed, water or faeces. Contaminated equipment and material are also a potential source of indirect transmission over large distances. IBV is spread globally, being prevalent in all countries with an intensive poultry industry.
Impact for Society – what are we doing?
In spite of available vaccines, IBV is a major problem for the global poultry industry. Work at the Institute is focused on studying the function of individual genes, which gives us a better understanding of the disease and of how to design better vaccines. Collaborations with pharmaceutical companies have been taking place to create novel vaccine designs. The poultry industry and farmers have benefitted from an improved understanding of the disease, and the continued investment in new vaccines will reduce financial losses.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Here is an activity to try with a length of adhesive tape. Press the tape against a dusty surface several times. As expected, the tape quickly loses its holding strength as dust particles collect and coat the sticky side. In contrast, consider tree frogs, which thrive in dusty, wet, or muddy surroundings. Yet they cling securely to branches and leaves, even hanging upside down. How are they able to hold on without falling? Researchers are exploring the "sticky" ability of tree frogs. In one experiment a tree frog is placed on a flat dusty surface. The platform is then slowly tilted to a vertical position. As the tilt angle increases, the frog eventually loses its grip and starts to slide. If it crawls a few steps, however, the frog recovers its traction and stays put. Close inspection of the frog's foot shows many tiny hexagonal surface pads. In the channels between these pads, a small amount of mucus is secreted. The frog is observed to move its foot back and forth, spreading the gel and absorbing nearby dust. Grip is restored as a thin, clean layer of the sticky gel is applied to the foot pads. The frog's foot pad reveals a second important feature. When conventional tape is pulled loose, small cracks often spread through the adhesive. This allows removal of the tape but weakens it for further use. In contrast, the separated pads on the frog's foot prevent the spread of such cracks. As a result, each step taken by the frog is provided with full sticking ability. Just what is it that makes frog feet sticky? The chemistry of adhesion is highly complex and not fully understood. It involves intermolecular forces, atomic bonding and polymer chemistry. Other living things display equally impressive adhesive structures. For example, gecko feet have thousands of brush-like hairs which hold firmly to almost any surface. Cockleburs have flexible grasping hooks, the inspiration for Velcro. Adhesives are an everyday component in building construction, product manufacture, aerospace engineering and medicine. A reusable, self-cleaning adhesive similar to the feet of tree frogs has many potential applications. For example, since frog feet remain sticky in wet conditions, how about band-aids that stick when wet? After use, such waterproof band-aids could be removed painlessly and without leaving a sticky residue. New heavy-duty "frog tape" could be applied underwater to seal leaks in a boat hull or swimming pool. For clothing, reusable sealing tape could provide a quiet alternative fastener. Imagine car tires which adhere to wet or icy roadways without slipping. Such new products could incorporate a liquid gel which mimics the self-cleaning ability of tree frogs. The term smart materials is given to new products which respond and adjust to various conditions. God has filled his creation with valuable ideas for new devices and solutions to everyday problems.
A digital signal processor (DSP) is a specialized microprocessor chip, with its architecture optimized for the operational needs of digital signal processing. DSPs are fabricated on MOS integrated circuit chips. They are widely used in audio signal processing, telecommunications, digital image processing, radar, sonar and speech recognition systems, and in common consumer electronic devices such as mobile phones, disk drives and high-definition television (HDTV) products. The goal of a DSP is usually to measure, filter or compress continuous real-world analog signals. Most general-purpose microprocessors can also execute digital signal processing algorithms successfully, but may not be able to keep up with such processing continuously in real-time. Also, dedicated DSPs usually have better power efficiency, thus they are more suitable in portable devices such as mobile phones because of power consumption constraints. DSPs often use special memory architectures that are able to fetch multiple data or instructions at the same time. DSPs often also implement data compression technology, with the discrete cosine transform (DCT) in particular being a widely used compression technology in DSPs.
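To see why the DCT earns its place in DSP compression pipelines: for smooth signals it concentrates most of the energy in the first few coefficients, which can be kept while the rest are discarded or coarsely quantized. Below is a minimal NumPy sketch of the (unnormalized) DCT-II, written out directly rather than with a library routine so the formula is visible.

```python
# Sketch: naive DCT-II and its energy-compaction property.
import numpy as np

def dct2(x):
    """Unnormalized DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k))
                     for k in range(N)])

x = np.cos(2 * np.pi * np.arange(32) / 32)   # a smooth test signal
X = dct2(x)
energy = X**2 / np.sum(X**2)
print(f"energy in first 4 of 32 coefficients: {energy[:4].sum():.1%}")
```

Because nearly all the signal energy lands in a handful of low-order coefficients, a codec can store only those and still reconstruct the signal closely, which is the basic idea behind DCT-based audio and image compression.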
Maine State Flags & Banners available in all sizes in nylon and polyester. Maine is both the northernmost and easternmost portion of New England. It is known for its scenery—its jagged, mostly rocky coastline, its low, rolling mountains, its heavily forested interior, and picturesque waterways—as well as for its seafood cuisine, especially lobsters and clams. The first European settlement in Maine was by the French in 1604 on Saint Croix Island, by Pierre Dugua, Sieur de Mons. The first English settlement in Maine, the short-lived Popham Colony, was established by the Plymouth Company in 1607. A number of English settlements were established along the coast of Maine in the 1620s, although the rugged climate, deprivations, and conflict with the local peoples caused many to fail over the years. As Maine entered the 18th century, only a half dozen European settlements had survived. Patriot and Loyalist forces contended for Maine's territory during the American Revolution and the War of 1812. Maine was part of the Commonwealth of Massachusetts until 1820, when it voted to secede from Massachusetts. On March 15, 1820, it was admitted to the Union as the 23rd state under the Missouri Compromise. See our great Maine state birthday souvenirs and gifts.

Some Maine Symbols
1. State Bird 2. State Mammal 3. State Flower 4. State Insect 5. State Fruit 6. State Tree 7. State Gemstone 8. State Cat 9. State Fish 10. State Herb

Black-capped chickadee - Maine designated the black-capped chickadee as the official state bird in 1927. A minuscule, cheerfully sociable bird, the energetic black-capped chickadee does not migrate - allowing us to enjoy them all year long.
Moose - Maine designated the moose as the official state animal in 1979. Moose are the world's largest member of the deer family.
White Pine Cone - Maine designated the white pine cone and tassel as the official state floral emblem in 1895 (botanically, the white pine cone and tassel are not considered true flowers).
Honeybee - The honeybee was recognized as the official state insect of Maine in 1975. Bee pollination is critical to plant and human survival - beeswax and honey are just surplus gifts from this tiny wonder of nature.
Blueberry - Maine designated the wild blueberry as the official state fruit in 1991. Found mostly on hilly and rocky terrain, these delicious native berries were, until recently, harvested exclusively by hand using a special rake invented over 100 years ago by a Mainer.
Eastern White Pine - Maine designated the white pine as the official state tree in 1945. Maine's nickname is "The Pine Tree State," the white pine appears on the state flag, and Maine even honors the white pine's cone and tassel as the state "floral emblem." Eastern white pine is a large pine tree native to eastern North America.
Tourmaline - Maine designated tourmaline as the official state gemstone in 1971.
Tourmaline ranges in color from black or white to vibrant shades of red, green, and blue. Individual crystals can be opaque to transparent and may be single or multi-colored.
Maine Coon Cat - The Maine coon cat was recognized as the official state cat of Maine in 1985. Well established more than a century ago, Maine coon cats are one of the oldest natural breeds in North America. Though tabby coloring is most well known, Maine coons come in many colors.
Landlocked Salmon - Landlocked salmon (Salmo salar sebago) was designated the official state fish of Maine in 1969. Landlocked salmon are a subspecies of the Atlantic salmon but never migrate to the sea, living their entire lives in the freshwater lakes of the northern United States and Canada.
Wintergreen - Maine designated wintergreen as the official state herb in 1999. Wintergreen has soothing medicinal qualities - Native Americans used crushed leaves to relieve strained muscles and inflammation. Teas made from wintergreen can relieve sore throats and upset stomachs.

Did you know?
Eastport is the most eastern city in the United States. The city is considered the first place in the United States to receive the rays of the morning sun. Approximately 40 million pounds (nearly 90 percent) of the nation's lobster supply is caught off the coast of Maine. 90% of the country's toothpick supply is produced in Maine. Maine is the only state that shares its border with only one other state. Maine produces 99% of all the blueberries in the country, making it the single largest producer of blueberries in the United States. The coastline boasts so many deep harbors it is thought all the navies in the world could anchor in them. Senator Margaret Chase Smith stood up in the Senate and gave the famous "Declaration of Conscience" speech, speaking out against McCarthyism. Senator Smith was also the first woman to have her name placed in nomination for the presidency at a major party's convention.
Skin rash is an umbrella term used to describe a wide range of skin reactions. There are countless causes of skin irritation, and skin rashes may take a variety of forms. Most cases of skin rash result in skin that is itchy, red, swollen, scaly, dry, or blistering. The symptoms and appearance of a skin rash will often help determine the irritant causing the reaction. Conditions that cause common skin rashes include: Sunburn: We've all been there. Prolonged exposure to UV rays from sunlight or sunlamps can lead to red, dry, hot, and blistering skin. Even your eyes can be burned by the sun, resulting in a gritty feeling, headaches, and eye twitches. After a few days, a sunburn will start to peel as the skin heals itself. There is no cure for sunburn, and symptoms usually dissipate within a few days, but you can use moisturizer and sunscreen to ease discomfort and prevent further damage. Allergens: Atopic dermatitis (eczema) and contact dermatitis occur when the skin has an allergic reaction after exposure to an irritant. Poison ivy, sumac, poison oak, ingredients in creams or lotions, and nickel metal are all examples of irritants that may cause an allergic reaction. Eczema may also be caused by dry skin, genetics, or an immune system condition. Most allergic reactions result in itchy skin, red spots, and scaly patches around the affected area. Dermatitis on the scalp can cause dandruff and hair loss if left untreated. Fungi: Fungal infections are skin diseases caused by fungi living on the skin. Common types of fungal infections include athlete's foot, diaper rash, yeast infections, and ringworm. These skin infections appear in warm, moist places and often occur as a side effect of poor hygiene. For instance, diaper rash is commonly caused by babies sitting too long in dirty diapers. Ringworm and athlete's foot are often passed through human-to-human contact with an infected person or a surface with fungi living on it. Fungal infections can lead to red rashes, itchiness, blisters, or peeling skin. Bacterial infection: Bacterial infections occur when bacteria enter the body through a cut or scrape on the skin. Impetigo, boils, leprosy, scarlet fever, and cellulitis are common forms of bacterial infections. These skin conditions are highly contagious and may cause lesions that ooze, ulcers, itchy skin, and swelling. Scarlet fever can result in a fever, sore throat, and a red rash on much of the body. Viral infection: Like bacterial infections, viral infections occur when a virus enters the body through a break in the skin (usually a cut or scrape). Viral infections may produce different symptoms, depending on the virus. Common viral infections of the skin include shingles, chickenpox, and warts. Shingles and chickenpox are caused by the same virus and result in itchy rashes, red spots, and blisters on the skin. Warts are caused by an infection from the human papillomavirus (HPV) and result in small, scaly bumps on the skin. Chronic skin disorders: Conditions such as lupus, rosacea, and psoriasis are chronic skin disorders that may occur without cause, or as a side effect of an autoimmune condition. Lupus and rosacea result in red rashes on the cheeks and nose, while psoriasis produces red, scaly, and itchy skin on the scalp. These conditions can be treated, but not cured. Dealing with itchy skin? Book an in-person or video dermatology consult on Sesame to talk with a real, quality dermatologist. 
Doctors on Sesame can address your symptoms, prescribe medication, and offer referrals if necessary. Save up to 60% on skin care when you book a visit on Sesame - no insurance needed.
(The Conversation) – We know what we need to do to reduce our risk of getting cancer, right? Wear SPF, stop smoking, avoid processed foods, keep fit, lose weight and get enough sleep. But what if much of what causes cancer has already happened in our early years or, worse still, before we were born? A recent study from Brigham and Women's Hospital and Harvard University says that may be the case, especially in cancers that occur before the age of 50 (early-onset cancers). The most important finding in this study, published in Nature Reviews Clinical Oncology, is that people born after 1990 are more likely to develop cancer before the age of 50 than people born, for example, in 1970. This means that young people will be more heavily burdened by cancer than generations gone by, with knock-on effects on healthcare, economics and families. What we are exposed to in early life can affect our risk of developing cancer later in life, and this review of cancer trends looks at how these factors might be affecting early-onset cancers. Which exposures matter in early life is still not fully clear, but the front-runners include diet, lifestyle, the environment and the bugs that live in our gut (the microbiome). When looking at large numbers of people, researchers can see that dietary and lifestyle habits are formed early in life. This is seen in obesity, where obese children are more likely to become obese adults. As obesity is a known risk factor for cancer, it follows that those adults are likely to develop cancer at an earlier age, possibly because they have been exposed to the risk factor for a longer time. Of course, some of these early-onset cancers are detected through better screening programmes and earlier diagnosis, which contributes to the increased number of new cancers diagnosed annually worldwide. But that is not the whole story. Early-onset cancers have different genetic signatures compared with late-onset cancers and are more likely to have spread than cancers diagnosed in later life. This means that those cancers may need different types of treatment and a more personalised approach, tailored to the patient's age at the time the cancer developed. The Brigham study looked at 14 cancers and found that the genetic makeup of a cancer, and its aggression and growth, differed in patients who developed it before the age of 50 compared with those who developed the same cancer after the age of 50. This seemed to be most prominent in several types of gut cancers (colorectal, pancreatic, stomach). One possible reason for this relates to our diet and microbiome. Gut bacteria are altered by high-sugar diets, antibiotics and breastfeeding. And as patterns of these things change in society over time, so do the bacteria in our gut. This might support the implementation of sugar taxes, as recommended by the World Health Organization. If our healthy cells are programmed in the womb, then so might be the cells that go on to cause cancer. Maternal diet, obesity and environmental exposures, such as air pollution and pesticides, are known to increase the risk of chronic diseases and cancers. Conversely, severe restrictions on food intake in pregnancy, as seen in famine, increase the risk of breast cancer in offspring. Both of these findings have different implications for societal approaches to reducing cancer risk. As a haematologist, I take care of patients with multiple myeloma, an incurable blood cancer that usually affects patients over the age of 70.
In recent years, there has been an increasing number of younger people diagnosed with this cancer worldwide, which is only partly explained by better screening. This study flags obesity as an important risk factor for early-onset disease, but clearly there are other risk factors yet to be uncovered. Understanding what makes early-onset cancers tick, which exposures really matter and what can be done to prevent them are some of the first steps toward developing prevention strategies for future generations.
Let’s go a bit deeper into the brain structures that mark significant, genetic steps en route to the human brain, while recalling that each change is built atop previous, working neurological and behavioral adaptations. Before the cortex emerged in mammals 200–225 million years ago, the brainstem, cerebellum, and primitive forebrain were in charge of behavior. These structures predate conscious thought. They supported behaviors which solved challenges that we still face. We still have those structures. Their decisions still shape our behavioral decisions about eating, sex, and security, except in specific cases when our conscious decision-making overrules our primitive choices. There are distinct stages in the development of the brains that led up to the human brain. Although physiologists and cognitive researchers use diverse terms for the stages, here we will deal with these significant stages:
- Neural net. Before the brain and after the amoeba, a diffuse neural net developed, providing an intermediate stage between external stimulus and organism response.
- The fish brain (the vertebrate hindbrain) controls the pulse of life – the drive to keep the body working well and the drive to have sex, which propagates the species.
- The mammalian brain adds memory to species skills, as well as combining needs from homeostasis, sex, and safety into a unified response to current demands.
- The primate brain, with its shift to visual and tactile sensory organization, is responsible for combining experiential memory with sensory information.
- The final stage, the vast growth of the prefrontal cortex, will be discussed, along with culture, in the section on Development of the Adult Human Brain.
In pre-human mammalian brains, obviously there is no thinking in words, but there is cogitation—consideration, evaluation, and action. Sense data, patterns, categories, and concepts—the elements of the environment—are subject to immediate notice and evaluation by the organism. Animals, starting at the vertebrate branch of life (in Figure 13.1, fish, middle left, highlighted), have a neural brain that monitors the levels of biologically important activities and dictates actions to pull those levels closer to set points, developed long ago, that make current life more pleasant. In humans, we call this homeostasis—the levels of oxygen and sugar in the blood, heart rate, body temperature, and so on. The external world provides behavioral opportunities to adjust the internal levels toward the homeostatic levels. This operation is performed in the oldest portion of the brain and typically does not require our conscious attention. These are preconscious behaviors. For example, if your body temperature is too high, your body has two primary ways to counter it: you can perspire or you can get into shade. Perspiration is a preconscious response to the immediate situation. Seeking shade can be preconscious, but it can also be a conscious response. Even when it is conscious, it is a motivated act, not a free-willed act. Vertebrates, early descendants of the invertebrates, developed multiple senses, allowing them to live in environmental niches that the invertebrates hadn’t saturated. They also had a more sophisticated brain, which combined the results of touch, smell, sound, and sight, allowing behaviors informed by all of the senses rather than by each sense individually. An organism with various bodily and sensory systems requires good performance of numerous biochemical reactions.
Those reactions occur best in certain temperature and pressure ranges, ion ratios, etc. The outside environment is rarely in alignment with all the organism’s needs. Thus, the organism must maintain that balance if it is to continue living. It must maintain its own homeostasis. The next step, a huge step, was the addition of memory in the limbic system, which occurred (in Figure 13.1) probably before the branch in the middle left where the mammals splinter off from the reptiles. With mammals came the limbic system (Figure 13.2), which consists of distinct elements with specific capabilities:
- Thalamus. Receives input streams from the sensory organs. It does some sensory processing itself, but it also acts like a switchboard, forwarding each sensory stream to a particular cortical lobe. It also returns cortical movement decisions to the muscles.
- Tectum. Enhances auditory and visual data. A more immediate, but more primitive, reaction than the cortical enhancements.
- Hypothalamus. Acts as a connection between the nervous system (behavior of the organism in the environment) and the endocrine system (alters internal state by biochemical changes).
- Hippocampus. Allows memories with emotional values to be formed.
- Amygdala. Evaluates the current situation against remembered situations along the dimension of the 3S imperatives (needs, desires, and fears—experienced as emotions).
The limbic system supports more complex behavior, beyond the simple stimulus-response actions that homeostasis provides. Once memory formation became possible, with the existence of the hippocampus, current environments and past situations could be compared. With the additional information about the results of prior behavioral choices, we have a guideline for choosing actions in the present. That decision-making process is guided by which choice is likely to result in biological satisfaction. The brainstem produces invariable behaviors to satisfy homeostasis. With the limbic system, the organism evaluates the current situation against prior situations in terms of security, satiation, and sexual opportunities. To perform this task, it assigns a value to each situation according to its capacity for satisfying the 3S imperatives. These values are emotions. Each behavioral choice is likely to only partially satisfy a 3S imperative, and thus any emotion. Very rarely does a single emotion demand and achieve complete satisfaction by a specific behavior. Typically each emotion is partially slaked by our actions, resulting in the confused state of conflicting decisions, since desires, goals, and fears cannot all be resolved with a single action. The cortex, twin hemispheric neural structures, each with four lobes and responsibility for half the sensory and movement activities, first emerged in the brain of mammals. The three lobes that receive sensory input extract more information from the environment than the limbic system does. The lobes forward results to a relatively small frontal lobe in the early mammals. In the frontal lobe, behavioral responses to the current situation are triggered and sent to the muscles to activate motion. The active role the cortex takes in assembling a mental worldview is nicely captured by E.T. Jaynes’s (p. 133) caution: “Seeing is not a direct apprehension of reality, as we often like to pretend. Quite the contrary, seeing is inference from incomplete information.” The integration of the limbic system and the cortex’s frontal lobe gives rise to the confusing, dual aspects of emotional reactions.
First, the limbic system reacts to the situation, non-verbally but biologically—hairs raised, goosebumps, startled reactions—then the cortex receives the information, enhancing it. Only then can we evaluate it consciously. At that point, we can speak of the emotion, attempting to put into words the reasons for our reaction.
Mammal Social Groups
Social groups appear as a response to environmental and competition pressures. Members of a species acting together have a greater chance of survival than a single organism acting alone. The relative strengths of competition and cooperation vary among species and environmental niches. The net result is that mammalian species support different social group sizes. The largest social groups in chimpanzees and gorillas are the size of small neighborhoods. Building upon the fundamental thought processes of an individual, these higher primates alter their behavioral range to align with group norms. The effect of social norms, culture, and civilization will be more fully discussed in Development of the Adult Human Brain.
In America, we hold many values that represent our great country. We value our freedom, our abundance of opportunities, and peace across the differences among us. But there is one value we strive to hold but cannot achieve: equality. Equality is defined as the state of being equal, especially in status, rights, and opportunities. In America, we like to believe that we treat all of our citizens equally and that every person has the same opportunity to achieve success. But this is untrue: there are many people who have head starts in this “race” to succeed and others who have been held before the starting line. There are 39.7 million people who are held before the starting line, making up 12.3 percent of our population (The Conversation, 2018). These people are considered to be poor or living in poverty. There are far too many people living in poverty, and each and every demographic experiences it differently. The term poverty acts as an umbrella, as there are many types of poverty that people can experience. The most commonly known type is extreme poverty, defined as living on less than $1 a day. Another known type of poverty is absolute poverty, which measures the amount of money necessary for basic needs such as shelter, food, and clothing. The quality of life is borderline nonexistent, as most people living through this kind of poverty are worried only about the items necessary for survival. There is another type that I would consider to be the “highest class” of poverty: relative poverty, which describes people whose economic status is poorer than that of others in society but who still have a better quality of life. These may be everyday people who cannot afford a TV and a Netflix subscription. Relative poverty is measured in terms relative to the average person in society. No matter which type of poverty is present, each one is an issue that should not be taken lightly. In order for any person to thrive and succeed, there are a few basics that need to be met. The basic needs for survival are shelter, food, and clothing; but those needs are only the bare minimum to survive. To achieve greatness and be at peace, there should not be any worry about this month’s rent being paid or wondering whether there will be food in the house every day. A happy home makes for a happy life: when people feel safe at home and have a steady income, they will most likely live a happier and healthier daily life. When time is spent worrying and stressing, it affects not only mental health but also physical health. Health issues can start to arise, and it’s not as easy to see a doctor when you’re living in poverty. One by one, poverty starts to eat away at thousands of families and households around our country. Poverty has many dimensions and does not target all demographics equally; most of the people living in poverty are minorities. These minority groups are not defined by race alone; they also include single mothers and single-parent households. In 2016, the rate of male-headed households living in poverty was only 13.1 percent, while for female-headed households it was 26.6 percent (Poverty USA, 2016). It is apparent that female-headed households experience low income more often than male-headed households. Women are stratified by gender in society, and socially stratified in this case. Women earn less than men, so they may have to juggle more than one low-wage job. On top of maintaining multiple jobs, they struggle with finding a babysitter or paying a day care center to watch their kids when they are not in school.
All of these factors are just a glimpse into the everyday life that single-parent families experience in poverty. Within the 40 million Americans living in poverty, 27.4 percent are African American, 26.6 percent are Hispanic, and only 9.9 percent are White (Economic Policy Institute, 2018). Minority groups within our society tend to earn less than non-Hispanic whites do. The reason for the wage gap between minorities and the majority is most likely the inequality of opportunity. In the case of poverty-stricken households, the housing areas are usually rundown, the schools may not be the most prestigious, and the occupations are not highly qualified. This leaves families with low-wage jobs that offer little to no benefits, and with schools that provide the bare minimum of education, which often does not prepare a child for college or higher education. Education and wealth typically go hand in hand: the higher the education, the higher-paying the occupation, and the more income earned by the individual. In 1964, President Lyndon B. Johnson signed the Economic Opportunity Act and the Civil Rights Act of 1964 and declared an “unconditional war on poverty” (Chaundry, Wimer, Macartney, Frolich, Campbell, Swenson, Oellerich, & Hauan, 2016). These two acts were implemented to improve the opportunities, education, health, and resources of low-income families to allow them better opportunities to make ends meet. These acts became the building blocks of our mission to alleviate poverty at its worst, about 50 years ago. Since 1964, food stamps (later known as the SNAP program), Medicaid and Medicare of 1965, Supplemental Security Income of 1972, the child support program of 1975, the Children’s Health Insurance Program of 1997, and the Affordable Care Act of 2010 have drastically aided low-income families.
To create a cloud composition:
1. Students should be able to create cloud characters and, after conducting research, write a rap about their assigned cloud with their classroom teachers.
2. Students should be able to use the rap as the chorus and verse section of a longer composition they will create in music class by following a set of guidelines specified by the music teacher.
3. Students should be able to create an accompaniment for each section of the composition using vocal sounds, body sounds, traditional and non-traditional instrumental sounds, and sounds produced by electronic means.
4. Students should be able to vary the use of dance elements to create movement sequences that capture the essence of their cloud characters.
5. Students should be able to select appropriate props to support the performance of the composition.
6. Students will be able to identify the defining characteristics of clouds in Chinese artwork.
7. Students will be able to create a cloud sculpture for their assigned cloud using the elements of art, the principles of design, and recycled materials.
8. Students will be able to conduct and perform their own cloud composition incorporating words, visuals, sound, and movements.
9. Students will be able to capture the planning process and the performance of their composition using iMovie.

Materials:
- Website addresses with information and images about cloud types
- Musical excerpts of pieces about clouds
- Copies of student raps
- Chart paper and markers to create composition maps
- A variety of classroom pitched and unpitched percussion instruments
- Props like scarves and ribbon streamers
- Cloud sculptures mounted on dowels to use as conducting batons
- Copies of student raps and student composition maps
- A variety of classroom pitched and unpitched percussion instruments
- Props like scarves and ribbon streamers and/or PE manipulatives like balls, hula hoops, parachutes, etc.
- Pictures of clouds in Chinese art

MEDIA AND CLASSROOM
Teaching materials and reflection tools to address collaboration and team building

Creation of cloud characters and raps – Classroom teachers
1. Classroom teachers will guide students to create cloud characters to portray the various types of clouds and the weather they produce (engagement before learning). Musical excerpts will provide inspiration for movement ideas.
2. Classroom teachers will assign a group of students to each type of cloud.
3. Classroom teachers will guide students to conduct research about their assigned cloud and the type of weather it produces.
4. Classroom teachers will guide students to create a rap for each cloud type. The chorus of the rap should have 4 lines with 4 beats each to express the main idea they want to convey about their cloud type. Students should also create a verse with 4 lines made of 4 beats that expresses the details they want to convey about their cloud type.

Creation of the cloud composition – Music teacher
1. Music teacher hears each group perform their raps.
2. Music teacher introduces students to the form of the composition they will be creating by using a composition map. (See composition map at end of lesson plan.)
3. Music teacher models how to use the structure of the composition map with one group. Group members decide how to create the steady beat accompaniment – vocal or body sounds, traditional or non-traditional sounds, or sounds produced electronically.
4. Music teacher invites students in the audience to choose props or instruments for the bridge section to illustrate the cloud being featured and the weather it produces. Music teacher guides all participants through the 32 counts of the bridge, challenging students to show what they have learned about the featured cloud.
5. Music teacher uses the cloud sculpture created in art class as a conducting baton to model how to conduct the groups performing the cloud composition.
6. Model group performs the cloud composition with audience participation (music teacher conducts with cloud baton).

Next session – all small groups plan and rehearse their cloud compositions using the structure of the cloud composition map and a student conductor

Concurrent sessions with Chinese teacher and art teacher
1. Chinese teacher introduces students to clouds in Chinese art and guides students to discover the defining characteristics of this art form
2. Art teacher reviews the elements of art and the principles of design with her 5th grade classes
3. Art teacher shows students recycled objects that can be used to create their sculpture
4. Each group collaborates to create a 3D cloud sculpture for the cloud type assigned to them by incorporating their assigned element of art and principle of design and using recycled objects
5. Each group figures out how to mount the cloud sculpture onto a dowel to create a conducting baton

Concurrent sessions with the PE teacher
The PE teacher works with the students to create their 8 beat movement sequences with their props for each cloud type. These movements will form the bridge section of each group's cloud composition.

Final rehearsals with music teacher
Each group rehearses their cloud composition in music class incorporating all elements – vocal introduction and coda, rap, steady beat accompaniments, patterns with props and movements or with instruments, and cloud sculpture batons

Final performance in the gym with all classroom teachers and specialists
1. Each group presents their cloud composition performance with a student conductor and involves the audience in the introduction, coda, and bridge
2. Technology teacher films each performance

Reflections with the Guidance counselor
1. The guidance counselor presents a lesson about collaboration
2. Students reflect on how they demonstrated collaboration while creating and performing their cloud compositions

Creation of iMovie – Media Specialist and classroom teachers
1. The iMovie will document the process of the creation and performance of the cloud compositions
2. The iMovie will serve as a model for arts integration to be used by other grade levels at our school
3. The iMovie will be shown as part of our STEM night in April

1. Careful organization of the student teams creating each cloud composition will help address the learning needs of all students. It is important to incorporate diverse types of learners in each group.
2. This project is designed to include all the multiple intelligences and all modalities of learning – visual, aural, and kinesthetic.
3. The structure of the cloud composition contains a chorus section that repeats at several points throughout the composition. This is the section that everyone can be successful at performing. The verse can be more challenging and can be performed by a smaller group.
4. The bridge section is non-verbal and will allow all students to move their bodies with a prop or create sounds with instruments.
Each student can contribute in some meaningful way to this section of the composition.
5. The cloud composition structure can be modified to meet the learning needs of all students. It can be abbreviated or elaborated upon. The number of choices for props, instruments, and movements can be streamlined or increased depending on the needs of the students.

Follow Up and Extension Ideas
1. Our plan is to show the iMovie of the process of the creation and performance of the 5th grade cloud compositions to other grade levels as a model. We are planning a school-wide study of weather concepts that will be showcased at the end of the year during STEM night. This evening would feature live performances as well as the showing of an iMovie which would document the integrated arts and STEM approach to the study of weather by all the grade levels.
2. This lesson could easily be adapted to other science concepts throughout the curriculum for all grade levels. The steps of the process we have used to create the cloud compositions could be taught in terms of the Engineering Design Process. Teachers could complete a graphic organizer demonstrating how we followed the Engineering Design Process to create, rehearse, and perform the cloud compositions. Students could then complete their own graphic organizers for inclusion in their STEM notebooks.

- Grade Level: Fifth
- Arts Content Area: Dance, Music, Theatre Arts, Visual Arts
- Non-Arts Content Area: English Language Arts, Global Studies/World Languages, PE, Science
The data and charts above provide the tide time predictions for Monomoy Point for December 2022, with extra details provided for today, Friday, December 9, 2022.

What are Tides?
Tides are very long waves that move across our oceans, caused by the gravitational pull of the moon and, to a lesser extent, the sun. When the highest point of the wave (also known as the crest) reaches a coastline, the coast experiences what we call a high tide. When the lowest point (also known as the trough) reaches the coast, we experience a low tide. In the open ocean, the tidal forces of the moon form bulges of water that face the moon, but around landmasses and coastlines the water is able to spread out onto land, which creates the tides.

Earth's tides change based on the gravitational pull of the moon as it orbits us. The gravitational pull of the moon is strongest on whichever side of the Earth is facing it, and gravity pulls the oceans towards the moon, resulting in a high tide. On the opposite side of Earth, the bulge is caused by inertia. The water moving away from the moon is able to resist the gravitational force trying to pull it in the opposite direction because the gravitational pull is weaker on the far side of Earth. Inertia wins, and this causes the ocean to bulge out and create a high tide. As the Earth spins, different locations on the planet face the moon, and this rotation is what allows the tides to cycle around the planet.

Types of Tides
There are two types, or extremes, of tide. These are called the spring tide (also known as the king tide) and the neap tide, and each occurs twice every month. When the Moon is at a right angle to the Sun as seen from Earth, the gravitational pulls of the Moon and Sun work against each other. We call these tides neap tides, and they occur when the difference between high and low tide is at its smallest. A neap tide happens between two spring tides, twice a month, when the first and last quarter Moon appears. When the Earth, Moon, and Sun are in alignment, their combined gravitational pull is strongest. A spring tide is when the difference between high and low tide is at its most extreme, producing the highest and the lowest tides of the month.

Tide Predictions for Monomoy Point
Our tide prediction model for Monomoy Point uses harmonic constants and the nearest available coordinates, along with the Lowest Astronomical Tide (LAT) to define the chart datum. Tide times and heights may not be 100% accurate, and they also do not account for local weather conditions. We built this tool out of a love for tides and astronomical calculations, but it is not intended to be used for navigation or any purpose where you would need to rely on the data being accurate. If you find any errors or problems with the tide data for Monomoy Point (or any other tidal station), please let us know.
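For the curious, the harmonic-constant approach mentioned above can be sketched in a few lines of C. This is a toy illustration, not the model this site actually runs: M2 and S2 are real constituent names (the principal lunar and solar semidiurnal terms, with their standard angular speeds), but the amplitudes, phases, and datum offset below are invented example values.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* One tidal harmonic constituent: amplitude (meters), angular speed
   (degrees per hour), and phase lag (degrees). Values are illustrative. */
typedef struct {
    const char *name;
    double amplitude;
    double speed_deg_per_hr;
    double phase_deg;
} Constituent;

/* Predicted height above chart datum at time t (hours):
   h(t) = offset + sum over i of A_i * cos(w_i * t - phi_i) */
double tide_height(const Constituent *c, int n, double offset, double t_hours) {
    double h = offset;
    for (int i = 0; i < n; i++) {
        double angle_rad =
            (c[i].speed_deg_per_hr * t_hours - c[i].phase_deg) * M_PI / 180.0;
        h += c[i].amplitude * cos(angle_rad);
    }
    return h;
}

int main(void) {
    /* M2 and S2 speeds are the standard values; amplitudes, phases, and
       the 1.0 m datum offset are hypothetical. */
    Constituent c[] = {
        {"M2", 0.45, 28.9841042, 120.0},
        {"S2", 0.12, 30.0000000,  95.0},
    };
    for (int hr = 0; hr <= 24; hr += 3)
        printf("t = %2d h, height = %.2f m\n", hr, tide_height(c, 2, 1.0, hr));
    return 0;
}
```

A real prediction sums dozens of constituents with station-specific amplitudes and phases, which is why the model uses the harmonic constants for the nearest available coordinates.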
The bilateral ties between the Republic of Cuba and the Kingdom of Spain are referred to as “Cuba–Spain relations.” There has been a connection between the two for more than five centuries. Cuba was a Spanish colony from 1492 until 1898, when the United States seized control of the country as a result of the Spanish–American War. Three of Christopher Columbus’ four voyages passed through the Canary Islands, and the first Canary Islanders to settle on the island arrived in 1492 aboard Columbus’ ships. Another group of Canary Islanders arrived in Cuba during the final part of the sixteenth century. When Columbus returned to the island on his second voyage in 1494, he traveled along the island’s southern coast, stopping at a number of inlets, including what would become Guantánamo Bay. After receiving instructions from Spain to conquer the island, Diego Velázquez de Cuéllar set out from Hispaniola to establish the first Spanish colony in Cuba, which was completed in 1511. After being a Spanish colony since the 15th century, Cuba became an American protectorate during the Spanish–American War of 1898 and then acquired nominal independence as a de facto protectorate of the United States in 1902. It was in 1492 that Christopher Columbus reached an island that had previously been settled by three separate indigenous peoples: the Taínos, Ciboneys, and Guanajatabeyes. Columbus and his crew were the first Europeans to set foot on Cuba. Scholars currently estimate that there were between 50,000 and 300,000 indigenous people living on the island at the time. Between 1791 and 1809, French settlers, many of them refugees from the revolution in Saint-Domingue (Haiti), arrived in Cuba. On December 10, 1898, the Treaty of Paris, which brought the Spanish–American War to a close, was signed. Spain relinquished all claims to Cuba, ceded Guam and Puerto Rico to the United States, and handed sovereignty over the Philippines to the United States in exchange for $20 million. Thousands of United States soldiers fought in Cuba during that war. Although the Spanish–American War lasted just a few months, it came to an end when Spain signed a peace deal with the United States, granting the United States control of Cuba, Puerto Rico, the Philippines, and Guam. Cuba, however, was not made a U.S. colony but rather a nominally independent country. Cuba’s official name is the Republic of Cuba. Hernán Cortés led a fresh expedition to Mexico, which landed on the shores of present-day Veracruz on April 22, 1519, marking the beginning of more than 300 years of Spanish dominance over that territory. According to conventional definitions, the “Spanish conquest of Mexico” refers to the invasion of the central area of Mesoamerica, which was home to the Aztec Empire. From 1571 to 1898, the Philippines was under Spanish rule.
This overview is designed to give a better understanding of how plastics are created, the different types of plastic, and their numerous properties and applications. A plastic is a type of synthetic or man-made polymer, similar in many ways to the natural resins found in trees and other plants. Webster’s Dictionary defines polymers as: any of various complex organic compounds made by polymerization, capable of being molded, extruded, cast into various shapes and films, or drawn into filaments and then used as textile fibers.

A Little History
The history of manufactured plastics goes back more than 100 years; however, compared with other materials, plastics are relatively modern. Their usage over the past century has enabled society to make huge technological advances. Although plastics are regarded as a modern invention, there have always been “natural polymers” such as amber, tortoise shells, and animal horns. These materials behaved very much like today’s manufactured plastics and were often used in ways similar to how manufactured plastics are now applied. For example, before the sixteenth century, animal horns, which become transparent and pale yellow when heated, were sometimes used to replace glass.

Alexander Parkes unveiled the first man-made plastic at the 1862 Great International Exhibition in London. This product, which was dubbed Parkesine and is now called celluloid, was an organic material derived from cellulose that, once heated, could be molded and retained its shape when cooled. Parkes claimed that this new material could do anything that rubber was capable of, yet at a lower price. He had discovered a material that could be transparent as well as carved into thousands of different shapes.

In 1907, the chemist Leo Hendrik Baekeland, while striving to produce a synthetic varnish, discovered the formula for a new synthetic polymer derived from coal tar. He subsequently named the new substance “Bakelite.” Bakelite, once formed, could not be melted again. Because of its properties as an electrical insulator, Bakelite was used in the production of high-tech objects including cameras and telephones. It was also used in the production of ashtrays and as a substitute for jade, marble, and amber. By 1909, Baekeland had coined “plastics” as the term to describe this completely new class of materials.

The first patent for polyvinyl chloride (PVC), a substance now used widely in vinyl siding and water pipes, was registered in 1914. Cellophane was also discovered during this period. Plastics did not really take off until after the First World War, with the use of petroleum, a substance easier than coal to process into raw materials. Plastics served as substitutes for wood, glass, and metal during the hardships of World Wars I and II. After the Second World War, newer plastics such as polyurethane, polyester, silicones, polypropylene, and polycarbonate joined polymethyl methacrylate, polystyrene, and PVC in widespread applications. Many more would follow, and by the 1960s plastics were within everyone’s reach due to their low cost. Plastics had thus come to be considered “common” - a symbol of the consumer society. Since the 1970s, we have witnessed the arrival of “high-tech” plastics used in demanding fields such as health and technology. New types of plastics with new or improved performance characteristics continue to be developed.
From daily tasks to the most unusual needs, plastics have increasingly provided the performance characteristics that fulfill consumer needs at all levels. Plastics are used in such a wide range of applications because they are uniquely capable of offering many different properties that provide consumer benefits unsurpassed by other materials. They are also unique in that their properties can be customized for each individual end-use application.

Oil and natural gas are the major raw materials used to manufacture plastics. The plastics production process often begins by treating components of crude oil or natural gas in a “cracking process.” This process converts these components into hydrocarbon monomers such as ethylene and propylene. Further processing yields a wider range of monomers, including styrene, vinyl chloride, ethylene glycol, terephthalic acid, and many more. These monomers are then chemically bonded into chains called polymers. The different combinations of monomers yield plastics with a wide range of properties and characteristics.

Plastics
Many common plastics are made from hydrocarbon monomers. These plastics are made by linking many monomers together into long chains to form a polymer backbone. Polyethylene, polypropylene, and polystyrene are the most common examples of these. Below is a diagram of polyethylene, the simplest plastic structure. Though the basic makeup of many plastics is carbon and hydrogen, other elements can also be involved. Oxygen, chlorine, fluorine, and nitrogen are also found in the molecular makeup of many plastics. Polyvinyl chloride (PVC) contains chlorine. Nylon contains nitrogen. Teflon contains fluorine. Polyester and polycarbonates contain oxygen.

Characteristics of Plastics
Plastics are divided into two distinct groups: thermoplastics and thermosets. The majority of plastics are thermoplastic, meaning that once the plastic is formed it can be heated and reformed repeatedly. Celluloid is a thermoplastic. This property allows for easy processing and facilitates recycling. The other group, the thermosets, cannot be remelted. Once these plastics are formed, reheating will cause the material to decompose rather than melt. Bakelite, a phenol-formaldehyde polymer, is a thermoset.

Each plastic has very distinct characteristics, but most plastics have the following general attributes. Plastics can be very resistant to chemicals. Consider all the cleaning fluids in your home that are packaged in plastic. The warning labels describing what happens when the chemical comes into contact with skin or eyes or is ingested emphasize the chemical resistance of these materials. While solvents easily dissolve some plastics, other plastics provide safe, non-breakable packages for aggressive solvents.

Plastics can be both thermal and electrical insulators. A walk through your house will reinforce this idea. Consider all the electrical appliances, cords, outlets, and wiring that are made of or covered with plastics. Thermal resistance is evident in the kitchen with plastic pot and pan handles, coffee pot handles, the foam core of refrigerators and freezers, insulated cups, coolers, and microwave cookware. The thermal underwear that many skiers wear is made of polypropylene, and the fiberfill in many winter jackets is acrylic or polyester. Generally, plastics are very light in weight with varying degrees of strength.
Consider the range of applications, from toys to the frame structure of space stations, or from delicate nylon fiber in pantyhose to Kevlar®, which is used in bulletproof vests. Some polymers float in water and some sink. But, compared with the density of stone, concrete, steel, copper, or aluminum, all plastics are lightweight materials.

Plastics can be processed in different ways to produce thin fibers or very intricate parts. Plastics can be molded into bottles or parts of cars, such as dashboards and fenders. Some plastics, including polyethylene, polystyrene (Styrofoam™), and polyurethane, can be foamed. Plastics can be molded into drums or blended with solvents to become adhesives or paints. Elastomers and some plastics stretch and are very flexible.

Polymers are materials with a seemingly limitless range of characteristics and colors. Polymers have many inherent properties that can be further enhanced by a wide range of additives to broaden their uses and applications. Polymers can be made to mimic cotton, silk, and wool fibers; porcelain and marble; and aluminum and zinc. Polymers can also make possible products that do not readily come from the natural world, such as clear sheets, foamed insulation board, and flexible films. Plastics can be molded or formed to produce many kinds of products with applications in many major markets.

Polymers are usually manufactured from petroleum, but not always. Many polymers are made from repeat units derived from natural gas, coal, or crude oil. But building-block repeat units can sometimes be created from renewable materials, such as polylactic acid from corn or cellulosics from cotton linters. Some plastics have always been made from renewable materials, such as the cellulose acetate used for screwdriver handles and gift ribbon. When the building blocks can be made more economically from renewable materials than from fossil fuels, either old plastics find new raw materials or new plastics are introduced.

Many plastics are blended with additives as they are processed into finished products. The additives are incorporated into plastics to alter and improve their basic mechanical, physical, or chemical properties. Additives are used to protect plastics from the degrading effects of light, heat, or bacteria; to change plastic properties such as melt flow; to provide color; to provide a foamed structure; to provide flame retardancy; and to provide special characteristics such as improved surface appearance or reduced tack/friction.

Plasticizers are materials added to certain plastics to increase flexibility and workability. Plasticizers are found in many plastic film wraps and in flexible plastic tubing, both of which are commonly used in food packaging or processing. All plastics used in food contact, including their additives and plasticizers, are regulated by the U.S. Food and Drug Administration (FDA) to ensure that these materials are safe.

Processing Methods
There are several different processing methods used to make plastic products. Below are the four main methods by which plastics are processed to form the products that consumers use, such as plastic film, bottles, bags, and other containers.

Extrusion - Plastic pellets or granules are first loaded into a hopper, then fed into an extruder, a long heated chamber through which they are moved by the action of a continuously revolving screw.
The plastic is melted by a combination of heat from the mechanical work done and heat from the hot sidewall metal. At the end of the extruder, the molten plastic is forced out through a small opening, or die, to shape the finished product. As the plastic product extrudes from the die, it is cooled by air or water. Plastic films and bags are produced by extrusion processing.

Injection molding - In injection molding, plastic pellets or granules are fed from a hopper into a heating chamber. An extrusion screw pushes the plastic through the heating chamber, where the material is softened into a fluid state. Again, mechanical work and hot sidewalls melt the plastic. At the end of this chamber, the resin is forced at high pressure into a cooled, closed mold. Once the plastic cools to a solid state, the mold opens and the finished part is ejected. This process is used to make products such as butter tubs, yogurt containers, closures, and fittings.

Blow molding - Blow molding is a process used in conjunction with extrusion or injection molding. In one form, extrusion blow molding, the die forms a continuous semi-molten tube of thermoplastic material. A chilled mold is clamped around the tube, and compressed air is then blown into the tube to conform it to the interior of the mold and to solidify the stretched tube. Overall, the goal is to produce a uniform melt, form it into a tube with the desired cross section, and blow it into the exact shape of the product. This process is used to manufacture hollow plastic products, and its principal advantage is its ability to produce hollow shapes without having to join two or more separately injection-molded parts. This method is used to make items such as commercial drums and milk bottles. Another blow molding technique is to injection mold an intermediate shape called a preform, then heat the preform and blow the heat-softened plastic into its final shape in a chilled mold. This is the process used to make carbonated soft drink bottles.

Rotational molding - Rotational molding consists of a closed mold mounted on a machine capable of rotating on two axes simultaneously. Plastic granules are placed in the mold, which is then heated in an oven to melt the plastic. Rotation around both axes distributes the molten plastic into a uniform coating on the inside of the mold until the part is set by cooling. This process is used to make hollow products, for example large toys or kayaks.

Durables vs. Non-Durables
All types of plastic products are classified within the plastics industry as either durable or non-durable plastic goods. These classifications refer to a product's expected life. Products with a useful life of three years or more are referred to as durables. They include appliances, furniture, consumer electronics, automobiles, and building and construction materials. Products with a useful life of less than three years are generally referred to as non-durables. Common applications include packaging, trash bags, cups, eating utensils, sporting and recreational equipment, toys, medical devices, and disposable diapers.

Polyethylene Terephthalate (PET or PETE) is clear, tough, and has good gas and moisture barrier properties, making it suitable for carbonated beverage applications and other food containers. Its high use temperature allows it to be used in applications such as heatable pre-prepared food trays.
Its heat resistance and microwave transparency make it an excellent heatable film. It also finds applications in such diverse end uses as fibers for clothing and carpets, bottles, food containers, strapping, and engineering plastics for precision-molded parts.

High Density Polyethylene (HDPE) is used for many packaging applications because it provides excellent moisture barrier properties and chemical resistance. However, HDPE, like all types of polyethylene, is limited to food packaging applications that do not require an oxygen or CO2 barrier. In film form, HDPE is used in snack food packages and cereal box liners; in blow-molded bottle form, for milk and non-carbonated beverage bottles; and in injection-molded tub form, for packaging margarine, whipped toppings, and deli foods. Because HDPE has good chemical resistance, it is used for packaging many household and industrial chemicals, including detergents, bleach, and acids. General uses of HDPE include injection-molded beverage cases, bread trays, films for grocery sacks, and bottles for beverages and household chemicals.

Polyvinyl Chloride (PVC) has excellent transparency, chemical resistance, long-term stability, good weatherability, and stable electrical properties. Vinyl products can be broadly divided into rigid and flexible materials. Rigid applications are concentrated in construction markets, including pipe and fittings, siding, rigid flooring, and windows. PVC's success in pipe and fittings can be attributed to its resistance to most chemicals, imperviousness to attack by bacteria or micro-organisms, corrosion resistance, and strength. Flexible vinyl is used in wire and cable sheathing, insulation, film and sheet, flexible floor coverings, synthetic leather products, coatings, blood bags, and medical tubing.

Low Density Polyethylene (LDPE) is used predominantly in film applications due to its toughness, flexibility, and transparency. LDPE has a low melting point, making it popular for use in applications where heat sealing is necessary. Typically, LDPE is used to manufacture flexible films such as those used for dry-cleaned garment bags and produce bags. LDPE is also used to manufacture some flexible lids and bottles, and it is widely used in wire and cable applications for its stable electrical properties and processing characteristics.

Polypropylene (PP) has excellent chemical resistance and is commonly used in packaging. It has a high melting point, making it ideal for hot-fill liquids. Polypropylene is found in everything from flexible and rigid packaging to fibers for fabrics and carpets and large molded parts for automotive and consumer products. Like other plastics, polypropylene has excellent resistance to water and to salt and acid solutions that are destructive to metals. Typical applications include ketchup bottles, yogurt containers, medicine bottles, pancake syrup bottles, and automobile battery casings.

Polystyrene (PS) is a versatile plastic that can be rigid or foamed. General purpose polystyrene is clear, hard, and brittle. Its clarity allows it to be used when transparency is important, as in medical and food packaging, in laboratory ware, and in certain electronic uses. Expandable Polystyrene (EPS) is commonly extruded into sheet for thermoforming into trays for meats, fish, and cheeses, and into containers such as egg crates.
EPS is also directly formed into cups and tubs for dry foods such as dehydrated soups. Both foamed sheet and molded tubs are used extensively in take-out restaurants for their light weight, stiffness, and excellent thermal insulation.

Whether you are aware of it or not, plastics play an important part in your daily life. Plastics' versatility allows them to be used in everything from car parts to doll parts, from soft drink bottles to the refrigerators they are stored in. From the car you drive to work in to the television you watch at home, plastics help make your life easier and better. So how is it that plastics have become so widely used? How did plastics become the material of choice for so many varied applications?

The simple answer is that plastics can provide the things consumers want and need at economical costs. Plastics have the unique capability to be manufactured to meet very specific functional needs. So maybe there's another question that's relevant: What do I want? No matter how you answer that question, plastics can probably satisfy your needs. If a product is made of plastic, there's a good reason. And chances are the reason has everything to do with helping you, the consumer, get what you want: health, safety, performance, and value. Plastics deliver all of them.

Just think about the changes we've seen in the supermarket in recent years: plastic wrap helps keep meat fresh while protecting it from the poking and prodding fingers of your fellow shoppers; plastic bottles mean you can easily lift an economy-size bottle of juice, and, should you accidentally drop that bottle, it is shatter-resistant. In each case, plastics help make your life easier, healthier, and safer.

Plastics also help you get maximum value from some of the big-ticket items you buy. Plastics help make portable phones and computers that really are portable. They help major appliances, like refrigerators or dishwashers, resist corrosion, last longer, and operate more efficiently. Plastic car fenders and body panels resist dings, so you can cruise the grocery store parking lot with confidence.

Modern packaging, including heat-sealed plastic pouches and wraps, helps keep food fresh and free from contamination. This means the resources that went into producing that food aren't wasted. It's the same thing after you get the food home: plastic wraps and resealable containers keep your leftovers protected - much to the chagrin of kids everywhere. In fact, packaging experts have estimated that each pound of plastic packaging can reduce food waste by approximately 1.7 pounds.

Plastics can also help you bring home more product with less packaging. For example, just 2 pounds of plastic can deliver 1,300 ounces - roughly 10 gallons - of a beverage such as juice, soda, or water. You'd need 3 pounds of aluminum to bring home the same amount of product, 8 pounds of steel, or over 40 pounds of glass. Not only do plastic bags require less total energy to produce than paper bags, they conserve fuel in shipping: it takes seven trucks to carry the same number of paper bags as fits in one truckload of plastic bags. Plastics make packaging more efficient, which ultimately conserves resources.

Lightweighting
Plastics engineers are always striving to do even more with less material.
Since 1977, the 2-liter plastic soft drink bottle has gone from weighing 68 grams to just 47 grams today, representing a 31 percent reduction per bottle. That saved more than 180 million pounds of packaging in 2006 for 2-liter soft drink bottles alone. The 1-gallon plastic milk jug has undergone a similar reduction, weighing 30 percent less than it did 20 years ago.

Doing more with less helps conserve resources in another way: it helps save energy. In fact, plastics can play a significant role in energy conservation. Just consider the decision you're asked to make at the grocery store checkout: "Paper or plastic?" Plastic bag manufacture generates less greenhouse gas and uses less fresh water than paper bag manufacture does.

Plastics also help to conserve energy in the home. Vinyl siding and windows help cut energy consumption and lower heating and cooling bills. Furthermore, the U.S. Department of Energy estimates that the use of plastic foam insulation in homes and buildings each year could save over 60 million barrels of oil compared with other kinds of insulation. The same principles apply to appliances such as refrigerators and air conditioners. Plastic parts and insulation have helped improve their energy efficiency by 30 to 50 percent since the early 1970s. Again, this energy savings helps reduce your heating and cooling bills. And appliances run more quietly than earlier designs that used other materials.

Recycling of post-consumer plastics packaging began in the early 1980s as a result of state-level bottle deposit programs, which produced a steady supply of returned PETE bottles. With the addition of HDPE milk jug recycling in the late 1980s, plastics recycling has grown steadily, though slowly relative to competing packaging materials. Roughly 60 percent of the U.S. population - about 148 million people - have access to a plastics recycling program. The two common forms of collection are curbside collection, where consumers place designated plastics in a special bin to be picked up by a public or private hauling company (approximately 8,550 communities participate in curbside recycling), and drop-off centers, where consumers take their recyclables to a centrally located facility (about 12,000 of them). Most curbside programs collect more than one type of plastic resin, usually both PETE and HDPE. Once collected, the plastics are shipped to a material recovery facility (MRF) or handler for sorting into single-resin streams to increase product value. The sorted plastics are then baled to reduce shipping costs to reclaimers.

Reclamation is the next step, in which the plastics are chopped into flakes, washed to remove contaminants, and sold to end users to manufacture new products such as bottles, containers, clothing, and carpet. The number of companies handling and reclaiming post-consumer plastics today is over five times greater than in 1986, growing from 310 companies to 1,677 in 1999. The number of end uses for recycled plastics is growing. The federal and state governments, along with many major corporations, now support market growth through purchasing preference policies.

In the early 1990s, concern over the perceived reduction in landfill capacity spurred efforts by legislators to mandate the use of recycled materials.
Mandates, as a means of expanding markets, can be troubling. Mandates may fail to take health, safety, and performance attributes into account. Mandates distort economic decisions and can lead to suboptimal financial results. Moreover, they fail to acknowledge the life-cycle benefits of alternatives to the environment, such as the efficient use of energy and natural resources.

Pyrolysis involves heating plastics in the absence or near absence of oxygen to break down the long polymer chains into small molecules. Under mild conditions, polyolefins can yield a petroleum-like oil. Special conditions can yield monomers such as ethylene and propylene. Some gasification processes yield syngas (mixtures of hydrogen and carbon monoxide, known as synthesis gas, or syngas). Unlike pyrolysis, combustion is an oxidative process that generates heat, carbon dioxide, and water. Chemical recycling is a special case in which condensation polymers such as PET or nylon are chemically reacted back into their starting materials.

Source Reduction
Source reduction is gaining more attention as an important resource conservation and solid waste management option. Source reduction, also known as "waste prevention," is defined as "activities to reduce the amount of material in products and packaging before that material enters the municipal solid waste management system."
Some words have special meaning in a C program. The most frequently used special words are explained below.
Every command in a C program is a statement. For example, printf("hello world"); is a statement. A C program is a set of statements, and every statement ends with a semicolon. An expression is a mathematical statement. For example, a+b; is a statement, but it is a mathematical one, so it is better to call it an expression. Every expression is a statement, but the reverse is not true.
A comment is information about a statement. The compiler ignores a comment and executes the rest of the code. A single-line comment starts with //. For example, if you want to add "this line will print 'hello world'" as a comment on the printf("Hello world"); statement, you would write:
printf("Hello world"); //this line will print 'hello world'
There is another style of comment. In this style, a comment starts with /* and ends with */. What is the difference between the two styles? Answer: only one line can be made a comment with //, while all the lines between /* and */ become a comment.
Suppose you write a C program containing 100 lines. When someone reads your code, they may be confused by some lines. You can add comments to your code so that other programmers can understand it easily.
A keyword is a reserved word that has special meaning to the compiler. For example, integer means a whole number, but the compiler recognizes a whole number by the int keyword. You can write water as WATER or Water and so on, but if you are asked to write the molecular formula of water you have to write H2O; h2O or other forms will not represent water. In the same manner, you have to write the keywords of C exactly. A partial list of keywords:
|int||integer or whole number|
There are many keywords; we mention only a few here to avoid complexity, and we will discuss them all step by step. Don't worry, you don't have to memorize them all. Keywords are English-like words, and you will become familiar with them naturally while coding.
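Here is a short, complete program, a minimal sketch showing both comment styles side by side:

#include <stdio.h>

int main(void)
{
    // single-line style: only this one line is a comment
    printf("Hello world"); //this line will print 'hello world'

    /* multi-line style: everything between the opening and
       closing markers is a comment, however many lines it spans */
    printf("Hello again");
    return 0;
}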
You can check this by analyzing the ionization energy of each element. Each element wants to become stable by either losing or gaining electrons so that its outermost shell follows the octet rule. For example, sulfur has a relatively high ionization energy and has 6 valence electrons. That means sulfur is most likely to form a -2 ion, since it only needs two more electrons to become stable, rather than forming a +6 ion by losing 6 electrons. The way I figure it out is to first look at the position of the element. If it is in Groups 1-3 it will lose electrons, so it will have a positive charge. If it is in Groups 15-17 it will gain electrons and have a negative charge. The number of electrons it loses or gains gives the size of the charge.
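The same rule of thumb can be written down mechanically. Here is a minimal C sketch, assuming the older main-group numbering used in the post (Groups 1-3 lose that many electrons; Groups 15-17 gain enough to fill the octet); transition metals and the many real exceptions are ignored:

#include <stdio.h>

/* Predict the typical ion charge from the main group number.
   Groups 1-3 lose their valence electrons (+1, +2, +3); Groups
   15-17 gain electrons to complete an octet (-3, -2, -1). */
int typical_ion_charge(int group)
{
    if (group >= 1 && group <= 3)
        return group;       /* e.g. sodium, Group 1 -> +1 */
    if (group >= 15 && group <= 17)
        return group - 18;  /* e.g. sulfur, Group 16 -> -2 */
    return 0;               /* no simple prediction */
}

int main(void)
{
    printf("Group 1:  %+d\n", typical_ion_charge(1));  /* +1 */
    printf("Group 16: %+d\n", typical_ion_charge(16)); /* -2 */
    printf("Group 17: %+d\n", typical_ion_charge(17)); /* -1 */
    return 0;
}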
The Box Model
An HTML element can take up more space on a page than its content itself. An element is surrounded by three other parts:
- The padding around the element
- The border around the padding
- The margin outside the border, which separates it from other elements.
For example, here's a paragraph with a padding, border and margin:
The quick brown fox packed my box with five dozen liquor jugs and then jumped over the lazy dogs.
Here's each part marked: The element itself is in the middle, surrounded by padding in the same color, then a border, and finally a (white) margin on the outside.
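As a rough illustration of the arithmetic this implies, here is a minimal C sketch that adds up the horizontal space an element occupies under the box model; all the pixel values are hypothetical:

#include <stdio.h>

/* Total horizontal space = content width plus padding, border and
   margin counted on both sides. Values below are hypothetical. */
int main(void)
{
    int content = 300; /* width of the element's content, in px */
    int padding = 20;  /* padding on each side */
    int border  = 2;   /* border on each side */
    int margin  = 10;  /* margin on each side */

    int total = content + 2 * (padding + border + margin);
    printf("Space taken on the page: %d px wide\n", total); /* 364 px */
    return 0;
}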
The distance migrated varies even within species; as I have said, some birds are partially migratory. In these cases, the young of migrants tend to be migrants also (so if something happens to the nonmigrants, the population can be replaced). Birds probably migrate just far enough to assure their survival; shorter or longer distances must be disadvantageous. Likewise, migrating at all must be a tradeoff against the dangers of trying to survive the winter. The dangers of migration include getting lost, storms, and energy loss (exhaustion). Blackpolls migrate from Canada to Ecuador. They gain half their body weight in fat. In partially migratory species, it's the females and young that migrate: dominant, territory-holding males tend to stay put--the cost of migration must be less for the migrants than the cost of sticking it out. Eruptive species: food supply varies markedly from year to year. What happens with many winter finches is that they feed on pine seeds. The trees go through roughly a four-year cycle in seed production. When you have lots of seeds, you have lots of finches. The next year, with few seeds, you have a lot of hungry finches pouring into the United States from Canada. Examples of winter finches include Red Crossbills and Purple Finches. Migrants usually build up fat reserves--young Manx Shearwaters leave British nesting grounds 50% heavier than normal adult weight and go to Brazilian and Argentine waters. One banded bird made it in 14 days (570 km/day). Sanderlings build up enough fat reserves for a 2,000 km trip! You might try Googling bird migration and see what you come up with.
This is a text transcript for our video on conducting school water audits. Narrator: Water is our most precious resource. Without it, no person, animal or plant can survive. This video’s all about understanding how your school uses water, and how it could perhaps use less. We’ll show you how to measure water use, detect leaks, find faulty fixtures and write a report. The School Water Audit Guide has more information – and a series of activities in blue for you to complete. So, let’s get started by seeing how much water your school uses. [Students walking to front of school, one on a skateboard] Your school’s water supply is fitted with a meter which accurately measures every litre of water used. You’ll find yours somewhere on the boundary near the street. [Vision of front of water meter showing numbers in red and black] Read the numbers from left to right. The black numbers show kilolitres - that’s a thousand litres - and the red numbers show litres. [Students sitting next to water meter by side of road, filling out worksheet on clipboard] Starting on a Monday morning, take meter readings for twelve days. Record them on the Daily Water Meter Reading Worksheet. This will show you if there are any leaks in the school. [Principal standing behind shoulders of small group of students turning over page of water bills and asking them questions] Next, take a look at your school's water bill. Bills are sent every two months and tell you how much water the school used - and what it cost. [Close up of student’s hand with pencil filling out worksheet] Use the water bills to calculate how much water your school uses in a year. If your school reduced water use by 10%, how much money would it save? [Flying shot overhead of school grounds showing oval and buildings] Your school principal is the person who’s responsible for making sure everything runs efficiently – so why not interview them about how water is managed in the school. [Various shots of students using water] Student: How do students learn about water conservation at school? Principal: I think the only way we know about our water use at this school is from the accounts that the registrar has to pay, and we don’t even talk about that to the students – maybe we should? We don’t even read the water meter – maybe we should? Student: Are the water fixtures in the school water efficient? Principal: Hmmm, that’s a hard one, because we are an old school and we have some new sections and we have some old sections. In any of the newer sections of the school we make sure that we replace any taps with water efficient taps. We make sure that our water fountains have got spring loaded return or they’re push button so that the taps aren’t going to be running longer than necessary. In our toilets we still have some old toilets that are single flush, but as those toilets break I make sure we replace them with dual flush systems. Narrator: Now we've worked out how much water your school uses, let's inspect all the places where water is used – both inside and outside. Divide your group into small teams and make sure each team has a water audit kit. Here’s what you’ll need.
- Map of school
- Food dye
- Measuring jug
- Rubber gloves
- Set of worksheets
[Student filling out worksheet] Narrator: Each team uses the School Water Audit Worksheet to record details of all the fixtures in areas where water is used. Use a new worksheet for each new area. [Students walking up stairs towards building] Let’s get started.
[Shot from inside toilet with water leaking] A leaking toilet can waste up to 73,000 litres of water a year – so it’s important to check every toilet in the school. To see how much water a toilet uses, remove the lid, turn off the water supply valve and mark the water level. Now flush the toilet – using a full flush – and refill the cistern to the line with a measuring jug, making a note of the volume of water needed. Student: Three litres [Three female students in toilet cubicle with gloves on and clipboard, recording volumes] Now repeat the measurement for a half flush. [Close up of student hand with glove on pushing half flush button] When you’ve finished, reopen the supply valve and put the lid back on. [Close up of toilet cistern with lid off and bottle of blue dye being poured in. Then closer shot of dye hitting water] If you think the toilet might be leaking, you can check by using a coloured dye – like this. [Shot of toilet bowl with water leaking down the back into the bowl; the bowl water is a little blue] Put the dye into the cistern and leave for ten minutes – but don’t flush! If after ten minutes or more you find the water in the bowl is the same colour as the dye, then your toilet is leaking. [Tap dripping slowly] Did you know one dripping tap can waste up to 10,000 litres of water a year? So your next job is to check every tap in the school. [Three male students walk into washroom with jug, iPad and clipboard. Students put the empty jug into the basin under the tap] To measure a tap’s water flow, turn it on full. [Close up of student using iPad stopwatch] Student: 3, 2, 1, go Narrator: and put the measuring jug under the tap for ten seconds. Narrator: Turn off the tap and measure the water you’ve collected. [Student holding up jug and reading the volume] Student: 1.75 litres [Close up of student recording number on worksheet] Narrator: Then multiply this amount by 6 to give you a litre-per-minute flow rate. Record this on your School Water Audit Worksheet – along with the type of tap and whether it leaks. Then fill out the Leaking Taps Investigations Worksheet to work out how much water is wasted if taps are left to drip. [Students drinking from different types of drinking fountains] Now let’s check the drinking fountains. Measure their flow rate, look for leaks, note the type of taps and record all these details. Watch to see how many times – and for how long – students use the drinking fountains during the day, and calculate how much water they use. [Close up of a drinking fountain with plastic bottle being filled] By the way, filling water bottles is a really good way to save water at your fountains. [Close up of students’ hands turning on taps and washing hands at different taps] Hand basins and sinks are used for washing hands and cleaning. But how much water are they using? Survey your school to find out. Also record their flow rates, and make sure you check for those leaking taps along the way. [Close up of shower head with shower running] Showers can use a lot of water – but exactly how much? [Three male students enter shower cubicle with jug and clipboard, and put the jug under the shower head and turn on the tap] To measure a shower’s water flow, turn the tap on full and put your measuring jug under the shower-head for ten seconds. [Close up of jug filling up, then mobile phone with stopwatch, then student hand turning off shower tap] Turn off the taps and measure the water you’ve collected.
[Close up of student reading side of measuring jug] Student: 1 litre [Vision of student recording onto worksheet on clipboard] Narrator: Multiply this amount by 6 to give you a litre-per-minute flow rate. [Close up of student hand recording data onto worksheet] While you’re there, make a note of the tap types and see if there are any leaks. [Close up of showerhead with water running] Installing a water-efficient showerhead in your showers can halve water use as well as cutting heating costs. [Flying shot over school showing school grounds] Now it’s time to take a look outside – at the grounds and gardens. [Various shots of different types of garden sprinklers on, student watering small garden with watering can] Irrigating ovals, lawns and gardens is where most of your school’s water is used - so this is a very important part of your audit. [Shot of gardener digging mulch from heap and putting it in wheelbarrow, walking barrow through school grounds past a close up of a tap dripping] Your best source of information is the ground staff, so have a chat with the gardener about how they manage water use in your school. [Close up of gardener sitting on chair in garden shed] Gardener: Mulch is used on all of the garden beds. We acquire our mulch basically through contacting local contractors, who chop down trees in the local area and they drop off mulch at their own convenience. There are some waterwise plants in the school gardens, the newer plantings where we’ve been able to have a choice, native plants that are good for this area. They are good for the birds, they are good for the little possums that are around here. Narrator: Discuss the grassed and garden areas. Gardener: The soil is improved by using wetting agent, and of course slow release fertilizer. Narrator: Find out when and how they are watered. Gardener: We use both scheme and bore water at this school. It’s mainly the bore water, especially for the larger areas like the oval and the playgrounds. We use scheme on the smaller sections of the gardens, especially with the drip systems. [Ovals with sprinklers spraying water then close up of gardener holding up different types of sprinklers] Narrator: and what sprinkler types are used. Gardener: On our school oval we use three gear-driven sprinklers, of course, to cover a large area. In the smaller garden beds we use spray head sprinklers. These are totally adjustable for angle and the amount of water that has to be used. [Close up of worksheet being filled out] Narrator: Then use this information to fill out the Irrigation In The School Grounds Worksheet. [Fly over shot of oval to top of Olympic sized pool then shot of pool from the ground] Narrator: If your school has a swimming pool, calculate how much water it contains. [Underwater shot of pool] Narrator: An uncovered pool could lose its entire volume of water in one year through evaporation. [Shot of pool cover rolled up at end of pool] Narrator: Research the cost of a swimming pool cover, and how long it would take to recover the cost through savings on the water bill. [Classroom shot of students on computers with worksheets in front of them] Narrator: By now, you’ll have a lot of facts and figures about your school’s water use – and the next step is to summarise your findings. [Close up of computer screen showing students working on their reports] Narrator: Using the Taking Action Worksheet, collate the results from each team to create a list of leaks, broken or inefficient fixtures and wasteful practices for the whole school.
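Before collating everything, it may help to see the flow-rate arithmetic from the audit in one place. Here is a minimal C sketch using the two volumes measured in the video (1.75 litres from the tap and 1 litre from the shower, each collected over ten seconds):

#include <stdio.h>

/* Litres collected in ten seconds, multiplied by 6, gives litres
   per minute, as described in the audit. */
int main(void)
{
    double tap_10s    = 1.75; /* litres collected from the tap */
    double shower_10s = 1.0;  /* litres collected from the shower */

    printf("Tap flow:    %.1f L/min\n", tap_10s * 6.0);    /* 10.5 */
    printf("Shower flow: %.1f L/min\n", shower_10s * 6.0); /*  6.0 */
    return 0;
}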
[Close up of students discussing report and writing results, then shots of different parts of the school using water] Narrator: Now write a report on how much water the school uses, where it’s wasting water – and how much money this is costing the school. [Shots of students discussing reports and more shots of computer screens] Suggest ways to reduce water use – and set short and long-term targets. [Students at front of classroom presenting report with presentation on large screen behind them] Narrator: Finally, present your report to the school’s P&C committee or the principal. Explain in your presentation why it’s important to conserve water – and remind them how much money the school could save if they took some simple steps. [Split screen with 9 small screens showing all the areas of water use in a school] Narrator: Now you’re a lot wiser about water, don’t forget to spread the water-wise message. You can order posters and stickers to put up around your school to act as a reminder. And keep the audit going throughout the year by continuing to read the water meter – to keep track of how much water’s being used – and saved. [Screen showing title – school water audit and Water Corporation website address] Good luck running your own water audit.
The leaders of the 18th century separatist movement from England were not motivated by a genuine desire for freedom and equality
If the so-called American Revolution of 1776 was truly committed to breaking with monarchical and autocratic rule from the United Kingdom, then why did slavery grow at a rapid rate after the former 13 colonies in North America achieved independence? This is an important political question, since even in the 21st century there are repeated references by elected officials in both houses of Congress, the White House and the Supreme Court to the “Founding Fathers” and “Framers of the Constitution.” What is not mentioned by these career politicians and lifetime jurists is that many of the authors of the U.S. Constitution were large-scale slave owners themselves. These wealthy landowners and slave masters saw no reason to liberate the more than 700,000 Africans living in the former colonies by the end of the 1780s. Slavery was quite profitable, and with the invention of the cotton gin in 1793, the expansion of involuntary servitude across the South and further west empowered the planters to the point where, as a result of the Constitutional Convention of 1787, they were able to dominate the House of Representatives through a provision declaring that enslaved Africans could be counted as three-fifths of a person…
African woman tortured by John Kimber, who was acquitted of murder by British courts in 1792
Life Cycle of Atlantic Salmon
Irish salmon are Atlantic salmon and spend their juvenile phase in rivers before migrating to sea to grow and mature. To complete their life cycle they must return to their river of origin to spawn. Salmon that adopt this life cycle are called anadromous. The salmon starts life as a small pea-sized egg hidden away under loose gravel in cool clean rivers entering the North Atlantic Ocean. Against the odds, the parents of this little egg have succeeded in returning to freshwater to spawn, completing their life cycle before giving rise to a new generation. To do this, both male and female adults cease to feed on entering freshwater in response to gonadal development, directing all their energy instead to reproduction. The migration of adults in winter to suitable habitat can commence up to a year before spawning takes place. Spawning typically occurs in headwaters, though it may happen anywhere in a river if a suitable substrate of well-oxygenated loose gravel is available. At spawning time (November to January), the female digs a depression in the gravel with her tail to deposit her eggs. One or more males discharge milt over the falling eggs to fertilize them. Quickly the female covers the eggs with several centimetres of gravel, which forms a nest or "redd" on the river bed. Buried deep inside the gravel, the ova are safe from the impact of debris carried along in heavy floods and from attack by predators such as eels (Anguilla anguilla), trout (Salmo trutta) or cormorants (Phalacrocorax carbo). The rate of egg or "ova" development depends on water temperature. Eyes become visible inside the pea-sized orange ova, and increasing movement can be detected as the yolk sac containing food is consumed. The number of ova deposited in the redd is determined by the size of the female, with larger females over 10kg depositing 15,000 each. This high fecundity (ova per female) is critical, as survival in the wild is extremely low, especially in freshwater. For example, in the Burrishoole River on the west coast of Ireland, survival rates for juveniles from 1970 to 2015 were as low as 0.3% in 2001, rising to a high of just 1.3% in 2007. The just-hatched fish are called "alevins" and still have the yolk sac attached to their bodies in spring. When their yolk sac is absorbed, the alevins become increasingly active and begin their journey up through the gravel of the riverbed. When strong enough, the small fish must rise to the surface of the water and gulp air. By doing this they fill their swim bladder to gain neutral buoyancy, making it easier to swim and hold their position in fast-flowing streams. This critical period is therefore referred to as "swim-up" and exposes the young to dangerous predators for the first time. Once they begin to swim freely they are called fry. The fry have eight fins, which they use to maintain their position in fast-flowing streams and manoeuvre about in the water during the summer months. Fry feed on microscopic invertebrates, and their abundance is regulated by temperature, predation, pollution and competition for food with other fry and other species of fish. The presence of salmon in a river is synonymous with a healthy aquatic environment, and as they are extremely sensitive to changes in water quality, habitat and climate, salmon are a good indicator of freshwater and marine ecosystem status. Over the autumn the fry develop into parr, with vertical stripes and spots for camouflage.
They feed on aquatic insects and continue to grow for one to three years while maintaining their territory in the stream. Once the parr have grown to between 10 and 25cm in body length, they undergo a physiological pre-adaptation to life in seawater by smolting. This is evident from changes in their appearance, as they become silvery and swim with the current instead of against it. There are also internal changes in the salt-regulating mechanisms of the fish. This adaptation prepares the smolt for its journey to the ocean. In spring, large numbers of smolts 1-3 years old leave Irish rivers to migrate along the North Atlantic Drift and into the rich feeding grounds of the Norwegian Sea and the greater expanse of the North Atlantic Ocean. Here they feed primarily on fish such as capelin (Mallotus villosus), herring (Alosa spp.), and sand eel (Ammodytes spp.). As they grow quickly, fewer predators are able to feed on them; their rate of growth is therefore critical to survival. Salmon that reach maturity after one year at sea are called grilse; these return to their river in summer weighing from 0.8 to 4kg. If it takes two or more years at sea to mature, the salmon will return considerably earlier in the year and larger, at 3 to 15kg, and because of their size they are greatly sought after by fishermen. Salmon exhibit a remarkable "homing instinct", with a very high proportion able to locate their river of origin using the earth's magnetic field, the chemical smell of their river and pheromones (chemical substances released by other salmon in the river). Such homing precision is expected even after migrations of over 3,000km to feeding grounds north of the Arctic Circle in the Norwegian Sea and at West Greenland. There is great excitement when adult salmon return to rivers, as many are seen leaping acrobatically into the air and jumping over waterfalls while moving upstream. Salmon that survive fishermen, poachers and pollution may still have to scale large dams built across rivers before eventually finding refuge in lakes and deep pools. Arriving upstream on their spawning grounds among big boulders in icy headwaters, the life cycle begins again, ensuring survival of the species for another generation. Having spawned, the salmon are referred to as "kelts". Weak from not eating since arriving in freshwater, and having spent their energy in a bid to reproduce successfully, kelts are susceptible to disease and predators. Mortality after spawning can be significant, especially for males, but some do survive and commence their epic journey again. Scientists studying salmon initially used the rings laid down on scales, much like tree rings, to determine the age and growth of salmon in freshwater and at sea. By doing this they established that some kelts succeeded in spawning three times! Now a new record exists of an Irish salmon that reached maturity after less than one year at sea - a zero sea winter salmon. The salmon left the Bundorragha River in Co. Mayo on 27/04/2007 as a 1-year-old smolt of 49g, only to return from the sea on 05/11/2007 at 810g.
Zero Sea Winter Salmon Scale
Scientists today have a greater array of techniques to study the complex life cycle of this important species. Tagging, tracking, and the use of DNA and Stable Isotope Analysis, in association with habitat and climate change studies, are ongoing in a bid to understand the factors that govern the survival of this species.
The climate change canary of the North Atlantic moves easily from freshwater to roam the North Atlantic Ocean, feeding constantly on migration while avoiding predators, then homes to its natal river, jumping over almost impassable falls to reach its exact place of birth. Surviving such impossible odds makes the salmon Salmo salar the "King of Fish".
Common Core Writing Standards
I can write narratives to develop real or imagined experiences or events using effective technique, relevant descriptive details, and well-structured event sequences. I...
- Engage and orient the reader by establishing a context and introducing a narrator and/or characters; organize an event sequence that unfolds naturally and logically.
- Use narrative techniques, such as dialogue, pacing, and description, to develop experiences, events, and/or characters.
- Use a variety of transition words, phrases, and clauses to convey sequence and signal shifts from one time frame or setting to another.
- Use precise words and phrases, relevant descriptive details, and sensory language to convey experiences and events.
- Provide a conclusion that follows from the narrated experiences or events.
I demonstrate a good grasp of standard writing conventions (e.g., spelling, punctuation, capitalization, grammar and usage, paragraphing) and use conventions effectively to enhance readability.
The activities are as follows:
- Teacher Guide
- Student activity, Graph Type A, Level 4
- Student activity, Graph Type B, Level 4
- Student activity, Graph Type C, Level 4
- Grading Rubric
Have you ever thought about what it would be like to live completely alone, without contact with other people? Nowadays, humans are constantly connected by phones, texting, and social media. Our social interactions affect us in many unexpected ways. Strong social relationships can increase human lifespan and lower the risk of cancer, cardiovascular disease, and depression. Social relationships are so important that they are actually a stronger predictor of premature death than smoking, obesity, or physical inactivity! As with humans, social interactions are important for other animals as well. Jennifer is a behavioral ecologist who is interested in daffodil cichlids, a social species of fish from Lake Tanganyika, a Great Lake in Africa. Daffodil cichlids live in social groups of several small fish and one breeding pair. Each group defends its own rock cluster in the lake. The breeding male and female are the largest fish in the group, and the smaller fish help defend territory against predators and help care for newly hatched baby fish. About 200 social groups together make up a colony. Behavior within a social group may be influenced by the presence of other groups in the colony. For example, neighboring groups can be a threat because they may try to take away territory or resources. After reading about previous research on social interactions in species that live in groups, Jennifer noticed there were very few studies that looked at how neighboring groups affected behavior within a group. Jennifer thought that the presence of neighboring groups might force the breeding pair to be less aggressive towards each other and work together to protect their group’s resources against the outside threat. To test her idea, Jennifer formed breeding pairs of daffodil cichlids in an aquarium laboratory. She first observed the breeding pairs for any aggressive behaviors when they were isolated and could not see other groups. She observed each group for 30 minutes a day for 10 days. Next, Jennifer set up a clear barrier between the breeding pair and a neighboring group. The fish could see each other but not physically interact. Jennifer again watched the breeding pair and documented any aggressive behaviors to see how the presence of a neighboring group affected conflict within the pair. She again observed each group with neighbors for 30 minutes a day for 10 days. During these behavioral tests, Jennifer counted the total number of behaviors performed by the breeding pair. She measured several behaviors. Physical attacks were counted every time contact between the fish was made (biting or ramming each other). Aggressive displays were counted when fish gave signals of aggression without making physical contact (raising their fins or swimming rapidly at another fish). Submissive behaviors, or actions used to prevent aggression between the breeding pair, were also counted. Finally, behaviors used to encourage social bonding, called affiliative behaviors, were counted. Jennifer predicted that the breeding pair would perform fewer physical attacks and aggressive displays when a neighboring group was present compared to when the breeding pair was alone. She also thought the breeding pair would perform more submissive and affiliative behaviors when the neighboring group was present.
In this way, the presence of an outside group would impact the behaviors within a group. Featured scientist: Jennifer Hellmann from The Ohio State University
Lingua latina or sermo eruditus (“erudite speech”) is the official standard language of Romania, or the Luminous Roman Empire under the Lucidian dynasty. It is based on Classical Latin, which spread to countries around the Mediterranean with the Roman conquest. In spite of its high-flown use in the court and the imperial government, it is not a living language. The spoken vernacular is called the Romance language (lingua romana), descendant of the Vulgar Latin spoken by soldiers, merchants and settlers of the Roman Empire, and distinguished from the Classical form of the language spoken by aristocrats, the form in which the language was generally written. The Romance language is the dominant native language in continental Western Europe. During the Empire’s decline, and after its temporary fragmentation during the 9th century, varieties of the Romance language began to diverge within each local area at an accelerated rate, and eventually evolved into a continuum of recognizably different typologies. While they are often mutually unintelligible, these internal divisions are usually perceived by their native speakers as dialects of a single Romance language, rather than separate languages. Diglossia is common: most Romanians are able to speak two or even three Romance dialects.
Dialects of the Romance language
- Afro-Romance (Mauretania, Algeria, Tunisia, Libya)
Many people have the misconception that digital recording breaks up an audio signal into little slices, so that some of the signal is missing. Nope — all of the analog signal is captured and reproduced. Here’s what actually happens in the most common digital recording method, called Pulse Code Modulation or PCM:
1. The signal from your mixer (Figure 1-A) is a varying voltage. This signal is run through a lowpass filter (anti-aliasing filter) which removes all frequencies above 22 kHz (if the sampling rate is 44.1 kHz).
2. Next, the filtered signal passes through an analog-to-digital (A/D) converter. This converter measures the changing voltage of the signal tens of thousands of times per second (44,100 times per second at a 44.1 kHz sampling rate) (Figure 1-B).
3. Each time the waveform is measured, a binary number (made of 1’s and 0’s) is generated. This number represents the voltage of the signal at the instant it is measured (Figure 1-C). Each 1 and 0 is called a bit, which stands for binary digit.
4. Those binary numbers are stored on the recording medium (Figure 1-D). The numbers can be stored on tape, hard disk, compact disc, or flash-memory card.
Figure 1. Analog-to-digital conversion (during recording)
The playback process is the reverse:
1. The binary numbers are read from the recording medium (Figure 2-A).
2. The digital-to-analog (D/A) converter translates the numbers back into an analog signal made of voltage steps (Figure 2-B).
3. An anti-imaging filter (lowpass filter) smooths the steps in the analog signal (Figure 2-C), and the smoothed signal leaves the D/A converter. The original signal’s waveform is reproduced.
Figure 2. Digital-to-analog conversion (during playback)
The curve or shape of the analog waveform between samples is re-created by the anti-imaging filter. Nothing is lost. With recordings made on a CD, the process does filter out signals above 22 kHz, but we can’t hear that high anyway. The harshness of some early digital recorders was not caused by slicing and dicing the signal. It was due in part to excessive phase shift in the anti-aliasing and anti-imaging filters. Those filters have been much improved, so current digital audio is generally much smoother and more like analog.
# # #
A member of the Audio Engineering Society, Bruce Bartlett is a microphone engineer (www.bartlettaudio.com), recording engineer, and audio journalist. His latest books are “Practical Recording Techniques 6th Ed.” and “Recording Music On Location.”
Originally posted 2010-11-10 14:34:34.
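To see steps 2 and 3 concretely, here is a minimal C sketch (not the author's code) that samples a 1 kHz sine tone at the 44.1 kHz rate described above and quantizes each measurement to a 16-bit binary number, as an A/D converter would:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    const double sample_rate = 44100.0; /* measurements per second */
    const double tone_hz = 1000.0;      /* a 1 kHz test tone */

    /* print the first five samples */
    for (int n = 0; n < 5; n++) {
        double t = n / sample_rate;                 /* time of sample n */
        double volts = sin(2.0 * PI * tone_hz * t); /* analog voltage, -1..+1 */
        short pcm = (short)(volts * 32767.0);       /* 16-bit binary number */
        printf("sample %d: %+.4f V -> %+6d\n", n, volts, pcm);
    }
    return 0;
}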
Copyright © 2007 Dorling Kindersley From around 2000 BC, people living close to the Mediterranean Sea, such as the MINOANS, Mycenaeans, and PHOENICIANS, built strong wooden ships powered by sails and oars. They established long-distance sea routes linking Europe, Africa, and Asia, and became wealthy sea traders. Later, they sailed to explore and set up colonies. Traders braved the stormy Mediterranean waters to earn as much as possible through overseas business. The most profitable cargoes included silver from Spain (used to make coins), tin from Britain, and copper from Cyprus. The tin and copper metals were smelted to make bronze. Phoenician cloth, colored purple with a dye made from shellfish, was so expensive that only kings and queens could afford to buy it. From 3000 BC to 1450 BC, Minoan kings ruled the eastern Mediterranean area from the island of Crete. The kings grew rich by trading with other islands and demanding offerings from less powerful peoples. They lived in vast, elegantly decorated palaces. In c. 1450 BC, the Mediterranean island of Thera (now Santorini) was destroyed by a volcanic eruption. At nearby Crete, sea levels rose, dust blotted out the Sun, and the Minoans’ crops died out. Then the palace at Knossos, Crete, was attacked by the Mycenaeans. By c. 1100 BC, the Minoan civilization had disappeared. The Phoenicians lived on the eastern shores of the Mediterranean Sea, and were powerful from around 1000 BC to 500 BC. They lived as farmers, foresters, and craftworkers who were highly skilled in woodworking, glass-making, and textile production. The Phoenicians sailed all over the Mediterranean Sea. A few ventured farther—to western Spain, southeast Britain, and western Africa—and built new cities in the regions where they traded. Their most famous city was at Carthage, in North Africa, which remained powerful until the Romans destroyed it in 146 BC.
Stones form in different organs of the body due to the retention of excess minerals, which can easily crystallise if there is insufficient fluid around to dissolve them. Cholelithiasis is one such condition, affecting the bile duct and gall bladder. In cholelithiasis, hard stones composed of cholesterol or bile pigments form in the gall bladder (cholecystolithiasis) or in the bile duct (choledocholithiasis). In the US alone, about 9% of women and 6% of men have gallstones, and most are asymptomatic. In Ibadan, in the south-western region of Nigeria, the prevalence of cholelithiasis is 2.1%. Normally, cholesterol levels in bile are at equilibrium with bile salts and phosphatidylcholine. When the concentration of cholesterol rises to the point of supersaturation, crystallization occurs: a sludge containing cholesterol, mucin, calcium salts, and bilirubin forms, and, ultimately, stones develop. In other parts of the body where stones form, they may be composed of calcium, oxalate, uric acid or struvite; in this case, the stones are composed of cholesterol. Although gallstones are typically asymptomatic (they show no symptoms), some cause biliary colic, in which stones intermittently obstruct the neck of the gallbladder and cause episodes of abdominal pain. Chronic obstruction may result in cholecystitis (infection and inflammation of the gallbladder) or cholangitis (infection and inflammation of the common bile duct), both of which are very serious and, if untreated, may result in sepsis, shock, and death. Presenting symptoms include episodic right-upper-quadrant or epigastric pain, which often occurs in the middle of the night after eating a large meal and may radiate to the back, right scapula, or right shoulder. Diaphoresis, nausea, vomiting, dyspepsia, burping, and food intolerance (especially to fatty, greasy, or fried foods; meats; and cheeses) are common. More severe symptoms, including fever and jaundice, may signify cholecystitis or cholangitis.
What Are the Possible Risk Factors?
1. Family history: there is a greater tendency to develop gallstones if there is a family history; the rate is roughly twice as high.
2. Increasing age: Gallstones are most common in individuals above the age of 40.
3. Female sex: because of the hormone estrogen, women are more likely to develop gallstones in all age groups. This increased risk is most notable in young women, who are affected 3-4 times more often than men of the same age.
4. Elevated estrogen and progesterone: During pregnancy, oral contraceptive use, or hormone replacement therapy, estrogen and progesterone induce changes in the bile duct that predispose one to gallstones.
5. Obesity: Due to the elevated secretion and production of cholesterol in obese individuals, they are at high risk of developing gallstones.
6. Rapid weight loss: Bariatric surgery and the very-low-calorie diets adopted in weight loss regimes can increase the risk of gallstone formation, possibly due to increased concentrations of bile constituents.
7. Diabetes mellitus: Hepatic insulin resistance and high triglycerides may increase the risk of gallstones.
8. Gallbladder stasis: When bile remains in the gallbladder for an extended period, supersaturation can occur.
Gallbladder stasis is associated with diabetes mellitus, total parenteral nutrition (probably due to lack of enteral stimulation), vagotomy, rapid weight loss, celiac sprue, and spinal cord injury.
9. Cirrhosis: Cirrhosis, i.e. scarring of the liver tissue, increases the risk of developing gallstones as much as tenfold.
10. Medications: Drugs implicated in the development of cholelithiasis include clofibrate, octreotide, and ceftriaxone.
11. Physical inactivity: Exercise may reduce gallstone risk. Findings from the Health Professionals Follow-Up Study suggested that the risk of symptomatic cholelithiasis could be reduced by 30 minutes of daily aerobic exercise. Young or middle-aged men (65 years or younger) who were the most physically active had half the risk of developing gallstones compared with those who were least active. In older men, physical activity cut risk by 25%. Physical activity is also associated with reduced gallstone risk in women.
How can it be Diagnosed?
Laboratory tests include complete blood count (CBC), liver function tests, amylase, and lipase.
– Right-upper-quadrant (trans-abdominal) ultrasound will reveal the presence of gallstones and show evidence of cholecystitis, if present.
– Hydroxy iminodiacetic acid (HIDA) scan is sometimes indicated to rule out cystic duct obstruction and acute cholecystitis.
– Endoscopic retrograde cholangiopancreatography (ERCP) or magnetic resonance cholangiopancreatography (MRCP) assesses the presence of gallstones within the bile ducts. ERCP can also be used to extract stones when they are found, preventing the need for surgery.
Are there Treatment options?
Asymptomatic gallstones are generally not treated. Cholecystectomy (surgical removal of the gall bladder) is the treatment of choice for symptomatic disease. Oral bile acids (e.g., ursodeoxycholic acid) can be used to dissolve small stones and stone fragments; however, they are of limited benefit, as the stones typically recur. It is helpful to avoid large, fatty meals, as a large caloric load is the most likely trigger for biliary colic symptoms. Long-term statin use has been associated with a reduced risk of gallstone development. Gallstones are strongly related to high-fat, low-fibre diets, and are less common in regions such as Asia and Africa, where traditional diets are plant-based. Diets high in protein and saturated fat are risk factors for developing gallstones, and diets low in dietary fibre, especially Westernized diets, play a major role in their development. The following factors are associated with reduced risk of gallstones:
– Plant-based diets: Both animal fat and animal protein may contribute to the formation of gallstones. According to research, up to 90% of gallstones are cholesterol. This suggests that a change in diet (e.g., reducing dietary saturated fat and cholesterol and increasing soluble fibre) may reduce the risk of gallstones. “Vitamin C, which is found in plants and is absent from meat, affects the rate-limiting step in the catabolism of cholesterol to bile acids and is inversely related to the risk of gallstones in women.” In a 12-year prospective cohort study among US men, individuals consuming the most refined carbohydrates had a 60% greater risk of developing gallstones compared with those who consumed the least.
Conversely, in a 1998 cross-sectional study of men and women in Italy, individuals eating the most fiber (particularly insoluble fiber) had a 15% lower risk of gallstones compared with those eating the least.
– Avoidance of excess weight: staying within a healthy BMI reduces the risk of developing gallstones, as obesity is a major risk factor. Those with a BMI above 30 kg/m2 should endeavour to shed the extra pounds to reduce their risk.
– Avoidance of weight cycling: repeatedly losing weight intentionally and regaining it unintentionally increases the likelihood of cholelithiasis.
– Moderate alcohol intake: excessive alcohol consumption has been linked to many ailments, and gallstone formation is no exception.
Adopting Western diets puts you at risk of developing gallstones. A diet rich in antioxidants, fibre and anti-inflammatory substances keeps your risk low. Stones can make life very unbearable, so be conscious about your diet and lifestyle.
References:
Biddinger SB, Haas JT, Yu BB, et al. Hepatic insulin resistance directly promotes formation of cholesterol gallstones. Nat Med. 2008;14(7):778-82. [PMID:18587407]
Leitzmann MF, Giovannucci EL, Rimm EB, et al. The relation of physical activity to risk for symptomatic gallstone disease in men. Ann Intern Med. 1998;128(6):417-25. [PMID:9499324]
Leitzmann MF, Rimm EB, Willett WC, et al. Recreational physical activity and the risk of cholecystectomy in women. N Engl J Med. 1999;341(11):777-84. [PMID:10477775]
Erichsen R, Frøslev T, Lash TL, et al. Long-term statin use and the risk of gallstone disease: A population-based case-control study. Am J Epidemiol. 2011;173(2):162-70. [PMID:21084557]
Bodmer M, Brauchli YB, Krähenbühl S, et al. Statin use and risk of gallstone disease followed by cholecystectomy. JAMA. 2009;302(18):2001-7. [PMID:19903921]
Stinton LM, Shaffer EA. Epidemiology of gallbladder disease: cholelithiasis and cancer. Gut Liver. 2012;6(2):172-87. [PMID:22570746]
Ahmed A, Cheung RC, Keeffe EB. Management of gallstones and their complications. Am Fam Physician. 2000;61(6):1673-80, 1687-8. [PMID:10750875]
Pixley F, Wilson D, McPherson K, Mann J. Effect of vegetarianism on development of gall stones in women. Br Med J (Clin Res Ed). 1985;291:11-12.
Tsai CJ, Leitzmann MF, Willett WC, et al. Fruit and vegetable consumption and risk of cholecystectomy in women. Am J Med. 2006;119(9):760-7. [PMID:16945611]
Simon JA, Hudes ES. Serum ascorbic acid and gallbladder disease prevalence among US adults: the Third National Health and Nutrition Examination Survey (NHANES III). Arch Intern Med. 2000;160(7):931-6. [PMID:10761957]
Price Elasticity of Demand (PED) measures the responsiveness of demand for a product to a change in its price: the percentage change in quantity demanded divided by the percentage change in price. It is measured through varying degrees of elasticity. An inelastic good means that a change in price has very little effect on demand. Because PED = %ΔQ / %ΔP, inelastic goods have a PED < 1, which explains the gradient of the graph in Figure 1. If a product is completely inelastic (PED = 0), it implies that however much a tax increases, it does not affect demand, as consumers are still going to buy it. If the price doubles and people buy more than half of what they were originally going to buy, the good is inelastic; if they buy less than half, it is elastic. Because smoking is addictive, and buyers are always willing to pay, demand for cigarettes is inelastic. Taxation is the process by which the government influences the economy, and is the basis of fiscal policy. The main motivation for this policy is revenue, as tax raises money to spend on improving society for the public, for example improving the NHS, schools and roads. It also helps redistribute wealth more equally among the public and reduce negative externalities. There are two types of indirect taxation. Ad valorem tax is implemented most frequently, where the tax attached to a good is correlated to the price it is being sold at, so it is a percentage of the total price, currently levied at the standard rate of 15%. Specific tax, however, is a form of indirect taxation that is based not on the value of the goods but only on the quantity. For example, if two packets of cigarettes both contained 20 cigarettes, one priced at £5.99 and the other at £2.99, then because they both contain the same quantity, the tax added would be a constant value, regardless of the original selling price. There are many factors that need to be taken into consideration when looking at price elasticity of demand and taxation. The availability of substitutes makes demand more elastic, as customers have more alternatives to buy. However, when there are no appropriate substitutes and the customer does not have the ability to postpone consumption, the good is seen as a necessity, and therefore the price elasticity of demand will be very inelastic. When considering a specific tax being mandatory for buyers of cigarettes, and the changes this would bring about regarding elasticity, the addictive element of nicotine must be taken into account for its effects on demand. It has been shown that "taxes on cigarettes can be raised nearly 2.5 times the current level without any fall in revenue" (Rigo, 2005). Therefore, the demand for cigarettes will be inelastic. As shown in Figure 1, the demand for an inelastic good such as cigarettes is very unresponsive to a change in price. The creation of a larger specific tax on cigarettes is not only a burden on the consumer, however. The producer may have to lower their original price to make sure their customers still purchase their product. This means the company has to absorb most of the tax if the PED is elastic. Alternatively, the business can pass the tax on to the consumer by increasing the price of a good; this is called shifting the burden of tax. Seeing as cigarettes have a very inelastic demand, this gives the company the ability and confidence to increase prices without worrying about loss of sales. This is shown in Figure 2.
Figure 2 shows the effect that a specific tax has on consumers when demand is inelastic. It causes demand to fall from D to Dt, which causes the quantity demanded to fall from q* to qt. The equilibrium price after the tax is Ps, which is what the producers will charge, and adding the tax on top of that price gives Pc, with the tax revenue being AC. Similarly, when demand is elastic and a specific tax is imposed, it causes demand to shift left from F to Ft. This causes the quantity demanded to fall as well. At the new equilibrium, suppliers will charge Ts, but consumers pay Tc. This is shown in Figure 3. Therefore, because demand for cigarettes is highly inelastic, the imposition of a specific tax on consumers of this good has little effect on the quantity bought, as shown in Figure 2. Another major contributing factor to the tax incidence is the deadweight loss, which is shown by the shaded triangle in both Figure 2 and Figure 3. B+D is the deadweight loss that occurs due to the imposition of the tax on consumers. In the case of cigarettes, the government would hope to minimise the size of the negative externality caused by smoking. The deadweight loss shows the loss of economic activity due to the tax. The government gains a great deal from raising revenue through increased taxes. The revenue is needed, as the budget may be running at a deficit, and the public's money can be used to reduce this. By placing a specific tax on cigarettes, the government gains direct control over how much money it receives from each packet of cigarettes bought, as the tax is not a percentage. The increase in price will hopefully deter a small minority of smokers; however, the vast majority of the target consumers will still be prepared to pay the increased price, because demand is so inelastic. The tax discourages smoking, which for the government's image shows altruism, as it appears to protect the health of the country; at the same time, the government still benefits by receiving all the additional tax. A consequence of this is fewer people relying on the NHS for smoking-related problems, meaning the government can put more time and money into improving other aspects of society. So the government has two motivations: the tax decreases demand whilst also raising revenue. Cigarettes are seen as a demerit good, meaning that they have a negative effect both directly on the consumer and indirectly on all of society. Some tangible costs are as follows. The NHS is put under strain as it has to deal with medical conditions that are a consequence of tobacco consumption, including the effects of passive smoking on non-smokers. Smoking can also affect production for a company: if there is more sickness or death due to smoking, productivity will decrease. Another negative aspect of smoking is the potential for fires. Dropped cigarettes in the home, or even in public, may cause fires, which put an additional strain on the government as it needs to provide a fire service. A similar demerit is litter. Cigarette butts are strewn throughout streets, and once again it is down to the council or government to clean up. By imposing a specific tax on smokers, the number of people buying cigarettes will decrease slightly; however, the biggest impact will be the increase in funds the government has to spend on combating these problems more efficiently.
In societies where the government takes advantage of its power to charge society whatever it wants, black markets may appear. These sell goods more cheaply than normal goods, as they exclude any tax implemented by governments. Therefore, if the government imposed an extreme tax on cigarettes and the target market could not afford the product, they could turn to black markets. "In England, one-half of all cigarettes are sold on the black market." (Smoking Aloud, 2006) These markets are very detrimental to the government, as they mean it starts to lose a measure of control over cigarettes, and consequently it cannot keep accurate records of the nicotine market. The addictive nature of smoking could lead to more people using black markets, as demand is so inelastic. Although the demand for cigarettes has been very inelastic, there is evidence that a 'tipping point' may be approaching, meaning the government will have to be careful about how much tax it places on the product. The evidence for this change is the recent boom in substitute goods, such as nicotine patches and chewing gum; however, this affects cigarette sales only on a small scale - "for every £1 spent on nicotine replacement, over £130 is spent on cigarettes" (Riley, 2006). As long as the PED is very inelastic, the government can maintain and increase the specific tax placed on cigarettes, as consumers will always be willing to purchase. It is clear from the examples shown above that elasticity has a significant effect on the imposition of taxes on buyers. In the case of cigarettes there is an extremely inelastic demand for the good, due to the lack of close substitutes and the addictive nature of the product.
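To make the formula concrete, here is a minimal C sketch of the PED calculation defined at the start of this essay; the price and quantity figures are hypothetical, chosen only to illustrate an inelastic response:

#include <math.h>
#include <stdio.h>

/* PED = % change in quantity demanded / % change in price. */
double ped(double q0, double q1, double p0, double p1)
{
    double pct_q = (q1 - q0) / q0 * 100.0; /* % change in quantity */
    double pct_p = (p1 - p0) / p0 * 100.0; /* % change in price */
    return pct_q / pct_p;
}

int main(void)
{
    /* a pack rises from 5.99 to 6.59 (about +10%), while the
       quantity demanded falls only from 100 to 97 (-3%) */
    double e = ped(100.0, 97.0, 5.99, 6.59);
    printf("PED = %.2f (%s)\n", e, fabs(e) < 1.0 ? "inelastic" : "elastic");
    return 0;
}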
Quantitative observation, also called quantitative data, consists of information expressed in numbers, measurements and statistics. Quantitative data serves as a tool to measure data in many areas, including algebra in mathematics. Quantitative data complements the use of qualitative data, which uses descriptions, adjectives and linguistic elements to describe objects and images. Although qualitative and quantitative data differ in the methods used to describe and observe the surrounding world, they provide equally valuable knowledge about subjects and equally useful information on them. Quantitative data takes a practical and concrete approach to documenting data, in contrast to qualitative data, which explains information in a more abstract manner. Quantitative data typically uses several methods of evaluation, including numbers and measurements. Factors such as length, height, volume, area, speed, temperature, sound, humidity level, amount of precipitation, number of people, age and gender are used to describe quantitative data. In essence, quantitative data expresses a precise measurement of quantity using the most appropriate and logical methods. Many different types of objects, such as paintings, cups of coffee and students in classrooms, demonstrate quantitative data. For paintings, quantitative analysis includes measurements of their height and width. Quantitative data on a cup of coffee includes the temperature of the coffee, the amount of liquid in ounces and the price of the beverage. In contrast, qualitative analysis describes its scent, texture and taste.
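To make the contrast concrete, here is a minimal C sketch describing the same cup of coffee both ways; the field names and values are hypothetical:

#include <stdio.h>

/* Quantitative fields: numbers and measurements. */
struct coffee_quantitative {
    double temperature_c; /* temperature in degrees Celsius */
    double volume_oz;     /* amount of liquid in ounces */
    double price_usd;     /* price of the beverage */
};

/* Qualitative fields: descriptive words. */
struct coffee_qualitative {
    const char *scent;
    const char *texture;
    const char *taste;
};

int main(void)
{
    struct coffee_quantitative q = { 65.0, 12.0, 3.50 };
    struct coffee_qualitative d = { "nutty", "smooth", "bitter" };

    printf("Quantitative: %.0f C, %.0f oz, $%.2f\n",
           q.temperature_c, q.volume_oz, q.price_usd);
    printf("Qualitative:  %s, %s, %s\n", d.scent, d.texture, d.taste);
    return 0;
}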
Cholesterol: Up with the Good and Down with the Bad
A cholesterol level is measured by a simple blood test that is ordered by your health care provider. You should fast for 9-12 hours before having your blood cholesterol measured. Everyone age 20 and older should have their cholesterol measured at least once every 5 years, more often as you get older. It is best to have a blood test called a “lipoprotein profile”. Cholesterol is a waxy, fat-like substance made in the liver, but it is also found in certain foods. There are several types of cholesterol, including low-density lipoproteins (LDL) and high-density lipoproteins (HDL). Throughout life, beginning in childhood, there is a gradual build-up of cholesterol and other substances on the inner lining of an artery, referred to as atherosclerotic plaques. Over time, these plaques can harden and narrow an artery enough to slow or even block blood flow. The illustration to the right, courtesy of 3DScience.com, shows the build-up of an atherosclerotic plaque on an artery wall. Atherosclerotic plaques are often unstable and can rupture into the vessel lumen, causing a blood clot to form. This can result in a sudden blockage of an artery. This is often the process by which people experience heart attacks or strokes. In some people, the first sign of atherosclerosis might be a heart attack or even sudden death. As the level of blood cholesterol increases, so does the possibility of blocking the blood flow in the arteries due to cholesterol plaque build-up. The level of cholesterol in the blood can be measured by a blood test:
- Total cholesterol levels: 200 mg/dL or below = good; 200 to 239 = borderline high; 240 or above = high.
LDL Cholesterol (bad)
LDL cholesterol is considered "bad" cholesterol because excessive levels of LDL can lead to a build-up of thick, hard deposits called plaque on the inside of the blood vessel wall. (See the yellowish plaque build-up in the wall of the blood vessel in the illustration to the right, courtesy of 3DScience.com.) These plaques also contain various types of cells often associated with inflammation, as the body tries to rid itself of this abnormal build-up. When it comes to bad cholesterol, the lower the better. The less LDL there is in the blood, the less the risk of heart disease.
- An LDL level of 100 or less is usually recommended, but 70 or less is optimal, especially in some patients at high risk of coronary heart disease.
- Saturated fats and trans-fatty acids in the diet raise LDL cholesterol levels in the blood.
HDL Cholesterol (good)
HDL cholesterol is referred to as "good" cholesterol because it lowers your risk of heart disease and stroke. HDL protects your arteries from plaque build-up by acting as a scavenger that removes cholesterol from the arterial walls and carries it back to the liver, which leads to its removal from the body. The higher the blood level of HDL cholesterol, the better. Conversely, low levels of HDL increase the risk of heart disease and stroke.
- HDL blood levels of less than 40 mg/dL are a major risk factor for heart disease; 60 mg/dL and above is considered protective against heart disease.(1)
- Smoking lowers HDL levels; exercise increases them.
Triglycerides
Triglycerides can also raise heart disease risk. Triglyceride levels that are borderline high or high may need treatment in some people.
- borderline high (150-199 mg/dL)
- high (200 mg/dL or more)
Factors that affect cholesterol levels:
- Diet: One of the most important determinants of blood cholesterol level is fat in the diet--not total fat, but specific types of fat.
Factors that affect cholesterol levels:
- Diet: One of the most important determinants of blood cholesterol level is fat in the diet--not total fat, but specific types of fat. Bad fats increase the risk for certain diseases, and good fats lower the risk.
- Weight: Being overweight can increase cholesterol levels. Losing weight can help lower LDL (bad) and total cholesterol, as well as increase HDL (good) cholesterol. (Excess weight is also a risk factor for heart disease.)
- Exercise: Regular, continuous, aerobic exercise can raise HDL (good) cholesterol. Exercise for 30 minutes/day on most days is generally recommended for most people. For those unable to tolerate 30 minutes at a time, exercise can be broken up into three 10-minute sessions per day. Try to gradually work up to 30 minutes per day, if possible. Activity should be continuous, with a low to moderate intensity. Continuous, aerobic exercise includes walking briskly, swimming, running, jogging, climbing stairs, and bicycling on a regular or stationary bicycle.
- Smoking: Cigarette smoking is the most important preventable cause of premature death in the United States.(7) In addition to the damaging effects on the lungs, smoking also greatly affects the heart and blood vessels. Smoking lowers HDL (good) cholesterol, increases blood pressure, decreases exercise tolerance and increases the tendency for blood to clot. Cigarette smoking is a major cause of coronary heart disease, which leads to heart attack. If you smoke, quitting is the single best thing you can do to reduce your risk.
- Age and Gender: Cholesterol levels rise with age. Menopause also adversely affects women's LDL (bad) cholesterol levels. Before menopause, women tend to have lower total cholesterol levels than men of the same age. After menopause, women's LDL levels tend to rise.
- Heredity: High blood cholesterol levels can run in families.
- Other causes: Certain medications and medical conditions can cause high cholesterol.

Substitute good fats for bad fats in your diet

One of the most important determinants of blood cholesterol level is fat in the diet--not total fat, but specific types of fat. Bad fats increase the risk for certain diseases, and good fats lower the risk.(1) Fats and oils are mixtures of fatty acids. Each fat or oil is categorized as "saturated," "monounsaturated" or "polyunsaturated," depending on which type of fatty acid predominates in it.

Bad fats increase the risk for heart disease and stroke.

Saturated fats: In general, limit to 15 to 20 gms/day; for people with diabetes or heart disease, limit saturated fat to <10 grams/day; for people with elevated LDL-cholesterol, limit to 15 gms/day.(2) Saturated fats are usually a bigger problem than the cholesterol we consume. These fats are mainly animal fats and are found in meat, lard, butter, whole milk dairy products (cheese, milk and ice cream), egg yolks, and tropical oils--coconut, palm, and palm kernel oil. Saturated fats are usually solid at room temperature.

Trans-fatty acids (try to eliminate from the diet): Trans fats have been found to be far worse than saturated fats when it comes to heart disease. They raise (bad) LDL cholesterol and lower (good) HDL cholesterol. As of Jan 1, 2006, food manufacturers will be required to list trans fat on nutritional food labels. This will make it easier for consumers to avoid these fats. In the meantime, look at the ingredient list on the food label. If the ingredient list includes the words "shortening," "partially hydrogenated vegetable oil" or "hydrogenated vegetable oil," the food contains trans fat.
Because ingredients are listed in descending order of predominance, smaller amounts are present when the ingredient is close to the end of the list. Trans-fatty acids are produced from partially hydrogenated oils and are used to help foods stay fresh on the shelf or to produce a solid fat product, such as margarine. French fries, donuts, cookies, chips and other similar snack foods are high in trans-fatty acids. In general, trans fats are often found in commercially prepared baked goods, stick margarines, snack foods, processed foods, many fast foods, and commercially prepared fried foods.(4)

Cholesterol (limit to <300 mg/day): One large whole egg contains about 212 mg of cholesterol. Note: If LDL-cholesterol is above the desired goal, it is recommended that you limit cholesterol intake to <200 mg/day.(2)

Unsaturated fats lower the risk of heart disease and stroke.

Unsaturated fats are found in products derived from plant sources, such as vegetable oils, nuts, and seeds. There are 2 main categories of unsaturated fats: polyunsaturated fats and monounsaturated fats. Both are liquid at room temperature--polyunsaturated fats are also liquid in the refrigerator, but monounsaturated fats start to solidify at refrigerator temperatures.

Polyunsaturated fatty acids (PUFAs): There are two major classes of PUFAs--the omega-3 and the omega-6 fatty acids. Most American diets provide at least 10 times more omega-6 than omega-3 fatty acids. There is now general scientific agreement that individuals should consume more omega-3 and fewer omega-6 fatty acids for good health.(24)
- Omega-3 fatty acids are thought to reduce cardiovascular disease risk by lowering triglyceride levels, decreasing the growth rate of atherosclerotic plaques, decreasing the risk of sudden death and arrhythmias, decreasing thrombosis (blood clots), improving arterial health, and lowering blood pressure.(13) Omega-3 fatty acids are found in a variety of dietary supplements. For example, products containing flaxseed oil provide ALA, fish-oil supplements provide EPA and DHA, and algal oils provide a vegetarian source of DHA.
- EPA and DHA: Fatty fish (mackerel, salmon, sardines, swordfish and albacore tuna) are high in two kinds of omega-3 fatty acids, EPA and DHA (eicosapentaenoic acid and docosahexaenoic acid).
- Alpha-linolenic acid (ALA): A third kind, alpha-linolenic acid, is less potent. It comes from soybeans, canola, walnuts and flaxseed, and from oils made from those beans, nuts and seeds. Alpha-linolenic acid can be converted to other omega-3 fatty acids in the body, and recent studies have found that it seems to lower the risk of heart disease. The extent of this benefit is not yet completely established, with more research recommended by the AHA. At this point, the AHA has indicated that 1.5 to 3 grams per day of alpha-linolenic acid seems beneficial.(6) Alpha-linolenic acid is found in a variety of green leafy vegetables, some types of nuts (especially walnuts), soybeans, canola oil, flaxseed oil and flaxseed supplements.
- Omega-6 fatty acids: Linoleic acid (LA) is an omega-6 fatty acid and is converted in the body to arachidonic acid (AA). Both alpha-linolenic acid (ALA) and linoleic acid (LA) must come from the diet because they cannot be made by the body. As previously mentioned, it is thought that individuals should consume more omega-3 and fewer omega-6 fatty acids for good health.
It is not known, however, whether a desirable ratio of omega-6 to omega-3 fatty acids exists for the diet, or to what extent high intakes of omega-6 fatty acids interfere with any benefits of omega-3 fatty acid consumption. LA is found in many foods consumed by Americans, including meat, vegetable oils (e.g., safflower, sunflower, corn, soy), and processed foods made with these oils. The Institute of Medicine has established Adequate Intakes for ALA (1.1-1.6 g/day) and LA (11-17 g/day) for adults, but not for EPA and DHA.(24)

The American Heart Association (AHA) currently recommends that everyone eat at least 2 servings of fish a week. It is important to note that the fried white fish commonly found in fast-food fish sandwiches and fish sticks lacks the beneficial fatty acids and may actually increase the risk of atherosclerosis.(16) Also, mercury consumption can be a concern with certain fish; you can read more about mercury from the American Heart Association.

Fish oil supplements: Fish oil supplements provide EPA and DHA. In the U.S., there currently is no official recommended intake for EPA and DHA in healthy people.
- Prescription fish oil supplements: Recently a prescription fish oil supplement, Lovaza, has been tested and approved by the FDA. Lovaza is for adults and is to be used along with a low-fat and low-cholesterol diet to lower very high triglycerides (fats) in the blood. It is called a lipid-regulating medicine and is made of omega-3 fatty acids. One capsule yields almost twice the concentration of both EPA and DHA as most brands available over-the-counter. The usual therapeutic dose of Lovaza is 4 capsules, taken all at once or divided into two doses of two capsules twice daily. It is not for women who are pregnant or breastfeeding. The Lovaza website lists more about side effects and precautions.
- Over-the-counter fish oil supplements: There are many fish oil supplements available over-the-counter, but most herbs and supplements have not been thoroughly tested by the FDA for purity and quality, so there is a lack of consistency in the dose and quality of many of the products that appear on the market. In fact, what appears on the label may not necessarily be in the bottle. One independent company used by many health care professionals is ConsumerLab.com, which provides independent testing results online about vitamins, supplements, and nutrition products to consumers and healthcare providers. They recently completed testing of 57 different fish oil products sold in the U.S. claiming to contain EPA and/or DHA, checking their levels of omega-3 fatty acids, mercury, lead, PCBs, and signs of decomposition. They found varying levels of quality, and these results are available online for an annual subscription fee.

More about fish oil supplements:
- Taking fish oil supplements should be done in consultation with your physician.
- Fish oils are best tolerated when taken with meals and, if possible, should be taken in divided doses, such as dividing the total dose in half and taking it twice daily.
- Caution: High intakes could cause excessive bleeding in some people. Of particular concern are people with bleeding disorders such as hemophilia, those taking blood thinners such as Coumadin (warfarin), heparin, Lovenox, or aspirin, and those expecting to undergo surgery.
Two studies using fish oil supplements:
- One large study found that by getting 1 gram per day of omega-3 fatty acids over a 3 1/2 year period, patients who had previously had a heart attack could lower their risk of dying from heart disease by 25%. The participants in the study obtained their omega-3s from a capsule; getting a gram per day from fish would mean eating a serving of fatty fish every day.(18)
- In a study of 18,000 patients with hypercholesterolemia and a history of coronary artery disease, researchers found that the addition of fish oil to statins (cholesterol-lowering medications such as Lipitor) reduced the occurrence of major coronary events by 19% over statins alone. (Lancet 2007; 369: 1090-98.)

Oils (a short ratio comparison appears after this section):
- Flaxseed oil is the best oil to use. It has a very low omega-6 to omega-3 ratio (0.24--anything less than 3.0 is healthful) and is composed mainly of polyunsaturated fats (66% of the total fats), which will help lower blood cholesterol. Flaxseed oil should be used without cooking, as in salad dressing, because high heat destroys the healthy fatty acid. Flaxseeds can be taken as a supplement or sprinkled on cereal or salads; they have a mild, nutty flavor and are easy to include in your diet. Flaxseeds are a good alternative to flaxseed oil and are also a good source of fiber. Both of these products can be found in grocery stores. Flaxseed oil supplements can affect blood clotting and should be taken only in consultation with your physician.
- Canola oil is the second-best oil to use. It has a low omega-6 to omega-3 ratio (2.0); it is composed mostly of monounsaturated fats and will also help lower cholesterol, but not as much as flaxseed oil.
- Olive oil has only a marginal omega-6 to omega-3 ratio of 13.1, but it does have healthy, cholesterol-lowering monounsaturated fats. If you like using this oil, it is best to blend olive oil with either canola or flaxseed oil.

Nuts are an important source of the polyunsaturated fats known as omega-3 fatty acids. Walnuts are the best nuts, with the best omega-6 to omega-3 ratio. Avoid eating nuts by the handful, however; more is not better in this case. In general, it's best to limit yourself to one tablespoon of chopped nuts per day.

Fatty fish (mackerel, salmon, sardines, swordfish and albacore tuna) are high in two kinds of omega-3 fatty acids, EPA and DHA. Green leafy vegetables contain alpha-linolenic acid.

According to a study reported in the Annals of Internal Medicine, 2005, low-fat diets that include high amounts of whole grains, beans, vegetables and fruits offer twice the power to reduce cholesterol as diets that are simply low in fat.

Protein: Choose low-fat or lean meats, poultry, fish, beans, peas, nuts and seeds. Baking, broiling and grilling are recommended; avoid fried foods. Proteins from vegetable sources such as beans are good substitutes for animal sources of protein.(12) Fish is also a good source of protein.

Instead of butter, add plant stanols and sterols to your diet. These are found in the cholesterol-lowering spreads Benecol, Take Control and Smart Balance Plus, and in the dietary supplement Benecol SoftGels. These spreads are excellent alternatives to butter--but don't overdo it, as many do contain some saturated fat.
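The omega-6 to omega-3 ratios quoted for the oils above are simple quotients, so ranking oils by them is straightforward. A minimal Python sketch using only the ratios given in this article (the cutoff of 3.0 is the article's own "healthful" threshold):

```python
# Rank cooking oils by omega-6:omega-3 ratio (lower is better).
oils = {
    "flaxseed": 0.24,
    "canola": 2.0,
    "olive": 13.1,
}

HEALTHFUL_CUTOFF = 3.0  # the article treats anything under 3.0 as healthful

for name, ratio in sorted(oils.items(), key=lambda kv: kv[1]):
    verdict = "healthful ratio" if ratio < HEALTHFUL_CUTOFF else "marginal ratio"
    print(f"{name:>8}: omega-6/omega-3 = {ratio:>5} -> {verdict}")
```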
Read and understand Nutrition Facts labels to find out what you are eating! Read the "Nutrition Facts" panel and not just the promotional words: food advertised as containing "no cholesterol" can still contain large amounts of harmful saturated fats and trans-fatty acids.

Below is a sample label from a box of breakfast cereal, with notes on how to read each entry:

- Serving Size: 1 cup (30 g); Servings Per Container: about 25. Note the serving size, as it varies considerably between products! This label describes the nutrients in 1 cup of this food, and there are 25 cups of food in the container.
- Calories: 120. In general, 40 calories is low, 100 is moderate, and 400 is high for 1 serving.
- Calories from Fat: 15. The % Daily Value figures are based on a 2,000-calorie-per-day diet. Daily Values (DVs) are used on food and dietary supplement labels to indicate the percent of the recommended daily amount of each nutrient that a serving provides; DV replaces the previous designation of United States Recommended Daily Allowances (USRDAs). Quick guide: 5% or less of the daily value is low, 20% or more is high.
- Total Fat: 1.5 g (2% DV). There is 1.5 g of total fat in 1 cup of this cereal, which is 2% of the recommended daily amount for a 2,000-calorie diet. There is actually no optimal amount of total fat in a healthy diet: one of the most important determinants of blood cholesterol level is fat in the diet--not total fat, but specific types of fat. Bad fats increase the risk for certain diseases, and good fats lower the risk.(1)
- Saturated Fat: 0 g. There is no saturated fat (bad fat) in this cereal. Saturated fats are usually a bigger problem than the cholesterol we consume. In general, limit saturated fat to 15 to 20 gms/day; for people with diabetes or heart disease, limit it to <10 grams/day; for people with elevated LDL-cholesterol, limit it to 15 gms/day.(2)
- Trans Fat: 0 g. Bad fat--try to eliminate it from the diet altogether. Trans fats are found in deep-fried foods, bakery products, packaged snack foods, margarines, and crackers, and consumption of these foods significantly raises levels of LDL (bad) cholesterol, reduces HDL (good) cholesterol, and increases triglyceride levels. The consumption of unhealthy trans fats is remarkably prevalent in the United States, yet the adverse health effects of these fats are far more dangerous on average than those of food contaminants or pesticide residues. As of Jan 1, 2006, food manufacturers will be required to list trans fat on nutritional food labels, which will make it easier for consumers to avoid these fats. However, for certain foods these labels can be misleading: producers of foods that contain less than 500 mg of trans fatty acids per serving are allowed to list the content as zero on the packaging, so across multiple servings consumers might unwittingly consume substantial amounts of trans fats. Read the ingredients list to find clues to hidden trans fats: the words "partially hydrogenated vegetable oils" mean the product contains trans fats. Another problem with hidden trans fats is that food labels are rarely seen in restaurants, bakeries, and other retail food outlets.
- Polyunsaturated Fat: good fat. There are 2 main categories of unsaturated fats, polyunsaturated and monounsaturated; both lower blood cholesterol when substituted for saturated fats in the diet.(2)
- Monounsaturated Fat: good fat.
- Cholesterol: limit to <300 mg/day. One large whole egg contains about 212 mg of cholesterol. Note: If LDL-cholesterol is high, it is recommended that you limit cholesterol intake to <200 mg/day.(2) Also note that food advertised as containing "no cholesterol" can still contain large amounts of harmful saturated fats and trans-fatty acids.
- Sodium: Research shows that eating less than 2,300 milligrams of sodium (about 1 tsp of salt) per day may reduce the risk of high blood pressure. Most of the sodium people eat comes from processed foods, not from the saltshaker.
- Total Carbohydrate: 24 g. Carbohydrates contain about 4 calories per gram, so this product's 24 grams account for 96 of the total 120 calories. (A quick cross-check of this arithmetic appears after the panel.)
- Dietary Fiber: This cereal provides 8% of the recommended daily amount of fiber. Fiber is the part of plant foods that is not digested. In general, an excellent source of fiber contains five grams or more per serving, while a good source contains 2.5-4.9 grams per serving. The recommendation is to eat 25-30 grams of fiber per day; to achieve this goal, increase gradually to avoid stomach irritation. The grams of sugar and fiber are counted as part of the grams of total carbohydrate. If a food has 5 grams or more of fiber in a serving, subtract the fiber grams from the total grams of carbohydrate for a more accurate estimate of the carbohydrate content.(10)
- Sugars: If you are concerned about your intake of sugars, the amount of sugar should not be more than 1/2 of the total carbohydrates. For example, a food with 24 g of total carbohydrate, 23 g of which come from sugars, should be avoided.
- Sugar alcohols: A sugar alcohol is neither a sugar nor an alcohol, but a carbohydrate with a chemical structure that partially resembles a sugar and partially resembles an alcohol. Another term for sugar alcohols is polyols. They are a group of caloric sweeteners that are incompletely absorbed and metabolized by the body and consequently contribute fewer calories than sugars. The sugar alcohols or polyols commonly used in the United States include sorbitol, mannitol, xylitol, maltitol, maltitol syrup, lactitol, erythritol, isomalt, and hydrogenated starch hydrolysates. Their caloric content ranges from one-and-a-half to three calories per gram, compared to four calories per gram for sugars. Some of these polyols are sweeter than sugar, so you can use less to get equal sweetness and, as a result, consume fewer calories. Use of sugar alcohols in a product does not necessarily mean the product is low in carbohydrate or calories, and just because a package says "sugar-free" on the outside does not mean that it is calorie- or carbohydrate-free; always check the label for the grams of carbohydrate and calories.(10) Due to their incomplete absorption, the polyol sweeteners produce a lower glycemic response than glucose or sucrose and may be useful for people with diabetes. Sugar alcohol-sweetened products may have fewer calories than comparable products sweetened with sucrose or corn syrup and hence could play a useful role in weight management.
- Protein: Protein contains about 4 calories per gram; the 12 protein calories in this product correspond to 3 g of protein.
- Vitamins: This cereal provides 10% of the recommended daily value of Vitamin A and Vitamin C for a 2,000-calorie diet.
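Before moving on to the ingredient list, the calorie figure above can be cross-checked from the macronutrient grams: roughly 4 calories per gram for carbohydrate and protein, and about 9 per gram for fat. A minimal Python sketch using this sample label's numbers (the 3 g protein figure is inferred from the 12 protein calories noted above):

```python
# Approximate calories per gram of each macronutrient (the 4/4/9 rule).
CAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

# Grams per serving from the sample cereal label above.
serving_grams = {"carbohydrate": 24, "protein": 3, "fat": 1.5}

total = sum(grams * CAL_PER_GRAM[name] for name, grams in serving_grams.items())
print(f"Estimated calories per serving: {total}")
# ~121.5 -- close to the label's 120; labels round their figures.
```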
- Ingredients: Whole grain oats, sugar, oat bran, modified corn starch, honey, brown sugar syrup, salt, ground almonds, iron, Vitamin C. Ingredients are listed at the end in descending order by weight, meaning the first ingredient makes up the largest proportion of the food. Check the ingredient list for ingredients you'd like to avoid, such as coconut oil or palm oil, which are high in saturated fat. Also try to avoid hydrogenated oils, which are high in trans fat. They may not be listed by total amount on the label, but you can choose foods that don't list hydrogenated or partially hydrogenated oil in the ingredient list at all.

Look for heart-healthy ingredients: The ingredient list is also a good place to look for heart-healthy ingredients such as soy; monounsaturated fats such as olive or canola oils; or whole grains, like whole wheat flour and oats. The FDA recommends substituting whole grains for refined grains (white bread, etc.). Consuming at least 3 or more ounce-equivalents of whole grains per day can reduce the risk of several chronic diseases and may help with weight maintenance. For many whole-grain products, the words "whole" or "whole grain" will appear before the grain ingredient's name, and the whole grain should be the first ingredient listed. Wheat flour, enriched flour, and degerminated cornmeal are not whole grains. The Food and Drug Administration requires foods that bear the whole-grain health claim to contain 51 percent or more whole-grain ingredients by weight per reference amount and to be low in fat.(2)

Sugars: If you are concerned about your intake of sugars, make sure that added sugars are not listed as one of the first few ingredients. Other names for added sugars include: corn syrup, brown sugar syrup, high-fructose corn syrup, fruit juice concentrate, maltose, dextrose, sucrose, fructose, lactose, honey, maple syrup, molasses and turbinado.

Evaluating your risk of complications from abnormal cholesterol levels

To determine whether you need medication, your doctor looks at your total cholesterol, LDL, HDL, and triglyceride levels and considers these along with any risk factors you might have for heart disease. The following references and tools from the National Institutes of Health are often used by physicians in assessing cardiac risk: the Heart Disease Risk Calculation Tool and ATP III At-A-Glance: Quick Desk Reference.(22)

Make lifestyle changes: The first step in improving your blood cholesterol levels is to make lifestyle changes:
- Substitute good fats for bad fats in your diet.
- Maintain a healthy weight.
- Stop smoking, if you smoke.
- Exercise regularly: Gradually work up to 30 to 60 minutes a day, most days of the week. This can include walking, swimming, cycling, jogging, aerobic dance or any continuous activity that increases your heart rate safely. Before beginning any exercise program, ask your doctor what is right for you.
- Maintain good control of high blood pressure and diabetes, if present. These diseases are important risk factors for heart disease.

In general, for any cholesterol level, the more risk factors you have, the higher your risk of coronary artery disease (CAD). The American Heart Association guidelines call for blood pressure and body mass index assessment at least every two years, and cholesterol and glucose testing at least every five years, beginning at age 20. Start modifying any cardiac risk factors at age 20. Cholesterol-lowering medications may be necessary in some people if lifestyle changes aren't enough. Read about medications that can help.

Written by N Thompson ARNP in collaboration with R Timmons MD, Internal Medicine, and M Thompson MD, Internal Medicine. Last updated March 2009.
Physicists propose perfect material for lasers

(Image credit: Elena Khavina/MIPT Press Office)

Weyl semimetals are a recently discovered class of materials in which charge carriers behave the way electrons and positrons do in particle accelerators. Researchers from the Moscow Institute of Physics and Technology and the Ioffe Institute in St. Petersburg have shown that these materials represent perfect gain media for lasers. The research findings were published in Physical Review B.

Twenty-first-century physics is marked by the search for phenomena from the world of fundamental particles in tabletop materials. In some crystals, electrons move like high-energy particles in accelerators. In others, particles even have properties somewhat similar to black hole matter. MIPT physicists have turned this search inside-out, proving that reactions forbidden for elementary particles can also be forbidden in the crystalline materials known as Weyl semimetals. Specifically, this applies to the forbidden reaction of mutual particle-antiparticle annihilation without light emission. This property suggests that a Weyl semimetal could be the perfect gain medium for lasers.

In a semiconductor laser, radiation results from the mutual annihilation of electrons and the positive charge carriers called holes. However, light emission is just one possible outcome of an electron-hole pair collision. Alternatively, the energy can build up the oscillations of nearby atoms or heat the neighboring electrons. The latter process is called Auger recombination, in honor of the French physicist Pierre Auger (pronounced oh-ZHAY'). Auger recombination limits the efficiency of modern lasers in the visible and infrared range, and severely undermines terahertz lasers. It eats up electron-hole pairs that might otherwise have produced radiation; moreover, the process heats up the device.

For almost a century, researchers have sought a "wonder material" in which radiative recombination dominates over Auger recombination. This search was guided by an idea formulated in 1928 by Paul Dirac. He developed a theory that the electron, which had already been discovered, had a positively charged twin particle, the positron. Four years later, the prediction was proved experimentally. In Dirac's calculations, a mutual annihilation of an electron and a positron always produces light and cannot impart energy to other electrons. This is why the quest for a wonder material to be used in lasers was largely seen as a search for analogues of the Dirac electron and positron in semiconductors.

"In the 1970s, the hopes were largely associated with lead salts, and in the 2000s – with graphene," says Dmitry Svintsov, the head of the Laboratory of 2D Materials for Optoelectronics at MIPT. "But the particles in these materials exhibited deviations from Dirac's concept. The graphene case proved quite pathological, because confining electrons and holes to two dimensions actually gives rise to Auger recombination. In the 2D world, there is little space for particles to avoid collisions."

"Our latest paper shows that Weyl semimetals are the closest we've gotten to realizing an analogy with Dirac's electrons and positrons," added Svintsov, who was the principal investigator in the reported study.

Electrons and holes in a semiconductor do have the same electric charges as Dirac's particles. But it takes more than that to eliminate Auger recombination: laser engineers seek the kind of particles that would match Dirac's theory in terms of their dispersion relations. A dispersion relation ties a particle's kinetic energy to its momentum, and so encodes all the information on the particle's motion and the reactions it can undergo. In classical mechanics, objects such as rocks, planets, or spaceships follow a quadratic dispersion equation: doubling the momentum results in a four-fold increase in kinetic energy. In conventional semiconductors — silicon, germanium, or gallium arsenide — the dispersion relation is also quadratic. For photons, the quanta of light, the dispersion relation is linear; one of the consequences is that a photon always moves at precisely the speed of light. The electrons and positrons in Dirac's theory occupy a middle ground between rocks and photons: at low energies, their dispersion relation is quadratic, but at higher energies it becomes linear. Until recently, though, it took a particle accelerator to "catapult" an electron into the linear section of the dispersion relation.
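To make these three regimes concrete, the dispersion relations just described can be written out explicitly. These are standard textbook forms, not equations quoted from the paper itself:

```latex
% Quadratic (rocks, planets, conventional semiconductors):
E(p) = \frac{p^{2}}{2m}

% Linear (photons; charge carriers in graphene and Weyl semimetals):
E(p) = v\,|p|

% Dirac's electron and positron (quadratic for small p, linear for large p):
E(p) = \sqrt{(mc^{2})^{2} + (pc)^{2}}
```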
Some newly discovered materials can serve as "pocket accelerators" for charged particles. Among them are the "pencil-tip accelerator" – graphene – and its three-dimensional analogues, known as Weyl semimetals: tantalum arsenide, niobium phosphide, molybdenum telluride. In these materials, electrons obey a linear dispersion relation starting from the lowest energies. That is, the charge carriers behave like electrically charged photons. These particles may be viewed as analogous to the Dirac electron and positron, except that their mass approaches zero.

The researchers have shown that despite the zero mass, Auger recombination still remains forbidden in Weyl semimetals. Foreseeing the objection that a dispersion relation in an actual crystal is never strictly linear, the team went on to calculate the probability of "residual" Auger recombination due to deviations from the linear law. This probability, which depends on electron concentration, can reach values some 10,000 times lower than in the currently used semiconductors. In other words, the calculations suggest that Dirac's concept is rather faithfully reproduced in Weyl semimetals.

"We were aware of the bitter experience of our predecessors who hoped to reproduce Dirac's dispersion relation in real crystals to the letter," Svintsov explained. "That is why we did our best to identify every possible loophole for potential Auger recombination in Weyl semimetals. For example, in an actual Weyl semimetal, there exist several sorts of electrons, slow and fast ones. While a slower electron and a slower hole may collapse, the faster ones can pick up energy. That said, we calculated that the odds of that happening are low."

The team gauged the lifetime of an electron-hole pair in a Weyl semimetal to be about 10 nanoseconds. That timespan looks extremely small by everyday standards, but for laser physics, it is huge. In conventional materials used in laser technology of the far infrared range, the lifetimes of electrons and holes are thousands of times shorter. Extending the lifetime of nonequilibrium electrons and holes in novel materials opens up prospects for using them in new types of long-wavelength lasers.

Original research paper: Relativistic suppression of Auger recombination in Weyl semimetals; A. N. Afanasiev, A. A. Greshnov, and D. Svintsov; Phys. Rev. B 99, 115202 — published March 4, 2019.
Once you have created your program purpose and goals, the next step is to create Student Learning Outcomes (SLOs) for each goal. Think about what a student should know or be able to demonstrate upon his/her completion of your program, keeping in mind that you will have to come up with a way to measure that it is happening. Also keep in mind that you want at least one of the measures to be direct rather than indirect (refer to Step 3 in this handbook for direct vs. indirect measures). SLOs are stated operationally and describe the observable evidence of a student's knowledge, skill, ability, attitude or disposition. State clearly each outcome you are seeking: How would you recognize it? What does it look like? What will the student be able to do? Common words used are: describe, classify, distinguish, explain, interpret, give examples of, etc.

What are student learning outcomes? Student learning outcomes, or SLOs, are statements that specify what students will know, be able to do or be able to demonstrate when they have completed or participated in a program/activity/course/project. Outcomes are usually expressed as knowledge, skills, attitudes or values.

What are the characteristics of good SLOs? SLOs specify an action by the student that must be observable, measurable and able to be demonstrated!

Goals vs. Outcomes: Goals are broad and typically focus on "what we are going to do" rather than what our recipients are "going to get out of what we do." Outcomes are program/course/unit-specific.

Try using this template for writing Student Learning Outcomes: As a result of students participating in the _________________________, they will be able to ___________________________________________.

Ex: As a result of students participating in the resident assistant training session for writing incident report forms, they will be able to write concisely, include factual details in their reports and use language that is non-judgmental.

For each SLO, use the following checklist to examine its quality:
1. Does the outcome support the program goals? Y/N
2. Does the outcome describe what the program intends for students to know (cognitive), think (affective, attitudinal), or do (behavioral, performance)? Y/N
3. Is the outcome important/worthwhile? Y/N
4. Is the outcome:
   a. Detailed and specific? Y/N
   b. Measurable/identifiable? Y/N
   c. A result of learning? Y/N
5. Do you have, or can you create, an activity to enable students to learn the desired outcome? Y/N
6. Do you have a direct or indirect tool as a measurement (direct if possible)? Y/N
7. Can the outcome be used to make decisions on how to improve the program? Y/N

Lora Scagliola, University of Rhode Island Student Affairs, 6/24/2007. Drawn in part from: Keeling & Associates, Inc. (2003, January). Developing Learning Outcomes That Work. Atlanta, GA; Fowler, B. (1996). Bloom's Taxonomy and Critical Thinking. Retrieved February 23, 2005 from http://www.kcmetro.cc.mo.us/longview/ctac/blooms.htm. Template adapted from: Gail Short Hanson, American University, as originally published in Learning Reconsidered 2, p. 39.

Examples from Cal Poly Pomona:
Goal 1: Understand and can apply fundamental concepts of the discipline.
Student Learning Outcomes connected to Goal 1:
1. Demonstrate understanding of basic concepts in the following areas of the discipline: _______, _______, _________ and _________.
2. Recognize the source(s) of major viewpoints in the discipline.
3. Apply concepts and/or viewpoints to a new question or issue.

Writing S.M.A.R.T. SLOs:
- Specific: clear, definite terms describing the abilities, knowledge, values, attitudes and performance desired. Use action words or concrete verbs.
- Measurable: your SLO should have a measurable outcome, and a target should be set so that you can determine when you have reached it.
- Achievable: know that the outcome is something your students can accomplish.
- Realistic: make sure the outcome is practical in that it can be achieved in a reasonable timeframe.
- Time-bound: when will the outcome be done? Identify a specific timeframe.

Here is the part of the UCA CIP form you will have to complete when you have decided on your program SLOs:
(used with UCSMP Transition Mathematics, Lesson 3-2) This interactive manipulative displays representations of mixed numbers and improper fractions. You can convert between representations by dragging and dropping. For example, to convert a whole unit into parts, simply drag the unit to the parts area. To convert parts back into a whole unit, however, you need to select the appropriate number of parts: if the parts are each 1/2 of a whole, then you need to select them in multiples of two. To select multiple pieces at a time, you can drag a selection rectangle or simply hold down the control key as you click. You can enter your own fractions at the top or simply choose a random fraction.
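The same conversion the manipulative performs by dragging can be expressed arithmetically: an improper fraction splits into whole units plus a remainder of parts. A minimal Python sketch (the function names are ours):

```python
def improper_to_mixed(numerator: int, denominator: int) -> tuple[int, int, int]:
    """Split an improper fraction into (whole units, leftover parts, denominator)."""
    whole, parts = divmod(numerator, denominator)
    return whole, parts, denominator

def mixed_to_improper(whole: int, parts: int, denominator: int) -> tuple[int, int]:
    """Recombine whole units and leftover parts into one improper fraction."""
    return whole * denominator + parts, denominator

print(improper_to_mixed(7, 2))     # (3, 1, 2)  ->  3 1/2
print(mixed_to_improper(3, 1, 2))  # (7, 2)     ->  7/2
```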
Global Warming: Effects

The effects of Global Warming on the environment and on human life are varied, and some effects of recent climate change may already be occurring. Students will understand how Global Warming differs from natural climate change and how the evolution of human societies has depended on a steady climate. We also look at rising sea levels, glacier retreat, and altered patterns of agriculture, which are cited as direct consequences of Global Warming. Regional effects include extreme weather events, diminishing fresh water supply, human and wildlife health, and economic impacts. Our resource is a real time-saver, as all the reading passages and student activities are provided along with overhead transparencies, hands-on activities, a Crossword, a Word Search and a Final Quiz.
Physicists claim to have solved a perplexing mystery as to why the Sun's atmosphere is much hotter than its surface. The answer may lie in a type of solar plasma wave that had been predicted to exist, but never observed until now.

The Sun's corona is a kind of superheated atmosphere of ionised gas, or plasma, that extends millions of kilometres into space. Researchers have long been puzzled that the corona - at a temperature of millions of kelvin - is up to 200 times as hot as the surface.

Difficult to detect

Now, a new study in the U.S. journal Science reports that a solar weather phenomenon called Alfvén waves has been observed for the first time and may partially explain the temperature discrepancy. Named after a Swedish physicist who postulated their existence in 1942, these plasma waves are like the vibrations that travel along a perturbed rope.

"Alfvén waves have long been postulated as a possible mechanism to transfer energy out into the corona, but until now they have not been observed," said Steve Tomczyk, who headed the U.S. research team at the National Centre for Atmospheric Research (NCAR) in Boulder, Colorado.

Part of the reason the waves have been so difficult to detect is that, unlike sound waves, they do not compress the material they travel through, and they do not alter the heat or brightness of the solar material through which they pass, said Tomczyk. However, these properties are also what may allow them to transport energy without dissipating easily, he said.

Tomczyk's team approached the problem using a new polarimeter at NCAR's High Altitude Observatory to image the surface of the Sun as never before, in narrow bands and using polarised light. The approach has allowed them to detect the characteristic pattern of Alfvén waves travelling across the images. They found that the waves are ubiquitous in the corona and may therefore be partly responsible for transferring heat to it.

Solar storm detection

Tomczyk argues that it will now be possible to use Alfvén waves to measure the strength and direction of magnetic fields in the corona, which is valuable because these fields directly control the ejection of solar matter and the solar storms that reach Earth.

"This research provides the first convincing observations of vigorous and ubiquitous magnetic wave activity in the solar corona," commented Tom Bogdan of the U.S. National Oceanic and Atmospheric Administration Space Environment Centre in Boulder, Colorado. "Before this critical breakthrough, most of our inferences about the Sun's magnetic field relied on theoretical models often plagued by uncertainties and ambiguities."

According to Bogdan, the observation of Alfvén waves not only increases our understanding of the complex behaviour of the Sun but will also allow us to more accurately predict space weather, which can knock out our power grids or damage orbiting satellites. "Timely warnings of 'solar tsunamis' will enable high-technology sectors of our global economy including aviation, power grid operators, the satellite industry and commercial space endeavours to secure their assets and operations," he said.
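A closing technical note: the reason Alfvén waves can act as a probe of coronal magnetic fields is that their propagation speed depends directly on the field strength. The standard expression (a textbook result, not taken from the study itself) is:

```latex
% Alfven speed: B is the magnetic field strength, mu_0 the vacuum
% permeability, and rho the plasma mass density. Measuring the wave
% speed and the density therefore constrains B.
v_A = \frac{B}{\sqrt{\mu_0\,\rho}}
```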
How Nuclear Power Works

A nuclear power plant, or nuclear power station, is a thermal power station in which the heat source is a nuclear reactor; the thermal energy can be harnessed to produce electricity or to do other useful work. At a basic level, nuclear power is the practice of splitting atoms to boil water, turn turbines, and generate electricity. Nuclear reactors, which produce heat by splitting uranium atoms, do the same job as the conventional heat-producing equipment in other power stations.

Despite all the cosmic energy that the word "nuclear" invokes, power plants that depend on atomic energy don't operate that differently from a typical coal-burning power plant. Both heat water into pressurized steam, which drives a turbine generator; the key difference between the two plants is the heat source. In a nuclear power plant, energy is derived from splitting atomic nuclei, a process called fission. When an atom undergoes fission, it splits into smaller atoms and other particles and releases energy. At the heart of every nuclear power plant lies the radioactive core: a nuclear furnace, generating heat as its atoms split during a controlled chain reaction. Because a chain reaction inside the reactor makes the heat instead of burning fuel, no combustion pollutants are released into the air. A nuclear reactor both produces and controls the release of energy from splitting the atoms of uranium.

Uranium must undergo four major processing steps to take it from its raw state to usable nuclear fuel. Periodically, a nuclear plant stops generating electricity to replace a third of its fuel assemblies.

Ninety-nine nuclear plants in the United States generate nearly 20 percent of the country's electricity, while France produces over 75% of its electricity from nuclear power. Nuclear fusion, which joins light nuclei rather than splitting heavy ones, holds tremendous potential as well, but no fusion power plants yet exist; all operating nuclear power stations rely on fission.

Nuclear power remains controversial, and the pro- and anti-nuclear lobbies argue furiously over safety, waste, and weapons. The damage caused to the Fukushima Daiichi nuclear plant in Japan by the March 2011 earthquake and tsunami renewed questions about what nuclear power is, how it is managed, and how nuclear power plants are kept safe.
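For a rough sense of fission's energy density, a single uranium-235 fission releases about 200 MeV (roughly 3.2 x 10^-11 joules). A minimal Python sketch estimating how much U-235 a 1-gigawatt-electric plant consumes per day; the one-third thermal efficiency is a typical assumption, not a figure from this article:

```python
# Back-of-the-envelope: daily U-235 consumption of a 1 GWe plant.
MEV_TO_J = 1.602e-13                     # joules per MeV
ENERGY_PER_FISSION_J = 200 * MEV_TO_J    # ~3.2e-11 J per U-235 fission
AVOGADRO = 6.022e23
U235_MOLAR_MASS_G = 235.0

electric_power_w = 1e9                   # 1 GW of electricity
thermal_efficiency = 1 / 3               # assumed; actual plants vary
thermal_power_w = electric_power_w / thermal_efficiency

fissions_per_day = thermal_power_w * 86400 / ENERGY_PER_FISSION_J
grams_per_day = fissions_per_day / AVOGADRO * U235_MOLAR_MASS_G
print(f"U-235 fissioned per day: ~{grams_per_day / 1000:.1f} kg")  # roughly 3 kg
```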
In statistics, a histogram is a graphical display of tabulated frequencies, shown as bars. It shows what proportion of cases fall into each of several categories. A histogram differs from a bar chart in that it is the area of the bar that denotes the value, not the height--a crucial distinction when the categories are not of uniform width (Lancaster, 1974). The categories are usually specified as non-overlapping intervals of some variable, and the categories (bars) must be adjacent.

The word histogram is derived from Greek: histos, 'anything set upright' (as the masts of a ship, the bar of a loom, or the vertical bars of a histogram); gramma, 'drawing, record, writing'.

The histogram is one of the seven basic tools of quality control, which also include the Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram. A generalization of the histogram is kernel smoothing, which constructs a very smooth probability density function from the supplied data.

(The original article illustrated the idea with two histograms of survey data. The first showed the number of cases per unit interval, with the height of each bar equal to the number of people in the survey falling into that category and the area under the bars representing the total number of cases, 124 million; this type of histogram shows absolute numbers. The second differed only in the vertical scale: the height of each bar was the decimal percentage of the total that each category represents, so the total area of all the bars equals 1, the decimal equivalent of 100%, with a simple density estimate overlaid. This version shows proportions and is also known as a unit area histogram.)

In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies. The bars are placed adjacent to one another to make the data easier to compare.

In a more general mathematical sense, a histogram is a mapping $m_i$ that counts the number of observations that fall into various disjoint categories (known as bins), whereas the graph of a histogram is merely one way to represent a histogram. Thus, if we let $n$ be the total number of observations and $k$ be the total number of bins, the histogram meets the condition

$$n = \sum_{i=1}^{k} m_i.$$

A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram $M_i$ of a histogram $m_i$ is defined as

$$M_i = \sum_{j=1}^{i} m_j.$$

The number of bins $k$ can be calculated directly (for example, $k = \lceil \sqrt{n} \rceil$), or from a suggested bin width $h$:

$$k = \left\lceil \frac{\max x - \min x}{h} \right\rceil.$$
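A minimal Python sketch of the mapping just described: counting observations into equal-width, adjacent bins and checking that the bin counts sum to $n$ (standard library only, no plotting):

```python
import math
import random

def histogram(data, num_bins):
    """Count observations into num_bins equal-width, adjacent bins."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / num_bins
    counts = [0] * num_bins
    for x in data:
        # Clamp the maximum value into the last bin.
        i = min(int((x - lo) / width), num_bins - 1)
        counts[i] += 1
    return counts

data = [random.gauss(0, 1) for _ in range(1000)]
k = math.ceil(math.sqrt(len(data)))   # one common direct choice for k
counts = histogram(data, k)
assert sum(counts) == len(data)       # n equals the sum of all bin counts

cumulative = [sum(counts[:i + 1]) for i in range(k)]
print(counts)
print(cumulative[-1])                 # the last cumulative count equals n
```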